

MATH 2410 — Differential Equations

Lecture Notes. These notes are NOT meant to be a substitute for coming to lecture or consulting the textbook directly, but rather a helpful resource for reviewing what was discussed in lecture and preparing for exams.

Lecture 1: Section 1.1 — Introduction to Differential Equations and Classification

Definition A differential equation is an equation that involves an unknown function and one or more of its derivatives.

Throughout calculus, we have encountered many differential equations.

Example \[ \frac{dy}{dx} = 3x^2 \] \[ \frac{dy}{dx} = x^4 + 5 \sin(x) \] These are differential equations since they involve the derivative of the unknown function $y(x)$.
Example \[ y'' + 4y = 0 \] This is a second-order differential equation.

Classification by Type

Definition An ordinary differential equation (ODE) contains only ordinary derivatives of one or more unknown functions.
Definition A partial differential equation (PDE) involves partial derivatives of unknown functions of two or more variables.

Classification by Order

Definition The order of a differential equation is the order of the highest derivative that appears in the equation.

Classification by Linearity

Definition A differential equation is linear if it can be written as: \[ a_n(x)y^{(n)} + a_{n-1}(x)y^{(n-1)} + \dots + a_1(x)y' + a_0(x)y = g(x) \] where the coefficients depend only on $x$.
Example Linear: \[ y' + 2y = e^x \] Nonlinear: \[ y' + y^2 = 0 \]

Examples Table by Order and Linearity

Order Linear Nonlinear
1st \(y' + 2y = e^x\) \(y' + y^2 = 0\)
2nd \(y'' - 3\sin(x)y' + 2y = x\) \(y'' + (y')^2 = 0\)
3rd \(y''' + e^{3x}y'' - xy' + 2y = \sin x\) \(y''' + \sin(y) = 0\)

Implicit vs Explicit Solutions

Definition. A solution to a differential equation written in the form \( y = \phi(x) \) is called an explicit solution. A solution written in the form \( G(x,y) = 0 \) is called an implicit solution.

Often, implicit solutions are sufficient to describe the behavior of solutions, especially when solving explicitly is difficult or unnecessary.

Families of Solutions

Differential equations often admit infinitely many solutions that differ by constants.

Example. The differential equation \[ y' = 2x \] has a family of solutions \[ y = x^2 + C, \] where \( C \) is an arbitrary constant.
Example. Verify that the one-parameter family \[ y = cx - x\cos(x) \] is a solution to the differential equation \[ xy' - y = x^2\sin(x) \] on \( (-\infty, \infty) \).
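This verification can also be checked symbolically. Below is a minimal sketch using the sympy library (not part of the lecture; the symbol names are illustrative): substitute the family into the left-hand side and confirm that \( xy' - y \) simplifies to \( x^2\sin(x) \).

import sympy as sp

x, c = sp.symbols('x c')
y = c*x - x*sp.cos(x)                      # the one-parameter family

lhs = x*sp.diff(y, x) - y                  # left-hand side x*y' - y
print(sp.simplify(lhs - x**2*sp.sin(x)))   # prints 0, so the family satisfies the ODE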

Lecture 2: Section 1.2 — Initial Value Problems


Initial Value Problems

As we saw in the previous section, there can be infinitely many solutions to a differential equation. Some may be constant solutions, while others may form a one-parameter family of solutions. This naturally leads to the question: what if we want our solutions to satisfy an additional condition?

Definition. An initial value problem (IVP) is a differential equation together with one or more initial conditions that the solution must satisfy.

By prescribing initial conditions, we reduce the number of possible solutions, often to a single solution. The solution to an initial value problem is called a particular solution.

Example.

Verify that the family \( y = ce^x \) is a solution to \[ y' = y \] on \( (-\infty, \infty) \), and find the particular solution such that \( y(0) = 3 \).

Since \( y = ce^x \) is continuous and differentiable on \( (-\infty, \infty) \), we compute \[ y' = \frac{d}{dx}(ce^x) = ce^x = y. \] This verifies that \( y = ce^x \) is a one-parameter family of solutions.

To find the particular solution, we impose the initial condition: \[ y(0) = ce^0 = c = 3. \] Hence, the particular solution is \[ y = 3e^x. \]
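As a quick check (not required for the course), a computer algebra system can solve this IVP directly. A minimal sympy sketch, assuming sympy is available:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y' = y with the initial condition y(0) = 3
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x)), y(x), ics={y(0): 3})
print(sol)   # Eq(y(x), 3*exp(x)), matching the particular solution above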


When Does an Initial Value Problem Have Solutions?

Given an initial value problem, it is natural to ask whether a solution exists, and if so, whether that solution is unique. The following theorem provides sufficient conditions for both existence and uniqueness.

Theorem (Existence and Uniqueness)

Let \( R \) be a rectangle in the \( xy \)-plane containing the point \( (x_0, y_0) \) in its interior. If both \( f(x,y) \) and \( \frac{\partial f}{\partial y} \) are continuous on \( R \), then there exists an interval \( I \) containing \( x_0 \) and a unique function \( y(x) \), defined on \( I \), that solves the initial value problem \[ \frac{dy}{dx} = f(x,y), \quad y(x_0) = y_0. \]

Let’s look at a few examples to determine whether solutions to initial value problems are guaranteed to exist.

Example.

Consider the initial value problem \[ \frac{dy}{dx} = xy^{1/2}, \quad y(x_0) = y_0. \] Let \( f(x,y) = xy^{1/2} \). Then \[ \frac{\partial f}{\partial y} = x\left(\frac{1}{2}y^{-1/2}\right) = \frac{x}{2\sqrt{y}}. \]

Both \( f \) and \( \frac{\partial f}{\partial y} \) are continuous for all \( x \) and all \( y > 0 \). Therefore, if the initial condition satisfies \( y_0 > 0 \), the theorem guarantees the existence of a unique solution on some interval containing \( x_0 \).

Remark. The conditions of the existence and uniqueness theorem are sufficient but not necessary. An initial value problem may fail to meet these conditions and still have a unique solution.

Second-Order Initial Value Problems

Example.

Consider the two-parameter family of functions \[ x(t) = c_1\cos(t) + c_2\sin(t). \] Verify that this family satisfies the differential equation \[ x'' + x = 0. \]

Compute the derivatives: \[ x' = -c_1\sin(t) + c_2\cos(t), \] \[ x'' = -c_1\cos(t) - c_2\sin(t). \] Substituting into the equation gives \[ x'' + x = 0, \] which is an identity. Thus, this family is a solution.

Now find the particular solution such that \[ x(0) = -1, \quad x'(0) = 8. \] From \( x(0) \), \[ c_1 = -1. \] From \( x'(0) \), \[ c_2 = 8. \]

Therefore, the particular solution is \[ x(t) = -\cos(t) + 8\sin(t). \]
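The same check works for second-order initial value problems. A minimal sympy sketch (variable names are illustrative):

import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

ode = sp.Eq(x(t).diff(t, 2) + x(t), 0)
ics = {x(0): -1, x(t).diff(t).subs(t, 0): 8}      # x(0) = -1, x'(0) = 8
print(sp.dsolve(ode, x(t), ics=ics))              # Eq(x(t), 8*sin(t) - cos(t))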

Lecture 3: Section 1.3 — Mathematical Modeling

Many real-world phenomena are best modeled using differential equations. This is because derivatives describe the instantaneous rate at which a quantity changes with respect to another variable, typically time.

The goal of mathematical modeling is to translate physical assumptions into equations that describe how a system evolves.

We typically construct mathematical models using the following steps:

  1. Identify all variables contributing to the system.
  2. Make reasonable assumptions to limit the number of changing variables.
  3. Refine the model as additional data or constraints become available.

There is often a balance between making a model realistic and keeping it mathematically tractable. Including too many variables may lead to a model that is difficult or impossible to analyze.

A key translation used throughout this section is:

\[ \text{“rate of change is proportional to (quantity)”} \quad \Longleftrightarrow \quad \text{rate of change} = k \cdot \text{(quantity)}, \]

where \( k \) is the constant of proportionality.


Example (Radioactive Decay).

One of the most common examples of mathematical modeling using differential equations is radioactive decay.

Let \( t \) represent time (typically measured in years), and let \( A(t) \) denote the amount of radioactive substance remaining at time \( t \). Then, \[ \frac{dA}{dt} \] represents the rate of change of the substance.

A standard assumption is that the rate of decay is proportional to the amount of substance remaining. This leads to the model \[ \frac{dA}{dt} = kA, \quad k < 0. \]

The condition \( k < 0 \) ensures that the amount of substance decreases over time. Some texts instead write the model as \[ \frac{dA}{dt} = -kA, \quad k > 0. \]

This model describes a single unstable element decaying into a stable state. More complex decay chains can be modeled using systems of differential equations, which we will study later in the course.
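Although we will solve this equation by hand in Section 2.2, a short sympy sketch (my own illustration, with \( k \) left symbolic) already shows that the model has exponential solutions:

import sympy as sp

t = sp.symbols('t')
k = sp.symbols('k', negative=True)   # decay constant; k < 0 in this convention
A = sp.Function('A')

sol = sp.dsolve(sp.Eq(A(t).diff(t), k*A(t)), A(t))
print(sol)   # Eq(A(t), C1*exp(k*t)) -- exponential decay since k < 0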


Example (Spread of Disease).

Another important application of differential equations is modeling the spread of disease through a population.

Let \[ x(t) = \text{number of infected individuals}, \quad y(t) = \text{number of uninfected individuals}. \]

We assume that the rate at which the disease spreads is proportional to the number of encounters between infected and uninfected individuals. Since the number of encounters is proportional to both \( x(t) \) and \( y(t) \), we obtain the model \[ \frac{dx}{dt} = kxy. \]

If one infected individual is introduced into a fixed population of \( n \) individuals, so that \( x(0) = 1 \) and \( x + y = n + 1 \) at all times, then \( y = n + 1 - x \), and the model becomes \[ \frac{dx}{dt} = kx(n + 1 - x), \quad x(0) = 1. \]

This model can be further refined into a system that tracks susceptible, infected, and recovered populations, commonly referred to as an SIR model.
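Even without solving the model analytically, it can be explored numerically. The sketch below uses scipy's solve_ivp with illustrative values for \( k \) and \( n \) (these numbers are my own, not from the lecture):

import numpy as np
from scipy.integrate import solve_ivp

k, n = 0.001, 999                        # illustrative contact rate and population size

def dxdt(t, x):
    return [k * x[0] * (n + 1 - x[0])]   # dx/dt = k x (n + 1 - x)

sol = solve_ivp(dxdt, (0, 20), [1.0], t_eval=np.linspace(0, 20, 5))
print(sol.t)
print(sol.y[0])                          # infected count rises toward the cap n + 1 = 1000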


Example (Draining Tank).

Suppose water leaks from a tank through a circular hole of area \( A_h \) at the bottom. The volume of water leaving the tank per unit time is given by \[ cA_h\sqrt{2gh}, \quad 0 < c < 1, \] where \( h \) is the height of the water and \( g \) is the acceleration due to gravity.

Let \( V(t) \) denote the volume of water in the tank at time \( t \). Since water is leaving the tank, \[ \frac{dV}{dt} = -cA_h\sqrt{2gh}. \]

Suppose the tank is cubical with base area \( A_w \). Since volume is related to height by \[ V(t) = A_wh, \] differentiating with respect to \( t \) gives \[ \frac{dV}{dt} = A_w\frac{dh}{dt}. \]

Substituting into the expression for \( \frac{dV}{dt} \), we obtain the model \[ \frac{dh}{dt} = -\frac{cA_h}{A_w}\sqrt{2gh}. \]
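As with the disease model, this equation can be integrated numerically. A sketch with made-up tank dimensions and an assumed discharge coefficient \( c = 0.6 \) (all values are illustrative, not from the lecture):

import numpy as np
from scipy.integrate import solve_ivp

g, c = 9.8, 0.6            # gravity (m/s^2) and an assumed discharge coefficient
A_h, A_w = 0.01, 1.0       # hole area and base area (m^2), illustrative values

def dhdt(t, h):
    # dh/dt = -(c*A_h/A_w) * sqrt(2 g h); clamp h at 0 to avoid sqrt of a negative number
    return [-(c * A_h / A_w) * np.sqrt(2 * g * max(h[0], 0.0))]

sol = solve_ivp(dhdt, (0, 60), [1.0], t_eval=[0, 15, 30, 45, 60])
print(sol.y[0])            # water height (m) decreasing from 1 toward 0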

These examples illustrate how differential equations arise naturally in a wide range of physical, biological, and engineering problems.

Lecture 4: Section 2.1 — Autonomous Differential Equations and Critical Points

Definition An ordinary differential equation is said to be autonomous if the independent variable does not appear explicitly in the differential equation: \[ \frac{dy}{dx} = f(y). \]

An autonomous differential equation describes a relationship solely between the dependent variable and its derivatives; the independent variable plays no explicit role in the equation.

Example The following are examples of autonomous differential equations, both of which we will analyze below: \[ \frac{dy}{dx} = y(a - by), \qquad \frac{dy}{dx} = (y-1)^2. \]

Although solutions to autonomous differential equations can often be found using separation of variables (discussed in Section 2.2), we can gain significant qualitative information about solution behavior without explicitly solving the equation.


Critical Points and Phase Portraits

Definition For an autonomous differential equation \( \frac{dy}{dx} = f(y) \), a critical point (or stationary point) is a value \( y=c \) such that \[ f(c)=0. \]

Because the equation is autonomous, critical points correspond to constant solutions and are values of the dependent variable—not the independent variable.

Recall:

\(f'(x)\) Behavior of \(f(x)\)
\(>0\) Increasing
\(<0\) Decreasing

Example

Consider the autonomous differential equation \[ \frac{dy}{dx} = y(a - by), \quad a,b>0. \]

Critical points: \[ y=0, \quad y=\frac{a}{b}. \]

Interval Sign of \(f(y)\) Behavior of \(y(x)\)
\((-\infty,0)\) Negative Decreasing
\((0,\frac{a}{b})\) Positive Increasing
\((\frac{a}{b},\infty)\) Negative Decreasing

Phase Portrait

(Phase line with critical points at \( y=0 \) and \( y=\frac{a}{b} \); arrows point away from \( 0 \) and toward \( \frac{a}{b} \).)

Example

Consider \[ \frac{dy}{dx} = (y-1)^2. \]

The critical point is \( y=1 \). Since \( (y-1)^2 \ge 0 \), solutions are increasing on both sides.

Interval Sign of \(f(y)\) Behavior of \(y(x)\)
\((-\infty,1)\) Positive Increasing
\((1,\infty)\) Positive Increasing

Phase Portrait

(Phase line with a single critical point at \( y=1 \); arrows point upward on both sides, toward \( 1 \) from below and away from \( 1 \) above.)

Classifying Critical Points

Definition A critical point \( y=c \) is called asymptotically stable (an attractor) if solutions that start near \( c \) move toward \( c \), unstable (a repeller) if nearby solutions move away from \( c \), and semi-stable if solutions are attracted on one side of \( c \) and repelled on the other.
Example

\[ \frac{dy}{dx} = 10 + 3y - y^2 = (5-y)(2+y) \]

Critical points: \( y=-2, \; y=5 \)

Interval Sign of \(f(y)\) Behavior of \(y(x)\)
\((-\infty,-2)\) Negative Decreasing
\((-2,5)\) Positive Increasing
\((5,\infty)\) Negative Decreasing

Phase Portrait

(Phase line with critical points at \( y=-2 \) and \( y=5 \); arrows point away from \( -2 \) and toward \( 5 \).)

Thus, \( y=-2 \) is unstable (a repeller), while \( y=5 \) is asymptotically stable (an attractor).
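The sign analysis behind this classification can be automated. A small sympy sketch (the test points are arbitrary choices of mine, one from each interval):

import sympy as sp

y = sp.symbols('y')
f = 10 + 3*y - y**2

crit = sorted(sp.solve(sp.Eq(f, 0), y))
print(crit)                                # [-2, 5]

for test in (-3, 0, 6):                    # one test value in each interval
    print(test, sp.sign(f.subs(y, test)))  # -1, 1, -1: decreasing, increasing, decreasing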

Lecture 5: Section 2.2 — Separable Differential Equations

Definition A first-order differential equation of the form \[ \frac{dy}{dx} = g(x)h(y) \] is said to be separable or to have separable variables.

The method for finding solutions to separable differential equations is known as separation of variables. Suppose we are given

\[ \frac{dy}{dx} = g(x)h(y). \]

We rearrange by dividing both sides by \(h(y)\) (assuming \(h(y)\neq 0\)) and multiplying by \(dx\):

\[ \frac{dy}{h(y)} = g(x)\,dx. \]

If an explicit solution \(y=\phi(x)\) exists, then \(dy = \phi'(x)\,dx\) and \(\phi'(x) = g(x)h(\phi(x))\), so

\[ \frac{dy}{h(y)} = \frac{\phi'(x)}{h(\phi(x))}\,dx = g(x)\,dx. \]

Integrating both sides gives

\[ \int \frac{dy}{h(y)} = \int g(x)\,dx. \]

Therefore,

\[ H(y) = G(x) + C, \]

where \(H'(y)=\frac{1}{h(y)}\) and \(G'(x)=g(x)\).

Remark Only one constant of integration is needed. If \[ H(y) + c_1 = G(x) + c_2, \] then \[ H(y) = G(x) + (c_2 - c_1) = G(x) + C. \]

In practice, constants are frequently absorbed into one another. This is valid as long as we do not combine constants with expressions containing the independent variable.


Example

Find a solution to \[ \frac{dy}{dx} = 2xy^2. \]

This equation is separable since \[ g(x)=2x, \quad h(y)=y^2. \]

Separate variables:

\[ \frac{dy}{y^2} = 2x\,dx. \]

Integrate both sides:

\[ \int y^{-2}dy = \int 2x\,dx. \] \[ -\,y^{-1} = x^2 + C. \]

Rewrite, absorbing the sign into the constant:

\[ \frac{1}{y} = -x^2 + C. \]

Equivalently,

\[ y = \frac{1}{C - x^2}. \]
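A quick symbolic check that this family really solves the equation (a sketch using sympy, with \(C\) kept symbolic):

import sympy as sp

x, C = sp.symbols('x C')
y = 1 / (C - x**2)                       # the family found above

residual = sp.diff(y, x) - 2*x*y**2      # y' - 2*x*y^2 should vanish
print(sp.simplify(residual))             # 0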

Example

Find the particular solution of \[ (1+x)\frac{dy}{dx} = y, \quad y(0)=3. \]

Rewrite in separable form:

\[ \frac{dy}{y} = \frac{1}{1+x}\,dx. \]

Integrate:

\[ \int \frac{dy}{y} = \int \frac{1}{1+x}\,dx. \] \[ \ln|y| = \ln|1+x| + C. \]

Exponentiate:

\[ |y| = e^{C}|1+x|, \quad\text{so after relabeling the constant,}\quad y = C(1+x). \]

Apply the initial condition:

\[ y(0) = C(1) = 3 \Rightarrow C=3. \]

Therefore, the particular solution is

\[ y = 3(1+x). \]
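Again, a computer algebra system reproduces this particular solution. A minimal sympy sketch:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq((1 + x) * y(x).diff(x), y(x))
print(sp.dsolve(ode, y(x), ics={y(0): 3}))   # Eq(y(x), 3*x + 3), i.e. y = 3(1 + x)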

Lecture 6: Section 2.3 — First-Order Linear Differential Equations

So far, the only method we have for finding solutions to differential equations is separation of variables, which applies when a first-order differential equation has the form

\[ \frac{dy}{dx} = g(x)h(y). \]

In general, however, most first-order differential equations are not separable, so we need a method that applies to a broader class of first-order equations.

Consider a first-order linear differential equation

\[ y' + a_0(x)y = g(x). \]

This is known as standard form. Even if the coefficient in front of \(y'\) is not 1, we can always divide through by that leading coefficient on any interval where it is nonzero to put the differential equation in standard form.


Definition

A first-order linear differential equation in standard form is

\[ y' + P(x)y = f(x). \]

The method used to solve first-order linear differential equations is known as the method of integrating factors.

Multiply both sides of the equation by some function \(\mu(x)\):

\[ \mu(x)(y' + P(x)y) = \mu(x)y' + \mu(x)P(x)y = \mu(x)f(x). \]

Recall the product rule:

\[ \frac{d}{dx}(\mu(x)y) = \mu(x)y' + \mu'(x)y. \]

These expressions are equal if

\[ \mu'(x) = \mu(x)P(x). \]

This is a separable differential equation:

\[ \frac{d\mu}{dx} = \mu(x)P(x). \]

Using separation of variables:

\[ \frac{d\mu}{\mu} = P(x)\,dx. \] \[ \int \frac{d\mu}{\mu} = \int P(x)\,dx. \] \[ \ln|\mu| = \int P(x)\,dx. \] \[ \mu(x) = e^{\int P(x)\,dx}, \] where we take the constant of integration to be zero, since any single integrating factor suffices.
Integrating Factor \[ \mu(x) = e^{\int P(x)\,dx}. \]

With this choice of \(\mu(x)\), we obtain

\[ \frac{d}{dx}(\mu(x)y) = \mu(x)f(x). \]

Integrating both sides:

\[ y = \frac{1}{\mu(x)}\int \mu(x)f(x)\,dx = e^{-\int P(x)\,dx}\int e^{\int P(x)\,dx} f(x)\,dx. \]

Checklist for the Method:

  1. Put the equation in standard form \( y' + P(x)y = f(x) \).
  2. Compute the integrating factor \( \mu(x) = e^{\int P(x)\,dx} \).
  3. Multiply both sides of the equation by \( \mu(x) \).
  4. Recognize the left-hand side as \( \frac{d}{dx}(\mu(x)y) \).
  5. Integrate both sides and solve for \( y \).
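The checklist can also be carried out symbolically. The sketch below defines a small helper (the name integrating_factor_solution is my own, not standard) that implements the formula \( y = \frac{1}{\mu}\left(\int \mu f\,dx + C\right) \) and checks it against the two examples that follow:

import sympy as sp

def integrating_factor_solution(P, f, x):
    """General solution of y' + P(x) y = f(x) via the integrating factor."""
    C = sp.symbols('C')
    mu = sp.exp(sp.integrate(P, x))              # mu(x) = exp(int P dx)
    return (sp.integrate(mu * f, x) + C) / mu    # y = (int mu f dx + C) / mu

x = sp.symbols('x')
print(sp.simplify(integrating_factor_solution(-3, 0, x)))             # C*exp(3*x)
print(sp.simplify(integrating_factor_solution(1/x, sp.exp(x)/x, x)))  # (C + exp(x))/x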


Example

Consider the differential equation

\[ y' - 3y = 0. \]

Compute the integrating factor:

\[ \mu(x) = e^{\int -3\,dx} = e^{-3x}. \]

Multiply both sides:

\[ e^{-3x}y' - 3e^{-3x}y = 0. \]

Left side becomes:

\[ \frac{d}{dx}(e^{-3x}y) = 0. \]

Integrate:

\[ e^{-3x}y = C. \] \[ y = Ce^{3x}. \]

Example

Consider the differential equation

\[ xy' + y = e^x, \quad y(1)=2. \]

Divide by \(x\):

\[ y' + \frac{1}{x}y = \frac{e^x}{x}. \]

Integrating factor:

\[ \mu(x) = e^{\int \frac{1}{x}dx} = e^{\ln|x|} = x, \] taking \( x > 0 \) since the initial condition is given at \( x = 1 \).

Multiply both sides by \(x\):

\[ x y' + y = e^x. \]

Left side becomes:

\[ \frac{d}{dx}(xy) = e^x. \]

Integrate:

\[ xy = e^x + C. \] \[ y = \frac{e^x + C}{x}. \]

Apply the initial condition:

\[ 2 = \frac{e + C}{1}. \] \[ C = 2 - e. \]

Therefore,

\[ y = \frac{e^x + (2 - e)}{x}. \]

Lecture 7: Section 2.4 — Exact Differential Equations

We begin this section with a definition:

Definition

A first-order differential equation of the form

\[ M(x,y)\,dx + N(x,y)\,dy = 0 \]

is said to be an exact differential equation if the left-hand side is an exact differential, that is, the total differential \[ df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy = M(x,y)\,dx + N(x,y)\,dy \] of some function \( f(x,y) \).

Finding such an \( f \) directly may seem difficult at first, but the following theorem gives a straightforward criterion for exactness.

Theorem

Suppose \(M(x,y)\) and \(N(x,y)\) have continuous first partial derivatives on a rectangular region. Then the differential \(M(x,y)\,dx + N(x,y)\,dy\) is exact if and only if

\[ \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}. \]

Suppose we are given

\[ M(x,y)\,dx + N(x,y)\,dy = 0. \]

To find a solution, we use the following method:

  1. Check exactness: \[ \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}. \]
  2. Find a function \(f(x,y)\) such that \[ \frac{\partial f}{\partial x} = M(x,y). \] Compute \[ f(x,y) = \int M(x,y)\,dx + g(y). \]
  3. Compute \(\frac{\partial f}{\partial y}\) and use \[ \frac{\partial f}{\partial y} = N(x,y) \] to find \(g'(y)\).
  4. Integrate \(g'(y)\) to find \(g(y)\).
  5. Write the implicit solution \(f(x,y) = c\); absorbing the constant of integration into \(f\), this is often written \(f(x,y) = 0\).

This method involves integration and partial differentiation, so some review may be helpful. We now look at examples.
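The five steps can also be mirrored symbolically. Below is a minimal sketch (the helper name solve_exact is my own; sympy performs the integrations), which can be compared against the worked examples that follow:

import sympy as sp

def solve_exact(M, N, x, y):
    """Return f(x, y) with f_x = M and f_y = N; the implicit solution is f(x, y) = c."""
    assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0, "equation is not exact"
    f = sp.integrate(M, x)                    # step 2: f = int M dx (+ g(y))
    g_prime = sp.simplify(N - sp.diff(f, y))  # step 3: g'(y) = N - df/dy
    return f + sp.integrate(g_prime, y)       # steps 4-5: add g(y)

x, y = sp.symbols('x y')
print(solve_exact(2*x*y, x**2 - 1, x, y))     # x**2*y - y, matching the first example below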


Example

Consider the differential equation

\[ 2xy\,dx + (x^2-1)\,dy = 0. \]

Check exactness:

\[ \frac{\partial M}{\partial y} = 2x, \quad \frac{\partial N}{\partial x} = 2x. \]

Thus the equation is exact.

Find \(f(x,y)\) such that \(\frac{\partial f}{\partial x} = M(x,y)\):

\[ f(x,y) = \int 2xy\,dx + g(y) = x^2y + g(y). \]

Differentiate with respect to \(y\):

\[ \frac{\partial f}{\partial y} = x^2 + g'(y). \]

Set equal to \(N(x,y) = x^2-1\):

\[ x^2 + g'(y) = x^2 - 1 \Rightarrow g'(y) = -1. \]

Integrate:

\[ g(y) = -y + C. \]

Solution curve:

\[ x^2y - y + C = 0, \quad y = \frac{C}{x^2-1}. \]

Verification (optional):

\[ y = \frac{C}{x^2-1}, \quad dy = \frac{-2Cx}{(x^2-1)^2}\,dx. \] \[ 2xy\,dx + (x^2-1)dy = 0. \]

Example

Consider the differential equation

\[ (y^2\cos x - 3x^2y - 2x)\,dx + (2y\sin x - x^3 + \ln y)\,dy = 0. \]

Check exactness:

\[ \frac{\partial M}{\partial y} = 2y\cos x - 3x^2, \quad \frac{\partial N}{\partial x} = 2y\cos x - 3x^2. \]

Thus the equation is exact.

Find \(f(x,y)\):

\[ f(x,y) = \int (y^2\cos x - 3x^2y - 2x)\,dx + g(y). \] \[ f(x,y) = y^2\sin x - x^3y - x^2 + g(y). \]

Differentiate with respect to \(y\):

\[ \frac{\partial f}{\partial y} = 2y\sin x - x^3 + g'(y). \]

Set equal to \(N(x,y)\):

\[ g'(y) = \ln y. \]

Integrate:

\[ g(y) = y\ln y - y + C. \]

Solution curve:

\[ y^2\sin x - x^3y - x^2 + y\ln y - y + C = 0. \]
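A symbolic check that the function \( f(x,y) = y^2\sin x - x^3y - x^2 + y\ln y - y \) really satisfies \( f_x = M \) and \( f_y = N \) (a sympy sketch):

import sympy as sp

x, y = sp.symbols('x y')
M = y**2*sp.cos(x) - 3*x**2*y - 2*x
N = 2*y*sp.sin(x) - x**3 + sp.log(y)
f = y**2*sp.sin(x) - x**3*y - x**2 + y*sp.log(y) - y

print(sp.simplify(sp.diff(f, x) - M))   # 0
print(sp.simplify(sp.diff(f, y) - N))   # 0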

Lecture 8: Section 2.5 — Linear Substitutions