

MATH 2410 — Differential Equations

Lecture Notes. These notes are NOT meant to be a substitute for attending lecture or consulting the textbook directly, but rather a resource for reviewing what was discussed in lecture when preparing for exams.

Lecture 8: Section 3.1 - Mathematical Modeling (Linear)

In this lecture, we use the methods discussed in Lecture 3 to construct models for certain real-world systems, and we now solve the resulting differential equations.


Example: Radioactive Decay

Suppose that we have a substance that decays at a rate proportional to the amount present at time $t$, and that has a half-life of $3.3$ hours. If $1$g of the substance is present initially, how long will it take for $90\%$ of the substance to decay?

First, we define the variables:

\[ \begin{aligned} A(t) &= \text{amount of substance present at time } t \\ t &= \text{time in hours} \\ A'(t) &= \text{rate of decay} \end{aligned} \]

The statement “decays at a rate proportional to the amount present” gives

\[ \frac{dA}{dt} = -kA, \quad k>0. \]

The half-life of $3.3$ hours means

\[ A(3.3) = \frac{1}{2}A(0). \]

Since initially $A(0)=1$, we obtain

\[ A(3.3) = \frac{1}{2}. \]

If $90\%$ decays, then $10\%$ remains, so we want

\[ A(t) = \frac{1}{10}. \]

The differential equation is separable, giving the general solution

\[ A(t) = Ce^{-kt}. \]

Using the initial condition:

\[ 1 = A(0) = C. \]

Using the half-life:

\[ \frac{1}{2} = e^{-k(3.3)} \quad \Rightarrow \quad k = \frac{-1}{3.3}\ln\!\left(\frac{1}{2}\right). \]

Finally:

\[ e^{-kt} = \frac{1}{10} \quad \Rightarrow \quad t = \frac{-1}{k}\ln\!\left(\frac{1}{10}\right) \approx 10.96 \text{ hours}. \]
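The computation above can be reproduced in a few lines of Python (a minimal sketch; the variable names are our own):

```python
import math

half_life = 3.3                      # hours
k = math.log(2) / half_life          # from A(3.3) = 1/2:  k = ln(2)/3.3
t_90 = math.log(10) / k              # solve e^{-kt} = 1/10 for t

print(round(k, 4), round(t_90, 2))   # k ≈ 0.21, t ≈ 10.96 hours
```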

This type of model is used for exponential growth and decay processes, including population growth.


Example: SI Model for Infectious Disease

Suppose that we have a fixed isolated population of $n$ people and we introduce one infected person into the population. The rate of spread of the disease is proportional to the number of interactions between susceptible and infected individuals.

Define the variables:

\[ \begin{aligned} S(t) &= \text{number of susceptible individuals at time } t \\ I(t) &= \text{number of infected individuals at time } t \\ t &= \text{time} \\ I'(t) &= \text{rate of spread of the disease} \end{aligned} \]

The statement that the disease spreads proportional to encounters between susceptible and infected individuals gives:

\[ \frac{dI}{dt} = kSI. \]

Since the population is fixed and starts with one infected person:

\[ S + I = n + 1 \] \[ I(0)=1, \quad S(0)=n. \]

Substituting $S = n+1-I$ gives the model:

\[ I'(t) = kI(n+1-I), \qquad I(0)=1. \]

This is a logistic-type differential equation that models the spread of infection over time.
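This model has the closed-form logistic solution \(I(t) = \frac{n+1}{1 + n e^{-k(n+1)t}}\), which satisfies \(I(0)=1\). A numerical sanity check, using illustrative values \(n=100\) and \(k=0.001\) that are not from the lecture:

```python
import math

n, k = 100, 0.001    # illustrative values, not from the lecture

def I_exact(t):
    """Closed-form logistic solution of I' = k I (n+1-I), I(0) = 1."""
    return (n + 1) / (1 + n * math.exp(-k * (n + 1) * t))

def I_euler(t, steps=200000):
    """Forward-Euler integration of the same model, as a cross-check."""
    I, dt = 1.0, t / steps
    for _ in range(steps):
        I += dt * k * I * (n + 1 - I)
    return I

print(I_exact(50.0), I_euler(50.0))  # the two values should nearly agree
```

As \(t\) grows, both computations approach the full population \(n+1\), as expected for a logistic curve.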


Lecture 9: Section 3.2 - Mathematical Modeling (Nonlinear)

So far, many of the models we have studied have been linear differential equations. However, real systems often depend on multiple interacting factors, leading to nonlinear models.

General Nonlinear Growth Model

Suppose a growth rate depends both on the amount present and another factor that itself depends on the amount present. Then we may have a model of the form

\[ \frac{dx}{dt} = x f(x) \]

This is nonlinear because the dependent variable appears multiplied by a function of itself.

Carrying Capacity and Logistic Equation

Suppose an environment has a carrying capacity for a population. Then we model

\[ \frac{dP}{dt} = P f(P) \]

where

\[ f(K)=0 \]

and \(K\) is the carrying capacity.

A common choice is a linear function:
\[ f(P)=a-bP \]
which produces the **logistic equation**
\[ \frac{dP}{dt}=P(a-bP). \]

Solving the Logistic Equation

Separate variables:
\[ \frac{dP}{P(a-bP)} = dt. \]
Using partial fractions:
\[ \frac{1}{P(a-bP)} = \frac{C_1}{P} + \frac{C_2}{a-bP}. \]
After integration and algebra:
\[ P(t)=\frac{aP_0}{bP_0+(a-bP_0)e^{-at}}. \]
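We can verify this formula numerically by checking that it satisfies \(P' = P(a-bP)\) and \(P(0)=P_0\) (a sketch with made-up parameter values):

```python
import math

a, b, P0 = 1.0, 0.01, 2.0   # made-up parameters for the check

def P(t):
    """Closed-form logistic solution P(t) = a P0 / (b P0 + (a - b P0) e^{-at})."""
    return a * P0 / (b * P0 + (a - b * P0) * math.exp(-a * t))

assert abs(P(0.0) - P0) < 1e-12          # initial condition holds
for t in (0.5, 2.0, 5.0):                 # P'(t) ≈ P(a - bP), central difference
    h = 1e-6
    lhs = (P(t + h) - P(t - h)) / (2 * h)
    rhs = P(t) * (a - b * P(t))
    assert abs(lhs - rhs) < 1e-4
print("logistic solution verified")
```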

Example — Disease Spread

A student carrying a flu virus returns to a school of 1000 students. The rate of spread of the virus is proportional to both the number of infected students and the number of students not yet infected.

Model:
\[ \frac{dx}{dt}=k x(1000-x). \]
With initial condition:
\[ x(0)=1. \]
Solution form:
\[ x(t)=\frac{1000}{1+999e^{-1000kt}}. \]
Using data \(x(4)=50\), we can solve for \(k\) and predict future infections.
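Carrying out that computation in Python (a sketch; the prediction time \(t=6\) is our own illustrative choice):

```python
import math

# Fit k from x(4) = 50:  50 = 1000 / (1 + 999 e^{-4000k})
#   => 1 + 999 e^{-4000k} = 20  =>  e^{-4000k} = 19/999
#   => k = ln(999/19) / 4000
k = math.log(999 / 19) / 4000

def x(t):
    return 1000 / (1 + 999 * math.exp(-1000 * k * t))

print(round(x(4)))   # recovers the data point: 50
print(round(x(6)))   # predicted number infected at t = 6
```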

Adjusting Logistic Models

Suppose a constant number of individuals is removed (harvesting):
\[ \frac{dP}{dt}=P\left(r-\frac{r}{K}P\right)-h, \quad h>0. \]
This equation is still autonomous and separable, but the **critical points change** because of the constant removal term.

Lecture 10: Section 3.3 - Mathematical Modeling (Systems of Linear Equations)

Definition of a System

A system of first-order differential equations has the form
\[ \frac{dx}{dt}=g_1(t,x,y), \qquad \frac{dy}{dt}=g_2(t,x,y). \]
A system is linear if both equations are linear in the dependent variables.
\[ \frac{dx}{dt}=c_1x+c_2y+f_1(t) \] \[ \frac{dy}{dt}=c_3x+c_4y+f_2(t). \]
If the system is not linear, then it is nonlinear.

Why Systems?

Systems model interactions between multiple quantities, such as sequential radioactive decay and mixing between interconnected tanks, as in the two examples below.

Example — Radioactive Decay Chain

Suppose substances decay in sequence:

X → Y → Z

Model:
\[ \frac{dx}{dt}=-\lambda_1 x \] \[ \frac{dy}{dt}=-\lambda_2 y + \lambda_1 x \] \[ \frac{dz}{dt}=\lambda_2 y. \]
Each later equation depends on the variables of the earlier ones, so we must solve the equations together.

Example — Mixing Tanks

Two tanks exchange fluid. Let \(x_1(t)\) and \(x_2(t)\) denote the amounts of salt in tanks 1 and 2 at time \(t\). Then:
\[ \frac{dx_1}{dt}=-\frac{2}{25}x_1+\frac{1}{50}x_2 \] \[ \frac{dx_2}{dt}=\frac{2}{25}x_1-\frac{2}{25}x_2. \]
Initial conditions:
\[ x_1(0)=25, \qquad x_2(0)=0. \]
This is a linear system with constant coefficients.
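A forward-Euler sketch of this system (the step count and end time are arbitrary choices of ours):

```python
def simulate(t_end, steps=100000):
    """Forward-Euler integration of the two-tank system with x1(0)=25, x2(0)=0."""
    x1, x2 = 25.0, 0.0
    dt = t_end / steps
    for _ in range(steps):
        dx1 = -2 / 25 * x1 + 1 / 50 * x2
        dx2 = 2 / 25 * x1 - 2 / 25 * x2
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2

x1, x2 = simulate(10.0)
print(x1, x2)   # both amounts decay toward 0 as t grows
```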

Key Takeaways

Systems of differential equations arise whenever two or more quantities influence each other's rates of change. A system is linear if each equation is linear in the dependent variables, and both the decay-chain and mixing-tank examples above lead to linear systems with constant coefficients.

Lecture 11: Section 4.1 - Higher-Order Differential Equations

In the previous lectures, we have looked at methods to solve certain types of first-order differential equations. Now, we will look at methods of solving higher-order linear differential equations. Let’s begin with a definition.

Definition

A linear \(n\)th-order differential equation of the form \[ a_n(x)y^{(n)}+a_{n-1}(x)y^{(n-1)}+\cdots + a_1(x)y'+a_0(x)y = 0 \] is said to be homogeneous. Otherwise, we have \[ a_n(x)y^{(n)}+a_{n-1}(x)y^{(n-1)}+\cdots + a_1(x)y'+a_0(x)y = g(x) \] where \(g(x) \neq 0\), which is said to be nonhomogeneous.

Here we insist that the functions \(a_i(x)\) depend only on \(x\) and are continuous on the interval we are considering.

Remark

All homogeneous linear differential equations have the trivial constant solution \(y=0\). So we are more interested in finding non-constant solutions. From now on, when we refer to “solutions,” we mean nontrivial solutions.

When we find one solution to a higher-order linear differential equation, we actually find infinitely many. Suppose that \(y\) is a solution to \[ a_n(x)y^{(n)}+\cdots + a_1(x)y'+a_0(x)y =0 \] and \(c\) is a constant. Then \[ a_n(x)(cy)^{(n)}+\cdots+a_0(x)(cy) = c\left(a_n(x)y^{(n)}+\cdots+a_0(x)y\right)=0. \]

This reveals that for any constant \(c\), the curve \(cy\) is also a solution. This follows from the linearity of derivatives: \[ (cy)' = cy', \qquad (y_1+y_2)' = y_1'+y_2'. \]

Similarly, if \(y_1\) and \(y_2\) are solutions, then \[ a_n(x)(y_1+y_2)^{(n)}+\cdots+a_0(x)(y_1+y_2)=0, \] so \(y_1+y_2\) is also a solution.

Theorem

Let \(y_1, \dots, y_n\) be solutions to a homogeneous differential equation. Then for any constants \(c_1, \dots , c_n\), \[ y = c_1y_1 + \cdots + c_ny_n \] is also a solution.

The question remains: how do we find all the solutions?

Definition

A set of functions \(f_1(x), \dots, f_n(x)\) is said to be linearly dependent on an interval \(I\) if there exist constants \(c_1, \dots, c_n\) (not all zero) such that \[ c_1f_1(x) + \cdots + c_nf_n(x) = 0 \] for all \(x\) in \(I\). Otherwise, the set is said to be linearly independent.

To find all solutions, we need a set of linearly independent solutions. Any solution can then be written as a linear combination of these.

Example

Show that \(f_1(x)=x\) and \(f_2(x)=x^2\) are linearly independent.

We set \[ c_1x+c_2x^2=0 \quad \text{for all } x. \] Evaluating at \(x=1\) gives \(c_1+c_2=0\), and evaluating at \(x=2\) gives \(2c_1+4c_2=0\). The only solution of this system is \(c_1=c_2=0\), so the functions are linearly independent.

An alternative way to check linear independence is using the Wronskian.

Theorem (Wronskian Test)

The functions \(f_1,\dots,f_n\) are linearly independent if \[ W(f_1,\dots,f_n)= \det \begin{bmatrix} f_1 & \cdots & f_n\\ \vdots & \ddots & \vdots\\ f_1^{(n-1)} & \cdots & f_n^{(n-1)} \end{bmatrix} \neq 0. \]

For the previous example, \[ W(x,x^2)= \det \begin{bmatrix} x & x^2\\ 1 & 2x \end{bmatrix} = x(2x)-x^2(1)=x^2 \neq 0. \] Thus the functions are linearly independent.
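The Wronskian test is easy to sanity-check numerically. Below, `wronskian2` is a small helper of our own (not a library function) that approximates \(W(f,g)=fg'-gf'\) with central differences:

```python
def wronskian2(f, g, x, h=1e-6):
    """Approximate W(f, g)(x) = f(x) g'(x) - g(x) f'(x) by central differences."""
    fp = (f(x + h) - f(x - h)) / (2 * h)
    gp = (g(x + h) - g(x - h)) / (2 * h)
    return f(x) * gp - g(x) * fp

w = wronskian2(lambda t: t, lambda t: t * t, 3.0)
print(w)   # W(x, x^2) = x^2, so this is ≈ 9 at x = 3
```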

Remark

For two functions, remember \[ \det \begin{bmatrix} a & b\\ c & d \end{bmatrix} = ad-bc. \]

Definition

Any set of \(n\) linearly independent solutions to an \(n\)th-order homogeneous differential equation is called a fundamental set of solutions.

Theorem

Given a fundamental set of solutions, the general solution is \[ y = c_1f_1 + \cdots + c_nf_n \] where \(c_1,\dots,c_n\) are constants.

Example

The differential equation \[ y''+y=0 \] has general solution \[ y=c_1\sin(x)+c_2\cos(x). \]

We check independence using the Wronskian: \[ \det \begin{bmatrix} \sin(x) & \cos(x)\\ \cos(x) & -\sin(x) \end{bmatrix} = -\sin^2(x)-\cos^2(x)=-1\neq 0. \]

Since the equation is second order, we only need two linearly independent solutions.

Lecture 12: Section 4.2 - Reduction of Order

In this lecture, we discuss a method called reduction of order, which allows us to find a second linearly independent solution of a second-order linear homogeneous differential equation when one solution is already known.

Definition

Consider the second-order linear homogeneous differential equation \[ y'' + P(x)y' + Q(x)y = 0. \] If one nontrivial solution \(y_1(x)\) is known, the method of reduction of order can be used to find a second solution \(y_2(x)\) that is linearly independent from \(y_1\).

Idea of the Method

We assume the second solution has the form \[ y_2 = v(x)y_1(x), \] where \(v(x)\) is an unknown nonconstant function to be determined.

Then via product rule \[ y_2' = v'y_1 + vy_1', \quad y_2'' = v''y_1 + 2v'y_1' + vy_1''. \]

Substituting into the differential equation and using the fact that \(y_1\) satisfies the equation simplifies the expression and eventually leads to a first-order equation in \(v'\).

Theorem (Reduction of Order Formula)

If \(y_1(x)\) is a known solution to \[ y'' + P(x)y' + Q(x)y = 0, \] then a second solution is given by \[ y_2 = y_1(x)\int \frac{e^{-\int P(x)\,dx}}{(y_1(x))^2}\,dx. \]

Remark

The two solutions \(y_1\) and \(y_2\) will be linearly independent provided the Wronskian is nonzero.

Example 1

Find a second solution of \[ y'' + y = 0 \] given that \(y_1 = \cos x\) is a solution.

Here \(P(x)=0\), so the formula becomes \[ y_2 = y_1 \int \frac{1}{y_1^2}\,dx. \]

Thus, \[ y_2 = \cos x \int \frac{1}{\cos^2 x}\,dx = \cos x \int \sec^2 x\,dx = \cos x \tan x = \sin x. \]

So the general solution is \[ y = c_1\cos x + c_2\sin x. \]

Example 2

Find a second solution of \[ y'' - y = 0 \] given \(y_1 = e^x.\)

Again \(P(x)=0\), so \[ y_2 = y_1 \int \frac{1}{y_1^2}\,dx = e^x \int e^{-2x}\,dx. \]

Compute the integral: \[ \int e^{-2x}\,dx = -\frac{1}{2}e^{-2x}. \]

Therefore, \[ y_2 = e^x\left(-\frac{1}{2}e^{-2x}\right) = -\frac{1}{2}e^{-x}. \]

Since constants do not matter for linear independence, we take \[ y_2 = e^{-x}. \]

Hence the general solution is \[ y = c_1 e^x + c_2 e^{-x}. \]
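As a quick numerical check, both \(e^x\) and the new solution \(e^{-x}\) should satisfy \(y''-y=0\); here we approximate \(y''\) with a second-order central difference (a sketch, not a rigorous verification):

```python
import math

def residual(y, x, h=1e-5):
    """Approximate y''(x) - y(x) using a second-order central difference."""
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return ypp - y(x)

for x0 in (0.0, 0.5, 1.0):
    assert abs(residual(lambda t: math.exp(t), x0)) < 1e-4    # y1 = e^x
    assert abs(residual(lambda t: math.exp(-t), x0)) < 1e-4   # y2 = e^{-x}
print("both solutions satisfy y'' - y = 0")
```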

Summary

Given one nontrivial solution \(y_1\) of \(y'' + P(x)y' + Q(x)y = 0\), reduction of order produces a second, linearly independent solution \(y_2 = y_1 \int \frac{e^{-\int P\,dx}}{y_1^2}\,dx\), and the general solution is \(y = c_1y_1 + c_2y_2\).

Lecture 13: Section 4.3 - Auxiliary Equations

Now that we have seen the method of reduction of order, we know how to find more linearly independent solutions to a higher-order differential equation once we have one solution already.

The missing piece in solving homogeneous differential equations is finding the first solution.

We now focus on equations of the form

\[ y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1y' + a_0y = 0 \]

where each \(a_i\) is a constant.

Choosing a Candidate Solution

We want a function whose derivatives are constant multiples of itself. The most natural choice is

\[ y_1 = e^{mx}. \]

Substituting into the differential equation gives:

\[ e^{mx}(m^n + a_{n-1}m^{n-1} + \cdots + a_1m + a_0) = 0. \] Since \(e^{mx}\) is never zero, \(y_1 = e^{mx}\) is a solution exactly when \(m\) is a root of the polynomial factor.
Definition (Auxiliary Equation)
The equation \[ m^n + a_{n-1}m^{n-1} + \cdots + a_1m + a_0 = 0 \] is called the auxiliary equation (or characteristic equation).

Roots of the auxiliary equation determine solutions of the differential equation.


Second-Order Case

Consider \[ y'' + a_1 y' + a_2 y = 0. \] The auxiliary equation is \[ m^2 + a_1 m + a_2 = 0. \] There are three possible cases.

Case 1: Two Distinct Real Roots

If \[ f(m) = (m-a)(m-b), \quad a \neq b, \] then the general solution is \[ y = c_1 e^{ax} + c_2 e^{bx}. \] This is the simplest case.

Case 2: Repeated Root

If \[ f(m) = (m-a)^2, \] then we only get one solution \(y_1 = e^{ax}\). We must use reduction of order.
Example
Solve \[ y'' - 2ay' + a^2 y = 0. \]

Auxiliary equation: \[ (m-a)^2 = 0. \]

First solution: \[ y_1 = e^{ax}. \]

Let \(y_2 = u e^{ax}\). Substitution simplifies to \[ u'' e^{ax} = 0, \] so \[ u'' = 0. \] Integrating twice: \[ u = c_1 x + c_2. \] To maintain independence, take \(u = x\). Thus \[ y_2 = x e^{ax}. \]

General solution: \[ y = c_1 e^{ax} + c_2 x e^{ax}. \]

Case 3: Complex Roots

If the auxiliary equation has complex roots \[ m = \alpha \pm i\beta, \] we use Euler's identity \[ e^{i\theta} = \cos(\theta) + i\sin(\theta). \] The real-valued general solution becomes \[ y = e^{\alpha x}\left( c_1 \cos(\beta x) + c_2 \sin(\beta x) \right). \]

Special case: \[ y'' + k^2 y = 0 \] has general solution \[ y = c_1 \cos(kx) + c_2 \sin(kx). \]

Worked Examples

Example 1
Solve \[ y'' + 4y' + 3y = 0. \]

Auxiliary equation: \[ m^2 + 4m + 3 = 0. \] Factor: \[ (m+1)(m+3)=0. \] Roots: \[ m=-1, \quad m=-3. \]

General solution: \[ y = c_1 e^{-x} + c_2 e^{-3x}. \]
Example 2
Solve \[ y'' + 2y' + y = 0. \]

Auxiliary equation: \[ m^2 + 2m + 1 = 0. \] Factor: \[ (m+1)^2=0. \] Repeated root \(m=-1\).

General solution: \[ y = c_1 e^{-x} + c_2 x e^{-x}. \]
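Both examples can be checked by computing the roots of the auxiliary equation directly; `aux_roots` below is our own helper applying the quadratic formula, using `cmath` so complex roots also work:

```python
import cmath

def aux_roots(a1, a2):
    """Roots of the auxiliary equation m^2 + a1*m + a2 = 0 (quadratic formula)."""
    disc = cmath.sqrt(a1 * a1 - 4 * a2)
    return (-a1 + disc) / 2, (-a1 - disc) / 2

print(aux_roots(4, 3))   # Example 1: roots -1 and -3
print(aux_roots(2, 1))   # Example 2: repeated root -1
print(aux_roots(0, 4))   # y'' + 4y = 0: roots ±2i, giving cos(2x) and sin(2x)
```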

Summary

For constant-coefficient homogeneous equations, substitute \(y=e^{mx}\) to obtain the auxiliary equation. Distinct real roots give distinct exponentials, a repeated root \(a\) gives \(e^{ax}\) and \(xe^{ax}\), and complex roots \(\alpha\pm i\beta\) give \(e^{\alpha x}\cos(\beta x)\) and \(e^{\alpha x}\sin(\beta x)\).

Lecture 14: Section 4.4 - Method of Undetermined Coefficients

We now study equations of the form \[ y'' + a_1 y' + a_2 y = g(x), \] where \(g(x) \neq 0\). The general solution has the form \[ y = y_h + y_p, \] where \(y_h\) is the general solution of the associated homogeneous equation and \(y_p\) is any particular solution of the nonhomogeneous equation.

Idea of the Method

The method of undetermined coefficients works when \(g(x)\) is a polynomial, an exponential \(e^{ax}\), a sine or cosine, or a finite sum or product of such functions. We guess a form for \(y_p\) with unknown constants and determine them by substitution.

Step-by-Step Procedure

  1. Solve the homogeneous equation to find \(y_h\).
  2. Guess the form of \(y_p\) based on \(g(x)\).
  3. If guess duplicates a homogeneous term, multiply by \(x\).
  4. Plug into the equation and solve for coefficients.

Table of Guesses

| \(g(x)\) | Guess for \(y_p\) |
| --- | --- |
| polynomial of degree \(n\) | general polynomial of degree \(n\) |
| \(e^{ax}\) | \(Ae^{ax}\) |
| \(\sin kx\), \(\cos kx\) | \(A\cos kx + B\sin kx\) |
| sum/difference/product of functions above | corresponding sum/difference/product of the guesses |
Example 1
Solve \[ y''+5y'-6y = -6x^2-2x+1. \]

Step 1: Homogeneous solution. Auxiliary equation: \[ m^2+5m-6=0 \quad\Rightarrow\quad (m+6)(m-1)=0 \quad\Rightarrow\quad m=-6,\ m=1, \] so \[ y_h = c_1 e^{-6x}+c_2 e^{x}. \]

Step 2: Guess the particular solution. Since the right-hand side is a quadratic polynomial, guess \[ y_p=Ax^2+Bx+C. \]

Step 3: Compute derivatives. \[ y_p'=2Ax+B, \qquad y_p''=2A. \]

Step 4: Substitute into \(y''+5y'-6y\): \[ 2A + 5(2Ax+B) -6(Ax^2+Bx+C) = -6Ax^2 + (10A-6B)x + (2A+5B-6C). \] Matching coefficients with \(-6x^2-2x+1\): \[ -6A=-6, \qquad 10A-6B=-2, \qquad 2A+5B-6C=1, \] so \(A=1\), \(B=2\), and \(C=\frac{11}{6}\).

Final answer: \[ y=c_1 e^{-6x}+c_2 e^{x}+x^2+2x+\frac{11}{6}. \]
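As a check, substituting \(y_p = x^2 + 2x + \frac{11}{6}\) back into the left-hand side should reproduce \(-6x^2-2x+1\) at every \(x\):

```python
def lhs(x):
    """Evaluate y'' + 5y' - 6y for y_p = x^2 + 2x + 11/6."""
    yp = x * x + 2 * x + 11 / 6
    yp1 = 2 * x + 2      # y_p'
    yp2 = 2.0            # y_p''
    return yp2 + 5 * yp1 - 6 * yp

for x in (-2.0, 0.0, 1.0, 3.5):
    assert abs(lhs(x) - (-6 * x * x - 2 * x + 1)) < 1e-9
print("particular solution verified")
```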
Example 2
Solve \[ y''-5y'+4y=\sin(2x). \]

Step 1: Homogeneous solution. Auxiliary equation: \[ m^2-5m+4=0 \quad\Rightarrow\quad (m-1)(m-4)=0, \] so \[ y_h=c_1 e^{x}+c_2 e^{4x}. \]

Step 2: Guess the particular solution. Since the right-hand side is \(\sin(2x)\), guess \[ y_p=A\cos(2x)+B\sin(2x). \]

Step 3: Compute derivatives. \[ y_p'=-2A\sin(2x)+2B\cos(2x), \qquad y_p''=-4A\cos(2x)-4B\sin(2x). \]

Step 4: Substitute into \(y''-5y'+4y\). The \(\cos(2x)\) terms give \(-4A-10B+4A=-10B\), and the \(\sin(2x)\) terms give \(-4B+10A+4B=10A\), so \[ y_p''-5y_p'+4y_p = -10B\cos(2x) + 10A\sin(2x). \] Matching coefficients with \(\sin(2x)\): \[ -10B=0, \qquad 10A=1, \] so \(A=\frac{1}{10}\) and \(B=0\).

Final answer: \[ y=c_1 e^{x}+c_2 e^{4x}+\frac{1}{10}\cos(2x). \]
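Numerically substituting \(y_p = \frac{1}{10}\cos(2x)\) into the left-hand side confirms that it reproduces \(\sin(2x)\):

```python
import math

def lhs(x):
    """Evaluate y'' - 5y' + 4y for y_p = (1/10) cos(2x)."""
    yp = 0.1 * math.cos(2 * x)
    yp1 = -0.2 * math.sin(2 * x)   # y_p'
    yp2 = -0.4 * math.cos(2 * x)   # y_p''
    return yp2 - 5 * yp1 + 4 * yp

for x in (0.0, 0.7, 2.0, -1.3):
    assert abs(lhs(x) - math.sin(2 * x)) < 1e-12
print("y_p verified")
```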
Example 3
Solve \[ y''+y=10x+e^{5x}, \quad y(0)=1. \]

Step 1: Homogeneous solution. Auxiliary equation: \[ m^2+1=0, \] so \[ y_h=c_1\cos x+c_2\sin x. \]

Step 2: Guess the particular solution. Split the right-hand side into two parts: the polynomial \(10x\) suggests \(Ax+B\), and the exponential \(e^{5x}\) suggests \(Ce^{5x}\). So \[ y_p=Ax+B+Ce^{5x}. \]

Step 3: Compute derivatives. \[ y_p'=A+5Ce^{5x}, \qquad y_p''=25Ce^{5x}. \]

Step 4: Substitute into \(y''+y\): \[ 25Ce^{5x} + Ax+B+Ce^{5x} = Ax+B+26Ce^{5x}. \] Matching coefficients with \(10x+e^{5x}\): \[ A=10, \qquad B=0, \qquad 26C=1 \Rightarrow C=\frac{1}{26}. \] Thus \[ y=c_1\cos x+c_2\sin x+10x+\frac{1}{26}e^{5x}. \]

Applying the initial condition: \[ y(0)=c_1+\frac{1}{26}=1 \quad\Rightarrow\quad c_1=\frac{25}{26}. \] Since only the single condition \(y(0)=1\) is given, \(c_2\) remains a free constant.

Final answer: \[ y=\frac{25}{26}\cos x +c_2\sin x +10x +\frac{1}{26}e^{5x}. \]
When the Method Fails

This method does NOT work if \(g(x)\) is not one of the forms above, for example \(g(x)=\ln x\), \(g(x)=\tan x\), or \(g(x)=1/x\). In those cases, we use variation of parameters (next lecture).

Summary

To solve \(y''+a_1y'+a_2y=g(x)\): find \(y_h\) from the auxiliary equation, guess the form of \(y_p\) from the form of \(g(x)\) (multiplying by \(x\) if the guess duplicates a homogeneous term), solve for the unknown coefficients by substitution, and write \(y=y_h+y_p\).

Lecture 15: Section 4.6 - Variation of Parameters