- §1. Definitions and Notation
- §2. First Order: Notation and Theorems
- §3. First Order: Separable
- §4. First Order: Exact
- §5. The Word "Linear"
- §6. Homogeneous (Const Coeff)
- §7. Linear Independence
- §8. Inhomogeneous (Const Coeff)
- §9. Linear, Series Solutions: Theorems
- §10. Linear, Series Solutions: Method
The Word "Linear": Definitions and Theorems
In this section, I will give, without proof, several important theorems about linear differential equations. But before I get to the theorems, you will need to understand what is meant by the word linear in order to make sense of these theorems. Before you read this section, make sure you know the definitions and notation from §1 of the book.
Definition
An operator $\cal{L}$ is linear if it satisfies two conditions when acting on any appropriate vectors $v$, $v_1$, and $v_2$ and for any constant $\alpha$: \begin{align} \cal{L} (\alpha v)&=\alpha \cal{L} v\\ \cal{L} (v_1+v_2)&=\cal{L} v_1 + \cal{L} v_2 \end{align}
Examples
Example 1: Linear Equations.
The word linear comes from linear equations, i.e. equations for straight lines. The equation for a line through the origin $y=mx$ comes from the operator $f(x)=mx$ acting on vectors that are real numbers $x$, with constants $\alpha$ that are also real numbers. The first property: $$f(\alpha x)=m(\alpha x)=\alpha (mx)=\alpha f(x)$$ is just the commutativity (and associativity) of multiplication of real numbers. The second property: $$f(x_1+x_2)=m(x_1+x_2)=(mx_1)+(mx_2)=f(x_1)+f(x_2)$$ is just the distributivity of multiplication over addition. These are properties of real numbers that you've used since grade school, even if you didn't know to call the properties by these fancy names! Note that the equation for a line NOT through the origin $y=mx+b$, leading to the operator $g(x)=mx+b$, is NOT linear: $$g(x_1+x_2)=m(x_1+x_2)+b \ne (mx_1+b)+(mx_2+b)=g(x_1)+g(x_2).$$ It will be helpful to remember these two examples when you are learning the difference between homogeneous and inhomogeneous linear differential equations in examples 4 and 5 below.
Example 2: Matrix Multiplication. I will not prove it here, but you use the fact that matrix multiplication (acting on column vectors, with scalar multiplication by numbers $\alpha$) is a linear operator when you do the following common matrix manipulations. $$ \begin{pmatrix} 2&3\\4&5 \end{pmatrix} \left( \alpha \begin{pmatrix} 6\\7 \end{pmatrix} \right) = \alpha\left( \begin{pmatrix} 2&3\\4&5 \end{pmatrix} \begin{pmatrix} 6\\7 \end{pmatrix} \right) $$
$$ \begin{pmatrix} 2&3\\4&5 \end{pmatrix} \left( \begin{pmatrix} 6\\7 \end{pmatrix} + \begin{pmatrix} 8\\9 \end{pmatrix} \right) = \left( \begin{pmatrix} 2&3\\4&5 \end{pmatrix} \begin{pmatrix} 6\\7 \end{pmatrix} \right) + \left( \begin{pmatrix} 2&3\\4&5 \end{pmatrix} \begin{pmatrix} 8\\9 \end{pmatrix} \right) $$
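If you want to convince yourself of the second property, you can check it with these particular numbers by multiplying out both sides: the left-hand side is the matrix acting on the sum $\begin{pmatrix} 14\\16 \end{pmatrix}$, and the right-hand side adds the two individual products $\begin{pmatrix} 33\\59 \end{pmatrix}$ and $\begin{pmatrix} 43\\77 \end{pmatrix}$. $$ \begin{pmatrix} 2&3\\4&5 \end{pmatrix} \begin{pmatrix} 14\\16 \end{pmatrix} = \begin{pmatrix} 76\\136 \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 33\\59 \end{pmatrix} + \begin{pmatrix} 43\\77 \end{pmatrix} = \begin{pmatrix} 76\\136 \end{pmatrix} $$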
Example 3: Hermitian Operators in Quantum Mechanics.
I will not prove it here, but you use the fact that Hermitian operators in quantum mechanics, e.g. the Hamiltonian, are linear when you do the following bra/ket manipulations. $$ H\left(\alpha \vert \psi\rangle\right)=\alpha \left(H \vert \psi\rangle\right) $$
$$ H\left(\vert \psi_1\rangle+ \vert \psi_2\rangle\right) =\left(H \vert \psi_1\rangle\right)+\left(H \vert \psi_2\rangle\right) $$
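For instance, if $\vert \psi_1\rangle$ and $\vert \psi_2\rangle$ happen to be energy eigenstates, with $H\vert \psi_1\rangle=E_1\vert \psi_1\rangle$ and $H\vert \psi_2\rangle=E_2\vert \psi_2\rangle$, then linearity is what lets you act on a superposition term by term: $$ H\left(\alpha_1 \vert \psi_1\rangle+ \alpha_2 \vert \psi_2\rangle\right) =\alpha_1 E_1 \vert \psi_1\rangle+\alpha_2 E_2 \vert \psi_2\rangle $$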
Example 4: Homogeneous Linear Differential Equations.
You have often used the fact that the derivative operator acting on functions is linear: $$ \frac{d}{dx} \left(\alpha f\right)= \alpha \left(\frac{d}{dx} f\right) $$ $$ \frac{d}{dx}\left(f+g\right)=\left(\frac{d}{dx}f\right) + \left(\frac{d}{dx}g\right) $$ For example, how do you calculate $\frac{d}{dx} \left(3x^2 + \cos{x}\right)$?
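Using both properties, you differentiate each term separately and pull the constant $3$ out front: $$ \frac{d}{dx} \left(3x^2 + \cos{x}\right) = 3\left(\frac{d}{dx}x^2\right) + \frac{d}{dx}\cos{x} = 6x - \sin{x} $$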
By a straightforward extension, the differential operator $\cal{L}$ defined by $$ \cal{L}\equiv\frac{d^n}{dx^n}+a_{n-1}(x)\frac{d^{n-1}}{dx^{n-1}}+\dots+a_0(x) $$ is indeed linear. The important feature here is that all of the derivatives appear to the first power and are not inside of any other special functions. Most differential operators in physics ARE linear, so this should look very familiar. The following strange-looking differential operators are NOT linear: $$\left(\frac{d}{dx} f\right)^2,$$ $$\sin\left(\frac{d}{dx} f\right).$$
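For example, squaring the derivative breaks the first (scaling) property: for a generic constant $\alpha$, $$ \left(\frac{d}{dx}\left(\alpha f\right)\right)^2 = \alpha^2\left(\frac{d}{dx} f\right)^2 \ne \alpha \left(\frac{d}{dx} f\right)^2. $$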
Example 5: Inhomogeneous Linear Differential Equations.
As with Example 1 above, where the line through the origin was linear but the line not through the origin was not, if you take a homogeneous linear differential operator $\cal{L}$ as in example 4 above and add to it an inhomogeneous term $b(x)$, the resulting operator, which takes $y$ to $\cal{L}y - b(x)$, is NOT linear. If $y_p$ and $y_q$ are both solutions of $$\cal{L} y=b(x),$$ then $$\cal{L} \left(y_p+y_q\right)=2b(x)\ne b(x).$$ You can NOT add two solutions of an inhomogeneous differential equation and get another solution. You CAN, however, add ANY solution of the homogeneous equation to a solution of the inhomogeneous equation to get another solution of the inhomogeneous equation, as shown below.
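To see why, use the linearity of $\cal{L}$ itself: if $y_p$ solves the inhomogeneous equation, $\cal{L} y_p=b(x)$, and $y_h$ solves the homogeneous equation, $\cal{L} y_h=0$, then $$\cal{L}\left(y_p+y_h\right)=\cal{L} y_p + \cal{L} y_h = b(x)+0=b(x).$$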
Theorems
Now we're ready for the theorems. The first two theorems give the general form of the solution for homogeneous and inhomogeneous linear differential equations and describe the free parameters that appear in each solution. The third theorem describes what initial conditions are needed to remove this freedom and specify a unique solution.
Linear Homogeneous ODEs: Form of General Solution
An $n^{th}$ order linear homogeneous differential equation $\cal{L}(y)=0$ always has $n$ linearly independent solutions, $\{y_1,\dots ,y_n\}$. The general solution $y_h$ is $$ y_{h}=C_1 y_1+C_2 y_2 + \dots +C_n y_n $$ where $C_1, C_2, \dots , C_n$ are arbitrary constants.
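For example, the second order equation $y''+y=0$ has the two linearly independent solutions $y_1=\cos{x}$ and $y_2=\sin{x}$, so its general solution is $$ y_h=C_1 \cos{x} + C_2 \sin{x}. $$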
Linear Inhomogeneous ODEs: Form of General Solution
The general solution of the $n^{th}$ order linear inhomogeneous differential equation $\cal{L}(y)=b(x)$ is $$ y=y_p+y_h $$ where $y_{h}$ is the general solution of the homogeneous equation $\cal{L}(y)=0$ and $y_{p}$ is any particular solution of the inhomogeneous equation. Recall that the general solution of the homogeneous equation (above) has $n$ independent parameters.
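Continuing the example above, the inhomogeneous equation $y''+y=x$ has the particular solution $y_p=x$ (check: $y_p''+y_p=0+x=x$), so its general solution is $$ y=x+C_1 \cos{x} + C_2 \sin{x}. $$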
Linear ODEs: What Initial Conditions Make the Solution Unique
Consider the $n^{th}$ order, linear, ordinary differential equation: \begin{equation} y^{(n)}+a_{n-1}(x)y^{(n-1)}+\dots+a_0(x)y=b(x) \end{equation} together with the initial conditions: \begin{align} y(x_0)&= c_0\\ y'(x_0)&= c_1\\ &\vdots\\ y^{(n-1)}(x_0)&=c_{n-1} \end{align} If $b(x)$ and all of the coefficients $a_j(x)$ are continuous on some interval $I$ containing $x_0$, then the initial value problem has a unique solution throughout $I$.
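For example, the initial value problem $y''+y=0$ with $y(0)=1$ and $y'(0)=0$ has the unique solution $y=\cos{x}$: applying the initial conditions to the general solution $C_1 \cos{x} + C_2 \sin{x}$ forces $C_1=1$ and $C_2=0$.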