The Geometry of Linear Algebra
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/
book:content:adjoint
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/adjoint?rev=1505679840
The Hermitian adjoint of a matrix is the same as its transpose, except that in addition to interchanging rows and columns you also complex conjugate every element. If all the elements of a matrix are real, its Hermitian adjoint and transpose are the same. In terms of components, $$\left(A_{ij}\right)^\dagger=A_{ji}^*.$$
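As a quick numerical check, here is a minimal Python/NumPy sketch (the matrix $M$ below is an arbitrary example, not one from the text):

import numpy as np

# An arbitrary complex matrix for illustration.
M = np.array([[1 + 2j, 3],
              [4j, 5 - 1j]])

# Hermitian adjoint: transpose, then complex conjugate every element.
M_dagger = M.conj().T
print(M_dagger)

# For a real matrix, the adjoint reduces to the ordinary transpose.
R = np.array([[1., 2.], [3., 4.]])
print(np.array_equal(R.conj().T, R.T))   # True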
book:content:argand
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/argand?rev=1505671804
A complex number $z$ is an ordered pair of real numbers $x$ and $y$ which are distinguished from each other by adjoining the symbol $i$ to the second number: \begin{equation} z=x+iy \label{defcomplex} \end{equation} The first number, $x$, is called the real part of the complex number $z$, and the second number, $y$, is called the imaginary part of $z$. Notice that the imaginary part of $z$ is a REAL number.

book:content:commute
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/commute?rev=1492203188
Suppose two operators $M$ and $N$ commute, $[M,N]=0$. We will show that if $M$ has an eigenvector $\vert v\rangle$ with non-degenerate eigenvalue $\lambda_v$, then $\vert v\rangle$ is also an eigenvector of $N$. \begin{eqnarray*} M\vert v\rangle &=& \lambda_v\vert v\rangle\\ NM\vert v\rangle &=& MN\vert v\rangle=\lambda_vN\vert v\rangle \end{eqnarray*} The last equality shows that $N\vert v\rangle$ is also an eigenvector of $M$ with the same non-degenerate eigenvalue $\lambda_v$. But if this…

book:content:complete
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/complete?rev=1487460720
Given a vector and an orthonormal basis, it is easy to determine the components of the vector in the given basis. For example, if \begin{equation} \FF = F_x \,\xhat + F_y \,\yhat + F_z \,\zhat \end{equation} then of course \begin{equation} F_x = \FF\cdot\xhat . \end{equation} Put differently, \begin{equation} \FF = (\FF\cdot\xhat)\,\xhat + (\FF\cdot\yhat)\,\yhat + (\FF\cdot\zhat)\,\zhat . \end{equation} All we need to make this idea work is an orthonormal basis, that is, a set of mutually ortho…
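Here is a minimal NumPy sketch of the same idea, assuming the standard basis and an arbitrary vector $F$:

import numpy as np

F = np.array([3.0, -1.0, 2.0])   # an arbitrary vector
basis = np.eye(3)                # rows play the role of xhat, yhat, zhat

# Each component is F dotted with a basis vector; summing
# (F . e) e over the basis reconstructs F.
components = [float(F @ e) for e in basis]
F_rebuilt = sum(c * e for c, e in zip(components, basis))

print(components)                  # [3.0, -1.0, 2.0]
print(np.allclose(F, F_rebuilt))   # True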
book:content:conjugate
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/conjugate?rev=1505678353
The complex conjugate $z^*$ of a complex number $z=x+iy$ is found by replacing every $i$ by $-i$. Therefore $z^*=x-iy$. (A common alternate notation for $z^*$ is $\bar{z}$.) Geometrically, you should be able to see that the complex conjugate of ANY complex number is found by reflecting in the real axis. FIXME Add a figure showing complex conjugates and the distance of z and zbar from the origin.

book:content:consthomo
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/consthomo?rev=1529809560
Form of the equation
Consider an $n$th order linear ODE of the form \begin{equation} \frac{d^ny}{dx^n} + a_{n-1} \frac{d^{n-1}y}{dx^{n-1}} + ... + a_0 y = 0 \label{linearconsthomo} \end{equation} where the coefficients $a_i$ are constant. This very special case of the general $n$th order linear ODE, for which all of the $a_i$'s are constant, comes up in physics incredibly often. Especially important is the case of small (damped, for $b\ne 0$) oscillations, such as for a pendulum, which are d…

book:content:constinhomo
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/constinhomo?rev=1529809500
Form of the equation
An $n$th order linear differential equation with constant coefficients is inhomogeneous if it has a nonzero ``source'' or ``forcing function,'' i.e. if it has a term that does NOT involve the unknown function. We will call this source $b(x)$. The form of these equations is: \begin{align} \frac{d^ny}{dx^n} + a_{n-1} \frac{d^{n-1}y}{dx^{n-1}} + ... + a_0 y &= b(x)\\ \cal{L} y&=b(x) \label{inhomo} \end{align} In the second form for these equations, we have rewritten all th…

book:content:defs
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/defs?rev=1529806740
Definitions
\begin{enumerate}\item A differential equation is an equation involving an unknown function and its derivatives.
\item A differential equation is an ordinary differential equation if the unknown function depends on only one independent variable; otherwise it is a partial differential equation.

book:content:degeneracy
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/degeneracy?rev=1506111300
It is not always the case that an $n\times n$ matrix has $n$ distinct eigenvalues. For example, consider the matrix \begin{equation} B = \begin{pmatrix} 3 & 0 & 0\\ 0 & 3 & 0\\ 0 & 0 & 5\\ \end{pmatrix}, \end{equation} whose eigenvectors are again clearly the standard basis. But what are the eigenvalues of $B$? Again, the answer is obvious: $3$ and $5$. In such cases, the eigenvalue $3$ is a degenerate eigenvalue of $B$, since there are two independent eigenvectors of $B$ with eigenvalue $3…

book:content:det
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/det?rev=1505680920
The determinant of a matrix is somewhat complicated in general, so you may want to check one of the reference books. The $2\times2$ and $3\times3$ cases can be memorized using the examples below.
The determinant of a $2\times2$ matrix is given by $$\det\left(\begin{array}{cc} a&b\\ c&d\\ \end{array} \right) = ad-bc.$$

book:content:diag
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/diag?rev=1506110460
A matrix whose only nonzero entries lie on the main diagonal is called a diagonal matrix. The simplest example of a diagonal matrix is the identity matrix \begin{equation} I = \begin{pmatrix} 1 & 0 &...& 0\\ 0 & 1 &...& 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 &...& 1\\ \end{pmatrix} . \end{equation}

book:content:division
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/division?rev=1505678193
We can use the concept of complex conjugate to give a strategy for dividing two complex numbers, $z_1 = x_1 + i y_1$ and $z_2 = x_2 + i y_2$. The trick is to multiply by the number 1, in a special form that simplifies the denominator to be a real number and turns division into multiplication.
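In symbols, the special form of 1 is $z_2^*/z_2^*$, which makes the denominator the real number $|z_2|^2$. A short Python sketch (the values of $z_1$ and $z_2$ are arbitrary):

import cmath

z1, z2 = 3 + 4j, 1 - 2j

# Multiply top and bottom by the conjugate of the denominator;
# the denominator z2 * conj(z2) = |z2|^2 is then real.
denom = (z2 * z2.conjugate()).real
quotient = z1 * z2.conjugate() / denom

print(quotient)                           # (-1+2j)
print(cmath.isclose(quotient, z1 / z2))   # True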
book:content:eigenhermitian
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/eigenhermitian?rev=1491500760
The eigenvalues and eigenvectors of Hermitian matrices have some special properties. First of all, the eigenvalues must be real! To see why this relationship holds, start with the eigenvector equation \begin{equation} M |v\rangle = \lambda |v\rangle \label{eigen} \end{equation} and multiply on the left by $\langle v|$ (that is, by $v^\dagger$): \begin{equation} \langle v | M | v \rangle = \langle v | \lambda | v \rangle = \lambda \langle v | v\rangle . \label{vleft} \end{equation} But we ca…

book:content:eigennorm
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/eigennorm?rev=1529806920
From the eigenvalue/eigenvector equation: \begin{equation} A \left|v\right> = \lambda \left|v\right> \end{equation} it is straightforward to show that if $\vert v\rangle$ is an eigenvector of $A$, then any nonzero multiple $N\vert v\rangle$ of $\vert v\rangle$ is also an eigenvector, since the (real or complex) number $N$ can pull through to the left on both sides of the equation.

book:content:eigenunitary
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/eigenunitary?rev=1491454980
The eigenvalues and eigenvectors of unitary matrices have some special properties. If $U$ is unitary, then $UU^\dagger=I$. Thus, if \begin{equation} U |v\rangle = \lambda |v\rangle \label{eleft} \end{equation} then also \begin{equation} \langle v| U^\dagger = \langle v| \lambda^* . \label{eright} \end{equation} Combining~(\ref{eleft}) and~(\ref{eright}) leads to…

book:content:eigenvalue
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/eigenvalue?rev=1506112680
In order to find the eigenvalues of a square matrix $A$, we must find the values of $\lambda$ such that the equation \begin{equation} A \left|v\right> = \lambda \left|v\right> \end{equation} admits nonzero solutions $\left|v\right>$. (The solutions $\left|v\right>$ are eigenvectors of $A$, as discussed in the next section.) Rearranging terms, $\left|v\right>$ must satisfy \begin{equation} (\lambda I-A)\left|v\right> = 0 \label{eeq} \end{equation} where $I$ denotes the identity matrix (of the same size…

book:content:eigenvector
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/eigenvector?rev=1506113460
Having found the eigenvalues of the example matrix $A=\begin{pmatrix}1&2\\9&4\\\end{pmatrix}$ in the last section to be $7$ and $-2$, we can now ask what the corresponding eigenvectors are. We must therefore solve the equation \begin{equation} A \left|v\right> = \lambda \left|v\right> \end{equation} in the two cases $\lambda=7$ and $\lambda=-2$. In the first case, we have \begin{equation} \begin{pmatrix}1&2\\9&4\\\end{pmatrix} \begin{pmatrix}x\\ y\\\end{pmatrix} = 7\begin{pmatrix}x\\ y\\\end…
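The computation can be checked with NumPy (a sketch; note that numpy.linalg.eig normalizes its eigenvectors, so they may differ from hand-computed ones by an overall multiple):

import numpy as np

A = np.array([[1., 2.],
              [9., 4.]])

# The eigenvalues should be 7 and -2; the eigenvectors are the columns of V.
evals, V = np.linalg.eig(A)
print(np.sort(evals))                    # [-2.  7.]

# Verify A|v> = lambda |v> for each eigenpair.
for lam, v in zip(evals, V.T):
    print(np.allclose(A @ v, lam * v))   # True, True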
book:content:euler
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/euler?rev=1505572385
A convenient notation for complex numbers involves complex exponentials. It is based on Euler's formula: \begin{equation} e^{i\theta}=\cos\theta + i\sin\theta \label{Euler} \end{equation}
Euler's formula can be “proved” in two ways: \begin{enumerate}\item Expand the left-hand and right-hand sides of Euler's equation (\ref{Euler}) in terms of known power series expansions. Compare equal powers. \item Show that both the left-hand and right-hand sides of Euler's equation (\ref{Euler}) are s…
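The first proof can be mimicked numerically by truncating the power series (a Python sketch; the angle and the 20-term cutoff are arbitrary choices):

import cmath
from math import cos, sin, factorial

theta = 0.7                           # arbitrary test angle

# Partial sum of the power series for e^{i theta}.
series = sum((1j * theta) ** n / factorial(n) for n in range(20))

print(series)                         # approximately 0.7648 + 0.6442i
print(cos(theta) + 1j * sin(theta))   # the right-hand side of Euler's formula
print(cmath.isclose(series, cmath.exp(1j * theta)))   # True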
book:content:exact
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/exact?rev=1487788173
See Boas 6.8 and 8.4
Note: Any first order linear ODE can be solved on some interval. You can always multiply the differential form of the differential equation by an appropriate function of the independent and dependent variables so that the equation becomes exact. Then you can solve the differential equation using methods for exact equations. Boas 8.3 has a nice description that not only shows you how to find the integrating factor, but also shows you how to find the solution of the diffe…

book:content:expform
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/expform?rev=1505698203
If we use polar coordinates \begin{eqnarray} x&=&r\cos\theta\\ y&=&r\sin\theta \end{eqnarray} to describe the complex number $z=x+iy$, we can factor out the $r$ and use Euler's formula to obtain the exponential form of the complex number $z$: \begin{eqnarray} z&=&x+iy\\ &=&r\cos\theta + i r\sin\theta\\ &=&r(\cos\theta +i\sin\theta)\\ &=&re^{i\theta} \end{eqnarray} Just as $r$ represents the distance of $z$ from the origin in the complex plane, $\theta$ represents the polar angle, measured in ra…
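Python's standard cmath module converts between the two forms directly (a sketch with an arbitrary $z$):

import cmath

z = 1 + 1j
r, theta = cmath.polar(z)        # r = |z|, theta in radians
print(r, theta)                  # 1.414..., 0.785... (that is, pi/4)

# Rebuild z = r e^{i theta} and check round-trip agreement.
print(cmath.isclose(r * cmath.exp(1j * theta), z))   # True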
book:content:find
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/find?rev=1487470860
We are now ready to find formulas for the Fourier coefficients $a_m$ and $b_m$. Using the idea outlined at the start of the previous section, the coefficient of each normalized basis element is just the ``dot product'' of that basis element with the original ``vector''. In other words, each term in the expansion takes the form ``($f$ dot $u$) $u$'', where $u$ is the normalized basis element, and ``dot'' now refers to an integral.

book:content:firstorder
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/firstorder?rev=1529809440
FIXME Add explanation
Notation for First Order ODEs
Standard Form: \begin{equation} \frac{dy}{dx}=f(x,y) \label{standard} \end{equation}
Differential Form: Write $$f(x,y)=-\frac{M(x,y)}{N(x,y)}$$ (There are many ways to do this. Choose a way that is helpful for the problem at hand.) Then equation (\ref{standard}) becomes $$M(x,y)\, dx + N(x,y)\, dy =0$$

book:content:fourierex
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/fourierex?rev=1487523960
Let's consider an example. Suppose $f(x)$ describes a square wave, so that \begin{equation} f(x) = \begin{cases} C & (0\le x<\frac{L}{2}) \\ 0 & (\frac{L}{2}<x\le L) \end{cases} \end{equation} (the value of $f$ at $x=L/2$ doesn't matter). According to the results of the previous sections, we have \begin{equation} f = \frac12 a_0 + \sum_{m=1}^\infty a_m \cos\left(\frac{2\pi m x}{L}\right) + \sum_{m=1}^\infty b_m \sin\left(\frac{2\pi m x}{L}\right) \end{equation} where \begin{align} a_0 &= \fra…
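Here is a numerical sketch of these coefficient integrals using scipy (assuming the standard formulas $a_m = \frac{2}{L}\int_0^L f(x)\cos(2\pi mx/L)\,dx$ and similarly for $b_m$; the values of $C$, $L$, and the cutoff at $m=5$ are arbitrary):

import numpy as np
from scipy.integrate import quad

C, L = 1.0, 2.0                        # arbitrary amplitude and period

def f(x):                              # the square wave on [0, L]
    return C if x < L / 2 else 0.0

def a(m):                              # cosine coefficients
    return (2 / L) * quad(lambda x: f(x) * np.cos(2 * np.pi * m * x / L),
                          0, L, points=[L / 2])[0]

def b(m):                              # sine coefficients
    return (2 / L) * quad(lambda x: f(x) * np.sin(2 * np.pi * m * x / L),
                          0, L, points=[L / 2])[0]

print([round(a(m), 4) for m in range(6)])      # only a_0 = C survives
print([round(b(m), 4) for m in range(1, 6)])   # odd m give 2C/(m pi), even m vanish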
book:content:fouriersym
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/fouriersym?rev=1487788442
If the function that you are trying to find a Fourier series representation for has a particular symmetry, e.g. if it is symmetric or antisymmetric around the center of the interval on which it is defined, then only those basis functions that have the same symmetry will have nonzero coefficients.

book:content:ftransex
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/ftransex?rev=1282259160
book:content:ftransform
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/ftransform?rev=1491864060
Consider a (square integrable) function $f(x)$; its Fourier transform is defined by:
Definition
\begin{equation} \tilde{f} (k)= F(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\, e^{-ikx}\, dx \end{equation}
The inverse of the Fourier transform is given by: \begin{equation} f(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \tilde{f}(k)\, e^{ikx}\, dk \end{equation}

book:content:ftranspacket
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/ftranspacket?rev=1282259160
book:content:ftransuncertainty
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/ftransuncertainty?rev=1282259160
book:content:gibbs
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/gibbs?rev=1487643252
Generally, it is possible to approximate a reasonably smooth function quite well by keeping enough terms in the Fourier series. However, in the case of a function that has a finite number of discontinuities, the approximation of the function will always “overshoot” the discontinuity. This overshoot phenomenon gets sharper and sharper as the number of terms in the approximation is increased: the overshoot is squeezed into a smaller and smaller domain, but its height does not shrink (it approaches roughly 9% of the size of the jump).

book:content:harmonic
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/harmonic?rev=1515347362
What happens if you multiply two different trig functions? For instance, consider the function $\sin(2\theta)\sin(3\theta)$, which is shown in Figure~1. It looks about as you might expect, with the overall structure of a trig function that ``wiggles''. What is the area under this graph, that is, what is its integral? Hard to tell, but there's about as much above the axis as below, so zero would be a plausible guess, which turns out to be correct. This idea underlies all of Fourier theory.
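The integral can be checked symbolically (a short sympy sketch, integrating over the full period $0\le\theta\le2\pi$):

import sympy as sp

theta = sp.symbols('theta')

# Area under sin(2 theta) sin(3 theta) over a full period: exactly zero.
print(sp.integrate(sp.sin(2 * theta) * sp.sin(3 * theta),
                   (theta, 0, 2 * sp.pi)))          # 0

# By contrast, two equal frequencies give a nonzero result.
print(sp.integrate(sp.sin(2 * theta) ** 2,
                   (theta, 0, 2 * sp.pi)))          # pi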
book:content:hermitian
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/hermitian?rev=1491671304
There are two uses of the word Hermitian: one describes a type of operation--taking the Hermitian adjoint (a verb); the other describes a type of operator--a Hermitian matrix or Hermitian operator (a noun).
For an $n\times m$ matrix $M$, the Hermitian adjoint (often denoted with a dagger, $\dagger$) is the conjugate transpose \begin{equation} M^\dagger=\left(M^*\right)^T \end{equation}

book:content:inverse
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/inverse?rev=1505675280
The matrix inverse of a square matrix $A$, denoted $A^{-1}$, is the matrix such that multiplying it by $A$, in either order, gives the identity matrix: $A^{-1}A=AA^{-1}=I$. (The identity matrix is the matrix with ones down the diagonal and zeroes everywhere else.)
For $2\times2$ matrices, if $$A=\left(\begin{array}{cc} a&b\\ c&d\\ \end{array} \right)$$ then $$A^{-1}={1\over\det(A)}\left(\begin{array}{cc} d&-b\\ -c&a\\ \end{array} \right),$$ provided $\det(A)=ad-bc\ne0$.

book:content:linear
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/linear?rev=1529809380
In this section, I will give, without proof, several important theorems about linear differential equations. But before I get to the theorems, you will need to understand what is meant by the word linear so that you can understand the content of these theorems. Before you read this section, make sure you know the definitions and notation in this section of the book.

book:content:logs
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/logs?rev=1492820099
How can we extend the logarithm function to complex numbers? We would like to retain the property that the logarithm of a product is the sum of the logarithms: \begin{equation} \ln(ab)=\ln a+\ln b \label{lnprod} \end{equation} Then, if we write the complex number $z$ in exponential form: \begin{equation} z=r\, e^{i(\theta+2\pi m)} \end{equation} and use the property (\ref{lnprod}), we find: \begin{eqnarray*} \ln z&=&\ln (r\, e^{i(\theta+2\pi m)})\\ &=&\ln r+ \ln (e^{i(\theta+2\pi m)})\\ &=&\ln …
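Numerically, cmath.log returns only one of these values (the principal branch); the others differ from it by $2\pi i m$. A sketch (the choice of $z$ and the branch indices are arbitrary):

import cmath

z = -1 + 1j
principal = cmath.log(z)        # ln r + i theta, with -pi < theta <= pi
print(principal)

# Any other branch adds 2 pi i m for integer m; each exponentiates back to z.
for m in (-1, 0, 1):
    w = principal + 2j * cmath.pi * m
    print(cmath.isclose(cmath.exp(w), z))   # True, True, True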
book:content:madd
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/madd?rev=1505669940
For matrix addition to be defined, both matrices must be of the same dimension, that is, both matrices must have the same number of rows and columns. Addition then proceeds by adding corresponding components, as in $$C_{ij}=A_{ij}+B_{ij} .$$
For example, if $$ A = \left(\begin{array}{cc} a&b\\ c&d\\ \end{array} \right) ,\qquad B = \left(\begin{array}{cc} e&f\\ g&h\\ \end{array} \right) ,$$ then $$A+B= \left(\begin{array}{cc} a&b\\ c&d\\ \end{array} \right) + \left(\begin{array}{cc} e&f\…

book:content:matrixdcomp
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/matrixdcomp?rev=1491498180
Given any normalized vector $|v\rangle$, that is, a vector satisfying \begin{equation} \langle v | v \rangle = 1 , \end{equation} we can construct a projection operator \begin{equation} P_v = |v \rangle \langle v| . \end{equation} The operator $P_v$ squares to itself, that is, \begin{equation} P_v^2 = P_v , \end{equation} and of course takes $|v\rangle$ to itself, that is, \begin{equation} P_v |v\rangle = |v\rangle ; \end{equation} $P_v$ projects any vector along $|v\rangle$.

book:content:matrixde
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/matrixde?rev=1492030110
The simplest non-trivial ODE is the first-order linear ODE with constant coefficients: \begin{equation} \frac{d}{dx} f(x)= a f(x) \end{equation} with solution: \begin{equation} f(x)=f(0)\, e^{ax} \end{equation}
We can generalize this equation to apply to solutions which are matrix exponentials, i.e.: \begin{equation} M(x)=e^{Ax}M(0) \end{equation} is a solution of: \begin{equation} \frac{d}{dx}\, M(x) = A\, M(x) \end{equation} where $A$ is a suitable constant matrix. (Show that if $A$ is anti-…

book:content:matrixex
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/matrixex?rev=1491673951
Any $2\times2$ Hermitian matrix, $M$, can be written in the form \begin{equation} M = \begin{pmatrix} t+z& x-iy\\ x+iy& t-z\\ \end{pmatrix} \end{equation} with $x,y,z,t\in\RR$. Looking at the matrix coefficients of these variables, we can write \begin{equation} M = t \,I + x \,\sigma_x + y \,\sigma_y + z \,\sigma_z \end{equation} thus defining the three matrices \begin{align} \sigma_x &= \begin{pmatrix} 0& 1\\ 1& 0\\ \end{pmatrix} ,\\ \sigma_y &= \begin{pmatrix} 0& -i\\ i& 0\\ \end{pmatrix} ,\\ \…
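Because $I$ and the $\sigma$'s are orthogonal under the trace inner product, each coefficient can be recovered as $\frac12\mathrm{tr}(\sigma M)$. A NumPy sketch (the coefficient values are arbitrary):

import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

t, x, y, z = 2.0, 0.5, -1.0, 3.0        # arbitrary real coefficients
M = t * I2 + x * sx + y * sy + z * sz   # build M from the decomposition

# Recover each coefficient via the trace: tr(sigma M) / 2.
for name, s in [('t', I2), ('x', sx), ('y', sy), ('z', sz)]:
    print(name, (np.trace(s @ M) / 2).real)   # 2.0, 0.5, -1.0, 3.0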
book:content:matrixexp
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/matrixexp?rev=1491521460
How do you exponentiate matrices?
Recall the power series \begin{equation} e^{i\theta} = 1 + i\theta - \frac{\theta^2}{2!} - i\frac{\theta^3}{3!} + ... \end{equation} which can famously be used to prove Euler's formula, namely \begin{equation} e^{i\theta} = \cos\theta + i\sin\theta . \end{equation} We can use this same strategy to exponentiate matrices, by using the corresponding power series. We therefore define \begin{equation} e^{iM\theta} = I + iM\theta - \frac{M^2\theta^2}{2!} - i\frac{…
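A sketch comparing the truncated matrix series against scipy's expm (using $M=\sigma_x$ and an arbitrary $\theta$; the 20-term cutoff is also arbitrary):

import numpy as np
from scipy.linalg import expm
from math import factorial

sigma_x = np.array([[0., 1.], [1., 0.]])
theta = 0.5

# Truncated power series for e^{i M theta}.
M = 1j * sigma_x * theta
series = sum(np.linalg.matrix_power(M, n) / factorial(n) for n in range(20))

print(np.allclose(series, expm(M)))    # True
# Since sigma_x^2 = I, the series collapses to cos(theta) I + i sin(theta) sigma_x.
print(np.allclose(series,
                  np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sigma_x))  # True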
book:content:mmult
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/mmult?rev=1505680860
Matrices can also be multiplied together, but this operation is somewhat complicated. Watch the progression in the examples below; basically, the elements of the row of the first matrix are multiplied by the corresponding elements of the column of the second matrix. Matrix multiplication can be written in terms of components as $$C_{ij}=\sum_k A_{ik}B_{kj}.$$

book:content:mnote
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/mnote?rev=1505675880
Column matrices play a special role in physics, where they are interpreted as vectors or states. To remind us of this special role, they have their own notation, introduced by Dirac, called ``bra-ket'' notation. In bra-ket notation, a column matrix can be written $$\left|v\right> := \left(\begin{array}{c} a\\ b\\ c\\ \end{array}\right).$$ The adjoint of this vector is denoted $$\left<v\right| := \left(\left|v\right>\right)^\dagger =\left(\begin{array}{ccc} a^*&b^*&c^*\\ \end{array}\ri…
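In code, a ket is a column matrix and the corresponding bra is its Hermitian adjoint (a NumPy sketch with arbitrary entries):

import numpy as np

# |v> as a 3x1 column matrix with arbitrary complex entries.
ket = np.array([[1 + 1j], [2j], [3.]])

# <v| is the Hermitian adjoint: a 1x3 row of complex conjugates.
bra = ket.conj().T
print(bra)                     # [[1.-1.j 0.-2.j 3.-0.j]]

# <v|v> is then the (real, nonnegative) squared length of the vector.
print((bra @ ket).item())      # (15+0j)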
book:content:pdeclass
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/pdeclass?rev=1529809980
Why do you want to classify solutions?
If we can classify a PDE that we are trying to solve, according to the scheme given below, it will help us extend qualitative knowledge that we have about the nature of solutions of similar PDEs to the current case. Most importantly, the types of initial conditions that are needed to ensure that our solution is unique vary according to the classification--see PDE Theorems. In physics situations, the classification is usually obvious: if there are two t…

book:content:pdeimportant
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/pdeimportant?rev=1529809920
On this page you will find a list of most of the important PDEs in physics with their names. Notice that some of the equations have no time dependence, some have a first order time derivative, and some have a second order time derivative. This difference is the foundation of an important classification scheme. Also notice that the spatial dependence always comes in the form of the Laplacian $\nabla^2$. This particular spatial dependence occurs because space is rotationally invariant.

book:content:pdethms
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/pdethms?rev=1529809680
The Main Idea
In physics situations, the classification and types of boundary conditions are typically straightforward: if there are two time derivatives, the equation is hyperbolic and we will need two initial conditions on the entire spatial region to make the solution unique; if there is only a single time derivative, the equation is parabolic and we will need only a single initial condition; if the equation has no time derivatives, the equation is elliptic and the solutions are qualitativ…

book:content:rect
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/rect?rev=1505671561
The form of the complex number in the previous section: \begin{equation} z=x+iy \label{defcomplex2} \end{equation} is called the rectangular form, referring to rectangular coordinates.
We will now extend the definitions of algebraic operations from the real numbers to the complex numbers. For two complex numbers $z_1 = x_1 + i y_1$ and $z_2 = x_2 + i y_2$, we define…

book:content:roots
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/roots?rev=1492462141
If a complex number $z$ is written in exponential form: $$z=re^{i\theta},$$ then the $n$th power of $z$ is: \begin{eqnarray*} z^n&=&r^n\, (e^{i\theta})^n\\ &=&r^n\, e^{in\theta} \end{eqnarray*} and we see that the distance of the point $z$ from the origin in the complex plane has been raised to the $n$th power, but the angle has been multiplied by $n$. Similarly, an $n$th root of $z$ is: \begin{eqnarray*} z^{\frac{1}{n}}&=&r^{\frac{1}{n}}e^{i\frac{\theta}{n}}. \end{eqnarray*} For example, a sq…
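Since $\theta$ is only defined up to multiples of $2\pi$, a complex number actually has $n$ distinct $n$th roots, $r^{1/n}e^{i(\theta+2\pi k)/n}$ for $k=0,\dots,n-1$. A Python sketch for the cube roots of $8i$ (an arbitrary example):

import cmath

z = 8j
r, theta = cmath.polar(z)

# The three cube roots: r^(1/3) e^{i(theta + 2 pi k)/3} for k = 0, 1, 2.
roots = [r ** (1 / 3) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3)
         for k in range(3)]
for w in roots:
    print(w, cmath.isclose(w ** 3, z))    # each root cubes back to 8j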
book:content:separable
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/separable?rev=1487787877
See Boas 8.2

book:content:sepprocess
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/sepprocess?rev=1529809920
Separation of variables is a procedure which can turn a partial differential equation into a set of ordinary differential equations. The procedure only works in very special cases involving a high degree of symmetry. Remarkably, the procedure works for many important physics examples. Here, we will use the procedure on the wave equation.

book:content:seriesthms
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/seriesthms?rev=1493948151
In this section, we will briefly discuss theorems that state when a second order linear ODE has power series solutions.
First, write the ODE in the form: \begin{equation} y^{\prime\prime}+p(z) y^{\prime}+q(z) y=0 \label{diffeq} \end{equation} and look at the functions $p(z)$ and $q(z)$ as functions of the complex variable $z$. If $p(z)$ and $q(z)$ are analytic at a point $z=z_0$, then $z_0$ is said to be a regular point of the differential equation. (The word analytic is a technical term for …

book:content:smult
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/smult?rev=1505670180
A matrix can be multiplied by a scalar, in which case each element of the matrix is multiplied by the scalar. In components, $$C_{ij}=\lambda A_{ij}$$ where $\lambda$ is a scalar, that is, a complex number. For example, if $$A = \left(\begin{array}{cc} a&b\\ c&d\\ \end{array} \right),$$ then $$3A=3\cdot \left(\begin{array}{cc} a&b\\ c&d\\ \end{array} \right) = \left(\begin{array}{cc} 3a&3b\\ 3c&3d\\ \end{array} \right).$$

book:content:sturm
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/sturm?rev=1529809620
The Main Idea
When you use the separation of variables procedure on a PDE, you end up with one or more ODEs that are eigenvalue problems, i.e. they contain an unknown constant--the separation constant. These ODEs are called Sturm-Liouville equations. By solving the ODEs for particular boundary conditions, we find particular allowed values for the eigenvalues. Furthermore, the solutions of the ODEs for these special boundary conditions and eigenvalues form an orthogonal…

book:content:symop
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/symop?rev=1491874096
book:content:trace
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/trace?rev=1505672760
The trace of a matrix is just the sum of all of its diagonal elements. In terms of components, $$\mathrm{tr}(A)=\sum_i A_{ii}.$$ For example, if $$A=\left(\begin{array}{ccc} 1&2&3\\ 4&5&6\\ 7&8&9\\ \end{array}\right) $$ then $$\mathrm{tr}(A)=1+5+9=15.$$

book:content:transpose
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/transpose?rev=1505672160
The transpose of a matrix is obtained by interchanging rows and columns. In terms of components, $$\left(A_{ij}\right)^T=A_{ji}.$$ For example, $$A = \left(\begin{array}{cc} a&b\\ c&d\\ \end{array} \right) \Longrightarrow A^T= \left(\begin{array}{cc} a&c\\ b&d\\ \end{array} \right)$$ and $$B = \left(\begin{array}{ccc} a&b&c\\ d&e&f\\ g&h&i\\ \end{array} \right) \Longrightarrow B^T= \left(\begin{array}{ccc} a&d&g\\ b&e&h\\ c&f&i\\ \end{array} \right).$$

book:content:unitary
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/unitary?rev=1491597919
A complex $n\times n$ matrix $U$ is unitary if its conjugate transpose is equal to its inverse, that is, if \begin{equation} U^\dagger = U^{-1} , \end{equation} that is, if \begin{equation} U^\dagger U = I = UU^\dagger . \end{equation}
If $U$ is both unitary and real, then $U$ is an orthogonal matrix. The analogy goes even further: Working out the condition for unitarity, it is easy to see that the rows (and similarly the columns) of a unitary matrix $U$ form a complex orthonormal basis. Usi…
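Both statements are easy to test numerically (a sketch; the example $U$, a phase times a rotation, is an arbitrary unitary matrix):

import numpy as np

theta = 0.3
U = np.exp(0.5j) * np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])

Ud = U.conj().T

# Unitarity: U dagger is the inverse.
print(np.allclose(Ud @ U, np.eye(2)))   # True: the columns are orthonormal
print(np.allclose(U @ Ud, np.eye(2)))   # True: the rows are orthonormal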
book:content:vsdefs
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/vsdefs?rev=1529809200
Definition of a Normed Vector Space
A set of objects (vectors) $\{\vec{u}, \vec{v}, \vec{w}, \dots\}$ is said to form a linear vector space over the field of scalars $\{\lambda, \mu,\dots\}$ (e.g. real numbers or complex numbers) if:
\begin{enumerate}\item the set is closed under (vector) addition, and addition is commutative and associative; \item the set is closed under multiplication by a scalar, and scalar multiplication is associative and distributive; \item there exists a null vector $\vec{0}$; \item multiplication by the scalar…

book:content:wronskian
http://sites.science.oregonstate.edu/coursewikis/LinAlgBook/book/content/wronskian?rev=1529809140
Motivation and Analogy
We know from the theorem on $n$th order linear homogeneous differential equations $\cal{L} y=0$ that the general solution is a linear combination of $n$ linearly independent solutions $$y=C_1 y_1 + C_2 y_2 +\dots + C_n y_n$$ What does the word linearly independent mean and how do we find out if a set of particular solutions is linearly independent?
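One standard test, which gives this section its name, is the Wronskian: the determinant of the matrix whose first row contains the functions and whose later rows contain their successive derivatives; if the Wronskian is nonzero, the solutions are linearly independent. A sympy sketch for the pair $\{\cos x, \sin x\}$:

import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)

# Wronskian: det of [[y1, y2], [y1', y2']].
W = sp.Matrix([[y1, y2],
               [sp.diff(y1, x), sp.diff(y2, x)]]).det()

print(sp.simplify(W))   # 1, never zero, so cos x and sin x are independent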