Geometry of Differential Forms
http://sites.science.oregonstate.edu/coursewikis/GDF/
2020-01-26

book:content:acknowledge
The use of differential forms, and especially of orthonormal bases, as presented in this book, represents a radical change in my own thinking. The relativity community consists primarily of physicists, yet they mostly learned differential geometry as I did, from mathematicians, in a coordinate basis. This gap is reminiscent of the one between the vector calculus taught by mathematicians, exclusively in rectangular coordinates, and the vector calculus used by physicists, mostly in curvilinear coordinates…

book:content:bases

Let $M$ be an $n$-dimensional surface, with coordinates $(x^i)$. You can think of $M$ as being $\RR^n$, but we will also consider surfaces in higher-dimensional spaces, such as the 2-sphere in $\RR^3$. Then, as before, $\bigwedge^1(M)$ is the span of the 1-forms $\{dx^i\}$, $\bigwedge^0(M)$ is the space of functions on $M$, and $\bigwedge^p(M)$ is the space of $p$-forms; we will often write simply $\bigwedge^p$ for $\bigwedge^p(M)$.

book:content:bianchi

Since $\sigma^i$ is an (ordinary) differential form, we must have \begin{align} 0 = -d^2\sigma^i &= d\left( \omega^i{}_k\wedge\sigma^k \right) \nonumber\\ &= d\omega^i{}_k\wedge\sigma^k - \omega^i{}_k\wedge d\sigma^k \nonumber\\ &= d\omega^i{}_j\wedge\sigma^j + \omega^i{}_k\wedge\omega^k{}_j\wedge\sigma^j \nonumber\\ &= \left( d\omega^i{}_j + \omega^i{}_k\wedge\omega^k{}_j \right) \wedge\sigma^j \nonumber\\ &= \Omega^i{}_j \wedge \sigma^j \label{bianchiI} \end{align} which is the first Bianchi identity.

book:content:books

\begin{itemize}\item Harley Flanders, Differential Forms with Applications to the Physical Sciences, Academic Press, New York, 1963; Dover, New York, 1989. Excellent but somewhat dated introduction to differential forms; only covers the positive definite case.\end{itemize}

book:content:calcthms

We restate the results in the previous section, reversing the order. Stokes' Theorem for differential forms says that \begin{equation} \int_R d\alpha = \int_{\partial R} \alpha \end{equation} for any $p$-form $\alpha$ and any ($p+1$)-dimensional region $R$. All of the standard theorems in calculus relating integrals over regions and their boundaries are special cases of Stokes' Theorem.

book:content:commutators

It is a remarkable fact that the expression \begin{equation} [\vv,\ww](f) = \vv\bigl(\ww(f)\bigr) - \ww\bigl(\vv(f)\bigr) \end{equation} contains no second derivatives, and therefore defines a new vector field $[\vv,\ww]$, which is called the commutator of the vector fields $\vv$ and $\ww$.
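This cancellation of second derivatives can be probed numerically. The sketch below is not from the book; the vector fields $\vv=\partial/\partial x$, $\ww=x\,\partial/\partial y$ and the test function $f=y^2$ are hypothetical choices, for which the component formula gives $[\vv,\ww]=\partial/\partial y$, so $[\vv,\ww](f)=2y$.

```python
h = 1e-4

def v(x, y):   # components of the vector field v = d/dx
    return (1.0, 0.0)

def w(x, y):   # components of the vector field w = x d/dy
    return (0.0, x)

def D(field, f):
    """The function field(f), computed with central differences."""
    def df(x, y):
        a, b = field(x, y)
        return (a * (f(x + h, y) - f(x - h, y))
                + b * (f(x, y + h) - f(x, y - h))) / (2 * h)
    return df

f = lambda x, y: y**2

def lie(x, y):   # [v,w](f) = v(w(f)) - w(v(f))
    return D(v, D(w, f))(x, y) - D(w, D(v, f))(x, y)

# The second-derivative terms cancel, leaving [v,w](f) = 2y.
print(lie(2.0, 3.0))   # approximately 6
```

The nested finite differences would diverge if genuine second-derivative terms survived; instead the result matches the first-order field $\partial/\partial y$ acting on $f$.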

book:content:components

Just as the components of the connection 1-forms are the Christoffel symbols, related by \begin{equation} \omega^i{}_j = \Gamma^i{}_{jk}\,\sigma^k \end{equation} the components of curvature also have names. We write \begin{equation} \Omega^i{}_j = \frac12 R^i{}_{jkl}\,\sigma^k\wedge\sigma^l \label{Riem} \end{equation} where the $R^i{}_{jkl}$ are the components of the Riemann curvature tensor. However, Equation~(\ref{Riem}) alone only determines the combinations $R^i{}_{jkl}-R^i{}_{jlk}$, so …

book:content:connections

Let's turn this problem on its head. We don't yet know how $d$ acts on our basis $\{\ee_i\}$, but let's give this action a name. We must have \begin{equation} d\ee_j = \omega^i{}_j\,\ee_i \end{equation} for some 1-forms $\omega^i{}_j$, which are called connection 1-forms. In fact, any choice of these 1-forms determines an exterior derivative operator satisfying the two conditions in the previous section. As we will see in the next section, further conditions must be imposed in order to determine…

book:content:cov

Consider a change of variables in two dimensions \begin{equation} x=x(u,v) \qquad y=y(u,v) \end{equation} This is just a special case of a parametric surface!
To find the surface element of such surfaces, first foliate the surface with curves along which either $u$ or $v$ is constant. We have \begin{equation} d\rr = \Partial{\rr}{u}\,du + \Partial{\rr}{v}\,dv \end{equation} The surface element can now be obtained as \begin{equation} d\AA = d\rr_1 \times d\rr_2 = \left( \Partial{\rr}{u} \times \Partial{\rr}{v} \right) du\,dv \end{equation}

book:content:cross

Consider $\bigwedge^1(\RR^3)$. Since all 3-dimensional vector spaces are isomorphic, we can identify the 1-form \begin{equation} v = v_x\,dx + v_y\,dy + v_z\,dz \end{equation} with the ordinary vector (field) \begin{equation} \vv = v_x\,\xhat + v_y\,\yhat + v_z\,\zhat \end{equation} and similarly for \begin{equation} w = w_x\,dx + w_y\,dy + w_z\,dz \end{equation} In other words, we identify the basis vectors as follows: \begin{align} \xhat &\longleftrightarrow dx \\ \yhat &\longleftrightarrow dy \\ \zhat &\longleftrightarrow dz \end{align}
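Under this identification, the wedge product of two 1-forms reproduces the cross product componentwise. A minimal sketch (not from the book, with hypothetical component tuples) representing a 1-form by its components on $(dx,dy,dz)$ and a 2-form by its components on $(dy\wedge dz,\,dz\wedge dx,\,dx\wedge dy)$:

```python
def wedge11(v, w):
    """Wedge product of two 1-forms in R^3, returned as a 2-form's
    components on (dy^dz, dz^dx, dx^dy)."""
    vx, vy, vz = v
    wx, wy, wz = w
    return (vy * wz - vz * wy,   # coefficient of dy^dz
            vz * wx - vx * wz,   # coefficient of dz^dx
            vx * wy - vy * wx)   # coefficient of dx^dy

v = (1.0, 2.0, 3.0)
w = (4.0, 5.0, 6.0)
print(wedge11(v, w))   # (-3.0, 6.0, -3.0), the components of v x w
```

The three coefficients are exactly the components of $\vv\times\ww$, once $dy\wedge dz$ is identified with $\xhat$, and cyclically.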

book:content:curvature

We previously computed \begin{align} d^2\rr &= d\left(\ee_j\,\sigma^j \right) \nonumber\\ &= d\ee_j\wedge\sigma^j + \ee_j\,d\sigma^j \nonumber\\ &= \ee_i \left(\omega^i{}_j\wedge\sigma^j + d\sigma^i \right) \end{align} The vector components of this equation are called the first structure equation, which takes the form \begin{equation} \Theta^i = \omega^i{}_j\wedge\sigma^j + d\sigma^i \end{equation} where the $\Theta^i$ are the torsion 2-forms. Since we are assuming that \begin{equation} d^2\rr = 0 \end{equation} the torsion must vanish…

book:content:curvature3d

In three Euclidean dimensions, the curvature 2-forms vanish. Thus, \begin{equation} 0 = \Omega^i{}_j = d\omega^i{}_j + \omega^i{}_k\wedge\omega^k{}_j \end{equation} and in particular \begin{equation} 0 = \Omega^1{}_2 = d\omega^1{}_2 + \omega^1{}_k\wedge\omega^k{}_2 = d\omega^1{}_2 + \omega^1{}_3\wedge\omega^3{}_2 \label{3dcurv} \end{equation} If we were instead to look only at the two-dimensional surface, we would compute \begin{equation} \tilde\Omega^1{}_2 = d\omega^1{}_2 + \omega^1{}_k\wedge\omega^k{}_2 = d\omega^1{}_2 \end{equation} where $k$ now takes only the values $1$ and $2$, so that the last term vanishes…

book:content:curves

How much does a curve bend? Take the circle which best fits the curve at a given point, and use the radius of this circle to specify the curvature of the curve at that point. Since large circles have less curvature than small circles, define the curvature to be \begin{equation} \kappa = \frac{1}{r} \end{equation} where $r$ is the radius of the best-fit circle. The unit tangent vector to a curve is given by \begin{equation} \TT = \frac{d\rr}{|d\rr|} = \frac{d\rr}{ds} \end{equation} where, as usual, $s$ is the arclength…

book:content:differentials

Differentiation is about small changes. We interpret the differential $df$ as the ``small change in $f$'', so that the basic differentiation operation becomes \begin{equation} f \longmapsto df \end{equation} which we describe as ``zapping $f$ with $d$''.

book:content:divgradcurl

Working in Euclidean $\RR^3$, consider a 1-form \begin{equation} F = F_x \,dx + F_y \,dy + F_z \,dz \end{equation} What is $dF$? We have \begin{align} d(F_x\,dx) &= dF_x\wedge dx \nonumber\\ &= \left( \Partial{F_x}{x}\,dx+\Partial{F_x}{y}\,dy+\Partial{F_x}{z}\,dz\right) \wedge dx \nonumber\\ &= \Partial{F_x}{z}\,dz\wedge dx - \Partial{F_x}{y}\,dx\wedge dy \end{align} Adding up three such terms, we obtain \begin{align} dF &= \left( \Partial{F_z}{y} - \Partial{F_y}{z} \right) dy\wedge dz + \left( \Partial{F_x}{z} - \Partial{F_z}{x} \right) dz\wedge dx \nonumber\\ &\qquad + \left( \Partial{F_y}{x} - \Partial{F_x}{y} \right) dx\wedge dy \end{align}
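The coefficients of $dF$ are the components of $\grad\times\FF$, which can be checked numerically. This sketch is not from the book; the field components $F_x=yz$, $F_y=x^2$, $F_z=zy$ are a hypothetical test case with $\grad\times\FF = (z,\,y,\,2x-z)$.

```python
h = 1e-5

def partial(f, i, p):
    """Central-difference partial derivative of f with respect to
    coordinate i at the point p = (x, y, z)."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

Fx = lambda x, y, z: y * z
Fy = lambda x, y, z: x * x
Fz = lambda x, y, z: z * y

def dF(p):
    """Coefficients of dF on (dy^dz, dz^dx, dx^dy): the curl of F."""
    return (partial(Fz, 1, p) - partial(Fy, 2, p),
            partial(Fx, 2, p) - partial(Fz, 0, p),
            partial(Fy, 0, p) - partial(Fx, 1, p))

# Analytically, curl F = (z, y, 2x - z); at (1, 2, 3) this is (3, 2, -1).
print(dF((1.0, 2.0, 3.0)))
```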

book:content:dot

Consider now the wedge product of the 1-form \begin{equation} v = v_x\,dx + v_y\,dy + v_z\,dz \end{equation} and the 2-form \begin{equation} w = w_x\,dy\wedge dz + w_y\,dz\wedge dx + w_z\,dx\wedge dy \end{equation} The product $v\wedge w$ is a 3-form, and there is only one independent 3-form, so each term is a multiple of every other. Adding them up, we obtain \begin{equation} v\wedge w = (v_xw_x+v_yw_y+v_zw_z) \,dx\wedge dy\wedge dz \end{equation} which looks very much like the dot product between $\vv$ and $\ww$! But the…
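A companion sketch to the one for the cross product (not from the book; the component tuples are hypothetical): pairing a 1-form on $(dx,dy,dz)$ with a 2-form on $(dy\wedge dz,\,dz\wedge dx,\,dx\wedge dy)$. Each of the three surviving products reorders to $dx\wedge dy\wedge dz$ with positive sign, so the single coefficient of the 3-form is the dot product.

```python
def wedge12(v, w):
    """Coefficient of dx^dy^dz in the wedge product of the 1-form v
    with the 2-form w; e.g. v_x dx ^ w_x dy^dz = v_x w_x dx^dy^dz."""
    return v[0] * w[0] + v[1] * w[1] + v[2] * w[2]

print(wedge12((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))   # 32.0
```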

book:content:dotcross

We can now finally write down precise equivalents of the dot and cross products using differential forms. First of all, the definition of the Hodge dual tells us that the Hodge dual, $*$, and the metric, $g$, in fact contain the same information. Interpreting $g$ on 1-forms as the dot product, we have \begin{equation} \alpha \cdot \beta = (-1)^s {*}(\alpha\wedge{*}\beta) \end{equation} where we have used ${*}\omega=(-1)^s$. The dot product is thus defined in any dimension and signature, and…

book:content:dpolar

We begin this chapter with a motivating example. How do $\rhat$ and $\phat$ vary as you move from point to point? There are several ways to answer this question.
Figure 1: The polar basis vectors $\rhat$ and $\phat$ at three nearby points.
Figure 2: The change in $\rhat$ in the $\phi$ direction.
A geometric answer can be constructed by drawing $\rhat$ and $\phat$ at two nearby points, then comparing them. If the points are separated radially, it is clear that these basis vectors do not change…

book:content:dprop

So what properties should exterior differentiation of vector fields satisfy? Like all derivative operators, we must clearly demand linearity: \begin{equation} d(a\,\vv+\ww) = a\,d\vv+d\ww \end{equation} where $a$ is a constant, and a product rule: \begin{equation} d(\alpha\,\vv) = d\alpha\,\vv + (-1)^p \alpha\wedge d\vv \end{equation} where $\alpha$ is a $p$-form. An alternative version of the product rule is obtained by reversing the order of the factors: \begin{equation} d(\vv\,\alpha) = d\vv\wedge\alpha + \vv\,d\alpha \end{equation}

book:content:dvector

A vector field $\vv$ can be thought of as a vector-valued 0-form. As such, we should be able to take its exterior derivative and obtain a vector-valued 1-form $d\vv$.
The simplest examples occur when $\vv$ can be expanded in terms of a rectangular basis, e.g. \begin{equation} \FF = F^x \,\xhat + F^y \,\yhat \end{equation} in two dimensions. Surely, since $\xhat$ and $\yhat$ are constant vector fields, we should have \begin{equation} d\xhat = 0 = d\yhat \end{equation} so that \begin{equation} d\FF = dF^x\,\xhat + dF^y\,\yhat \end{equation}

book:content:euclid

Consider Euclidean 2-space, with line element \begin{equation} ds^2 = dx^2 + dy^2 \end{equation} and (ordered) orthonormal basis $\{dx,dy\}$. The orientation is \begin{equation} \omega = dx\wedge dy \end{equation} and it is again straightforward to compute the Hodge dual on a basis. In analogy with the previous example, we have \begin{equation} dx \wedge {*}dx = g(dx,dx)\, dx \wedge dy = dx \wedge dy \end{equation} from which it follows that \begin{equation} {*}dx = dy \end{equation} Similarly, \begin{equation} {*}dy = -dx \end{equation}

book:content:exterior

We would like to extend this notion of ``gradient'' to differential forms of higher rank. Any such form can be written as (a linear combination of terms like) \begin{equation} \alpha = f \,dx^I = f \,dx^{i_1} \wedge ... \wedge dx^{i_p} \end{equation} Just as taking the gradient of a function increases the rank by one, we would like to define $d\alpha$ to be a ($p+1$)-form, resulting in a map \begin{equation} d: \bigwedge\nolimits^p \longmapsto \bigwedge\nolimits^{p+1} \end{equation} The obvious p…

book:content:extprop

We now investigate the properties of exterior differentiation.
Consider $d^2f=d(df)$. We have \begin{align} d^2 f = d(df) &= d\left(\Partial{f}{x^i}\,dx^i\right) \nonumber\\ &= d\left(\Partial{f}{x^i}\right)\wedge dx^i \nonumber\\ &= \frac{\partial^2f}{\partial x^j\partial x^i} \, dx^j \wedge dx^i = 0 \end{align} since mixed partial derivatives are independent of the order of differentiation, and the wedge product is antisymmetric. More formally, interchanging the dummy indices $i$ and $j$…

book:content:extuniq

The properties of $d$ in fact determine it uniquely.
Theorem: There is a unique operator $d:\bigwedge^p\longmapsto\bigwedge^{p+1}$ satisfying the following properties: \begin{enumerate}\item $d(a\,\alpha+\beta) = a\,d\alpha+d\beta$ if $a=\hbox{constant}$; \item $d(\alpha\wedge\beta) = d\alpha\wedge\beta + (-1)^p \,\alpha\wedge d\beta$ if $\alpha$ is a $p$-form; \item $d^2\alpha = 0$; \item $df = \Partial{f}{x^i}\,dx^i$. \end{enumerate}

book:content:forms

We are now ready to provide a definition of differential forms. Consider $\RR^n$ with coordinates $\{x^i\}$, where $i=1,...,n$. (As we will see later, any surface $M$ in $\RR^n$ can be used instead.) Consider the differentials $\{dx^i\}$, and construct all linear combinations \begin{equation} V = \left\langle\{dx^i\}\right\rangle = \{a_i \, dx^i \} \end{equation} where we have introduced the Einstein summation convention, under which repeated indices must be summed over. If the coefficients…

book:content:formvector

The title says it all: We now consider differential forms which are also vector fields, which are called vector-valued differential forms. The standard example is $d\rr$ itself, which is both a 1-form and a vector field. More generally, a vector-valued $p$-form can be written as $\alpha^i \ee_i$, where each $\alpha^i$ is a $p$-form, and where $\{\ee_i\}$ is a vector basis (here chosen to be orthonormal).

book:content:gaussbonnet

Consider now an (oriented) compact surface $\Sigma\subset\RR^3$. Any such surface has a rectangular decomposition, that is, it can be covered by finitely many quadrilaterals, the sides of which are smooth curves, with the further assumption that any two such quadrilaterals overlap, if at all, either in a single common vertex or in a single common edge.

book:content:general

We now compute the Hodge dual on an orthonormal basis directly from the definition. Recall that \begin{equation} \alpha \wedge {*}\beta = g(\alpha,\beta)\,\omega \end{equation} Suppose $\sigma^I$ is a basis $p$-form in $\bigwedge^p$. By permuting the basis 1-forms $\sigma^i$, we can bring $\sigma^I$ to the form \begin{equation} \sigma^I = \sigma^1 \wedge ... \wedge \sigma^p \end{equation} Furthermore, we can assume without loss of generality that our permutation is even, and hence does not change…

book:content:geocurv

Figure 1: The tangent and normal vectors to a curve.
Consider a curve $C$ in a two-dimensional surface $\Sigma$. We will assume Euclidean signature. You can imagine that $\Sigma$ sits inside Euclidean $\RR^3$ if desired, but this is not necessary. Choose an orthonormal basis of vectors $\{\ee_1,\ee_2\}$ as usual. Then, as shown in Figure~1, the unit tangent vector $\TT$ to the curve must satisfy \begin{equation} \TT = \cos\alpha\,\ee_1 + \sin\alpha\,\ee_2 \end{equation} for some angle $\alpha$…

book:content:geodesics

When is a curve ``straight''?
Consider the velocity vector associated with a curve, given by \begin{equation} \vv = \frac{d\rr}{dt} \end{equation} so that \begin{equation} \vv\,dt = d\rr = \sigma^i \ee_i \end{equation} Thus, \begin{equation} \sigma^i = v^i\,dt \end{equation} Taking the exterior derivative of $\vv$, we obtain \begin{align} d\vv = d(v^i \ee_i) &= dv^i\,\ee_i + v^i\,d\ee_i \nonumber\\ &= (dv^j + \omega^j{}_i v^i) \ee_j \nonumber\\ &= (dv^j + \Gamma^j{}_{ik} v^i\,\sigma^k) \ee_j \end{align}

book:content:geodesics3d

We already have a definition of geodesics which lie in surfaces in $\RR^3$, namely curves whose geodesic curvature vanishes. Recall that \begin{equation} \kappa_g \,ds = d\TT\cdot\NN \end{equation} where $\TT$ is the unit tangent vector to the curve, and $\NN$ is a particular choice of normal vector to the curve. Thus, for $\kappa_g$ to vanish, we must have \begin{equation} \dot\TT \cdot \NN = 0 \end{equation} Since $\TT$ is a unit vector, we know that \begin{equation} \dot\TT \cdot \TT = 0 \end{equation}

book:content:geoex

The Plane
In rectangular coordinates, it is clear that $\dot\vv$ will be zero if and only if $\ddot{x}=0=\ddot{y}$; the geodesics are straight lines. Consider this same problem in polar coordinates, where \begin{equation} d\rr = dr\,\rhat + r\,d\phi\,\phat \end{equation} Then \begin{equation} \vv = \dot{r}\,\rhat + r\,\dot\phi\,\phat \end{equation} so that \begin{align} d\vv &= d\dot{r}\,\rhat + \dot{r}\,d\rhat + d(r\dot\phi)\,\phat + r\dot\phi\,d\phat \end{align}

book:content:geopolar

The geodesic equation in polar coordinates reduces to the coupled differential equations \begin{align} \dot\phi &= \frac{\ell}{r^2} \\ \dot{r}^2 &= 1 - \frac{\ell^2}{r^2} \end{align} where the constant $\ell$ is the angular momentum about the origin (per unit mass), and where we have used arclength (here labeled $\lambda$) as the parameter (so that the curve is traversed at unit speed). We now proceed to solve these equations.

book:content:geosol

Recall the geodesic equation in polar coordinates: \begin{align} \ddot{r}-r\dot\phi^2 &= 0 \\ r\ddot\phi+2\dot{r}\dot\phi &= 0 \end{align} Divide the second equation by $r\dot\phi$ to obtain \begin{equation} \frac{\ddot\phi}{\dot\phi} + \frac{2\dot{r}}{r} = 0 \end{equation} which can be integrated to yield \begin{equation} \dot\phi = \frac{A}{r^2} \label{geosolp} \end{equation} where $A$ is a constant. Inserting this into the first equation and multiplying by $\dot{r}$ now yields \begin{equation} \dot{r}\,\ddot{r} = \frac{A^2\,\dot{r}}{r^3} \end{equation}
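The conserved quantity $A=r^2\dot\phi$ can be checked by integrating the geodesic equations directly. A numerical sketch (not from the book; the initial conditions are arbitrary) using a standard RK4 step:

```python
def rhs(s):
    """Right-hand side of the polar geodesic equations as a first-order
    system in the state s = (r, phi, rdot, phidot)."""
    r, phi, rdot, phidot = s
    return (rdot, phidot, r * phidot**2, -2.0 * rdot * phidot / r)

def rk4_step(s, dt):
    def add(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = rhs(s)
    k2 = rhs(add(s, k1, dt / 2))
    k3 = rhs(add(s, k2, dt / 2))
    k4 = rhs(add(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (1.0, 0.0, 0.3, 0.5)          # arbitrary start: r, phi, rdot, phidot
A0 = s[0]**2 * s[3]               # initial angular momentum r^2 phidot
for _ in range(1000):
    s = rk4_step(s, 0.001)

print(abs(s[0]**2 * s[3] - A0))   # nearly 0: A is conserved
```

The speed $\dot{r}^2 + r^2\dot\phi^2$ is conserved as well, reflecting the arclength parametrization used above.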

book:content:geosphere

The geodesic equation on the sphere reduces to the coupled differential equations \begin{align} \dot\phi &= \frac{\ell}{r^2\sin^2\theta} \\ r^2 \dot\theta^2 &= 1 - \frac{\ell^2}{r^2\sin^2\theta} \end{align} where the constant $\ell$ is the angular momentum about the $z$-axis (per unit mass), and where we have used arclength (here labeled $\lambda$) as the parameter (so that the curve is traversed at unit speed). We now proceed to solve these equations.

book:content:geotri

What is the total curvature around a closed curve? Using the definition in the previous section, if we integrate the geodesic curvature around any closed curve we have \begin{equation} \oint \kappa_g\,ds = \oint d\alpha - \oint \omega^1{}_2 = 2\pi - \oint \omega^1{}_2 \end{equation} since $\TT$ must return to its original orientation. We can now use Stokes' Theorem on the last term, which tells us that \begin{equation} \oint \omega^1{}_2 = \int d\omega^1{}_2 \end{equation} where the integral on the right…

book:content:gradient

We now have two algebraic maps on differential forms, namely: \begin{align} \wedge:& \bigwedge\nolimits^p \times \bigwedge\nolimits^q \longmapsto \bigwedge\nolimits^{p+q} \\ *:& \bigwedge\nolimits^p \longmapsto \bigwedge\nolimits^{n-p} \end{align} In this chapter, we introduce a third such map, involving differentiation.

book:content:higher

We refer to the product of two 1-forms as a 2-form. The space of all 2-forms, denoted $\bigwedge^2(\RR^n)$ or simply $\bigwedge^2$, is therefore spanned by all products of basis 1-forms, that is \begin{equation} \bigwedge\nolimits^2 = \left\langle\{dx^i \wedge dx^j\}\right\rangle \end{equation} But this set is redundant; it is sufficient to assume $i<j$. Similarly, we can define $p$-forms as \begin{equation} \bigwedge\nolimits^p = \left\langle\{dx^{i_1} \wedge ... \wedge dx^{i_p}\}\right\rangle \end{equation}
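Keeping only increasing index sequences $i_1<...<i_p$, the basis $p$-forms can be enumerated directly, so the dimension of $\bigwedge^p(\RR^n)$ is the binomial coefficient $\binom{n}{p}$. A small sketch (not from the book), taking $n=4$ as an arbitrary example:

```python
from itertools import combinations
from math import comb

n = 4
# Each basis p-form corresponds to an increasing index tuple i_1 < ... < i_p.
dims = [len(list(combinations(range(1, n + 1), p))) for p in range(n + 1)]
print(dims)   # [1, 4, 6, 4, 1] -- the binomial coefficients C(4, p)
assert dims == [comb(n, p) for p in range(n + 1)]
```

Note the symmetry $\binom{n}{p}=\binom{n}{n-p}$, which foreshadows the Hodge dual's pairing of $p$-forms with ($n-p$)-forms.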

book:content:hodge

We have seen that, in $\RR^3$, the wedge product of two 1-forms looks very much like the cross product of vectors, except that it is a 2-form. And the wedge product of a 1-form and a 2-form looks very much like the dot product of vectors, except that it is a 3-form. There is clearly some sort of correspondence of the form \begin{align} dx &\longleftrightarrow dy\wedge dz \\ dy &\longleftrightarrow dz\wedge dx \\ dz &\longleftrightarrow dx\wedge dy \\ 1 &\longleftrightarrow dx\wedge dy\wedge dz \end{align}

book:content:hodge2

We really ought to verify that our implicit definition of the Hodge dual, namely \begin{equation} \alpha \wedge {*}\beta = g(\alpha,\beta)\,\omega \end{equation} is in fact well-defined. To do so, we rely on a standard result about linear maps.
Lemma: Given any linear map $f:V\longmapsto\RR$ on a vector space $V$ with inner product $g$, there is a unique element $\gamma\in V$ such that \begin{equation} f(\alpha) = g(\alpha,\gamma) \end{equation} for every $\alpha\in V$.

book:content:inner

We have argued that 1-forms are ``like'' vectors, and that in particular $dx$ is ``like'' $\ii$. Since $\{\ii,\jj,\kk\}$ is an orthonormal basis for vectors in $\RR^3$, we would like to regard $\{dx,dy,dz\}$ as an orthonormal basis for 1-forms.
We therefore introduce an inner product $g$ on $\bigwedge^1(\RR^3)$ by defining the rectangular basis $\{dx,dy,dz\}$ to be orthonormal under this product, that is \begin{align} g(dx,dx) = g(dy,dy) = g(dz,dz) &= 1\\ g(dx,dy) = g(dy,dz) = g(dz,dx) &= 0 \end{align}

book:content:integrands

Integration is about chopping a region into small pieces, then adding up some small quantity on each piece. Thus, integration is about adding up differentials, and the basic integration operation is \begin{equation} f = \int df \end{equation}
In single-variable calculus, such integrals take the form \begin{equation} W = \int F\,dx \end{equation} which might represent the work done by the force $F$ when moving an object in the $x$-direction. The integrand in this case is $F\,dx$, where $F$ is…

book:content:integrands2

We have seen that (in $\RR^3$) \begin{align} F &= \FF\cdot d\rr \\ {*}F &= \FF\cdot d\AA \end{align} when we integrate both sides. This restriction leads us to a slightly different interpretation of integrands than you may be used to. Integrands by themselves can be thought of as being defined everywhere; it is only after you decide where to integrate them that they are restricted to the domain of integration. If two integrands are equal after being integrated over any domain, we say that the…

book:content:lagrange

A Lagrangian density $\mathcal{L}$ is just an integrand, that is, an $n$-form. The corresponding action $\mathcal{S}$ is just the integral of the Lagrangian density, typically over the entire space. The goal is to choose the Lagrangian involving physical fields so that its extrema correspond to physically interesting conditions on those fields.

book:content:laplacian

The Laplacian of a function $f$ is defined by \begin{equation} \Delta f = \grad\cdot\grad f \end{equation} Rewriting this in terms of differential forms, we obtain \begin{equation} \Delta f = {*}d{*}df \end{equation}
As an example, consider polar coordinates. We have \begin{align} \Delta f &= {*}d{*}df \nonumber\\ &= {*}d{*} \left(\Partial{f}{r}\,dr+\Partial{f}{\phi}\,d\phi\right) \nonumber\\ &= {*}d{*} \left(\Partial{f}{r}\,dr + \frac{1}{r}\Partial{f}{\phi}\,r\,d\phi\right) \end{align}

book:content:levicivita

We now impose two additional conditions on the connection, then argue that these are enough to determine the connection uniquely.
First, we require that \begin{equation} d(\vv\cdot\ww) = d\vv\cdot\ww + \vv\cdot d\ww \end{equation} so that differentiation respects the dot product. Applying this condition to our orthonormal bases, we have \begin{equation} 0 = d(\ee_i\cdot\ee_j) = d\ee_i\cdot\ee_j + \ee_i\cdot d\ee_j = \omega_{ji} + \omega_{ij} \label{mcomp} \end{equation} A connection which satisfies…

book:content:linear

To demonstrate the power of differential forms, we consider some elementary applications. A map \begin{equation} A:V \longmapsto V \end{equation} is linear if it satisfies \begin{equation} A(f\alpha + \beta) = f(A\alpha) + A\beta \end{equation} A linear map on 1-forms can be extended to a linear map on $p$-forms, also called $A$ (or occasionally $\bigwedge^p A$), given by \begin{equation} \alpha^1 \wedge ... \wedge \alpha^p \longmapsto A\alpha^1 \wedge ... \wedge A\alpha^p \end{equation}

book:content:maxwell1

Maxwell's equations are a system of coupled differential equations for the electric field $\EE$ and the magnetic field $\BB$. In traditional language, they take the form \begin{align} \grad\cdot\EE &= 4\pi\rho \\ \grad\cdot\BB &= 0 \\ \grad\times\EE + \dot\BB &= 0 \\ \grad\times\BB - \dot\EE &= 4\pi\JJ \end{align} where $\rho$ is the charge density, $\JJ$ is the current density, and dots denote time derivatives. Taking the divergence of the last equation, and using the first, leads to the continuity equation…

book:content:maxwell2

It is straightforward to translate Maxwell's equations into the language of differential forms. We obtain \begin{align} {*}d{*}E &= 4\pi\rho \\ {*}d{*}B &= 0 \\ {*}dE + \dot B &= 0 \\ {*}dB - \dot E &= 4\pi J \end{align} but it is customary to take the Hodge dual of both sides, resulting in \begin{align} d{*}E &= 4\pi\,{*}\rho \\ d{*}B &= 0 \\ dE + {*}\dot B &= 0 \\ dB - {*}\dot E &= 4\pi \,{*}J \end{align} Differentiating the last equation, and using the first, brings the continuity equation to the form…

book:content:maxwell3

A remarkable simplification occurs if we rewrite Maxwell's equations using differential forms in 4-dimensional Minkowski space, as we now show. We will assume that the orientation is given by \begin{equation} \omega = dx\wedge dy\wedge dz\wedge dt \end{equation}

book:content:metric

Now suppose there is an inner product $g$ on $\bigwedge^1$. The components of $g$ in the basis $\{\sigma^i\}$ are given by \begin{equation} g^{ij} = g(\sigma^i,\sigma^j) \end{equation} As usual, we will also use $g$ to denote the matrix $(g^{ij})$.
Recall that an inner product is symmetric; thus, $g^{ji}=g^{ij}$, so the matrix $g$ is symmetric. An inner product must also be non-degenerate, and a little thought shows that this is equivalent to requiring that the determinant $|g|$ be nonzero…
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/metric2?rev=1387400940
When comparing the cross product in $\RR^3$ with the wedge product on $\bigwedge^1(\RR^3)$, it was natural to associate $\zhat$ with both $dz\in\bigwedge^1$ and $dx\wedge dy\in\bigwedge^2$. Since $|\zhat|=1$, we expect not only \begin{equation} g(dz,dz) = 1 \end{equation} which we already know, but also something like \begin{equation} g(dx\wedge dy,dx\wedge dy) = 1 \label{g2} \end{equation} which we have yet to define. Consider for simplicity $\bigwedge^2(\RR^2)$, which only has one independen…text/html2013-12-18T14:40:00-08:00book:content:minkowski
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/minkowski?rev=1387406400
Consider Minkowski 2-space, with line element \begin{equation} ds^2 = dx^2 - dt^2 \end{equation} and (ordered) orthonormal basis $\{dx,dt\}$. The orientation is \begin{equation} \omega = dx\wedge dt \end{equation} and it is straightforward to compute the Hodge dual on a basis. First of all, we have \begin{equation} dx \wedge {*}dx = g(dx,dx)\, dx \wedge dt = dx \wedge dt \end{equation} from which it follows that \begin{equation} {*}dx = dt \end{equation} Similarly, from \begin{equation} dt …text/html2012-12-01T19:53:00-08:00book:content:mult
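The defining relation $\alpha\wedge{*}\beta = g(\alpha,\beta)\,\omega$ can be verified for arbitrary 1-forms in Minkowski 2-space. The sketch below (an assumption of this aside, not the book's notation) represents $a\,dx+b\,dt$ by the coefficient pair $(a,b)$, with orientation $dx\wedge dt$:

```python
# Coefficient-level check that the Hodge dual *dx = dt, *dt = dx
# satisfies alpha ^ *beta = g(alpha, beta) dx^dt in Minkowski 2-space.
from sympy import symbols, expand

a, b, c, d = symbols('a b c d')

def wedge(p, q):
    """Coefficient of dx^dt in p^q for 1-forms p, q."""
    return p[0]*q[1] - p[1]*q[0]

def g(p, q):
    """Minkowski inner product: g(dx,dx) = 1, g(dt,dt) = -1."""
    return p[0]*q[0] - p[1]*q[1]

def hodge(p):
    """Linear extension of *dx = dt, *dt = dx."""
    return (p[1], p[0])

alpha, beta = (a, b), (c, d)
assert expand(wedge(alpha, hodge(beta)) - g(alpha, beta)) == 0
```

Note the sign: $*dt=+dx$, since $dt\wedge{*}dt = g(dt,dt)\,\omega = -dx\wedge dt = dt\wedge dx$.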
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/mult?rev=1354420380
We are now ready to multiply differentials. Starting from \begin{equation} du\,dv = \Jacobian{u}{v}{x}{y} \> dx\,dy \end{equation} first set $u=y$ and $v=x$ to obtain \begin{equation} dy\,dx = -dx\,dy \end{equation} Thus, multiplication of differentials is antisymmetric. Now set $u=v=x$, in which case the determinant is $0$, so that \begin{equation} dx\,dx = 0 \end{equation} Thus, differentials ``square'' to zero.text/html2013-12-18T13:04:00-08:00book:content:orient
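The two specializations of the Jacobian can be checked directly; a small SymPy sketch (not from the text):

```python
# Setting (u,v) = (y,x) gives Jacobian determinant -1 (antisymmetry),
# and (u,v) = (x,x) gives 0 (differentials square to zero).
from sympy import symbols, Matrix

x, y = symbols('x y')

def jacobian(u, v):
    """Determinant of d(u,v)/d(x,y)."""
    return Matrix([[u.diff(x), u.diff(y)],
                   [v.diff(x), v.diff(y)]]).det()

print(jacobian(y, x))  # -1  => dy dx = -dx dy
print(jacobian(x, x))  # 0   => dx dx = 0
```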
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/orient?rev=1387400640
As before, let $\{\sigma^i\}$ be an orthonormal basis of $\bigwedge^1(M)$, and consider $\bigwedge^n(M)$. First of all, this space is 1-dimensional, with basis \begin{equation} \omega = \sigma^1 \wedge \cdots \wedge \sigma^n \end{equation} But $\omega$ is a unit $n$-form, since \begin{equation} g(\omega,\omega) = g(\sigma^1,\sigma^1) \cdots g(\sigma^n,\sigma^n) = (-1)^s \label{orientation} \end{equation} Furthermore, there are precisely two unit $n$-forms, namely $\pm\omega$, a statement which d…text/html2014-04-25T19:43:00-08:00book:content:orthogonal
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/orthogonal?rev=1398480180
An orthogonal coordinate system is one in which the coordinate directions are mutually perpendicular. The standard examples are rectangular, polar, cylindrical and spherical coordinates.
Working in Euclidean $\RR^3$, suppose that $(u,v,w)$ are orthogonal coordinates. Then an infinitesimal displacement $d\rr$ between nearby points can be expressed as \begin{equation} d\rr = h_u\,du \,\Hat u + h_v\,dv \,\Hat v + h_w\,dw \,\Hat w \label{ortho} \end{equation} for some functions $h_u$, $h_v$, $h_w…text/html2014-12-31T10:42:00-08:00book:content:orthogonal2
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/orthogonal2?rev=1420051320
What is the gradient in an orthogonal coordinate system? From \begin{equation} df = \Partial{f}{u}\,du + \Partial{f}{v}\,dv + \Partial{f}{w}\,dw = \grad f\cdot d\rr \end{equation} and \begin{equation} d\rr = h_u\,du \,\Hat u + h_v\,dv \,\Hat v + h_w\,dw \,\Hat w \end{equation} we see that \begin{equation} \grad f = \frac{1}{h_u} \Partial{f}{u} \Hat u + \frac{1}{h_v} \Partial{f}{v} \Hat v + \frac{1}{h_w} \Partial{f}{w} \Hat w \end{equation}text/html2013-05-11T20:40:00-08:00book:content:parts
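As a concrete check of the formula (an aside, with the sample field $f=xy$ chosen for illustration): in polar coordinates $h_r=1$ and $h_\phi=r$, and the formula should reproduce $|\grad f|^2$ computed in rectangular coordinates.

```python
# Polar-coordinate check of the orthogonal-coordinates gradient formula,
# using the sample field f = x*y (an assumption of this sketch).
from sympy import symbols, sin, cos, simplify

r, phi = symbols('r phi', positive=True)
x, y = r*cos(phi), r*sin(phi)

f = x*y                      # the scalar field, written in polar form
grad_r   = f.diff(r)         # (1/h_r)   df/dr   with h_r = 1
grad_phi = f.diff(phi)/r     # (1/h_phi) df/dphi with h_phi = r

lhs = simplify(grad_r**2 + grad_phi**2)   # |grad f|^2 from the formula
rhs = simplify(y**2 + x**2)               # grad(xy) = y xhat + x yhat
assert simplify(lhs - rhs) == 0
```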
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/parts?rev=1368330000
text/html2014-04-25T16:03:00-08:00book:content:pictures
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/pictures?rev=1398466980
How do vectors act on functions? By telling you how much the function changes in the direction of the vector. This action is given by \begin{equation} \vv(f) = \vv\cdot\grad f \label{vact} \end{equation} and involves differentiation. But we have also seen that the 1-form $df$ corresponds to $\grad f$ through the Master Formula ($df = \grad f\cdot d\rr$), and we can therefore regard~(\ref{vact}) as an action of the 1-form $df$ on the vector $\vv$, given by \begin{equation} df(\vv) = \vv(f) =…text/html2014-04-24T16:25:00-08:00book:content:polar
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/polar?rev=1398381900
Consider $\RR^2$ with the usual rectangular coordinates $(x,y)$. The standard basis of 1-forms is $\{dx,dy\}$. But we could also use polar coordinates $(r,\phi)$, and the corresponding basis $\{dr,d\phi\}$. How are these bases related?
We have \begin{align} x &= r\,\cos\phi \\ y &= r\,\sin\phi \end{align} which leads to \begin{align} dx &= dr\,\cos\phi - r\,\sin\phi\,d\phi \\ dy &= dr\,\sin\phi + r\,\cos\phi\,d\phi \end{align}text/html2013-12-18T12:51:00-08:00book:content:polar2
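These differentials follow mechanically from the total differential; a quick SymPy verification (treating $dr$ and $d\phi$ as formal symbols, an assumption of this sketch):

```python
# Verifying dx and dy for x = r cos(phi), y = r sin(phi) via the
# total differential, with dr and dphi as formal symbols.
from sympy import symbols, sin, cos, expand

r, phi, dr, dphi = symbols('r phi dr dphi')
x = r*cos(phi)
y = r*sin(phi)

dx = x.diff(r)*dr + x.diff(phi)*dphi
dy = y.diff(r)*dr + y.diff(phi)*dphi

assert expand(dx - (cos(phi)*dr - r*sin(phi)*dphi)) == 0
assert expand(dy - (sin(phi)*dr + r*cos(phi)*dphi)) == 0
```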
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/polar2?rev=1387399860
As an example, consider polar coordinates. We have \begin{align} r^2 &= x^2 + y^2 \\ \tan\phi &= \frac{y}{x} \end{align} so that \begin{align} r\,dr &= x\,dx + y\,dy \\ \frac{r^2}{x^2}\,d\phi = (1+\tan^2\phi) \,d\phi &= \frac{x\,dy-y\,dx}{x^2} \end{align} Thus, \begin{align} g(r\,dr,r\,dr) &= g(x\,dx+y\,dy,x\,dx+y\,dy) = x^2 + y^2 = r^2\\ g(r^2\,d\phi,r^2\,d\phi) &= g(x\,dy-y\,dx,x\,dy-y\,dx) = x^2 + y^2 = r^2\\ g(r\,dr,r^2\,d\phi) &= g(x\,dx+y\,dy,x\,dy-y\,dx) = yx - xy = 0 \end{align} and we …text/html2011-03-03T09:47:00-08:00book:content:preface
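These inner products can be checked by hand or by machine. The sketch below (an aside, not the book's notation) represents a 1-form $a\,dx+b\,dy$ by the pair $(a,b)$ and takes $g$ to be the Euclidean dot product on coefficient pairs:

```python
# Checking g(r dr, r dr) = r^2, g(r^2 dphi, r^2 dphi) = r^2, and
# orthogonality, with 1-forms as (dx, dy) coefficient pairs.
from sympy import symbols, simplify

x, y = symbols('x y', positive=True)
r2 = x**2 + y**2           # r^2

r_dr    = (x, y)           # r dr     = x dx + y dy
r2_dphi = (-y, x)          # r^2 dphi = x dy - y dx

def g(p, q):
    """Euclidean inner product on coefficient pairs."""
    return p[0]*q[0] + p[1]*q[1]

assert simplify(g(r_dr, r_dr) - r2) == 0        # g(r dr, r dr) = r^2
assert simplify(g(r2_dphi, r2_dphi) - r2) == 0  # g(r^2 dphi, r^2 dphi) = r^2
assert g(r_dr, r2_dphi) == 0                    # orthogonal
```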
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/preface?rev=1299174420
I took my first course in differential geometry as a graduate student. I got an A, but I didn't learn much. Many of my colleagues, including several non-mathematicians with a desire to learn the subject, have reported similar experiences.
Why should this be the case? I believe there are two reasons. First, differential geometry --- like calculus --- tends to be taught as a branch of analysis, not geometry. Everything is a map between suitable spaces: Curves and surfaces are parameterized; …text/html2013-05-11T20:48:00-08:00book:content:products
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/products?rev=1368330480
We can of course multiply any differential forms together, not just 1-forms. So let $\alpha\in\bigwedge^p$ and $\beta\in\bigwedge^q$. Clearly we must have $\alpha\wedge\beta\in\bigwedge^{p+q}$. For example, if \begin{align} \alpha &= dx\wedge dy \\ \beta &= dz\wedge du\wedge dv \end{align} then \begin{equation} \alpha\wedge\beta = dx\wedge dy\wedge dz\wedge du\wedge dv \end{equation}text/html2013-11-29T16:24:00-08:00book:content:prules
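On basis $p$-forms, the wedge product amounts to concatenating index lists, with a sign from sorting and a zero whenever an index repeats. A minimal implementation (a sketch, with $dx,dy,dz,du,dv$ encoded as indices $1,\dots,5$; not the book's notation):

```python
# A basis p-form dx^{i1} ^ ... ^ dx^{ip} is a tuple of its indices;
# wedge returns (sign, sorted indices), or (0, ()) on a repeated factor.
def wedge(a, b):
    idx = list(a + b)
    if len(set(idx)) != len(idx):
        return 0, ()                 # repeated factor => zero
    # count inversions to get the sign of the sorting permutation
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign, tuple(sorted(idx))

# (dx^dy) ^ (dz^du^dv) with dx,dy,dz,du,dv encoded as 1..5:
print(wedge((1, 2), (3, 4, 5)))   # (1, (1, 2, 3, 4, 5))
print(wedge((2,), (1,)))          # (-1, (1, 2))  i.e. dy^dx = -dx^dy
print(wedge((1,), (1,)))          # (0, ())       i.e. dx^dx = 0
```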
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/prules?rev=1385771040
What is the gradient of a product of functions? The ordinary product rule for differentials is \begin{equation} d(fg) = g\,df + f\,dg \end{equation} and under the correspondence \begin{equation} df \longleftrightarrow \grad f \end{equation} we obtain the standard product rule \begin{equation} \grad(fg) = g\,\grad{f} + f\,\grad{g} \end{equation}text/html2011-02-08T10:39:00-08:00book:content:pseudo
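The product rule for differentials is exactly SymPy's product rule for derivatives; a one-variable check (an aside, with $f$ and $g$ arbitrary functions of $x$):

```python
# d(fg) = g df + f dg, checked along a single coordinate x.
from sympy import symbols, Function, expand

x = symbols('x')
f = Function('f')(x)
g = Function('g')(x)

d_fg = (f*g).diff(x)                       # "d(fg)" along x
assert expand(d_fg - (g*f.diff(x) + f*g.diff(x))) == 0
```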
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/pseudo?rev=1297190340
Is the cross product of two vectors a vector? Yes and no. Yes, of course, $\uu\times\vv$ is a vector, but is it the same type of vector as $\uu$ and $\vv$? Not really.
First of all, the dimensions are different: If $\uu$ and $\vv$ have dimensions of length, then $\uu\times\vv$ has dimensions of area. This behavior arises from the geometric definition of the cross product as directed area.text/html2014-05-01T20:49:00-08:00book:content:references
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/references?rev=1399002540
\begin{enumerate}
\item Gabriel Weinreich, Geometrical Vectors, University of Chicago Press, Chicago, 1998.
\item David Bachman, A Geometric Approach to Differential Forms, Birkhäuser, Boston, 2006.
\item Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, Gravitation, Freeman, San Francisco, 1973.text/html2013-11-26T15:55:00-08:00book:content:schwarz
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/schwarz?rev=1385510100
We have just seen that \begin{align} g(\alpha\wedge\beta,\alpha\wedge\beta) &= \left| \begin{matrix} g(\alpha,\alpha)& g(\alpha,\beta)\cr g(\beta,\alpha)& g(\beta,\beta)\cr \end{matrix} \right| \nonumber\\ &= g(\alpha,\alpha)g(\beta,\beta) - g(\alpha,\beta)^2 \end{align} But we also know that \begin{equation} \alpha\wedge\beta = f\,dx\wedge dy \end{equation} for some function $f$ (which was computed explicitly in the preceding section), so that \begin{align} g(\alpha\wedge\beta,\alpha\wedge\be…text/html2013-11-26T15:46:00-08:00book:content:signature
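In two dimensions the Gram-determinant identity above reduces to Lagrange's identity $g(\alpha,\alpha)g(\beta,\beta)-g(\alpha,\beta)^2=f^2$; a symbolic check (coefficient pairs with Euclidean $g$, an assumption of this sketch):

```python
# Lagrange's identity: (a0^2+a1^2)(b0^2+b1^2) - (a0 b0 + a1 b1)^2
#                      = (a0 b1 - a1 b0)^2.
from sympy import symbols, expand

a0, a1, b0, b1 = symbols('a0 a1 b0 b1')

def g(p, q):
    """Euclidean inner product on coefficient pairs."""
    return p[0]*q[0] + p[1]*q[1]

alpha, beta = (a0, a1), (b0, b1)
f = a0*b1 - a1*b0                 # alpha ^ beta = f dx^dy

gram = g(alpha, alpha)*g(beta, beta) - g(alpha, beta)**2
assert expand(gram - f**2) == 0
```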
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/signature?rev=1385509560
Although we do not know whether each $g(\sigma^i,\sigma^i)$ is $+1$ or $-1$, it is easily seen that the number of plus signs ($p$) and minus signs ($m$) is basis independent. We define the signature of the metric $g$ to be \begin{equation} s = m \end{equation} and therefore the signature is just the number of minus signs.text/html2013-12-01T16:06:00-08:00book:content:sphereint
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/sphereint?rev=1385942760
text/html2014-04-26T08:12:00-08:00book:content:spinors
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/spinors?rev=1398525120
We now allow ourselves to formally add differential forms of different ranks, and introduce a new product on the resulting space. For any function $f$ and 1-forms $\alpha$, $\beta$, define \begin{align} f \vee \alpha &= f\alpha \\ \alpha \vee \beta &= \alpha \wedge \beta + g(\alpha,\beta) \end{align} The operation $\vee$ is called the Clifford product, and is read as ``vee''. We extend the Clifford product to (formal sums of) differential forms of all ranks by requiring associativity. It is t…text/html2014-04-25T19:58:00-08:00book:content:stokes
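For 1-forms the Clifford product is the familiar "geometric product", and it can be implemented on basis blades with a bitmask encoding. The sketch below (an aside, Euclidean metric only, standard blade-sign algorithm; the dict representation is an assumption, not the book's notation) reproduces $dx\vee dx = 1$ and $dx\vee dy = -\,dy\vee dx = dx\wedge dy$:

```python
# Blade e_{i1}...e_{ik} is a bitmask; a multivector is a dict mask -> coeff.
def blade_sign(a, b):
    """Sign from moving the factors of b past those of a into sorted order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1 if swaps % 2 else 1

def vee(u, v):
    """Clifford product of multivectors u, v (Euclidean: e_i e_i = 1)."""
    out = {}
    for ma, ca in u.items():
        for mb, cb in v.items():
            m = ma ^ mb                       # repeated e_i's contract to 1
            c = blade_sign(ma, mb) * ca * cb
            out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c != 0}

dx, dy = {0b01: 1}, {0b10: 1}
print(vee(dx, dx))        # {0: 1}   dx v dx = g(dx,dx) = 1
print(vee(dx, dy))        # {3: 1}   dx v dy = dx ^ dy
print(vee(dy, dx))        # {3: -1}  dy v dx = -dx ^ dy
```

Associativity holds blade by blade, which is why extending $\vee$ by associativity is consistent.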
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/stokes?rev=1398481080
There are three ``big theorems'' in vector calculus, namely the Divergence Theorem: \begin{equation} \int_R \grad\cdot\FF\,dV = \int_{\partial R} \FF\cdot d\AA \label{divthm} \end{equation} Stokes' Theorem: \begin{equation} \int_S \grad\times\FF\cdot d\AA = \int_{\partial S} \FF\cdot d\rr \label{curlthm} \end{equation} and \begin{equation} \int_A^B \grad f\cdot d\rr = f \bigg|_A^B \label{gradthm} \end{equation} where ``$\partial$'' stands for ``the boundary of''. This latter theorem is someti…text/html2013-12-18T12:53:00-08:00book:content:surface
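As a concrete instance of the Divergence Theorem (an aside; the region and field are chosen for illustration, not taken from the text): with $R$ the unit cube and $\FF = x\,\xhat + y\,\yhat + z\,\zhat$, both sides equal 3.

```python
# Divergence Theorem on the unit cube with F = (x, y, z), grad.F = 3.
from sympy import symbols, integrate

x, y, z = symbols('x y z')
F = (x, y, z)
div_F = sum(c.diff(v) for c, v in zip(F, (x, y, z)))   # = 3

lhs = integrate(div_F, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Flux through the six faces; only x=1, y=1, z=1 contribute, since
# F.n vanishes on the three coordinate planes.
flux_x = integrate(F[0].subs(x, 1), (y, 0, 1), (z, 0, 1))
flux_y = integrate(F[1].subs(y, 1), (x, 0, 1), (z, 0, 1))
flux_z = integrate(F[2].subs(z, 1), (x, 0, 1), (y, 0, 1))
rhs = flux_x + flux_y + flux_z

assert lhs == rhs == 3
```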
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/surface?rev=1387399980
We began our study of differential forms by claiming that differential forms are integrands. Which ones?
Using the results of the last section, line integrals are easy. We have \begin{equation} \int_C \FF\cdot d\rr = \int_C F \end{equation} where of course \begin{equation} F = \FF\cdot d\rr \end{equation} is the 1-form corresponding to $\FF$.text/html2013-12-18T15:23:00-08:00book:content:surfaces
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/surfaces?rev=1387408980
One way to describe a surface is as the level surface of some function, which can be taken to be a coordinate. In this section, we will work in three Euclidean dimensions with coordinates ($x^1$,$x^2$,$x^3$), and assume that our surface is given as $x^3=\hbox{constant}$. Thus, on our surface we have $dx^3=0$. More generally, in orthogonal coordinates, we can assume that $\sigma^3=0$ on our surface.text/html2013-12-18T15:25:00-08:00book:content:surfaces3d
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/surfaces3d?rev=1387409100
Figure 1: The $xy$-plane.
Consider first the $xy$-plane. As shown in Figure~1, we have \begin{align} \ee_1 &= \xhat \\ \ee_2 &= \yhat \\ \nn=\ee_3 &= \zhat \end{align} and of course \begin{equation} d\nn = 0 \end{equation} so that $S$ is the zero matrix, and all of the curvatures are zero.text/html2013-12-19T16:32:00-08:00book:content:tensoralg
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/tensoralg?rev=1387499520
(This section can be skipped on first reading.)
Tensors are multilinear maps on vectors. Differential forms are a type of tensor.
Consider the 1-form $df$ for some function $f$. How does it act on a vector $\vv$? That's easy: by giving the directional derivative, namely \begin{equation} df(\vv) = \grad f\cdot\vv \end{equation} This is just the master formula in a new guise. Recall that the master formula tells us how to relate $df$ and $\grad f$, namely \begin{equation} df = \grad f\cdot d…text/html2014-04-25T16:02:00-08:00book:content:tensors
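The action $df(\vv)=\grad f\cdot\vv$ can be compared with the directional derivative computed from first principles; a sample check (an aside, with $f=x^2+y^2$ and $\vv=(1,2)$ chosen for illustration):

```python
# df(v) = grad f . v, compared with d/dt f(p + t v) at t = 0.
from sympy import symbols, Matrix

x, y, t = symbols('x y t')
f = x**2 + y**2
v = Matrix([1, 2])

grad_f = Matrix([f.diff(x), f.diff(y)])
df_v = grad_f.dot(v)          # df(v) = 2x + 4y

# Directional derivative along v: d/dt f(x + t, y + 2t) at t = 0.
dd = f.subs({x: x + t, y: y + 2*t}).diff(t).subs(t, 0)
assert (df_v - dd).expand() == 0
```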
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/tensors?rev=1398466920
We now have two quite different interpretations of $df$: \begin{itemize}\item $df$ represents a small change in $f$ (infinitesimals); \item $df(\vv)=\grad f\cdot\vv$ (linear map on vectors). \end{itemize} The first interpretation comes from calculus; $df$ is a differential. The second comes from linear algebra; $df$ is a tensor. We will use both of these interpretations: The calculus of differentials allows us to relate differentials of different functions, and tensor algebra tells us how to…text/html2014-04-26T08:14:00-08:00book:content:topology
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/topology?rev=1398525240
text/html2013-12-23T21:42:00-08:00book:content:torus
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/torus?rev=1387863720
Figure 1: A torus in $\RR^3$.
Figure 2: Parameterizing the torus as a surface of revolution about the $z$-axis.
The torus $\TTT$, shown in Figure~1, can be parameterized as a surface of revolution about the $z$-axis, as shown in Figure~2, resulting in \begin{align} x &= (R+r\,\cos\theta)\,\cos\phi \\ y &= (R+r\,\cos\theta)\,\sin\phi \\ z &= r\,\sin\theta \end{align} from which it follows by direct computation that \begin{equation} ds^2 = r^2\,d\theta^2 + (R+r\,\cos\theta)^2\,d\phi^2 \lab…text/html2015-04-18T10:46:00-08:00book:content:unique
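The "direct computation" of the line element can be carried out with SymPy, treating $d\theta$ and $d\phi$ as formal symbols (an assumption of this sketch):

```python
# ds^2 = dx^2 + dy^2 + dz^2 for the torus parameterization, with
# dtheta and dphi as formal symbols.
from sympy import symbols, sin, cos, expand, simplify

R, r, theta, phi, dtheta, dphi = symbols('R r theta phi dtheta dphi')

x = (R + r*cos(theta))*cos(phi)
y = (R + r*cos(theta))*sin(phi)
z = r*sin(theta)

def d(f):
    """Total differential in the surface coordinates theta, phi."""
    return f.diff(theta)*dtheta + f.diff(phi)*dphi

ds2 = expand(d(x)**2 + d(y)**2 + d(z)**2)
expected = r**2*dtheta**2 + (R + r*cos(theta))**2*dphi**2
assert simplify(ds2 - expand(expected)) == 0
```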
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/unique?rev=1429379160
We now outline the derivation of an explicit formula for the connection 1-forms, thus also proving that, given $d\rr$, there is a unique Levi-Civita connection. We emphasize, however, that this formula is rarely the most efficient way to actually compute the connection 1-forms.text/html2014-04-30T21:16:00-08:00book:content:vcreview
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/vcreview?rev=1398917760
The basic objects in vector calculus are vector fields $\FF$ in 3-dimensional Euclidean space, $\RR^3$. A vector field can be expressed in terms of a basis by giving its components with respect to that basis, such as \begin{equation} \FF = F_x\,\xhat + F_y\,\yhat + F_z\,\zhat \end{equation} where $\{\xhat,\yhat,\zhat\}$ denotes the standard rectangular basis for $\RR^3$, also written as $\{\ii,\jj,\kk\}$.text/html2013-12-18T12:48:00-08:00book:content:vectors
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/vectors?rev=1387399680
We have repeatedly made use of the obvious correspondence between vector fields and 1-forms, under which $\xhat$ becomes $dx$, etc. It is time to make this correspondence more formal. Throughout the following discussion, it is useful to imagine that we are working in ordinary three-dimensional Euclidean space, with the usual dot product. However, the argument is the same in any dimension and signature.text/html2013-12-18T15:17:00-08:00book:content:wpolar
http://sites.science.oregonstate.edu/coursewikis/GDF/book/content/wpolar?rev=1387408620
Since the Levi-Civita connection is unique, the easiest way to determine it is often educated guesswork. We illustrate the technique using polar coordinates.
Our orthonormal basis of 1-forms is $\{dr,r\,d\phi\}$. Writing out the torsion-free condition, we have \begin{align} d(dr) + \omega^r{}_r\wedge dr + \omega^r{}_\phi\wedge r\,d\phi &= 0 \\ d(r\,d\phi) + \omega^\phi{}_r\wedge dr + \omega^\phi{}_\phi\wedge r\,d\phi &= 0 \end{align} But metric compatibility tells us that \begin{align} \omeg…
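The guess $\omega^\phi{}_r = d\phi$ (with $\omega^r{}_\phi = -d\phi$ from metric compatibility) can be verified against the torsion-free condition; a coefficient-level sketch (an aside: a 1-form $A\,dr + B\,d\phi$ is represented by the pair $(A,B)$ of expressions in $r,\phi$):

```python
# Torsion-free check in polar coordinates for omega^phi_r = dphi.
from sympy import symbols, simplify, Integer

r, phi = symbols('r phi', positive=True)
one, zero = Integer(1), Integer(0)

def ext_d(p):
    """Coefficient of dr^dphi in d(A dr + B dphi)."""
    A, B = p
    return B.diff(r) - A.diff(phi)

def wedge(p, q):
    """Coefficient of dr^dphi in p^q for 1-forms p, q."""
    return p[0]*q[1] - p[1]*q[0]

dr      = (one, zero)         # dr
r_dphi  = (zero, r)           # r dphi
w_phi_r = (zero, one)         # omega^phi_r = dphi
w_r_phi = (zero, -one)        # omega^r_phi = -dphi (metric compatibility)

assert simplify(ext_d(dr) + wedge(w_r_phi, r_dphi)) == 0
assert simplify(ext_d(r_dphi) + wedge(w_phi_r, dr)) == 0
```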