- §1. Differential Forms
- §2. Higher Rank Forms
- §3. Polar Coordinates
- §4. Linear Maps
- §5. Cross Product
- §6. Dot Product
- §7. Products of Forms
- §8. Pictures of Forms
- §9. Tensors
- §10. Inner Products
- §11. Polar Coordinates II
Differential Forms
We are now ready to provide a definition of differential forms. Consider $\RR^n$ with coordinates $\{x^i\}$, where $i=1,…,n$. (As we will see later, any surface $M$ in $\RR^n$ can be used instead.) Now construct all linear combinations of the differentials $\{dx^i\}$, that is, consider the space $V$ defined by \begin{equation} V = \left\langle\{dx^i\}\right\rangle = \{a_i \, dx^i \} \end{equation} where we have introduced the Einstein summation convention, under which repeated indices are summed over. If the coefficients $a_i$ are numbers, then $V$ is an $n$-dimensional vector space with basis $\{dx^i\}$. We will, however, allow the coefficients to be functions on $\RR^n$, which turns $V$ into a module over the ring of functions. (This is entirely analogous to the transition from vectors to vector fields.)
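As a concrete (and entirely optional) illustration, the following SymPy sketch represents a 1-form on $\RR^3$ as a dictionary of coefficient functions and implements the addition and multiplication by functions that give $V$ its module structure. The helper names and sample coefficients are assumptions made for this example, not notation from the text.

```python
# A minimal sketch: a 1-form a_i dx^i on R^3 as a dict {dx^i: a_i},
# where the coefficients a_i may be functions of x, y, z.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# Bookkeeping symbols standing in for the basis 1-forms dx, dy, dz.
basis = tuple(sp.Symbol(f'd{c}') for c in coords)

def one_form(coeffs):
    """Build the 1-form a_i dx^i from a tuple of coefficient expressions."""
    return dict(zip(basis, coeffs))

def add(alpha, beta):
    """1-forms add componentwise."""
    return {dxi: sp.simplify(alpha.get(dxi, 0) + beta.get(dxi, 0)) for dxi in basis}

def scale(f, alpha):
    """Multiplication by a function f on R^3 (the module structure)."""
    return {dxi: sp.expand(f * a) for dxi, a in alpha.items()}

# Example with function coefficients: alpha = y dx + x z dz, beta = dx + x dy.
alpha = one_form((y, 0, x*z))
beta  = one_form((1, x, 0))
print(add(alpha, scale(x, beta)))   # {dx: x + y, dy: x**2, dz: x*z}
```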
What are the elements of $V$? Any differential $df$ can be expanded in terms of our basis using calculus, namely \begin{equation} df = \Partial{f}{x^i} \, dx^i \label{partials} \end{equation} so that $df\in V$. What integrand does $df$ correspond to? The fundamental theorem for gradients says that \begin{equation} \int_C \grad f \cdot d\rr = f \big|_A^B \end{equation} for any curve $C$ from point $A$ to point $B$. We rewrite this in terms of integrands as \begin{equation} df = \grad f \cdot d\rr \label{Master} \end{equation} which we refer to as the Master Formula because of its importance in vector calculus. Thus, the differential form $df$ represents the integrand corresponding to the vector field $\grad f$; we will have more to say about this correspondence later.
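To see the Master Formula in action, here is a small symbolic check, a sketch only, with $f$, the curve $C$, and its endpoints chosen arbitrarily for illustration: it expands $df$ using (\ref{partials}), pulls $\grad f\cdot d\rr$ back to the curve parameter, and confirms that the resulting line integral equals $f\big|_A^B$.

```python
# Sketch: verify int_C grad f . dr = f(B) - f(A) for a sample f and curve C.
import sympy as sp

x, y, z = sp.symbols('x y z')
t = sp.Symbol('t')

f = x**2 * y + sp.sin(z)                       # any smooth f will do

# df = (df/dx^i) dx^i: the coefficients are just the partial derivatives.
df = {c: sp.diff(f, c) for c in (x, y, z)}

# A curve C from A = r(0) to B = r(1), parametrized by t.
r = {x: t, y: t**2, z: sp.pi * t}

# Pull grad f . dr back to the parameter t and integrate over [0, 1].
integrand = sum(df[c].subs(r) * sp.diff(r[c], t) for c in (x, y, z))
line_integral = sp.integrate(integrand, (t, 0, 1))

# Compare with f evaluated at the endpoints.
A = {x: 0, y: 0, z: 0}
B = {x: 1, y: 1, z: sp.pi}
assert sp.simplify(line_integral - (f.subs(B) - f.subs(A))) == 0
```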
A special case of (\ref{partials}) occurs when $f=x^j$ is a coordinate function, in which case \begin{equation} d(x^j) = \Partial{x^j}{x^i} \, dx^i = dx^j \end{equation} which shows that our basis really is what we thought. We can also use calculus to check that this construction does not depend on the choice of coordinates, that is, that $df$ is the same when expanded in terms of any set of coordinates on $\RR^n$. Schematically, this argument goes as follows: \begin{align} df &= \Partial{f}{u} \,du + \Partial{f}{v} \,dv + … \nonumber\\ &= \Partial{f}{u} \left(\Partial{u}{x} \,dx + … \right) + \Partial{f}{v} \left(\Partial{v}{x} \,dx + … \right) + … \nonumber\\ &= \left( \Partial{f}{u} \Partial{u}{x} + \Partial{f}{v} \Partial{v}{x} + … \right) \,dx + … \nonumber\\ &= \Partial{f}{x} \,dx + … \end{align} where the last step is just the chain rule. This shows how to use calculus to convert between one basis and another.
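The same change-of-basis argument can be checked symbolically. The sketch below, with a sample $f$ and polar coordinates $(r,\phi)$ on the first quadrant chosen purely for illustration, expands $df$ in terms of $dr$ and $d\phi$, rewrites those differentials in the $(dx,dy)$ basis, and verifies that the resulting $dx$ coefficient is $\Partial{f}{x}$.

```python
# Sketch: expand df in polar coordinates and convert back to the (dx, dy) basis.
import sympy as sp

# Restrict to the first quadrant so the coordinate change is single-valued.
x, y = sp.symbols('x y', positive=True)
r, phi = sp.symbols('r phi', positive=True)
rho = sp.sqrt(x**2 + y**2)

f = x * y**2                                          # sample f, chosen here
f_polar = f.subs({x: r*sp.cos(phi), y: r*sp.sin(phi)})

# Coefficients of df in the polar basis: df = f_r dr + f_phi dphi ...
f_r, f_phi = sp.diff(f_polar, r), sp.diff(f_polar, phi)

# ... rewritten as functions of x and y.
to_xy = {r: rho, sp.cos(phi): x/rho, sp.sin(phi): y/rho}
f_r, f_phi = f_r.subs(to_xy), f_phi.subs(to_xy)

# The dx coefficients of dr and dphi, from r(x, y) and phi(x, y).
dr_dx, dphi_dx = sp.diff(rho, x), sp.diff(sp.atan2(y, x), x)

# The dx coefficient of df must match the direct expansion (df/dx) dx + ...
dx_coeff = f_r * dr_dx + f_phi * dphi_dx
assert sp.simplify(dx_coeff - sp.diff(f, x)) == 0
```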
We will refer to elements of $V$ as 1-forms on $\RR^n$, and will henceforth write $V$ itself as $\bigwedge^1(\RR^n)$, which we will often abbreviate to $\bigwedge^1$.
We already know how to multiply 1-forms: use the rules for integrands. We now write this product with an explicit symbol, $\wedge$, read as “wedge”. Thus, we require \begin{align} \alpha\wedge\alpha &= 0 \\ \beta\wedge\alpha &= -\alpha\wedge\beta \end{align} for any $\alpha,\beta\in\bigwedge^1$. We also assume distributivity, associativity, and compatibility with multiplication by functions, so that \begin{align} (\alpha+\beta)\wedge\gamma &= \alpha\wedge\gamma + \beta\wedge\gamma \\ f (\alpha\wedge\gamma) &= (f\alpha)\wedge\gamma \\ (\alpha\wedge\beta) \wedge \gamma &= \alpha \wedge (\beta\wedge\gamma) \end{align} for any $\alpha,\beta,\gamma\in\bigwedge^1$ and function $f$.
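These rules determine the wedge product of any two 1-forms once it is known on the basis. As a sketch, with names and sample forms chosen only for illustration, the following code computes $\alpha\wedge\beta$ componentwise on $\RR^3$ and checks that $\alpha\wedge\alpha=0$ and $\beta\wedge\alpha=-\alpha\wedge\beta$.

```python
# Sketch: the wedge product of 1-forms on R^3. A 1-form is a dict {i: a_i}
# over basis indices i (0, 1, 2 label dx, dy, dz); the wedge of two 1-forms
# is a 2-form stored as {(i, j): c_ij} with i < j.
import sympy as sp

x, y, z = sp.symbols('x y z')

def wedge(alpha, beta):
    """(a_i dx^i) ^ (b_j dx^j) = sum over i < j of (a_i b_j - a_j b_i) dx^i ^ dx^j."""
    keys = sorted(set(alpha) | set(beta))
    result = {}
    for m, i in enumerate(keys):
        for j in keys[m+1:]:
            c = sp.expand(alpha.get(i, 0)*beta.get(j, 0) - alpha.get(j, 0)*beta.get(i, 0))
            if c != 0:
                result[(i, j)] = c
    return result

# alpha = y dx + x dz, beta = dx + z dy
alpha = {0: y, 2: x}
beta  = {0: 1, 1: z}

print(wedge(alpha, beta))       # {(0, 1): y*z, (0, 2): -x, (1, 2): -x*z}
print(wedge(alpha, alpha))      # {}  (alpha ^ alpha = 0)

# Antisymmetry: beta ^ alpha = -(alpha ^ beta)
ab, ba = wedge(alpha, beta), wedge(beta, alpha)
assert all(sp.simplify(ab[k] + ba[k]) == 0 for k in ab)
```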