The Metric Tensor
Now suppose there is an inner product $g$ on $\bigwedge^1$. The components of $g$ in the basis $\{\sigma^i\}$ are given by \begin{equation} g^{ij} = g(\sigma^i,\sigma^j) \end{equation} As usual, we will also use $g$ to denote the matrix $(g^{ij})$.
Recall that an inner product is symmetric; thus, $g^{ji}=g^{ij}$, so the matrix $g$ is symmetric. An inner product must also be non-degenerate, meaning that the only 1-form whose inner product with every 1-form vanishes is the zero 1-form. In matrix language, this says that $g$ has trivial kernel, which is equivalent to requiring that the determinant $|g|$ be nonzero; $g$ must be invertible. The inner product $g$ is often called the metric tensor, or, since it acts on 1-forms (and not vectors), the inverse metric.
But any symmetric matrix can be diagonalized, and if the determinant is nonzero, then none of the diagonal entries can be $0$. Diagonalizing $g$ amounts to choosing a new basis in which the off-diagonal components vanish; equivalently, we can assume that the basis $\{\sigma^i\}$ is orthogonal: \begin{equation} g(\sigma^i,\sigma^j) = 0 \qquad (i\ne j) \end{equation} Furthermore, the basis can be rescaled, so that \begin{equation} |g(\sigma^i,\sigma^i)| = 1 \end{equation} However, since we are not assuming that $g$ is positive definite, we cannot determine the signs. Putting this together, we can assume without loss of generality that our basis is orthonormal, that is, that \begin{equation} g(\sigma^i,\sigma^j) = \pm \delta^{ij} \end{equation} where $\delta^{ij}$ is the Kronecker delta, which is $1$ if $i=j$ and $0$ otherwise.
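Both conditions are easy to check once the components are collected into a matrix. Here is a minimal sketch in Python using sympy; the sample components are invented purely for illustration and do not correspond to any particular metric.

```python
import sympy as sp

# Components g^{ij} = g(sigma^i, sigma^j) collected into a matrix.
# The sample values are illustrative only.
g = sp.Matrix([[2, 1],
               [1, 2]])

assert g == g.T       # symmetry: g^{ji} = g^{ij}
assert g.det() != 0   # non-degeneracy: |g| != 0, so g is invertible
print(g.inv())        # the inverse exists
```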
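The diagonalize-and-rescale argument can be carried out explicitly. Below is a sketch using sympy's `diagonalize`, applied to the same invented matrix as above; the names `P`, `D`, `S`, and `B` are ad hoc labels, not standard notation.

```python
import sympy as sp

g = sp.Matrix([[2, 1],
               [1, 2]])            # same illustrative symmetric matrix

# Orthonormal eigenvectors of a symmetric matrix diagonalize it by
# congruence: P.T * g * P = D, with D diagonal.
P, D = g.diagonalize(normalize=True)

# Rescale each basis 1-form by 1/sqrt(|eigenvalue|); only signs survive.
S = sp.diag(*[1/sp.sqrt(sp.Abs(d)) for d in D.diagonal()])
B = P * S
print(sp.simplify(B.T * g * B))    # diag(1, 1): here both signs are +1
```

For this positive-definite example both diagonal entries come out $+1$; an indefinite example would instead produce a mix of $+1$ and $-1$, as in the Minkowski case discussed later.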
The information in $g$ can be encoded in other ways. A standard description is to give the line element, which, in a coordinate basis, is \begin{equation} ds^2 = g_{ij} \,dx^i dx^j \end{equation} where the matrix $(g_{ij})$ turns out to be the inverse of the matrix $(g^{ij})$ (for the basis $\{\sigma^i=dx^i\}$). If our basis is orthonormal, then (the matrix) $g$ is its own inverse!
An example we have already seen is polar coordinates, for which \begin{equation} ds^2 = dr^2 + r^2\,d\phi^2 \end{equation} which tells us that $\{dr,r\,d\phi\}$ is an orthonormal basis (and $\{dr,d\phi\}$ is a coordinate basis). From the point of view of differential forms, the line element is nothing more than a way of encoding the metric.
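As a check of the inverse relationship between $(g_{ij})$ and $(g^{ij})$, here is a short sympy sketch for polar coordinates; the names `g_lower` and `g_upper` are illustrative labels for the two matrices.

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# Line element ds^2 = dr^2 + r^2 dphi^2 in the coordinate basis {dr, dphi}:
g_lower = sp.Matrix([[1, 0],
                     [0, r**2]])    # (g_{ij})

g_upper = g_lower.inv()             # (g^{ij}) = diag(1, 1/r^2)
print(g_upper)

# In the orthonormal basis {dr, r dphi}, both matrices are the identity,
# which is indeed its own inverse.
```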
Equivalent information is also contained in the vector differential $d\rr$, which is most useful in an orthogonal coordinate system, that is, when \begin{equation} g(dx^i,dx^j) = 0 \qquad (i\ne j) \end{equation} In this case, each $\sigma^i$ is obtained by normalizing $dx^i$, and we write \begin{equation} d\rr = \sigma^i \hat{e}_i \end{equation} where $\hat{e}_i$ is the unit vector in the $x^i$ direction. Thus, \begin{equation} d\rr \cdot d\rr = ds^2 \end{equation} as expected. From the point of view of differential forms, $d\rr$ is just a vector-valued 1-form.
We have seen this before in polar coordinates, where \begin{equation} d\rr = dr \,\rhat + (r\,d\phi)\,\phat \end{equation}
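The relation $d\rr\cdot d\rr = ds^2$ can be verified symbolically. In the sketch below, $dr$ and $d\phi$ are treated as formal symbols and $\rhat$, $\phat$ as orthonormal column vectors; this models only the algebra, not the full calculus of forms.

```python
import sympy as sp

r, dr, dphi = sp.symbols('r dr dphi')

# Formal symbols for the 1-forms; orthonormal unit vectors for rhat, phat.
rhat = sp.Matrix([1, 0])
phat = sp.Matrix([0, 1])

drr = dr * rhat + (r * dphi) * phat   # the vector differential
ds2 = drr.dot(drr)                    # dot product with itself
print(sp.expand(ds2))                 # dr**2 + r**2*dphi**2, the line element
```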