The General Case
We now compute the Hodge dual on an orthonormal basis directly from the definition. Recall that
\begin{equation}
\alpha \wedge {*}\beta = g(\alpha,\beta)\,\omega
\end{equation}
Suppose $\sigma^I$ is a basis $p$-form in $\bigwedge^p$. By permuting the basis 1-forms $\sigma^i$, we can bring $\sigma^I$ to the form
\begin{equation}
\sigma^I = \sigma^1 \wedge \cdots \wedge \sigma^p
\end{equation}
Furthermore, we can assume without loss of generality that this permutation is even, and hence does not change the orientation $\omega$.

Consider now the defining property
\begin{equation}
\sigma^I\wedge{*}\sigma^I = g(\sigma^I,\sigma^I)\,\omega
\end{equation}
from which it is apparent that ${*}\sigma^I$ must contain a term of the form
\begin{equation}
\sigma^J = \sigma^{p+1} \wedge \cdots \wedge \sigma^n
\end{equation}
Furthermore, ${*}\sigma^I$ cannot contain any other terms, since our basis is orthonormal, and any other term would fail to yield zero when wedged with the remaining basis $p$-forms, as the definition requires. We conclude that
\begin{equation}
{*}\sigma^I
  = g(\sigma^I,\sigma^I) \> \sigma^J
  = g(\sigma^1,\sigma^1) \cdots g(\sigma^p,\sigma^p) \> \sigma^J
\end{equation}
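As a brief check of this formula (in assumed conventions not fixed by this section: Minkowski 4-space with orthonormal basis $\{dt,dx,dy,dz\}$, line element $ds^2=-dt^2+dx^2+dy^2+dz^2$, and orientation $\omega=dt\wedge dx\wedge dy\wedge dz$), take $\sigma^I=dt\wedge dx$, so that $\sigma^J=dy\wedge dz$. Then
\begin{equation}
{*}(dt\wedge dx) = g(dt,dt)\,g(dx,dx)\> dy\wedge dz = -\,dy\wedge dz
\end{equation}
and indeed
\begin{equation}
(dt\wedge dx)\wedge\bigl(-\,dy\wedge dz\bigr) = -\,\omega = g(dt\wedge dx,\,dt\wedge dx)\,\omega
\end{equation}
as the definition requires.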
We can repeat this argument to determine ${*}\sigma^J$. We have
\begin{equation}
\sigma^J \wedge \sigma^I = (-1)^{p(n-p)}\, \sigma^I \wedge \sigma^J = (-1)^{p(n-p)}\, \omega
\end{equation}
from which it follows that
\begin{equation}
{*}\sigma^J = (-1)^{p(n-p)}\, g(\sigma^J,\sigma^J) \> \sigma^I
\end{equation}
Putting these duals together, we have
\begin{align}
*{*}\sigma^I
  &= g(\sigma^I,\sigma^I)\>{*}\sigma^J \nonumber\\
  &= (-1)^{p(n-p)}\, g(\sigma^I,\sigma^I)\, g(\sigma^J,\sigma^J) \> \sigma^I \nonumber\\
  &= (-1)^{p(n-p)}\, g(\sigma^1,\sigma^1)\cdots g(\sigma^n,\sigma^n) \> \sigma^I \nonumber\\
  &= (-1)^{p(n-p)}\, g(\omega,\omega) \> \sigma^I \nonumber\\
  &= (-1)^{p(n-p)+s} \> \sigma^I
\end{align}
where $s$ denotes the signature of the metric, so that $g(\omega,\omega)=(-1)^s$. This identity is true for any differential form, and is most easily remembered as
\begin{equation}
*{*} = (-1)^{p(n-p)+s}
\end{equation}
A special case is $s=0$ and $n=3$, for which the exponent is always even, so that
\begin{equation}
*{*} = 1
\end{equation}
in agreement with §Relationships between Differential Forms.
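Continuing the check above in the same assumed Minkowski conventions (so $n=4$ and, counting minus signs in the metric, $s=1$), for the 2-form $dt\wedge dx$ the exponent is $p(n-p)+s = 2\cdot2+1 = 5$, so the identity predicts ${*}{*}(dt\wedge dx) = -\,dt\wedge dx$. Computing directly, since $dy\wedge dz\wedge dt\wedge dx = \omega$, we have
\begin{equation}
{*}{*}(dt\wedge dx)
  = {*}\bigl(-\,dy\wedge dz\bigr)
  = -\,g(dy,dy)\,g(dz,dz)\> dt\wedge dx
  = -\,dt\wedge dx
\end{equation}
in agreement with the general result.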