This method provides a way to determine the basis adapted to any coordinate system, which, after normalization, will be a frame provided (as is often the case) the coordinates are orthogonal.
Given a coordinate system $(y_i)$, we can (in principle) express the natural coordinates $(x_i)=(x,y,z)$ in terms of the $y_i$. The position vector $\pp=x\,\xhat+y\,\yhat+z\,\zhat$ is then also a function of the $y_i$. Holding any two of them fixed yields a parametric curve along which the remaining coordinate is the parameter. The tangent vectors along these curves yield adapted basis vectors which (for orthogonal coordinates) become a frame when renormalized.
Example: Polar coordinates
The position vector in polar coordinates is $\pp=r\cos\theta\,\xhat+r\sin\theta\,\yhat$. Thus, \begin{align} \rhat &= \frac{\partial\pp/\partial r}{|\partial\pp/\partial r|} =\cos\theta\,\xhat+\sin\theta\,\yhat \\ \that &= \frac{\partial\pp/\partial\theta}{|\partial\pp/\partial\theta|} =-\sin\theta\,\xhat+\cos\theta\,\yhat \end{align}
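This computation is easy to check symbolically. The following sketch (using SymPy, which is an illustrative addition, not part of the text) builds the position vector, differentiates it along each coordinate curve, and normalizes:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Position vector p = r cos(theta) xhat + r sin(theta) yhat, in Cartesian components.
p = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])

# Tangent vectors along the coordinate curves.
dp_dr = p.diff(r)
dp_dth = p.diff(th)

# Normalize (|v| = sqrt(v . v)) to obtain the adapted frame {rhat, thetahat}.
rhat = sp.simplify(dp_dr / sp.sqrt(dp_dr.dot(dp_dr)))
thetahat = sp.simplify(dp_dth / sp.sqrt(dp_dth.dot(dp_dth)))

print(rhat.T)      # Matrix([[cos(theta), sin(theta)]])
print(thetahat.T)  # Matrix([[-sin(theta), cos(theta)]])
```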
A good picture of the surfaces obtained by holding one coordinate constant can often be used to determine the adapted frame.
This method relies on the fact that the attitude matrix is orthogonal, that is, $A^T A=I$.
In matrix language, if $e=Au$ and $\alpha^T u=I$, then $\sigma=A\alpha$ satisfies $\sigma^T e = \alpha^T A^T A u = \alpha^T u = I$. Thus, if $\alpha=(dx_i)$ is the dual basis to $u=(\uu_i)$, then $\sigma=A\alpha=(a_{ij}dx_j)$ is the dual basis to $e=Au=(a_{ij}\uu_j)$. In words, the matrix relating the dual basis to the natural dual basis is exactly the same as the attitude matrix relating the given frame to the natural frame.
Example: Polar coordinates
Since \begin{align} \rhat &= \cos\theta\,\xhat + \sin\theta\,\yhat \\ \that &= -\sin\theta\,\xhat + \cos\theta\,\yhat \end{align} it must be the case that \begin{align} \sigma_1 &= \cos\theta\,dx + \sin\theta\,dy = dr \\ \sigma_2 &= -\sin\theta\,dx + \cos\theta\,dy = r\,d\theta \end{align} where the last step requires expressing the final answer in terms of the new coordinates.
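As a symbolic check of this matrix relation, the following SymPy sketch (illustrative only; the symbols `dr` and `dtheta` stand in for the 1-forms $dr$ and $d\theta$) applies the attitude matrix to $(dx,dy)$ expressed in polar coordinates:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
dr, dth = sp.symbols('dr dtheta')  # formal symbols standing for the 1-forms dr, dtheta

# Attitude matrix: rows express rhat, thetahat in terms of xhat, yhat.
A = sp.Matrix([[sp.cos(th), sp.sin(th)],
               [-sp.sin(th), sp.cos(th)]])

# Express dx, dy in the new coordinates, from x = r cos(theta), y = r sin(theta).
dx = sp.cos(th) * dr - r * sp.sin(th) * dth
dy = sp.sin(th) * dr + r * sp.cos(th) * dth

# sigma = A (dx, dy)^T should reproduce (dr, r dtheta).
sigma = sp.simplify(A * sp.Matrix([dx, dy]))
print(sigma)
```

The result is the column $(dr,\;r\,d\theta)$, as claimed.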
This method looks like informal manipulation of infinitesimals, but can be rigorously derived using tensor analysis, with arclength reinterpreted as the metric tensor. It yields the dual basis without using vectors!
As argued in multivariable calculus, arclength $ds$ in two dimensions satisfies \begin{equation} ds^2=dx^2+dy^2 \end{equation} Accepting for the moment that 1-forms can be "squared", we can argue that \begin{equation} \sigma^T \sigma = \alpha^T A^T A \alpha = \alpha^T \alpha = ds^2 \end{equation} with $\sigma=(\sigma_i)$ and $\alpha=(dx_i)$, as above. Thus, in three dimensions, \begin{equation} \sigma_1^2 + \sigma_2^2 + \sigma_3^2 = dx^2 + dy^2 + dz^2 \end{equation}
Example: Polar coordinates
Starting from $x=r\cos\theta$, $y=r\sin\theta$ one computes \begin{equation} dx^2 + dy^2 = dr^2 + r^2 d\theta^2 \end{equation} from which it is reasonably obvious that $\sigma_1=dr$ and $\sigma_2=r\,d\theta$.
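This manipulation of differentials can be mirrored symbolically; in the following SymPy sketch (illustrative only) the symbols `dr` and `dtheta` stand in for the 1-forms, and the squaring is ordinary commutative multiplication, just as in the argument above:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
dr, dth = sp.symbols('dr dtheta')  # formal symbols for the 1-forms

x = r * sp.cos(th)
y = r * sp.sin(th)

# Total differentials of x and y.
dx = x.diff(r) * dr + x.diff(th) * dth
dy = y.diff(r) * dr + y.diff(th) * dth

# The cross terms cancel, leaving dr^2 + r^2 dtheta^2.
ds2 = sp.expand(dx**2 + dy**2)
print(sp.simplify(ds2))
```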
As above, the position vector is $\pp=x\,\xhat+y\,\yhat+z\,\zhat$, so that \begin{equation} d\pp=dx\,\xhat+dy\,\yhat+dz\,\zhat=\Partial{\pp}{q_i} dq_i \end{equation} for any coordinates $\{q_i\}$. Setting $h_i=\left|\frac{\partial\pp}{\partial q_i}\right|$, we have already seen that $\ee_i=\frac{1}{h_i}\frac{\partial\pp}{\partial q_i}$ (no sum). But $\sum\sigma_i^2 = ds^2$ correctly suggests that $d\pp = \sigma_i \ee_i$, and hence $\sigma_i = h_i dq_i$ (no sum). (A more careful argument appears below.) Thus, we have \begin{equation} \sigma_i = \left|\frac{\partial\pp}{\partial q_i}\right| dq_i \qquad \hbox{(no sum)} \end{equation}
Furthermore, we have shown that \begin{equation} d\pp = dx_i \uu_i = \Partial{\pp}{q_i} dq_i = \sigma_i \ee_i \end{equation} correctly suggesting that, just as $\{dx_i\}$ and $\{\sigma_i\}$ are the dual bases to the frames $\{\uu_i\}$ and $\{\ee_i\}$, respectively, so is $\{dq_i\}$ the dual basis to $\left\{\Partial{\pp}{q_i}\right\}$.
Many differential geometers choose to work with the coordinate
basis $\left\{\Partial{\pp}{q_i}\right\}$ rather than the frame
$\{\ee_i\}$.
One reason for this choice is that
\begin{equation}
\Partial{\pp}{q_i}[f]
= df\left(\Partial{\pp}{q_i}\right)
= \Partial{f}{x_j} dx_j\left(\Partial{x_k}{q_i}\uu_k\right)
= \Partial{f}{x_j}\Partial{x_k}{q_i}\delta_{jk}
= \Partial{f}{x_j}\Partial{x_j}{q_i}
= \Partial{f}{q_i}
\end{equation}
so that coordinate basis vectors take partial derivatives of functions, with
no extra factors. A coordinate basis is therefore often written as
$\{\Partial{}{q_i}\}$.
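The chain-rule identity above can be verified on a concrete function; in the following SymPy sketch (illustrative only) the function $f=x^2y$ is an arbitrary choice:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x, y = sp.symbols('x y')

# An arbitrary scalar function of the Cartesian coordinates.
f = x**2 * y

subs = {x: r * sp.cos(th), y: r * sp.sin(th)}

# Chain rule: (d/dr)f = (df/dx)(dx/dr) + (df/dy)(dy/dr),
# with dx/dr = cos(theta) and dy/dr = sin(theta).
chain = (f.diff(x) * sp.cos(th) + f.diff(y) * sp.sin(th)).subs(subs)

# Direct partial derivative after substituting x(r,theta), y(r,theta).
direct = f.subs(subs).diff(r)

print(sp.simplify(chain - direct))  # 0
```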
Example: Polar coordinates
Since $\pp=r\cos\theta\,\xhat+r\sin\theta\,\yhat$, we have $h_1=\left|\frac{\partial\pp}{\partial r}\right|=1$ and $h_2=\left|\frac{\partial\pp}{\partial\theta}\right|=r$. Thus, $\sigma_1=1\,dr=dr$ and $\sigma_2=r\,d\theta$.
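The scale factors themselves are easy to verify symbolically; a short SymPy sketch (illustrative, not part of the text):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

p = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])

# Scale factors h_i = |dp/dq_i|, computed as sqrt(v . v).
h1 = sp.simplify(sp.sqrt(p.diff(r).dot(p.diff(r))))
h2 = sp.simplify(sp.sqrt(p.diff(th).dot(p.diff(th))))

print(h1, h2)  # 1 r
```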
This method relies on computational fluency in switching between coordinate systems, as well as special properties of the position vector. It is therefore most useful in special cases, or when just one component is needed.
Example: Polar coordinates
We have the following relationships: \begin{align} r^2 = x^2 + y^2 &\Longrightarrow r\,dr = x\,dx + y\,dy \\ \tan\theta = \frac{y}{x} &\Longrightarrow r^2 d\theta = -y\,dx + x\,dy \\ r\,\rhat &= x\,\xhat + y\,\yhat \\ r\,\that &= -y\,\xhat + x\,\yhat \end{align} Therefore, \begin{align} \rhat[r] &= dr(\rhat) = \frac{1}{r^2} r\,dr(r\,\rhat) = \frac{x^2+y^2}{r^2} = 1 \\ \that[r] &= dr(\that) = \frac{1}{r^2} r\,dr(r\,\that) = 0 \end{align} and \begin{equation} \nabla_\vv(r\,\rhat) = \nabla_\vv(x\,\xhat+y\,\yhat) = \vv \end{equation} Using the product rule \begin{equation} \nabla_\vv (f\ww) = \vv[f] \,\ww + f\,\nabla_\vv \ww \end{equation} now leads to \begin{align} r\,\nabla_\rhat \rhat &= \nabla_\rhat(r\,\rhat) - \rhat[r]\,\rhat = \rhat - \rhat = 0 \\ r\,\nabla_\that \rhat &= \nabla_\that(r\,\rhat) - \that[r]\,\rhat = \that - 0 = \that \end{align} so that, finally, \begin{align} \omega_{12}(\rhat) &= \nabla_\rhat(\rhat) \cdot \that = 0 \\ \omega_{12}(\that) &= \nabla_\that(\rhat) \cdot \that = \frac1r \end{align} and therefore $\omega_{12}=\frac1r\sigma_2$. Once we know that the dual basis is $\{\sigma_1,\sigma_2\}=\{dr,r\,d\theta\}$, we have shown that $\omega_{12}=d\theta$.
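The two differential relations used above, $r\,dr = x\,dx+y\,dy$ and $r^2\,d\theta = -y\,dx + x\,dy$, can be verified symbolically; a SymPy sketch (illustrative only, with `dr` and `dtheta` standing in for the 1-forms):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
dr, dth = sp.symbols('dr dtheta')  # formal symbols for the 1-forms

x, y = r * sp.cos(th), r * sp.sin(th)
dx = x.diff(r) * dr + x.diff(th) * dth
dy = y.diff(r) * dr + y.diff(th) * dth

# Check r dr = x dx + y dy  and  r^2 dtheta = -y dx + x dy.
check1 = sp.simplify(r * dr - (x * dx + y * dy))
check2 = sp.simplify(r**2 * dth - (-y * dx + x * dy))
print(check1, check2)  # 0 0
```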
This method uses straightforward computation, so long as one knows the basis transformation \begin{equation} \ee_i = \sum a_{ij} \uu_j \end{equation} between the given frame $\{\ee_i\}$ and the natural frame $\{\uu_i\}=\{\xhat,\yhat,\zhat\}$.
In terms of the attitude matrix $A=(a_{ij})$, the connection $\omega=(\omega_{ij})$ is given by \begin{equation} \omega = dA \,A^T \end{equation} or equivalently $\omega_{ij} = \sum a_{jk} \,da_{ik}$.
Example: Polar coordinates
From \begin{align} \rhat &= \cos\theta\,\xhat + \sin\theta\,\yhat \\ \that &= -\sin\theta\,\xhat + \cos\theta\,\yhat \end{align} we have \begin{equation} A = \begin{pmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{pmatrix} \end{equation} and therefore \begin{equation} \omega = dA\,A^T = \begin{pmatrix}-\sin\theta\,d\theta & \cos\theta\,d\theta \\ -\cos\theta\,d\theta & -\sin\theta\,d\theta\end{pmatrix} \begin{pmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{pmatrix} = \begin{pmatrix}0 & d\theta \\ -d\theta & 0\end{pmatrix} \end{equation} In particular, $\omega_{12}=d\theta$, as before.
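The matrix computation $\omega = dA\,A^T$ is also easily automated; in the following SymPy sketch (illustrative only) the symbol `dtheta` stands in for the 1-form $d\theta$:

```python
import sympy as sp

th = sp.Symbol('theta')
dth = sp.Symbol('dtheta')  # formal symbol for the 1-form dtheta

# Attitude matrix for the polar frame.
A = sp.Matrix([[sp.cos(th), sp.sin(th)],
               [-sp.sin(th), sp.cos(th)]])

# dA = (dA/dtheta) dtheta, since A depends only on theta here.
dA = A.diff(th) * dth

omega = sp.simplify(dA * A.T)
print(omega)  # Matrix([[0, dtheta], [-dtheta, 0]])
```

Note that the result is antisymmetric, as a connection matrix for an orthonormal frame must be.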
Just do the computation! (This method is essentially the same as Method II, but without matrices.)
Example: Polar coordinates
Compute directly that \begin{align} \nabla_\vv \rhat &= \nabla_\vv (\cos\theta\,\xhat+\sin\theta\,\yhat) \\ &= \vv[\cos\theta]\,\xhat + \vv[\sin\theta]\,\yhat \\ &= d\cos\theta(\vv)\,\xhat + d\sin\theta(\vv)\,\yhat \\ &= (-\sin\theta\,\xhat + \cos\theta\,\yhat) \>d\theta(\vv) \\ &= d\theta(\vv) \,\that \end{align} so that \begin{equation} \omega_{12}(\vv) = \nabla_\vv \rhat \cdot \that = d\theta(\vv) \end{equation} or, equivalently, $\omega_{12}=d\theta$ as before.
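The key step, that differentiating the Cartesian components of $\rhat$ with respect to $\theta$ yields $\that$, can be checked directly (SymPy sketch, illustrative only):

```python
import sympy as sp

th = sp.Symbol('theta')

rhat = sp.Matrix([sp.cos(th), sp.sin(th)])
thetahat = sp.Matrix([-sp.sin(th), sp.cos(th)])

# d(rhat)/dtheta = thetahat, i.e. d(rhat) = thetahat dtheta.
print(sp.simplify(rhat.diff(th) - thetahat))  # Matrix([[0], [0]])
```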
This method yields the connection 1-forms without using vectors!
The first structure equation states that \begin{equation} d\sigma_i = \sum \omega_{ij}\wedge\sigma_j \end{equation} In three dimensions, there are three equations here (why?), each of which has three components (why?). Treating the connection 1-forms as the unknowns, there are again three (why?), each with three components (why?). Thus, we have a linear system of equations with the same number of equations and variables. So if a solution exists, it must be unique.
Careful counting in arbitrary dimensions yields the same answer: The first structure equation represents a linear system with the same number of equations and variables. Thus, if we can solve this equation by any means, we have found the connection 1-forms! In other words, rather than regarding the (first) structure equation as a property of the connection, we can run the argument the other way and use the (first) structure equation to find the connection.
Example: Polar coordinates
Our starting point is the dual basis $\{dr,r\,d\theta\}$, and the first structure equation \begin{align} d(dr) &= \omega_{12} \wedge r\,d\theta \\ d(r\,d\theta) &= \omega_{21} \wedge dr = -\omega_{12} \wedge dr \end{align} It's pretty easy to guess a solution to this system of equations, but it is also straightforward to work it out systematically. Since $\omega_{12}$ is a 1-form, it can be written as $\omega_{12}=a\,dr+b\,r\,d\theta$ for some functions $a$, $b$. The above equations become \begin{align} 0 &= d(dr) = a\,dr \wedge r\,d\theta \\ dr\wedge d\theta &= d(r\,d\theta) = -b\,r\,d\theta \wedge dr \end{align} from which it follows immediately that $a=0$ and $b=\frac1r$. Thus, $\omega_{12}=d\theta$.
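Once both structure equations are expanded in the basis 2-form $dr\wedge d\theta$, only a two-by-two linear system for $a$ and $b$ remains; a SymPy sketch of that last step (illustrative only):

```python
import sympy as sp

r = sp.Symbol('r', positive=True)
a, b = sp.symbols('a b')

# Write omega_12 = a dr + b (r dtheta) and compare coefficients of dr ^ dtheta,
# the only independent 2-form in two dimensions.

# d(dr) = 0 = omega_12 ^ (r dtheta) = a r (dr ^ dtheta):
eq1 = sp.Eq(0, a * r)

# d(r dtheta) = dr ^ dtheta = -omega_12 ^ dr = b r (dr ^ dtheta):
eq2 = sp.Eq(1, b * r)

sol = sp.solve([eq1, eq2], [a, b])
print(sol)
```

Solving gives $a=0$ and $b=1/r$, so $\omega_{12}=\frac1r\,(r\,d\theta)=d\theta$, in agreement with the text.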