
Gram-Schmidt Orthogonalization

Recall from the end of §5.10 above that an orthonormal set of vectors is a set of unit-length vectors that are mutually orthogonal. In other words, an orthonormal vector set is just an orthogonal vector set in which each vector $ \underline{s}_i$ has been normalized to unit length $ \underline{s}_i/\Vert\,\underline{s}_i\,\Vert$.
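
As a quick numerical illustration (a minimal Matlab/Octave sketch, not part of the original text), a matrix $ S$ whose columns form an orthonormal set satisfies $ S^\ast S = I$, where $ S^\ast$ denotes the conjugate transpose (written S' in Matlab/Octave):

    % Two unit-length, mutually orthogonal columns:
    S = [1 1; 1 -1] / sqrt(2);
    disp(S' * S);   % prints the 2x2 identity matrix, confirming orthonormality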


Theorem: Given a set of $ N$ linearly independent vectors $ \underline{s}_0,\ldots,\underline{s}_{N-1}$ from $ {\bf C}^N$, we can construct an orthonormal set $ \underline{\tilde{s}}_0,\ldots,\underline{\tilde{s}}_{N-1}$ whose vectors are linear combinations of the original set and which spans the same space.


Proof: We prove the theorem by constructing the desired orthonormal set $ \{\underline{\tilde{s}}_k\}$ sequentially from the original set $ \{\underline{s}_k\}$. This procedure is known as Gram-Schmidt orthogonalization.

First, note that $ \underline{s}_k\ne \underline{0}$ for all $ k$, since $ \underline{0}$ is linearly dependent on every vector. Therefore, $ \Vert\,\underline{s}_k\,\Vert \ne 0$.

  • Set $ \underline{\tilde{s}}_0 \isdef \frac{\underline{s}_0}{\left\Vert\,\underline{s}_0\,\right\Vert}$.
  • Define $ \underline{x}_1$ as $ \underline{s}_1$ minus the projection of $ \underline{s}_1$ onto $ \underline{\tilde{s}}_0$:

    $\displaystyle \underline{x}_1 \;\isdef\; \underline{s}_1 - {\bf P}_{\underline{\tilde{s}}_0}(\underline{s}_1) = \underline{s}_1 - \left<\underline{s}_1,\underline{\tilde{s}}_0\right>\underline{\tilde{s}}_0$

    The vector $ \underline{x}_1$ is orthogonal to $ \underline{\tilde{s}}_0$ by construction. (We subtracted out the part of $ \underline{s}_1$ that wasn't orthogonal to $ \underline{\tilde{s}}_0$.) Also, since $ \underline{s}_1$ and $ \underline{s}_0$ are linearly independent, $ \underline{x}_1$ cannot be the zero vector (otherwise $ \underline{s}_1$ would be a scalar multiple of $ \underline{\tilde{s}}_0$, and hence of $ \underline{s}_0$), so $ \Vert\,\underline{x}_1\,\Vert \ne 0$.

  • Set $ \underline{\tilde{s}}_1 \isdef \frac{\underline{x}_1}{\left\Vert\,\underline{x}_1\,\right\Vert}$ (i.e., normalize the result of the preceding step).
  • Define $ \underline{x}_2$ as $ \underline{s}_2$ minus the projection of $ \underline{s}_2$ onto $ \underline{\tilde{s}}_0$ and $ \underline{\tilde{s}}_1$:

    $\displaystyle \underline{x}_2 \;\isdef\; \underline{s}_2 - {\bf P}_{\underline{\tilde{s}}_0}(\underline{s}_2) - {\bf P}_{\underline{\tilde{s}}_1}(\underline{s}_2) = \underline{s}_2 - \left<\underline{s}_2,\underline{\tilde{s}}_0\right>\underline{\tilde{s}}_0 - \left<\underline{s}_2,\underline{\tilde{s}}_1\right>\underline{\tilde{s}}_1$

  • Normalize: $ \underline{\tilde{s}}_2 \isdef \frac{\underline{x}_2}{\left\Vert\,\underline{x}_2\,\right\Vert}$.
  • Continue this process until $ \underline{\tilde{s}}_{N-1}$ has been defined. (A Matlab/Octave sketch of the complete procedure appears below.)
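
The following Matlab/Octave sketch transcribes the steps above into code. It is illustrative only: the function name gram_schmidt and the convention that the input vectors are the columns of a matrix S are choices made here for concreteness. The projection coefficient $ \left<\underline{s}_k,\underline{\tilde{s}}_j\right>$ is computed as Q(:,j)'*S(:,k), matching the conjugate-in-the-second-argument inner-product convention used in this chapter.

    function Q = gram_schmidt(S)
      % Orthonormalize the (linearly independent) columns of S.
      % Returns Q, whose columns are orthonormal and span the same
      % space as the columns of S.
      [N, M] = size(S);
      Q = zeros(N, M);
      for k = 1:M
        x = S(:,k);
        for j = 1:k-1
          % Subtract the projection of s_k onto each accepted basis vector:
          x = x - (Q(:,j)' * S(:,k)) * Q(:,j);
        end
        Q(:,k) = x / norm(x);  % normalize to unit length
      end
    end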

The Gram-Schmidt orthogonalization procedure will construct an orthonormal basis from any set of $ N$ linearly independent vectors. Obviously, by skipping the normalization step, we could also form simply an orthogonal basis. The key ingredient of this procedure is that each new basis vector is obtained by subtracting out the projection of the next linearly independent vector onto the vectors accepted so far into the set. We may say that each new linearly independent vector $ \underline{s}_k$ is projected onto the subspace spanned by the vectors $ \{\underline{\tilde{s}}_0,\ldots,\underline{\tilde{s}}_{k-1}\}$, and any nonzero projection in that subspace is subtracted out of $ \underline{s}_k$ to make the new vector orthogonal to the entire subspace. In other words, we retain only that portion of each new vector $ \underline{s}_k$ which ``points along'' a new dimension. The first direction is arbitrary and is determined by whatever vector we choose first ($ \underline{s}_0$ here). The second direction is forced to be orthogonal to the first. The third is forced to be orthogonal to the first two (and thus to the 2D subspace spanned by them), and so on.
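
Continuing the illustrative sketch above, both claims of the theorem (orthonormality and equal span) can be checked numerically:

    % Example usage (assumes gram_schmidt.m from the sketch above):
    S = [1 1 0; 1 0 1; 0 1 1];  % columns s_0, s_1, s_2 are linearly independent
    Q = gram_schmidt(S);
    disp(Q' * Q);               % identity matrix => columns of Q are orthonormal
    disp(norm(S - Q*(Q'*S)));   % ~0 => columns of Q span the same space as S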

This chapter can be considered an introduction to some important concepts of linear algebra. The student is invited to pursue further reading in any textbook on linear algebra, such as [47].

Matlab/Octave examples related to this chapter appear in Appendix I.

