
Sinusoidal Amplitude Estimation

If the sinusoidal frequency $ \omega_0$ and phase $ \phi$ happen to be known, we obtain a simple linear least squares problem for the amplitude $ A$ . That is, the error signal

$\displaystyle x(n)-\hat{{\cal A}}e^{j\hat{\omega}_0n} = x(n)-{\hat A}e^{j(\omega_0 n+\phi)}$ (6.36)

becomes linear in the unknown parameter $ {\hat A}$ (here $ \hat{{\cal A}}\isdef {\hat A}e^{j\phi}$ and $ \hat{\omega}_0=\omega_0$ , since the frequency and phase are known). As a result, the sum of squared errors

$\displaystyle J({\hat A}) \isdef \sum_{n=0}^{N-1}\left\vert x(n)-{\hat A}e^{j(\omega_0 n+\phi)}\right\vert^2 \protect$ (6.37)

becomes a simple quadratic (parabola) over the real line. Quadratic forms in any number of dimensions are easy to minimize. For example, the ``bottom of the bowl'' can be reached in one step of Newton's method. From another point of view, the optimal parameter $ {\hat A}$ can be obtained as the coefficient of orthogonal projection of the data $ x(n)$ onto the space spanned by all values of $ {\hat A}$ in the linear model $ {\hat A}e^{j(\omega_0 n+\phi)}$ .
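
To make the projection view concrete (a brief unpacking, assuming the usual unweighted inner product $ \langle x,s\rangle \isdef \sum_{n=0}^{N-1}x(n)\overline{s(n)}$ for length-$ N$ complex signals), let $ s(n)\isdef e^{j(\omega_0 n+\phi)}$ , so that $ \left\Vert\,s\,\right\Vert _2^2=N$ . The complex coefficient of projection of $ x$ onto $ s$ is then

$\displaystyle \frac{\langle x,s\rangle}{\left\Vert\,s\,\right\Vert _2^2} = \frac{1}{N}\sum_{n=0}^{N-1} x(n)\,e^{-j(\omega_0 n+\phi)},$

and since $ {\hat A}$ is constrained to be real, keeping only the real part of this coefficient yields the same estimate derived in (6.39) below.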

Yet a third way to minimize (6.37) is the method taught in elementary calculus: differentiate $ J({\hat A})$ with respect to $ {\hat A}$ , equate it to zero, and solve for $ {\hat A}$ . In preparation for this, it is helpful to write (6.37) as

\begin{eqnarray*}
J({\hat A}) &\isdef & \sum_{n=0}^{N-1}
\left[x(n)-{\hat A}e^{j(\omega_0 n+\phi)}\right]
\left[\overline{x(n)}-\overline{{\hat A}} e^{-j(\omega_0 n+\phi)}\right]\\
&=&
\sum_{n=0}^{N-1}
\left[
\left\vert x(n)\right\vert^2
-
x(n)\overline{{\hat A}} e^{-j(\omega_0 n+\phi)}
-
\overline{x(n)}{\hat A}e^{j(\omega_0 n+\phi)}
+
{\hat A}^2
\right]
\\
&=& \left\Vert\,x\,\right\Vert _2^2 - 2\mbox{re}\left\{\sum_{n=0}^{N-1} x(n)\overline{{\hat A}}
e^{-j(\omega_0 n+\phi)}\right\}
+ N {\hat A}^2.
\end{eqnarray*}

Differentiating with respect to $ {\hat A}$ and equating to zero yields

$\displaystyle 0 = \frac{d J({\hat A})}{d{\hat A}} = - 2\,\mbox{re}\left\{\sum_{n=0}^{N-1} x(n) e^{-j(\omega_0 n+\phi)}\right\} + 2N{\hat A}.$ (6.38)

Solving this for $ {\hat A}$ gives the optimal least-squares amplitude estimate

$\displaystyle {\hat A}= \frac{1}{N}\,\mbox{re}\left\{\sum_{n=0}^{N-1} x(n) e^{-j(\omega_0 n+\phi)}\right\} = \frac{1}{N}\,\mbox{re}\left\{\hbox{\sc DTFT}_{\omega_0 }\left[e^{-j\phi} x\right]\right\}. \protect$ (6.39)

That is, the optimal least-squares amplitude estimate may be found by the following steps (illustrated by the numerical sketch after the list):
  1. Multiply the data $ x(n)$ by $ e^{-j\phi}$ to zero the known phase $ \phi$ .
  2. Take the DFT of the $ N$ samples of $ x$ , suitably zero padded to approximate the DTFT, and evaluate it at the known frequency $ \omega_0$ .
  3. Discard any imaginary part since it can only contain noise, by (6.39).
  4. Divide by $ N$ to obtain a properly normalized coefficient of projection [264] onto the sinusoid

    $\displaystyle s_{\omega_0 }(n)\isdef e^{j\omega_0 n}.$ (6.40)
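
As a quick numerical illustration of steps 1-4, here is a minimal Matlab/Octave sketch; the signal parameters (A, w0, phi, N) and the noise level below are arbitrary test values, not taken from the text:

  % Numerical check of the least-squares amplitude estimate (6.39).
  N   = 64;               % number of samples
  n   = (0:N-1)';         % time indices
  A   = 1.7;              % true amplitude (to be estimated)
  w0  = 2*pi*0.13;        % known frequency, rad/sample
  phi = 0.4;              % known phase, rad
  x   = A*exp(1j*(w0*n + phi)) + 0.1*(randn(N,1) + 1j*randn(N,1));

  % Steps 1-2: remove the known phase and evaluate the DTFT at w0
  % (the explicit sum plays the role of the zero-padded DFT bin).
  X0 = sum(x .* exp(-1j*(w0*n + phi)));
  % Steps 3-4: discard the imaginary part and normalize by N.
  Ahat = real(X0) / N;
  disp([A, Ahat])         % true vs. estimated amplitude

With these settings the printed estimate typically matches the true amplitude to within roughly one percent; lowering the noise level tightens the agreement further.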

