### Least Squares Sinusoidal Parameter Estimation

There are many ways to define "optimal" in signal modeling. Perhaps the most elementary case is least squares estimation. Every estimator tries to measure one or more parameters of some underlying signal model. In the case of sinusoidal parameter estimation, the simplest model consists of a single complex sinusoidal component in additive white noise:

$$ x(n) \;=\; \mathcal{A}\, e^{j\omega_0 n} + v(n) \qquad (6.32) $$

where $\mathcal{A} = A e^{j\phi}$ is the complex amplitude of the sinusoid, and $v(n)$ is white noise (defined in §C.3). Given measurements of $x(n)$, $n = 0, 1, \ldots, N-1$, we wish to estimate the parameters of this sinusoid. In the method of least squares, we minimize the sum of squared errors between the data and our model. That is, we minimize

$$ J(\theta) \;\triangleq\; \sum_{n=0}^{N-1} \bigl| x(n) - \hat{x}(n) \bigr|^2 \qquad (6.33) $$

with respect to the parameter vector

$$ \theta \;\triangleq\; [A,\; \phi,\; \omega]^T \qquad (6.34) $$

where $\hat{x}(n)$ denotes our signal model:

$$ \hat{x}(n) \;\triangleq\; A\, e^{j(\omega n + \phi)} \qquad (6.35) $$

Note that the error signal $e(n) \triangleq x(n) - \hat{x}(n)$ is linear in $A$ but nonlinear in the parameters $\phi$ and $\omega$. More significantly, $J(\theta)$ is non-convex with respect to variations in $\omega$. Non-convexity can make an optimization based on gradient descent very difficult, while convex optimization problems can generally be solved quite efficiently [22,86].
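To see the non-convexity in $\omega$ concretely, the following sketch evaluates $J$ of Eq.(6.33) over a frequency grid for a noiseless test sinusoid, holding $A$ and $\phi$ at their true values. All parameter values and variable names here are illustrative, not from the text.

```python
import numpy as np

# Illustrative parameters (not from the text): one noiseless complex sinusoid.
N = 64
A, phi, w0 = 1.0, 0.3, 2 * np.pi * 0.13      # true amplitude, phase, frequency
n = np.arange(N)
x = A * np.exp(1j * (w0 * n + phi))          # Eq.(6.32) with v(n) = 0

def J(w):
    """Sum of squared errors, Eq.(6.33), with A and phi held at their true values."""
    xhat = A * np.exp(1j * (w * n + phi))    # signal model, Eq.(6.35)
    return np.sum(np.abs(x - xhat) ** 2)

wgrid = np.linspace(0, np.pi, 2048)
Jgrid = np.array([J(w) for w in wgrid])

# J(w) ripples with many local minima (non-convex), yet its global
# minimum sits at the true frequency w0:
w_best = wgrid[Jgrid.argmin()]
d = np.diff(Jgrid)
num_local_minima = np.sum((d[:-1] < 0) & (d[1:] > 0))
```

The many local minima are sidelobes of the Dirichlet kernel that $J(\omega)$ inherits from the finite data length; a gradient-descent search started far from $\omega_0$ can easily get trapped in one of them.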

#### Sinusoidal Amplitude Estimation

If the sinusoidal frequency $\omega$ and phase $\phi$ happen to be known, we obtain a simple linear least squares problem for the amplitude $A$. That is, the error signal

$$ e(n) \;\triangleq\; x(n) - A\, e^{j(\omega n + \phi)} \qquad (6.36) $$

becomes linear in the unknown parameter $A$. As a result, the sum of squared errors

$$ J(A) \;=\; \sum_{n=0}^{N-1} \bigl| x(n) - A\, e^{j(\omega n + \phi)} \bigr|^2 \qquad (6.37) $$

becomes a simple quadratic (parabola) over the real line $A \in \mathbb{R}$. Quadratic forms in any number of dimensions are easy to minimize. For example, the "bottom of the bowl" can be reached in one step of Newton's method. From another point of view, the optimal parameter $\hat{A}$ can be obtained as the coefficient of orthogonal projection of the data $x$ onto the space spanned by all values of $A$ in the linear model $A\, e^{j(\omega n + \phi)}$.
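The one-step Newton claim can be checked numerically. This sketch (with illustrative parameter values) takes a single Newton step on $J(A)$ of Eq.(6.37) from an arbitrary starting point, using finite-difference derivatives, and compares the result with the closed-form minimizer of Eq.(6.39):

```python
import numpy as np

# Illustrative setup: sinusoid in complex white noise, frequency/phase known.
N = 32
A_true, phi, w = 0.7, 0.5, 1.1
n = np.arange(N)
rng = np.random.default_rng(0)
x = A_true * np.exp(1j * (w * n + phi)) \
    + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def J(A):
    """Sum of squared errors, Eq.(6.37)."""
    return np.sum(np.abs(x - A * np.exp(1j * (w * n + phi))) ** 2)

def dJ(A, h=1e-6):   # central difference: exact for a quadratic (up to rounding)
    return (J(A + h) - J(A - h)) / (2 * h)

def d2J(A, h=1e-4):  # second central difference: likewise exact for a quadratic
    return (J(A + h) - 2 * J(A) + J(A - h)) / h**2

A0 = 10.0                           # arbitrary starting point, far from optimum
A1 = A0 - dJ(A0) / d2J(A0)          # ONE Newton step

# Closed-form least-squares amplitude, re{e^{-j phi} X(w)}/N:
A_opt = np.real(np.exp(-1j * phi) * np.sum(x * np.exp(-1j * w * n))) / N
```

Because $J(A)$ is exactly quadratic, the Newton step lands on the minimizer regardless of the starting point; only finite-difference rounding separates `A1` from `A_opt`.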

Yet a third way to minimize (6.37) is the method taught in elementary calculus: differentiate $J(A)$ with respect to $A$, equate it to zero, and solve for $A$. In preparation for this, it is helpful to expand (6.37) as

$$ J(A) \;=\; \sum_{n=0}^{N-1} |x(n)|^2 \;-\; 2A\, \mathrm{re}\left\{ \sum_{n=0}^{N-1} x(n)\, e^{-j(\omega n + \phi)} \right\} \;+\; N A^2. $$
Differentiating with respect to $A$ and equating to zero yields

$$ \frac{dJ}{dA} \;=\; -2\, \mathrm{re}\left\{ \sum_{n=0}^{N-1} x(n)\, e^{-j(\omega n + \phi)} \right\} + 2NA \;=\; 0. \qquad (6.38) $$

Solving this for $A$ gives the optimal least-squares amplitude estimate

$$ \hat{A} \;=\; \frac{1}{N}\, \mathrm{re}\left\{ \sum_{n=0}^{N-1} x(n)\, e^{-j(\omega n + \phi)} \right\} \;=\; \frac{1}{N}\, \mathrm{re}\left\{ e^{-j\phi}\, X(\omega) \right\} \qquad (6.39) $$

where $X(\omega) \triangleq \sum_{n=0}^{N-1} x(n)\, e^{-j\omega n}$ denotes the DTFT of the data evaluated at the known frequency $\omega$.

That is, the optimal least-squares amplitude estimate may be found by the following steps:
1. Multiply the data $x(n)$ by $e^{-j\phi}$ to zero the known phase $\phi$.
2. Take the DFT of the $N$ samples of $x$, suitably zero-padded to approximate the DTFT, and evaluate it at the known frequency $\omega$.
3. Discard any imaginary part, since it can only contain noise, by (6.39).
4. Divide by $N$ to obtain a properly normalized coefficient of projection [264] onto the sinusoid

$$ s_\omega(n) \;\triangleq\; e^{j\omega n}, \qquad n = 0, 1, \ldots, N-1. \qquad (6.40) $$
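The four steps above can be sketched directly in code. The parameter values and names below are illustrative; the DTFT at $\omega$ is computed directly rather than through a zero-padded FFT, which would approximate it.

```python
import numpy as np

# Illustrative data: sinusoid of known frequency w and phase phi in white noise.
N = 128
A_true, phi, w = 1.5, 0.8, 2 * np.pi * 0.21
n = np.arange(N)
rng = np.random.default_rng(1)
v = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # white noise
x = A_true * np.exp(1j * (w * n + phi)) + v

# Step 1: multiply by e^{-j phi} to zero the known phase.
x1 = x * np.exp(-1j * phi)
# Step 2: evaluate the DTFT at the known frequency w
# (computed directly here; a zero-padded FFT approximates this).
X_w = np.sum(x1 * np.exp(-1j * w * n))
# Step 3: discard the imaginary part (noise only, by Eq.(6.39)).
# Step 4: divide by N to normalize the projection coefficient.
A_hat = np.real(X_w) / N
```

With the phase removed in step 1, the signal component of `X_w` is purely real and equal to $A N$, so `A_hat` recovers the amplitude up to the noise leaking into the real part.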

#### Sinusoidal Amplitude and Phase Estimation

The form of the optimal estimator (6.39) immediately suggests the following generalization for the case of unknown amplitude and phase:

$$ \hat{\mathcal{A}} \;\triangleq\; \hat{A}\, e^{j\hat{\phi}} \;=\; \frac{1}{N} \sum_{n=0}^{N-1} x(n)\, e^{-j\omega n} \;=\; \frac{X(\omega)}{N} \qquad (6.41) $$

That is, $\hat{\mathcal{A}}$ is given by the complex coefficient of projection [264] of $x$ onto the complex sinusoid $s_\omega(n) = e^{j\omega n}$ at the known frequency $\omega$. This can be shown by generalizing the previous derivation, but here we will derive it using the more enlightened orthogonality principle [114].

The orthogonality principle for linear least squares estimation states that the projection error must be orthogonal to the model. That is, if $\hat{x} = \hat{\mathcal{A}}\, s_\omega$ is our optimal signal model (viewed now as an $N$-vector in $\mathbb{C}^N$), then we must have [264]

$$ \bigl\langle x - \hat{\mathcal{A}}\, s_\omega,\; s_\omega \bigr\rangle \;=\; 0. $$
Thus, the complex coefficient of projection of $x$ onto $s_\omega$ is given by

$$ \hat{\mathcal{A}} \;=\; \frac{\langle x, s_\omega \rangle}{\langle s_\omega, s_\omega \rangle} \;=\; \frac{1}{N} \sum_{n=0}^{N-1} x(n)\, e^{-j\omega n} \;=\; \frac{X(\omega)}{N} \qquad (6.42) $$

since $\langle s_\omega, s_\omega \rangle = \|s_\omega\|^2 = N$.

The optimality of $\hat{\mathcal{A}}$ in the least squares sense follows from the least-squares optimality of orthogonal projection [114,121,252]. From a geometrical point of view, referring to Fig.5.16, we say that the minimum distance from a vector $x \in \mathbb{C}^N$ to some lower-dimensional subspace $S \subset \mathbb{C}^N$, where $\dim(S) < N$ (here $\dim(S) = 1$ for one complex sinusoid), may be found by "dropping a perpendicular" from $x$ to the subspace. The point $\hat{x}$ at the foot of the perpendicular is the point within the subspace closest to $x$ in Euclidean distance.
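The projection formula (6.42) and the orthogonality principle behind it are easy to verify numerically. This sketch (illustrative values and names) forms $\hat{\mathcal{A}} = \langle x, s_\omega\rangle / \langle s_\omega, s_\omega\rangle$, confirms it equals $X(\omega)/N$, and checks that the residual is orthogonal to the model vector:

```python
import numpy as np

# Illustrative data: complex sinusoid of known frequency w in white noise.
N = 128
A_true, phi, w = 1.2, -0.6, 1.9
n = np.arange(N)
rng = np.random.default_rng(2)
x = A_true * np.exp(1j * (w * n + phi)) \
    + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

s = np.exp(1j * w * n)                      # model vector s_w, Eq.(6.40)
# <x, s>/<s, s>; np.vdot conjugates its FIRST argument, giving sum x(n) e^{-jwn} / N.
A_proj = np.vdot(s, x) / np.vdot(s, s)
A_dtft = np.sum(x * np.exp(-1j * w * n)) / N   # X(w)/N, Eq.(6.41) -- same number

# Orthogonality principle: the residual x - A_proj*s is orthogonal to s.
resid = x - A_proj * s
orth = abs(np.vdot(s, resid))
```

The magnitude and angle of `A_proj` recover the amplitude $A$ and phase $\phi$ jointly, which is exactly the generalization from the amplitude-only estimator of the previous subsection.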

#### Sinusoidal Frequency Estimation

The form of the least-squares estimator (6.41) in the known-frequency case immediately suggests the following frequency estimator for the unknown-frequency case:

$$ \hat{\omega} \;\triangleq\; \arg\max_{\omega}\, \bigl| X(\omega) \bigr| \qquad (6.43) $$

That is, the sinusoidal frequency estimate is defined as the frequency that maximizes the DTFT magnitude. Given this frequency, the least-squares sinusoidal amplitude and phase estimates are given by (6.41) evaluated at that frequency.

It can be shown [121] that (6.43) is in fact the optimal least-squares estimator for a single sinusoid in white noise. It is also the maximum likelihood estimator for a single sinusoid in Gaussian white noise, as discussed in the next section.

In summary, the least-squares parameter estimates for a single complex sinusoid in white noise are

$$ \hat{\omega} \;=\; \arg\max_{\omega}\, |X(\omega)|, \qquad \hat{\mathcal{A}} \;=\; \hat{A}\, e^{j\hat{\phi}} \;=\; \frac{X(\hat{\omega})}{N}. $$

In practice, of course, the DTFT is implemented as an interpolated FFT, as described in the previous sections (e.g., QIFFT method).
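The complete estimator, with the DTFT approximated by a zero-padded FFT, can be sketched as follows. The zero-padding factor and all parameter values are illustrative; in practice the coarse FFT peak would be refined by interpolation (e.g., the QIFFT method) rather than by brute-force padding alone.

```python
import numpy as np

# Illustrative data: complex sinusoid with UNKNOWN amplitude, phase, frequency.
N = 256
A_true, phi, f0 = 2.0, 1.1, 0.1234        # f0 in cycles/sample
n = np.arange(N)
rng = np.random.default_rng(3)
x = A_true * np.exp(1j * (2 * np.pi * f0 * n + phi)) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

Nfft = 16 * N                              # zero-padding refines the frequency grid
X = np.fft.fft(x, Nfft)                    # sampled DTFT of the zero-padded data
k_hat = np.argmax(np.abs(X))               # peak magnitude bin, Eq.(6.43)
f_hat = k_hat / Nfft                       # frequency estimate, cycles/sample
A_c = X[k_hat] / N                         # complex amplitude X(w_hat)/N, Eq.(6.41)

A_hat, phi_hat = abs(A_c), np.angle(A_c)   # amplitude and phase estimates
```

Note the two distinct lengths: the FFT size `Nfft` sets only the frequency-grid density, while the normalization in Eq.(6.41) uses the data length `N`.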
