
Equation Error Formulation

The equation error is defined (in the frequency domain) as

$\displaystyle E_{\mbox{ee}}(e^{j\omega}) \isdef \hat{A}(e^{j\omega})H(e^{j\omega}) - \hat{B}(e^{j\omega})
$

By comparison, the more natural frequency-domain error is the so-called output error:

$\displaystyle E_{\mbox{oe}}(e^{j\omega}) \isdef H(e^{j\omega}) - \frac{\hat{B}(e^{j\omega})}{\hat{A}(e^{j\omega})}
$

The names of these errors make the most sense in the time domain. Let $ x(n)$ and $ y(n)$ denote the filter input and output, respectively, at time $ n$. Then the equation error is the error in the difference equation:

\begin{eqnarray*}
e_{\mbox{ee}}(n) &=& y(n) + \hat{a}_1 y(n-1) + \cdots + \hat{a}_{{n}_a} y(n-{{n}_a})\\
& & \mbox{} - \hat{b}_0 x(n) - \hat{b}_1 x(n-1) - \cdots - \hat{b}_{{n}_b}x(n-{{n}_b})
\end{eqnarray*}

while the output error is the difference between the ideal and approximate filter outputs:

\begin{eqnarray*}
e_{\mbox{oe}}(n) &=& y(n) - \hat{y}(n) \\
\hat{y}(n) &=& \hat{b}_0 x(n) + \hat{b}_1 x(n-1) + \cdots + \hat{b}_{{n}_b} x(n-{{n}_b})\\
& & \mbox{} - \hat{a}_1 \hat{y}(n-1) - \cdots - \hat{a}_{{{n}_a}} \hat{y}(n-{{n}_a})
\end{eqnarray*}
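The two time-domain definitions can be computed directly with standard filtering routines. The following is a minimal sketch (the perturbed estimates $\hat{b},\hat{a}$ are hypothetical values chosen for illustration): the equation error runs $y$ and $x$ through the FIR filters $\hat{A}(z)$ and $\hat{B}(z)$, while the output error compares $y$ with the output of the full IIR filter $\hat{B}(z)/\hat{A}(z)$.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(256)          # filter input
b, a = [1.0, 0.4], [1.0, -0.5]        # "true" filter (example values)
y = lfilter(b, a, x)                  # ideal filter output

b_hat, a_hat = [1.0, 0.35], [1.0, -0.45]  # slightly wrong estimate (hypothetical)

# Equation error: y through A_hat(z) minus x through B_hat(z), both FIR
e_ee = lfilter(a_hat, [1.0], y) - lfilter(b_hat, [1.0], x)

# Output error: ideal output minus the estimated filter's output
e_oe = y - lfilter(b_hat, a_hat, x)
```

Note that with exact coefficients ($\hat{b}=b$, $\hat{a}=a$ and zero initial conditions) both errors vanish identically, since $\hat{A}(z)Y(z) = \hat{B}(z)X(z)$.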

Denote the $ L2$ norm of the equation error by

$\displaystyle J_E(\hat{\theta}) \isdef \left\Vert\,\hat{A}(e^{j\omega})H(e^{j\omega}) - \hat{B}(e^{j\omega})\,\right\Vert _2,$ (I.11)

where $ \hat{\theta}^T = [\hat{b}_0,\hat{b}_1,\ldots,\hat{b}_{{n}_b}, \hat{a}_1,\ldots, \hat{a}_{{n}_a}]$ is the vector of unknown filter coefficients. The problem is then to minimize this norm with respect to $ \hat{\theta}$. What makes the equation error so easy to minimize is that it is linear in the parameters: in the time-domain form, it is clear that the equation error is linear in the unknowns $ \hat{a}_i,\hat{b}_i$. When the error is linear in the parameters, the sum of squared errors is a quadratic form, which can be minimized in a single step (equivalently, one iteration of Newton's method). In other words, minimizing the $ L2$ norm of any error that is linear in the parameters leads to a set of linear equations to solve. In the case of equation-error minimization at hand, we obtain $ {{n}_b}+{{n}_a}+1$ linear equations in as many unknowns.
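The linear least-squares solve can be sketched as follows. This is a minimal numpy illustration (the function name `eqerror_fit` and the choice of frequency grid are my own, not from the text): at each grid frequency $\omega_k$, setting $E_{\mbox{ee}}=0$ gives $\hat{B}(e^{j\omega_k}) - [\hat{A}(e^{j\omega_k})-1]H_k = H_k$, which is linear in $\hat{\theta}$, so stacking real and imaginary parts yields an overdetermined real system.

```python
import numpy as np

def eqerror_fit(H, w, nb, na):
    """Equation-error fit: minimize ||A_hat*H - B_hat||_2 over a
    frequency grid w (rad/sample). Linear in the coefficients, so
    one least-squares solve suffices."""
    # Numerator columns: e^{-j i w}, i = 0..nb
    Bcols = np.exp(-1j * np.outer(w, np.arange(nb + 1)))
    # Denominator columns: -H * e^{-j i w}, i = 1..na
    Acols = -H[:, None] * np.exp(-1j * np.outer(w, np.arange(1, na + 1)))
    M = np.hstack([Bcols, Acols])
    # Stack real and imaginary parts so the solve is real-valued
    Mr = np.vstack([M.real, M.imag])
    rhs = np.concatenate([H.real, H.imag])
    theta, *_ = np.linalg.lstsq(Mr, rhs, rcond=None)
    b = theta[:nb + 1]
    a = np.concatenate([[1.0], theta[nb + 1:]])  # a_0 = 1 by convention
    return b, a
```

When the measured response actually comes from a filter of the assumed orders, the equation error is exactly zero at the minimum and the coefficients are recovered exactly (up to roundoff).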

Note that (I.11) can be expressed as

$\displaystyle J_E(\hat{\theta}) = \left\Vert\,\left\vert\hat{A}(e^{j\omega})\right\vert\cdot\left\vert H(e^{j\omega}) - \hat{H}(e^{j\omega})\right\vert\,\right\Vert _2,
$

where $ \hat{H}(e^{j\omega}) \isdef \hat{B}(e^{j\omega})/\hat{A}(e^{j\omega})$ denotes the frequency response of the approximating filter.

Thus, the equation error can be interpreted as a weighted output error in which the frequency weighting function on the unit circle is given by $ \vert\hat{A}(e^{j\omega})\vert$. Since this weighting is determined by the filter poles, the error is weighted less near the poles, where $ \vert\hat{A}(e^{j\omega})\vert$ is small. Because the poles of a good filter design tend toward regions of high spectral energy, or toward ``irregularities'' in the spectrum, the equation-error criterion assigns less importance to the most prominent or structured spectral regions. On the other hand, far away from the roots of $ \hat{A}(z)$, good fits to both magnitude and phase can be expected. The weighting effect can be eliminated by means of the Steiglitz-McBride algorithm [45,78], which iteratively solves the weighted equation-error problem using the canceling weight function $ 1/\vert\hat{A}(e^{j\omega})\vert$ from the previous iteration. When it converges (which is typical in practice), it must converge to a minimizer of the output error.
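The iteration just described can be sketched on a frequency grid as follows. This is only an illustrative numpy sketch of the reweighting idea, not the classical time-domain Steiglitz-McBride prefiltering formulation; the function name `stmcb_freq` and the fixed iteration count are my own choices.

```python
import numpy as np

def stmcb_freq(H, w, nb, na, n_iter=20):
    """Steiglitz-McBride-style iteration on a frequency grid: each
    pass solves the equation error weighted by 1/|A_hat(e^{jw})|
    from the previous pass, cancelling the implicit |A_hat| weighting
    so that a fixed point minimizes the output error."""
    E = np.exp(-1j * np.outer(w, np.arange(max(nb, na) + 1)))
    W = np.ones(len(w))                 # first pass: plain equation error
    for _ in range(n_iter):
        Bcols = E[:, :nb + 1]
        Acols = -H[:, None] * E[:, 1:na + 1]
        M = W[:, None] * np.hstack([Bcols, Acols])
        rhs = W * H
        Mr = np.vstack([M.real, M.imag])
        r = np.concatenate([rhs.real, rhs.imag])
        theta, *_ = np.linalg.lstsq(Mr, r, rcond=None)
        a = np.concatenate([[1.0], theta[nb + 1:]])
        A_hat = E[:, :na + 1] @ a       # A_hat(e^{jw}) on the grid
        W = 1.0 / np.abs(A_hat)        # canceling weight for the next pass
    b = theta[:nb + 1]
    return b, a
```

When the target response is exactly realizable at the chosen orders, the very first (unweighted) pass already finds the exact solution and the subsequent reweighted passes leave it unchanged.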

