
Filter Design by Minimizing the L2 Equation-Error Norm

One of the simplest formulations of recursive digital filter design is based on minimizing the equation error. This method allows matching of both spectral phase and magnitude. Equation-error methods can be classified as variations of Prony's method [48]. Equation error minimization is used very often in the field of system identification [46,30,78].

The problem of fitting a digital filter to a given spectrum may be formulated as follows:

Given a continuous complex function $ H(e^{j\omega}),\,-\pi < \omega \leq \pi$, corresponding to a causal desired frequency-response, find a stable digital filter of the form

$\displaystyle \hat{H}(z) \isdef \frac{\hat{B}(z)}{\hat{A}(z)},
$

where

\begin{eqnarray*}
\hat{B}(z) &\isdef & \hat{b}_0 + \hat{b}_1 z^{-1} + \cdots + \hat{b}_{{n}_b}z^{-{{n}_b}} ,\\
\hat{A}(z) &\isdef & 1 + \hat{a}_1 z^{-1} + \cdots + \hat{a}_{{n}_a}z^{-{{n}_a}} ,
\end{eqnarray*}

with $ {{n}_b},{{n}_a}$ given, such that some norm of the error

$\displaystyle J(\hat{\theta}) \isdef \left\Vert\,H(e^{j\omega}) - \hat{H}(e^{j\omega})\,\right\Vert
$

is minimum with respect to the filter coefficients

$\displaystyle \hat{\theta}^T\isdef \left[\hat{b}_0,\hat{b}_1,\ldots\,,\hat{b}_{{n}_b},\hat{a}_1,\hat{a}_2,\ldots\,,\hat{a}_{{n}_a}\right],
$

which are constrained to lie in a subset $ \hat{\Theta}\subset\Re ^{N}$, where $ N\isdef {{n}_a}+{{n}_b}+1$. When explicitly stated, the filter coefficients may be complex, in which case $ \hat{\Theta}\subset{\bf C}^{N}$.

The approximate filter $ \hat{H}$ is typically constrained to be stable, and since positive powers of $ z$ do not appear in $ \hat{B}(z)$, stability implies causality. Consequently, the impulse response of the filter $ \hat{h}(n)$ is zero for $ n < 0$. If $ H$ were noncausal, all impulse-response components $ h(n)$ for $ n < 0$ would be approximated by zero.

Equation Error Formulation

The equation error is defined (in the frequency domain) as

$\displaystyle E_{\mbox{ee}}(e^{j\omega}) \isdef \hat{A}(e^{j\omega})H(e^{j\omega}) - \hat{B}(e^{j\omega})
$

By comparison, the more natural frequency-domain error is the so-called output error:

$\displaystyle E_{\mbox{oe}}(e^{j\omega}) \isdef H(e^{j\omega}) - \frac{\hat{B}(e^{j\omega})}{\hat{A}(e^{j\omega})}
$

The names of these errors make the most sense in the time domain. Let $ x(n)$ and $ y(n)$ denote the filter input and output, respectively, at time $ n$. Then the equation error is the error in the difference equation:

\begin{eqnarray*}
e_{\mbox{ee}}(n) &=& y(n) + \hat{a}_1 y(n-1) + \cdots + \hat{a}_{{n}_a}y(n-{{n}_a})\\
& & \mbox{} - \hat{b}_0 x(n) - \hat{b}_1 x(n-1) - \cdots - \hat{b}_{{n}_b}x(n-{{n}_b})
\end{eqnarray*}

while the output error is the difference between the ideal and approximate filter outputs:

\begin{eqnarray*}
e_{\mbox{oe}}(n) &=& y(n) - \hat{y}(n) \\
\hat{y}(n) &=& \hat{b}_0 x(n) + \hat{b}_1 x(n-1) + \cdots + \hat{b}_{{n}_b}x(n-{{n}_b})\\
& & \mbox{} - \hat{a}_1 \hat{y}(n-1) - \cdots - \hat{a}_{{n}_a}\hat{y}(n-{{n}_a})
\end{eqnarray*}

Denote the $ L2$ norm of the equation error by

$\displaystyle J_E(\hat{\theta}) \isdef \left\Vert\,\hat{A}(e^{j\omega})H(e^{j\omega}) - \hat{B}(e^{j\omega})\,\right\Vert _2,$ (I.11)

where $ \hat{\theta}^T = [\hat{b}_0,\hat{b}_1,\ldots,\hat{b}_{{n}_b}, \hat{a}_1,\ldots, \hat{a}_{{n}_a}]$ is the vector of unknown filter coefficients. Then the problem is to minimize this norm with respect to $ \hat{\theta}$. What makes the equation-error so easy to minimize is that it is linear in the parameters. In the time-domain form, it is clear that the equation error is linear in the unknowns $ \hat{a}_i,\hat{b}_i$. When the error is linear in the parameters, the sum of squared errors is a quadratic form which can be minimized using one iteration of Newton's method. In other words, minimizing the $ L2$ norm of any error which is linear in the parameters results in a set of linear equations to solve. In the case of the equation-error minimization at hand, we will obtain $ {{n}_b}+{{n}_a}+1$ linear equations in as many unknowns.
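To make the linear-least-squares structure concrete, here is a minimal Matlab sketch (the function name eqnerror_ls and its calling convention are my own, not from any library). It assumes the desired response H is sampled on a frequency grid w in radians per sample, and solves for the coefficients by stacking real and imaginary parts:

  function [b,a] = eqnerror_ls(H, w, nb, na)
    % Minimize sum_k |A(e^jwk) H(e^jwk) - B(e^jwk)|^2 over real coefficients,
    % with a0 = 1.  Setting A H - B ~= 0 gives the linear model M*theta ~= H.
    H = H(:); w = w(:);
    Eb = exp(-1j*w*(0:nb));                    % basis columns for B(e^jw)
    Ea = (H*ones(1,na)) .* exp(-1j*w*(1:na));  % basis for [A(e^jw)-1] H
    M  = [Eb, -Ea];
    theta = [real(M); imag(M)] \ [real(H); imag(H)];   % real least squares
    b = theta(1:nb+1).';
    a = [1, theta(nb+2:end).'];
  end

Minimizing $ \Vert M\hat{\theta}-H\Vert _2$ over the grid is exactly minimizing $ J_E(\hat{\theta})$ there, since $ M\hat{\theta}-H = -E_{\mbox{ee}}$.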

Note that (I.11) can be expressed as

$\displaystyle J_E(\hat{\theta}) = \left\Vert\,\left\vert\hat{A}(e^{j\omega})\right\vert\cdot\left\vert H(e^{j\omega}) - \hat{H}(e^{j\omega})\right\vert\,\right\Vert _2.
$

Thus, the equation error can be interpreted as a weighted output error in which the frequency weighting function on the unit circle is given by $ \vert\hat{A}(e^{j\omega})\vert$. The weighting function is therefore determined by the filter poles, and the error is weighted less near the poles. Since the poles of a good filter design tend toward regions of high spectral energy, or toward ``irregularities'' in the spectrum, the equation-error criterion evidently assigns less importance to the most prominent or structured spectral regions. On the other hand, far away from the roots of $ \hat{A}(z)$, good fits to both phase and magnitude can be expected. The weighting effect can be eliminated through use of the Steiglitz-McBride algorithm [45,78], which iteratively re-solves the weighted equation-error problem, using the canceling weight function $ 1/\vert\hat{A}(e^{j\omega})\vert$ from the previous iteration. When it converges (which is typical in practice), it must converge to the output-error minimizer.
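A sketch of this iteration, under the same assumptions as the eqnerror_ls sketch above (the iteration count and the eps guard are arbitrary choices of mine):

  [b,a] = eqnerror_ls(H, w, nb, na);          % iteration 0: plain equation error
  for it = 1:10                               % Steiglitz-McBride iterations
    Aw = exp(-1j*w(:)*(0:na)) * a(:);         % previous A(e^jw) on the grid
    s  = 1 ./ max(abs(Aw), eps);              % canceling weight 1/|A(e^jw)|
    Eb = exp(-1j*w(:)*(0:nb));
    Ea = (H(:)*ones(1,na)) .* exp(-1j*w(:)*(1:na));
    M  = (s*ones(1,nb+1+na)) .* [Eb, -Ea];    % row-weighted equation error
    r  = s .* H(:);
    theta = [real(M); imag(M)] \ [real(r); imag(r)];
    b = theta(1:nb+1).';  a = [1, theta(nb+2:end).'];
  end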


Error Weighting and Frequency Warping

Audio filter designs typically benefit from an error weighting function that weights frequencies according to their audibility. An oversimplified but useful weighting function is simply $ 1/\omega$, in which low frequencies are deemed generally more important than high frequencies. Audio filter designs also typically improve when using a frequency warping, such as described in [88,78] (and similar to that in §I.3.2). In principle, the effect of a frequency-warping can be achieved using a weighting function, but in practice, the numerical performance of a frequency warping is often much better.
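For example, a rough $ 1/\omega$ weighting can be passed to Matlab's invfreqz through its weighting argument (the grid size, orders, and DC clipping below are my own illustrative choices):

  N  = 512;
  w  = pi*(0:N)'/N;                % frequency grid from 0 to pi rad/sample
  wt = 1 ./ max(w, w(2));          % ~1/omega, clipped at DC to avoid Inf
  [b,a] = invfreqz(H, w, nb, na, wt);   % weighted equation-error fit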


Stability of Equation Error Designs

A problem with equation-error methods is that stability of the filter design is not guaranteed. When an unstable design is encountered, one common remedy is to reflect unstable poles inside the unit circle, leaving the magnitude response unchanged while modifying the phase of the approximation in an ad hoc manner. This requires polynomial factorization of $ \hat{A}(z)$ to find the filter poles, which is typically more work than the filter design itself.
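A sketch of this remedy in Matlab (the gain correction is included so that the overall magnitude response, not just its shape, is preserved):

  p = roots(a);                    % poles of the unstable design
  u = abs(p) > 1;                  % flag poles outside the unit circle
  b = b / prod(abs(p(u)));         % compensate the gain change below
  p(u) = 1 ./ conj(p(u));          % reflect unstable poles inside
  a = real(poly(p));               % rebuild the denominator (a0 = 1)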

A better way to address the instability problem is to repeat the filter design employing a bulk delay. This amounts to replacing $ H(e^{j\omega})$ by

$\displaystyle H_\tau(e^{j\omega}) \isdef e^{-j\omega \tau} H(e^{j\omega}),\quad\tau>0,
$

and minimizing $ \vert\vert\,\hat{A}(e^{j\omega})H_\tau(e^{j\omega}) - \hat{B}(e^{j\omega})\,\vert\vert _2$. This effectively delays the desired impulse response, i.e., $ h_\tau(n)=h(n-\tau)$. As the bulk delay is increased, the likelihood of obtaining an unstable design decreases, for reasons discussed in the next paragraph.
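In terms of the earlier eqnerror_ls sketch, the bulk delay is simply a linear-phase factor applied to the sampled desired response before fitting (tau = 5 is an arbitrary illustrative value):

  tau  = 5;                               % bulk delay in samples (my choice)
  Htau = exp(-1j*w(:)*tau) .* H(:);       % delayed desired response
  [b,a] = eqnerror_ls(Htau, w, nb, na);   % fit to the delayed target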

Unstable equation-error designs are especially likely when $ H(e^{j\omega})$ is noncausal. Since there are no constraints on where the poles of $ \hat{H}$ can be, one can expect unstable designs for desired frequency-response functions having a linear phase trend with positive slope.

In the other direction, experience has shown that best results are obtained when $ H(z)$ is minimum phase, i.e., when all the zeros of $ H(z)$ are inside the unit circle. For a given magnitude, $ \vert H(e^{j\omega})\vert$, minimum phase gives the maximum concentration of impulse-response energy near the time origin $ n = 0$. Consequently, the impulse-response tends to start large and decay immediately. For non-minimum phase $ H$, the impulse-response $ h(n)$ may be small for the first $ {{n}_b}+1$ samples, and the equation error method can yield very poor filters in these cases. To see why this is so, consider a desired impulse-response $ h(n)$ which is zero for $ n\leq{{n}_b}$, and arbitrary thereafter. Transforming $ J_E^2$ into the time domain yields

\begin{eqnarray*}
J_E^2(\hat{\theta}) &=& \left\Vert\,\hat{a}\ast h(n) - \hat{b}_n\,\right\Vert _2^2
= \sum_{n=0}^{{n}_b}\hat{b}_n^2 \;+\; \sum_{n={{n}_b}+1}^\infty
\left(\hat{a}\ast h(n)\right)^2,
\end{eqnarray*}

where ``$ \ast $'' denotes convolution, and the additive decomposition is due to the fact that $ \hat{a}\ast h(n)=0$ for $ n\leq{{n}_b}$. In this case the minimum occurs for $ \hat{B}(z)=0\,\,\Rightarrow\,\,\hat{H}(z)\equiv 0$! Clearly this is not a particularly good fit. Thus, the introduction of bulk delay to guard against unstable designs is limited by this phenomenon.

It should be emphasized that for minimum-phase $ H(e^{j\omega})$, equation-error methods are very effective. It is simple to convert a desired magnitude response into a minimum-phase frequency-response by use of cepstral techniques [22,60] (see also the appendix below), and this is highly recommended when minimizing equation error. Finally, the error weighting by $ \vert\hat{A}(e^{j\omega_k})\vert$ can usually be removed by a few iterations of the Steiglitz-McBride algorithm.
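The following is a sketch of one standard cepstral construction (not necessarily that of [22,60]); it assumes the desired magnitude is sampled on a full length-$ N$ DFT grid with $ N$ even, and the floor 1e-8 guards the logarithm:

  Hmag = max(abs(H(:)), 1e-8);     % magnitude samples on the full DFT grid
  N    = length(Hmag);
  c    = real(ifft(log(Hmag)));    % real cepstrum of the magnitude
  fold = [c(1); 2*c(2:N/2); c(N/2+1); zeros(N/2-1,1)];  % fold onto causal part
  Hmp  = exp(fft(fold));           % minimum-phase spectrum, |Hmp| ~= Hmag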


An FFT-Based Equation-Error Method

The algorithm below minimizes the equation error in the frequency-domain. As a result, it can make use of the FFT for speed. This algorithm is implemented in Matlab's invfreqz() function when no iteration-count is specified. (The iteration count gives that many iterations of the Steiglitz-McBride algorithm, thus transforming equation error to output error after a few iterations. There is also a time-domain implementation of the Steiglitz-McBride algorithm called stmcb() in the Matlab Signal Processing Toolbox, which takes the desired impulse response as input.)
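A usage sketch (the grid and orders are arbitrary choices of mine; H is assumed sampled on w):

  N  = 1024;
  w  = pi*(0:N)'/N;                % frequency grid from 0 to pi
  wt = ones(size(w));              % uniform error weighting
  [b1,a1] = invfreqz(H, w, nb, na);          % pure equation-error fit
  [b2,a2] = invfreqz(H, w, nb, na, wt, 30);  % plus 30 Steiglitz-McBride iterations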

Given a desired spectrum $ H(e^{j\omega_k})$ at equally spaced frequencies $ \omega_k = 2\pi k/N, k=0,\ldots\,,N-1$, with $ N$ a power of $ 2$, it is desired to find a rational digital filter with $ {{n}_b}$ zeros and $ {{n}_a}$ poles,

$\displaystyle \hat{H}(z) \isdef \frac{\hat{B}(z)}{\hat{A}(z)}
\isdef \frac{\sum_{k=0}^{{n}_b}\hat{b}_k z^{-k}}{\sum_{k=0}^{{n}_a}\hat{a}_k z^{-k} },$

normalized by $ \hat{a}_0 = 1$, such that

$\displaystyle J^2_E = \sum_{k=0}^{N-1} \left\vert\hat{A}(e^{j\omega_k})H(e^{j\omega_k})-\hat{B}(e^{j\omega_k})\right\vert^2
$

is minimized.

Since $ J^2_E$ is a quadratic form, the solution is readily obtained by equating the gradient to zero. An easier derivation follows from minimizing equation error variance in the time domain and making use of the orthogonality principle [36]. This may be viewed as a system identification problem where the known input signal is an impulse, and the known output is the desired impulse response. A formulation employing an arbitrary known input is valuable for introducing complex weighting across the frequency grid, and this general form is presented. A detailed derivation appears in [78, Chapter 2], and here only the final algorithm is given:

Given spectral output samples $ Y(e^{j\omega_k})$ and input samples $ U(e^{j\omega_k})$, we minimize

$\displaystyle J^2_E = \sum_{k=0}^{N-1} \left\vert\hat{A}(e^{j\omega_k})Y(e^{j\omega_k})-\hat{B}(e^{j\omega_k})U(e^{j\omega_k})\right\vert^2 .
$

If $ \vert U(e^{j\omega_k})\vert^2$ is to be used as a weighting function in the filter-design problem, then we set $ Y(e^{j\omega_k}) = H(e^{j\omega_k})U(e^{j\omega_k})$.

Let $ \underline{x}[n_1\,$:$ \,n_2]$ denote the column vector determined by $ x(n)$, for $ n=n_1,\ldots\,,n_2$ filled in from top to bottom, and let $ T(x[n_1\,$:$ \,n_2])$ denote the size $ n_2-n_1+1$ symmetric Toeplitz matrix consisting of $ \underline{x}[n_1\,$:$ \,n_2]$ in its first column. A nonsymmetric Toeplitz matrix may be specified by its first column and row, and we use the notation $ T(\underline{x}[n_1\,$:$ \,n_2],\underline{y}^T[m_1\,$:$ \,m_2])$ to denote the $ n_2-n_1+1$ by $ m_2-m_1+1$ Toeplitz matrix with left-most column $ \underline{x}[n_1\,$:$ \,n_2]$ and top row $ \underline{y}^T[m_1\,$:$ \,m_2]$. The inverse Fourier transform of $ X(e^{j\omega_k})$ is defined as

$\displaystyle x(n) = \mbox{FFT}^{-1}\left\{X(e^{j\omega_k})\right\} \isdef \frac{1}{N}\sum_{k=0}^{N-1} X(e^{j\omega_k})
e^{j\omega_k n} .
$

The scaling by $ 1/N$ is optional since it has no effect on the solution. We require three correlation functions involving $ U$ and $ Y$,

\begin{eqnarray*}
\underline{R}_{uu}(n) &\isdef & \mbox{FFT}^{-1}\left\{\left\vert U(e^{j\omega_k})\right\vert^2\right\} \\
\underline{R}_{yy}(n) &\isdef & \mbox{FFT}^{-1}\left\{\left\vert Y(e^{j\omega_k})\right\vert^2\right\} \\
\underline{R}_{yu}(n) &\isdef & \mbox{FFT}^{-1}\left\{Y(e^{j\omega_k})\overline{U(e^{j\omega_k})}\right\} \\
n & = & 0,1,\ldots\,,N-1,
\end{eqnarray*}

where the overbar denotes complex conjugation, and four corresponding Toeplitz matrices,

\begin{eqnarray*}
R_{yy} &\isdef & T(\underline{R}_{yy}[0\,\mbox{:}\,{{n}_a}-1])\\
R_{uu} &\isdef & T(\underline{R}_{uu}[0\,\mbox{:}\,{{n}_b}])\\
R_{yu} &\isdef & T(\underline{R}_{yu}[-1\,\mbox{:}\,{{n}_b}-1],\underline{R}_{yu}^T[-1\,\mbox{:}\,-{{n}_a}])\\
R_{uy} &\isdef & R_{yu}^T ,
\end{eqnarray*}

where negative indices are to be interpreted mod $ N$, e.g., $ R_{yu}(-1)=R_{yu}(N-1)$.

The solution is then

$\displaystyle \hat{\theta}^\ast = \left[\begin{array}{c} \underline{\hat{B}}^\ast \\ [2pt] \underline{\hat{A}}^\ast \end{array}\right]
= \left[\begin{array}{cc} R_{uu} & -R_{yu} \\ [2pt] R_{uy} & -R_{yy} \end{array}\right]^{-1}
\left[\begin{array}{c} \underline{R}_{yu}[0\,\mbox{:}\,{{n}_b}] \\ [2pt] \underline{R}_{yy}[1\,\mbox{:}\,{{n}_a}] \end{array}\right],
$

where

$\displaystyle \underline{\hat{B}}^\ast \isdef \left[\begin{array}{c} \hat{b}^\ast_0 \\ [2pt] \vdots \\ [2pt] \hat{b}^\ast_{{n}_b}\end{array}\right],
\qquad
\underline{\hat{A}}^\ast \isdef \left[\begin{array}{c} \hat{a}^\ast_1 \\ [2pt] \vdots \\ [2pt] \hat{a}^\ast_{{n}_a}\end{array}\right].
$
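The following Matlab transcription of this solution is my own sketch (the function name and the mod-$ N$ index helper are not from the text). For the pure filter-design problem, pass U = ones(N,1), a unit impulse in the time domain, together with Y = the desired spectrum:

  function [b,a] = eqerr_fft(Y, U, nb, na)
    % FFT-based equation-error design from spectral samples Y and U
    % (assumed conjugate symmetric, i.e., real time-domain signals).
    N   = length(Y);
    Ruu = real(ifft(abs(U(:)).^2));       % correlation functions
    Ryy = real(ifft(abs(Y(:)).^2));
    Ryu = real(ifft(Y(:) .* conj(U(:))));
    ix  = @(n) mod(n, N) + 1;             % lag n (possibly negative) mod N
    Ruu_m = toeplitz(Ruu(ix(0:nb)));                       % (nb+1) x (nb+1)
    Ryy_m = toeplitz(Ryy(ix(0:na-1)));                     % na x na
    Ryu_m = toeplitz(Ryu(ix(-1:nb-1)), Ryu(ix(-(1:na))));  % (nb+1) x na
    M   = [Ruu_m, -Ryu_m; Ryu_m.', -Ryy_m];
    rhs = [Ryu(ix(0:nb)); Ryy(ix(1:na))];
    th  = M \ rhs;
    b = th(1:nb+1).';
    a = [1, th(nb+2:end).'];
  end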


Prony's Method

There are several variations on equation-error minimization, and some confusion in terminology exists. We use the definition of Prony's method given by Markel and Gray [48]. It is equivalent to ``Shanks' method'' [9]. In this method, one first computes the denominator $ \hat{A}^\ast (z)$ by minimizing

\begin{eqnarray*}
J_S^2(\hat{\theta}) &=& \sum_{n={{n}_b}+1}^\infty\left(\hat{a}\ast h(n) - \hat{b}_n\right)^2
= \sum_{n={{n}_b}+1}^\infty\left(\hat{a}\ast h(n) \right)^2.
\end{eqnarray*}

(The equality holds because $ \hat{b}_n = 0$ for $ n>{{n}_b}$.)

This step is equivalent to minimization of ratio error (as used in linear prediction) for the all-pole part $ \hat{A}(z)$, with the first $ {{n}_b}+1$ terms of the time-domain error sum discarded (to get past the influence of the zeros on the impulse response). When $ {{n}_b}={{n}_a}-1$, it coincides with the covariance method of linear prediction [48,47]. This idea for finding the poles by ``skipping'' the influence of the zeros on the impulse-response shows up in the stochastic case under the name of modified Yule-Walker equations [11].
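A sketch of this denominator step in Matlab (the infinite sum is truncated at the available length of $ h$; the construction below is mine):

  h = h(:);  L = length(h);                 % desired impulse response
  C = toeplitz(h, [h(1), zeros(1,na)]);     % column i+1 holds h(n-i)
  C = C(nb+2:L, :);                         % keep rows n = nb+1,...,L-1
  alpha = -(C(:,2:end) \ C(:,1));           % least squares for a1,...,a_na
  a = [1; alpha].';                         % denominator A*(z), a0 = 1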

Now, Prony's method consists of next minimizing $ L2$ output error with the pre-assigned poles given by $ \hat{A}^\ast (z)$. In other words, the numerator $ \hat{B}(z)$ is found by minimizing

$\displaystyle \left\Vert\,H(e^{j\omega}) - \frac{\hat{B}(e^{j\omega})}{\hat{A}^\ast (e^{j\omega})}\,\right\Vert _2,
$

where $ \hat{A}^\ast (e^{j\omega})$ is now known. This hybrid method is not as sensitive to the time distribution of $ h(n)$ as is the pure equation-error method. In particular, the degenerate equation-error example above (in which $ \hat{H}\equiv 0$ was obtained) does not fare so badly using Prony's method.
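Since $ \hat{A}^\ast $ is fixed, this numerator fit is again linear least squares; a sketch under the earlier assumptions ($ H$ sampled on the grid w, with the denominator a from the previous step):

  Aw = exp(-1j*w(:)*(0:na)) * a(:);                % A*(e^jw) on the grid
  Eb = exp(-1j*w(:)*(0:nb)) ./ (Aw*ones(1,nb+1));  % columns e^{-jwk}/A*(e^jw)
  th = [real(Eb); imag(Eb)] \ [real(H(:)); imag(H(:))];
  b  = th.';                                       % numerator B*(z)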


The Padé-Prony Method

Another variation of Prony's method, described by Burrus and Parks [9], consists of using Padé approximation to find the numerator $ \hat{B}^\ast $ after the denominator $ \hat{A}^\ast $ has been found as before. Thus, $ \hat{B}^\ast $ is found by matching the first $ {{n}_b}+1$ samples of $ h(n)$, viz., $ \hat{b}^\ast_n = \hat{a}^\ast\ast h(n),\; n=0,\ldots\,,{{n}_b}$. This method is faster, but does not generally give results as good as those of the previous version. In particular, the degenerate example $ h(n)=0,\; n\leq {{n}_b}$, gives $ \hat{H}^\ast (z)\equiv 0$ here, as it did for pure equation error. This method has also been applied in the stochastic case [11].
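With a holding $ \hat{A}^\ast $ as before, the Padé step is a one-liner in Matlab (filter truncates the convolution $ \hat{a}^\ast\ast h$ to its first $ {{n}_b}+1$ samples):

  bh = filter(a, 1, h(1:nb+1));    % (a* conv h)(n) for n = 0,...,nb
  b  = bh(:).';                    % numerator matching h exactly there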

On the whole, when $ H(e^{j\omega})$ is causal and minimum phase (the ideal situation for just about any stable filter-design method), the variants on equation-error minimization described in this section perform very similarly. They are all quite fast, relative to algorithms which iteratively minimize output error, and the equation-error method based on the FFT above is generally fastest.

