
Digital Filter Design Overview

This section (adapted from [428]) summarizes some of the more commonly used methods for digital filter design aimed at matching a nonparametric frequency response, such as is typically obtained from input/output measurements. This problem should be distinguished from more classical problems with their own specialized methods, such as designing lowpass, highpass, and bandpass filters [343,362], or peak/shelf equalizers [559,449], and other utility filters designed from a priori mathematical specifications.

The problem of fitting a digital filter to a prescribed frequency response may be formulated as follows. To simplify, we set $ T=1$.

Given a continuous complex function $ H(e^{j\omega}),\,-\pi < \omega \le \pi$, corresponding to a causal desired frequency response, find a stable digital filter of the form

$\displaystyle {\hat H}(z) \isdefs \frac{{\hat B}(z)}{ {\hat A}(z)},$

where
$\displaystyle {\hat B}(z) \isdefs {\hat b}_0 + {\hat b}_1 z^{-1} + \cdots + {\hat b}_{\hat{N}_b}z^{-{\hat{N}_b}}$ (9.15)
$\displaystyle {\hat A}(z) \isdefs 1 + {\hat a}_1 z^{-1} + \cdots + {\hat a}_{\hat{N}_a}z^{-{\hat{N}_a}} ,$ (9.16)

with $ {\hat{N}_b},{\hat{N}_a}$ given, such that some norm of the error

$\displaystyle J(\hat{\theta}) \isdefs \left\Vert\,H(e^{j\omega}) - {\hat H}(e^{j\omega})\,\right\Vert$ (9.17)

is minimized with respect to the filter coefficients

$\displaystyle \hat{\theta}^T\isdefs [{\hat b}_0,{\hat b}_1,\ldots\,,{\hat b}_{\hat{N}_b},{\hat a}_1,{\hat a}_2,\ldots\,,{\hat a}_{\hat{N}_a}].$

The filter coefficients are constrained to lie in some subset $ \hat{\Theta}\subset\Re ^{{\hat N}}$, where $ {\hat N}\isdef {\hat{N}_a}+{\hat{N}_b}+1$. The filter coefficients may also be complex, in which case $ \hat{\Theta}\subset{\bf C}^{{\hat N}}$.

The approximate filter $ {\hat H}$ is typically constrained to be stable, and since $ {\hat B}(z)$ is causal (no positive powers of $ z$), stability implies causality. Consequently, the impulse response of the model $ {\hat h}(n)$ is zero for $ n<0$.

The filter-design problem is then to find a (strictly) stable $ {\hat{N}_a}$-pole, $ {\hat{N}_b}$-zero digital filter which minimizes some norm of the frequency-response error. This is fundamentally a problem in rational approximation of a complex function of a real (frequency) variable, with constraints on the poles.
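
As a concrete illustration, the error norm of Eq.$\,$(9.17) can be evaluated numerically on a discrete frequency grid. The following matlab sketch is only illustrative: the Butterworth filter standing in for the measured response, the candidate coefficients, and the grid size are arbitrary choices, and the RMS normalization is just one reasonable discretization of the $ L2$ norm.

    w  = linspace(0, pi, 512);           % frequency grid (rad/sample)
    [bt, at] = butter(3, 0.4);           % stand-in for a measured system
    H  = freqz(bt, at, w);               % desired frequency response H(e^jw) on the grid
    bh = [0.2 0.3 0.2];                  % candidate numerator coefficients (part of theta-hat)
    ah = [1 -0.6 0.2];                   % candidate denominator coefficients (stable)
    Hh = freqz(bh, ah, w);               % model frequency response
    J  = norm(H - Hh)/sqrt(length(w));   % discretized L2 frequency-response error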

While the filter-design problem has been formulated quite naturally, it is difficult to solve in practice. The strict-stability assumption yields a compact space of filter coefficients $ \hat{\Theta}$, leading to the conclusion that a best approximation $ \hat{H}^\ast $ exists over this domain. Unfortunately, the error norm $ J(\hat{\theta})$ is typically not a convex function of the filter coefficients on $ \hat{\Theta}$. This means that algorithms based on gradient descent may fail to find an optimum filter because they terminate prematurely at a suboptimal local minimum of $ J(\hat{\theta})$.

Fortunately, there is at least one norm whose global minimization may be accomplished in a straightforward fashion, without need for initial guesses or ad hoc modifications of the complex (phase-sensitive) IIR filter-design problem: the Hankel norm [155,428,177,36]. Hankel-norm methods for digital filter design deliver a spontaneously stable filter of any desired order without imposing coefficient constraints in the algorithm.

An alternative to Hankel-norm approximation is to reformulate the problem, replacing Eq.$\,$(9.17) with a modified error criterion so that the resulting problem can be solved by linear least-squares or convex optimization techniques. Examples include

  • Pseudo-norm minimization: (Pseudo-norms can be zero for nonzero functions.) For example, Padé approximation falls in this category. In Padé approximation, the first $ {\hat{N}_a}+{\hat{N}_b}+1$ samples of the impulse response $ h(n)$ of $ H$ are matched exactly, and the error in the remaining impulse-response samples is ignored. (A small matlab sketch appears after this list.)

  • Ratio Error: Minimize $ \vert\vert\,H(e^{j\omega})/{\hat H}(e^{j\omega})\,\vert\vert $ subject to $ {\hat B}(z)=1$. Minimizing the $ L2$ norm of the ratio error yields the class of methods known as linear prediction techniques [20,296,297]. Since the norms used here depend only on the magnitude of their argument at each frequency, we have $ \vert\vert\,e^{j\theta(\omega)}E(e^{j\omega})\,\vert\vert = \vert\vert\,E(e^{j\omega})\,\vert\vert $, and it follows that $ \vert\vert\,H/{\hat H}\,\vert\vert = \vert\vert\,\vert H\vert/\vert{\hat H}\vert\,\vert\vert $; ratio-error methods therefore ignore the phase of the approximation. It is also evident that the ratio error is reduced by making $ \vert{\hat H}(e^{j\omega})\vert$ larger than $ \vert H(e^{j\omega})\vert$. For this reason, ratio-error methods are considered most appropriate for modeling the spectral envelope of $ \vert H(e^{j\omega})\vert$. These methods are fast and exceedingly robust in practice, which explains in part why they are used almost exclusively in data-intensive applications such as speech modeling and other spectral-envelope estimation tasks. In applications such as adaptive control or forecasting, the fact that the linear prediction error is minimized can itself justify their choice. (An all-pole fitting sketch appears after this list.)

  • Equation error: Minimize

    $\displaystyle \left\Vert\,{\hat A}(e^{j\omega})H(e^{j\omega})-{\hat B}(e^{j\omega})\,\right\Vert
    \;=\; \left\Vert\,{\hat A}(e^{j\omega})\left[ H(e^{j\omega})-{\hat H}(e^{j\omega})\right]\,\right\Vert.$

    When the $ L2$ norm of the equation error is minimized, the problem reduces to solving a set of $ {\hat N}={\hat{N}_a}+{\hat{N}_b}+1$ linear equations in the filter coefficients.

    The above expression makes it clear that equation error can be viewed as a frequency-response error weighted by $ \vert{\hat A}(e^{j\omega})\vert$. Thus, relatively large errors can be expected where the poles of the optimum approximation (the roots of $ {\hat A}(z)$) approach the unit circle $ \vert z\vert=1$. While this may make the frequency-domain formulation seem ill-posed, in the time domain the linear prediction error is minimized in the $ L2$ sense, and in certain applications this is ideal. (Equation-error methods thus provide a natural extension of ratio-error methods to include zeros.) Using so-called Steiglitz-McBride iterations [287,449,288], the equation-error solution iteratively approaches the norm-minimizing solution of Eq.$\,$(9.17) for the $ L2$ norm.

    Examples of minimizing equation error using the matlab function invfreqz are given in §8.6.3 and §8.6.4 below (a minimal invfreqz sketch also appears after this list). See [449, Appendix I] (based on [428, pp. 48-50]) for a discussion of equation-error IIR filter design and a derivation of a fast equation-error method based on the Fast Fourier Transform (FFT) (used in invfreqz).

  • Conversion to real-valued approximation: For example, power spectrum matching, i.e., minimization of $ \vert\vert\,\vert H(e^{j\omega})\vert^2-\vert{\hat H}(e^{j\omega})\vert^2\,\vert\vert $, is possible using the Chebyshev or $ L^\infty$ norm [428]. Similarly, linear-phase filter design can be carried out with some guarantees, since again the problem reduces to real-valued approximation on the unit circle. The essence of these methods is that the phase response is eliminated from the error measure, as in the norm of the ratio error, in order to convert a complex approximation problem into a real one. Real rational approximation of a continuous curve appears to be solved in principle only under the $ L^\infty$ norm [373,374].

  • Decoupling poles and zeros: An effective example of this approach is Kopec's method [428], which consists of using ratio error to find the poles, computing the error spectrum $ E=H/{\hat H}$, inverting it, and fitting poles again (to $ 1/E(e^{j\omega})$). There is a wide variety of methods which fit the poles first and the zeros second. None of these methods, however, produces an optimal filter in any normal sense.
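
The Padé approximation mentioned above can be carried out with a single small linear solve. The matlab sketch below is only illustrative: the reference filter supplying $ h(n)$ and the orders $ {\hat{N}_b}={\hat{N}_a}=2$ are arbitrary, and no care is taken to guard against a singular coefficient matrix.

    nb = 2;  na = 2;                        % example orders
    [bt, at] = butter(2, 0.25);             % stand-in system supplying h(n)
    h  = impz(bt, at, nb+na+1);             % first nb+na+1 impulse-response samples
    hp = [zeros(na,1); h];                  % pad so "negative-time" samples read as zero
    idx = @(m) m + na + 1;                  % hp(idx(m)) is h(m) for m >= -na
    % denominator: h(n) + a1*h(n-1) + ... + a_na*h(n-na) = 0 for n = nb+1,...,nb+na
    Hm = toeplitz(hp(idx(nb):idx(nb+na-1)), hp(idx(nb):-1:idx(nb-na+1)));
    a  = [1; -(Hm \ hp(idx(nb+1):idx(nb+na)))];
    % numerator: b(n) = h(n) + a1*h(n-1) + ... + a_na*h(n-na) for n = 0,...,nb
    tmp = conv(h(1:nb+1), a);
    b   = tmp(1:nb+1);
    % impz(b, a, nb+na+1) now reproduces the first nb+na+1 samples of h (up to round-off)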
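
The ratio-error (linear-prediction) approach can be sketched as an all-pole fit to a desired magnitude response using the autocorrelation method. Again the sketch is illustrative only: the reference filter, the grid size, and the model order are arbitrary, and the gain normalization is approximate.

    N  = 1024;                                % FFT grid size (whole unit circle)
    [bt, at] = butter(4, 0.25);               % stand-in system defining |H|
    Hf = freqz(bt, at, N, 'whole');           % desired response around the whole circle
    S  = abs(Hf).^2;                          % desired power spectrum |H|^2
    r  = real(ifft(S));                       % autocorrelation sequence (Wiener-Khinchin)
    np = 6;                                   % number of poles (example order)
    [ah, e] = levinson(r(1:np+1), np);        % Levinson-Durbin recursion gives A-hat(z)
    g  = sqrt(e);                             % gain so that |g/A-hat| tracks |H| (approximately)
    % g/A-hat(z) models the spectral envelope of |H|; the phase of H is ignored, as noted above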
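
Finally, the equation-error approach can be exercised directly with the matlab function invfreqz (Signal Processing Toolbox; a compatible version exists in the Octave signal package). The reference filter and orders below are, again, arbitrary stand-ins for measured data.

    w  = linspace(0, pi, 256).';            % frequency grid (rad/sample)
    [bt, at] = butter(4, 0.3);              % stand-in for a measured system
    H  = freqz(bt, at, w);                  % "measured" complex frequency response
    [bh, ah] = invfreqz(H, w, 4, 4);        % equation-error fit: numerator and denominator order 4
    % Supplying a weight vector and an iteration count, e.g.
    %   [bh, ah] = invfreqz(H, w, 4, 4, ones(size(w)), 30);
    % invokes an iterative refinement that approaches the true L2 (output-error) solution.
    % Here the equation-error fit is already essentially exact, since the data come from a
    % fourth-order filter.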

In addition to the error-criterion modifications above, it is sometimes necessary to reformulate the problem in order to achieve a different goal. For example, in some audio applications it is desirable to minimize the log-magnitude frequency-response error, because of the way we hear spectral distortions in many circumstances. A technique which accomplishes this objective to first order in the $ L^\infty$ norm is described in [428].

Sometimes the most important spectral structure is confined to an interval of the frequency domain. A question arises as to how this structure can be accurately modeled while obtaining a cruder fit elsewhere. The usual technique is a weighting function versus frequency. An alternative, however, is to frequency-warp the problem using a first-order conformal map. It turns out that a first-order conformal map can be made to approximate quite well the frequency-resolution scales of human hearing, such as the Bark scale or ERB scale [459]. Frequency-warping is especially valuable for providing the effect of a weighting function for filter-design methods, such as the Hankel-norm method, that intrinsically do not offer a choice of weighted norm for the frequency-response error.
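
As a sketch of the warping idea, the first-order conformal map can be realized as the phase of a first-order allpass filter. In the illustrative matlab fragment below, the warping parameter rho is an arbitrary example value (not a tuned Bark- or ERB-scale fit), and the commented interp1 call assumes a desired response H already sampled on a uniform grid w.

    rho  = 0.7;                                            % example warping parameter, 0 < rho < 1
    warp = @(om, r) om + 2*atan2(r*sin(om), 1 - r*cos(om)); % first-order conformal frequency map
    wt   = linspace(0, pi, 512);                           % uniform grid in the warped domain
    wu   = warp(wt, -rho);                                 % map back: -rho inverts the map with +rho
    % Resample the desired response onto the warped axis before fitting, e.g.
    %   Hw = interp1(w, H, wu);
    % A uniform-resolution fit of Hw over wt then devotes extra resolution to low frequencies,
    % roughly in the manner of the Bark scale.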

There are several methods which produce $ {\hat H}(z){\hat H}(z^{-1})$ instead of $ {\hat H}(z)$ directly. A fast spectral-factorization technique is useful in conjunction with methods of this category [428]. Roughly speaking, the factorization of a degree-$ 2{\hat{N}_a}$ polynomial is replaced by an FFT and the solution of a size-$ {\hat{N}_a}$ system of linear equations.
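
The following matlab fragment sketches one common FFT-based route to a minimum-phase spectral factor, via the real cepstrum; it is not necessarily the exact procedure of [428], and the stand-in system used to generate the sampled power spectrum is an arbitrary example (chosen to have no zeros on the unit circle, so that the logarithm stays finite).

    bt = [1 0.4];  at = [1 -0.9 0.3];           % stand-in system defining |H|^2
    S  = abs(freqz(bt, at, 1024, 'whole')).^2;  % power spectrum sampled around the whole circle
    N  = length(S);                             % even FFT length
    c  = real(ifft(0.5*log(S)));                % cepstrum of the desired magnitude sqrt(S)
    cf = [c(1); 2*c(2:N/2); c(N/2+1); zeros(N/2-1,1)];  % fold anticausal part onto causal part
    Hmin = exp(fft(cf));                        % minimum-phase response with |Hmin|.^2 ~= S
    hmin = real(ifft(Hmin));                    % corresponding minimum-phase impulse response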

