Least-Squares Linear-Phase FIR Filter Design
Another versatile, effective, and often-used approach is the weighted least-squares method, which is implemented in the matlab function firls, among others. A good general reference in this area is [204].
Let the FIR filter length be $L+1$ samples, with $L$ even, and suppose we'll initially design it to be centered about the time origin (``zero phase''). Then the frequency response is given on our frequency grid $\omega_k$ by

$$
H(\omega_k) = \sum_{n=-L/2}^{L/2} h_n\, e^{-j\omega_k n T}. \qquad\text{(5.33)}
$$
Enforcing even symmetry in the impulse response, i.e., $h_n = h_{-n}$, gives a zero-phase FIR filter that we can later right-shift $L/2$ samples to make a causal, linear-phase filter. In this case, the frequency response reduces to a sum of cosines:
$$
H(\omega_k) = h_0 + 2\sum_{n=1}^{L/2} h_n \cos(\omega_k n T), \qquad k = 0,1,2,\ldots,N-1, \qquad\text{(5.34)}
$$
or, in matrix form:

$$
\begin{bmatrix}
H(\omega_0) \\ H(\omega_1) \\ \vdots \\ H(\omega_{N-1})
\end{bmatrix}
=
\begin{bmatrix}
1 & 2\cos(\omega_0 T) & \cdots & 2\cos\bigl(\omega_0 \tfrac{L}{2} T\bigr) \\
1 & 2\cos(\omega_1 T) & \cdots & 2\cos\bigl(\omega_1 \tfrac{L}{2} T\bigr) \\
\vdots & \vdots & & \vdots \\
1 & 2\cos(\omega_{N-1} T) & \cdots & 2\cos\bigl(\omega_{N-1} \tfrac{L}{2} T\bigr)
\end{bmatrix}
\begin{bmatrix}
h_0 \\ h_1 \\ \vdots \\ h_{L/2}
\end{bmatrix}
\qquad\text{(5.35)}
$$
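To make this formulation concrete, here is a minimal matlab sketch that builds the cosine design matrix of Eq. (5.35) and a desired amplitude response sampled on the grid. The grid size $N=512$, half-order $M = L/2 = 15$, and cutoff $0.3\pi$ are arbitrary example choices, not values from the text:

    N  = 512;                          % number of frequency-grid points
    M  = 15;                           % half-order; filter length is 2*M+1
    wT = pi*(0:N-1)'/N;                % frequency grid omega_k*T in [0,pi)
    % Row k of A is [1, 2*cos(w_k*T), ..., 2*cos(w_k*M*T)], per Eq. (5.35)
    A  = [ones(N,1), 2*cos(wT*(1:M))];
    % Example desired amplitude: ideal (brick-wall) lowpass, cutoff 0.3*pi
    d  = double(wT <= 0.3*pi);

Each row of A evaluates the sum of cosines (5.34) at one grid frequency, so A*h returns the zero-phase frequency response over the whole grid.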
Recall from §3.13.8 that the Remez multiple-exchange algorithm is based on this formulation internally. In that case, the left-hand side includes the alternating error, and the frequency grid $\omega_k$ iteratively seeks the frequencies of maximum error, the so-called extremal frequencies.
In matrix notation, our filter-design problem can be stated as (cf. §3.13.8)
$$
\mathbf{A}\mathbf{h} = \mathbf{d}, \qquad\text{(5.36)}
$$
where these quantities are defined in Eq. (5.35). We can denote the optimal least-squares solution by
$$
\hat{\mathbf{h}} \triangleq \arg\min_{\mathbf{h}} \|\mathbf{A}\mathbf{h} - \mathbf{d}\|_2^2. \qquad\text{(5.37)}
$$
To find $\hat{\mathbf{h}}$, we need to minimize

$$
J(\mathbf{h}) \triangleq \|\mathbf{A}\mathbf{h} - \mathbf{d}\|_2^2
= (\mathbf{A}\mathbf{h} - \mathbf{d})^T(\mathbf{A}\mathbf{h} - \mathbf{d})
= \mathbf{h}^T\mathbf{A}^T\mathbf{A}\,\mathbf{h} - 2\,\mathbf{d}^T\mathbf{A}\,\mathbf{h} + \mathbf{d}^T\mathbf{d}. \qquad\text{(5.38)}
$$
This is a quadratic form in $\mathbf{h}$ whose Hessian $2\mathbf{A}^T\mathbf{A}$ is positive semidefinite. Therefore, it has a global minimum, which we can find by setting the gradient to zero and solving for $\mathbf{h}$. Assuming all quantities are real, equating the gradient to zero yields the so-called normal equations
$$
\mathbf{A}^T\mathbf{A}\,\hat{\mathbf{h}} = \mathbf{A}^T\mathbf{d}, \qquad\text{(5.39)}
$$
with solution
$$
\hat{\mathbf{h}} = (\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^T\mathbf{d}. \qquad\text{(5.40)}
$$
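Continuing the example sketch above, the normal equations (5.39) and their solution (5.40) translate line-for-line into matlab. Forming A'*A explicitly mirrors the math; a numerically preferable alternative is noted at the end of this section:

    N = 512;  M = 15;  wT = pi*(0:N-1)'/N;   % grid as in the earlier sketch
    A = [ones(N,1), 2*cos(wT*(1:M))];        % design matrix, Eq. (5.35)
    d = double(wT <= 0.3*pi);                % example desired response
    R = A'*A;                                % A^T A, left side of Eq. (5.39)
    h = R \ (A'*d);                          % h = (A^T A)^{-1} A^T d, Eq. (5.40)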
The matrix
$$
\mathbf{A}^\dagger \triangleq (\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^T \qquad\text{(5.41)}
$$
is known as the (Moore-Penrose) pseudo-inverse of the matrix $\mathbf{A}$. The product $\mathbf{A}\mathbf{A}^\dagger$ can be interpreted as an orthogonal projection matrix, projecting $\mathbf{d}$ onto the column space of $\mathbf{A}$ [264], as we illustrate further in the next section.
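As a quick numerical check of this projection interpretation (same example setup as above), the matrix A*pinv(A) should be symmetric and idempotent, and applying it to d yields the best approximation of d within the column space of A:

    N = 512;  M = 15;  wT = pi*(0:N-1)'/N;
    A = [ones(N,1), 2*cos(wT*(1:M))];
    d = double(wT <= 0.3*pi);
    P = A * pinv(A);              % orthogonal projector onto the columns of A
    norm(P - P', 'fro')           % ~0: P is symmetric
    norm(P*P - P, 'fro')          % ~0: P is idempotent
    dhat = P * d;                 % best least-squares approximation to d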
Geometric Interpretation of Least Squares
Typically, the number of frequency constraints $N$ (grid points) is much greater than the number of design variables (the $L/2+1$ filter coefficients). In these cases, we have an overdetermined system of equations (more equations than unknowns). Therefore, we cannot generally satisfy all the equations, and are left with minimizing some error criterion to find the ``optimal compromise'' solution.

In the case of least-squares approximation, we are minimizing the Euclidean distance, which suggests the geometrical interpretation shown in Fig. 4.19.
Thus, the desired vector $\mathbf{d}$ is the vector sum of its best least-squares approximation $\hat{\mathbf{d}} \triangleq \mathbf{A}\hat{\mathbf{h}}$ plus an orthogonal error $\mathbf{e}$:

$$
\mathbf{d} = \hat{\mathbf{d}} + \mathbf{e} = \mathbf{A}\hat{\mathbf{h}} + \mathbf{e}. \qquad\text{(5.42)}
$$
In practice, the least-squares solution can be found by minimizing the sum of squared errors:
$$
\hat{\mathbf{h}} = \arg\min_{\mathbf{h}} \|\mathbf{e}\|_2^2
= \arg\min_{\mathbf{h}} \sum_{k=0}^{N-1} \bigl[d_k - (\mathbf{A}\mathbf{h})_k\bigr]^2. \qquad\text{(5.43)}
$$
Figure 4.19 suggests that the error vector $\mathbf{e} = \mathbf{d} - \mathbf{A}\hat{\mathbf{h}}$ is orthogonal to the column space of the matrix $\mathbf{A}$, hence it must be orthogonal to each column of $\mathbf{A}$:

$$
\mathbf{A}^T\mathbf{e} = \mathbf{A}^T\bigl(\mathbf{d} - \mathbf{A}\hat{\mathbf{h}}\bigr) = \mathbf{0}. \qquad\text{(5.44)}
$$
This is how the orthogonality principle can be used to derive the fact that the best least-squares solution is given by

$$
\hat{\mathbf{h}} = (\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^T\mathbf{d} = \mathbf{A}^\dagger\mathbf{d}. \qquad\text{(5.45)}
$$
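The orthogonality principle is easy to verify numerically in our running example: the residual of the least-squares solution has (near-)zero inner product with every column of A:

    N = 512;  M = 15;  wT = pi*(0:N-1)'/N;
    A = [ones(N,1), 2*cos(wT*(1:M))];
    d = double(wT <= 0.3*pi);
    h = A \ d;                    % least-squares solution, Eq. (5.45)
    e = d - A*h;                  % error (residual) vector
    norm(A'*e)                    % ~0 (roundoff): e is orthogonal to col(A)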
In matlab, it is numerically superior to use ``h = A \ d'' (the backslash, or left-division, operator) as opposed to explicitly computing the pseudo-inverse as in ``h = pinv(A) * d''. For a discussion of numerical issues in matrix least-squares problems, see, e.g., [92].
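For example, both routes below give the same coefficients for our well-conditioned example matrix, but the backslash solve (QR-based in matlab for overdetermined systems) is the preferred idiom; the last line also forms the causal, linear-phase filter by symmetry:

    N = 512;  M = 15;  wT = pi*(0:N-1)'/N;
    A = [ones(N,1), 2*cos(wT*(1:M))];
    d = double(wT <= 0.3*pi);
    h1 = A \ d;                   % preferred: QR-based least-squares solve
    h2 = pinv(A) * d;             % explicit pseudo-inverse (slower, less robust)
    norm(h1 - h2)                 % agree to roundoff when A is well conditioned
    % Causal linear-phase filter: even symmetry, right-shifted by M samples
    hc = [flipud(h1(2:end)); h1]; % [h_M ... h_1 h_0 h_1 ... h_M], length 2*M+1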
We will return to least-squares optimality in §5.7.1 for the purpose of estimating the parameters of sinusoidal peaks in spectra.
Matlab Support for Least-Squares FIR Filter Design
Some of the available functions are as follows:
- firls - least-squares linear-phase FIR filter design for piecewise-constant desired amplitude responses; also designs Hilbert transformers and differentiators (an example call follows this list)
- fircls - constrained least-squares linear-phase FIR filter design for piecewise-constant desired amplitude responses; constraints provide lower and upper bounds on the frequency response
- fircls1 - constrained least-squares linear-phase FIR filter design for lowpass and highpass filters; supports relative weighting of pass-band and stop-band error
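For instance, a typical firls call (from the Signal Processing Toolbox) for a length-31 lowpass design; the band edges and amplitudes below are illustrative values, with frequencies normalized so that 1.0 corresponds to half the sampling rate:

    % Order-30 (length-31) lowpass: passband [0, 0.3], stopband [0.4, 1]
    b = firls(30, [0 0.3 0.4 1], [1 1 0 0]);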
For more information, type help firls and/or doc firls, etc., and refer to the ``See Also'' section of the documentation for pointers to more relevant functions.