
Time Domain Filter Estimation

System identification is the subject of identifying filter coefficients given measurements of the input and output signals [46,78]. For example, one application is amplifier modeling, in which we measure (1) the normal output of an electric guitar (provided by the pick-ups), and (2) the output of a microphone placed in front of the amplifier we wish to model. The guitar may be played in a variety of ways to create a collection of input/output data to use in identifying a model of the amplifier's ``sound.'' There are many commercial products which offer ``virtual amplifier'' presets developed partly in such a way.F.6 One can similarly model electric guitars themselves by measuring the pick signal delivered to the string (as the input) and the normal pick-up-mix output signal. A separate identification is needed for each switch and tone-control position. After identifying a sampling of models, ways can be found to interpolate among the sampled settings, thereby providing ``virtual'' tone-control knobs that respond like the real ones [101].

In the notation of §F.1, assume we know $ \underline{x}$ and $ \underline{y}$ and wish to solve for the filter impulse response $ \underline{h}^T=[h_0,h_1,\ldots,h_{N_h-1}]$. We now outline a simple yet practical method for doing this, which follows readily from the discussion of the previous section.

Recall that convolution is commutative. In terms of the matrix representation of §F.3, this implies that the input signal and the filter can switch places to give

$\displaystyle \underbrace{\left[\begin{array}{c}
y_0 \\[2pt] y_1 \\[2pt] y_2 \\[2pt] y_3 \\[2pt] y_4 \\[2pt] y_5 \\[2pt] \vdots
\end{array}\right]}_{\displaystyle\underline{y}}
=
\underbrace{\left[\begin{array}{ccccc}
x_0 & 0   & 0   & 0   & 0  \\[2pt]
x_1 & x_0 & 0   & 0   & 0  \\[2pt]
x_2 & x_1 & x_0 & 0   & 0  \\[2pt]
x_3 & x_2 & x_1 & x_0 & 0  \\[2pt]
x_4 & x_3 & x_2 & x_1 & x_0\\[2pt]
x_5 & x_4 & x_3 & x_2 & x_1\\[2pt]
\vdots & \vdots & \vdots & \vdots & \vdots
\end{array}\right]}_{\displaystyle\mathbf{x}}
\underbrace{\left[\begin{array}{c}
h_0 \\[2pt] h_1 \\[2pt] h_2 \\[2pt] h_3 \\[2pt] h_4
\end{array}\right]}_{\displaystyle\underline{h}},
$

or

$\displaystyle \underline{y}= \mathbf{x}\underline{h}. \protect$ (F.7)

Here we have indicated the general case for a length $ N_h=5$ causal FIR filter, with input and output signals that go on forever. While $ \mathbf{x}$ is not invertible because it is not square, we can solve for $ \underline{h}$ under general conditions by taking the pseudoinverse of $ \mathbf{x}$. Doing this provides a least-squares system identification method [46].
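
As a quick numerical check of Eq.$\,$(F.7), the following Octave/matlab sketch (with an arbitrary short input and filter chosen here only for illustration) builds the banded Toeplitz matrix $ \mathbf{x}$ and verifies that $ \mathbf{x}\underline{h}$ matches the output of direct FIR filtering:

x = [1 2 3 4 5]';          % example input signal (arbitrary values)
h = [1 -1 0.5]';           % example length-3 FIR filter (arbitrary values)
X = toeplitz(x, [x(1), zeros(1, length(h)-1)]); % banded convolution matrix
y1 = X * h;                % matrix form of convolution, as in Eq.(F.7)
y2 = filter(h, 1, x);      % direct FIR filtering
max(abs(y1 - y2))          % zero (up to rounding): both give the same output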

The Moore-Penrose pseudoinverse is easy to derive.F.7 First multiply Eq.$ \,$(F.7) on the left by the transpose of $ \mathbf{x}$ in order to obtain a ``square'' system of equations:

$\displaystyle \mathbf{x}^T\underline{y}= \mathbf{x}^T\mathbf{x}\underline{h}.
$

Since $ \mathbf{x}^T\mathbf{x}$ is a square $ N_h\times N_h$ matrix, it is invertible under general conditions, and we obtain the following formula for $ \underline{h}$:

$\displaystyle \zbox {\underline{h}= \left(\mathbf{x}^T\mathbf{x}\right)^{-1}\mathbf{x}^T\underline{y}} \protect$ (F.8)

Thus, $ \left(\mathbf{x}^T\mathbf{x}\right)^{-1}\mathbf{x}^T$ is the Moore-Penrose pseudoinverse of $ \mathbf{x}$.

If the input signal $ x$ is an impulse $ \delta (n)$ (a 1 at time zero and 0 at all other times), then $ \mathbf{x}^T\mathbf{x}$ is simply the identity matrix, which is its own inverse, and we obtain $ \underline{h}=\underline{y}$. We expect this by definition of the impulse response. More generally, $ \mathbf{x}^T\mathbf{x}$ is invertible whenever the input signal is ``sufficiently exciting'' at all frequencies. An LTI filter frequency response can be identified only at frequencies that are excited by the input, and the accuracy of the estimate at any given frequency can be improved by increasing the input signal power at that frequency [46].
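
For a concrete check of the impulse case, here is a minimal Octave/matlab sketch (with an arbitrary length-3 filter and an impulse padded to length 6 so that the input matrix is tall); $ \mathbf{x}^T\mathbf{x}$ reduces to the identity and the estimate returns the impulse response exactly:

h  = [1 2 3]';                         % "true" FIR filter (arbitrary)
d  = [1 0 0 0 0 0]';                   % impulse input, zero padded
y  = filter(h, 1, d);                  % output = impulse response (zero padded)
xb = toeplitz(d, [d(1), zeros(1,2)]);  % input matrix; xb'*xb is the 3x3 identity
hhat = (xb' * xb) \ (xb' * y)          % returns [1 2 3]' = y(1:3)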

Effect of Measurement Noise

In practice, measurements are never perfect. Let $ \hat{\underline{y}}=\underline{y}+\underline{e}$ denote the measured output signal, where $ \underline{e}$ is a vector of ``measurement noise'' samples. Then we have

$\displaystyle \hat{\underline{y}}= \underline{y}+ \underline{e}= \mathbf{x}\underline{h}+ \underline{e}.
$

By the orthogonality principle [38], the least-squares estimate of $ \underline{h}$ is obtained by orthogonally projecting $ \hat{\underline{y}}$ onto the space spanned by the columns of $ \mathbf{x}$. Geometrically speaking, choosing $ \underline{h}$ to minimize the Euclidean distance between $ \hat{\underline{y}}$ and $ \mathbf{x}\underline{h}$ is the same thing as choosing it to minimize the sum of squared estimated measurement errors $ \vert\vert\,\underline{e}\,\vert\vert ^2$. The distance from $ \mathbf{x}\underline{h}$ to $ \hat{\underline{y}}$ is minimized when the projection error $ \underline{e}=\hat{\underline{y}}-\mathbf{x}\underline{h}$ is orthogonal to every column of $ \mathbf{x}$, which is true if and only if $ \mathbf{x}^T\underline{e}=0$ [84]. Thus, we have, applying the orthogonality principle,

$\displaystyle 0 = \mathbf{x}^T\underline{e}= \mathbf{x}^T(\hat{\underline{y}}- \mathbf{x}\underline{h}) = \mathbf{x}^T\hat{\underline{y}}- \mathbf{x}^T\mathbf{x}\underline{h}.
$

Solving for $ \underline{h}$ yields Eq.$ \,$(F.8) as before (with the measured output $ \hat{\underline{y}}$ taking the place of $ \underline{y}$), but this time we have derived it as the least-squares estimate of $ \underline{h}$ in the presence of output measurement error.
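
The following Octave/matlab sketch illustrates this for a broadband random input and a small amount of added output noise (the signal length, filter, and noise level are arbitrary illustrative choices); the least-squares estimate remains close to the true filter:

N  = 1000;                    % input signal length
x  = randn(N, 1);             % broadband ("sufficiently exciting") input
h  = [1 2 3]';                % true FIR filter
y  = filter(h, 1, x);         % true output
yh = y + 0.01 * randn(N, 1);  % measured output with additive noise e
xb = toeplitz(x, [x(1), zeros(1, length(h)-1)]); % input matrix
hhat = (xb' * xb) \ (xb' * yh) % least-squares estimate, close to [1 2 3]'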

It is also straightforward to introduce a weighting function in the least-squares estimate for $ \underline{h}$ by replacing $ \mathbf{x}^T$ in the derivations above by $ \mathbf{x}^TR$, where $ R$ is any positive definite matrix (often taken to be diagonal, i.e., a positive weight for each equation), giving $ \underline{h}= (\mathbf{x}^TR\,\mathbf{x})^{-1}\mathbf{x}^TR\,\hat{\underline{y}}$. In the present time-domain formulation, it is difficult to choose a weighting function that corresponds well to audio perception. Therefore, in audio applications, frequency-domain formulations are generally more powerful for linear time-invariant system identification. A practical example is the frequency-domain equation-error method described in §I.4.4 [78].
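
As a sketch of how such a weighting can be applied in matlab (the particular weights below are arbitrary and chosen only for illustration), one may form a diagonal $ R$ and solve the weighted normal equations directly:

N  = 200;                     % shorter signal so that a dense R stays cheap
x  = randn(N, 1);
h  = [1 2 3]';                % true FIR filter
y  = filter(h, 1, x) + 0.01 * randn(N, 1);       % noisy measured output
xb = toeplitz(x, [x(1), zeros(1, length(h)-1)]); % input matrix
R  = diag(linspace(0.1, 1, N));                  % diagonal positive weighting
hw = (xb' * R * xb) \ (xb' * R * y)              % weighted least-squares estimate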


Matlab System Identification Example

The Octave output for the following small matlab example is listed in Fig. F.1:

delete('sid.log'); diary('sid.log'); % Log session
echo('on');       % Show commands as well as responses
N = 4;            % Input signal length
%x = rand(N,1)    % Random input signal - snapshot:
x = [0.056961, 0.081938, 0.063272, 0.672761]'
h = [1 2 3]';     % FIR filter
y = filter(h,1,x) % Filter output
xb = toeplitz(x,[x(1),zeros(1,N-1)]) % Input matrix
hhat = inv(xb' * xb) * xb' * y % Least squares estimate
% hhat = pinv(xb) * y % Numerically robust pseudoinverse
hhat2 = xb\y % Numerically superior (and faster) estimate
diary('off'); % Close log file
One fine point is the use of the syntax `` $ \underline{h}= \mathbf{x}\backslash\underline{y}$'', which has been a matlab language feature from the very beginning [82]. It is usually more accurate (and faster) than multiplying by the explicit pseudoinverse. It uses the QR decomposition to convert the system of linear equations into upper-triangular form (typically using Householder reflections), determine the effective rank of $ \mathbf{x}$, and backsolve the reduced triangular system (starting at the bottom, which goes very fast) [29, §6.2].F.8
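
For illustration, the QR route can be carried out explicitly on the example above (this is only a sketch of the idea; the built-in backslash operator additionally handles rank deficiency, typically via column pivoting):

[Q, Rq] = qr(xb, 0);   % economy-size QR: xb = Q*Rq, with Rq upper triangular
hqr = Rq \ (Q' * y)    % back-substitution on the triangular system; matches xb\y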

Figure F.1: Time-domain system-identification matlab example.

 
+ echo('on');       % Show commands as well as responses
+ N = 4;            % Input signal length
+ %x = rand(N,1)    % Random input signal - snapshot:
+ x = [0.056961, 0.081938, 0.063272, 0.672761]'
x =
  0.056961
  0.081938
  0.063272
  0.672761

+ h = [1 2 3]';     % FIR filter
+ y = filter(h,1,x) % Filter output
y =
  0.056961
  0.195860
  0.398031
  1.045119

+ xb = toeplitz(x,[x(1),zeros(1,N-1)]) % Input matrix
xb =
  0.05696  0.00000  0.00000  0.00000
  0.08194  0.05696  0.00000  0.00000
  0.06327  0.08194  0.05696  0.00000
  0.67276  0.06327  0.08194  0.05696

+ hhat = inv(xb' * xb) * xb' * y % Least squares estimate
hhat =
   1.0000
   2.0000
   3.0000
   3.7060e-13

+ % hhat = pinv(xb) * y % Numerically robust pseudoinverse
+ hhat2 = xb\y % Numerically superior (and faster) estimate
hhat2 =
   1.0000
   2.0000
   3.0000
   3.6492e-16

