
Time Domain Digital Filter Representations

This chapter discusses several time-domain representations for digital filters, including the difference equation, system diagram, and impulse response. Additionally, the convolution representation for LTI filters is derived, and the special case of FIR filters is considered. The transient response, steady-state response, and decay response are examined for FIR and IIR digital filters.

Difference Equation

The difference equation is a formula for computing an output sample at time $ n$ based on past and present input samples and past output samples in the time domain. We may write the general, causal, LTI difference equation as follows:

$\displaystyle y(n) = b_0 \,x(n) + b_1 \,x(n - 1) + \cdots + b_M \,x(n - M)$
$\displaystyle \qquad\quad\; - a_1 \,y(n - 1) - \cdots - a_N \,y(n - N)$
$\displaystyle \phantom{y(n)} = \sum_{i=0}^M b_i \,x(n-i) - \sum_{j=1}^N a_j \,y(n-j)$ (6.1)

where $ x$ is the input signal, $ y$ is the output signal, and the constants $ b_i$, $ i = 0, 1, 2, \ldots, M$, and $ a_j$, $ j = 1, 2, \ldots, N$, are called the coefficients.

As a specific example, the difference equation

$\displaystyle y(n) = 0.01\, x(n) + 0.002\, x(n - 1) + 0.99\, y(n - 1)$

specifies a digital filtering operation, and the coefficient sets $ (0.01, 0.002)$ and $ (0.99)$ fully characterize the filter. In this example, we have $ M = N = 1$.

When the coefficients are real numbers, as in the above example, the filter is said to be real. Otherwise, it may be complex.
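As a sanity check, the example recursion above can be run sample by sample and compared with the built-in filter function, whose second argument lists the feedback coefficients in the sign convention of the general difference equation. A minimal Octave/matlab sketch (not from the text; the test input is arbitrary):

  % Run the example difference equation directly and compare with filter():
  N = 30;
  x = randn(1,N);                    % arbitrary test input
  y = zeros(1,N);
  for n = 1:N                        % matlab indexing starts at 1
    xm1 = 0;  if n > 1, xm1 = x(n-1); end
    ym1 = 0;  if n > 1, ym1 = y(n-1); end
    y(n) = 0.01*x(n) + 0.002*xm1 + 0.99*ym1;
  end
  yf = filter([0.01 0.002], [1 -0.99], x);   % a1 = -0.99 in the standard form
  max(abs(y - yf))                           % ~0 (round-off error)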

Notice that a filter of the form of Eq.$ \,$(5.1) can use ``past'' output samples (such as $ y(n-1)$) in the calculation of the ``present'' output $ y(n)$. This use of past output samples is called feedback. Any filter having one or more feedback paths ($ N>0$) is called recursive. (By the way, the minus signs for the feedback in Eq.$ \,$(5.1) will be explained when we get to transfer functions in §6.1.)

More specifically, the $ b_i$ coefficients are called the feedforward coefficients and the $ a_i$ coefficients are called the feedback coefficients.

A filter is said to be recursive if and only if $ a_i\neq 0$ for some $ i>0$. Recursive filters are also called infinite-impulse-response (IIR) filters. When there is no feedback ( $ a_i=0, \forall i>0$), the filter is said to be a nonrecursive or finite-impulse-response (FIR) digital filter.

When used for discrete-time physical modeling, the difference equation may be referred to as an explicit finite difference scheme.

It is easy to show that a recursive filter is LTI (Chapter 4) by considering its impulse-response representation (discussed in §5.6). For example, the recursive filter

\begin{eqnarray*}
y(n) &=& x(n) + \frac{1}{2}y(n-1) \\
&=& x(n) + \frac{1}{2}x(n-1) + \frac{1}{4}x(n-2) + \frac{1}{8}x(n-3) + \cdots,
\end{eqnarray*}

has impulse response $ h(m) = 2^{-m}$, $ m=0,1,2,\ldots\,$. It is now straightforward to apply the analysis of the previous chapter to find that time-invariance, superposition, and the scaling property hold.
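A quick numerical check of this expansion (a matlab sketch, not from the text):

  % Impulse response of y(n) = x(n) + y(n-1)/2:
  n = 0:19;
  h = filter(1, [1 -1/2], [1 zeros(1,19)]);  % feedback coefficient a1 = -1/2
  max(abs(h - (1/2).^n))                     % ~0: matches h(n) = 2^(-n)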


Signal Flow Graph

One possible signal flow graph (or system diagram) for Eq.$ \,$(5.1) is given in Fig.5.1a for the case of $ M = 2$ and $ N = 2$. Hopefully, it is easy to see how this diagram represents the difference equation (a box labeled ``$ z^{-1}$'' denotes a one-sample delay in time). The diagram remains valid if it is converted to the frequency domain by replacing all time-domain signals by their respective z transforms (or Fourier transforms); that is, we may replace $ x(n)$ by $ X(z)$ and $ y(n)$ by $ Y(z)$. Z transforms and their usage will be discussed in Chapter 6.

Figure 5.1: Signal flow graph for the filter difference equation
$ y(n) = b_0 x(n) + b_1 x(n - 1) + b_2 x(n - 2) - a_1 y(n - 1) - a_2 y(n - 2)$.
(a) Direct form I. (b) Direct form II.


Causal Recursive Filters

Equation (5.1) does not cover all LTI filters, for it represents only causal LTI filters. A filter is said to be causal when its output does not depend on any ``future'' inputs. (In more colorful terms, a filter is causal if it does not ``laugh'' before it is ``tickled.'') For example, $ y(n) = x(n + 1)$ is a non-causal filter because the output anticipates the input one sample into the future. Restriction to causal filters is quite natural when the filter operates in real time. Many digital filters, on the other hand, are implemented on a computer where time is artificially represented by an array index. Thus, noncausal filters present no difficulty in such an ``off-line'' situation. It happens that the analysis for noncausal filters is pretty much the same as that for causal filters, so we can easily relax this restriction.


Filter Order

The maximum delay, in samples, used in creating each output sample is called the order of the filter. In the difference-equation representation, the order is the larger of $ M$ and $ N$ in Eq.$ \,$(5.1). For example, $ y(n) = x(n) - x(n - 1) - 2y(n - 1) + y(n - 2)$ specifies a particular second-order filter. If $ M$ and $ N$ in Eq.$ \,$(5.1) are constrained to be finite (which is, of course, necessary in practice), then Eq.$ \,$(5.1) represents the class of all finite-order causal LTI digital filters.


Direct-Form-I Implementation

The difference equation (Eq.$ \,$(5.1)) is often used as the recipe for numerical implementation in software or hardware. As such, it specifies the direct-form I (DF-I) implementation of a digital filter, one of four direct-form structures to choose from. The DF-I signal flow graph for the second-order case is shown in Fig.5.1a. The direct-form II structure, another common choice, is depicted in Fig.5.1b. The other two direct forms are obtained by transposing direct forms I and II. Chapter 9 discusses all four direct-form structures.


Impulse-Response Representation

In addition to difference-equation coefficients, any LTI filter may be represented in the time domain by its response to a specific signal called the impulse. This response is called, naturally enough, the impulse response of the filter. Any LTI filter can be implemented by convolving the input signal with the filter impulse response, as we will see.



Definition. The impulse signal is denoted $ \delta (n)$ and defined by

$\displaystyle \delta(n) \isdef \left\{\begin{array}{ll} 1, & n=0 \\ 0, & n\neq 0 \\ \end{array}\right.$

We may also write $ \delta = [1,0,0,\ldots]$.

A plot of $ \delta (n)$ is given in Fig.5.2a. In the physical world, an impulse may be approximated by a swift hammer blow (in the mechanical case) or balloon pop (acoustic case). We also have a special notation for the impulse response of a filter:



Definition. The impulse response of a filter is the response of the filter to $ \delta (n)$ and is most often denoted $ h(n)$:

$\displaystyle h(n) \isdef {\cal L}_n\{\delta(\cdot)\}.$

The impulse response $ h(n)$ is the response of the filter $ {\cal L}$ at time $ n$ to a unit impulse occurring at time 0. We will see that $ h(n)$ fully describes any LTI filter.

We normally require that the impulse response decay to zero over time; otherwise, we say the filter is unstable. The next section formalizes this notion as a definition.


Filter Stability



Definition. An LTI filter is said to be stable if the impulse response $ h(n)$ approaches zero as $ n$ goes to infinity.
In this context, we may say that an impulse response ``approaches zero'' by definition if there exist a finite integer $ n_f$ and real numbers $ A\geq 0$ and $ \alpha>0$ such that $ \left\vert h(n)\right\vert < A\exp(-\alpha n)$ for all $ n\geq n_f$. In other words, the impulse response is asymptotically bounded by a decaying exponential.

Every finite-order nonrecursive filter is stable. Only the feedback coefficients $ a_i$ in Eq.$ \,$(5.1) can cause instability. Filter stability will be discussed further in §8.4 after poles and zeros have been introduced. Suffice it to say for now that, for stability, the feedback coefficients must be restricted so that the feedback gain is less than 1 at every frequency. (We'll learn in §8.4 that stability is guaranteed when all filter poles have magnitude less than 1.) In practice, the stability of a recursive filter is usually checked by computing the filter reflection coefficients, as described in §8.4.1.
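Although poles are not formally introduced until later, in matlab the pole-magnitude test mentioned above reduces to checking the roots of the feedback polynomial; a minimal sketch (illustrative only, not from the text):

  % Pole-magnitude stability check for the feedback polynomial
  % A(z) = 1 + a1 z^(-1) + ... + aN z^(-N):
  A = [1 -0.99];                     % e.g., y(n) = x(n) + 0.99 y(n-1)
  stable = all(abs(roots(A)) < 1)    % true: the single pole is at z = 0.99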


Impulse Response Example

An example impulse response for the first-order recursive filter

$\displaystyle y(n) = x(n) + 0.9\, y(n - 1)$ (6.2)
$\displaystyle \phantom{y(n)} = x(n) + 0.9\, x(n - 1) + 0.9^2\, x(n - 2) + \cdots$ (6.3)

is shown in Fig.5.2b. The impulse response is a sampled exponential decay, $ (1,\, 0.9,\, 0.81,\, 0.729,\,\ldots)$, or, more formally,

$\displaystyle h(n) = \left\{\begin{array}{ll}
(0.9)^n, & n\geq 0 \\ [5pt]
0, & n<0. \\
\end{array}\right.
$

We can more compactly represent this by means of the unit step function,

$\displaystyle u(n) \isdef \left\{\begin{array}{ll}
1, & n\geq 0 \\ [5pt]
0, & n<0 \\
\end{array}\right.,
$

so that

$\displaystyle h(n) = u(n)(0.9)^n, \quad n\in{\bf Z}
$

where $ n\in{\bf Z}$ means $ n$ is any integer.
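This closed-form impulse response is easily verified numerically (a small matlab check, not in the text):

  % Measure the impulse response of y(n) = x(n) + 0.9 y(n-1):
  n = 0:29;
  h = filter(1, [1 -0.9], [1 zeros(1,29)]);  % response to a unit impulse
  max(abs(h - 0.9.^n))                       % ~0: matches h(n) = (0.9)^n for n >= 0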


Implications of Linear-Time-Invariance

Using the basic properties of linearity and time-invariance, we will derive the convolution representation which gives an algorithm for implementing the filter directly in terms of its impulse response. In other words,

the output $ y(n)$ of any LTI filter (including recursive filters) may be computed by convolving the input signal with the filter impulse response.

Figure: Input and output signals for the filter $ y(n)
= x(n) + 0.9\,y(n - 1)$. (a) Input impulse $ \delta (n)$. (b) Output impulse response $ h(n)=u(n)\,0.9^n$. (c) Input delayed-impulse $ \delta (n - 5)$. (d) Output delayed-impulse response $ h(n - 5)$.

The convolution formula plays the role of the difference equation when the impulse response is used in place of the difference-equation coefficients as a filter representation. In fact, we will find that, for FIR filters (nonrecursive, i.e., no feedback), the difference equation and convolution representation are essentially the same thing. For recursive filters, one can think of the convolution representation as the difference equation with all feedback terms ``expanded'' to an infinite number of feedforward terms.

An outline of the derivation of the convolution formula is as follows: Any signal $ x(n)$ may be regarded as a superposition of impulses at various amplitudes and arrival times, i.e., each sample of $ x(n)$ is regarded as an impulse with amplitude $ x(n)$ and delay $ n$. We can write this mathematically as $ x(n)\delta(\cdot - n)$. By the superposition principle for LTI filters, the filter output is simply the superposition of impulse responses $ h(\cdot)$, each having a scale factor and time-shift given by the amplitude and time-shift of the corresponding input impulse. Thus, the sample $ x(n)$ contributes the signal $ x(n)h(\cdot - n)$ to the convolution output, and the total output is the sum of such contributions, by superposition. This is the heart of LTI filtering.

Before proceeding to the general case, let's look at a simple example with pictures. If an impulse strikes at time $ n = 5$ rather than at time $ n = 0$, this is represented by writing $ \delta (n - 5)$. A picture of this delayed impulse is given in Fig.5.2c. When $ \delta (n - 5)$ is fed to a time-invariant filter, the output will be the impulse response $ h(n)$ delayed by 5 samples, or $ h(n - 5)$. Figure 5.2d shows the response of the example filter of Eq.$ \,$(5.3) to the delayed impulse $ \delta (n - 5)$.

In the general case, for time-invariant filters we may write

$\displaystyle {\cal L}_n\{\mbox{\sc Shift}_K\{\delta\}\} \isdef {\cal L}_n\{\delta(\cdot - K)\} = {\cal L}_{n-K}\{\delta(\cdot)\} = h(n-K)$

where $ K$ is the number of samples delay. This equation states that right-shifting the input impulse by $ K$ points merely right-shifts the output (impulse response) by $ K$ points. Note that this is just a special case of the definition of time-invariance, Eq.$ \,$(4.5).

If two impulses arrive at the filter input, the first at time $ n = 0$, say, and the second at time $ n = 5$, then this input may be expressed as $ \delta(n) + \delta(n - 5)$. If, in addition, the amplitude of the first impulse is 2, while the second impulse has an amplitude of 1, then the input may be written as $ 2\delta (n) + \delta (n - 5)$. In this case, using linearity as well as time-invariance, the response of the general LTI filter to this input may be expressed as

\begin{eqnarray*}
{\cal L}_n\{2\delta(\cdot) + \delta(\cdot - 5)\}
&=& 2{\cal L}_n\{\delta(\cdot)\} + {\cal L}_n\{\delta(\cdot - 5)\} \\
&=& 2{\cal L}_n\{\delta(\cdot)\} + {\cal L}_{n-5}\{\delta(\cdot)\} \\
&=& 2h(n) + h(n-5).
\end{eqnarray*}

For the example filter of Eq.$ \,$(5.3), given the input $ 2\delta (n) + \delta (n - 5)$ (pictured in Fig.5.3a), the output may be computed by scaling, shifting, and adding together copies of the impulse response $ h(n)$. That is, taking the impulse response in Fig.5.2b, multiplying it by 2, and adding it to the delayed impulse response in Fig.5.2d, we obtain the output shown in Fig.5.3b. Thus, a weighted sum of impulses produces the same weighted sum of impulse responses.

\begin{eqnarray*}
2h(n) + h(n-5) &=& \left\{\begin{array}{ll}
2(0.9)^n, & 0\leq n < 5 \\
2(0.9)^n + (0.9)^{n-5}, & n\geq 5 \\
0, & n<0 \\
\end{array} \right.\\
&=& 2u(n)\, 0.9^n + u(n-5)\, 0.9^{n-5}
\end{eqnarray*}

Figure 5.3: Input impulse pair and corresponding output for the filter $ y(n) = x(n) + 0.9y(n - 1)$. (a) Input: impulse of amplitude 2 plus delayed-impulse $ 2\delta (n) + \delta (n - 5)$. (b) Output: $ 2h(n) + h(n - 5)$.
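The scaled-and-shifted superposition above is easy to verify numerically; a matlab sketch (illustrative only, not part of the text):

  % Check 2 h(n) + h(n-5) against filtering 2*delta(n) + delta(n-5)
  % through y(n) = x(n) + 0.9 y(n-1):
  N = 40;
  h = filter(1, [1 -0.9], [1 zeros(1,N-1)]);   % impulse response h(n)
  x = zeros(1,N);  x(1) = 2;  x(6) = 1;        % 2*delta(n) + delta(n-5)
  y = filter(1, [1 -0.9], x);                  % filter output
  y2 = 2*h + [zeros(1,5), h(1:N-5)];           % 2 h(n) + h(n-5)
  max(abs(y - y2))                             % ~0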


Convolution Representation

We will now derive the convolution representation for LTI filters in its full generality. The first step is to express an arbitrary signal $ x(\cdot)$ as a linear combination of shifted impulses, i.e.,

$\displaystyle x(n)=\sum_{i=-\infty}^{\infty} x(i)\delta(n-i) \isdef (x\ast \delta)(n), \quad n\in{\bf Z} \protect$ (6.4)

where ``$ \ast $'' denotes the convolution operator. (See [84] for an elementary introduction to convolution.)

If the above equation is not obvious, here is how it is built up intuitively. Imagine $ \delta(\cdot)$ as a 1 in the midst of an infinite string of 0s. Now think of $ \delta(\cdot -i)$ as the same pattern shifted over to the right by $ i$ samples. Next multiply $ \delta(\cdot -i)$ by $ x(\cdot)$, which plucks out the sample $ x(i)$ and surrounds it on both sides by 0's. An example collection of waveforms $ x(i)\delta(\cdot -i)$ for the case $ x(i) = i$, $ i = -2, -1, 0, 1, 2$, is shown in Fig.5.4a. Now, sum over all $ i$, bringing together the samples of $ x$, to obtain $ x(\cdot)$. Figure 5.4b shows the result of this addition for the sequences in Fig.5.4a. Thus, any signal $ x(\cdot)$ may be expressed as a weighted sum of shifted impulses.

Equation (5.4) expresses a signal as a linear combination (or weighted sum) of impulses. That is, each sample may be viewed as an impulse at some amplitude and time. As we have already seen, each impulse (sample) arriving at the filter's input will cause the filter to produce an impulse response. If another impulse arrives at the filter's input before the first impulse response has died away, then the impulse response for both impulses will superimpose (add together sample by sample). More generally, since the input is a linear combination of impulses, the output is the same linear combination of impulse responses. This is a direct consequence of the superposition principle which holds for any LTI filter.
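The following matlab fragment (illustrative only, not from the text) reconstructs a short signal as such a weighted sum of shifted impulses:

  % Reconstruct a signal as a weighted sum of shifted unit impulses:
  x  = [3 -1 4 1 5];               % arbitrary signal samples for n = 0..4
  N  = length(x);
  xr = zeros(1,N);
  for i = 1:N
    d = zeros(1,N);  d(i) = 1;     % shifted impulse delta(n - (i-1))
    xr = xr + x(i) * d;            % weight it by x(i) and accumulate
  end
  max(abs(xr - x))                 % 0: the original signal is recovered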

Figure 5.4: Any signal may be considered to be a sum of impulses. (a) Family of impulses at various amplitudes and time shifts. (b) Addition of impulses in (a), giving part of a ramp signal $ x(i)=i$.

We repeat this in more precise terms. First linearity is used and then time-invariance is invoked. Using the form of the general linear filter in Eq.$ \,$(4.2), and the definition of linearity, Eq.$ \,$(4.3) and Eq.$ \,$(4.5), we can express the output of any linear (and possibly time-varying) filter by

\begin{eqnarray*}
y(n) &=& {\cal L}_n\{x(\cdot)\}\\
&=& {\cal L}_n\{(x \ast \delta)(\cdot)\}\\
&=& {\cal L}_n\left\{\sum_{i=-\infty}^\infty x(i)\,\delta(\cdot - i)\right\}\\
&=& \sum_{i=-\infty}^\infty x(i)\,{\cal L}_n\{\delta(\cdot - i)\}\\
&\isdef & \sum_{i=-\infty}^\infty x(i) h(n,i)
\protect
\end{eqnarray*}

where we have written $ h(n, i) \isdef {\cal L}_n\{\delta(\cdot - i)\}$ to denote the filter response at time $ n$ to an impulse which occurred at time $ i$. If we are to be completely rigorous mathematically, certain ``smoothness'' restrictions must be placed on the linear operator $ {\cal L}$ in order that it may be distributed inside the infinite summation [37]. However, practically useful filters of the form of Eq.$ \,$(5.1) satisfy these restrictions. If in addition to being linear, the filter is time-invariant, then $ h(n, i) = h(n - i)$, which allows us to write

$\displaystyle y(n) =\sum_{i=-\infty}^\infty x(i) h(n - i) \isdef (x\ast h)(n), \; n \in {\bf Z} \protect.$ (6.5)

This states that the filter output $ y$ is the convolution of the input $ x$ with the filter impulse response $ h$.

The infinite sum in Eq.$ \,$(5.5) can be replaced by more typical practical limits. By choosing time 0 as the beginning of the signal, we may define $ x(n)$ to be 0 for $ n < 0$ so that the lower summation limit of $ -\infty$ can be replaced by 0. Also, if the filter is causal, we have $ h(n) = 0$ for $ n < 0$, so the upper summation limit can be written as $ n$ instead of $ \infty$. Thus, the convolution representation of a linear, time-invariant, causal digital filter is given by

$\displaystyle \zbox {y(n) = \sum_{i=0}^n x(i) h(n - i) = (x \ast h)(n),\;
n=0,1,2,\ldots,}
$

for causal input signals (i.e., $ x(n)=0$ for $ n < 0$).

Since the above equation is a convolution, and since convolution is commutative (i.e., $ (x \ast h)(n) = (h \ast x)(n)$ [84]), we can rewrite it as

$\displaystyle \zbox {y(n) = \sum_{i=0}^n h(i) x(n - i) = (h \ast x)(n), \; n\ge0}
$

or

$\displaystyle y(n) = h(0) x(n) + h(1) x(n-1) + h(2) x(n-2) + \cdots + h(n) x(0).
$

This latter form looks more like the general difference equation presented in Eq.$ \,$(5.1). In this form one can see that $ h(i)$ may be identified with the $ b_i$ coefficients in Eq.$ \,$(5.1). It is also evident that the filter operates by summing weighted echoes of the input signal together. At time $ n$, the weight of the echo from $ i$ samples ago [$ x(n - i)$] is $ h(i)$.
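The causal convolution sum can be coded directly as a double loop; the following matlab sketch (illustrative only, with an arbitrary causal input and FIR impulse response) compares it to the built-in conv function:

  % Direct implementation of y(n) = sum_{i=0}^{n} x(i) h(n-i) for causal x and h:
  x = randn(1,50);                  % causal input (x(n) = 0 for n < 0)
  h = [1 0.5 0.25 0.125];           % causal impulse response
  Ny = length(x);
  y = zeros(1,Ny);
  for n = 0:Ny-1
    for i = 0:n
      if n-i < length(h)            % h(n-i) is zero beyond its support
        y(n+1) = y(n+1) + x(i+1) * h(n-i+1);
      end
    end
  end
  yc = conv(x, h);                  % built-in convolution
  max(abs(y - yc(1:Ny)))            % ~0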

Convolution Representation Summary

We have shown that the output $ y$ of any LTI filter may be calculated by convolving the input $ x$ with the impulse response $ h$. It is instructive to compare this method of filter implementation to the use of difference equations, Eq.$ \,$(5.1). If there is no feedback (no $ a_j$ coefficients in Eq.$ \,$(5.1)), then the difference equation and the convolution formula are essentially identical, as shown in the next section. For recursive filters, we can convert the difference equation into a convolution by calculating the filter impulse response. However, this can be rather tedious, since with nonzero feedback coefficients the impulse response generally lasts forever. Of course, for stable filters the response is infinite only in theory; in practice, one may truncate the response after an appropriate length of time, such as after it falls below the quantization noise level due to round-off error.
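As a rough illustration of this truncation idea (a matlab sketch with an arbitrary truncation length, not from the text):

  % Replace a recursive filter by convolution with its truncated impulse response:
  B = 1;  A = [1 -0.9];                  % y(n) = x(n) + 0.9 y(n-1)
  L = 200;                               % truncation length (0.9^200 is negligible)
  ht = filter(B, A, [1 zeros(1,L-1)]);   % first L samples of the impulse response
  x  = randn(1,300);
  y1 = filter(B, A, x);                  % exact recursive implementation
  y2 = filter(ht, 1, x);                 % FIR (convolution) approximation
  max(abs(y1 - y2))                      % small; limited only by the truncation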


Finite Impulse Response Digital Filters

In §5.1 we defined the general difference equation for IIR filters, and a couple of second-order examples were diagrammed in Fig.5.1. In this section, we take a more detailed look at the special case of Finite Impulse Response (FIR) digital filters. In addition to introducing various terminology and practical considerations associated with FIR filters, we'll look at a preview of transfer-function analysis (Chapter 6) for this simple special case.

Figure: The general, causal, length $ N=M+1$, finite-impulse-response (FIR) digital filter. For FIR filters, direct-form I and direct-form II are the same (see Chapter 9).

Figure 5.5 gives the signal flow graph for a general causal FIR filter. Such a filter is also called a transversal filter, or a tapped delay line. The implementation shown is classified as a direct-form implementation.

FIR Impulse Response

The impulse response $ h(n)$ is obtained at the output when the input signal is the impulse signal $ \delta = [1,0,0,0,\ldots]$ (§5.6). If the $ k$th tap is denoted $ b_k$, then it is obvious from Fig.5.5 that the impulse response signal is given by

$\displaystyle h(n)\isdef \left\{\begin{array}{ll} 0, & n<0 \\ [5pt] b_n, & 0\leq n\leq M \\ [5pt] 0, & n> M \\ \end{array} \right. \protect$ (6.6)

In other words, the impulse response simply consists of the tap coefficients, prepended and appended by zeros.


Convolution Representation of FIR Filters

Notice that the output of the $ k$th delay element in Fig.5.5 is $ x(n-k)$, $ k=0,1,2,\ldots,M$, where $ x(n)$ is the input signal amplitude at time $ n$. The output signal $ y(n)$ is therefore

$\displaystyle y(n)$ $\displaystyle =$ $\displaystyle b_0 x(n) + b_1 x(n-1) + b_2 x(n-2) + \cdots + b_M x(n-M)$  
  $\displaystyle =$ $\displaystyle \sum_{m=0}^M b_m x(n-m)$  
  $\displaystyle =$ $\displaystyle \sum_{m=-\infty}^{\infty} h(m) x(n-m)$  
  $\displaystyle \isdef$ $\displaystyle (h\ast x)(n)$ (6.7)

where we have used the convolution operator ``$ \ast $'' to denote the convolution of $ h$ and $ x$, as defined in Eq.$ \,$(5.4). An FIR filter thus operates by convolving the input signal $ x$ with the filter's impulse response $ h$.
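In matlab, this equivalence is easy to check by comparing the difference-equation recipe (filter) with direct convolution (conv); a minimal sketch (not from the text):

  % filter() and conv() agree for FIR filters (conv also returns the "tail"
  % after the input ends, which filter() truncates):
  B = [1 2 3];                     % FIR taps = impulse response
  x = randn(1,100);
  y1 = filter(B, 1, x);            % difference-equation implementation
  y2 = conv(B, x);                 % convolution of h with x
  max(abs(y1 - y2(1:length(x))))   % ~0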


The ``Finite'' in FIR

From Eq.$ \,$(5.6), we can see that the impulse response becomes zero after time $ M=N-1$. Therefore, a tapped delay line (Fig.5.5) can only implement finite-duration impulse responses in the sense that the non-zero portion of the impulse response must be finite. This is what is meant by the term finite impulse response (FIR). We may say that the impulse response has finite support [52].


Causal FIR Filters

From Eq.$ \,$(5.6), we see also that the impulse response $ h(n)$ is always zero for $ n < 0$. Recall from §5.3 that any LTI filter having a zero impulse response prior to time 0 is said to be causal. Thus, a tapped delay line such as that depicted in Fig.5.5 can only implement causal FIR filters. In software, on the other hand, we may easily implement non-causal FIR filters as well, based simply on the definition of convolution.


FIR Transfer Function

The transfer function of an FIR filter is given by the z transform of its impulse response. This is true for any LTI filter, as discussed in Chapter 6. For FIR filters in particular, we have, from Eq.$ \,$(5.6),

$\displaystyle H(z) \isdef \sum_{n=-\infty}^{\infty} h_n z^{-n} = \sum_{n=0}^M b_n z^{-n} \protect$ (6.8)

Thus, the transfer function of every length $ N=M+1$ FIR filter is an $ M$th-order polynomial in $ z^{-1}$.
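For example, evaluating this polynomial on the unit circle $ z = e^{j\omega}$ gives the filter's frequency response (discussed in later chapters); a minimal matlab sketch (illustrative only, with arbitrary coefficients):

  % Evaluate the FIR transfer function H(z) = sum_{n=0}^{M} b_n z^(-n)
  % on the unit circle z = exp(j*w):
  B = [1 2 3 2 1];                         % example FIR coefficients (M = 4)
  M = length(B) - 1;
  w = linspace(0, pi, 256);                % frequencies in radians per sample
  z = exp(1j*w);
  H1 = polyval(B, z) ./ z.^M;              % polynomial evaluation, divided by z^M
  H2 = zeros(size(w));
  for n = 0:M
    H2 = H2 + B(n+1) * z.^(-n);            % direct sum of b_n z^(-n)
  end
  max(abs(H1 - H2))                        % ~0: both give the same H(e^{jw})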


FIR Order

The order of a filter is defined as the order of its transfer function, as discussed in Chapter 6. For FIR filters, this is just the order of the transfer-function polynomial. Thus, from Equation (5.8), the order of the general, causal, length $ N=M+1$ FIR filter is $ M$ (provided $ b_M\neq 0$).

Note from Fig.5.5 that the order $ M$ is also the total number of delay elements in the filter. This is typical of practical digital filter implementations. When the number of delay elements in the implementation (Fig.5.5) is equal to the filter order, the filter implementation is said to be canonical with respect to delay. It is not possible to implement a given transfer function in fewer delays than the transfer function order, but it is possible (and sometimes even desirable) to have extra delays.


FIR Software Implementations

In matlab, an efficient FIR filter is implemented by calling

        outputsignal = filter(B,1,inputsignal);
where

$\displaystyle \texttt{B} = [b_0, b_1, \ldots, b_M].
$

It is relatively efficient because filter is a built-in function (compiled C code in most matlab implementations). However, for FIR filters longer than a hundred or so taps, FFT convolution should be used for maximum speed. In Octave and the Matlab Signal Processing Toolbox, fftfilt implements FIR filters using FFT convolution (say ``help fftfilt'').
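A rough comparison of the two implementations (a sketch assuming fftfilt is available, i.e., the Octave signal package or the Matlab Signal Processing Toolbox):

  % Compare direct FIR filtering with FFT convolution (fftfilt):
  B = ones(1,256)/256;          % long FIR averaging filter
  x = randn(1,10000);
  y1 = filter(B, 1, x);         % direct-form implementation
  y2 = fftfilt(B, x);           % overlap-add FFT convolution
  max(abs(y1 - y2))             % agreement to within round-off error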

Figure 5.6 lists a second-order FIR filter implementation in the C programming language.

Figure 5.6: C code for implementing a length 3 FIR filter.

 
  #define NTICK 64     // samples processed per call (block size; value arbitrary here)

  typedef double *pp;  // pointer to array of length NTICK
  typedef double word; // signal and coefficient data type

  typedef struct _fir3Vars {
      pp outputAout;    // output signal buffer (length NTICK)
      pp inputAinp;     // input signal buffer (length NTICK)
      word b0;          // filter coefficients b0, b1, b2
      word b1;
      word b2;
      word s1;          // previous input sample x(n-1); initialize to 0
      word s2;          // input sample x(n-2); initialize to 0
  } fir3Vars;

  void fir3(fir3Vars *a)
  {
      int i;
      word input;
      for (i=0; i<NTICK; i++) {
          input = a->inputAinp[i];
          a->outputAout[i] = a->b0 * input
                +  a->b1 * a->s1  +  a->b2 * a->s2;
          a->s2 = a->s1;
          a->s1 = input;
      }
  }


Transient Response, Steady State, and Decay

Figure 5.7: Example transient, steady-state, and decay responses for an FIR ``running sum'' filter driven by a gated sinusoid.
(Top: input signal. Bottom: filter output signal.)

The terms transient response and steady-state response arise naturally in the context of sinewave analysis (e.g., §2.2). When the input sinewave is switched on, the filter takes a while to ``settle down'' to a perfect sinewave at the same frequency, as illustrated in Fig.5.7. The filter response during this ``settling'' period is called the transient response of the filter. The response of the filter after the transient response, provided the filter is linear and time-invariant, is called the steady-state response, and it consists of a pure sinewave at the same frequency as the input sinewave, but with amplitude and phase determined by the filter's frequency response at that frequency. In other words, the steady-state response begins when the LTI filter is fully ``warmed up'' by the input signal; more precisely, the filter output is then the same as if the input signal had been applied since time minus infinity. Length $ N$ FIR filters only ``remember'' $ N-1$ samples into the past. Thus, for length $ N$ FIR filters, the duration of the transient response is $ N-1$ samples.

To show this, it may help to refer to the general FIR filter implementation in Fig.5.5. A length $ N=1$ (zero-order) FIR filter (a simple gain) has no state memory at all, and thus it is in ``steady state'' immediately when the input sinewave is switched on. A length $ N = 2$ FIR filter, on the other hand, reaches steady state one sample after the input sinewave is switched on, because it has one sample of delay. At the switch-on instant, the length 2 FIR filter has a single sample of state that is still zero (instead of its steady-state value, which is the previous input sinewave sample).

In general, a length $ N$ FIR filter is fully ``warmed up'' after $ N-1$ samples of input; that is, for an input starting at time $ n = 0$, by time $ n=N-1$, all internal state delays of the filter contain delayed input samples instead of their initial zeros. When the input signal is a unit step $ u(n)$ times a sinusoid (or, by superposition, any linear combination of sinusoids), we may say that the filter output reaches steady state at time $ n=N-1$.

FIR Example

An example sinewave input signal is shown in Fig.5.7 (top), and the output of a length $ N=128$ FIR ``running sum'' filter is shown in Fig.5.7 (bottom). These signals were computed by the following matlab code:

  Nx = 1024; % input signal length (nonzero portion)
  Nh = 128;  % FIR filter length
  A = 1; B = ones(1,Nh); % FIR "running sum" filter
  n = 0:Nx-1;
  x = sin(n*2*pi*7/Nx);  % input sinusoid - zero-pad it:
  zp=zeros(1,Nx/2); xzp=[zp,x,zp]; nzp=[0:length(xzp)-1];
  y = filter(B,A,xzp);   % filtered output signal
We know that the transient response must end $ \texttt{Nh}-1=127$ samples after the input sinewave switches on, and the decay-time lasts the same amount of time after the input signal switches back to zero.

Since the coefficients of an FIR filter are also its nonzero impulse response samples, we can say that the duration of the transient response equals the length of the impulse response minus one.

For Infinite Impulse Response (IIR) filters, such as the recursive comb filter analyzed in Chapter 3, the transient response decays exponentially. This means it is never really completely finished. In other words, since the impulse response is infinitely long, so is the transient response, in principle. However, in practice, we treat it as finished after several time constants of decay. For example, seven time-constants of decay correspond to more than 60 dB of decay, which is a common cutoff for audio purposes. Therefore, we can adopt $ t_{60}$ as the definition of decay time (or ``ring time'') for typical audio filters. See [84] for a detailed derivation of $ t_{60}$ and related topics. In summary, we can say that the transient response of an audio filter is over after $ t_{60}$ seconds, where $ t_{60}$ is the time it takes the filter impulse response to decay by $ 60$ dB.


IIR Example

Figure 5.8 plots an IIR filter example for the filter

$\displaystyle y(n) = x(n) + 0.99\, y(n-1).$

The previous matlab code is modified as follows:
      Nh = 300; % APPROXIMATE filter length (visually in plot)
      B = 1; A = [1 -0.99]; % One-pole recursive example
      ...       % otherwise as above for the FIR example
The decay time for this recursive filter was arbitrarily marked at 300 samples (about three time-constants of decay).
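For reference, the time constant and the 60-dB decay time of the 0.99 pole can be computed directly (a small matlab calculation, not from the text):

  % Time constant and 60-dB decay time (in samples) for the pole at 0.99:
  tau = -1/log(0.99)              % ~99.5 samples per time constant (1/e decay)
  n60 = log(0.001)/log(0.99)      % ~687 samples for the impulse response to fall 60 dB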

Figure 5.8: Example transient, steady-state, and decay responses for an IIR ``one-pole'' filter driven by a gated sinusoid.
(Top: input signal. Bottom: filter output signal.)


Transient and Steady-State Signals

Loosely speaking, any sudden change in a signal is regarded as a transient, and transients in an input signal disturb the steady-state operation of a filter, resulting in a transient response at the filter output. This leads us to ask: how do we define ``transient'' in a precise way? This turns out to be difficult in practice.

A mathematically convenient definition is as follows: A signal is said to contain a transient whenever its Fourier expansion [84] requires an infinite number of sinusoids. Conversely, any signal expressible as a finite number of sinusoids can be defined as a steady-state signal. Thus, waveform discontinuities are transients, as are discontinuities in the waveform slope, curvature, etc. Any fixed sum of sinusoids, on the other hand, is a steady-state signal.

In practical audio signal processing, defining transients is more difficult. In particular, since hearing is bandlimited, all audible signals are technically steady-state signals under the above definition. One way to pose the question is to ask which sounds should be ``stretched'' and which should be translated in time when a signal is ``slowed down''? In the case of speech, for example, short consonants would be considered transients, while vowels and sibilants such as ``ssss'' would be considered steady-state signals. Percussion hits are generally considered transients, as are the ``attacks'' of plucked and struck strings (such as piano). More generally, almost any ``attack'' is considered a transient, but a slow fade-in of a string section, e.g., might not be. In sum, musical discrimination between ``transient'' and ``steady state'' signals depends on our perception, and on our learned classifications of sounds. However, to first order, transient sounds can be defined practically as sudden ``wideband events'' in an otherwise steady-state signal. This is at least similar in spirit to the mathematical definition given above.

In summary, a filter transient response is caused by suddenly switching on a filter input signal, or otherwise disturbing a steady-state input signal away from its steady-state form. After the transient response has died out, we see the steady-state response, provided that the input signal itself is a steady-state signal (a fixed linear combination of sinusoids) and given that the filter is LTI.


Decay Response, Initial Conditions Response

If a filter is in steady state and we switch off the input signal, we see its decay response. This response is identical (but for a time shift) to the filter's response to initial conditions. In other words, when the input signal is switched off (becomes zero), the future output signal is computed entirely from the filter's internal state, because the input signal remains zero.


Complete Response

In general, the so-called complete response of a linear, time-invariant filter is given by the superposition of its

  • zero-state response and
  • initial-condition response.
``Zero-state response'' simply means the response of the filter to an input signal when the initial state of the filter (all of its memory cells) is zero to begin with. The initial-condition response is of course the response of the filter to its own initial state, with the input signal being zero. This clean superposition of the zero-state and initial-condition responses only holds in general for linear filters. In §G.3, this superposition will be considered for state-space filter representations.
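This superposition can be demonstrated with matlab's filter function, which accepts an initial state as a fourth argument; a minimal sketch (the coefficients, input, and initial state below are arbitrary):

  % Complete response = zero-state response + initial-condition response:
  B = 1;  A = [1 -0.9];                   % y(n) = x(n) + 0.9 y(n-1)
  x  = randn(1,50);
  zi = 0.7;                               % nonzero initial filter state
  yc  = filter(B, A, x, zi);              % complete response
  yzs = filter(B, A, x);                  % zero-state response
  yic = filter(B, A, zeros(size(x)), zi); % initial-condition (zero-input) response
  max(abs(yc - (yzs + yic)))              % ~0, by superposition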


Summary and Conclusions

This concludes the discussion of time-domain filter descriptions, including difference equations, signal flow graphs, and impulse-response representations. More time-domain forms (alternative digital filter implementations) will be described in Chapter 9. A tour of elementary digital filter sections used often in audio applications is presented in Appendix B. Beyond that, some matrix-based representations are included in Appendix F, and the state-space formulation is discussed in Appendix G.

Time-domain forms are typically used to implement recursive filters in software or hardware, and they generalize readily to nonlinear and/or time-varying cases. For an understanding of the effects of an LTI filter on a sound, however, it is usually more appropriate to consider a frequency-domain picture, to which we now turn in the next chapter.


Time Domain Representation Problems

See http://ccrma.stanford.edu/~jos/filtersp/Time_Domain_Representation_Problems.html.

