Linear Time-Invariant Digital Filters

In this chapter, the important concepts of linearity and time-invariance (LTI) are discussed. Only LTI filters can be subjected to frequency-domain analysis as illustrated in the preceding chapters. After studying this chapter, you should be able to classify any filter as linear or nonlinear, and time-invariant or time-varying.

The great majority of audio filters are LTI, for several reasons. First, no new spectral components are introduced by LTI filters. Time-varying filters, on the other hand, can generate audible sideband images of the frequencies present in the input signal (when the filter changes at audio rates). Time-invariance is not overly restrictive, however, because the static analysis holds very well for filters that change slowly with time. (One rule of thumb is that the coefficients of a quasi-time-invariant filter should be substantially constant over its impulse-response duration.) Second, nonlinear filters generally create new sinusoidal components at all sums and differences of the frequencies present in the input signal.5.1 This includes both harmonic distortion (when the input signal is periodic) and intermodulation distortion (when at least two inharmonically related tones are present). A truly linear filter does not cause harmonic or intermodulation distortion.

All the examples of filters mentioned in Chapter 1 were LTI, or approximately LTI. In addition, the $ z$ transform and all forms of the Fourier transform are linear operators, and these operators can be viewed as LTI filter banks, or as a single LTI filter having multiple outputs.

In the following sections, linearity and time-invariance will be formally introduced, together with some elementary mathematical aspects of signals.

Definition of a Signal

Definition. A real discrete-time signal is defined as any time-ordered sequence of real numbers. Similarly, a complex discrete-time signal is any time-ordered sequence of complex numbers.
Mathematically, we typically denote a signal as a real- or complex-valued function of an integer, e.g., $ x(n)$, $ n=0,1,2,\ldots$. Thus, $ x(n)$ is the $ n$th real (or complex) number in the signal, and $ n$ represents time as an integer sample number.

Using the set notation $ {\bf Z},{\bf R}$, and $ {\bf C}$ to denote the set of all integers, real numbers, and complex numbers, respectively, we can state that $ x$ is a real, discrete-time signal by writing it as a function mapping every integer (optionally in a restricted range) to a real number:

$\displaystyle x:{\bf Z}\rightarrow {\bf R}

Alternatively, we can write $ x(n)\in{\bf R}$ for all $ n\in{\bf Z}$.

Similarly, a discrete-time complex signal is a mapping from each integer to a complex number:

$\displaystyle w:{\bf Z}\rightarrow {\bf C}

i.e., $ w(n)\in{\bf C}, \forall n\in{\bf Z}$ ($ w(n)$ is a complex number for every integer $ n$).

It is useful to define $ {\cal S}$ as the signal space consisting of all complex signals $ x(n)\in{\bf C}$, $ n\in{\bf Z}$.

We may expand these definitions slightly to include functions of the form $ x(nT)$, $ w(nT)$, where $ T\in{\bf R}$ denotes the sampling interval in seconds. In this case, the time index has physical units of seconds, but it is isomorphic to the integers. A finite-duration signal may be extended to all integers $ {\bf Z}$ by prepending and appending zeros.

Mathematically, the set of all signals $ x$ can be regarded as a vector space5.2 $ {\cal S}$ in which every signal $ x$ is a vector in the space ( $ x\in{\cal S}$). The $ n$th sample of $ x$, $ x(n)$, is regarded as the $ n$th vector coordinate. Since signals as we have defined them are infinitely long (being defined over all integers), the corresponding vector space $ {\cal S}$ is infinite-dimensional. Every vector space comes with a field of scalars, which we may think of as constant gain factors that can be applied to any signal in the space. For purposes of this book, ``signal'' and ``vector'' mean the same thing, as do ``constant gain factor'' and ``scalar''. The signals and gain factors (vectors and scalars) may be either real or complex, as applications may require.

By definition, a vector space is closed under linear combinations. That is, given any two vectors $ x_1\in{\cal S}$ and $ x_2\in{\cal S}$, and any two scalars $ \alpha$ and $ \beta$, there exists a vector $ y\in{\cal S}$ which satisfies $ y = \alpha x_1 + \beta x_2$, i.e.,

$\displaystyle y(n) = \alpha x_1(n) + \beta x_2(n)

for all $ n\in{\bf Z}$.

A linear combination is what we might call a mix of two signals $ x_1$ and $ x_2$ using mixing gains $ \alpha$ and $ \beta$ ( $ y = \alpha x_1 + \beta x_2$). Thus, a signal mix is represented mathematically as a linear combination of vectors. Since signals in practice can overflow the available dynamic range, resulting in clipping (or ``wrap-around''), it is not normally true that the space of signals used in practice is closed under linear combinations (mixing). However, in floating-point numerical simulations, closure is true for most practical purposes.5.3
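As a concrete sketch (in Python, with arbitrarily chosen signal values and gains), a two-signal mix is an elementwise linear combination, and the hard clipping that occurs in fixed-point systems is what breaks closure:

```python
# Mix two signals as a linear combination: y = alpha*x1 + beta*x2.
x1 = [1.0, 0.5, -0.25, 0.0]    # arbitrary example signals
x2 = [0.0, -1.0, 0.75, 0.5]
alpha, beta = 0.5, 2.0          # mixing gains (scalars)

y = [alpha * a + beta * b for a, b in zip(x1, x2)]

# In fixed-point hardware, samples outside [-1, 1] clip; clipping means
# the practical signal space is not closed under linear combinations:
y_clipped = [max(-1.0, min(1.0, v)) for v in y]
```

Here `y` takes the values `[0.5, -1.75, 1.375, 1.0]`, and the last three samples of `y_clipped` are distorted by the range limit.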

Definition of a Filter

Definition. A real digital filter $ {\cal T}_n$ is defined as any real-valued function of a real signal for each integer $ n\in{\bf Z}$.
Thus, a real digital filter maps every real, discrete-time signal to a real, discrete-time signal. A complex filter, on the other hand, may produce a complex output signal even when its input signal is real.

We may express the input-output relation of a digital filter by the notation

$\displaystyle y(n)={\cal T}_n\{x(\cdot)\} \protect$ (5.1)

where $ x(\cdot)$ denotes the entire input signal, and $ y(n)$ is the output signal at time $ n$. (We will also refer to $ x(\cdot)$ as simply $ x$.) The general filter is denoted by $ {\cal T}_n\{x\}$, which stands for any transformation from a signal $ x$ to a sample value at time $ n$. The filter $ {\cal T}$ can also be called an operator on the space of signals $ {\cal S}$. The operator $ {\cal T}$ maps every signal $ x\in{\cal S}$ to some new signal $ y\in{\cal S}$. (For simplicity, we take $ {\cal S}$ to be the space of complex signals whenever $ {\cal T}$ is complex.) If $ {\cal T}$ is linear, it can be called a linear operator on $ {\cal S}$. If, additionally, the signal space $ {\cal S}$ consists only of finite-length signals, all $ N$ samples long, i.e., $ {\cal S}\subset{\bf R}^N$ or $ {\cal S}\subset{\bf C}^N$, then every linear filter $ {\cal T}$ may be called a linear transformation, which is representable by a constant $ N\times N$ matrix.

In this book, we are concerned primarily with single-input, single-output (SISO) digital filters. For this reason, the input and output signals of a digital filter are defined as real or complex numbers for each time index $ n$ (as opposed to vectors). When both the input and output signals are vector-valued, we have what is called a multi-input, multi-output (MIMO) digital filter. We look at MIMO allpass filters in §C.3 and MIMO state-space filter forms in Appendix G, but we will not cover transfer-function analysis of MIMO filters using matrix fraction descriptions [37].

Examples of Digital Filters

While any mapping from signals to real numbers can be called a filter, we normally work with filters which have more structure than that. Some of the main structural features are illustrated in the following examples.

The filter analyzed in Chapter 1 was specified by

$\displaystyle y(n)=x(n) + x(n-1).

Such a specification is known as a difference equation. This simple filter is a special case of an important class of filters called linear time-invariant (LTI) filters. LTI filters are important in audio engineering because they are the only filters that preserve signal frequencies.
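As a sketch, this difference equation can be implemented directly in Python (assuming a zero initial state, i.e., $x(-1)=0$):

```python
def simple_lowpass(x):
    """y(n) = x(n) + x(n-1), taking x(-1) = 0 (zero initial state)."""
    y = []
    prev = 0.0
    for sample in x:
        y.append(sample + prev)  # current input plus previous input
        prev = sample
    return y
```

For example, `simple_lowpass([1.0, 2.0, 3.0])` returns `[1.0, 3.0, 5.0]`.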

The above example remains a real LTI filter if we scale the input samples by any real coefficients:

$\displaystyle y(n)=2\, x(n) - 3.1\, x(n-1)

If we use complex coefficients, the filter remains LTI, but it becomes a complex filter:

$\displaystyle y(n)=(2+j)\,x(n) + 5 j \,x(n-1)$

The filter also remains LTI if we use more input samples in a shift-invariant way:

$\displaystyle y(n)=x(n) + x(n-1) + x(n+1) + \cdots$

The use of ``future'' samples, such as $ x(n+1)$ in this difference equation, makes this a non-causal filter example. Causal filters may compute $ y(n)$ using only present and/or past input samples $ x(n)$, $ x(n-1)$, $ x(n-2)$, and so on.

Another class of causal LTI filters involves using past output samples in addition to present and/or past input samples. The past-output terms are called feedback, and digital filters employing feedback are called recursive digital filters:

$\displaystyle y(n)=x(n) - x(n-1) + 0.1 \, y(n-1) + \cdots$
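Truncating the difference equation above after its first feedback term gives $y(n) = x(n) - x(n-1) + 0.1\,y(n-1)$, which can be sketched in Python as (zero initial conditions assumed):

```python
def recursive_filter(x):
    """y(n) = x(n) - x(n-1) + 0.1*y(n-1), with zero initial conditions."""
    y = []
    x_prev = 0.0
    y_prev = 0.0
    for sample in x:
        y_now = sample - x_prev + 0.1 * y_prev  # feedforward + feedback
        y.append(y_now)
        x_prev, y_prev = sample, y_now
    return y
```

Feeding in a unit impulse, `recursive_filter([1.0, 0.0, 0.0])`, yields an output that keeps ringing past the input's duration: approximately `[1.0, -0.9, -0.09]`.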

An example multi-input, multi-output (MIMO) digital filter is

$\displaystyle \left[\begin{array}{c} y_1(n) \\ [2pt] y_2(n) \end{array}\right] = \mathbf{A}\left[\begin{array}{c} x_1(n) \\ [2pt] x_2(n) \end{array}\right] + \mathbf{B}\left[\begin{array}{c} x_1(n-1) \\ [2pt] x_2(n-1) \end{array}\right],$

where we have introduced vectors and matrices inside square brackets, and where $ \mathbf{A}$ and $ \mathbf{B}$ denote constant $ 2\times 2$ coefficient matrices. This is the 2D generalization of the SISO filter $ y(n) = a \, x(n) + b\, x(n-1)$.

The simplest nonlinear digital filter is

$\displaystyle y(n)=x^2(n),$

i.e., it squares each sample of the input signal to produce the output signal. This example is also a memoryless nonlinearity because the output at time $ n$ is not dependent on past inputs or outputs. The nonlinear filter

$\displaystyle y(n)=x(n)-y^2(n-1)$

is not memoryless.

Another nonlinear filter example is the median smoother of order $ N$, which assigns the middle value of $ N$ input samples centered about time $ n$ to the output at time $ n$. It is useful for ``outlier'' elimination; for example, it will reject isolated noise spikes while preserving steps.
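A minimal Python sketch of an order-$N$ median smoother (zero-padding at the ends is one of several possible edge conventions):

```python
def median_smoother(x, N=3):
    """Order-N median smoother (N odd): output at time n is the median
    of the N input samples centered about n; the ends are zero-padded."""
    h = N // 2
    padded = [0.0] * h + list(x) + [0.0] * h
    return [sorted(padded[n:n + N])[h] for n in range(len(x))]
```

An isolated spike such as `[0, 0, 9, 0, 0]` is smoothed to all zeros, while a step such as `[0, 0, 1, 1, 1]` passes through unchanged.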

An example of a linear time-varying filter is

$\displaystyle y(n)=x(n) + \cos(2\pi n /10)\, x(n-1).$

It is time-varying because the coefficient of $ x(n-1)$ changes over time. It is linear because no coefficients depend on $ x$ or $ y$.

These examples provide a kind of ``bottom up'' look at some of the major types of digital filters. We will now take a ``top down'' approach and characterize all linear, time-invariant filters mathematically. This characterization will enable us to specify frequency-domain analysis tools that work for any LTI digital filter.

Linear Filters

In everyday terms, the fact that a filter is linear means simply that the following two properties hold:


The amplitude of the output is proportional to the amplitude of the input (the scaling property).


When two signals are added together and fed to the filter, the filter output is the same as if one had put each signal through the filter separately and then added the outputs (the superposition property).

While the implications of linearity are far-reaching, the mathematical definition is simple. Let us represent the general linear (but possibly time-varying) filter as a signal operator:

$\displaystyle y(n) = {\cal L}_n\{x(\cdot)\} \protect$ (5.2)

where $ x(\cdot)$ is the entire input signal, $ y(n)$ is the output at time $ n$, and $ {\cal L}_n\{\}$ is the filter expressed as a real-valued function of a signal for each $ n$. Think of the subscript $ n$ on $ {\cal L}_n\{\}$ as selecting the $ n$th output sample of the filter. In general, each output sample can be a function of several or even all input samples, and this is why we write $ x(\cdot)$ as the filter input.

Definition. A filter $ {\cal L}_n$ is said to be linear if for any pair of signals $ x_1(\cdot),x_2(\cdot)$ and for all constant gains $ g$, we have the following relation for each sample time $ n\in{\bf Z}$:

$\displaystyle \hbox{Scaling:}\quad {\cal L}_n\{g\, x(\cdot) \} = g\,{\cal L}_n\{x(\cdot)\}, \quad\forall g\in{\bf C}, \;\forall x\in{\cal S} \protect$ (5.3)

$\displaystyle \hbox{Superposition:}\quad {\cal L}_n\{x_1(\cdot) + x_2(\cdot)\} = {\cal L}_n\{x_1(\cdot)\} + {\cal L}_n\{x_2(\cdot)\}, \quad \forall x_1,x_2\in{\cal S}$ (5.4)
where $ {\cal S}$ denotes the signal space (complex-valued sequences, in general). These two conditions are simply a mathematical restatement of the previous descriptive definition.

The scaling property of linear systems states that scaling the input of a linear system (multiplying it by a constant gain factor) scales the output by the same factor. The superposition property of linear systems states that the response of a linear system to a sum of signals is the sum of the responses to each individual input signal. Another view is that the individual signals which have been summed at the input are processed independently inside the filter--they superimpose and do not interact. (The addition of two signals, sample by sample, is like converting stereo to mono by mixing the two channels together equally.)

Another example of a linear signal medium is the earth's atmosphere. When two sounds are in the air at once, the air pressure fluctuations that convey them simply add (unless they are extremely loud). Since any finite continuous signal can be represented as a sum (i.e., superposition) of sinusoids, we can predict the filter response to any input signal just by knowing the response for all sinusoids. Without superposition, we have no such general description and it may be impossible to do any better than to catalog the filter output for each possible input.

Linear operators distribute over linear combinations, i.e.,

$\displaystyle {\cal L}\{\alpha x_1 + \beta x_2\} = \alpha{\cal L}\{x_1\} + \beta {\cal L}\{x_2\}$

for any linear operator $ {\cal L}\{\}$, any real or complex signals $ x_1, x_2\in{\cal S}$, and any real or complex constant gain factors $ \alpha,\beta$.

Real Linear Filtering of Complex Signals

When a filter $ {\cal L}_n\{x\}$ is a linear filter (but not necessarily time-invariant), and its input is a complex signal $ w \isdeftext x+jy$, then, by linearity,

$\displaystyle {\cal L}_n\{w\} \isdef {\cal L}_n\{x+jy\} = {\cal L}_n\{x\}+j{\cal L}_n\{y\}.

This means every linear filter maps complex signals to complex signals in a manner equivalent to applying the filter separately to the real and imaginary parts (which are each real). In other words, there is no ``interaction'' between the real and imaginary parts of a complex input signal when passed through a linear filter. If the filter is real, then filtering of complex signals can be carried out by simply performing real filtering on the real and imaginary parts separately (thereby avoiding complex arithmetic).
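A sketch of this equivalence, using the real filter $y(n)=x(n)+x(n-1)$ on an arbitrary complex input:

```python
def real_filter(x):
    """A real LTI filter: y(n) = x(n) + x(n-1), zero initial state."""
    return [x[n] + (x[n - 1] if n > 0 else 0) for n in range(len(x))]

w = [1 + 2j, -0.5 + 1j, 3 - 4j]             # complex input signal
direct = real_filter(w)                      # complex arithmetic throughout
re_part = real_filter([s.real for s in w])   # real filtering of real part
im_part = real_filter([s.imag for s in w])   # real filtering of imag part
recombined = [a + 1j * b for a, b in zip(re_part, im_part)]
# direct == recombined: the real and imaginary parts do not interact.
```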

Appendix H presents a linear-algebraic view of linear filters that can be useful in certain applications.

Time-Invariant Filters

In plain terms, a time-invariant filter (or shift-invariant filter) is one which performs the same operation at all times. It is awkward to express this mathematically by restrictions on Eq.$ \,$(5.2) because of the use of $ x(\cdot)$ as the symbol for the filter input. What we want to say is that if the input signal is delayed (shifted) by, say, $ N$ samples, then the output waveform is simply delayed by $ N$ samples and unchanged otherwise. Thus $ y(\cdot)$, the output waveform from a time-invariant filter, merely shifts forward or backward in time as the input waveform $ x(\cdot)$ is shifted forward or backward in time.

Definition. A digital filter $ {\cal L}_n$ is said to be time-invariant if, for every input signal $ x$, we have

$\displaystyle {\cal L}_n\{\hbox{SHIFT}_N\{x\}\} \;=\; {\cal L}_{n-N}\{x(\cdot)\} \;=\; y(n-N) \;=\; \hbox{SHIFT}_{N,n}\{y\}, \protect$ (5.5)

where the $ N$-sample shift operator is defined by

   SHIFT$\displaystyle _{N,n}\{x\}\isdef x(n-N).

On the signal level, we can write

   SHIFT$\displaystyle _N\{x\} \isdef x(\cdot-N).

Thus, SHIFT$ _N\{x\}$ denotes the waveform $ x(\cdot)$ shifted right (delayed) by $ N$ samples. The most common notation in the literature for SHIFT$ _N\{x\}$ is $ x(n-N)$, but this can be misunderstood (if $ n$ is not interpreted as `$ \cdot$'), so it will be avoided here. Note that Eq.$ \,$(5.5) can be written on the waveform level instead of the sample level as

$\displaystyle {\cal L}\{$SHIFT$\displaystyle _N\{x\}\}=$SHIFT$\displaystyle _N\{{\cal L}\{x\}\}=$SHIFT$\displaystyle _N\{y\}. \protect$ (5.6)
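Eq.$\,$(5.6) suggests a direct numerical test of time-invariance; a sketch using the filter $y(n)=x(n)+x(n-1)$ on finite signals (zero-padding under the shift):

```python
def shift(x, N):
    """SHIFT_N{x}: delay a finite signal by N >= 0 samples (zero prepend)."""
    return [0.0] * N + list(x[:len(x) - N])

def filt(x):
    """y(n) = x(n) + x(n-1), zero initial state."""
    return [x[n] + (x[n - 1] if n > 0 else 0.0) for n in range(len(x))]

x = [1.0, 2.0, -3.0, 4.0]
# Time-invariance: filtering a delayed input equals delaying the output.
same = filt(shift(x, 2)) == shift(filt(x), 2)   # True for this filter
```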

Showing Linearity and Time Invariance, or Not

The filter $ y(n) = 2 x^2(n)$ is nonlinear and time-invariant. The scaling property of linearity clearly fails, since scaling $ x(n)$ by $ g$ gives the output signal $ 2[gx(n)]^2 = 2g^2x^2(n)$, while $ gy(n) = 2gx^2(n)$. The filter is time-invariant, however, because delaying $ x$ by $ m$ samples gives $ 2x^2(n-m)$, which is the same as $ y(n-m)$.

The filter $ y(n) = n x(n) + x(n-1)$ is linear and time varying. We can show linearity by setting the input to a linear combination of two signals $ x(n) = \alpha x_1(n) + \beta x_2(n)$, where $ \alpha$ and $ \beta$ are constants:

$\displaystyle y(n)$ $\displaystyle = n [\alpha x_1(n) + \beta x_2(n)] + [\alpha x_1(n-1) + \beta x_2(n-1)]$
  $\displaystyle = \alpha [n x_1(n) + x_1(n-1)] + \beta [n x_2(n) + x_2(n-1)]$
  $\displaystyle \isdef \alpha y_1(n) + \beta y_2(n).$

Thus, scaling and superposition are verified. The filter is time-varying, however, since the time-shifted output is $ y(n-m) =
(n-m) x(n-m) + x(n-m-1)$ which is not the same as the filter applied to a time-shifted input ( $ n x(n-m) + x(n-m-1)$). Note that in applying the time-invariance test, we time-shift the input signal only, not the coefficients.

The filter $ y(n) = c$, where $ c$ is any constant, is nonlinear and time-invariant, in general. The condition for time invariance is satisfied (in a degenerate way) because a constant signal equals all shifts of itself. The constant filter is technically linear, however, for $ c=0$, since $ 0\cdot(\alpha x_1 + \beta x_2) =
\alpha(0\cdot x_1) + \beta(0\cdot x_2) = 0$, even though the input signal has no effect on the output signal at all.

Any filter of the form $ y(n) = b_0x(n) + b_1 x(n - 1)$ is linear and time-invariant. This is a special case of a sliding linear combination (also called a running weighted sum, or moving average when $ b_0=b_1=1/2$). All sliding linear combinations are linear, and they are time-invariant as well when the coefficients ( $ b_0,
b_1,\ldots$) are constant with respect to time.

Sliding linear combinations may also include past output samples as well (feedback terms). A simple example is any filter of the form

$\displaystyle y(n) = b_0 x(n) + b_1 x(n-1) - a_1 y(n-1). \protect$ (5.7)

Since linear combinations of linear combinations are linear combinations, we can use induction to show linearity and time invariance of a constant sliding linear combination including feedback terms. In the case of this example, we have, for an input signal $ x(n)$ starting at time zero,

$\displaystyle y(0) = b_0 x(0)$
$\displaystyle y(1) = b_0 x(1) + b_1 x(0) - a_1 y(0) = b_0 x(1) + (b_1 - a_1 b_0)\, x(0)$
$\displaystyle y(2) = b_0 x(2) + b_1 x(1) - a_1 y(1) = b_0 x(2) + (b_1 - a_1 b_0)\, x(1) - (a_1 b_1 - a_1^2 b_0)\, x(0)$
$\displaystyle \quad\vdots$

If the input signal is now replaced by $ x_2(n)\isdeftext x(n-m)$, which is $ x(n)$ delayed by $ m$ samples, then the output $ y_2(n)$ is $ y_2(n)=0$ for $ n<m$, followed by

$\displaystyle y_2(m) = b_0 x(0)$
$\displaystyle y_2(m+1) = b_0 x(1) + b_1 x(0) - a_1 y_2(m) = b_0 x(1) + (b_1 - a_1 b_0)\, x(0)$
$\displaystyle y_2(m+2) = b_0 x(2) + (b_1 - a_1 b_0)\, x(1) - (a_1 b_1 - a_1^2 b_0)\, x(0)$
$\displaystyle \quad\vdots$

or $ y_2(n) = y(n-m)$ for all $ n\geq m$ and $ m\geq 0$. This establishes that each output sample from the filter of Eq.$ \,$(5.7) can be expressed as a time-invariant linear combination of present and past samples.
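This delay-commutation can be spot-checked in Python (the coefficient values below are arbitrary illustrative choices):

```python
def feedback_filter(x, b0=1.0, b1=0.5, a1=0.9):
    """y(n) = b0*x(n) + b1*x(n-1) - a1*y(n-1), zero initial conditions.
    The coefficient values here are arbitrary illustrative choices."""
    y, x_prev, y_prev = [], 0.0, 0.0
    for s in x:
        y_now = b0 * s + b1 * x_prev - a1 * y_prev
        y.append(y_now)
        x_prev, y_prev = s, y_now
    return y

x = [1.0, -2.0, 0.25, 3.0]
m = 3
y = feedback_filter(x)
y2 = feedback_filter([0.0] * m + x)   # delay the input by m samples
# y2 is zero for n < m, and equals y delayed by m samples thereafter.
```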

Nonlinear Filter Example: Dynamic Range Compression

A simple practical example of a nonlinear filtering operation is dynamic range compression, such as occurs in Dolby or DBX noise reduction when recording to magnetic tape (which, believe it or not, still happens once in a while). The purpose of dynamic range compression is to map the natural dynamic range of a signal to a smaller range. For example, audio signals can easily span a range of 100 dB or more, while magnetic tape has a linear range on the order of only 55 dB. It is therefore important to compress the dynamic range when making analog recordings to magnetic tape. Compressing the dynamic range of a signal for recording and then expanding it on playback may be called companding (compression/expansion).

Recording engineers often compress the dynamic range of individual tracks to intentionally ``flatten'' their audio dynamic range for greater musical uniformity. Compression is also often applied to a final mix.

Another type of dynamic-range compressor is called a limiter, which is used in recording studios to ``soft limit'' a signal when it begins to exceed the available dynamic range. A limiter may be implemented as a very high compression ratio above some amplitude threshold. This replaces ``hard clipping'' with ``soft limiting,'' which sounds less harsh and might even go unnoticed were there no indicator.

The preceding examples can be modeled as a variable gain that automatically ``turns up the volume'' (increases the gain) when the signal level is low, and turns it down when the level is high. The signal level is normally measured over a short time interval that includes at least one period of the lowest frequency allowed, and typically several periods of any pitched signal present. The gain normally reacts faster to attacks than to decays in audio compressors.

Why Dynamic Range Compression is Nonlinear

We can model dynamic range compression as a level-dependent gain. Multiplying a signal by a constant gain (``volume control''), on the other hand, is a linear operation. Let's check that the scaling and superposition properties of linear systems are satisfied by a constant gain: For any signals $ x_1,x_2$, and for any constants $ \alpha,\beta$, we must have

$\displaystyle g \cdot [\alpha \cdot x_1(n) + \beta \cdot x_2(n)] = \alpha \cdot [g \cdot x_1(n)]
+ \beta \cdot [g \cdot x_2(n)].

Since this is obviously true from the algebraic properties of real or complex numbers, both scaling and superposition have been verified. (For clarity, an explicit ``$ \cdot$'' is used to indicate multiplication.)

Dynamic range compression can also be seen as a time-varying gain factor, so one might be tempted to classify it as a linear, time-varying filter. However, this would be incorrect because the gain $ g$, which multiplies the input, depends on the input signal $ x(n)$. This happens because the compressor must estimate the current signal level in order to normalize it. Dynamic range compression can be expressed symbolically as a filter of the form

$\displaystyle y(n) = g_n(x) \cdot x(n)

where $ g_n(x)$ denotes a gain that depends on the ``current level'' of $ x(\cdot)$ at time $ n$. A common definition of signal level is rms level (the ``root mean square'' [84, p. 75] computed over a sliding time-window). Since many successive samples of $ x$ are needed to estimate the current level, we cannot correctly write $ g[x(n)]$ for the gain function, although we could write something like $ g[x(n-M\!:\!n)]$ (borrowing matlab syntax), where $ M$ is the number of past samples needed to estimate the current amplitude level. In general,

$\displaystyle g(x_1 + x_2)\cdot [x_1(n) + x_2(n)] \neq g(x_1) \cdot x_1(n) + g(x_2) \cdot x_2(n) .

That is, the compression of the sum of two signals is not generally the same as the addition of the two signals compressed individually. Therefore, the superposition condition of linearity fails. It is also clear that the scaling condition fails.
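A toy Python compressor (a hypothetical one-gain-per-block compression law, purely for illustration) makes the superposition failure concrete:

```python
import math

def compress(x, threshold=0.5):
    """Toy block compressor: one gain for the whole block, derived from
    the block's rms level. (A hypothetical compression law.)"""
    rms = math.sqrt(sum(s * s for s in x) / len(x))
    gain = 1.0 if rms <= threshold else threshold / rms
    return [gain * s for s in x]

x1 = [0.4, 0.4, 0.4, 0.4]
x2 = [0.4, 0.4, 0.4, 0.4]
summed = compress([a + b for a, b in zip(x1, x2)])              # gain engages
separate = [a + b for a, b in zip(compress(x1), compress(x2))]  # no gain
# summed != separate, so superposition (hence linearity) fails.
```

Each signal alone sits below the threshold and passes unchanged, but their sum triggers gain reduction: `summed` is about 0.5 per sample while `separate` is 0.8.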

In general, any signal operation that includes a multiplication in which both multiplicands depend on the input signal can be shown to be nonlinear.

A Musical Time-Varying Filter Example

Note, however, that a gain $ g$ may vary with time independently of $ x$ to yield a linear time-varying filter. In this case, linearity may be demonstrated by verifying

$\displaystyle g(n) \left[ \alpha \cdot x_1(n) + \beta \cdot x_2(n)\right]
= \alpha \cdot [g(n)\cdot x_1(n)] + \beta\cdot[g(n)\cdot x_2(n)]

to show that both scaling and superposition hold. A simple example of a linear time-varying filter is a tremolo function, which can be written as a time-varying gain, $ y(n)=g(n)x(n)$. For example, $ g(n) = 1
+ \cos[2\pi (4)nT]$ would give a maximally deep tremolo with 4 swells per second.

Analysis of Nonlinear Filters

There is no general theory of nonlinear systems. A nonlinear system with memory can be quite surprising. In particular, it can emit any output signal in response to any input signal. For example, it could replace all music by Beethoven with something by Mozart, etc. That said, many subclasses of nonlinear filters can be successfully analyzed:

One often-used tool for nonlinear systems analysis is Volterra series [4]. A Volterra series expansion represents a nonlinear system as a sum of iterated convolutions:

$\displaystyle y = h_0 + h_1 \ast x + ((h_{2,n} \ast x)_n \ast x) + \cdots

Here $ x(n)$ is the input signal, $ y(n)$ is the output signal, and the impulse-response replacements $ h_i(n)$ are called Volterra kernels. The special notation $ ((h_{2,n} \ast x)_n \ast x)$ indicates that the second-order kernel $ h_2$ is fundamentally two-dimensional, meaning that the third term above (the first nonlinear term) is written out explicitly as

$\displaystyle ((h_{2,n} \ast x)_n \ast x) \isdef \sum_{l=0}^\infty\sum_{m=0}^\infty h_2(l,m)\, x(n-l)\,x(n-m).$

Similarly, the third-order kernel $ h_3$ is three-dimensional, in general. In principle, every nonlinear system can be represented by its (typically infinite) Volterra series expansion. The method is most successful when the kernels rapidly approach zero as order increases.
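A direct (if inefficient) evaluation of the second-order Volterra term, for a finite causal kernel and finite signal, can be sketched as:

```python
def volterra2(h2, x, n):
    """Second-order Volterra term at time n for a finite causal kernel:
    sum over l, m of h2[l][m] * x[n-l] * x[n-m]."""
    total = 0.0
    for l, row in enumerate(h2):
        for m, h in enumerate(row):
            if 0 <= n - l < len(x) and 0 <= n - m < len(x):
                total += h * x[n - l] * x[n - m]
    return total

# Example: a diagonal kernel reduces to weighted squares of delayed input.
h2 = [[1.0, 0.0],
      [0.0, 2.0]]
x = [3.0, 4.0]
value = volterra2(h2, x, 1)   # 1*x(1)^2 + 2*x(0)^2 = 16 + 18 = 34
```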

In the special case for which the Volterra expansion reduces to

$\displaystyle y = h_0 + h_1 \ast x + h_2 \ast x \ast x + \cdots\,,

we have an immediate frequency-domain interpretation in which the output spectrum is expressed as a power series in the input spectrum:

$\displaystyle Y = H_0 + H_1 X + H_2 X^2 + \cdots\,.


This chapter has discussed the concepts of linearity and time-invariance in some detail, with various examples considered. In the rest of this book, all filters discussed will be linear and (at least approximately) time-invariant. For brevity, these will be referred to as LTI filters.

Linearity and Time-Invariance Problems

See http://ccrma.stanford.edu/~jos/filtersp/Linearity_Time_Invariance_Problems.html
