A View of Linear Time Varying Digital Filters
As discussed in Appendix F, linear time-varying (LTV) digital filters may be represented as matrix operators on the linear space of discrete-time signals. Using the matrix representation, this appendix provides an interpretation of LTV filters that the author has found conceptually useful. In this interpretation, the input signal is first expanded into a linear combination of orthogonal basis signals. The LTV filter can then be seen as replacing each basis signal with a new (arbitrary) signal. In particular, when the input basis is taken to be sinusoidal, as in the Discrete Fourier Transform (DFT), one may readily design a time-varying filter to emit any prescribed waveform in response to each frequency component of an input signal.
Introduction
The most common type of filter dealt with in practice is a linear, causal, and time-invariant operator on the vector space consisting of arbitrary real-valued functions of time. Since we are dealing with the space of functions of time, we will use the terms vector, function, and signal interchangeably. When time is a continuous variable, the vector space is infinite-dimensional even when time is restricted to a finite interval. Digital filters are theoretically simpler in many ways because finite-duration digital signals occupy a finite-dimensional vector space. Furthermore, every linear operator on the space of digital signals may be represented as a matrix. If the range of time is restricted to $N$ samples, then an arbitrary linear operator is an $N\times N$ matrix. In the discussion that follows, we will be exclusively concerned with the digital domain: every linear filter will be representable as a matrix, and every signal will be expressible as a column vector.
Linearity implies the superposition principle, which is indispensable for general filter response analysis. The superposition principle states that if a signal $x$ is represented as a linear combination of signals $x_i$, say $x = \sum_i \alpha_i x_i$, then the response $y$ of any linear filter $\mathcal{L}$ may be written as the same linear combination of the signals $y_i$, where $y_i \isdef \mathcal{L}\{x_i\}$. More generally,
\begin{displaymath}
\mathcal{L}\left\{\sum_i \alpha_i x_i\right\} = \sum_i \alpha_i\,\mathcal{L}\{x_i\}.
\end{displaymath}
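As a minimal numerical check of this principle, the following Python/NumPy sketch (with a made-up matrix filter, signals, and coefficients) verifies that a matrix operator satisfies superposition:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
L = rng.standard_normal((N, N))   # an arbitrary linear filter as an N x N matrix
x1 = rng.standard_normal(N)       # two arbitrary input signals
x2 = rng.standard_normal(N)
a1, a2 = 2.0, -0.5                # arbitrary combination coefficients

# Superposition: the response to a linear combination of inputs equals the same
# linear combination of the individual responses.
assert np.allclose(L @ (a1 * x1 + a2 * x2), a1 * (L @ x1) + a2 * (L @ x2))
```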

Causality means that the filter output does not depend on future inputs. This is unavoidable in analog filters, where time is a physical variable, but for digital filters causality is unnecessary unless the filter must operate in real time. Requiring a filter to be causal results in a lower-triangular matrix representation.
A time-invariant filter is one whose response does not depend on the time of excitation. This allows superposition in time in addition to the superposition of component functions given by linearity. A matrix representing a linear time-invariant filter is Toeplitz (each diagonal is constant). The chief value of time-invariance is that it allows a linear filter to be represented by its impulse response, which, for digital filters, is the response elicited by the signal $[1, 0, 0, \ldots]$. A deeper consequence of superposition in time together with superposition of component-signal responses is the fact that every stable linear time-invariant filter emits a sinusoid at frequency $\omega$ in response to an input sinusoid at frequency $\omega$, after sufficient time for start-up transients to settle. For this reason, sinusoids are called eigenfunctions of linear time-invariant systems. Another way of putting it is that a linear time-invariant filter can only modify a sinusoidal input by a constant scaling of its amplitude and a constant offset in its phase. This is the rationale behind Fourier analysis. The Laplace transform of the impulse response gives the transfer function, and the Fourier transform of the impulse response gives the frequency response. It is important to note that relaxing time-invariance only prevents us from using superposition in time. Consequently, while we can no longer uniquely characterize a filter in terms of its impulse response, we may still characterize it in terms of its basis-function responses.
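As a minimal sketch of these facts, assuming a made-up FIR impulse response $h$ and test frequency, the following Python code builds the lower-triangular Toeplitz matrix of a causal LTI filter, recovers $h$ as the response to $[1,0,0,\ldots]$, and checks the eigenfunction property once the start-up transient has passed (SciPy's toeplitz helper is used for convenience):

```python
import numpy as np
from scipy.linalg import toeplitz

# A causal, time-invariant digital filter on N samples is a lower-triangular
# Toeplitz matrix whose first column is the (here made-up, FIR) impulse response.
N = 64
h = np.array([1.0, 0.5, 0.25])
H = toeplitz(np.r_[h, np.zeros(N - len(h))], np.zeros(N))

# The impulse response is the response to the signal [1, 0, 0, ...]:
impulse = np.zeros(N)
impulse[0] = 1.0
assert np.allclose(H @ impulse, np.r_[h, np.zeros(N - len(h))])

# Sinusoids are eigenfunctions: once the start-up transient (len(h)-1 samples)
# has passed, the output is the input sinusoid scaled by the frequency response.
w = 2 * np.pi * 5 / N                                    # arbitrary test frequency
x = np.exp(1j * w * np.arange(N))
y = H @ x
gain = np.sum(h * np.exp(-1j * w * np.arange(len(h))))   # frequency response at w
assert np.allclose(y[len(h) - 1:], gain * x[len(h) - 1:])
```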
This will be developed below for the particular basis functions used in the Discrete Fourier Transform (DFT). These basis functions are defined for the $N$-dimensional discrete-time signal space as
\begin{displaymath}
s_k(n) \isdef e^{j 2\pi k n/N}, \qquad n = 0, 1, \ldots, N-1, \qquad k = 0, 1, \ldots, N-1.
\end{displaymath}
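A short NumPy check of the orthogonality (and hence spanning) property of these basis signals, under the definition written above, $s_k(n)=e^{j2\pi kn/N}$:

```python
import numpy as np

N = 8
n = np.arange(N)
# Rows of S are the DFT basis signals s_k(n) = exp(j*2*pi*k*n/N):
S = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N)

# Orthogonality: (1/N) <s_k, s_l> = 1 if k == l, else 0.
assert np.allclose(S.conj() @ S.T / N, np.eye(N))
```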










Derivation
For notational simplicity, we restrict exposition to the
three-dimensional case. The general linear digital filter equation
is written in three dimensions as
\begin{displaymath}
\left[
\begin{array}{c}
y_0 \\ [2pt]
y_1 \\ [2pt]
y_2
\end{array}\right]
=
\left[
\begin{array}{ccc}
h_{0,0} & h_{0,1} & h_{0,2} \\ [2pt]
h_{1,0} & h_{1,1} & h_{1,2} \\ [2pt]
h_{2,0} & h_{2,1} & h_{2,2}
\end{array}\right]
\left[
\begin{array}{c}
x_0 \\ [2pt]
x_1 \\ [2pt]
x_2
\end{array}\right].
\end{displaymath}
If the filter were causal and time-invariant, its matrix would reduce to the lower-triangular Toeplitz form
\begin{displaymath}
H=\left[
\begin{array}{ccc}
h_0 & 0 & 0 \\ [2pt]
h_1 & h_0 & 0 \\ [2pt]
h_2 & h_1 & h_0
\end{array}\right].
\end{displaymath}
Consider the non-causal time-varying filter defined by
\begin{displaymath}
C_3(k)=\frac{1}{3}\left[
\begin{array}{ccc}
1 & W_3^1(k) & W_3^2(k) \\ [2pt]
1 & W_3^1(k) & W_3^2(k) \\ [2pt]
1 & W_3^1(k) & W_3^2(k)
\end{array}\right],
\end{displaymath}
where $W_3^m(k) \isdef e^{-j2\pi k m/3}$. We may call $C_3(k)$ the collector matrix corresponding to the $k$th frequency. We have
\begin{eqnarray*}
C_3(0)&=&\frac{1}{3}\left[
\begin{array}{ccc}
1 & 1 & 1 \\ [2pt]
1 & 1 & 1 \\ [2pt]
1 & 1 & 1
\end{array}\right]\\ [10pt]
C_3(1)&=&\frac{1}{3}\left[
\begin{array}{ccc}
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}} \\ [2pt]
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}} \\ [2pt]
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}}
\end{array}\right]\\ [10pt]
C_3(2)&=&\frac{1}{3}\left[
\begin{array}{ccc}
1 & e^{j\frac{2\pi}{3}} & e^{-j\frac{2\pi}{3}} \\ [2pt]
1 & e^{j\frac{2\pi}{3}} & e^{-j\frac{2\pi}{3}} \\ [2pt]
1 & e^{j\frac{2\pi}{3}} & e^{-j\frac{2\pi}{3}}
\end{array}\right].
\end{eqnarray*}
The top row of each matrix is recognized as a basis function for the order three DFT (equispaced vectors on the unit circle). Accordingly, we have the orthogonality and spanning properties of these vectors. So let us define a basis for the signal space ${\bf C}^3$ by
\begin{displaymath}
x_0\isdef \left[
\begin{array}{c}
1 \\ [2pt]
1 \\ [2pt]
1
\end{array}\right],
\qquad
x_1\isdef \left[
\begin{array}{c}
1 \\ [2pt]
e^{j\frac{2\pi}{3}} \\ [2pt]
e^{-j\frac{2\pi}{3}}
\end{array}\right],
\qquad
x_2\isdef \left[
\begin{array}{c}
1 \\ [2pt]
e^{-j\frac{2\pi}{3}} \\ [2pt]
e^{j\frac{2\pi}{3}}
\end{array}\right].
\end{displaymath}
Then every component of $C_3(k)x_k$ equals $1$, and every component of $C_3(k)x_i$ equals $0$ when $i\neq k$. Now since any signal $X$ in ${\bf C}^3$ may be written as a linear combination of the basis signals, $X = \sum_{i=0}^2\alpha_i x_i$, we find that
\begin{displaymath}
C_3(k)X =
C_3(k)\sum_{i=0}^2\alpha_ix_i =
\sum_{i=0}^2\alpha_i C_3(k)x_i =
\alpha_k\left[
\begin{array}{c}
1 \\ [2pt]
1 \\ [2pt]
1
\end{array}\right].
\end{displaymath}
In words, the collector matrix $C_3(k)$ "collects" the amount of the $k$th basis function present in the input signal and outputs that amount as a constant signal. To obtain a filter that responds to the $k$th basis function with an arbitrary prescribed waveform $y_k$, we need only follow the collector with the diagonal matrix
\begin{displaymath}
D_k \isdef \left[
\begin{array}{ccc}
y_k(0) & 0 & 0 \\ [2pt]
0 & y_k(1) & 0 \\ [2pt]
0 & 0 & y_k(2)
\end{array}\right],
\end{displaymath}
since $D_k$ applied to the constant unit signal yields $y_k$. Superimposing one such term for each basis function gives the linear time-varying filter
\begin{displaymath}
H = \sum_{k=0}^{2} D_k\,C_3(k),
\end{displaymath}
which maps each basis function $x_k$ to its prescribed waveform $y_k$.
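As a minimal sketch, assuming the convention adopted above (each row of $C_3(k)$ is the conjugated $k$th DFT basis function scaled by $1/3$) and made-up prescribed waveforms $y_k$, the following NumPy code verifies the collector property and the decomposition $H=\sum_k D_k\,C_3(k)$:

```python
import numpy as np

N = 3
n = np.arange(N)
X = np.exp(2j * np.pi * np.outer(n, np.arange(N)) / N)   # columns are x_0, x_1, x_2

def collector(k, N=3):
    """Collector matrix C_N(k): every row is the conjugated k-th DFT basis function / N."""
    row = np.exp(-2j * np.pi * k * np.arange(N) / N) / N
    return np.tile(row, (N, 1))

ones = np.ones(N)

# Collector property: C_3(k) x_i is the constant unit signal when i == k, zero otherwise.
for k in range(N):
    for i in range(N):
        target = ones if i == k else np.zeros(N)
        assert np.allclose(collector(k) @ X[:, i], target)

# Prescribe an arbitrary (made-up) output waveform y_k for each basis function,
# and build the LTV filter H = sum_k D_k C_3(k), with D_k = diag(y_k):
rng = np.random.default_rng(1)
Y = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))   # column k is y_k
H = sum(np.diag(Y[:, k]) @ collector(k) for k in range(N))

# Each basis function is mapped to its prescribed waveform:
for k in range(N):
    assert np.allclose(H @ X[:, k], Y[:, k])
```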










The uniqueness of the decomposition is easy to verify. Suppose there are two distinct decompositions of this form,
\begin{displaymath}
H = \sum_{k=0}^{2} D_k\,C_3(k) = \sum_{k=0}^{2} \tilde{D}_k\,C_3(k).
\end{displaymath}
Applying both to the basis vector $x_k$ leaves $D_k$ and $\tilde{D}_k$ acting on the constant unit signal, so the diagonal of $D_k$ must equal the diagonal of $\tilde{D}_k$. Since this holds for every $k$, the two decompositions are in fact identical, contradicting the assumption that they are distinct.
That every linear time-varying filter may be expressed in this form is also easy to show. Given an arbitrary filter matrix of order $N$, measure its response to each of the $N$ basis functions (sines and cosines may be used in place of the complex exponentials when a real basis is preferred) to obtain a set of $N\times 1$ column vectors $y_k$, $k=0,1,\ldots,N-1$. The output vector $y_k$ due to the $k$th basis vector is then precisely the diagonal of $D_k$.
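Conversely, a minimal sketch of the measurement argument just given: starting from an arbitrary (made-up) filter matrix, the basis-function responses supply the diagonals of the $D_k$, and the decomposition reproduces the matrix:

```python
import numpy as np

N = 3
n = np.arange(N)
X = np.exp(2j * np.pi * np.outer(n, np.arange(N)) / N)   # DFT basis vectors as columns

def collector(k, N=3):
    """Collector matrix C_N(k): every row is the conjugated k-th DFT basis function / N."""
    row = np.exp(-2j * np.pi * k * np.arange(N) / N) / N
    return np.tile(row, (N, 1))

# An arbitrary (made-up) filter matrix of order N:
rng = np.random.default_rng(2)
H = rng.standard_normal((N, N))

# Measure the response to each basis function; it supplies the diagonal of D_k:
D = [np.diag(H @ X[:, k]) for k in range(N)]

# The decomposition sum_k D_k C_N(k) reproduces H exactly (up to round-off):
H_rebuilt = sum(D[k] @ collector(k) for k in range(N))
assert np.allclose(H_rebuilt, H)
```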
Summary
A representation of an arbitrary linear time-varying digital filter has been constructed which characterizes such a filter as having the ability to generate an arbitrary output in response to each basis function in the signal space. The representation was obtained by casting the filter in moving-average form as a matrix, and studying its response to individual orthogonal basis functions which were chosen here to be complex sinusoids. The overall conclusion is that time-varying filters may be used to convert from a set of orthogonal signals (such as tones at distinct frequencies) to a set of unconstrained waveforms in a one-to-one fashion. Linear combinations of these orthogonal signals are then transformed by the LTV filter to the same linear combination of the transformed basis signals.