DSPRelated.com
Free Books

A View of Linear Time Varying Digital Filters

As discussed in Appendix F, linear time-varying (LTV) digital filters may be represented as matrix operators on the linear space of discrete time signals. Using the matrix representation, this appendix provides an interpretation of LTV filters that the author has found conceptually useful. In this interpretation, the input signal is first expanded into a linear combination of orthogonal basis signals. Then the LTV filter can be seen as replacing each basis signal with a new (arbitrary) basis signal. In particular, when the input-basis is taken to be sinusoidal, as in the Discrete Fourier Transform (DFT), one may readily design a time varying filter to emit any prescribed waveform in response to each frequency component of an input signal.

Introduction

The most common type of filter dealt with in practice is a linear, causal, and time-invariant operator on the vector space consisting of arbitrary real-valued functions of time. Since we are dealing with the space of functions of time, we will use the terms vector, function, and signal interchangeably. When time is a continuous variable, the vector space is infinite-dimensional even when time is restricted to a finite interval. Digital filters are theoretically simpler in many ways because finite-length digital signals occupy a finite-dimensional vector space. Furthermore, every linear operator on the space of digital signals may be represented as a matrix: if the range of time is restricted to $ N$ samples, then an arbitrary linear operator is an $ N\times N$ matrix. In the discussion that follows, we will be exclusively concerned with the digital domain. Every linear filter will be representable as a matrix, and every signal will be expressible as a column vector.

Linearity implies the superposition principle, which is indispensable for the general filter response analysis below. The superposition principle states that if a signal $ X$ is represented as a linear combination of signals $ \{x_1,x_2,\ldots\}$, then the response $ Y$ of any linear filter $ H$ may be written as the same linear combination of the signals $ \{y_1,y_2,\ldots\}$, where $ y_i=Hx_i$. More generally,

$\displaystyle Y = HX =
H\sum_{i=-\infty }^\infty \alpha_ix_i =
\sum_{i=-\infty }^\infty \alpha_iHx_i = \sum_{i=-\infty }^\infty \alpha_iy_i
.
$

A set of signals that can be used to express every signal in the space is called a set of basis functions. An example of a basis set is the familiar set of sinusoids at all frequencies. The most crucial use of linearity for our purposes is the representation of an arbitrary linear filter as a matrix operator.
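As a concrete check, the superposition principle can be verified numerically for an arbitrary matrix operator. The following NumPy sketch uses a random matrix, random component signals, and arbitrary coefficients purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8
H = rng.standard_normal((N, N))   # arbitrary linear filter as an N by N matrix
x1 = rng.standard_normal(N)       # two component signals
x2 = rng.standard_normal(N)
a1, a2 = 2.0, -0.5                # arbitrary combination coefficients

# Response to the linear combination of inputs ...
y_combined = H @ (a1 * x1 + a2 * x2)

# ... equals the same linear combination of the individual responses.
y_superposed = a1 * (H @ x1) + a2 * (H @ x2)

print(np.allclose(y_combined, y_superposed))  # True
```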

Causality means that the filter output does not depend on future inputs. This is necessary in analog filters, where time is a physical variable, but for digital filters causality is unnecessary unless the filter must operate in real time. Requiring a filter to be causal results in a lower-triangular matrix representation.

A time-invariant filter is one whose response does not depend on the time of excitation. This allows superposition in time in addition to the superposition of component functions given by linearity. A matrix representing a linear time-invariant filter is Toeplitz (each diagonal is constant). The chief value of time-invariance is that it allows a linear filter to be represented by its impulse response, which, for digital filters, is the response elicited by the signal $ (1,0,0,\ldots)$. A deeper consequence of superposition in time together with superposition of component signal responses is that every stable linear time-invariant filter emits a sinusoid at frequency $ f$ in response to an input sinusoid at frequency $ f$, after sufficient time for start-up transients to settle. For this reason, sinusoids are called eigenfunctions of linear time-invariant systems. Put another way, a linear time-invariant filter can only modify a sinusoidal input by a constant scaling of its amplitude and a constant offset in its phase. This is the rationale behind Fourier analysis. The Laplace transform of the impulse response gives the transfer function, and the Fourier transform of the impulse response gives the frequency response. It is important to note that relaxing time-invariance only prevents us from using superposition in time. Consequently, while we can no longer uniquely characterize a filter in terms of its impulse response, we may still characterize it in terms of its basis-function response.
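The eigenfunction property can be illustrated numerically. For a finite matrix, the property holds exactly in the circulant (circular-convolution) special case of a Toeplitz matrix, which sidesteps start-up transients; the impulse response and bin number below are arbitrary illustrations:

```python
import numpy as np

N = 16
h = np.array([1.0, 0.5, 0.25])                  # arbitrary FIR impulse response
hN = np.concatenate([h, np.zeros(N - len(h))])

# Circulant (circular-convolution) matrix: H[i, j] = h[(i - j) mod N].
idx = (np.arange(N)[:, None] - np.arange(N)[None, :]) % N
H = hN[idx]

k = 3                                           # any DFT frequency bin
x = np.exp(2j * np.pi * k * np.arange(N) / N)   # complex sinusoid at bin k

# The output is the same sinusoid, scaled by the frequency response at bin k.
gain = np.fft.fft(hN)[k]
print(np.allclose(H @ x, gain * x))             # True
```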

This will be developed below for the particular basis functions used in the Discrete Fourier Transform (DFT). These basis functions are defined on the $ N$-dimensional discrete-time signal space as

$\displaystyle W_N^n(k)=e^{j2\pi \frac{kn}{N}},\qquad k,n = 0,1,2,\ldots,N-1
$

where $ n$ is the time index, and $ k$ is the discrete frequency index. To be more concrete, we could define $ \omega_k=2\pi f_s k/N$ as the $ k^{th}$ radian frequency and $ t_n=nT$ as the time of the $ n^{th}$ sample, where the sampling rate is $ f_s=1/T$. Note that $ W_N^n(k)$ is a sampled version of the continuous-time sinusoidal basis function $ e^{j\omega t}$ used in the Fourier transform. There are no eigenfunctions for general time-varying filters, so there is no fundamental reason to prefer the Fourier basis over any other basis. The basis set may be chosen according to the most natural decomposition of the input signal space without a penalty in complexity.
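These basis signals are easy to generate, and their orthogonality can be checked directly (a minimal NumPy sketch; $ N=8$ is an arbitrary choice):

```python
import numpy as np

N = 8
n = np.arange(N)
k = np.arange(N)

# W[n, k] = exp(j*2*pi*k*n/N): column k is the k-th DFT basis signal.
W = np.exp(2j * np.pi * np.outer(n, k) / N)

# Orthogonality: distinct columns have zero inner product;
# each column has squared norm N, so W^H W = N * I.
print(np.allclose(W.conj().T @ W, N * np.eye(N)))  # True
```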


Derivation

For notational simplicity, we restrict exposition to the three-dimensional case. The general linear digital filter equation $ Y=HX$ is written in three dimensions as

\begin{displaymath}
\left[
\begin{array}{c}
y_0 \\ [2pt]
y_1 \\ [2pt]
y_2
\end{array}\right]
=
\left[
\begin{array}{ccc}
h_{00} & h_{01} & h_{02} \\ [2pt]
h_{10} & h_{11} & h_{12} \\ [2pt]
h_{20} & h_{21} & h_{22}
\end{array}\right]
\left[
\begin{array}{c}
x_0 \\ [2pt]
x_1 \\ [2pt]
x_2
\end{array}\right].
\end{displaymath}

where $ x_i$ is regarded as the input sample at time $ i$, and $ y_i$ is the output sample at time $ i$. The general causal time-invariant filter appears in three-space as

\begin{displaymath}
H=\left[
\begin{array}{ccc}
h_0 & 0 & 0 \\ [2pt]
h_1 & h_0 & 0 \\ [2pt]
h_2 & h_1 & h_0
\end{array}\right].
\end{displaymath}
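Such a matrix can be built directly from an impulse response; applying it to a unit impulse recovers the impulse response, and applying it to a general input performs (truncated) convolution. A sketch, with an arbitrary illustrative impulse response:

```python
import numpy as np

N = 3
h = np.array([1.0, -0.5, 0.25])        # arbitrary impulse response h_0, h_1, h_2

# Causal, time-invariant filter: lower-triangular Toeplitz matrix,
# H[i, j] = h[i - j] for i >= j and 0 otherwise.
H = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        H[i, j] = h[i - j]

impulse = np.array([1.0, 0.0, 0.0])
print(np.allclose(H @ impulse, h))      # True: the first column is the impulse response

x = np.array([1.0, 2.0, 3.0])
print(np.allclose(H @ x, np.convolve(h, x)[:N]))  # True: matrix product = convolution
```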

Consider the non-causal time-varying filter defined by

\begin{displaymath}
C_3(k)=\frac{1}{3}\left[
\begin{array}{ccc}
1 & \overline{W_3^1(k)} & \overline{W_3^2(k)} \\ [2pt]
1 & \overline{W_3^1(k)} & \overline{W_3^2(k)} \\ [2pt]
1 & \overline{W_3^1(k)} & \overline{W_3^2(k)}
\end{array}\right],
\end{displaymath}

where the overbar denotes complex conjugation.

We may call $ C_3(k)$ the collector matrix corresponding to the $ k^{th}$ frequency. We have

\begin{eqnarray*}
C_3(0)&=&\frac{1}{3}\left[
\begin{array}{ccc}
1 & 1 & 1 \\ [2pt]
1 & 1 & 1 \\ [2pt]
1 & 1 & 1
\end{array}\right]\\
C_3(1)&=&\frac{1}{3}\left[
\begin{array}{ccc}
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}} \\ [2pt]
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}} \\ [2pt]
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}}
\end{array}\right]\\
C_3(2)&=&\frac{1}{3}\left[
\begin{array}{ccc}
1 & e^{j\frac{2\pi}{3}} & e^{-j\frac{2\pi}{3}} \\ [2pt]
1 & e^{j\frac{2\pi}{3}} & e^{-j\frac{2\pi}{3}} \\ [2pt]
1 & e^{j\frac{2\pi}{3}} & e^{-j\frac{2\pi}{3}}
\end{array}\right].
\end{eqnarray*}

The top row of each matrix is recognized as a basis function for the order-three DFT (equispaced vectors on the unit circle). Accordingly, these vectors have the orthogonality and spanning properties. So let us define a basis $ \{x_0,x_1,x_2\}$ for the signal space by

\begin{displaymath}
x_0\isdef \left[
\begin{array}{c}
1 \\ [2pt]
1 \\ [2pt]
1
\end{array}\right],\qquad
x_1\isdef \left[
\begin{array}{c}
1 \\ [2pt]
e^{j\frac{2\pi}{3}} \\ [2pt]
e^{-j\frac{2\pi}{3}}
\end{array}\right],\qquad
x_2\isdef \left[
\begin{array}{c}
1 \\ [2pt]
e^{-j\frac{2\pi}{3}} \\ [2pt]
e^{j\frac{2\pi}{3}}
\end{array}\right].
\end{displaymath}

Then every component of $ C_3(k)x_k$ equals 1, and every component of $ C_3(k)x_j$ equals 0 when $ k\neq j$. Now since any signal $ X$ in $ \Re ^3$ may be written as a linear combination of $ \{x_0,x_1,x_2\}$, we find that

\begin{displaymath}
C_3(k)X =
C_3(k)\sum_{i=0}^2\alpha_ix_i =
\sum_{i=0}^2\alpha_iC_3(k)x_i =
\alpha_k\left[
\begin{array}{c}
1 \\ [2pt]
1 \\ [2pt]
1
\end{array}\right].
\end{displaymath}
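The annihilation property of the collector matrices can be checked numerically. The sketch below builds each $ C_3(k)$ from conjugated DFT basis rows (matching the matrices displayed above) and tests its action on every basis vector:

```python
import numpy as np

N = 3
n = np.arange(N)
basis = [np.exp(2j * np.pi * k * n / N) for k in range(N)]  # x_0, x_1, x_2

def collector(k):
    # Every row of C_3(k) is the conjugated k-th basis signal, scaled by 1/N.
    row = basis[k].conj() / N
    return np.tile(row, (N, 1))

# C_3(k) passes the k-th basis vector (as a vector of ones) and annihilates the rest.
for k in range(N):
    for j in range(N):
        expected = np.ones(N) if j == k else np.zeros(N)
        assert np.allclose(collector(k) @ basis[j], expected)
print("annihilation property verified")
```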

Consequently, we observe that $ C_N(k)$ is a matrix which annihilates every input basis component except the $ k^{th}$. Now multiply $ C_N(k)$ on the left by a diagonal matrix $ D(k)$ chosen so that the product $ D(k)C_N(k)$ applied to $ x_k$ gives an arbitrary column vector $ (d_1,d_2,d_3)$. Then every linear time-varying filter $ G$ is expressible as a sum of such products, as we will show below. In general, the decomposition of any filter on $ \Re ^N$ is simply

$\displaystyle G=\sum_{k=0}^{N-1}D(k)C_N(k). \protect$ (H.1)

The uniqueness of the decomposition is easy to verify: suppose there are two distinct decompositions of the form of Eq.$ \,$(H.1). Then for some $ k$ the matrices $ D(k)$ differ. However, this implies that we can obtain two distinct outputs in response to the $ k^{th}$ input basis function, which is absurd.

That every linear time-varying filter may be expressed in this form is also easy to show. Given an arbitrary filter matrix of order $ N$, measure its response to each of the $ N$ basis functions (for real signals, sine and cosine replace $ e^{j\omega t}$) to obtain a set of $ N$ column vectors of length $ N$. The output vector due to the $ k^{th}$ basis vector is precisely the diagonal of $ D(k)$.
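This construction is easy to verify numerically: measure the response of a matrix to each complex-sinusoid basis vector, place it on the diagonal of $ D(k)$, and confirm that the sum in Eq.$ \,$(H.1) reproduces the matrix. In the sketch below, a random complex matrix stands in for an arbitrary LTV filter:

```python
import numpy as np

N = 3
rng = np.random.default_rng(1)
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))  # arbitrary LTV filter

n = np.arange(N)
basis = [np.exp(2j * np.pi * k * n / N) for k in range(N)]

terms = []
for k in range(N):
    Dk = np.diag(G @ basis[k])                 # response to basis k on the diagonal: D(k)
    Ck = np.tile(basis[k].conj() / N, (N, 1))  # collector matrix C_N(k)
    terms.append(Dk @ Ck)

# G = sum_k D(k) C_N(k), as in Eq. (H.1).
print(np.allclose(G, sum(terms)))              # True
```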


Summary

A representation of an arbitrary linear time-varying digital filter has been constructed which characterizes such a filter as having the ability to generate an arbitrary output in response to each basis function in the signal space. The representation was obtained by casting the filter in moving-average form as a matrix, and studying its response to individual orthogonal basis functions which were chosen here to be complex sinusoids. The overall conclusion is that time-varying filters may be used to convert from a set of orthogonal signals (such as tones at distinct frequencies) to a set of unconstrained waveforms in a one-to-one fashion. Linear combinations of these orthogonal signals are then transformed by the LTV filter to the same linear combination of the transformed basis signals.

