As discussed in Appendix F, linear time-varying (LTV) digital filters may be represented as matrix operators on the linear space of discrete-time signals. Using the matrix representation, this appendix provides an interpretation of LTV filters that the author has found conceptually useful. In this interpretation, the input signal is first expanded into a linear combination of orthogonal basis signals. The LTV filter can then be seen as replacing each basis signal with a new (arbitrary) basis signal. In particular, when the input basis is taken to be sinusoidal, as in the Discrete Fourier Transform (DFT), one may readily design a time-varying filter to emit any prescribed waveform in response to each frequency component of an input signal.
The most common type of filter dealt with in practice is a linear, causal, time-invariant operator on the vector space consisting of arbitrary real-valued functions of time. Since we are dealing with a space of functions of time, we will use the terms vector, function, and signal interchangeably. When time is a continuous variable, the vector space is infinite-dimensional, even when time is restricted to a finite interval. Digital filters are theoretically simpler in many ways because finite-time digital signals occupy a finite-dimensional vector space. Furthermore, every linear operator on the space of digital signals may be represented as a matrix. If the range of time is restricted to $N$ samples, then an arbitrary linear operator is an $N\times N$ matrix. In the discussion that follows, we will be exclusively concerned with the digital domain. Every linear filter will be representable as a matrix, and every signal will be expressible as a column vector.
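The construction of this matrix can be made concrete with a small NumPy sketch (the two-point moving-average filter here is a hypothetical example, not from the text): recording a linear operation's response to each unit impulse yields the columns of its matrix representation.

```python
import numpy as np

# Any linear operation on length-N signals can be captured as an N x N
# matrix by recording its response to each standard basis vector
# (a unit impulse at each time index).
N = 4

def filt(x):
    """Example linear operation: a causal two-point moving average."""
    y = np.zeros(N)
    for n in range(N):
        y[n] = 0.5 * x[n] + (0.5 * x[n - 1] if n > 0 else 0.0)
    return y

# Column k of the matrix is the response to the k-th unit impulse.
H = np.column_stack([filt(np.eye(N)[:, k]) for k in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(H @ x, filt(x))   # matrix form agrees with direct form
assert np.allclose(H, np.tril(H))    # causal => lower triangular
```

Because this example filter happens to be causal and time-invariant, its matrix is lower triangular and Toeplitz, anticipating the two properties discussed below.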
Linearity implies the superposition principle, which is presently indispensable for a general filter-response analysis. The superposition principle states that if a signal $\underline{x}$ is represented as a linear combination of signals $\underline{x}_k$, then the response of any linear filter $H$ may be written as the same linear combination of the signals $\underline{y}_k$, where $\underline{y}_k \triangleq H\,\underline{x}_k$. More generally,
$$H\left(\sum_k \alpha_k\,\underline{x}_k\right) \;=\; \sum_k \alpha_k\,H\,\underline{x}_k \;=\; \sum_k \alpha_k\,\underline{y}_k$$
for arbitrary scalars $\alpha_k$.
Causality means that the filter output does not depend on future inputs. This is necessary in analog filters, where time is a physical variable, but for digital filters causality is unnecessary unless the filter must operate in real time. Requiring a filter to be causal results in a lower triangular matrix representation.
A time-invariant filter is one whose response does not depend on the time of excitation. This allows superposition in time in addition to the superposition of component functions given by linearity. A matrix representing a linear time-invariant filter is Toeplitz (each diagonal is constant). The chief value of time-invariance is that it allows a linear filter to be represented by its impulse response, which, for digital filters, is the response elicited by the unit impulse $\delta(n)$ ($1$ at time $0$ and $0$ at all other times). A deeper consequence of superposition in time, together with superposition of component signal responses, is the fact that every stable linear time-invariant filter emits a sinusoid at frequency $\omega$ in response to an input sinusoid at frequency $\omega$, after sufficient time for start-up transients to settle. For this reason, sinusoids are called eigenfunctions of linear time-invariant systems. Another way of putting it is that a linear time-invariant filter can only modify a sinusoidal input by a constant scaling of its amplitude and a constant offset in its phase. This is the rationale behind Fourier analysis. The Laplace transform of the impulse response gives the transfer function, and the Fourier transform of the impulse response is the frequency response. It is important to note that relaxing time-invariance only prevents us from using superposition in time. Consequently, while we can no longer uniquely characterize a filter in terms of its impulse response, we may still characterize it in terms of its basis-function responses.
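The eigenfunction property can be checked numerically in the finite-length setting, under the assumption (mine, for the sketch) that circular convolution plays the role of time-invariant filtering, so that the filter matrix is circulant and DFT sinusoids are exact eigenvectors with eigenvalues given by the frequency response:

```python
import numpy as np

# Finite-N analogue of LTI filtering: circular convolution by h,
# represented as a circulant (wrap-around Toeplitz) matrix.
N = 8
h = np.array([1.0, 0.5, 0.25] + [0.0] * (N - 3))   # impulse response

# Column m of H is h cyclically shifted by m, so H[n, m] = h[(n - m) % N].
H = np.column_stack([np.roll(h, m) for m in range(N)])

k = 2                                            # pick one DFT frequency bin
s = np.exp(2j * np.pi * k * np.arange(N) / N)    # complex sinusoid at bin k
lam = np.fft.fft(h)[k]                           # frequency response at bin k

assert np.allclose(H @ s, lam * s)   # sinusoid in -> scaled sinusoid out
```

The sinusoid emerges scaled in amplitude by $|\lambda|$ and offset in phase by $\angle\lambda$, exactly as the text describes.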
For notational simplicity, we restrict the exposition to the three-dimensional case. The general linear digital filter equation is written in three dimensions as
$$\left[\begin{array}{c} y(0) \\ y(1) \\ y(2) \end{array}\right] = \left[\begin{array}{ccc} h(0,0) & h(0,1) & h(0,2) \\ h(1,0) & h(1,1) & h(1,2) \\ h(2,0) & h(2,1) & h(2,2) \end{array}\right] \left[\begin{array}{c} x(0) \\ x(1) \\ x(2) \end{array}\right].$$
Consider the non-causal time-varying filter defined by
$$H_k \;\triangleq\; \frac{1}{3}\left[\begin{array}{ccc} 1 & e^{-j\omega_k} & e^{-j2\omega_k} \\ 1 & e^{-j\omega_k} & e^{-j2\omega_k} \\ 1 & e^{-j\omega_k} & e^{-j2\omega_k} \end{array}\right], \qquad \omega_k \;\triangleq\; \frac{2\pi k}{3}, \quad k = 0, 1, 2.$$
We may call $H_k$ the collector matrix corresponding to the $k$th frequency $\omega_k = 2\pi k/3$. We have
$$H_k\,\underline{x} \;=\; \left(\frac{1}{3}\sum_{n=0}^{2} x(n)\,e^{-j\omega_k n}\right)\left[\begin{array}{c} 1 \\ 1 \\ 1 \end{array}\right]$$
for every input $\underline{x}$; that is, every output sample holds the same number, the normalized DFT of the input at frequency $\omega_k$. In this sense, $H_k$ "collects" the amplitude of the input at the $k$th frequency.
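The collector property can be verified numerically. The sketch below assumes the collector matrix has every row equal to the conjugated $k$th DFT sinusoid scaled by $1/N$ (with $N=3$), consistent with the orthogonality argument that follows:

```python
import numpy as np

# Collector matrices for the order-3 DFT basis.
N = 3
n = np.arange(N)
s = [np.exp(2j * np.pi * k * n / N) for k in range(N)]   # DFT basis signals

def collector(k):
    """H_k: every row is the conjugated k-th sinusoid, scaled by 1/N."""
    row = np.conj(s[k]) / N
    return np.tile(row, (N, 1))

for k in range(N):
    Hk = collector(k)
    for l in range(N):
        out = Hk @ s[l]
        if l == k:
            assert np.allclose(out, np.ones(N))    # every component is 1
        else:
            assert np.allclose(out, np.zeros(N))   # other frequencies vanish
```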
The top row of each matrix is recognized as a basis function for the order-three DFT (the samples of each basis function form equispaced vectors on the unit circle). Accordingly, we have the orthogonality and spanning properties of these vectors. So let us define a basis for the signal space by
$$\underline{s}_k \;\triangleq\; \left[\begin{array}{c} s_k(0) \\ s_k(1) \\ s_k(2) \end{array}\right], \qquad s_k(n) \;\triangleq\; e^{j2\pi k n/3}, \quad k = 0, 1, 2.$$
Then every component of $H_k\,\underline{s}_k$ equals $1$, and every component of $H_k\,\underline{s}_l$ equals $0$ when $l \neq k$. Now, since any signal $\underline{x}$ in $\mathbb{C}^3$ may be written as a linear combination of the $\underline{s}_k$, we find that the general linear time-varying filter may be written as
$$H \;=\; \sum_{k=0}^{2} D_k H_k, \tag{H.1}$$
where each $D_k \triangleq \mathrm{diag}\big(d_k(0),\, d_k(1),\, d_k(2)\big)$ is a diagonal matrix containing an arbitrary waveform $\underline{d}_k$. Such a filter responds to the basis sinusoid $\underline{s}_k$ with the output $\underline{d}_k$, and to any linear combination of the $\underline{s}_k$ with the same linear combination of the $\underline{d}_k$.
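A sketch of this design recipe, with hypothetical prescribed waveforms $\underline{d}_k$ chosen arbitrarily for illustration: each basis sinusoid is replaced by its waveform, and superposition carries over to linear combinations.

```python
import numpy as np

N = 3
n = np.arange(N)
s = [np.exp(2j * np.pi * k * n / N) for k in range(N)]   # DFT basis signals

# Prescribed output waveforms, one per input frequency (chosen arbitrarily).
d = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 2.0, 0.0]),
     np.array([1.0, 1.0, -1.0])]

# H = sum_k D_k H_k, with H_k the collector matrix (rows = conj(s_k)/N)
# and D_k = diag(d_k).
H = sum(np.diag(d[k]) @ np.tile(np.conj(s[k]) / N, (N, 1)) for k in range(N))

# Each basis sinusoid maps to its prescribed waveform...
for k in range(N):
    assert np.allclose(H @ s[k], d[k])

# ...and a linear combination of sinusoids maps to the same linear
# combination of the waveforms (superposition).
x = 2.0 * s[0] - 1.5 * s[2]
assert np.allclose(H @ x, 2.0 * d[0] - 1.5 * d[2])
```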
The uniqueness of the decomposition is easy to verify: suppose there are two distinct decompositions of the form Eq. (H.1). Then for some $k$ we have two different $D_k$'s. However, this implies that we can obtain two distinct outputs in response to the single input basis function $\underline{s}_k$, which is absurd.
That every linear time-varying filter may be expressed in this form is also easy to show. Given an arbitrary filter matrix of order $N$, measure its response to each of the $N$ basis functions (with sine and cosine in place of $e^{j\omega_k n}$ when a real basis is desired) to obtain a set of $N$ by $1$ column vectors. The output vector due to the basis vector $\underline{s}_k$ is precisely the diagonal of $D_k$.
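The completeness argument can be tested numerically: starting from a random filter matrix (a stand-in for "an arbitrary filter matrix of order $N$"), measuring its basis responses and reassembling them with collector matrices recovers the original matrix exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
n = np.arange(N)
s = [np.exp(2j * np.pi * k * n / N) for k in range(N)]   # DFT basis signals

A = rng.standard_normal((N, N))     # arbitrary LTV filter matrix

# Measure the response to each basis signal; response k is diag(D_k).
d = [A @ s[k] for k in range(N)]

# Reassemble A as sum_k D_k H_k, with collector rows conj(s_k)/N.
B = sum(np.diag(d[k]) @ np.tile(np.conj(s[k]) / N, (N, 1)) for k in range(N))

assert np.allclose(A, B)            # the representation is exact
```

This works because $B\,\underline{s}_l = A\,\underline{s}_l$ for every $l$ by orthogonality, and the $\underline{s}_l$ span the whole signal space.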
A representation of an arbitrary linear time-varying digital filter has been constructed which characterizes such a filter as having the ability to generate an arbitrary output in response to each basis function in the signal space. The representation was obtained by casting the filter in moving-average form as a matrix, and studying its response to individual orthogonal basis functions which were chosen here to be complex sinusoids. The overall conclusion is that time-varying filters may be used to convert from a set of orthogonal signals (such as tones at distinct frequencies) to a set of unconstrained waveforms in a one-to-one fashion. Linear combinations of these orthogonal signals are then transformed by the LTV filter to the same linear combination of the transformed basis signals.