
Derivation

For notational simplicity, we restrict the exposition to the three-dimensional case. The general linear digital filter equation $Y=HX$ is written in three dimensions as

\begin{displaymath}
\left[
\begin{array}{c}
y_0 \\ [2pt]
y_1 \\ [2pt]
y_2
\end{array}\right]
=
\left[
\begin{array}{ccc}
h_{00} & h_{01} & h_{02} \\ [2pt]
h_{10} & h_{11} & h_{12} \\ [2pt]
h_{20} & h_{21} & h_{22}
\end{array}\right]
\left[
\begin{array}{c}
x_0 \\ [2pt]
x_1 \\ [2pt]
x_2
\end{array}\right].
\end{displaymath}

where $x_i$ is regarded as the input sample at time $i$, and $y_i$ is the output sample at time $i$. The general causal time-invariant filter appears in three-space as

\begin{displaymath}
H=\left[
\begin{array}{ccc}
h_0 & 0 & 0 \\ [2pt]
h_1 & h_0 & 0 \\ [2pt]
h_2 & h_1 & h_0
\end{array}\right].
\end{displaymath}
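As a quick numerical sketch (Python/NumPy; the coefficient values below are arbitrary and chosen only for illustration), the causal time-invariant matrix above is lower-triangular and Toeplitz, and applying it to an input vector reproduces ordinary convolution truncated to three samples:

\begin{verbatim}
import numpy as np

h = np.array([1.0, -0.5, 0.25])        # impulse response h_0, h_1, h_2
x = np.array([2.0, 1.0, 3.0])          # input samples x_0, x_1, x_2

# Causal time-invariant filter matrix: constant along each diagonal,
# zero above the main diagonal.
H = np.array([[h[0], 0.0,  0.0 ],
              [h[1], h[0], 0.0 ],
              [h[2], h[1], h[0]]])

y = H @ x                              # Y = H X
print(np.allclose(y, np.convolve(h, x)[:3]))   # True

# A general (time-varying) filter is any 3x3 matrix; it need not be
# Toeplitz or lower-triangular.
\end{verbatim}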

Consider the non-causal time-varying filter defined by

\begin{displaymath}
C_3(k)=\frac{1}{3}\left[
\begin{array}{ccc}
1 & W_3^1(k) & W_3^2(k) \\ [2pt]
1 & W_3^1(k) & W_3^2(k) \\ [2pt]
1 & W_3^1(k) & W_3^2(k)
\end{array}\right].
\end{displaymath}

where $W_3^m(k)\isdef e^{-j2\pi km/3}$. We may call $C_3(k)$ the collector matrix corresponding to the $k^{th}$ frequency. We have

\begin{eqnarray*}
C_3(0)&=&\frac{1}{3}\left[
\begin{array}{ccc}
1 & 1 & 1 \\ [2pt]
1 & 1 & 1 \\ [2pt]
1 & 1 & 1
\end{array}\right] \\
C_3(1)&=&\frac{1}{3}\left[
\begin{array}{ccc}
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}} \\ [2pt]
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}} \\ [2pt]
1 & e^{-j\frac{2\pi}{3}} & e^{j\frac{2\pi}{3}}
\end{array}\right].
\end{eqnarray*}
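These matrices are easy to generate numerically; the sketch below (Python/NumPy) assumes the convention $W_3^m(k)=e^{-j2\pi km/3}$ introduced above and simply tiles one row, since all rows of a collector matrix are identical:

\begin{verbatim}
import numpy as np

def collector(N, k):
    # Collector matrix C_N(k): every row is [1, W_N^1(k), ..., W_N^{N-1}(k)]
    # with W_N^m(k) = exp(-j*2*pi*k*m/N), scaled by 1/N.
    row = np.exp(-2j * np.pi * k * np.arange(N) / N)
    return np.tile(row, (N, 1)) / N

for k in range(3):
    print(f"C_3({k}) =\n{np.round(collector(3, k), 3)}")
\end{verbatim}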

The top row of each matrix is recognized as a basis function for the order-three DFT (equispaced vectors on the unit circle). Accordingly, these vectors possess the usual orthogonality and spanning properties. So let us define a basis $\{x_0,x_1,x_2\}$ for the signal space by

\begin{displaymath}
x_0\isdef \left[
\begin{array}{c}
1 \\ [2pt]
1 \\ [2pt]
1
\end{array}\right],
\qquad
x_1\isdef \left[
\begin{array}{c}
1 \\ [2pt]
e^{j\frac{2\pi}{3}} \\ [2pt]
e^{-j\frac{2\pi}{3}}
\end{array}\right],
\qquad
x_2\isdef \left[
\begin{array}{c}
1 \\ [2pt]
e^{-j\frac{2\pi}{3}} \\ [2pt]
e^{j\frac{2\pi}{3}}
\end{array}\right].
\end{displaymath}
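As a check (again a Python/NumPy sketch, not part of the derivation), these are the order-three DFT basis vectors, and their pairwise inner products confirm orthogonality:

\begin{verbatim}
import numpy as np

N = 3
n = np.arange(N)
basis = [np.exp(2j * np.pi * k * n / N) for k in range(N)]   # x_0, x_1, x_2

# Gram matrix of inner products <x_i, x_j>; should be N on the
# diagonal and 0 elsewhere (orthogonality).
gram = np.array([[np.vdot(basis[i], basis[j]) for j in range(N)]
                 for i in range(N)])
print(np.round(gram, 10))
\end{verbatim}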

Then every component of $C_3(k)x_k$ equals $1$, and every component of $C_3(k)x_j$ equals $0$ when $k\neq j$. Now since any signal $X$ in $\Re^3$ may be written as a linear combination of $\{x_0,x_1,x_2\}$, we find that

\begin{displaymath}
C_3(k)X =
C_3(k)\sum_{i=0}^2\alpha_ix_i =
\sum_{i=0}^2\alpha_iC_3(k)x_i =
\alpha_k\left[
\begin{array}{c}
1 \\ [2pt]
1 \\ [2pt]
1
\end{array}\right].
\end{displaymath}
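Numerically, the same annihilation can be seen for an arbitrary choice of expansion coefficients (a sketch reusing the collector and basis constructions above; the $\alpha_i$ values are arbitrary):

\begin{verbatim}
import numpy as np

N = 3
n = np.arange(N)
basis = [np.exp(2j * np.pi * k * n / N) for k in range(N)]
C = [np.tile(np.exp(-2j * np.pi * k * n / N), (N, 1)) / N for k in range(N)]

alpha = np.array([0.5, -1.0, 2.0])               # arbitrary coefficients
X = sum(a * b for a, b in zip(alpha, basis))     # X = sum_i alpha_i x_i

for k in range(N):
    print(k, np.round(C[k] @ X, 10))             # -> alpha_k * [1, 1, 1]
\end{verbatim}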

Consequently, we observe that $C_N(k)$ is a matrix which annihilates all input basis components but the $k^{th}$. Now multiply $C_N(k)$ on the left by a diagonal matrix $D(k)$ so that the product $D(k)C_N(k)$ times $x_k$ gives an arbitrary column vector $(d_1,d_2,d_3)$. Then every linear time-varying filter $G$ is expressible as a sum of such products, as we will show below. In general, the decomposition of any filter on $\Re^N$ is simply

$\displaystyle G=\sum_{k=0}^{N-1}D(k)C_N(k).$   (H.1)
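To see the structure of each term in Eq.$\,$(H.1), write $D(k)=\mbox{diag}[d_1(k),d_2(k),d_3(k)]$ for the diagonal entries; in the three-dimensional case the $k^{th}$ term is then the rank-one matrix

\begin{displaymath}
D(k)C_3(k) = \frac{1}{3}\left[
\begin{array}{ccc}
d_1(k) & d_1(k)W_3^1(k) & d_1(k)W_3^2(k) \\ [2pt]
d_2(k) & d_2(k)W_3^1(k) & d_2(k)W_3^2(k) \\ [2pt]
d_3(k) & d_3(k)W_3^1(k) & d_3(k)W_3^2(k)
\end{array}\right],
\end{displaymath}

so that $D(k)C_3(k)\,x_k = [d_1(k),d_2(k),d_3(k)]^T$ while $D(k)C_3(k)\,x_j = 0$ for all $j\neq k$.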

The uniqueness of the decomposition is easy to verify: suppose there are two distinct decompositions of the form of Eq.$\,$(H.1). Then for some $k$ the two decompositions have different $D(k)$'s. However, this implies that we can get two distinct outputs in response to the $k^{th}$ input basis function, which is absurd.

That every linear time-varying filter may be expressed in this form is also easy to show. Given an arbitrary filter matrix of order $N$, measure its response to each of the $N$ basis functions (sine and cosine replacing $e^{j\omega t}$ in the real case) to obtain a set of $N\times 1$ column vectors. The output vector due to the $k^{th}$ basis vector is precisely the diagonal of $D(k)$.
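The whole construction can be verified numerically; the following Python/NumPy sketch (using the same conventions as the earlier snippets, with a randomly generated matrix standing in for an arbitrary filter $G$) measures the basis responses, forms the $D(k)$'s, and reassembles $G$ from Eq.$\,$(H.1):

\begin{verbatim}
import numpy as np

N = 3
n = np.arange(N)
basis = [np.exp(2j * np.pi * k * n / N) for k in range(N)]            # x_k
C = [np.tile(np.exp(-2j * np.pi * k * n / N), (N, 1)) / N for k in range(N)]

rng = np.random.default_rng(0)
G = rng.standard_normal((N, N))        # arbitrary linear time-varying filter

# The response to the k-th basis vector gives the diagonal of D(k),
# since C_N(k) x_k = [1, ..., 1]^T.
D = [np.diag(G @ basis[k]) for k in range(N)]

G_rebuilt = sum(D[k] @ C[k] for k in range(N))
print(np.allclose(G, G_rebuilt))       # True (up to roundoff)
\end{verbatim}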

