
State Space Filters

An important representation for discrete-time linear systems is the state-space formulation

$\displaystyle {\underline{x}}(n+1)$ $\displaystyle =$ $\displaystyle A {\underline{x}}(n) + B \underline{u}(n)$  
$\displaystyle \underline{y}(n)$ $\displaystyle =$ $\displaystyle C {\underline{x}}(n) + D\underline{u}(n)$ (G.1)

where $ {\underline{x}}(n)$ is the length $ N$ state vector at discrete time $ n$, $ \underline{u}(n)$ is a $ p\times 1$ vector of inputs, and $ \underline{y}(n)$ is the $ q\times 1$ output vector. $ A$ is the $ N\times N$ state transition matrix, and it determines the dynamics of the system (its poles, or resonant modes).

The state-space representation is especially powerful for multi-input, multi-output (MIMO) linear systems, and also for time-varying linear systems (in which case any or all of the matrices in Eq.$ \,$(G.1) may have time subscripts $ n$) [37]. State-space models are also used extensively in the field of control systems [28].

An example of a Single-Input, Single-Output (SISO) state-space model appears in §F.6.

Markov Parameters

The impulse response of a state-space model is easily found by direct calculation using Eq.$ \,$(G.1):

\begin{eqnarray*}
\mathbf{h}(0) &=& C {\underline{x}}(0) + D\,\underline{\delta}(0) = D\\ [5pt]
\mathbf{h}(1) &=& C {\underline{x}}(1) = C B\\ [5pt]
\mathbf{h}(2) &=& C A B\\ [5pt]
&\vdots&\\
\mathbf{h}(n) &=& C A^{n-1} B, \quad n>0
\end{eqnarray*}

Note that we have assumed $ {\underline{x}}(0)=0$ (zero initial state or zero initial conditions). The notation $ \underline{\delta}(n)$ denotes a $ p\times p$ matrix having $ \delta (n)$ along the diagonal and zeros elsewhere, i.e., one unit impulse per input channel.

The impulse response of the state-space model can be summarized as

$\displaystyle \fbox{$\displaystyle \mathbf{h}(n) = \left\{\begin{array}{ll} D, & n=0 \\ [5pt] CA^{n-1}B, & n>0 \\ \end{array} \right.$}$ (G.2)

The impulse response terms $ C A^n B$ for $ n\geq 0$ are known as the Markov parameters of the state-space model.

Note that each sample of the impulse response $ \mathbf{h}(n)$ is a $ q\times p$ matrix. Therefore, it is not a possible output signal, except when $ p=1$. A better name might be ``impulse-matrix response''. In §G.4 below, we'll see that $ \mathbf{h}(n)$ is the inverse z transform of the matrix transfer-function of the system.
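
As a quick numerical check of Eq.$ \,$(G.2), the following Matlab fragment (a sketch only; the matrices are arbitrary choices for illustration) simulates Eq.$ \,$(G.1) with a unit impulse applied to each input in turn, and compares the resulting impulse-matrix response against $ D$ and $ C A^{n-1}B$:

% Sketch: numerically verify Eq.(G.2) for an arbitrary example
% (N=2 states, p=2 inputs, q=1 output):
A = [0.5 -0.3; 1 0];  B = [1 0; 0 1];
C = [1 1];            D = [0 0];
[q,p] = size(D);
Ns = 6;                    % number of samples to check
hsim = zeros(q,p,Ns);      % simulated impulse-matrix response
for j = 1:p                % unit impulse into each input in turn
  x = zeros(2,1);
  for n = 1:Ns             % loop index n corresponds to time n-1
    u = zeros(p,1); if n==1, u(j) = 1; end
    hsim(:,j,n) = C*x + D*u;
    x = A*x + B*u;
  end
end
for n = 1:Ns               % compare with Eq.(G.2)
  if n==1, h = D; else h = C*A^(n-2)*B; end
  assert(norm(h - hsim(:,:,n)) < 1e-12);
end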

Given an arbitrary input signal $ \underline{u}(n)$ (and zero initial conditions $ {\underline{x}}(0)=0$), the output signal is given by the convolution of the input signal with the impulse response:

$\displaystyle \underline{y}_u(n) = (\mathbf{h}\ast \underline{u})(n) = \left\{\begin{array}{ll} D\,\underline{u}(0), & n=0 \\ [5pt] D\,\underline{u}(n) + \sum_{m=1}^{n}CA^{m-1}B\,\underline{u}(n-m), & n>0 \\ \end{array} \right. \protect$ (G.3)


Response from Initial Conditions

The response of a state-space model to initial conditions, i.e., its initial state $ {\underline{x}}(0)$ is given by, again using Eq.$ \,$(G.1),

$\displaystyle \underline{y}_x(n) = C A^{n}{\underline{x}}(0), \quad n=0,1,2,\ldots\,. \protect$ (G.4)


Complete Response

The complete response of a linear system consists of the superposition of (1) its response to the input signal $ \underline{u}(n)$ and (2) its response to initial conditions $ {\underline{x}}(0)$:

$\displaystyle \underline{y}(n) = \underline{y}_u(n) + \underline{y}_x(n),
$

where $ \underline{y}_u(n)$ was defined in Eq.$ \,$(G.3) and $ \underline{y}_x(n)$ was defined in Eq.$ \,$(G.4) above.
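
This superposition is easily checked numerically. The following Matlab sketch (with an arbitrary example system, input signal, and initial state) simulates the complete response and verifies that it equals the sum of the zero-state and zero-input responses:

% Sketch: verify y(n) = y_u(n) + y_x(n) for an arbitrary example:
A = [0 1; -0.25 0.5];  B = [0; 1];  C = [1 0];  D = 0.5;
Ns = 20;  u = randn(1,Ns);  x0 = [1; -1];
y = zeros(1,Ns);  yu = y;  yx = y;
x = x0;  xu = [0; 0];  xx = x0;
for n = 1:Ns
  y(n)  = C*x  + D*u(n);  x  = A*x  + B*u(n); % complete response
  yu(n) = C*xu + D*u(n);  xu = A*xu + B*u(n); % response to input alone
  yx(n) = C*xx;           xx = A*xx;          % response to x(0) alone
end
norm(y - (yu + yx))  % zero to within round-off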


Transfer Function of a State Space Filter

The transfer function can be defined as the $ z$ transform of the impulse response:

$\displaystyle H(z) \isdef \sum_{n=0}^{\infty} \mathbf{h}(n) z^{-n}
= D + \sum_{n=1}^{\infty} C A^{n-1} B\, z^{-n}
= D + z^{-1}C \left[\sum_{n=0}^{\infty} \left(z^{-1}A\right)^n\right] B
$

Using the closed-form sum of a matrix geometric series, $ \sum_{n=0}^{\infty}\left(z^{-1}A\right)^n = \left(I - z^{-1}A\right)^{-1}$ (valid whenever $ \vert z\vert$ exceeds the largest eigenvalue magnitude of $ A$), we obtain

$\displaystyle \fbox{$\displaystyle H(z) = D + C \left(zI - A\right)^{-1}B.$} \protect$ (G.5)

Note that if there are $ p$ inputs and $ q$ outputs, $ H(z)$ is a $ q\times p$ transfer-function matrix (or ``matrix transfer function'').
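
Equation (G.5) is easy to evaluate numerically. As a sketch, the following Matlab fragment compares a direct evaluation of Eq.$ \,$(G.5) with the transfer function returned by ss2tf, using the SISO example derived below in Eq.$ \,$(G.16):

% Sketch: evaluate Eq.(G.5) directly and compare with ss2tf:
A = [-1/2 -1/3; 1 0];  B = [1; 0];  C = [3/2 8/3];  D = 1;
[num,den] = ss2tf(A,B,C,D);
z = exp(1i*2*pi*0.123);               % arbitrary point on the unit circle
Hss = D + C*((z*eye(2) - A)\B);       % Eq.(G.5)
Htf = polyval(num,z)/polyval(den,z);  % transfer function from ss2tf
abs(Hss - Htf)                        % zero to within round-off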

Example State Space Filter Transfer Function

In this example, we consider a second-order filter ($ N = 2$) with two inputs ($ p=2$) and two outputs ($ q=2$):

\begin{eqnarray*}
A &=& g\left[\begin{array}{rr} c & -s \\ [2pt] s & c \end{array}\right]\\ [5pt]
B &=& \left[\begin{array}{cc} 1 & 0 \\ [2pt] 0 & 1 \end{array}\right]\\ [5pt]
C &=& \left[\begin{array}{cc} 1 & 0 \\ [2pt] 0 & 1 \end{array}\right]\\ [5pt]
D &=& \left[\begin{array}{cc} 0 & 0 \\ [2pt] 0 & 0 \end{array}\right]
\end{eqnarray*}

so that

\begin{eqnarray*}
\left[\begin{array}{c} x_1(n+1) \\ [2pt] x_2(n+1) \end{array}\right] &=&
g\left[\begin{array}{rr} c & -s \\ [2pt] s & c \end{array}\right]
\left[\begin{array}{c} x_1(n) \\ [2pt] x_2(n) \end{array}\right] +
\left[\begin{array}{c} u_1(n) \\ [2pt] u_2(n) \end{array}\right]\\ [5pt]
\left[\begin{array}{c} y_1(n) \\ [2pt] y_2(n) \end{array}\right] &=&
\left[\begin{array}{c} x_1(n) \\ [2pt] x_2(n) \end{array}\right].
\end{eqnarray*}

From Eq.$ \,$(G.5), the transfer function of this MIMO digital filter is then

\begin{eqnarray*}
H(z) &=& C(zI-A)^{-1}B = (zI-A)^{-1}
= \left[\begin{array}{cc} z - gc & gs \\ [2pt] -gs & z - gc \end{array}\right]^{-1}\\ [5pt]
&=& \frac{1}{z^2 - 2gc\,z + g^2}\left[\begin{array}{cc} z - gc & -gs \\ [2pt] gs & z - gc \end{array}\right]\\ [5pt]
&=& \left[\begin{array}{cc}
\dfrac{\displaystyle z^{-1} - gc\,z^{-2}}{\displaystyle 1-2gcz^{-1}+g^2z^{-2}} &
\dfrac{\displaystyle -gs\,z^{-2}}{\displaystyle 1-2gcz^{-1}+g^2z^{-2}} \\ [12pt]
\dfrac{\displaystyle gs\,z^{-2}}{\displaystyle 1-2gcz^{-1}+g^2z^{-2}} &
\dfrac{\displaystyle z^{-1} - gc\,z^{-2}}{\displaystyle 1-2gcz^{-1}+g^2z^{-2}} \end{array}\right].
\end{eqnarray*}

Note that when $ g=1$, the state transition matrix $ A$ is simply a 2D rotation matrix, rotating through the angle $ \theta$ for which $ c=\cos(\theta)$ and $ s=\sin(\theta)$. For $ g<1$, we have a type of normalized second-order resonator [51], and $ g$ controls the ``damping'' of the resonator, while $ \theta =
2\pi f_r/f_s$ controls the resonance frequency $ f_r$. The resonator is ``normalized'' in the sense that the filter's state has a constant $ L2$ norm (``preserves energy'') when $ g=1$ and the input is zero:

$\displaystyle \left\Vert\,{\underline{x}}(n+1)\,\right\Vert \isdef \sqrt{x_1^2(n+1) + x_2^2(n+1)} = \left\Vert\,A{\underline{x}}(n)\,\right\Vert \equiv \left\Vert\,{\underline{x}}(n)\,\right\Vert \protect$ (G.6)

since a rotation does not change the $ L2$ norm, as can be readily checked.

In this two-input, two-output digital filter, the input $ u_1(n)$ drives state $ x_1(n)$ while input $ u_2(n)$ drives state $ x_2(n)$. Similarly, output $ y_1(n)$ is $ x_1(n)$, while $ y_2(n)$ is $ x_2(n)$. The two-by-two transfer-function matrix $ H(z)$ contains entries for each combination of input and output. Note that all component transfer functions have the same poles. This is a general property of physical linear systems driven and observed at arbitrary points: the resonant modes (poles) are always the same, but the zeros vary as the input or output location is changed. If a pole is not visible using a particular input/output pair, we say that the pole has been ``canceled'' by a zero associated with that input/output pair. In control-theory terms, the pole is ``uncontrollable'' from that input, or ``unobservable'' from that output, or both.
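
The following Matlab sketch builds this example for particular (arbitrarily chosen) values of $ g$ and $ \theta$ and confirms that the eigenvalues of $ A$ coincide with the roots of the common denominator $ 1-2gc\,z^{-1}+g^2z^{-2}$:

% Sketch: poles of the normalized resonator example (arbitrary g, theta):
g = 0.99;  theta = 2*pi*1000/44100;
c = cos(theta);  s = sin(theta);
A = g*[c -s; s c];      % state transition matrix
eig(A)                  % poles: g*exp(+/- j*theta)
roots([1 -2*g*c g^2])   % roots of 1 - 2*g*c*z^{-1} + g^2*z^{-2} (the same)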


Transposition of a State Space Filter

Above, we found the transfer function of the general state-space model to be

$\displaystyle H(z) = D + C \left(zI - A\right)^{-1}B.
$

By the rules for transposing a matrix, the transpose of this equation gives

$\displaystyle H^T(z) = D^T + B^T \left(zI - A^T\right)^{-1}C^T.
$

The system $ (A^T,C^T,B^T,D^T)$ may be called the transpose of the system $ (A,B,C,D)$. The transpose is obtained by interchanging $ B$ and $ C$ in addition to transposing all matrices.

When there is only one input and output signal (the SISO case), $ H(z)$ is a scalar, as is $ D$. In this case we have

$\displaystyle H(z) = D + B^T \left(zI - A^T\right)^{-1}C^T.
$

That is, the transfer function of the transposed system is the same as the untransposed system in the scalar case. It can be shown that transposing the state-space representation is equivalent to transposing the signal flow graph of the filter [75]. The equivalence of a flow graph to its transpose is established by Mason's gain theorem [49,50]. See §9.1.3 for more on this topic.
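
This equivalence is easy to confirm numerically. Here is a minimal Matlab sketch, using the example system of Eq.$ \,$(G.16) below:

% Sketch: the transposed system has the same scalar transfer function:
A = [-1/2 -1/3; 1 0];  B = [1; 0];  C = [3/2 8/3];  D = 1;
[num1,den1] = ss2tf(A,B,C,D);
[num2,den2] = ss2tf(A',C',B',D');      % note B and C interchanged
norm(num1 - num2) + norm(den1 - den2)  % zero to within round-off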


Poles of a State Space Filter

In this section, we show that the poles of a state-space model are given by the eigenvalues of the state-transition matrix $ A$.

Beginning again with the transfer function of the general state-space model,

$\displaystyle H(z) = D + C \left(zI - A\right)^{-1}B,
$

we may first observe that the poles of $ H(z)$ are either the same as or some subset of the poles of

$\displaystyle H_p(z) \isdef \left(zI - A\right)^{-1}.
$

(They are the same when all modes are controllable and observable [37].) By Cramer's rule for matrix inversion, the denominator polynomial for $ \left(zI - A\right)^{-1}$ is given by the determinant

$\displaystyle D(z) \isdef \det(zI - A)
$

where $ \det(Q)$ denotes the determinant of the square matrix $ Q$. (The determinant of $ Q$ is also often written $ \left\vert Q\right\vert$.) In linear algebra, the polynomial $ D(z) = \left\vert zI-A\right\vert$ is called the characteristic polynomial for the matrix $ A$. The roots of the characteristic polynomial are called the eigenvalues of $ A$.

Thus, the eigenvalues of the state transition matrix $ A$ are the poles of the corresponding linear time-invariant system. In particular, note that the poles of the system do not depend on the matrices $ B,C,D$, although these matrices, by placing system zeros, can cause pole-zero cancellations (unobservable or uncontrollable modes).
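
As a minimal Matlab illustration (using the example system of Eq.$ \,$(G.16) below), the eigenvalues of $ A$ and the roots of its characteristic polynomial are the same:

% Sketch: system poles = eigenvalues of A:
A = [-1/2 -1/3; 1 0];  % example state transition matrix
eig(A)                 % eigenvalues of A
roots(poly(A))         % roots of det(zI - A) -- the same values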


Difference Equations to State Space

Any explicit LTI difference equation (§5.1) can be converted to state-space form. In state-space form, many properties of the system are readily obtained. For example, using standard utilities (such as in Matlab), there are functions for computing the modes of the system (its poles), an equivalent transfer-function description, stability information, and whether or not modes are ``observable'' and/or ``controllable'' from any given input/output point.

Every $ n$th order scalar (ordinary) difference equation may be reformulated as a first order vector difference equation. For example, consider the second-order difference equation

$\displaystyle y(n) = u(n) + 2u(n-1) + 3u(n-2) - \frac{1}{2}y(n-1) - \frac{1}{3}y(n-2). \protect$ (G.7)

We may define a vector first-order difference equation--the ``state space representation''--as discussed in the following sections.

Converting to State-Space Form by Hand

Converting a digital filter to state-space form is easy because there are various ``canonical forms'' for state-space models which can be written by inspection given the strictly proper transfer-function coefficients.

The canonical forms useful for transfer-function to state-space conversion are controller canonical form (also called control or controllable canonical form) and observer canonical form (or observable canonical form) [28, p. 80], [37]. These names come from the field of control theory [28] which is concerned with designing feedback laws to control the dynamics of real-world physical systems. State-space models are used extensively in the control field to model physical systems.

The name ``controller canonical form'' reflects the fact that the input signal can ``drive'' all modes (poles) of the system. In the language of control theory, we may say that all of the system poles are controllable from the input $ u(n)$. In observer canonical form, all modes are guaranteed to be observable. Controllability and observability of a state-space model are discussed further in §G.7.3 below.

The following procedure converts any causal LTI digital filter into state-space form:

  1. Determine the filter transfer function $ H(z)=B(z)/A(z)$.
  2. If $ H(z)$ is not strictly proper ($ b_0\ne 0$), ``pull out'' the delay-free path to obtain a feed-through gain $ b_0$ in parallel with a strictly proper transfer function.
  3. Write down the state-space representation by inspection using controller canonical form for the strictly proper transfer function. (Or use the matlab function tf2ss.)

We now elaborate on these steps for the general case:

  1. The general causal IIR filter
    $\displaystyle y(n) = b_0\,u(n) + b_1\,u(n-1) +\,\cdots\, + b_{N_b}\,u(n-{N_b})$ (G.8)
    $\displaystyle \hphantom{y(n) = b_0\,u(n)}\, -\, a_1\,y(n-1) - \,\cdots\, -a_{N_a}\,y(n-{N_a}) \protect$ (G.9)

    has transfer function

    $\displaystyle H(z) \isdef \frac{Y(z)}{U(z)} = \frac{b_0 + b_1\,z^{-1} +\,\cdots\, + b_{N_b}\,z^{-{N_b}} }{ 1 + a_1\,z^{-1} + \,\cdots\, +a_{N_a}\,z^{-{N_a}}}. \protect$ (G.10)

  2. By convention, state-space descriptions handle any delay-free path from input to output via the direct-path coefficient $ D$ in Eq.$ \,$(G.1). This is natural because the delay-free path does not affect the state of the system.

    A causal filter contains a delay-free path if its impulse response $ h(n)$ is nonzero at time zero, i.e., if $ h(0)\neq 0$. In such cases, we must ``pull out'' the delay-free path in order to implement it in parallel, setting $ D=h(0)=b_0$ in the state-space model.

    In our example, one step of long division yields

    $\displaystyle H(z)$ $\displaystyle =$ $\displaystyle b_0 + \frac{(b_1 - b_0 a_1)\,z^{-1} +\,\cdots\, + (b_N - b_0 a_N)\,z^{-N} }{ 1 + a_1\,z^{-1} + \,\cdots\, +a_{N_a}\,z^{-{N_a}}},$  
      $\displaystyle \isdef$ $\displaystyle b_0 + \frac{\beta_1\,z^{-1} +\,\cdots\, + \beta_N\,z^{-N}}{ 1 + a_1\,z^{-1} + \,\cdots\, +a_{N_a}\,z^{-{N_a}}}, \protect$ (G.11)

    where $ N\isdef \max(N_a,N_b)$, with $ a_i\isdef 0$ for $ i>{N_a}$, and $ b_i\isdef 0$ for $ i>{N_b}$.

  3. The controller canonical form is then easily written as follows:
    $\displaystyle A$ $\displaystyle =$ $\displaystyle \left[\begin{array}{ccccc} -a_1 & -a_2 & \cdots & -a_{N-1} & -a_N \\ [2pt] 1 & 0 & \cdots & 0 & 0 \\ [2pt] 0 & 1 & \cdots & 0 & 0 \\ [2pt] \vdots & & \ddots & & \vdots \\ [2pt] 0 & 0 & \cdots & 1 & 0 \end{array}\right] \qquad B \;=\; \left[\begin{array}{c} 1 \\ [2pt] 0 \\ [2pt] 0 \\ [2pt] \vdots \\ [2pt] 0 \end{array}\right]$  
    $\displaystyle C$ $\displaystyle =$ $\displaystyle \left[\begin{array}{cccc} \beta_1 & \beta_2 & \cdots & \beta_N \end{array}\right]\qquad D = b_0 \protect$ (G.12)

    An alternate controller canonical form is obtained by applying the similarity transformation (see §G.8 below) which simply reverses the order of the state variables. Any permutation of the state variables would similarly yield a controllable form. The transpose of a controllable form is an observable form.

One might worry that choosing controller canonical form may result in unobservable modes. However, this will not happen if $ B(z)$ and $ A(z)$ have no common factors. In other words, if there are no pole-zero cancellations in the transfer function $ H(z)=B(z)/A(z)$, then either controller or observer canonical form will yield a controllable and observable state-space model.

We now illustrate these steps using the example of Eq.$ \,$(G.7):

  1. The transfer function can be written, by inspection, as

    $\displaystyle H(z) = \frac{1 + 2 z^{-1}+ 3 z^{-2}}{1 + \frac{1}{2}z^{-1}+ \frac{1}{3}z^{-2}}. \protect$ (G.13)

  2. We need to convert Eq.$ \,$(G.13) to the form

    $\displaystyle H(z) = b_0 + \frac{\beta_1 z^{-1}+ \beta_2 z^{-2}}{1 + \frac{1}{2}z^{-1}+ \frac{1}{3}z^{-2}}. \protect$ (G.14)

    Obtaining a common denominator and equating numerator coefficients with Eq.$ \,$(G.13) yields
    $\displaystyle b_0$ $\displaystyle =$ $\displaystyle 1,$  
    $\displaystyle \beta_1$ $\displaystyle =$ $\displaystyle 2 - \frac{1}{2}= \frac{3}{2},$ and  
    $\displaystyle \beta_2$ $\displaystyle =$ $\displaystyle 3 - \frac{1}{3}= \frac{8}{3}.
\protect$ (G.15)

    The same result is obtained using long division (or synthetic division).

  3. Finally, the controller canonical form is given by
    $\displaystyle A$ $\displaystyle \isdef$ $\displaystyle \left[\begin{array}{cc} -\frac{1}{2} & -\frac{1}{3} \\ [2pt] 1 & 0 \end{array}\right]$  
    $\displaystyle B$ $\displaystyle \isdef$ $\displaystyle \left[\begin{array}{c} 1 \\ [2pt] 0 \end{array}\right]$  
    $\displaystyle C$ $\displaystyle \isdef$ $\displaystyle \left[\begin{array}{cc} 3/2 & 8/3 \end{array}\right]$  
    $\displaystyle D$ $\displaystyle \isdef$ $\displaystyle 1.
\protect$ (G.16)


Converting Signal Flow Graphs to State-Space Form by Hand

The procedure of the previous section quickly converts any transfer function to state-space form (specifically, controller canonical form). When the starting point is instead a signal flow graph, it is usually easier to go directly to state-space form by labeling each delay-element output as a state variable and writing out the state-space equations by inspection of the flow graph.

For the example of the previous section, suppose we are given Eq.$ \,$(G.14) in direct-form II (DF-II), as shown in Fig.G.1. It is important that the filter representation be canonical with respect to delay, i.e., that the number of delay elements equals the order of the filter. Then the third step (writing down controller canonical form by inspection) may be replaced by the following more general procedure:

  1. Assign a state variable to the output of each delay element (indicated in Fig.G.1).
  2. Write down the state-space representation by inspection of the flow graph.

Figure: Direct-form-II representation of the difference equation in Eq.$ \,$(G.7), after pulling out the direct path as given in Equations (G.14-G.15).

The state-space description of the difference equation in Eq.$ \,$(G.7) is given by Eq.$ \,$(G.16). We see that controller canonical form follows immediately from the direct-form-II digital filter realization, which is fundamentally an all-pole filter followed by an all-zero (FIR) filter (see §9.1.2). By starting instead from the transposed direct-form-II (TDF-II) structure, the observer canonical form is obtained [28, p. 87]. This is because the zeros effectively precede the poles in a TDF-II realization, so that they may introduce nulls in the input spectrum, but they cannot cancel output from the poles (e.g., from initial conditions). Since the other two digital-filter direct forms (DF-I and TDF-I--see Chapter 9 for details) are not canonical with respect to delay, they are not used as a basis for deriving state-space models.


Controllability and Observability

Since the output $ y(n)$ in Fig.G.1 is a linear combination of the input and states $ x_i(n)$, one or more poles can be canceled by the zeros induced by this linear combination. When that happens, the canceled modes are said to be unobservable. Of course, since we started with a transfer function, any pole-zero cancellations should be dealt with at that point, so that the state space realization will always be controllable and observable. If a mode is uncontrollable, the input cannot affect it; if it is unobservable, it has no effect on the output. Therefore, there is usually no reason to include unobservable or uncontrollable modes in a state-space model.

A physical example of uncontrollable and unobservable modes is provided by the plucked vibrating string of an electric guitar with one (very thin) magnetic pick-up. In a vibrating string, considering only one plane of vibration, each quasi-harmonic overtone corresponds to a mode of vibration [86] which may be modeled by a pair of complex-conjugate poles in a digital filter which models a particular point-to-point transfer function of the string. All modes of vibration having a node at the plucking point are uncontrollable at that point, and all modes having a node at the pick-up are unobservable at that point. If an ideal string is plucked at its midpoint, for example, all even numbered harmonics will not be excited, because they all have vibrational nodes at the string midpoint. Similarly, if the pick-up is located one-fourth of the string length from the bridge, every fourth string harmonic will be ``nulled'' in the output. This is why plucked and struck strings are generally excited near one end, and why magnetic pick-ups are located near the end of the string.

A basic result in control theory is that a system in state-space form is controllable from a scalar input signal $ u(n)$ if and only if the matrix

$\displaystyle \left[B,\, AB,\, A^2 B,\, \dots,\, A^{N-1}B\right]
$

has full rank (i.e., is invertible). Here, $ B$ is $ N\times 1$. For the general $ N\times p$ case, this test can be applied to each of the $ p$ columns of $ B$, thereby testing controllability from each input in turn. Similarly, a state-space system is observable from a given output if and only if

\begin{displaymath}
\left[
\begin{array}{l}
C\\
CA\\
CA^2\\
\vdots\\
CA^{N-1}
\end{array}\right]
\end{displaymath}

is nonsingular (i.e., invertible), where $ C$ is $ 1\times N$. In the $ q$-output case, $ C$ can be considered the row corresponding to the output for which observability is being checked.
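
As a sketch, here are the two rank tests applied in Matlab to the SISO example of Eq.$ \,$(G.16):

% Sketch: controllability and observability tests for Eq.(G.16):
A = [-1/2 -1/3; 1 0];  B = [1; 0];  C = [3/2 8/3];
rank([B, A*B])   % controllability matrix: rank 2 => controllable
rank([C; C*A])   % observability matrix:   rank 2 => observable
% (The Control Toolbox functions ctrb(A,B) and obsv(A,C) build
% these matrices for general N.)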


A Short-Cut to Controller Canonical Form

When converting a transfer function to state-space form by hand, the step of pulling out the direct path, like we did in going from Eq.$ \,$(G.13) to Eq.$ \,$(G.14), can be bypassed [28, p. 87].

Figure: Direct-form-II realization of Eq.$ \,$(G.7). Note that this form has a delay-free path from input to output.

Figure G.2 gives the standard direct-form-II structure for a second-order IIR filter. Unlike Fig.G.1, it includes a direct path from the input to the output. The filter coefficients are all given directly by the transfer function, Eq.$ \,$(G.13).

This form can be converted directly to state-space form by carefully observing all paths from the input and state variables to the output. For example, $ x_1(n)$ reaches the output through gain 2 on the right, but also via gain $ -1/2\cdot 1$ on the left and above. Therefore, its contribution to the output is $ (2 - 1/2)x_1(n) = (3/2) x_1(n)$, as obtained in the DF-II realization with direct-path pulled out shown in Fig.G.1. The state variable $ x_2(n)$ reaches the output with gain $ 3 - 1/3\cdot 1 = 8/3$, again as we obtained before. Finally, it must also be observed that the gain of the direct path from input to output is $ 1$.


Matlab Direct-Form to State-Space Conversion

Matlab and Octave support state-space models with functions such as tf2ss and ss2tf.

Note that while these utilities are documented primarily for use with continuous-time systems, they are also used for discrete-time systems.

Let's repeat the previous example using Matlab:

>> num = [1 2 3]; % transfer function numerator
>> den = [1 1/2 1/3]; % denominator coefficients
>> [A,B,C,D] = tf2ss(num,den)

A =
   -0.5000   -0.3333
    1.0000         0

B =
     1
     0

C =  1.5000    2.6667

D =  1

>> [N,D] = ss2tf(A,B,C,D)

N = 1.0000    2.0000    3.0000

D = 1.0000    0.5000    0.3333

The tf2ss and ss2tf functions are documented online at The Mathworks help desk as well as within Matlab itself (say help tf2ss). In Octave, say help tf2ss or help -i tf2ss.


State Space Simulation in Matlab

Since matlab has first-class support for matrices and vectors, it is quite simple to implement a state-space model in Matlab using no support functions whatsoever, e.g.,

% Define the state-space system parameters:
A = [0 1; -1 0]; % State transition matrix
B = [0; 1];  C = [0 1];  D = 0; % Input, output, feed-around

% Set up the input signal, initial conditions, etc.
x0 = [0;0];             % Initial state (N=2)
Ns = 10; 	        % Number of sample times to simulate
u = [1, zeros(1,Ns-1)]; % Input signal (an impulse at time 0)
y = zeros(Ns,1);        % Preallocate output signal for n=0:Ns-1

% Perform the system simulation:
x = x0;                % Set initial state
for n=1:Ns             % Iterate through time
  y(n) = C*x + D*u(n); % Output for time n-1
  x = A*x + B*u(n);    % State transitions to time n
end
y' % print the output y (transposed)
% ans =
%  0   1   0  -1   0   1   0  -1   0   1
The restriction to indexes beginning at 1 is unwieldy here, because we want to include time $ n = 0$ in the input and output. It can be readily checked that the above example has the transfer function

$\displaystyle H(z) = \frac{z^{-1}}{1 + z^{-2}},
$

so that the following matlab checks the above output using the built-in filter function:
NUM = [0 1];
DEN = [1 0 1];
y = filter(NUM,DEN,u)
% y =
%   0   1   0  -1   0   1   0  -1   0   1
To eliminate the unit-sample delay, i.e., to simulate $ H(z)=1/(1+z^{-2})$ in state-space form, it is necessary to use the $ D$ (feed-around) coefficient:
[A,B,C,D] = tf2ss([1 0 0], [1 0 1])
% A =
%    0   1
%   -1  -0
%
% B =
%   0
%   1
%
% C =
%   -1   0
%
% D = 1

x = x0;                % Reset to initial state
for n=1:Ns
  y(n) = C*x + D*u(n);
  x = A*x + B*u(n);
end
y'
% ans =
%  1   0  -1   0   1   0  -1   0   1   0
Note the use of trailing zeros in the first argument of tf2ss (the transfer-function numerator-polynomial coefficients) to make it the same length as the second argument (denominator coefficients). This is necessary in tf2ss because the same function is used for both the continuous- and discrete-time cases. Without the trailing zeros, the numerator will be extended by zeros on the left, i.e., ``right-justified'' relative to the denominator.


Other Relevant Matlab Functions

Related Signal Processing Toolbox functions include

  • tf2sos -- Convert digital filter transfer function parameters to second-order sections form. (See §9.2.)

  • sos2ss -- Convert second-order filter sections to state-space form.

  • tf2zp -- Convert transfer-function filter parameters to zero-pole-gain form.

  • ss2zp -- Convert state-space model to zeros, poles, and a gain.

  • zp2ss -- Convert zero-pole-gain filter parameters to state-space form.

In Matlab, say lookfor state-space to find your state-space support utilities (there are many more than listed above). In Octave, say help -i ss2tf and keep reading for more functions (the above list is complete, as of this writing).


Matlab State-Space Filter Conversion Example

Here is the example of §F.6 repeated using matlab. The difference equation

$\displaystyle y(n) = u(n-1) + u(n-2) + 0.5\, y(n-1) - 0.1\, y(n-2) + 0.01\, y(n-3)
$

corresponds to the transfer function

$\displaystyle H(z) = \frac{B(z)}{A(z)} =
\frac{z^{-1}+ z^{-2}}{1 - 0.5\,z^{-1}+ 0.1\,z^{-2}- 0.01\,z^{-3}},
$

so that in matlab the filter is represented by the vectors
NUM = [0  1   1    0   ]; % NUM and DEN should be same length
DEN = [1 -0.5 0.1 -0.01];
The tf2ss function converts from ``transfer-function'' form to state-space form:
 [A,B,C,D] = tf2ss(NUM,DEN)
A =
   0.00000   1.00000   0.00000
   0.00000   0.00000   1.00000
   0.01000  -0.10000   0.50000

B =
  0
  0
  1

C =
  0  1  1

D = 0


Similarity Transformations

A similarity transformation is a linear change of coordinates. That is, the original $ N$-dimensional state vector $ {\underline{x}}(n)$ is recast in terms of a new coordinate basis. For any linear transformation of the coordinate basis, the transformed state vector $ \underline{{\tilde x}}(n)$ may be computed by means of a matrix multiply. Denoting the matrix of the desired one-to-one linear transformation by $ E$, we can express the change of coordinates as

$\displaystyle {\underline{x}}(n) \isdef E\underline{{\tilde x}}(n)
$

or $ \underline{{\tilde x}}(n)=E^{-1}{\underline{x}}(n)$, if we prefer, since the inverse of a one-to-one linear transformation always exists.

Let's now apply the linear transformation $ E$ to the general $ N$-dimensional state-space description in Eq.$ \,$(G.1). Substituting $ {\underline{x}}(n) \isdeftext E\underline{{\tilde x}}(n)$ in Eq.$ \,$(G.1) gives

$\displaystyle E\underline{{\tilde x}}(n+1)$ $\displaystyle =$ $\displaystyle A \, E\underline{{\tilde x}}(n) + B \underline{u}(n)$  
$\displaystyle \underline{y}(n)$ $\displaystyle =$ $\displaystyle C E\underline{{\tilde x}}(n) + D\underline{u}(n)$ (G.17)

Premultiplying the first equation above by $ E^{-1}$, we have
$\displaystyle \underline{{\tilde x}}(n+1)$ $\displaystyle =$ $\displaystyle \left(E^{-1}A E\right) \underline{{\tilde x}}(n) + \left(E^{-1}B\right) \underline{u}(n)$  
$\displaystyle \underline{y}(n)$ $\displaystyle =$ $\displaystyle \left(C E\right) \underline{{\tilde x}}(n) + D\underline{u}(n).$ (G.18)

Defining
$\displaystyle \tilde{A}$ $\displaystyle =$ $\displaystyle E^{-1}A E$  
$\displaystyle {\tilde B}$ $\displaystyle =$ $\displaystyle E^{-1}B$  
$\displaystyle {\tilde C}$ $\displaystyle =$ $\displaystyle C E$  
$\displaystyle {\tilde D}$ $\displaystyle =$ $\displaystyle D$ (G.19)

we can write
$\displaystyle \underline{{\tilde x}}(n+1)$ $\displaystyle =$ $\displaystyle \tilde{A}\, \underline{{\tilde x}}(n) + {\tilde B}\, \underline{u}(n)$  
$\displaystyle \underline{y}(n)$ $\displaystyle =$ $\displaystyle {\tilde C}\, \underline{{\tilde x}}(n) + {\tilde D}\, \underline{u}(n)$ (G.20)

The transformed system describes the same system as in Eq.$ \,$(G.1) relative to new state-variable coordinates. To verify that it's really the same system, from an input/output point of view, let's look at the transfer function using Eq.$ \,$(G.5):

\begin{eqnarray*}
{\tilde H}(z) &=& {\tilde D}+ {\tilde C}\left(zI - \tilde{A}\right)^{-1}{\tilde B}\\ [5pt]
&=& D + CE\left[zI - E^{-1}A E\right]^{-1}E^{-1}B\\ [5pt]
&=& D + CE\left[E^{-1}\left(zI - A\right)E\right]^{-1}E^{-1}B\\ [5pt]
&=& D + CE\,E^{-1}\left(zI - A\right)^{-1}E\,E^{-1}B\\ [5pt]
&=& D + C \left(zI - A\right)^{-1} B\\ [5pt]
&=& H(z)
\end{eqnarray*}

Since the eigenvalues of $ A$ are the poles of the system, it follows that the eigenvalues of $ \tilde{A}=E^{-1}A E$ are the same. In other words, eigenvalues are unaffected by a similarity transformation. We can easily show this directly: Let $ \underline{e}$ denote an eigenvector of $ A$. Then by definition $ A\underline{e}=\lambda\underline{e}$, where $ \lambda$ is the eigenvalue corresponding to $ \underline{e}$. Define $ \underline{\tilde{e}}=E^{-1}\underline{e}$ as the transformed eigenvector. Then we have

$\displaystyle \tilde{A}\underline{\tilde{e}}= \tilde{A}(E^{-1}\underline{e}) = (E^{-1}A E)(E^{-1}\underline{e}) = E^{-1}A\underline{e}= E^{-1}\lambda\underline{e}= \lambda\underline{\tilde{e}}.
$

Thus, the transformed eigenvector is an eigenvector of the transformed $ A$ matrix, and the eigenvalue is unchanged.
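
A one-line numerical check (with an arbitrarily chosen invertible $ E$):

% Sketch: eigenvalues are invariant under a similarity transformation:
A = [-1/2 -1/3; 1 0];
E = [1 2; 3 4];        % any invertible change of coordinates
eig(A), eig(E\A*E)     % same eigenvalues, up to round-off and ordering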

The transformed Markov parameters, $ {\tilde C}\tilde{A}^n {\tilde B}$, are obviously the same also since they are given by the inverse $ z$ transform of the transfer function $ {\tilde H}(z)$. However, it is also easy to show this by direct calculation:

\begin{eqnarray*}
{\tilde h}(n) &=& {\tilde C}\tilde{A}^n{\tilde B}= (CE)(E^{-1}A E)^n(E^{-1}B)\\ [5pt]
&=& CE(E^{-1}A E)(E^{-1}A E)\cdots(E^{-1}A E)E^{-1}B\\ [5pt]
&=& C(EE^{-1}) A(EE^{-1}) \cdots A(EE^{-1}) B\\ [5pt]
&=& CA^nB
\end{eqnarray*}


Modal Representation

When the state transition matrix $ A$ is diagonal, we have the so-called modal representation. In the single-input, single-output (SISO) case, the general diagonal system looks like

$\displaystyle \left[\begin{array}{c} x_1(n+1) \\ [2pt] x_2(n+1) \\ [2pt] \vdots \\ [2pt] x_{N-1}(n+1)\\ [2pt] x_N(n+1)\end{array}\right]$ $\displaystyle =$ $\displaystyle \left[\begin{array}{ccccc} \lambda _1 & 0 & 0 & \cdots & 0 \\ [2pt] 0 & \lambda _2 & 0 & \cdots & 0 \\ [2pt] \vdots & & \ddots & & \vdots \\ [2pt] 0 & \cdots & 0 & \lambda _{N-1} & 0 \\ [2pt] 0 & \cdots & 0 & 0 & \lambda _N \end{array}\right] \left[\begin{array}{c} x_1(n) \\ [2pt] x_2(n) \\ [2pt] \vdots \\ [2pt] x_{N-1}(n)\\ [2pt] x_N(n)\end{array}\right] + \left[\begin{array}{c} b_1 \\ [2pt] b_2 \\ [2pt] \vdots \\ [2pt] b_{N-1}\\ [2pt] b_N\end{array}\right] u(n)$  
$\displaystyle y(n)$ $\displaystyle =$ $\displaystyle C {\underline{x}}(n) + du(n)$  
  $\displaystyle =$ $\displaystyle [c_1, c_2, \dots, c_N]{\underline{x}}(n) + d u(n). \protect$ (G.21)

Since the state transition matrix is diagonal, the modes are decoupled, and we can write each mode's time-update independently:

\begin{eqnarray*}
x_1(n+1) &=& \lambda _1 x_1(n) + b_1 u(n)\\
x_2(n+1) &=& \lambda _2 x_2(n) + b_2 u(n)\\
&\vdots&\\
x_N(n+1) &=& \lambda _N x_N(n) + b_N u(n)\\ [5pt]
y(n) & = & c_1 x_1(n) + c_2 x_2(n) + \dots + c_N x_N(n) + d u(n)
\end{eqnarray*}

Thus, the diagonalized state-space system consists of $ N$ parallel one-pole systems. See §9.2.2 and §6.8.7 regarding the conversion of direct-form filter transfer functions to parallel (complex) one-pole form.

Diagonalizing a State-Space Model

To obtain the modal representation, we may diagonalize any state-space representation. This is accomplished by means of a particular similarity transformation specified by the eigenvectors of the state transition matrix $ A$. An eigenvector of the square matrix $ A$ is any vector $ \underline{e}_i$ for which

$\displaystyle A\underline{e}_i= \lambda_i \underline{e}_i,
$

where $ \lambda_i $ may be complex. In other words, when the matrix $ E$ of the similarity transformation is composed of the eigenvectors of $ A$,

$\displaystyle E= \left[ \underline{e}_1 \; \cdots \; \underline{e}_N \right],
$

the transformed system will be diagonalized, as we will see below.

A system can be diagonalized whenever the eigenvectors of $ A$ are linearly independent. This always holds when the system poles are distinct. It may or may not hold when poles are repeated.

To see how this works, suppose we are able to find $ N$ linearly independent eigenvectors of $ A$, denoted $ \underline{e}_i$, $ i=1,\ldots,N$. Then we can form an $ N\times N$ matrix $ E$ having these eigenvectors as columns. Since the eigenvectors are linearly independent, $ E$ is full rank and can be used as a one-to-one linear transformation, or change-of-coordinates matrix. From Eq.$ \,$(G.19), we have that the transformed state transition matrix is given by

$\displaystyle \tilde{A}= E^{-1}A E
$

Since each column $ \underline{e}_i$ of $ E$ is an eigenvector of $ A$, we have $ A\underline{e}_i=\lambda_i \underline{e}_i$, $ i=1,\ldots,N$, which implies

$\displaystyle A E= E\Lambda,
$

where

$\displaystyle \Lambda \isdef \left[\begin{array}{ccc}
\lambda_1 & & 0\\ [2pt]
& \ddots & \\ [2pt]
0 & & \lambda_N
\end{array}\right]
$

is a diagonal matrix having the (complex) eigenvalues of $ A$ along the diagonal. It then follows that

$\displaystyle \tilde{A}= E^{-1}A E= E^{-1}E\Lambda = \Lambda,
$

which shows that the new state transition matrix is diagonal and made up of the eigenvalues of $ A$.

The transfer function is now, from Eq.$ \,$(G.5), in the SISO case,

$\displaystyle H(z)$ $\displaystyle =$ $\displaystyle d + {\tilde C}\left(zI - \Lambda\right)^{-1}{\tilde B}$  
  $\displaystyle =$ $\displaystyle d + \frac{{\tilde c}_1 {\tilde b}_1 z^{-1}}{1 - \lambda _1z^{-1}} + \frac{{\tilde c}_2 {\tilde b}_2 z^{-1}}{1 - \lambda _2z^{-1}} + \cdots + \frac{{\tilde c}_N {\tilde b}_N z^{-1}}{1 - \lambda _Nz^{-1}}$  
  $\displaystyle =$ $\displaystyle d + \sum_{i=1}^N \frac{{\tilde c}_i {\tilde b}_i z^{-1}}{1 - \lambda _iz^{-1}}. \protect$ (G.22)

We have incidentally shown that the eigenvalues of the state-transition matrix $ A$ are the poles of the system transfer function. When it is diagonal, i.e., when $ A =$   diag$ (\lambda _1,\ldots,\lambda _N)$, the state-space model may be called a modal representation of the system, because the poles appear explicitly along the diagonal of $ A$ and the system's dynamic modes are decoupled.

Notice that the diagonalized state-space form is essentially equivalent to a partial-fraction expansion form (§6.8). In particular, the residue of the $ i$th pole is given by $ {\tilde c}_i {\tilde b}_i$. When complex-conjugate poles are combined to form real, second-order blocks (in which case $ A$ is block-diagonal with $ 2\times 2$ blocks along the diagonal), this corresponds to a partial-fraction expansion into real, second-order, parallel filter sections.
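
The following Matlab sketch verifies the parallel one-pole decomposition Eq.$ \,$(G.22) numerically for the example of Eq.$ \,$(G.16), using the identity $ z^{-1}/(1-\lambda_i z^{-1}) = 1/(z-\lambda_i)$:

% Sketch: verify the modal (parallel one-pole) form Eq.(G.22):
A = [-1/2 -1/3; 1 0];  B = [1; 0];  C = [3/2 8/3];  d = 1;
[E,Lambda] = eig(A);  lam = diag(Lambda);
Bt = E\B;  Ct = C*E;                    % transformed B and C
z = exp(1i*0.3);                        % arbitrary test point
H1 = d + C*((z*eye(2) - A)\B);          % Eq.(G.5)
H2 = d + sum(Ct(:).*Bt(:)./(z - lam));  % Eq.(G.22)
abs(H1 - H2)                            % zero to within round-off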


Finding the Eigenvalues of A in Practice

Small problems may be solved by hand by solving the system of equations

$\displaystyle A E= E\Lambda.
$

The Matlab built-in function eig() may be used to find the eigenvalues of $ A$ (system poles) numerically.


Example of State-Space Diagonalization

For the example of Eq.$ \,$(G.7), we obtain the following results:

>> % Initial state space filter from example above:
>> A = [-1/2, -1/3; 1, 0]; % state transition matrix
>> B = [1; 0];
>> C = [2-1/2, 3-1/3];
>> D = 1;
>>
>> eig(A) % find eigenvalues of state transition matrix A

ans =
  -0.2500 + 0.5204i
  -0.2500 - 0.5204i

>> roots(den) % find poles of transfer function H(z)

ans =
  -0.2500 + 0.5204i
  -0.2500 - 0.5204i

>> abs(roots(den)) % check stability while we're here

ans =
    0.5774
    0.5774

% The system is stable since each pole has magnitude < 1.

Our second-order example is already in real $ 2\times 2$ form, because it is only second order. However, to illustrate the computations, let's obtain the eigenvectors and compute the complex modal representation:

>> [E,L] = eig(A)  % [Evects,Evals] = eig(A)

E =

  -0.4507 - 0.2165i  -0.4507 + 0.2165i
        0 + 0.8660i        0 - 0.8660i


L =

  -0.2500 + 0.5204i        0
        0            -0.2500 - 0.5204i

>> A * E - E * L   % should be zero (A * evect = eval * evect)

ans =
  1.0e-016 *
        0 + 0.2776i        0 - 0.2776i
        0                  0

% Now form the complete diagonalized state-space model (complex):

>> Ei = inv(E); % matrix inverse
>> Ab = Ei*A*E % new state transition matrix (diagonal)

Ab =
  -0.2500 + 0.5204i   0.0000 + 0.0000i
  -0.0000            -0.2500 - 0.5204i

>> Bb = Ei*B   % vector routing input signal to internal modes

Bb =
   -1.1094
   -1.1094

>> Cb = C*E    % vector taking mode linear combination to output

Cb =
  -0.6760 + 1.9846i  -0.6760 - 1.9846i

>> Db = D      % feed-through term unchanged

Db =
     1

% Verify that we still have the same transfer function:

>> [numb,denb] = ss2tf(Ab,Bb,Cb,Db)

numb =
   1.0000             2.0000 + 0.0000i   3.0000 + 0.0000i

denb =
   1.0000             0.5000 - 0.0000i   0.3333

>> num = [1, 2, 3]; % original numerator
>> norm(num-numb)

ans =
  1.5543e-015

>> den = [1, 1/2, 1/3]; % original denominator
>> norm(den-denb)

ans =
  1.3597e-016


Properties of the Modal Representation

The vector $ {\tilde B}$ in a modal representation (Eq.$ \,$(G.21)) specifies how the modes are driven by the input. That is, the $ i$th mode receives the input signal $ u(n)$ weighted by $ {\tilde b}_i$. In a computational model of a drum, for example, $ {\tilde B}$ may be changed corresponding to different striking locations on the drumhead.

The vector $ {\tilde C}$ in a modal representation (Eq.$ \,$(G.21)) specifies how the modes are to be mixed into the output. In other words, $ {\tilde C}$ specifies how the output signal is to be created as a linear combination of the mode states:

$\displaystyle y(n) = {\tilde C}\underline{{\tilde x}}(n) = {\tilde c}_1 {\tilde x}_1(n) + {\tilde c}_2 {\tilde x}_2(n) + \cdots + {\tilde c}_N {\tilde x}_N(n)
$

In a computational model of an electric guitar string, for example, $ {\tilde C}$ changes whenever a different pick-up is switched in or out (or is moved [99]).

The modal representation is not unique since $ {\tilde B}$ and $ {\tilde C}$ may be scaled in compensating ways to produce the same transfer function. (The diagonal elements of $ \tilde{A}$ may also be permuted along with $ {\tilde B}$ and $ {\tilde C}$.) Each element of the state vector $ \underline{{\tilde x}}(n)$ holds the state of a single first-order mode of the system.

For oscillatory systems, the diagonalized state transition matrix must contain complex elements. In particular, if mode $ i$ is both oscillatory and undamped (lossless), then an excited state-variable $ {\tilde x}_i(n)$ will oscillate sinusoidally, after the input becomes zero, at some frequency $ \omega_i$, where

$\displaystyle \lambda _i = e^{j\omega_iT}
$

relates the system eigenvalue $ \lambda_i $ to the oscillation frequency $ \omega_i$, with $ T$ denoting the sampling interval in seconds. More generally, in the damped case, we have

$\displaystyle \lambda _i = R_i e^{j\omega_iT}
$

where $ R_i$ is the pole (eigenvalue) radius. For stability, we must have

$\displaystyle \left\vert R_i\right\vert<1.
$

In practice, we often prefer to combine complex-conjugate pole-pairs to form a real, ``block-diagonal'' system; in this case, the transition matrix $ \tilde{A}$ is block-diagonal with two-by-two real matrices along its diagonal of the form

$\displaystyle \mathbf{A}_i = \left[\begin{array}{cc} 2R_iC_i & -R_i^2 \\ [2pt] 1 & 0 \end{array}\right]
$

where $ R_i=\left\vert\lambda_i\right\vert$ is the pole radius, and $ 2R_iC_i \isdef 2R_i\cos(\omega_i T) = \lambda_i + \overline{\lambda_i} = 2\,\mbox{re}\left\{\lambda_i\right\}$. Note that, for real systems, a real second-order block requires only two multiplies (one in the lossless case) per time update, while a complex second-order system requires two complex multiplies. The Matlab function cdf2rdf() can be used to convert complex diagonal form to real block-diagonal form.
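
Here is a minimal sketch of cdf2rdf in use. Note that its real blocks have the form $ \left[\begin{array}{cc}\sigma & \omega\\ -\omega & \sigma\end{array}\right]$, where $ \sigma\pm j\omega$ are the eigenvalues, rather than the companion-type block $ \mathbf{A}_i$ shown above; both are valid real block-diagonal forms, related by a further similarity transformation:

% Sketch: complex diagonal form to real block-diagonal form:
A = [-1/2 -1/3; 1 0];
[V,D] = eig(A);          % complex diagonal form
[Vr,Dr] = cdf2rdf(V,D)   % real 2x2 block along the diagonal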


Repeated Poles

The above summary of state-space diagonalization works as stated when the modes (poles) of the system are distinct. When there are two or more resonant modes corresponding to the same ``natural frequency'' (eigenvalue of $ A$), then there are two further subcases: If the eigenvectors corresponding to the repeated eigenvalue (pole) are linearly independent, then the modes are independent and can be treated as distinct (the system can be diagonalized). Otherwise, we say the equal modes are coupled.

The coupled-repeated-poles situation is detected when the matrix of eigenvectors V returned by the eig matlab function [e.g., by saying [V,D] = eig(A)] turns out to be singular. Singularity of V can be defined as when its condition number [cond(V)] exceeds some threshold, such as 1E7. In this case, the linearly dependent eigenvectors can be replaced by so-called generalized eigenvectors [58]. Use of that similarity transformation then produces a ``block diagonalized'' system instead of a diagonalized system, and one of the blocks along the diagonal will be a $ k\times k$ matrix corresponding to the pole repeated $ k$ times.
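
For example (a sketch; the matrix below is a standard defective example):

% Sketch: a defective matrix is flagged by ill-conditioned eigenvectors:
A = [1 1; 0 1];   % eigenvalue 1 repeated, only one eigenvector
[V,D] = eig(A);
cond(V)           % enormous => coupled repeated poles (not diagonalizable)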

Connecting with the discussion regarding repeated poles in §6.8.5, the $ k\times k$ Jordan block corresponding to a pole repeated $ k$ times plays exactly the same role of repeated poles encountered in a partial-fraction expansion, giving rise to terms in the impulse response proportional to $ n\lambda ^n$, $ n^2\lambda ^n$, and so on, up to $ n^{k-1}\lambda ^n$, where $ \lambda$ denotes the repeated pole itself (i.e., the repeated eigenvalue of the state-transition matrix $ A$).

Jordan Canonical Form

The block diagonal system having the eigenvalues along the diagonal and ones in some of the superdiagonal elements (which serve to couple repeated eigenvalues) is called Jordan canonical form. Each block size corresponds to the multiplicity of the repeated pole. As an example, a pole $ p_i$ of multiplicity $ m_i=3$ could give rise to the following $ 3\times 3$ Jordan block:

$\displaystyle D_i = \left[\begin{array}{ccc}
p_i & 1 & 0\\ [2pt]
0 & p_i & 1\\ [2pt]
0 & 0 & p_i
\end{array}\right]
$

The ones along the superdiagonal serve to couple the states corresponding to $ p_i$ and generate polynomial amplitude envelopes multiplying the sampled exponential $ p_i^n$. Note, however, that a pole of multiplicity three can also yield two Jordan blocks, such as

$\displaystyle D_i = \left[\begin{array}{ccc}
p_i & 1 & 0\\ [2pt]
0 & p_i & 0\\ [2pt]
0 & 0 & p_i
\end{array}\right]
$

or even three Jordan blocks of order 1. The number of Jordan blocks associated with a single pole $ p_i$ is equal to the number of linearly independent eigenvectors of the transition matrix associated with eigenvalue $ p_i$. If all eigenvectors of $ A$ are linearly independent, the system can be diagonalized after all, and any repeated roots are ``uncoupled'' and behave like non-repeated roots (no polynomial amplitude envelopes).

Interestingly, neither Matlab nor Octave seem to have a numerical function for computing the Jordan canonical form of a matrix. Matlab will try to do it symbolically when the matrix entries are given as exact rational numbers (ratios of integers) by the jordan function, which requires the Maple symbolic mathematics toolbox. Numerically, it is generally difficult to distinguish between poles that are repeated exactly, and poles that are merely close together. The residuez function sets a numerical threshold below which poles are treated as repeated.


State-Space Analysis Example:
The Digital Waveguide Oscillator

As an example of state-space analysis, we will use it to determine the frequency of oscillation of the system of Fig.G.3 [90].

Figure G.3: The second-order digital waveguide oscillator.

Note the assignments of unit-delay outputs to state variables $ x_1(n)$ and $ x_2(n)$. From the diagram, we see that

$\displaystyle x_1(n+1) = c[x_1(n) + x_2(n)] - x_2(n) = c\,x_1(n) + (c-1) x_2(n)
$

and

$\displaystyle x_2(n+1) = x_1(n) + c[x_1(n) + x_2(n)] = (1+c) x_1(n) + c\,x_2(n)
$

In matrix form, the state time-update can be written

$\displaystyle \left[\begin{array}{c} x_1(n+1) \\ [2pt] x_2(n+1) \end{array}\right] = \underbrace{\left[\begin{array}{cc} c & c-1 \\ [2pt] c+1 & c \end{array}\right]}_{A} \left[\begin{array}{c} x_1(n) \\ [2pt] x_2(n) \end{array}\right]
$

or, in vector notation,

$\displaystyle {\underline{x}}(n+1) = A \, {\underline{x}}(n).
$

We have two natural choices of output, $ x_1(n)$ and $ x_2(n)$:

\begin{eqnarray*}
y_1(n) &\isdef & x_1(n) = [1, 0] {\underline{x}}(n)\\
y_2(n) &\isdef & x_2(n) = [0, 1] {\underline{x}}(n)
\end{eqnarray*}

A basic fact from linear algebra is that the determinant of a matrix is equal to the product of its eigenvalues. As a quick check, we find that the determinant of $ A$ is

$\displaystyle \det{A} = c^2 - (c+1)(c-1) = c^2 - (c^2-1) = 1.
$

Since an undriven sinusoidal oscillator must not lose energy, and since every lossless state-space system has unit-modulus eigenvalues (consider the modal representation), we expect $ \left\vert\det{A}\right\vert=1$.

Note that $ {\underline{x}}(n) = A^n{\underline{x}}(0)$. If we diagonalize this system to obtain $ \tilde{A}=E^{-1}A E$, where $ \tilde{A}=$   diag$ [\lambda_1,\lambda_2]$, and $ E$ is the matrix of eigenvectors of $ A$, then we have

$\displaystyle \underline{{\tilde x}}(n) = \tilde{A}^n\,\underline{{\tilde x}}(0) = \left[\begin{array}{cc} \lambda_1^n & 0 \\ [2pt] 0 & \lambda_2^n \end{array}\right] \left[\begin{array}{c} {\tilde x}_1(0) \\ [2pt] {\tilde x}_2(0) \end{array}\right]
$

where $ \underline{{\tilde x}}(n) \isdef E^{-1}{\underline{x}}(n)$ denotes the state vector in these new ``modal coordinates''. Since $ \tilde{A}$ is diagonal, the modes are decoupled, and we can write

\begin{eqnarray*}
{\tilde x}_1(n) &=& \lambda_1^n\,{\tilde x}_1(0)\\
{\tilde x}_2(n) &=& \lambda_2^n\,{\tilde x}_2(0).
\end{eqnarray*}

If this system is to generate a real sampled sinusoid at radian frequency $ \omega$, the eigenvalues $ \lambda_1$ and $ \lambda_2$ must be of the form

\begin{eqnarray*}
\lambda_1 &=& e^{j\omega T}\\
\lambda_2 &=& e^{-j\omega T},
\end{eqnarray*}

(in either order) where $ \omega$ is real, and $ T$ denotes the sampling interval in seconds.

Thus, we can determine the frequency of oscillation $ \omega$ (and verify that the system actually oscillates) by determining the eigenvalues $ \lambda_i $ of $ A$. Note that, as a prerequisite, it will also be necessary to find two linearly independent eigenvectors of $ A$ (columns of $ E$).

Finding the Eigenstructure of A

Starting with the defining equation for an eigenvector $ \underline{e}$ and its corresponding eigenvalue $ \lambda$,

$\displaystyle A\underline{e}_i= \lambda_i \underline{e}_i,\quad i=1,2
$

we get

$\displaystyle \left[\begin{array}{cc} c & c-1 \\ [2pt] c+1 & c \end{array}\right] \left[\begin{array}{c} 1 \\ [2pt] \eta_i \end{array}\right] = \lambda_i \left[\begin{array}{c} 1 \\ [2pt] \eta_i \end{array}\right] = \left[\begin{array}{c} \lambda_i \\ [2pt] \lambda_i \eta_i \end{array}\right]. \protect$ (G.23)

We normalized the first element of $ \underline{e}_i$ to 1 since $ g\underline{e}_i$ is an eigenvector whenever $ \underline{e}_i$ is. (If there is a missing solution because its first element happens to be zero, we can repeat the analysis normalizing the second element to 1 instead.)

Equation (G.23) gives us two equations in two unknowns:

$\displaystyle c+\eta_i (c-1)$ $\displaystyle =$ $\displaystyle \lambda_i
\protect$ (G.24)
$\displaystyle (1+c) +c\eta_i$ $\displaystyle =$ $\displaystyle \lambda_i \eta_i$ (G.25)

Substituting the first into the second to eliminate $ \lambda_i $, we get

\begin{eqnarray*}
1+c+c\eta_i &=& [c+\eta_i (c-1)]\eta_i = c\eta_i + \eta_i ^2 (c-1)\\ [5pt]
\,\,\Rightarrow\,\,1+c &=& \eta_i ^2 (c-1)\\ [5pt]
\,\,\Rightarrow\,\,\eta_i &=& \pm \sqrt{\frac{c+1}{c-1}}.
\end{eqnarray*}

Thus, we have found both eigenvectors

\begin{eqnarray*}
\underline{e}_1&=&\left[\begin{array}{c} 1 \\ [2pt] \eta \end{array}\right], \quad
\underline{e}_2\;=\;\left[\begin{array}{c} 1 \\ [2pt] -\eta \end{array}\right], \quad \hbox{where}\\ [5pt]
\eta&\isdef &\sqrt{\frac{c+1}{c-1}}.
\end{eqnarray*}

They are linearly independent provided $ \eta\neq0\Leftrightarrow
c\neq -1$ and finite provided $ c\neq 1$.

We can now use Eq.$ \,$(G.24) to find the eigenvalues:

$\displaystyle \lambda_i = c + \eta_i (c-1) = c \pm \sqrt{\frac{c+1}{c-1} (c-1)^2}
= c \pm \sqrt{c^2-1}
$

Assuming $ \left\vert c\right\vert<1$, the eigenvalues are

$\displaystyle \lambda_i = c \pm j\sqrt{1-c^2} \protect$ (G.26)

and so this is the range of $ c$ corresponding to sinusoidal oscillation. For $ \left\vert c\right\vert>1$, the eigenvalues are real, corresponding to exponential growth and decay. The values $ c=\pm 1$ yield a repeated root (dc or $ f_s/2$ oscillation).

Let us henceforth assume $ -1 < c < 1$. In this range $ \theta \isdef
\arccos(c)$ is real, and we have $ c=\cos(\theta)$, $ \sqrt{1-c^2} =
\sin(\theta)$. Thus, the eigenvalues can be expressed as follows:

\begin{eqnarray*}
\lambda_1 &=& c + j\sqrt{1-c^2} = \cos(\theta) + j\sin(\theta) = e^{j\theta}\\ [5pt]
\lambda_2 &=& c - j\sqrt{1-c^2} = \cos(\theta) - j\sin(\theta) = e^{-j\theta}
\end{eqnarray*}

Equating $ \lambda_i $ to $ e^{j\omega_i T}$, we obtain $ \omega_i T = \pm
\theta$, or $ \omega_i = \pm \theta/T = \pm f_s\theta =
\pm f_s\arccos(c)$, where $ f_s$ denotes the sampling rate. Thus the relationship between the coefficient $ c$ in the digital waveguide oscillator and the frequency of sinusoidal oscillation $ \omega$ is expressed succinctly as

$\displaystyle \fbox{$\displaystyle c = \cos(\omega T).$}
$

We see that the coefficient range (-1,1) corresponds to frequencies in the range $ (-f_s/2,f_s/2)$, and that's the complete set of available digital frequencies.

We have now shown that the system of Fig.G.3 oscillates sinusoidally at any desired digital frequency $ \omega$ rad/sec by simply setting $ c=\cos(\omega T)$, where $ T$ denotes the sampling interval.


Choice of Output Signal and Initial Conditions

Recalling that $ {\underline{x}}(n) = E\underline{{\tilde x}}(n)$, the output signal from any diagonal state-space model is a linear combination of the modal signals. The two immediate outputs $ x_1(n)$ and $ x_2(n)$ in Fig.G.3 are given in terms of the modal signals $ {\tilde x}_1(n) = \lambda_1^n{\tilde x}_1(0)$ and $ {\tilde x}_2(n)= \lambda_2^n{\tilde x}_2(0)$ as

\begin{eqnarray*}
y_1(n) &=& [1, 0]\,{\underline{x}}(n) = [1, 0]\,E\,\underline{{\tilde x}}(n)
= [1, 0] \left[\begin{array}{cc} 1 & 1 \\ [2pt] \eta & -\eta \end{array}\right]\underline{{\tilde x}}(n)
= {\tilde x}_1(n) + {\tilde x}_2(n)\\ [5pt]
y_2(n) &=& [0, 1]\,{\underline{x}}(n) = [0, 1]\,E\,\underline{{\tilde x}}(n)
= \eta\,{\tilde x}_1(n) - \eta\,{\tilde x}_2(n)
= \eta \lambda_1^n {\tilde x}_1(0) - \eta \lambda_2^n\,{\tilde x}_2(0).
\end{eqnarray*}

The output signal from the first state variable $ x_1(n)$ is

\begin{eqnarray*}
y_1(n) &=& \lambda_1^n\,{\tilde x}_1(0) + \lambda_2^n\,{\tilde x}_2(0)\\ [5pt]
&=& e^{j\omega n T} {\tilde x}_1(0) + e^{-j\omega n T}{\tilde x}_2(0).
\end{eqnarray*}

The initial condition $ {\underline{x}}(0) = [1, 0]^T$ corresponds to modal initial state

$\displaystyle \underline{{\tilde x}}(0) = E^{-1}\left[\begin{array}{c} 1 \\ [2pt] 0 \end{array}\right] = \left[\begin{array}{c} 1/2 \\ [2pt] 1/2 \end{array}\right].
$

For this initialization, the output $ y_1$ from the first state variable $ x_1$ is simply

$\displaystyle y_1(n) = \frac{e^{j\omega n T} + e^{-j\omega n T}}{2} = \cos(\omega n T).
$

A similar derivation can be carried out to show that the output $ y_2(n) = x_2(n)$ is proportional to $ \sin(\omega nT)$, i.e., it is in phase quadrature with respect to $ y_1(n)=x_1(n)$. Phase-quadrature outputs are often useful in practice, e.g., for generating complex sinusoids.
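
To close the loop, the following Matlab sketch (with arbitrarily chosen sampling rate and frequency) simulates the undriven oscillator of Fig.G.3 from $ {\underline{x}}(0)=[1, 0]^T$ and confirms that $ x_1(n) = \cos(\omega n T)$:

% Sketch: simulate the digital waveguide oscillator of Fig.G.3:
fs = 100;  f = 7;            % arbitrary sampling rate and frequency (Hz)
T = 1/fs;  w = 2*pi*f;
c = cos(w*T);
A = [c, c-1; c+1, c];        % state transition matrix from Fig.G.3
Ns = 50;  x = [1; 0];        % initial state x(0) = [1; 0]
y1 = zeros(1,Ns);
for n = 0:Ns-1
  y1(n+1) = x(1);            % y1(n) = x1(n)
  x = A*x;                   % undriven update: x(n+1) = A*x(n)
end
norm(y1 - cos(w*T*(0:Ns-1))) % zero to within round-off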


References

Further details on state-space analysis of linear systems may be found in [102,37]. More Matlab exercises and some supporting theory may be found in [10, Chapter 5].


State Space Problems

See http://ccrma.stanford.edu/~jos/filtersp/State_Space_Problems.html

