
Modal Representation

One of the filter structures introduced in Book II [449, p. 209] was the parallel second-order filter bank, which may be computed from the general transfer function (a ratio of polynomials in $ z$) by means of the Partial Fraction Expansion (PFE) [449, p. 129]:

$\displaystyle H(z) \triangleq \frac{B(z)}{A(z)} = \sum_{i=1}^{N} \frac{r_i}{1-p_iz^{-1}}$ (2.12)

where

$\displaystyle B(z) = b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_M z^{-M}$
$\displaystyle A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_N z^{-N}, \quad M<N.$

The PFE Eq.$\,$(2.12) expands the (strictly proper) transfer function as a parallel bank of (complex) first-order resonators. When the polynomial coefficients $b_i$ and $a_i$ are real, the complex poles $p_i$ and residues $r_i$ occur in conjugate pairs, and each such pair can be combined to form a real second-order section [449, p. 131]:

$\displaystyle H_i(z) = \frac{r_i}{1-p_iz^{-1}} + \frac{\overline{r_i}}{1-\overline{p_i}z^{-1}} = \frac{2G_i\cos(\phi_i) - 2G_iR_i\cos(\phi_i-\theta_i)z^{-1}}{1-2R_i\,\cos(\theta_i)z^{-1}+ R_i^2 z^{-2}},$

where $p_i \triangleq R_ie^{j\theta_i}$ and $r_i \triangleq G_ie^{j\phi_i}$. Thus, every transfer function $H(z)$ with real coefficients can be realized as a parallel bank of real first- and/or second-order digital filter sections, plus a parallel FIR branch when $M\ge N$.
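As a concrete sketch of the expansion above, the residues and poles of a strictly proper $B(z)/A(z)$ can be computed numerically with `scipy.signal.residuez`; the filter coefficients below are made up for illustration:

```python
# Sketch: partial fraction expansion of a strictly proper H(z) = B(z)/A(z)
# into parallel one-pole sections, using scipy.signal.residuez.
# The example coefficients are illustrative, not from the text.
import numpy as np
from scipy.signal import residuez

b = [1.0, 0.5]          # B(z) = 1 + 0.5 z^-1             (M = 1)
a = [1.0, -1.2, 0.81]   # A(z) = 1 - 1.2 z^-1 + 0.81 z^-2 (N = 2)

# r_i = residues, p_i = poles; k holds the direct FIR terms
# (none are needed here since M < N).
r, p, k = residuez(b, a)

# Verify: sum_i r_i / (1 - p_i z^-1) matches B(z)/A(z) at a test point.
z = 1.3 + 0.2j
H_direct = np.polyval(b[::-1], 1/z) / np.polyval(a[::-1], 1/z)
H_pfe = sum(ri / (1 - pi / z) for ri, pi in zip(r, p))
assert np.allclose(H_direct, H_pfe)
```

Note that the two poles returned here form a complex-conjugate pair, as the real coefficients of $A(z)$ require.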

As we will develop in §8.5, modal synthesis employs a ``source-filter'' synthesis model in which a driving signal feeds a parallel filter bank, each section of which implements the transfer function of one resonant mode of the physical system. Normally each section is second-order, but it is sometimes convenient to use larger-order sections; for example, fourth-order sections have been used to model piano partials so that beating and two-stage-decay effects are built into each partial individually [30,29].

For example, if the physical system were a row of tuning forks (which are designed to have only one significant resonant frequency), each tuning fork would be represented by a single (real) second-order filter section in the sum. In a modal vibrating string model, each second-order filter implements one ``ringing partial overtone'' in response to an excitation such as a finger-pluck or piano-hammer-strike.
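Each such second-order mode can be built directly from one conjugate pair of poles and residues. The sketch below applies the closed-form combination given earlier, using made-up values for the pole radius $R$, pole angle $\theta$, residue magnitude $G$, and residue angle $\phi$:

```python
# Sketch: combine a conjugate pair of one-pole sections into one real
# second-order section. Pole p = R e^{j theta}, residue r = G e^{j phi};
# the numeric values are illustrative.
import numpy as np

R, theta = 0.95, 0.4    # pole radius and angle
G, phi = 2.0, 0.3       # residue magnitude and angle
p = R * np.exp(1j * theta)
r = G * np.exp(1j * phi)

# Real second-order coefficients from the closed form:
#   H(z) = (b0 + b1 z^-1) / (1 + a1 z^-1 + a2 z^-2)
b0 = 2 * G * np.cos(phi)
b1 = -2 * G * R * np.cos(phi - theta)
a1 = -2 * R * np.cos(theta)
a2 = R**2

# Check against the sum of the two complex one-pole sections at a test point.
z = 1.1 + 0.7j
H_pair = r / (1 - p / z) + np.conj(r) / (1 - np.conj(p) / z)
H_sos = (b0 + b1 / z) / (1 + a1 / z + a2 / z**2)
assert np.allclose(H_pair, H_sos)
```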

State Space to Modal Synthesis

The partial fraction expansion works well to create a modal-synthesis system from a transfer function. However, this approach can yield inefficient realizations when the system has multiple inputs and outputs, because in that case, each element of the transfer-function matrix must be separately expanded by the PFE. (The poles are the same for each element, unless they are canceled by zeros, so it is really only the residue calculations that must be carried out for each element.)

If the second-order filter sections are realized in direct form II or transposed direct form I (or, more generally, in any form in which the poles effectively precede the zeros), then the poles can be shared among all the outputs for each input: the all-pole portion of the filter from that input to every output sees the same input signal, and therefore maintains the same state. Similarly, when the filter sections implement the poles after the zeros in series, the recursive portion can be shared across all inputs for each output; one can imagine ``pushing'' the identical two-pole filters through the summer that forms the output signal. In summary, when the number of outputs exceeds the number of inputs, the poles are more efficiently implemented before the zeros and shared across all outputs for each input, and vice versa. This paragraph can be summarized symbolically by the following matrix equation:

$\displaystyle \left[\begin{array}{c} y_1 \\ [2pt] y_2 \end{array}\right] = \frac{1}{A}\left[\begin{array}{cc} B_{11} & B_{12} \\ [2pt] B_{21} & B_{22} \end{array}\right] \left[\begin{array}{c} u_1 \\ [2pt] u_2 \end{array}\right] = \left[\begin{array}{cc} B_{11} & B_{12} \\ [2pt] B_{21} & B_{22} \end{array}\right] \left\{\frac{1}{A}\left[\begin{array}{c} u_1 \\ [2pt] u_2 \end{array}\right]\right\}$
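The sharing argument can be sketched numerically: filtering an input through the shared all-pole section $1/A$ once, then through each numerator, gives the same outputs as filtering through each full ratio $B_j/A$ separately (the coefficients below are illustrative):

```python
# Sketch: poles implemented once and shared across outputs. Running u
# through 1/A once, then through each numerator B_j, matches running u
# through each B_j/A separately. Illustrative coefficients.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
u = rng.standard_normal(256)              # one input signal
a = [1.0, -1.2, 0.81]                     # shared denominator A(z)
numerators = [[1.0, 0.5], [0.3, -0.2]]    # B_1(z), B_2(z): two outputs

w = lfilter([1.0], a, u)                  # shared all-pole section 1/A
for b in numerators:
    y_shared = lfilter(b, [1.0], w)       # zeros after the shared poles
    y_direct = lfilter(b, a, u)           # full filter B_j/A per output
    assert np.allclose(y_shared, y_direct)
```

Because the filters are linear and time-invariant, the series order of the pole and zero sections does not change the result, which is what makes the sharing possible.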

What may not be obvious when working with transfer functions alone is that it is possible to share the poles across all of the inputs and outputs. This is accomplished by diagonalizing a state-space model by means of a similarity transformation [449, p. 360], as discussed a bit further in §8.5. In a diagonalized state-space model, the $A$ matrix is diagonal. The $B$ matrix provides routing and scaling for all the input signals driving the modes. The $C$ matrix forms the appropriate linear combination of modes for each output signal. If the original state-space model is a physical model, then the transformed system gives a parallel filter bank that is excited by the inputs and observed at the outputs in a physically correct way.

Force-Driven-Mass Diagonalization Example

To diagonalize our force-driven-mass example, we may begin with its state-space model Eq.$\,$(2.9):

$\displaystyle \left[\begin{array}{c} x_{n+1} \\ [2pt] v_{n+1} \end{array}\right] = \left[\begin{array}{cc} 1 & T \\ [2pt] 0 & 1 \end{array}\right] \left[\begin{array}{c} x_n \\ [2pt] v_n \end{array}\right] + \left[\begin{array}{c} 0 \\ [2pt] T/m \end{array}\right] f_n, \quad n=0,1,2,\ldots$

which is in the general state-space form $\underline{x}(n+1) = A\,\underline{x}(n) + B\,\underline{u}(n)$, as needed (Eq.$\,$(2.8)). We can see that $A$ is already a Jordan block of order 2 [449, p. 368]. (We can change the $T$ to 1 by scaling the physical units of $x_2(n)$.) Thus, the system is already as diagonal as it's going to get. The two repeated poles at $z=1$ are effectively in series (instead of parallel), giving a ``defective'' $A$ matrix [449, p. 136].

Typical State-Space Diagonalization Procedure

As discussed in [449, p. 362] and exemplified in §C.17.6, to diagonalize a system, we must find the eigenvectors of $ A$ by solving

$\displaystyle A\underline{e}_i = \lambda_i \underline{e}_i$

for $\underline{e}_i$, $i=1,\ldots,N$, where $\lambda_i$ is simply the $i$th pole (eigenvalue of $A$). The $N$ eigenvectors $\underline{e}_i$ are collected into a similarity-transformation matrix:

$\displaystyle E = \left[\begin{array}{cccc} \underline{e}_1 & \underline{e}_2 & \cdots & \underline{e}_N \end{array}\right].$

If there are coupled repeated poles, the corresponding missing eigenvectors can be replaced by generalized eigenvectors. The $E$ matrix is then used to diagonalize the system by means of a simple change of coordinates:

$\displaystyle \underline{x}(n) \triangleq E\, \tilde{\underline{x}}(n).$

The new diagonalized system is then

$\displaystyle \tilde{\underline{x}}(n+1) = \tilde{A}\,\tilde{\underline{x}}(n) + \tilde{B}\,\underline{u}(n)$
$\displaystyle \underline{y}(n) = \tilde{C}\,\tilde{\underline{x}}(n) + \tilde{D}\,\underline{u}(n),$ (2.13)

where

$\displaystyle \tilde{A} = E^{-1}A E, \quad \tilde{B} = E^{-1}B, \quad \tilde{C} = C E, \quad \tilde{D} = D.$ (2.14)

The transformed system describes the same system as Eq.$\,$(2.8) relative to new state-variable coordinates $\tilde{\underline{x}}(n)$. For example, it can be checked that the transfer-function matrix is unchanged.
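The whole procedure, including the transfer-function check, can be sketched on a small made-up system with distinct poles:

```python
# Sketch of the diagonalization procedure on a made-up 2x2 system with
# distinct poles, verifying that the transfer function is unchanged.
import numpy as np

A = np.array([[1.2, -0.81], [1.0, 0.0]])   # companion form; poles 0.6 +/- 0.67j
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.5]])
D = np.array([[0.0]])

lam, E = np.linalg.eig(A)          # eigenvalues = poles, eigenvectors -> E
A_t = np.linalg.inv(E) @ A @ E     # ~diagonal, up to rounding error
B_t = np.linalg.inv(E) @ B
C_t = C @ E                        # D is unchanged by the transformation

def H(z, A, B, C, D):
    """Transfer-function matrix H(z) = C (zI - A)^{-1} B + D."""
    n = A.shape[0]
    return C @ np.linalg.inv(z * np.eye(n) - A) @ B + D

# The similarity transformation leaves the transfer function unchanged.
z = 1.5 + 0.3j
assert np.allclose(A_t, np.diag(lam))
assert np.allclose(H(z, A, B, C, D), H(z, A_t, B_t, C_t, D))
```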

Efficiency of Diagonalized State-Space Models

Note that a general $N$th-order state-space model Eq.$\,$(2.8) requires around $N^2$ multiply-adds per time-step update (assuming the numbers of inputs and outputs are small compared with the number of state variables, so that the $A\underline{x}(n)$ computation dominates). After diagonalization by a similarity transformation, the time update costs only order $N$ operations, like any other efficient digital filter realization. Thus, a diagonalized state-space model (modal representation) is a strong contender for applications in which independent control of resonant modes is desired.
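The cost difference comes down to replacing a matrix-vector product with an elementwise product, as this small sketch (with arbitrary sizes and random data) shows:

```python
# Sketch: after diagonalization the state update is elementwise (order N)
# instead of a full matrix-vector product (order N^2). Sizes and data
# here are arbitrary.
import numpy as np

N = 8
rng = np.random.default_rng(1)
lam = 0.9 * np.exp(2j * np.pi * rng.random(N))   # N distinct complex poles
x = rng.standard_normal(N) + 0j                  # current (complex) state
Bu = rng.standard_normal(N)                      # B u(n), computed once per step

x_dense = np.diag(lam) @ x + Bu                  # N^2 multiplies (general A)
x_modal = lam * x + Bu                           # N multiplies (diagonal A)
assert np.allclose(x_dense, x_modal)
```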

Another advantage of the modal expansion is that frequency-dependent characteristics of hearing can be brought to bear. Low-frequency resonances can easily be modeled more carefully and in more detail than very high-frequency resonances, which tend to be heard only ``statistically'' by the ear. For example, rows of high-frequency modes can be collapsed into more efficient digital waveguide loops (§8.5) by retuning them to the nearest harmonic mode series.
