
Physical Models

We now turn to the main subject of this book, physical models of musical instruments and audio effects. In contrast to the non-physical signal models mentioned above, we will consider a signal model to be a physical signal model when there is an explicit representation of the relevant physical state of the sound source. For example, a string physical model must offer the possibility of exciting the string at any point along its length.

We begin with a review of physical models in general, followed by an overview of computational subtypes, with some indication of their relative merits, and what is and is not addressed in this book.


All We Need is Newton

Since there are no relativistic or quantum effects to worry about in musical instruments or audio effects (at least not yet), good old Newtonian mechanics will suffice for our purposes. Newton's three laws of motion can be summarized by the classic equation

$\displaystyle f = m\, a$   (Force = Mass times Acceleration)$\displaystyle .$

However, the apparent mathematical simplicity of Newton's basic laws of motion does not mean that our task will always be easy. Models based on Newton's laws can quickly become prohibitively complex. We will usually need many further simplifications that preserve both sound quality and expressivity of control.


Formulations

Below are various physical-model representations we will consider:

  * Ordinary Differential Equations (ODEs)
  * Partial Differential Equations (PDEs)
  * difference equations (finite difference schemes)
  * state-space models
  * transfer functions
  * modal representations
  * equivalent circuits and impedance networks
  * Wave Digital Filters (WDFs)
  * digital waveguide networks

We will briefly introduce these topics below, and the remaining chapters and appendices will develop further details.

ODEs and PDEs are purely mathematical descriptions (being differential equations), but they can be readily ``digitized'' to obtain computational physical models. Difference equations are simply digitized differential equations; that is, digitizing ODEs and PDEs produces DEs. A DE may also be called a finite difference scheme. A discrete-time state-space model is a special formulation of a DE in which a vector of state variables is defined and propagated in a systematic way (as a vector first-order finite-difference scheme).

A linear difference equation with constant coefficients--the Linear, Time-Invariant (LTI) case--can be reduced to a collection of transfer functions, one for each pairing of input and output signals (or a single transfer-function matrix can relate a vector of input signal z transforms to a vector of output signal z transforms). An LTI state-space model can be diagonalized to produce a so-called modal representation, yielding a computational model consisting of a parallel bank of second-order digital filters.

Impedance networks and their associated equivalent circuits are at the foundations of electrical engineering, and analog circuits have been used extensively to model linear systems and provide many useful functions. They are also useful intermediate representations for developing computational physical models in audio. Wave Digital Filters (WDFs) were introduced as a means of digitizing analog circuits element by element, while preserving the ``topology'' of the original analog circuit (a very useful property when parameters are time varying, as they often are in audio effects). Digital waveguide networks can be viewed as highly efficient computational forms for propagating solutions to PDEs allowing wave propagation. They can also be used to ``compress'' the computation associated with a sum of quasi-harmonically tuned second-order resonators.

All of the above techniques are discussed to varying extents in this book. The following sections provide a bit more introduction before plunging into the chapters that follow.


ODEs

Ordinary Differential Equations (ODEs) typically result directly from Newton's laws of motion, restated here as follows:

$\displaystyle f(t) \eqsp m\, \ddot x(t)$   (Force = Mass times Acceleration)

This is a second-order ODE relating the force $ f(t)$ on a mass $ m$ at time $ t$ to the second time-derivative of its position $ x(t)$, i.e., $ a(t) \isdeftext \ddot x(t) \isdeftext d^2 x(t)/dt^2$. A physical diagram is shown in Fig.1.1. From this ODE we can see that a constant applied force $ f(t)=f_0>0$ results in a constant acceleration $ a(t)$, a linearly increasing velocity $ v(t)=\int a(t)\, dt$, and a quadratically increasing position $ x(t)=\int v(t)\, dt$. The initial position $ x(0)$ and velocity $ v(0)$ of the mass comprise the initial state of the mass, and serve as the boundary conditions for the ODE. The boundary conditions must be known in order to determine the two constants of integration needed when computing $ x(t)$ for $ t>0$.

Figure 1.1: Physical diagram of an external force driving a mass sliding on a frictionless surface.
\includegraphics{eps/forcemassintro}
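For example, with zero initial state ($ x(0)=v(0)=0$) and a constant applied force $ f(t)=f_0$, integrating twice gives

$\displaystyle a(t) \eqsp \frac{f_0}{m}, \qquad v(t) \eqsp \frac{f_0}{m}\,t, \qquad x(t) \eqsp \frac{f_0}{2m}\,t^2,$

making the linearly increasing velocity and quadratically increasing position explicit.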

If the applied force $ f(t)$ is due to a spring with spring-constant $ k$, then we may write the ODE as

$\displaystyle k\, x(t) + m\, \ddot x(t) \eqsp 0$   (Spring Force + Mass Inertial Force = 0)$\displaystyle .$

This case is diagrammed in Fig.1.2.

Figure 1.2: Mass-spring-wall diagram.
\includegraphics{eps/massspringwall}
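As a quick check on this ODE (a standard result, included here only for illustration), sinusoidal motion at the natural frequency satisfies it:

$\displaystyle x(t) \eqsp A\cos(\omega_0 t + \phi), \qquad \omega_0 \isdefs \sqrt{k/m},$

since then $ m\,\ddot x(t) = -m\,\omega_0^2\, x(t) = -k\, x(t)$; the amplitude $ A$ and phase $ \phi$ are set by the initial position and velocity.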

If the mass is sliding with friction, then a simple ODE model is given by

$\displaystyle k\, x(t) + \mu\, \dot x(t) + m\, \ddot x(t) \eqsp 0$   (Spring + Friction + Inertial Forces = 0)

as depicted in Fig.1.3.

Figure 1.3: Mass-spring-dashpot-wall diagram.
\includegraphics{eps/massspringdashpotwall}

We will use such ODEs to model mass, spring, and dashpot elements in Chapter 7.


PDEs

A partial differential equation (PDE) extends ODEs by adding one or more independent variables (usually spatial variables). For example, the wave equation for the ideal vibrating string adds one spatial dimension $ x$ (along the axis of the string) and may be written as follows:

$\displaystyle K\, y''(x,t) = \epsilon \, {\ddot y}(x,t)$   (Restoring Force = Inertial Force)$\displaystyle , \protect$ (1.1)

where $ y(x,t)$ denotes the transverse displacement of the string at position $ x$ along the string and time $ t$, and $ y'(x,t)\isdeftext \partial y(x,t)/\partial x$ denotes the partial derivative of $ y$ with respect to $ x$. The physical parameters in this case are string tension $ K$ and string mass-density $ \epsilon $. This PDE is the starting point for both digital waveguide models (Chapter 6) and finite difference schemes (§C.2.1).


Difference Equations (Finite Difference Schemes)

There are many methods for converting ODEs and PDEs to difference equations [53,55,481]. As will be discussed in §7.3, a very simple, order-preserving method is to replace each derivative with a finite difference:

$\displaystyle \dot x(t)\isdefs \frac{d}{dt} x(t) \isdefs \lim_{\delta\to 0} \frac{x(t) - x(t-\delta)}{\delta} \;\approx\; \frac{x(n T)-x[(n-1)T]}{T} \protect$ (1.2)

for sufficiently small $ T$ (the sampling interval). This is formally known as the backward difference operation for approximating differentiation. We will discuss a variety of such methods in §7.3 and Appendix D.

As a simple example, consider a mass $ m$ driven along a frictionless surface by a driving force $ f(t)$, as in Fig.1.1, and suppose we wish to know the resulting velocity of the mass $ v(t)$, assuming it starts out with position and velocity 0 at time 0 (i.e., $ x(0)=v(0)=0$). Then, from Newton's $ f=ma$ relation, the ODE is

$\displaystyle f(t) = m\,{\dot v}(t),$

and the difference equation resulting from the backward-difference substitution is

$\displaystyle f(nT) \eqsp m \frac{v(nT) - v[(n-1)T]}{T}, \quad n=0,1,2,\ldots\,.$

Solving for $ v(nT)$ yields the following finite difference scheme:

$\displaystyle v(nT) \eqsp v[(n-1)T] + \frac{T}{m} f(nT), \quad n=0,1,2,\ldots \protect$ (1.3)

with $ v(-T)\isdeftext 0$. In general, the driving force $ f$ could depend on the current state of the system (e.g., if a spring and/or dashpot were introduced). In such a case, Eq.$ \,$(1.3) may not be computable. (A delay-free loop could appear in the signal flow diagram.) Also, a finite force at time $ t$ cannot produce an instantaneous velocity at time $ t$, so Eq.$ \,$(1.3) is not ``physical'' in that sense, since $ v(nT)$ depends on $ f(nT)$. To address both of these issues, we can instead use the forward difference approximation to the derivative:

$\displaystyle \dot x(t) \eqsp \lim_{\delta\to 0} \frac{x(t+\delta) - x(t)}{\delta} \;\approx\; \frac{x[(n+1)T]-x(n T)}{T} \protect$ (1.4)

As $ T\to0$, the forward and backward difference operators approach the same limit (since $ x(t)$ is presumed continuous). Using this we obtain what is called an explicit finite difference scheme:

$\displaystyle v[(n+1)T] \eqsp v(nT) + \frac{T}{m} f(nT), \quad n=0,1,2,\ldots \protect$ (1.5)

with $ v(0)\isdeftext 0$.

A finite difference scheme is said to be explicit when it can be computed forward in time in terms of quantities from previous time steps, as in this example. Thus, an explicit finite difference scheme can be implemented in real time as a causal digital filter.
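To make this concrete, here is a minimal Python sketch of the explicit scheme Eq.$\,$(1.5); the mass, sampling interval, and driving force below are arbitrary example values chosen only for illustration.

# Explicit (forward-difference) update of Eq. (1.5):
#   v[(n+1)T] = v(nT) + (T/m) f(nT),  with v(0) = 0.
# Example values only: m = 0.1 kg, T = 1/100 s, constant 1-N force.
m = 0.1
T = 1.0/100
N = 100

f = [1.0]*N          # driving-force samples f(nT)
v = [0.0]*(N + 1)    # v[n] approximates the velocity at time nT

for n in range(N):
    v[n + 1] = v[n] + (T/m)*f[n]

# For a constant force the exact velocity is (f0/m)*t, so after
# N*T = 1 second we expect approximately 10 m/s:
print(v[N])

Because each new sample depends only on past quantities, the same loop could run in real time as a causal digital filter.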

There are also implicit finite-difference schemes which may correspond to non-causal digital filters [449]. Implicit schemes are generally solved using iterative and/or matrix-inverse methods, and they are typically used offline (not in real time) [555].

There is also an interesting class of explicit schemes called semi-implicit finite-difference schemes which are obtained from an implicit scheme by imposing a fixed upper limit on the number of iterations in, say, Newton's method for iterative solution [555]. Thus, any implicit scheme that can be quickly solved by iterative methods can be converted to an explicit scheme for real-time usage. One technique for improving the iterative convergence rate is to work at a very high sampling rate, and initialize the iteration for each sample at the solution for the previous sample [555].

In this book, we will be concerned almost exclusively with explicit linear finite-difference schemes, i.e., causal digital filter models of one sort or another. That is, the main thrust is to obtain as much ``physical modeling power'' as possible from ordinary digital filters and delay lines. We will also be able to easily add memoryless nonlinearities where needed (such as implemented by table look-ups and short polynomial evaluations) as a direct result of the physical meaning of the signal samples.


State Space Models

Equations of motion for any physical system may be conveniently formulated in terms of the state of the system [330]:

$\displaystyle \underline{{\dot x}}(t) = f_t[\underline{x}(t),\underline{u}(t)] \protect$ (1.6)

Here, $ \underline{x}(t)$ denotes the state of the system at time $ t$, $ \underline{u}(t)$ is a vector of external inputs (typically forces), and the general vector function $ f_t$ specifies how the current state $ \underline{x}(t)$ and inputs $ \underline{u}(t)$ cause a change in the state at time $ t$ by affecting its time derivative $ \underline{{\dot x}}(t)$. Note that the function $ f_t$ may itself be time varying in general. The model of Eq.$ \,$(1.6) is extremely general for causal physical systems. Even the functionality of the human brain is well cast in such a form.

Equation (1.6) is diagrammed in Fig.1.4.

Figure: Continuous-time state-space model $ \underline{{\dot x}}(t) = f_t[\underline{x}(t),\underline{u}(t)]$.
\includegraphics{eps/statespaceanalog}

The key property of the state vector $ \underline{x}(t)$ in this formulation is that it completely determines the system at time $ t$, so that future states depend only on the current state and on any inputs at time $ t$ and beyond. In particular, all past states and the entire input history are ``summarized'' by the current state $ \underline{x}(t)$. Thus, $ \underline{x}(t)$ must include all ``memory'' of the system.

Forming Outputs

Any system output is some function of the state, and possibly the input (directly):

$\displaystyle \underline{y}(t) \isdef o_t[\underline{x}(t),\underline{u}(t)]$

The general case of output extraction is shown in Fig.1.5.

Figure: Continuous-time state-space model with output vector $ \underline{y}(t) = o_t[\underline{x}(t),\underline{u}(t)]$.
\includegraphics{eps/statespaceanalogwo}

The output signal (vector) is most typically a linear combination of state variables and possibly the current input:

$\displaystyle \underline{y}(t) \isdefs C\underline{x}(t) + D\underline{u}(t)$

where $ C$ and $ D$ are constant matrices of linear-combination coefficients.


State-Space Model of a Force-Driven Mass

Figure 1.6: Ideal mass $ m$ on frictionless surface driven by force $ f(t)$.
\includegraphics{eps/forcemassintrosimp}

For the simple example of a mass $ m$ driven by external force $ f$ along the $ x$ axis, we have the system of Fig.1.6. We choose the state variable to be the velocity $ v={\dot x}$, so that Newton's $ f=ma$ yields

$\displaystyle \dot{v} \eqsp \frac{1}{m} f.$

This is a first-order system (no vector needed). We'll look at a simple vector example below in §1.3.7.


Numerical Integration of General State-Space Models

An approximate discrete-time numerical solution of Eq.$ \,$(1.6) is provided by

$\displaystyle \underline{x}(t_n+T_n) \eqsp \underline{x}(t_n) + T_n\,f_{t_n}[\underline{x}(t_n),\underline{u}(t_n)], \quad n=0,1,2,\ldots\,. \protect$ (1.7)

Let

$\displaystyle g_{t_n}[\underline{x}(t_n),\underline{u}(t_n)] \isdefs \underline{x}(t_n) + T_n\,f_{t_n}[\underline{x}(t_n),\underline{u}(t_n)].$

Then we can diagram the time-update as in Fig.1.7. In this form, it is clear that $ g_{t_n}$ predicts the next state $ \underline{x}(t_n+T_n)$ as a function of the current state $ \underline{x}(t_n)$ and current input $ \underline{u}(t_n)$. In the field of computer science, computations having this form are often called finite state machines (or simply state machines), as they compute the next state given the current state and external inputs.

Figure 1.7: Discrete-time state-space model viewed as a state predictor, or finite state machine.
\includegraphics{eps/statemachineg}

This is a simple example of numerical integration for solving an ODE, where in this case the ODE is given by Eq.$ \,$(1.6) (a very general, potentially nonlinear, vector ODE). Note that the initial state $ \underline{x}(t_0)$ is required to start Eq.$ \,$(1.7) at time zero; the initial state thus provides boundary conditions for the ODE at time zero. The time sampling interval $ T_n$ may be fixed for all time as $ T_n=T$ (as it normally is in linear, time-invariant digital signal processing systems), or it may vary adaptively according to how fast the system is changing (as is often needed for nonlinear and/or time-varying systems). Further discussion of nonlinear ODE solvers is taken up in §7.4, but for most of this book, linear, time-invariant systems will be emphasized.

Note that for handling switching states (such as op-amp comparators and the like), the discrete-time state-space formulation of Eq.$ \,$(1.7) is more conveniently applicable than the continuous-time formulation in Eq.$ \,$(1.6).
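As an illustration of Eq.$\,$(1.7), the following Python sketch applies the explicit update to the mass-spring-dashpot of Fig.1.3 with a fixed step size $ T_n=T$; the parameter values are arbitrary assumptions made only for the example.

# Explicit state update x(t+T) = x(t) + T*f[x(t), u(t)]  (Eq. (1.7)).
# Example system: mass-spring-dashpot, state x = (position, velocity).
m, k, mu = 0.1, 50.0, 0.2      # arbitrary example parameters
T = 1.0/10000                  # fixed time step

def f(x, u):
    """Time derivative of the state for the damped mass-spring system."""
    pos, vel = x
    acc = (u - k*pos - mu*vel)/m
    return (vel, acc)

x = (0.001, 0.0)               # initial state: displaced 1 mm, at rest
for n in range(10000):         # simulate one second of free vibration
    dx = f(x, 0.0)             # zero external input u
    x = (x[0] + T*dx[0], x[1] + T*dx[1])
print(x)                       # decaying oscillation about the origin

A production ODE solver would typically use a higher-order and/or adaptive-step method, but the structure of the time update is the same.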


State Definition

In view of the above discussion, it is perhaps plausible that the state $ \underline{x}(t) = [x_1(t), \ldots, x_N(t)]^T$ of a physical system at time $ t$ can be defined as a collection of state variables $ x_i(t)$, wherein each state variable $ x_i(t)$ is a physical amplitude (pressure, velocity, position, $ \ldots$) corresponding to a degree of freedom of the system. We define a degree of freedom as a single dimension of energy storage. The net result is that it is possible to compute the stored energy in any degree of freedom (the system's ``memory'') from its corresponding state-variable amplitude.

For example, an ideal mass $ m$ can store only kinetic energy $ E_m \eqsp \frac{1}{2}m\, v^2$, where $ v={\dot x}$ denotes the mass's velocity along the $ x$ axis. Therefore, velocity is the natural choice of state variable for an ideal point-mass. We reached the same conclusion above by writing $ f=ma$ in state-space form $ \dot{v}=(1/m)f$. Note that a point mass that can move freely in 3D space has three degrees of freedom and therefore needs three state variables $ (v_x,v_y,v_z)$ in its physical model. In typical models from musical acoustics (e.g., for the piano hammer), masses are allowed only one degree of freedom, corresponding to being constrained to move along a 1D line, like an ideal spring. We'll study the ideal mass further in §7.1.2.

Another state-variable example is provided by an ideal spring described by Hooke's law $ f=kx$ (§B.1.3), where $ k$ denotes the spring constant, and $ x$ denotes the spring displacement from rest. Springs thus contribute a force proportional to displacement in Newtonian ODEs. Such a spring can only store the physical work (force times distance) expended to displace it, in the form of potential energy $ E_k \eqsp \frac{1}{2}k\, x^2$. Thus, spring displacement is the most natural choice of state variable for a spring. More about ideal springs will be discussed in §7.1.3.

In so-called RLC electrical circuits (consisting of resistors $ R_i$, inductors $ L_i$, and capacitors $ C_i$), the state variables are typically defined as all of the capacitor voltages (or charges) and inductor currents. We will discuss RLC electrical circuits further below.

There is no state variable for each resistor current in an RLC circuit because a resistor dissipates energy but does not store it--it has no ``memory'' like capacitors and inductors. The state (current $ I$, say) of a resistor $ R$ is determined by the voltage $ V$ across it, according to Ohm's law $ V=IR$, and that voltage is supplied by the capacitors, inductors, and voltage-sources, etc., to which it is connected. Analogous remarks apply to the dashpot, which is the mechanical analog of the resistor--we do not assign state variables to dashpots. (If we do, such as by mistake, then we will obtain state variables that are linearly dependent on other state variables, and the order of the system appears to be larger than it really is. This does not normally cause problems, and there are many numerical ways to later ``prune'' the state down to its proper order.)

Masses, springs, dashpots, inductors, capacitors, and resistors are examples of so-called lumped elements. Perhaps the simplest distributed element is the continuous ideal delay line. Because it carries a continuum of independent amplitudes, the order (number of state variables) is infinity for a continuous delay line of any length! However, in practice, we often work with sampled, bandlimited systems, and in this domain, delay lines have a finite number of state variables (one for each delay element). Networks of lumped elements yield finite-order state-space models, while even one distributed element jumps the order to infinity until it is bandlimited and sampled.

In summary, a state variable may be defined as a physical amplitude for some energy-storing degree of freedom. In models of mechanical systems, a state variable is needed for each ideal spring and point mass (times the number of dimensions in which it can move). For RLC electric circuits, a state variable is needed for each capacitor and inductor. If there are any switches, their state is also needed in the state vector (e.g., as boolean variables). In discrete-time systems such as digital filters, each unit-sample delay element contributes one (continuous) state variable to the model.


Linear State Space Models

As introduced in Book II [449, Appendix G], in the linear, time-invariant case, a discrete-time state-space model looks like a vector first-order finite-difference model:

$\displaystyle \underline{y}(n) \eqsp C \underline{x}(n) + D\underline{u}(n)$

$\displaystyle \underline{x}(n+1) \eqsp A \underline{x}(n) + B \underline{u}(n) \protect$ (1.8)

where $ \underline{x}(n)$ is the length $ N$ state vector at discrete time $ n$, $ \underline{u}(n)$ is in general a $ p\times 1$ vector of inputs, and $ \underline{y}(n)$ the $ q\times 1$ output vector. $ A$ is the $ N\times N$ state transition matrix, and it determines the dynamics of the system (its poles, or modal resonant frequencies and damping).

The state-space representation is especially powerful for multi-input, multi-output (MIMO) linear systems, and also for time-varying linear systems (in which case any or all of the matrices in Eq.$ \,$(1.8) may have time subscripts $ n$) [220].

To cast the previous force-driven mass example in state-space form, we may first observe that the state of the mass is specified by its velocity $ v(t)$ and position $ x(t)=\int v(t)\, dt$, or $ x[(n+1)T] = x(nT) + T\,v(nT)$ in discrete time. Thus, to Eq.$ \,$(1.5) we may add the explicit difference equation

$\displaystyle x[(n+1)T] \eqsp x(nT) + T\,v(nT) \eqsp x(nT) + T\,v[(n-1)T] + \frac{T^2}{m} f[(n-1)T]$

which, in canonical state-space form, becomes (letting $ x_n\isdef x(nT)$, etc., for notational simplicity)

$\displaystyle \left[\begin{array}{c} x_{n+1} \\ [2pt] v_{n+1} \end{array}\right] \eqsp \left[\begin{array}{cc} 1 & T \\ [2pt] 0 & 1 \end{array}\right] \left[\begin{array}{c} x_n \\ [2pt] v_n \end{array}\right] + \left[\begin{array}{c} 0 \\ [2pt] T/m \end{array}\right] f_n, \quad n=0,1,2,\ldots\,, \protect$ (1.9)

with $ x_0=v_0=0$ being a typical initial state.

General features of this example are that the entire physical state of the system is collected together into a single vector, and the elements of the $ (A,B,C,D)$ matrices include physical parameters (and the sampling interval, in the discrete-time case). The parameters may also vary with time (time-varying systems), or be functions of the state (nonlinear systems).

The general procedure for building a state-space model is to label all the state variables and collect them into a vector $ \underline{x}$, and then work out the state-transition matrix $ A$, input gains $ B$, output gains $ C$, and any direct coefficient $ D$. A state variable $ \underline{x}_i(n)$ is needed for each lumped energy-storage element (mass, spring, capacitor, inductor), and one for each sample of delay in sampled distributed systems. After that, various equivalent (but numerically preferable) forms can be generated by means of similarity transformations [449, pp. 360-374].

We will make sparing use of state-space models in this book because they can be linear-algebra intensive and are therefore rarely used in practical real-time signal processing systems for music and audio effects. However, the state-space framework is an important general-purpose tool that should be kept in mind [220], and there is extensive support for state-space models in the matlab (``matrix laboratory'') language and its libraries. We will use it mainly as an analytical tool from time to time.
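Following this recipe for the force-driven mass of Eq.$\,$(1.9) gives a four-matrix description that can be stepped directly; the short Python sketch below is for illustration only, with arbitrary values of $ T$ and $ m$, and assumes numpy is available.

import numpy as np

# (A, B, C, D) for the force-driven mass, Eq. (1.9), state [x_n, v_n].
T, m = 0.001, 0.1              # example sampling interval and mass
A = np.array([[1.0, T],
              [0.0, 1.0]])
B = np.array([[0.0],
              [T/m]])
C = np.eye(2)                  # output the whole state vector
D = np.zeros((2, 1))

x = np.zeros((2, 1))           # initial state x_0 = v_0 = 0
for n in range(1000):          # constant 1-N driving force for 1000 steps
    u = np.array([[1.0]])
    y = C @ x + D @ u          # output equation of Eq. (1.8), here y = x
    x = A @ x + B @ u          # state update of Eq. (1.8)
print(x.ravel())               # position grows quadratically, velocity linearly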

As noted earlier, a point mass only requires a first-order model:

$\displaystyle v_{n+1} \eqsp v_n + \frac{T}{m} f_n$

Position $ x_n$ can always be computed from initial position $ x_0$ and velocity $ v_n$ by numerical integration:

$\displaystyle x_{n+1} \eqsp x_n + T\,v_n$

Impulse Response of State Space Models

As derived in Book II [449, Appendix G], the impulse response of the state-space model can be summarized as

$\displaystyle {\mathbf{h}}(n) \eqsp \left\{\begin{array}{ll} D, & n=0 \\ [5pt] CA^{n-1}B, & n>0 \\ \end{array} \right. \protect$ (1.10)

Thus, the $ n$th ``sample'' of the impulse response is given by $ C A^{n-1} B$ for $ n\geq1$. Each such ``sample'' is a $ q\times p$ matrix, in general.

In our force-driven-mass example, we have $ p=q=1$, $ B=[0,T/m]^T$, and $ D=0$. For a position output we have $ C=[1,0]$ while for a velocity output we would set $ C=[0,1]$. Choosing $ C=\mathbf{I}$ simply feeds the whole state vector to the output, which allows us to look at both simultaneously:

$\displaystyle {\mathbf{h}}(n+1) \eqsp \left[\begin{array}{cc} 1 & 0 \\ [2pt] 0 & 1 \end{array}\right] \left[\begin{array}{cc} 1 & T \\ [2pt] 0 & 1 \end{array}\right]^{n} \left[\begin{array}{c} 0 \\ [2pt] T/m \end{array}\right] \eqsp \left[\begin{array}{cc} 1 & nT \\ [2pt] 0 & 1 \end{array}\right] \left[\begin{array}{c} 0 \\ [2pt] T/m \end{array}\right] \eqsp \frac{T}{m}\left[\begin{array}{c} nT \\ [2pt] 1 \end{array}\right] \protect$ (1.11)

Thus, when the input force is a unit pulse, which corresponds physically to imparting momentum $ T$ at time 0 (because the time-integral of force is momentum and the physical area under a unit sample is the sampling interval $ T$), we see that the velocity after time 0 is a constant $ v_n = T/m$, or $ m\,v_n=T$, as expected from conservation of momentum. If the velocity is constant, then the position must grow linearly, as we see that it does: $ x_{n+1} = n\,(T^2/m)$. The finite difference approximation to the time-derivative of $ x(t)$ now gives $ (x_{n+1}-x_n)/T = T/m = v_n$, for $ n\ge0$, which is consistent.
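These closed-form impulse-response samples are easy to confirm numerically; the following sketch (arbitrary example values, numpy assumed) evaluates $ C A^{n-1}B$ directly:

import numpy as np

# Check h(n) = C A^(n-1) B for the force-driven mass with C = I,
# against the closed form: position (n-1)*T^2/m, velocity T/m.
T, m = 0.001, 0.1
A = np.array([[1.0, T], [0.0, 1.0]])
B = np.array([[0.0], [T/m]])

for n in (1, 2, 5, 10):
    h = np.linalg.matrix_power(A, n - 1) @ B
    print(n, h.ravel(), (n - 1)*T*T/m, T/m)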


Zero-Input Response of State Space Models

The response of a state-space model Eq.$ \,$(1.8) to initial conditions, i.e., its initial state $ \underline{x}(0)$, is given by

$\displaystyle \underline{y}_x(n) \eqsp C A^{n}\,\underline{x}(0), \quad n=0,1,2,\ldots\,,$

and the complete response of a linear system is given by the sum of its forced response (such as the impulse response) and its initial-condition response.

In our force-driven mass example, with the external force set to zero, we have, from Eq.$ \,$(1.9) or Eq.$ \,$(1.11),

$\displaystyle \left[\begin{array}{c} x_{n} \\ [2pt] v_{n} \end{array}\right] \eqsp \left[\begin{array}{cc} 1 & T \\ [2pt] 0 & 1 \end{array}\right]^{n} \left[\begin{array}{c} x_0 \\ [2pt] v_0 \end{array}\right] \eqsp \left[\begin{array}{c} x_0+v_0\, n T \\ [2pt] v_0 \end{array}\right].$

Thus, any initial velocity $ v_0$ remains unchanged, as physically expected. The initial position $ x_0$ remains unchanged if the initial velocity is zero. A nonzero initial velocity results in a linearly growing position, as physically expected. This response to initial conditions can be added to any forced response by superposition. The forced response may be computed as the convolution of the input driving force $ f_n$ with the impulse response Eq.$ \,$(1.11).


Transfer Functions

As developed in Book II [449], a discrete-time transfer function is the z transform of the impulse response of a linear, time-invariant (LTI) system. In a physical modeling context, we must specify the input and output signals we mean for each transfer function to be associated with the LTI model. For example, if the system is a simple mass sliding on a surface, the input signal could be an external applied force, and the output could be the velocity of the mass in the direction of the applied force. In systems containing many masses and other elements, there are many possible different input and output signals. It is worth emphasizing that a system can be reduced to a set of transfer functions only in the LTI case, or when the physical system is at least nearly linear and only slowly time-varying (compared with its impulse-response duration).

As we saw in the previous section, the state-space formulation nicely organizes all possible input and output signals in a linear system. Specifically, for inputs, each input signal is multiplied by a ``$ B$ vector'' (the corresponding column of the $ B$ matrix) and added to the state vector; that is, each input signal may be arbitrarily scaled and added to any state variable. Similarly, each state variable may be arbitrarily scaled and added to each output signal via the row of the $ C$ matrix corresponding to that output signal.

Using the closed-form sum of a matrix geometric series (again as detailed in Book II), we may easily calculate the transfer function of the state-space model of Eq.$ \,$(1.8) above as the z transform of the impulse response given in Eq.$ \,$(1.10) above:

$\displaystyle H(z) \eqsp D + C \left(zI - A\right)^{-1}B$

Note that if there are $ p$ inputs and $ q$ outputs, $ H(z)$ is a $ q\times p$ transfer-function matrix (or ``matrix transfer function'').

In the force-driven-mass example of the previous section, defining the input signal as the driving force $ f_n$ and the output signal as the mass velocity $ v_n$, we have $ p=q=1$, $ B=[0,T/m]^T$, $ C=[0,1]$, and $ D=0$, so that the force-to-velocity transfer function is given by

\begin{eqnarray*}
H(z) &=& D + C \left(zI - A\right)^{-1}B\\ [5pt]
&=& \left[\begin{array}{cc} 0 & 1 \end{array}\right] \left[\begin{array}{cc} z-1 & -T \\ [2pt] 0 & z-1 \end{array}\right]^{-1} \left[\begin{array}{c} 0 \\ [2pt] T/m \end{array}\right]\\ [5pt]
&=& \frac{T}{m}\,\frac{z-1}{(z-1)^2} \eqsp \zbox {\frac{T}{m}\frac{z^{-1}}{1-z^{-1}}.}
\end{eqnarray*}

Thus, the force-to-velocity transfer function is a one-pole filter with its pole at $ z=1$ (an integrator). The unit-sample delay in the numerator guards against delay-free loops when this element (a mass) is combined with other elements to build larger filter structures.
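As a numerical sanity check (a sketch with arbitrary example values, assuming numpy), the matrix formula for $ H(z)$ can be evaluated at a few points on the unit circle and compared with $ (T/m)\,z^{-1}/(1-z^{-1})$:

import numpy as np

# H(z) = D + C (zI - A)^{-1} B for the force-driven mass, velocity output.
T, m = 0.001, 0.1
A = np.array([[1.0, T], [0.0, 1.0]])
B = np.array([[0.0], [T/m]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

for w in (0.1, 1.0, 2.0):                    # a few values of omega*T
    z = np.exp(1j*w)
    H_ss = (D + C @ np.linalg.inv(z*np.eye(2) - A) @ B)[0, 0]
    H_ref = (T/m)*(1/z)/(1 - 1/z)
    print(abs(H_ss - H_ref))                 # differences at round-off level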

Similarly, the force-to-position transfer function is a two-pole filter:

\begin{eqnarray*}
H(z) &=& \frac{1}{(z-1)^2}\left[\begin{array}{cc} 1 & 0 \end{array}\right] \left[\begin{array}{cc} z-1 & T \\ [2pt] 0 & z-1 \end{array}\right] \left[\begin{array}{c} 0 \\ [2pt] T/m \end{array}\right]\\ [5pt]
&=& \frac{T^2/m}{(z-1)^2} \eqsp \frac{T^2}{m}\,\frac{z^{-2}}{1-2z^{-1}+z^{-2}}
\end{eqnarray*}

Now we have two poles on the unit circle at $ z=1$, and the impulse response of this filter is a ramp, as already discovered from the previous impulse-response calculation.

Once we have transfer-function coefficients, we can realize any of a large number of digital filter types, as detailed in Book II [449, Chapter 9].


Modal Representation

One of the filter structures introduced in Book II [449, p. 209] was the parallel second-order filter bank, which may be computed from the general transfer function (a ratio of polynomials in $ z$) by means of the Partial Fraction Expansion (PFE) [449, p. 129]:

$\displaystyle H(z) \isdefs \frac{B(z)}{A(z)} \eqsp \sum_{i=1}^{N} \frac{r_i}{1-p_iz^{-1}} \protect$ (1.12)

where

\begin{eqnarray*}
B(z) &=& b_0 + b_1 z^{-1}+ b_2z^{-2}+ \cdots + b_M z^{-M}\\
A(z) &=& 1 + a_1 z^{-1}+ a_2z^{-2}+ \cdots + a_N z^{-N},\quad M<N
\end{eqnarray*}

The PFE Eq.$ \,$(1.12) expands the (strictly proper) transfer function as a parallel bank of (complex) first-order resonators. When the polynomial coefficients $ b_i$ and $ a_i$ are real, complex poles $ p_i$ and residues $ r_i$ occur in conjugate pairs, and these can be combined to form second-order sections [449, p. 131]:

\begin{eqnarray*}
H_i(z) &=& \frac{r_i}{1-p_iz^{-1}} + \frac{\overline{r_i}}{1-\overline{p_i}z^{-1}}\\ [5pt]
&=& \frac{2G_i\,\cos(\phi_i) - 2G_i R_i\,\cos(\phi_i-\theta_i)z^{-1}}{1-2R_i\,\cos(\theta_i)z^{-1}+ R_i^2 z^{-2}}.
\end{eqnarray*}

where $ p_i\isdeftext R_ie^{j\theta_i}$ and $ r_i\isdeftext G_ie^{j\phi_i}$. Thus, every transfer function $ H(z)$ with real coefficients can be realized as a parallel bank of real first- and/or second-order digital filter sections, as well as a parallel FIR branch when $ M\ge N$.

As we will develop in §8.5, modal synthesis employs a ``source-filter'' synthesis model consisting of some driving signal into a parallel filter bank in which each filter section implements the transfer function of some resonant mode in the physical system. Normally each section is second-order, but it is sometimes convenient to use larger-order sections; for example, fourth-order sections have been used to model piano partials in order to have beating and two-stage-decay effects built into each partial individually [30,29].

For example, if the physical system were a row of tuning forks (which are designed to have only one significant resonant frequency), each tuning fork would be represented by a single (real) second-order filter section in the sum. In a modal vibrating string model, each second-order filter implements one ``ringing partial overtone'' in response to an excitation such as a finger-pluck or piano-hammer-strike.
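A minimal modal-synthesis sketch along these lines is shown below in Python; the mode frequencies, decay rates, and amplitudes are made-up example values used only for illustration.

import numpy as np

# Parallel bank of real second-order resonators (one per mode), excited by
# a common unit pulse.  Each section has poles at R*exp(+/- j*theta).
fs = 44100.0
modes = [(440.0, 0.5, 1.0),     # (frequency Hz, decay constant s, amplitude)
         (883.0, 0.4, 0.5),
         (1328.0, 0.3, 0.25)]

excitation = np.zeros(int(fs))  # one second of input
excitation[0] = 1.0             # unit-pulse "pluck"
out = np.zeros_like(excitation)

for f0, tau, amp in modes:
    R = np.exp(-1.0/(tau*fs))                  # pole radius from decay time
    theta = 2.0*np.pi*f0/fs                    # pole angle from mode frequency
    a1, a2 = -2.0*R*np.cos(theta), R*R         # denominator coefficients
    y1 = y2 = 0.0
    for n in range(len(out)):
        y = amp*excitation[n] - a1*y1 - a2*y2  # trivial numerator for brevity
        out[n] += y
        y2, y1 = y1, y
# "out" is now a sum of exponentially decaying sinusoids (a modal tone).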

State Space to Modal Synthesis

The partial fraction expansion works well to create a modal-synthesis system from a transfer function. However, this approach can yield inefficient realizations when the system has multiple inputs and outputs, because in that case, each element of the transfer-function matrix must be separately expanded by the PFE. (The poles are the same for each element, unless they are canceled by zeros, so it is really only the residue calculations that must be carried out for each element.)

If the second-order filter sections are realized in direct-form II or transposed direct-form I (or, more generally, in any form for which the poles effectively precede the zeros), then the poles can be shared among all the outputs for each input, since the pole section of the filter from that input to every output sees the same input signal and therefore carries the same filter state. Similarly, the recursive portion can be shared across all inputs for each output when the filter sections implement their poles after the zeros in series; one can imagine ``pushing'' the identical two-pole filters through the summer used to form the output signal. In summary, when the number of outputs exceeds the number of inputs, the poles are more efficiently implemented before the zeros and shared across all outputs for each input, and vice versa. This paragraph can be summarized symbolically by the following matrix equation:

$\displaystyle \left[\begin{array}{c} y_1 \\ [2pt] y_2 \end{array}\right] \eqsp \left[\begin{array}{cc} B_{11} & B_{12} \\ [2pt] B_{21} & B_{22} \end{array}\right] \left\{\frac{1}{A}\left[\begin{array}{c} u_1 \\ [2pt] u_2 \end{array}\right]\right\}$

What may not be obvious when working with transfer functions alone is that it is possible to share the poles across all of the inputs and outputs. How? Simply diagonalize a state-space model by means of a similarity transformation [449, p. 360]. This will be discussed a bit further in §8.5. In a diagonalized state-space model, the $ A$ matrix is diagonal. The $ B$ matrix provides routing and scaling for all the input signals driving the modes. The $ C$ matrix forms the appropriate linear combination of modes for each output signal. If the original state-space model is a physical model, then the transformed system gives a parallel filter bank that is excited from the inputs and observed at the outputs in a physically correct way.


Force-Driven-Mass Diagonalization Example

To diagonalize our force-driven mass example, we may begin with its state-space model Eq.$ \,$(1.9):

$\displaystyle \left[\begin{array}{c} x_{n+1} \\ [2pt] v_{n+1} \end{array}\right] \eqsp \left[\begin{array}{cc} 1 & T \\ [2pt] 0 & 1 \end{array}\right] \left[\begin{array}{c} x_n \\ [2pt] v_n \end{array}\right] + \left[\begin{array}{c} 0 \\ [2pt] T/m \end{array}\right] f_n, \quad n=0,1,2,\ldots$

which is in the general state-space form $ \underline{x}(n+1) = A\, \underline{x}(n) + B\,\underline{u}(n)$ as needed (Eq.$ \,$(1.8)). We can see that $ A$ is already a Jordan block of order 2 [449, p. 368]. (We can change the $ T$ to 1 by scaling the physical units of $ x_2(n)$.) Thus, the system is already as diagonal as it's going to get. We have a repeated pole at $ z=1$, and the two poles are effectively in series (instead of parallel), thus giving a ``defective'' $ A$ matrix [449, p. 136].


Typical State-Space Diagonalization Procedure

As discussed in [449, p. 362] and exemplified in §C.17.6, to diagonalize a system, we must find the eigenvectors of $ A$ by solving

$\displaystyle A\underline{e}_i = \lambda_i \underline{e}_i$

for $ \underline{e}_i$, $ i=1,\ldots,N$, where $ \lambda_i$ is simply the $ i$th pole (eigenvalue of $ A$). The $ N$ eigenvectors $ \underline{e}_i$ are collected into a similarity transformation matrix:

$\displaystyle E= \left[\begin{array}{cccc} \underline{e}_1 & \underline{e}_2 & \cdots & \underline{e}_N \end{array}\right]$

If there are coupled repeated poles, the corresponding missing eigenvectors can be replaced by generalized eigenvectors. The $ E$ matrix is then used to diagonalize the system by means of a simple change of coordinates:

$\displaystyle \underline{x}(n) \isdef E\, \tilde{\underline{x}}(n)$

The new diagonalized system is then
$\displaystyle \tilde{\underline{x}}(n+1) \eqsp \tilde{A}\, \tilde{\underline{x}}(n) + {\tilde B}\, \underline{u}(n)$

$\displaystyle \underline{y}(n) \eqsp {\tilde C}\, \tilde{\underline{x}}(n) + {\tilde D}\,\underline{u}(n), \protect$ (1.13)

where

$\displaystyle \tilde{A} \eqsp E^{-1}A E, \qquad {\tilde B} \eqsp E^{-1}B, \qquad {\tilde C} \eqsp C E, \qquad {\tilde D} \eqsp D. \protect$ (1.14)

The transformed system describes the same system as in Eq.$ \,$(1.8) relative to new state-variable coordinates $ \tilde{\underline{x}}(n)$. For example, it can be checked that the transfer-function matrix is unchanged.
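For a diagonalizable example, numpy can carry out Eq.$\,$(1.14) directly. (The force-driven mass above is defective, so the sketch below instead uses a crude damped mass-spring discretization with arbitrary example parameters; it is an illustration only.)

import numpy as np

# Diagonalize A by a similarity transform (Eq. (1.14)) and verify that the
# transfer function is unchanged at one test frequency.
T, m, k, mu = 0.001, 0.1, 50.0, 0.2
A = np.array([[1.0,     T],
              [-T*k/m,  1.0 - T*mu/m]])
B = np.array([[0.0], [T/m]])
C = np.array([[1.0, 0.0]])        # position output

lam, E = np.linalg.eig(A)         # poles and eigenvectors
Einv = np.linalg.inv(E)
A_t, B_t, C_t = Einv @ A @ E, Einv @ B, C @ E   # "tilde" matrices

z = np.exp(1j*0.3)
H1 = C   @ np.linalg.inv(z*np.eye(2) - A)   @ B
H2 = C_t @ np.linalg.inv(z*np.eye(2) - A_t) @ B_t
print(np.allclose(H1, H2))        # True: same transfer function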


Efficiency of Diagonalized State-Space Models

Note that a general $ N$th-order state-space model Eq.$ \,$(1.8) requires around $ N^2$ multiply-adds to update for each time step (assuming the number of inputs and outputs is small compared with the number of state variables, in which case the $ A\underline{x}(n)$ computation dominates). After diagonalization by a similarity transform, the time update is only order $ N$, just like any other efficient digital filter realization. Thus, a diagonalized state-space model (modal representation) is a strong contender for applications in which it is desirable to have independent control of resonant modes.

Another advantage of the modal expansion is that frequency-dependent characteristics of hearing can be brought to bear. Low-frequency resonances can easily be modeled more carefully and in more detail than very high-frequency resonances which tend to be heard only ``statistically'' by the ear. For example, rows of high-frequency modes can be collapsed into more efficient digital waveguide loops (§8.5) by retuning them to the nearest harmonic mode series.


Equivalent Circuits

The concepts of ``circuits'' and ``ports'' from classical circuit/network theory [35] are very useful for partitioning complex systems into self-contained sections having well-defined (small) interfaces. For example, it is typical in analog electric circuit design to drive a high-input-impedance stage from a low-output-impedance stage (a so-called ``voltage transfer'' connection). This large impedance ratio allows us to neglect ``loading effects'' so that the circuit sections (stages) can be analyzed separately.

The name ``analog circuit'' refers to the fact that electrical capacitors (denoted $ C$) are analogous to physical springs, inductors ($ L$) are analogous to physical masses, and resistors ($ R$) are analogous to ``dashpots'' (which are idealized physical devices for which compression velocity is proportional to applied force--much like a shock-absorber (``damper'') in an automobile suspension). These are all called lumped elements to distinguish them from distributed parameters such as the capacitance and inductance per unit length in an electrical transmission line. Lumped elements are described by ODEs while distributed-parameter systems are described by PDEs. Thus, RLC analog circuits can be constructed as equivalent circuits for lumped dashpot-mass-spring systems. These equivalent circuits can then be digitized by finite difference or wave digital methods. PDEs describing distributed-parameter systems can be digitized via finite difference methods as well, or, when wave propagation is the dominant effect, digital waveguide methods.

As discussed in Chapter 7 (§7.2), the equivalent circuit for a force-driven mass is shown in Fig.F.10. The mass $ m$ is represented by an inductor $ L=m$. The driving force $ f(t)$ is supplied via a voltage source, and the mass velocity $ v(t)$ is the loop current.

Figure: Electrical equivalent circuit of the force-driven mass in Fig.1.6.
\includegraphics{eps/forcemassecCopy}

As also discussed in Chapter 7 (§7.2), if two physical elements are connected in such a way that they share a common velocity, then they are said to be formally connected in series. The ``series'' nature of the connection becomes more clear when the equivalent circuit is considered.

Figure 1.9: A mass and spring connected in series.
\includegraphics{eps/lseriesExampleCopy}

For example, Fig.1.9 shows a mass connected to one end of a spring, with the other end of the spring attached to a rigid wall. The driving force $ f_{\mbox{ext}}(t)$ is applied to the mass $ m$ on the left so that a positive force results in a positive mass displacement $ x_m(t)$ and a positive spring displacement (compression) $ x_k(t)$. Since the mass and spring displacements are physically the same, we can define $ x_m=x_k\isdeftext x$. Their velocities are similarly equal, so that $ v_m=v_k\isdeftext v$. The equivalent circuit has their electrical analogs connected in series, as shown in Fig.1.10. The common mass and spring velocity $ v(t)$ appears as a single current running through the inductor (mass) and capacitor (spring).

Figure: Electrical equivalent circuit of the series mass-spring driven by an external force diagrammed in Fig.1.9.
\includegraphics{eps/lseriesecCopy}

By Kirchhoff's loop law for circuit analysis, the sum of all voltages around a loop equals zero. Thus, following the direction of the current $ v$ in Fig.1.10, we have $ f_m+f_k-f_{\mbox{ext}}=0$ (the minus sign on $ f_{\mbox{ext}}$ arises because the loop current enters its negative terminal), or

$\displaystyle f_{\mbox{ext}}(t) \eqsp f_m(t)+f_k(t).$

Thus, the equivalent circuit agrees with our direct physical analysis that the applied force $ f_{\mbox{ext}}(t)$ is equal at all times to the sum of the mass inertial force $ f_m(t) = m\ddot{x}(t)$ and spring force $ f_k(t) = k x(t)$, i.e.,

$\displaystyle f_{\mbox{ext}}(t) \eqsp m\ddot{x}(t) + k x(t).$


Impedance Networks

The concept of impedance is central in classical electrical engineering. The simplest case is Ohm's Law for a resistor $ R$:

$\displaystyle V(t) \eqsp R\, I(t)$

where $ V(t)$ denotes the voltage across the resistor at time $ t$, and $ I(t)$ is the current through the resistor. For the corresponding mechanical element, the dashpot, Ohm's law becomes

$\displaystyle f(t) \eqsp \mu\, v(t)$

where $ f(t)$ is the force across the dashpot at time $ t$, and $ v(t)$ is its compression velocity. The dashpot coefficient $ \mu $ is thus a mechanical resistance.

Thanks to the Laplace transform [449] (or Fourier transform [451]), the concept of impedance easily extends to masses and springs as well. We need only allow impedances to be frequency-dependent. For example, the Laplace transform of Newton's $ f=ma$ yields, using the differentiation theorem for Laplace transforms [449],

$\displaystyle F(s) \eqsp m\, A(s) \eqsp m\, sV(s) \eqsp m\, s^2X(s)$

where $ F(s)$ denotes the Laplace transform of $ f(t)$ (`` $ F(s) = {\cal L}_s\{f\}$''), and similarly for the displacement, velocity, and acceleration Laplace transforms. (It is assumed that all initial conditions are zero, i.e., $ f(0)=x(0)=v(0)=a(0)=0$.) The mass impedance is therefore

$\displaystyle R_m(s) \isdefs \frac{F(s)}{V(s)} \eqsp ms.$

Specializing the Laplace transform to the Fourier transform by setting $ s=j\omega$ gives

$\displaystyle R_m(j\omega) \eqsp jm\omega.$

Similarly, the impedance of a spring having spring-constant $ k$ is given by

\begin{eqnarray*}
R_k(s) &=& \frac{k}{s}\\ [5pt]
R_k(j\omega) &=& \frac{k}{j\omega}.
\end{eqnarray*}

The important benefit of this frequency-domain formulation of impedance is that it allows every interconnection of masses, springs, and dashpots (every RLC equivalent circuit) to be treated as a simple resistor network, parametrized by frequency.

Figure: Impedance diagram for the force-driven, series arrangement of mass and spring shown in Fig.1.9.
\includegraphics{eps/lseriesidCopy}

As an example, Fig.1.11 gives the impedance diagram corresponding to the equivalent circuit in Fig.1.10. Viewing the circuit as a (frequency-dependent) resistor network, it is easy to write down, say, the Laplace transform of the force across the spring using the voltage divider formula:

$\displaystyle F_k(s) \eqsp F_{\mbox{ext}}(s)\, \frac{R_k(s)}{R_m(s)+R_k(s)} \eqsp F_{\mbox{ext}}(s)\,\frac{k/m}{s^2+k/m}.$
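For example, the magnitude of this force-transfer ratio is easily evaluated over frequency; the sketch below uses arbitrary parameter values and assumes numpy:

import numpy as np

# F_k(jw)/F_ext(jw) = R_k/(R_m + R_k) = (k/m)/((jw)^2 + k/m)
# for the series mass-spring of Fig. 1.9 (example values only).
m, k = 0.1, 50.0
w = np.array([1.0, 10.0, 20.0, 30.0, 100.0])   # radian frequencies
s = 1j*w
Rm = m*s            # mass impedance  (m s)
Rk = k/s            # spring impedance (k/s)
H = Rk/(Rm + Rk)    # force "voltage divider"
print(np.abs(H))    # grows without bound near w0 = sqrt(k/m) ~ 22.4 rad/s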

These sorts of equivalent-circuit and impedance-network models of mechanical systems, and their digitization to digital-filter form, are discussed further in Chapter 7.


Wave Digital Filters

The idea of wave digital filters is to digitize RLC circuits (and certain more general systems) as follows:

  1. Determine the ODEs describing the system (PDEs also workable).

  2. Express all physical quantities (such as force and velocity) in terms of traveling-wave components. The traveling wave components are called wave variables. For example, the force $ f(n)$ on a mass is decomposed as $ f(n) = f^{{+}}(n)+f^{{-}}(n)$, where $ f^{{+}}(n)$ is regarded as a traveling wave propagating toward the mass, while $ f^{{-}}(n)$ is seen as the traveling component propagating away from the mass. A ``traveling wave'' view of force mediation (at the speed of light) is actually much closer to underlying physical reality than any instantaneous model.

  3. Next, digitize the resulting traveling-wave system using the bilinear transform (§7.3.2, [449, p. 386]). The bilinear transform is equivalent in the time domain to the trapezoidal rule for numerical integration (§7.3.2).

  4. Connect $ N$ elementary units together by means of $ N$-port scattering junctions. There are two basic types of scattering junction, one for parallel, and one for series connection. The theory of scattering junctions is introduced in the digital waveguide context (§C.8).
A more detailed introduction to WDFs is provided in Appendix F. In particular, the force-driven mass is considered in §F.3.4, and its wave digital model is shown in Fig.1.12.

Figure 1.12: Wave digital mass driven by external force $ f(n)$.
\includegraphics{eps/forcemasswdfCopy}

We will not make much use of WDFs in this book, preferring instead more prosaic finite-difference models for simplicity. However, we will utilize closely related concepts in the digital waveguide modeling context (Chapter 6).


Digital Waveguide Modeling Elements

As mentioned above, digital waveguide models are built out of digital delay-lines and filters (and nonlinear elements), and they can be understood as propagating and filtering sampled traveling-wave solutions to the wave equation (PDE), such as for air, strings, rods, and the like [433,437]. It is noteworthy that strings, woodwinds, and brasses comprise three of the four principal sections of a classical orchestra (all but percussion). The digital waveguide modeling approach has also been extended to propagation in 2D, 3D, and beyond [518,396,522,400]. Digital waveguides are not finite-difference models, but paradoxically they are equivalent to them under certain conditions (Appendix E). A summary of historical aspects appears in §A.9.

As mentioned at Eq.$ \,$(1.1), the ideal wave equation comes directly from Newton's laws of motion ($ f=ma$). For example, in the case of vibrating strings, the wave equation is derived from first principles (in Chapter 6, and more completely in Appendix C) to be

\begin{eqnarray*}
Ky''&=& \epsilon {\ddot y}\\ [5pt]
\mbox{(Restoring Force Density} &=& \mbox{Mass Density times Acceleration)},
\end{eqnarray*}

where

\begin{eqnarray*}
K& \isdef & \mbox{string tension}\\
\epsilon & \isdef & \mbox{linear mass density}.
\end{eqnarray*}

Defining $ c=\sqrt{K/\epsilon }$, we obtain the usual form of the PDE known as the ideal 1D wave equation.

$\displaystyle y''= \frac{1}{c^2}{\ddot y} \protect$ (1.15)

where $ y(t,x)$ is the string displacement at time $ t$ and position $ x$. (We omit the time and position arguments $ (t,x)$ when they are the same for all signal terms in the equation.) For example, $ y$ can be the transverse displacement of an ideal stretched string or the longitudinal displacement (or pressure, velocity, etc.) in an air column. The independent variables are time $ t$ and the distance $ x$ along the string or air-column axis. The partial-derivative notation is more completely written out as

\begin{eqnarray*}
{\ddot y}& \isdef & \frac{\partial^2}{\partial t^2} y(t,x)\\ [5pt]
y''& \isdef & \frac{\partial^2}{\partial x^2} y(t,x).
\end{eqnarray*}

As has been known since d'Alembert [100], the 1D wave equation is obeyed by arbitrary traveling waves at speed $ c$:

$\displaystyle y(t,x) \eqsp y_r(t-x/c) + y_l(t+x/c)$

To show this, just plug $ y_r(t-x/c)$ or $ y_l(t+x/c)$ (or any linear combination of them) into the wave equation Eq.$ \,$(1.15). Thus, $ c=\sqrt{K/\epsilon }$ is the traveling-wave propagation speed expressed in terms of the string tension $ K$ and mass density $ \epsilon $.
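Carrying out the substitution for the right-going component, for example, the chain rule gives

$\displaystyle \frac{\partial^2}{\partial x^2}\, y_r(t-x/c) \eqsp \frac{1}{c^2}\, y_r''(t-x/c), \qquad \frac{\partial^2}{\partial t^2}\, y_r(t-x/c) \eqsp y_r''(t-x/c),$

so that $ y''=(1/c^2){\ddot y}$ holds; the same holds for $ y_l(t+x/c)$ and hence, by linearity, for any sum of the two.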

In digital waveguide modeling, the traveling-waves are sampled:

\begin{eqnarray*}
y(nT,mX) &=& y_r(nT-mX/c) + y_l(nT+mX/c)\qquad \mbox{(set $X=cT$)}\\ [5pt]
&=& y_r(nT-mT) + y_l(nT+mT)\\ [5pt]
&\isdef & y^{+}(n-m) + y^{-}(n+m)
\end{eqnarray*}

where $ T$ denotes the time sampling interval in seconds, $ X=cT$ denotes the spatial sampling interval in meters, and $ y^{+}$ and $ y^{-}$ are defined for notational convenience.

An ideal string (or air column) can thus be simulated using a bidirectional delay line, as shown in Fig.1.13 for the case of an $ N$-sample section of ideal string or air column. The ``$ R$'' label denotes its wave impedance (§6.1.5), which is needed when connecting digital waveguides to each other and to other kinds of computational physical models (such as finite difference schemes). While the propagation speed on an ideal string is $ c=\sqrt{K/\epsilon }$, we will derive (§C.7.3) that the wave impedance is $ R=\sqrt{K\epsilon }$.

Figure 1.13: A digital waveguide--a sampled traveling-wave simulation for waves in ideal strings or acoustic tubes having wave impedance $ R$.
\includegraphics{eps/BidirectionalDelayLineCopy2}

Figure 1.14 (from Chapter 6, §6.3) illustrates a simple digital waveguide model for rigidly terminated vibrating strings (more specifically, one polarization-plane of transverse vibration). The traveling-wave components are taken to be displacement samples, but the diagrams for velocity-wave and acceleration-wave simulation are identical (inverting reflection at each rigid termination). The output signal $ y(nT,\xi)$ is formed by summing traveling-wave components at the desired ``virtual pickup'' location (position $ x=\xi $ in this example). To drive the string at a particular point, one simply takes the transpose [449] of the output sum, i.e., the input excitation is summed equally into the left- and right-going delay-lines at the same $ x$ position (details will be discussed near Fig.6.14).

Figure 1.14: Digital waveguide model of a rigidly terminated ideal string, with a displacement output indicated at position $ x=\xi $. Rigid terminations reflect traveling displacement, velocity, or acceleration waves with a sign inversion. Slope or force waves reflect with no sign inversion.
\includegraphics[width=\twidth]{eps/fterminatedstringCopy}
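The structure of Fig.1.14 is easy to prototype. Below is a minimal Python sketch (an illustration with arbitrary sizes and positions, not a complete instrument model) of a rigidly terminated ideal string built from two delay lines with inverting end reflections:

import numpy as np

# Rigidly terminated digital waveguide string: two N-sample delay lines
# holding right- and left-going displacement waves, with sign-inverting
# reflections at both ends.  All values are arbitrary examples.
N = 50                          # samples of delay per direction
right = np.zeros(N)             # right-going traveling-wave samples
left  = np.zeros(N)             # left-going traveling-wave samples

pluck_pos, pickup_pos = 10, 35
right[pluck_pos] += 0.5         # crude "pluck": split equally between
left[pluck_pos]  += 0.5         # the two directions of travel

out = []
for n in range(2000):
    out.append(right[pickup_pos] + left[pickup_pos])  # displacement output
    end_right = right[-1]       # sample arriving at the right termination
    end_left  = left[0]         # sample arriving at the left termination
    right = np.roll(right, 1)   # propagate right-going wave one sample
    left  = np.roll(left, -1)   # propagate left-going wave one sample
    right[0]  = -end_left       # inverting reflection at the left end
    left[-1]  = -end_right      # inverting reflection at the right end
# "out" repeats with period 2N samples, i.e., at frequency fs/(2N).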

In Chapter 9 (example applications), we will discuss digital waveguide models for single-reed instruments such as the clarinet (Fig.1.15), and bowed-string instruments (Fig.1.16) such as the violin.

Figure: Digital waveguide model of a single-reed, cylindrical-bore woodwind, such as a clarinet (copy of Fig.9.39).
\includegraphics[width=\twidth]{eps/fSingleReedWGMCopy}

Figure: Digital waveguide model for a bowed-string instrument, such as a violin (copy of Fig.9.52).
\includegraphics[width=\twidth]{eps/fBowedStringsWGMCopy}


General Modeling Procedure

While each situation tends to have special opportunities, the following procedure generally works well:

  1. Formulate a state-space model.
  2. If it is nonlinear, use numerical time-integration. (There are many other descriptions of nonlinear systems to consider, some of which will be noted as we proceed.)
  3. In the linear case, diagonalize the state-space model to obtain the modal representation.

