
Lumped Models

This chapter introduces modeling of ``lumped'' physical systems, such as configurations of masses, springs, and ``dashpots''.

The term ``lumped'' comes from electrical engineering, and refers to lumped-parameter analysis, as opposed to distributed-parameter analysis. Examples of ``distributed'' systems in musical acoustics include ideal strings, acoustic tubes, and anything that propagates waves. In general, a lumped-parameter approach is appropriate when the physical object has dimensions that are small relative to the wavelength of vibration. Examples from musical acoustics include brass-players' lips (modeled using one or two masses attached to springs--see §9.7), and the piano hammer (modeled using a mass and nonlinear spring, as discussed in §9.4). In contrast to these lumped-modeling examples, the vibrating string is most efficiently modeled as a sampled distributed-parameter system, as discussed in Chapter 6, although lumped models of strings (using, e.g., a mass-spring-chain [318]) work perfectly well, albeit at a higher computational expense for a given model quality [69,145]. In the realm of electromagnetism, distributed-parameter systems include electric transmission lines and optical waveguides, while the typical lumped-parameter systems are ordinary RLC circuits (connecting resistors, capacitors, and inductors). Again, whenever the oscillation wavelength is large relative to the geometry of the physical component, a lumped approximation may be considered. As a result, there is normally a high-frequency limit on the validity of a lumped-parameter model. For the same reason, there is normally an upper limit on physical size as well.

We begin with the fundamental concept of impedance, and discuss the elementary lumped impedances associated with springs, masses, and dashpots. These physical objects are analogous to capacitors, inductors, and resistors in lumped-parameter electrical circuits. Next, we discuss general interconnections of such elements, characterized at a single input/output location by means of one-port network theory. In particular, we will see that all passive networks present a positive real impedance at any port (input/output point). A network diagram may be replaced by an impedance diagram, which may then be translated into its equivalent circuit (replacing springs by capacitors, masses by inductors, and dashpots by resistors).

In the following chapter, we discuss digitization of lumped networks by various means, including finite differences and the bilinear transformation.


Impedance is defined for mechanical systems as force divided by velocity, while the inverse (velocity/force) is called an admittance. For dynamic systems, the impedance of a ``driving point'' is defined for each frequency $ \omega $, so that the ``force'' in the definition of impedance is best thought of as the peak amplitude of a sinusoidal applied force, and similarly for the velocity. Thus, if $ F(\omega)$ denotes the Fourier transform of the applied force at a driving point, and $ V(\omega)$ is the Fourier transform of the resulting velocity of the driving point, then the driving-point impedance is given by

$\displaystyle R(\omega) \isdef \frac{F(\omega)}{V(\omega)}.$

In the lossless case (no dashpots, only masses and springs), all driving-point impedances are purely imaginary, and a purely imaginary impedance is called a reactance. A purely imaginary admittance is called a susceptance. The term immittance refers to either an impedance or an admittance [35]. In mechanics, force is typically in units of newtons (kilograms times meters per second squared) and velocity is in meters per second.

In acoustics [317,318], force takes the form of pressure (e.g., in physical units of newtons per meter squared), and velocity may be either particle velocity in open air (meters per second) or volume velocity in acoustic tubes (meters cubed per second) (see §B.7.1 for definitions). The wave impedance (also called the characteristic impedance) in open air is the ratio of pressure to particle velocity in a sound wave traveling through air, and it is given by $ R=\sqrt{\gamma P_0\rho} = \rho c$, where $ \rho$ is the density (mass per unit volume) of air, $ c$ is the speed of sound propagation, $ P_0$ is ambient pressure, and $ \gamma = 1.4$ is the ratio of the specific heat of air at constant pressure to that at constant volume. In a vibrating string, the wave impedance is given by $ R= \sqrt{K\epsilon}=\epsilon c$, where $ \epsilon $ is string density (mass per unit length) and $ K$ is the tension of the string (stretching force), as discussed further in §C.1 and §B.5.2.

In circuit theory [110], force takes the form of electric potential in volts, and velocity manifests as electric current in amperes (coulombs per second). In an electric transmission line, the characteristic impedance is given by $ R=\sqrt{L/C}=Lc$ where $ L$ and $ C$ are the inductance and capacitance, respectively, per unit length along the transmission line. In free space, the wave impedance for light is $ R=\sqrt{\mu_0/\epsilon_0} =\mu_0 c$, where $ \mu_0$ and $ \epsilon_0$ are the permeability and permittivity, respectively, of free space. One might be led from this to believe that there must exist a medium, or `ether', which sustains wave propagation in free space; however, this is one instance in which ``obvious'' predictions from theory turn out to be wrong.
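To make the physical scales concrete, the wave-impedance formulas above are easily evaluated numerically. The following sketch uses nominal room-temperature values for air and hypothetical string parameters; these particular numbers are illustrative assumptions, not values from the text:

```python
import math

def air_wave_impedance(rho=1.2, c=343.0):
    """Characteristic impedance of air, R = rho * c, in Pa*s/m.
    Defaults are nominal room-temperature values (assumed here)."""
    return rho * c

def string_wave_impedance(K, epsilon):
    """Wave impedance of an ideal string, R = sqrt(K * epsilon) = epsilon * c,
    for tension K (newtons) and linear density epsilon (kg/m)."""
    return math.sqrt(K * epsilon)

R_air = air_wave_impedance()                 # about 412 Pa*s/m
R_str = string_wave_impedance(740.0, 0.0063)  # hypothetical string values
```

Note that $ \sqrt{K\epsilon} = \epsilon\sqrt{K/\epsilon} = \epsilon c$, so the two forms of the string wave impedance agree by construction.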


The elementary impedance element in mechanics is the dashpot, which may be approximated mechanically by a plunger in a cylinder of air or liquid, analogous to a shock absorber for a car. A constant impedance means that the velocity produced is always linearly proportional to the force applied, or $ f(t) = \mu v(t)$, where $ \mu $ is the dashpot impedance, $ f(t)$ is the applied force at time $ t$, and $ v(t)$ is the velocity. A diagram is shown in Fig. 7.1.

Figure 7.1: The ideal dashpot characterized by a constant impedance $ \mu $. For all applied forces $ f(t)$, the resulting velocity $ v(t)$ obeys $ f(t) = \mu v(t)$.

In circuit theory, the element analogous to the dashpot is the resistor $ R$, characterized by $ v(t) = R i(t)$, where $ v$ is voltage and $ i$ is current. In an analog equivalent circuit, a dashpot can be represented using a resistor $ R = \mu$.

Over a specific velocity range, friction force can also be characterized by the relation $ f(t) = \mu v(t)$. However, friction is very complicated in general [419], and as the velocity goes to zero, the coefficient of friction $ \mu $ may become much larger. The simple model often presented is to use a static coefficient of friction when starting at rest ($ v(t)=0$) and a dynamic coefficient of friction when in motion ( $ v(t)\neq 0$). However, these models are too simplified for many practical situations in musical acoustics, e.g., the frictional force between the bow and string of a violin [308,549], or the internal friction losses in a vibrating string [73].

Ideal Mass

Figure: The ideal mass characterized by $ f(t) = m \protect\dot v(t) = m{\ddot x}(t)$.

The concept of impedance extends also to masses and springs. Figure 7.2 illustrates an ideal mass of $ m$ kilograms sliding on a frictionless surface. From Newton's second law of motion, we know force equals mass times acceleration, or

$\displaystyle f(t) = m a(t) \isdef m \dot v(t) \isdef m \ddot x(t).$

Since impedance is defined in terms of force and velocity, we will prefer the form $ f(t) = m \dot v(t)$. By the differentiation theorem for Laplace transforms [284],8.1 we have

$\displaystyle F(s) = m [s V(s) - v(0)].$

If we assume the initial velocity of the mass is zero, we have

$\displaystyle F(s) = m s V(s),$

and the impedance $ F(s)/V(s)$ of the mass in the frequency domain is simply

$\displaystyle R_m(s) \isdef m s.$

The admittance of a mass $ m$ is therefore

$\displaystyle \Gamma_m(s) \isdef \frac{1}{ms}.$

This is the transfer function of an integrator. Thus, an ideal mass integrates the applied force (divided by $ m$) to produce the output velocity. This is just a ``linear systems'' way of saying force equals mass times acceleration.

Since we normally think of an applied force as an input and the resulting velocity as an output, the corresponding transfer function is $ H(s) = \Gamma(s) = V(s)/F(s)$. The system diagram for this view is shown in Fig. 7.3.

The impulse response of a mass, for a force input and velocity output, is defined as the inverse Laplace transform of the transfer function:

$\displaystyle \gamma_m(t) \isdef {\cal L}^{-1}\left\{\Gamma_m(s)\right\} = \frac{1}{m}u(t).$

In this instance, setting the input to $ \delta(t)$ corresponds to transferring a unit momentum to the mass at time 0. (Recall that momentum is the integral of force with respect to time.) Since momentum is also equal to mass $ m$ times its velocity $ v(t)$, it is clear that the unit-momentum velocity must be $ v(t)=1/m$.

Figure 7.3: Input/output description of a general impedance, with force $ F(s)$ as the input, velocity $ V(s)$ as the output, and admittance $ \Gamma (s)$ as the transfer function.

Once the input and output signal are defined, a transfer function is defined, and therefore a frequency response is defined [485]. The frequency response is given by the transfer function evaluated on the $ j\omega $ axis in the $ s$ plane, i.e., for $ s=j\omega$. For the ideal mass, the force-to-velocity frequency response is

$\displaystyle \Gamma_m(j\omega) = \frac{1}{m j\omega}.$

Again, this is just the frequency response of an integrator, and we can say that the amplitude response rolls off $ -6$ dB per octave, and the phase shift is $ -\pi /2$ radians at all frequencies.

In circuit theory, the element analogous to the mass is the inductor, characterized by $ v(t) = L di/dt$, or $ V(s) = Ls I(s)$. In an analog equivalent circuit, a mass can be represented using an inductor with value $ L=m$.

Ideal Spring

Figure 7.4 depicts the ideal spring.

Figure 7.4: The ideal spring characterized by $ f(t) = k x(t)$.

From Hooke's law, we have that the applied force is proportional to the displacement of the spring:

$\displaystyle f(t) \eqsp k\, x(t) \isdefs k \int_0^t v(\tau)\,d\tau$

where it is assumed that $ x(0) = 0$. The spring constant $ k$ is sometimes called the stiffness of the spring. Taking the Laplace transform gives

$\displaystyle F(s) = k V(s)/s$

so that the impedance of a spring is

$\displaystyle R_k(s) \isdef \frac{k}{s}$

and the admittance is

$\displaystyle \Gamma_k(s) \isdef \frac{s}{k}.$

This is the transfer function of a differentiator. We can say that the ideal spring differentiates the applied force (divided by $ k$) to produce the output velocity.

The frequency response of the ideal spring, given the applied force as input and resulting velocity as output, is

$\displaystyle \Gamma_k(j\omega) = \frac{j\omega}{k}.$

In this case, the amplitude response grows $ +6$ dB per octave, and the phase shift is $ +\pi /2$ radians for all $ \omega $. Clearly, there is no such thing as an ideal spring which can produce arbitrarily large gain as frequency goes to infinity; there is always some mass in a real spring.

We call $ v(t)$ the compression velocity of the spring. In more complicated configurations, the compression velocity is defined as the difference between the velocity of the two spring endpoints, with positive velocity corresponding to spring compression.

In circuit theory, the element analogous to the spring is the capacitor, characterized by $ i(t) = C dv/dt$, or $ I(s) = Cs V(s)$. In an equivalent analog circuit, we can use the value $ C = 1/k$. The inverse $ 1/k$ of the spring stiffness is sometimes called the compliance of the spring.

Don't forget that the definition of impedance requires zero initial conditions for elements with ``memory'' (masses and springs). This means we can only use impedance descriptions for steady state analysis. For a complete analysis of a particular system, including the transient response, we must go back to full scale Laplace transform analysis.

One-Port Network Theory

The basic idea of a one-port network [524] is shown in Fig. 7.5. The one-port is a ``black box'' with a single pair of input/output terminals, referred to as a port. A force is applied at the terminals and a velocity ``flows'' in the direction shown. The admittance ``seen'' at the port is called the driving point admittance. Network theory is normally described in terms of circuit theory elements, in which case a voltage is applied at the terminals and a current flows as shown. However, in our context, mechanical elements are preferable.

Figure 7.5: A one-port network characterized by its driving point admittance $ \Gamma (s)$. For any applied force $ F(s)$, the observed velocity is $ V(s) = \Gamma (s)F(s)$.

Series Combination of One-Ports

Figure 7.6 shows the series combination of two one-ports.

Figure 7.6: Two one-port networks combined in series. The impedance of the series combination is $ R(s) = F(s)/V(s) = R_1(s) + R_2(s)$.

Impedances add in series, so the aggregate impedance is $ R(s) = R_1(s) + R_2(s)$, and the admittance is

$\displaystyle \Gamma(s) = \frac{1}{\frac{1}{\Gamma_1(s)} + \frac{1}{\Gamma_2(s)}} = \frac{\Gamma_1(s) \Gamma_2(s)}{\Gamma_1(s) + \Gamma_2(s)}.$

The latter expression is the handy product-over-sum rule for combining admittances in series.

Mass-Spring-Wall System

In a physical situation, if two elements are connected in such a way that they share a common velocity, then they are in series. An example is a mass connected to one end of a spring, where the other end is attached to a rigid support, and the force is applied to the mass, as shown in Fig. 7.7.

Figure 7.7: A mass and spring combined as one-ports in series.

Figure 7.8 shows the electrical equivalent circuit corresponding to Fig.7.7.

Figure: Electrical equivalent circuit of the series mass-spring driven by an external force diagrammed in Fig.7.7.

Figure: Impedance diagram for the force-driven, series arrangement of mass and spring shown in Fig.7.7.

Parallel Combination of One-Ports

Figure 7.10 shows the parallel combination of two one-ports.

Figure 7.10: Two one-port networks combined in parallel. The admittance of the parallel combination is $ \Gamma (s) = \Gamma _1(s) + \Gamma _2(s)$.

Admittances add in parallel, so the combined admittance is $ \Gamma (s) = \Gamma _1(s) + \Gamma _2(s)$, and the impedance is

$\displaystyle R(s) = \frac{1}{\frac{1}{R_1(s)} + \frac{1}{R_2(s)}} = \frac{R_1(s) R_2(s)}{R_1(s) + R_2(s)}$

which is the familiar product-over-sum rule for combining impedances in parallel. This operation is often denoted by

$\displaystyle R= R_1 \vert\vert R_2.$
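The product-over-sum rule is a one-liner in code. The following sketch (with hypothetical values for $ m$, $ k$, and $ \omega$) checks that the parallel impedance agrees with summing the admittances:

```python
def par(R1, R2):
    """Product-over-sum (parallel) combination: R = R1 || R2."""
    return (R1 * R2) / (R1 + R2)

# Hypothetical mass m and spring k evaluated at s = j*omega:
m, k, omega = 0.5, 200.0, 30.0
s = 1j * omega
Rm = m * s          # mass impedance   R_m(s) = m s
Rk = k / s          # spring impedance R_k(s) = k / s
R = par(Rm, Rk)     # driving-point impedance of the parallel combination
```

Since admittances add in parallel, `1/R` should equal `1/Rm + 1/Rk`, which is a quick consistency check on the rule.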

Spring-Mass System

When two physical elements are driven by a common force (yet have independent velocities, as we'll soon see is quite possible), they are formally in parallel. An example is a mass connected to a spring in which the driving force is applied to one end of the spring, and the mass is attached to the other end, as shown in Fig.7.11. The compression force on the spring is equal at all times to the rightward force on the mass. However, the spring compression velocity $ v_k(t)$ does not always equal the mass velocity $ v_m(t)$. We do have that the sum of the mass velocity and spring compression velocity gives the velocity of the driving point, i.e., $ v(t)=v_m(t)+v_k(t)$. Thus, in a parallel connection, forces are equal and velocities sum.

Figure 7.11: A mass and spring combined as one-ports in parallel.

Figure 7.12 shows the electrical equivalent circuit corresponding to Fig.7.11.

Figure: Electrical equivalent circuit of the parallel mass-spring combination driven by an external force, as diagrammed in Fig.7.11.

Figure: Impedance diagram for the force-driven, parallel mass-spring arrangement shown in Fig.7.11.

Mechanical Impedance Analysis

Impedance analysis is commonly used to analyze electrical circuits [110]. By means of equivalent circuits, we can use the same analysis methods for mechanical systems.

For example, referring to Fig.7.9, the Laplace transform of the force on the spring $ k$ is given by the so-called voltage divider relation:8.2

$\displaystyle F_k(s) = F_{\mbox{ext}}(s) \frac{R_k(s)}{R_m(s)+R_k(s)} = F_{\mbox{ext}}(s) \frac{\frac{k}{s}}{ms+\frac{k}{s}}.$

Similarly, the Laplace transform of the force on the mass $ m$ is given by

$\displaystyle F_m(s) = F_{\mbox{ext}}(s) \frac{R_m(s)}{R_m(s)+R_k(s)} = F_{\mbox{ext}}(s) \frac{ms}{ms+\frac{k}{s}}. \protect$ (7.1)

As a simple application, let's find the motion of the mass $ m$, after time zero, given that the input force is an impulse at time 0:

$\displaystyle f_{\mbox{ext}}(t)=\delta(t) \;\leftrightarrow\; F_{\mbox{ext}}(s)=1.$

Then, by the ``voltage divider'' relation Eq.$ \,$(7.1), the Laplace transform of the mass force $ f_m(t)$ after time 0 is given by

$\displaystyle F_m(s) = \frac{ms}{ms+\frac{k}{s}} = \frac{s^2}{s^2+\frac{k}{m}} \isdef \frac{s^2}{s^2+\omega_0^2},$

where we have defined $ \omega_0^2\isdef k/m$. The mass velocity Laplace transform is then

$\displaystyle V_m(s) \eqsp \frac{F_m(s)}{ms} \eqsp \frac{1}{m} \cdot \frac{s}{s^2+\omega_0^2} \;\leftrightarrow\; \frac{1}{m} \cos(\omega_0 t).$

Thus, the impulse response of the mass oscillates sinusoidally with radian frequency $ \omega_0=\sqrt{k/m}$, and amplitude $ 1/m$. The velocity starts out maximum at time $ t=0$, which makes physical sense. Also, the momentum transferred to the mass at time 0 is $ m\,v(0+) = 1$; this is also expected physically because the time-integral of the applied force is 1 (the area under any impulse $ \delta(t)$ is 1).
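This impulse response is easy to check numerically: transfer a unit momentum to the mass at time 0 and then integrate the free mass-spring-wall equations of motion, $ m\dot v = -kx$, $ \dot x = v$. The sketch below uses hypothetical values of $ m$ and $ k$ and a simple semi-implicit Euler integrator (an illustrative choice, not from the text):

```python
import math

# Unit force impulse at t = 0 transfers unit momentum: v(0+) = 1/m, x(0+) = 0.
# Thereafter the system rings freely; theory says v(t) = (1/m) cos(w0 t).
m, k = 0.25, 100.0            # hypothetical values
w0 = math.sqrt(k / m)
dt = 1e-5
x, v, t = 0.0, 1.0 / m, 0.0
while t < 0.01:
    v += (-k * x / m) * dt    # semi-implicit (symplectic) Euler step
    x += v * dt
    t += dt

v_theory = (1.0 / m) * math.cos(w0 * t)
```

The simulated mass velocity tracks $ (1/m)\cos(\omega_0 t)$ closely for small time steps, confirming both the amplitude $ 1/m$ and the resonance frequency $ \omega_0=\sqrt{k/m}$.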

General One-Ports

An arbitrary interconnection of $ N$ impedances and admittances, with input and output force and/or velocities defined, results in a one-port with admittance expressible as

$\displaystyle \Gamma(s) = \frac{b_0 s^N + b_1 s^{N-1} + \cdots + b_N}{s^N + a_1 s^{N-1} + \cdots + a_N} \isdef \frac{B(s)}{A(s)}.$

In any mechanical situation we have $ b_0 = 0$, in principle, since at sufficiently high frequencies, every mechanical system must ``look like a mass.''8.3 However, for purposes of approximation to a real physical system, it may well be best to allow $ b_0\neq 0$ and consider the above expression to be a rational approximation to the true admittance function.

Passive One-Ports

It is well known that the impedance of every passive one-port is positive real (see §C.11.2). The reciprocal of a positive real function is positive real, so every passive impedance corresponds also to a passive admittance.

A complex-valued function of a complex variable $ \Gamma (s)$ is said to be positive real (PR) if

$ \Gamma (s)$ is real whenever $ s$ is real
$ \Re\{\Gamma(s)\} \geq 0$ whenever $ \Re\{s\} \geq 0$.

A particularly important property of positive real functions is that the phase is bounded between plus and minus $ 90$ degrees, i.e.,

$\displaystyle -\frac{\pi}{2} \leq \angle{\Gamma(j\omega)} \leq \frac{\pi}{2}.$

This is a significant constraint on the rational function $ \Gamma (s)$. One implication is that in the lossless case (no dashpots, only masses and springs--a reactance) all poles and zeros interlace along the $ j\omega $ axis, as depicted in Fig.7.14.

Figure 7.14: Poles and zeros of a lossless immittance (reactance or susceptance) must interlace along the $ j\omega $ axis. Left: Pole-zero plot. Right: Phase response. The ``spring/mass'' labels along the frequency axis correspond to the case of a lossless admittance (susceptance) in which a spring admittance ( $ \Gamma _k(j\omega )=j\omega /k$) gives a $ +\pi /2$ phase shift, while that of a mass ( $ \Gamma _m(j\omega )=-j/(m\omega )$) gives a $ -\pi /2$ phase shift between the input driving-force and output velocity.

Referring to Fig.7.14, consider the graphical method for computing the phase response of a reactance from the pole-zero diagram [449].8.4 Each zero on the positive $ j\omega $ axis contributes a net 90 degrees of phase at frequencies above the zero. As frequency crosses the zero going up, there is a switch from $ -90$ to $ +90$ degrees. For each pole, the phase contribution switches from $ +90$ to $ -90$ degrees as it is passed going up in frequency. In order to keep phase in $ [-\pi/2,\pi/2]$, it is clear that the poles and zeros must strictly alternate. Moreover, all poles and zeros must be simple, since a multiple pole or zero would swing the phase by more than $ 180$ degrees, and the reactance could not be positive real.

The positive real property is fundamental to passive immittances and comes up often in the study of measured resonant systems. A practical modeling example (passive digital modeling of a guitar bridge) is discussed in §9.2.1.

Digitization of Lumped Models

Since lumped models are described by differential equations, they are digitized (brought into the digital-signal domain) by converting them to corresponding finite-difference equations (or simply ``difference equations''). General aspects of finite difference schemes are discussed in Appendix D. This chapter introduces a couple of elementary methods in common use:

  1. Finite difference approximation: $ sT\leftarrow 1-z^{-1}$
  2. Bilinear transform: $ sT\leftarrow 2\,(1-z^{-1})/(1+z^{-1})$
Note that digitization by the bilinear transform is closely related to the Wave Digital Filter (WDF) method introduced in Appendix F. Section 9.3.1 discusses a bilinearly transformed mass colliding with a digital waveguide string (an idealized struck-string example).

Finite Difference Approximation

A finite difference approximation (FDA) approximates derivatives with finite differences, i.e.,

$\displaystyle \frac{d}{dt} x(t) \isdefs \lim_{\delta\to 0} \frac{x(t) - x(t-\delta)}{\delta} \;\approx\; \frac{x(nT)-x[(n-1)T]}{T} \protect$ (7.2)

for sufficiently small $ T$.8.5

Equation (7.2) is also known as the backward difference approximation of differentiation.
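As a quick numerical illustration, the backward difference of a sampled sinusoid approximates its derivative to within an error on the order of $ T$. This is a minimal sketch; the function and step size are arbitrary choices for illustration:

```python
import math

def backward_difference(x, T):
    """First-order backward difference: (x[n] - x[n-1]) / T, for n >= 1."""
    return [(x[n] - x[n - 1]) / T for n in range(1, len(x))]

# Approximate d/dt sin(t) = cos(t) on a fine grid.
T = 1e-3
xs = [math.sin(n * T) for n in range(1001)]
dxs = backward_difference(xs, T)

# dxs[n-1] approximates cos(n*T); the error shrinks as O(T).
err = max(abs(dxs[n - 1] - math.cos(n * T)) for n in range(1, 1001))
```

Halving $ T$ roughly halves `err`, consistent with the first-order accuracy of the backward difference.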

See §C.2.1 for a discussion of using the FDA to model ideal vibrating strings.

FDA in the Frequency Domain

Viewing Eq.$ \,$(7.2) in the frequency domain, the ideal differentiator transfer-function is $ H(s)=s$, which can be viewed as the Laplace transform of the operator $ d/dt$ (left-hand side of Eq.$ \,$(7.2)). Moving to the right-hand side, the z transform of the first-order difference operator is $ (1-z^{-1})/T$. Thus, in the frequency domain, the finite-difference approximation may be performed by making the substitution

$\displaystyle s \;\leftarrow\; \frac{1-z^{-1}}{T} \protect$ (7.3)

in any continuous-time transfer function (Laplace transform of an integro-differential operator) to obtain a discrete-time transfer function (z transform of a finite-difference operator).

The inverse of substitution Eq.$ \,$(7.3) is

$\displaystyle z \eqsp \frac{1}{1 - sT} \eqsp 1 + sT + (sT)^2 + \cdots \, .$

As discussed in §8.3.1, the FDA is a special case of the matched $ z$ transformation applied to the point $ s=0$.

Note that the FDA does not alias, since the conformal mapping $ s = 1 - z^{-1}$ is one to one. However, it does warp the poles and zeros in a way which may not be desirable, as discussed further below.

Delay Operator Notation

It is convenient to think of the FDA in terms of time-domain difference operators using a delay operator notation. The delay operator $ d$ is defined by

$\displaystyle d^k x(n) \eqsp x(n-k).$

Thus, the first-order difference (derivative approximation) is represented in the time domain by $ (1-d)/T$. We can think of $ d$ as $ z^{-1}$ since, by the shift theorem for $ z$ transforms, $ z^{-k}
X(z)$ is the $ z$ transform of $ x$ delayed (right shifted) by $ k$ samples.

The obvious definition for the second derivative is

$\displaystyle {\hat{\ddot x}}(n) \eqsp \frac{(1-d)^2}{T^2} x(n).$ (7.4)

However, a better definition is the centered finite difference

$\displaystyle {\hat{\ddot x}}(n) \isdefs \frac{(d^{-1}-1)(1-d)}{T^2} x(n) \eqsp \frac{d^{-1}-2+d}{T^2}x(n) \protect$ (7.5)

where $ d^{-1}$ denotes a unit-sample advance. This definition is preferable as long as one sample of look-ahead is available, since it avoids an operator delay of one sample. Equation (7.5) is a zero phase filter, meaning it has no delay at any frequency, while (7.4) is a linear phase filter having a delay of $ 1$ sample at all frequencies.
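The advantage of the centered form shows up immediately in a numerical comparison. Applied to a smooth test signal (here $ \sin(t)$, an arbitrary illustrative choice), the centered difference is second-order accurate while the backward-squared form is only first-order accurate (it effectively estimates the second derivative one sample late):

```python
import math

T = 1e-2
def x(t):
    return math.sin(t)

def second_diff_backward(t):
    # (1 - d)^2 / T^2 applied at time t: [x(t) - 2 x(t-T) + x(t-2T)] / T^2
    return (x(t) - 2 * x(t - T) + x(t - 2 * T)) / T ** 2

def second_diff_centered(t):
    # (d^{-1} - 2 + d) / T^2: [x(t+T) - 2 x(t) + x(t-T)] / T^2
    return (x(t + T) - 2 * x(t) + x(t - T)) / T ** 2

t0 = 1.0
true_val = -math.sin(t0)            # exact second derivative of sin
e_back = abs(second_diff_backward(t0) - true_val)
e_cent = abs(second_diff_centered(t0) - true_val)
```

Here `e_cent` is orders of magnitude smaller than `e_back`, reflecting the $ O(T^2)$ versus $ O(T)$ accuracy noted above.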

Bilinear Transformation

The bilinear transform is defined by the substitution

$\displaystyle s \eqsp c\frac{1-z^{-1}}{1+z^{-1}}, \quad c>0, \; c=2/T\;$ (typically) (7.6)
$\displaystyle \,\,\Rightarrow\,\, z \eqsp \frac{1+s/c}{1-s/c} \eqsp 1 + 2(s/c) + 2(s/c)^2 + 2(s/c)^3 + \cdots$ (7.7)

where $ c$ is some positive constant [83,326]. That is, given a continuous-time transfer function $ H_a(s)$, we apply the bilinear transform by defining

$\displaystyle H_d(z) = H_a\left(c\frac{1-z^{-1}}{1+z^{-1}}\right)$

where the ``$ d$'' subscript denotes ``digital,'' and ``$ a$'' denotes ``analog.''

It can be seen that analog dc ($ s=0$) maps to digital dc ($ z=1$) and the highest analog frequency ($ s=\infty$) maps to the highest digital frequency ($ z=-1$). It is easy to show that the entire $ j\omega $ axis in the $ s$ plane (where $ s\isdeftext \sigma+j\omega$) is mapped exactly once around the unit circle in the $ z$ plane (rather than summing around it infinitely many times, or ``aliasing'' as it does in ordinary sampling). With $ c$ real and positive, the left-half $ s$ plane maps to the interior of the unit circle, and the right-half $ s$ plane maps outside the unit circle. This means stability is preserved when mapping a continuous-time transfer function to discrete time.
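These mapping properties can be verified numerically in a few lines. The sketch below checks that $ j\omega $-axis points land exactly on the unit circle and that a stable (left-half-plane) pole maps inside it; the sample values are arbitrary:

```python
def bilinear_z(s, c=2.0):
    """Image of an s-plane point under the bilinear transform:
    z = (1 + s/c) / (1 - s/c)."""
    return (1 + s / c) / (1 - s / c)

# Points on the j*omega axis map onto the unit circle |z| = 1,
# and left-half-plane points map strictly inside it.
on_axis = [abs(bilinear_z(1j * w)) for w in (0.0, 0.5, 3.0, 100.0)]
stable_image = abs(bilinear_z(-1.0 + 2.0j))
```

Analog dc ($ s=0$) gives $ z=1$ and $ s\to\infty$ gives $ z\to-1$, matching the endpoints described above.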

Another valuable property of the bilinear transform is that order is preserved. That is, an $ N$th-order $ s$-plane transfer function carries over to an $ N$th-order $ z$-plane transfer function. (Order in both cases equals the maximum of the degrees of the numerator and denominator polynomials [449]).8.6

The constant $ c$ provides one remaining degree of freedom which can be used to map any particular finite frequency from the $ j\omega_a$ axis in the $ s$ plane to a particular desired location on the unit circle $ e^{j\omega_d}$ in the $ z$ plane. All other frequencies will be warped. In particular, approaching half the sampling rate, the frequency axis compresses more and more. Note that at most one resonant frequency can be preserved under the bilinear transformation of a mass-spring-dashpot system. On the other hand, filters having a single transition frequency, such as lowpass or highpass filters, map beautifully under the bilinear transform; one simply uses $ c$ to map the cut-off frequency where it belongs, and the response looks great. In particular, ``equal ripple'' is preserved for optimal filters of the elliptic and Chebyshev types because the values taken on by the frequency response are identical in both cases; only the frequency axis is warped.

The bilinear transform is often used to design digital filters from analog prototype filters [343]. An on-line introduction is given in [449].

Finite Difference Approximation vs. Bilinear Transform

Recall that the Finite Difference Approximation (FDA) defines the elementary differentiator by $ y(n) = x(n) -
x(n-1)$ (ignoring the scale factor $ T$ for now), and this approximates the ideal transfer function $ H(s)=s$ by $ H_d(z)=1-z^{-1}$. The bilinear transform calls instead for the transfer function $ H'_d(z)=(1-z^{-1})/(1+z^{-1})$ (again dropping the scale factor) which introduces a pole at $ z=-1$ and gives us the recursion $ y(n) = x(n) - x(n-1) - y(n-1)$. Note that this new pole is right on the unit circle and is therefore undamped. Any signal energy at half the sampling rate will circulate forever in the recursion, and due to round-off error, it will tend to grow. This is therefore a potentially problematic revision of the differentiator. To get something more practical, we need to specify that the filter frequency response approximate $ H(j\omega)=j\omega$ over a finite range of frequencies $ [-\omega_c,\omega_c]$, where $ \omega_c\ll\pi f_s$, above which we allow the response to ``roll off'' to zero. This is how we pose the differentiator problem in terms of general purpose filter design (see §8.6) [362].

To understand the properties of the finite difference approximation in the frequency domain, we may look at the properties of its $ s$-plane to $ z$-plane mapping

$\displaystyle s = \frac{1 - z^{-1}}{T}.$

We see the FDA is actually a portion of the bilinear transform, since following the FDA mapping by the mapping $ s = (c/T)/(1+z^{-1})$ would convert it to the bilinear transform. Like the bilinear transform, the FDA does not alias, since the mapping $ s = 1 - z^{-1}$ is one-to-one.

Setting $ T$ to 1 for simplicity and solving the FDA mapping for z gives

$\displaystyle z = \frac{1}{1-s}.$

We see that analog dc ($ s=0$) maps to digital dc ($ z=1$) as desired, but higher frequencies unfortunately map inside the unit circle rather than onto the unit circle in the $ z$ plane. Solving for the image in the z plane of the $ j\omega $ axis in the s plane gives

$\displaystyle z = \frac{1}{1-j\omega} = \frac{1 + j\omega}{1+\omega^2}.$

From this it can be checked that the FDA maps the $ j\omega $ axis in the $ s$ plane to the circle of radius $ 1/2$ centered at the point $ z = 1/2$ in the $ z$ plane, as shown in Fig. 7.15.

Figure 7.15: Image of the $ j\omega $ axis in the $ z$ plane: a circle of radius $ 1/2$ centered at the point $ z = 1/2$.

Under the FDA, analog and digital frequency axes coincide well enough at very low frequencies (high sampling rates), but at high frequencies relative to the sampling rate, artificial damping is introduced as the image of the $ j\omega $ axis diverges away from the unit circle.
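The circle property is also easy to confirm numerically. This sketch checks that images of $ j\omega $-axis points satisfy $ \vert z - 1/2\vert = 1/2$ and fall inside the unit circle away from dc (sample frequencies are arbitrary):

```python
def fda_z(s, T=1.0):
    """Image of an s-plane point under the FDA mapping s = (1 - z^{-1})/T,
    i.e., z = 1 / (1 - s*T)."""
    return 1.0 / (1.0 - s * T)

# The j*omega axis maps onto the circle |z - 1/2| = 1/2,
# which lies inside the unit circle except at dc (z = 1).
images = [fda_z(1j * w) for w in (0.0, 0.1, 1.0, 10.0)]
```

The shrinking magnitude of `fda_z(1j*w)` with increasing $ \omega$ is exactly the artificial damping discussed above.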

While the bilinear transform ``warps'' the frequency axis, we can say the FDA ``doubly warps'' the frequency axis: It has a progressive, compressive warping in the direction of increasing frequency, like the bilinear transform, but unlike the bilinear transform, it also warps normal to the frequency axis.

Consider a point traversing the upper half of the unit circle in the z plane, starting at $ z=1$ and ending at $ z=-1$. At dc, the FDA is perfect, but as we proceed out along the unit circle, we diverge from the $ j\omega $ axis image and carve an arc somewhere out in the image of the right-half $ s$ plane. This has the effect of introducing an artificial damping.

Consider, for example, an undamped mass-spring system. There will be a complex conjugate pair of poles on the $ j\omega $ axis in the $ s$ plane. After the FDA, those poles will be inside the unit circle, and therefore damped in the digital counterpart. The higher the resonance frequency, the larger the damping. It is even possible for unstable $ s$-plane poles to be mapped to stable $ z$-plane poles.

In summary, both the bilinear transform and the FDA preserve order, stability, and positive realness. They are both free of aliasing, high frequencies are compressively warped, and both become ideal at dc, or as $ f_s$ approaches $ \infty$. However, at frequencies significantly above zero relative to the sampling rate, only the FDA introduces artificial damping. The bilinear transform maps the continuous-time frequency axis in the $ s$ plane (the $ j\omega $ axis) precisely to the discrete-time frequency axis in the $ z$ plane (the unit circle).

Application of the Bilinear Transform

The impedance of a mass in the frequency domain is

$\displaystyle R_M(s) = Ms.$

In the $ s$ plane, we have

$\displaystyle F_a(s) = (Ms) V_a(s)$

where the ``a'' subscript denotes ``analog''. For simplicity, let's choose the free constant $ c$ in the bilinear transform such that $ 1$ rad/sec maps to one fourth the sampling rate, i.e., $ s=j$ maps to $ z=j$ which implies $ c=1$. Then the impedance relation maps across as

$\displaystyle F_d(z) = \left(M\frac{1-z^{-1}}{1+z^{-1}}\right) V_d(z)$

where the ``d'' subscript denotes ``digital''. Multiplying through by the denominator and applying the shift theorem for $ z$ transforms gives the corresponding difference equation

$\displaystyle (1+z^{-1})F_d(z) = M (1-z^{-1}) V_d(z)
\quad\Rightarrow\quad f_d(n) = M[v_d(n) - v_d(n-1)] - f_d(n-1).$

This difference equation is diagrammed in Fig. 7.16. We recognize this recursive digital filter as the direct form I structure. The direct-form II structure is obtained by commuting the feedforward and feedback portions and noting that the two delay elements contain the same value and can therefore be shared [449]. The two other major filter-section forms are obtained by transposing the two direct forms: exchange the input and output, and reverse all arrows. (This is a special case of Mason's gain formula, which holds for the single-input, single-output case.) When a filter structure is transposed, its summers become branching nodes and vice versa. Further discussion of the four basic filter-section forms can be found in [333].
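The difference equation can be run directly. Below is a minimal Python sketch (the function name is ours) of the direct-form-I mass simulator; note that a step in velocity excites the pole at $ z=-1$, so the output rings at half the sampling rate, foreshadowing the practical issue discussed in the next subsection.

```python
def digital_mass(v, M=1.0):
    """Direct-form-I digital mass (bilinear transform, c = 1):
    f(n) = M*[v(n) - v(n-1)] - f(n-1)."""
    out = []
    v_prev = f_prev = 0.0
    for vn in v:
        fn = M * (vn - v_prev) - f_prev
        out.append(fn)
        v_prev, f_prev = vn, fn
    return out

# A velocity step drives force impulses that ring at fs/2:
print(digital_mass([1.0, 1.0, 1.0, 1.0]))  # [1.0, -1.0, 1.0, -1.0]
```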

Figure 7.16: A direct-form-I digital filter simulating a mass $ M$ created using the bilinear transform $ s=(1-z^{-1})/(1+z^{-1})$.

Practical Considerations

While the digital mass simulator has the desirable properties of the bilinear transform, it is not perfect from a practical point of view: (1) There is a pole at half the sampling rate ($ z=-1$). (2) There is a delay-free path from input to output.

The first problem can easily be circumvented by introducing a loss factor $ g$, moving the pole from $ z=-1$ to $ z=-g$, where $ g\in[0,1)$ and $ g\approx1$. This is sometimes called the ``leaky integrator''.
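In code, the fix is a one-line change to the mass recursion: scale the feedback term by $ g$. A Python sketch (function name ours):

```python
def digital_mass_leaky(v, M=1.0, g=0.995):
    """Digital mass with loss factor g: the pole moves from z = -1 to z = -g,
    so the response to a velocity step decays instead of ringing forever."""
    out = []
    v_prev = f_prev = 0.0
    for vn in v:
        fn = M * (vn - v_prev) - g * f_prev  # feedback scaled by g
        out.append(fn)
        v_prev, f_prev = vn, fn
    return out
```

With $ g=0.9$, for example, the response to a velocity step is $ f(n)=(-g)^n$, which decays geometrically rather than ringing at $ f_s/2$ indefinitely.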

The second problem arises when making loops of elements (e.g., a mass-spring chain which forms a loop). Since the individual elements have no delay from input to output, a loop of elements is not computable using standard signal processing methods. The solution proposed by Alfred Fettweis is known as ``wave digital filters,'' a topic taken up in §F.1.

Limitations of Lumped Element Digitization

Model discretization by the FDA (§7.3.1) and bilinear transform (§7.3.2) methods is order preserving. As a result, these methods suffer from significant approximation error, especially at high frequencies relative to half the sampling rate. By allowing a larger order in the digital model, we may obtain arbitrarily accurate transfer-function models of LTI subsystems, as discussed in Chapter 8. Of course, in higher-order approximations, the state variables of the simulation no longer have a direct physical interpretation, and this can have implications, particularly when trying to extend to the nonlinear case. The benefits of a physical interpretation should not be given up lightly. For example, one may consider oversampling in place of going to higher-order element approximations.

More General Finite-Difference Methods

The FDA and bilinear transform of the previous sections can be viewed as first-order conformal maps from the analog $ s$ plane to the digital $ z$ plane. These maps are one-to-one and therefore non-aliasing. The FDA performs well at low frequencies relative to the sampling rate, but it introduces artificial damping at high frequencies. The bilinear transform preserves the frequency axis exactly, but over a warped frequency scale. Being first order, both maps preserve the number of poles and zeros in the model.

We may only think in terms of mapping the $ s$ plane to the $ z$ plane for linear, time-invariant systems. This is because Laplace transform analysis is not defined for nonlinear and/or time-varying differential equations (no $ s$ plane). Therefore, such systems are instead digitized by some form of numerical integration to produce solutions that are ideally sampled versions of the continuous-time solutions. It is often necessary to work at sampling rates much higher than the desired audio sampling rate, due to the bandwidth-expanding effects of nonlinear elements in the continuous-time system.

A tutorial review of numerical solutions of Ordinary Differential Equations (ODEs), including nonlinear systems, with examples in the realm of audio effects (such as a diode clipper), is given in [555]. Finite difference schemes specifically designed for nonlinear discrete-time simulation, such as the energy-preserving ``Kirchhoff-Carrier nonlinear string model'' and ``von Kármán nonlinear plate model'', are discussed in [53].

The remaining sections here summarize a few of the more elementary techniques discussed in [555].

General Nonlinear ODE

In state-space form (§1.3.7) [449], a general class of $ N$th-order Ordinary Differential Equations (ODEs) can be written as

$\displaystyle \dot{\underline{x}}(t) \eqsp f(t,\underline{x},\underline{u}) \protect$ (7.8)

where $ t$ denotes time in seconds, $ \underline{x}(t)$ denotes a vector of $ N$ state variables at time $ t$,

$\displaystyle \dot{\underline{x}}(t) \isdefs \frac{d}{dt}\underline{x}(t)$

denotes the time derivative of $ \underline{x}(t)$, and $ \underline{u}(t)$ is a vector (any length) of the system input signals, if any. Thus, Eq.$ \,$(7.8) says simply that the time-derivative of the state vector is some function $ f$ depending on time $ t$, the current state $ \underline{x}(t)$, and the current input signals $ \underline{u}(t)$. The basic problem is to solve for the state trajectory $ \underline{x}(t)$ given its initial condition $ \underline{x}(0)$, the system definition function $ f$, and the input signals $ \underline{u}(t)$ for all $ t\ge 0$.

In the linear, time-invariant (LTI) case, Eq.$ \,$(7.8) can be expressed in the usual state-space form for LTI continuous-time systems:

$\displaystyle \frac{d}{dt}\underline{x}(t) \eqsp A\,\underline{x}(t) + B\,\underline{u}(t) \protect$ (7.9)

In this case, standard methods for converting a filter from continuous to discrete time may be used, such as the FDA (§7.3.1) and the bilinear transform (§7.3.2).

Forward Euler Method

The finite-difference approximation (Eq.$ \,$(7.2)) with the derivative evaluated at time $ n-1$ yields the forward Euler method of numerical integration:

$\displaystyle \underline{\hat{x}}(n) \isdefs \underline{\hat{x}}(n-1) + T\, \dot{\underline{\hat{x}}}(n-1) \eqsp \underline{\hat{x}}(n-1) + T\, f[n-1,\underline{\hat{x}}(n-1),\underline{u}(n-1)] \protect$ (7.10)

where $ \underline{\hat{x}}(n)$ denotes the approximation to $ \underline{x}(nT)$ computed by the forward Euler method. Note that the ``driving function'' $ f$ is evaluated at time $ n-1$, not $ n$. As a result, given $ \underline{\hat{x}}(0)=\underline{x}(0)$ and the input vector $ \underline{u}(n)$ for all $ n\ge0$, Eq.$ \,$(7.10) can be iterated forward in time to compute $ \underline{\hat{x}}(n)$ for all $ n>0$. Since $ f$ is an arbitrary function, we have a solver that is applicable to nonlinear, time-varying ODEs of the form of Eq.$ \,$(7.8).
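As a concrete illustration, the Python sketch below (our own function names) iterates Eq.$ \,$(7.10) for the undamped mass-spring system $ \dot{x}_1 = x_2$, $ \dot{x}_2 = -x_1$. Since each step multiplies the state by a matrix whose eigenvalues have magnitude $ \sqrt{1+T^2} > 1$, forward Euler slowly pumps energy into an undamped oscillator:

```python
def forward_euler(f, x0, T, n_steps):
    """Explicit forward Euler, Eq.(7.10) with no input u:
    x(n) = x(n-1) + T * f(n-1, x(n-1))."""
    x = list(x0)
    for n in range(1, n_steps + 1):
        dx = f(n - 1, x)
        x = [xi + T * di for xi, di in zip(x, dx)]
    return x

# Undamped unit mass-spring: x1' = x2, x2' = -x1; energy = x1^2 + x2^2
def spring(n, x):
    return [x[1], -x[0]]

x = forward_euler(spring, [1.0, 0.0], T=0.01, n_steps=1000)
print(x[0] ** 2 + x[1] ** 2)  # slightly greater than 1: energy grows
```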

Because each iteration of the forward Euler method depends only on past quantities, it is termed an explicit method. In the LTI case, an explicit method corresponds to a causal digital filter [449]. Methods that depend on current and/or future solution samples (i.e., $ \underline{\hat{x}}(m)$ for $ m\ge n$) are called implicit methods. When a nonlinear numerical-integration method is implicit, each step forward in time typically uses some number of iterations of Newton's Method (see §7.4.5 below).

Backward Euler Method

An example of an implicit method is the backward Euler method:

$\displaystyle \underline{\hat{x}}(n) \isdefs \underline{\hat{x}}(n-1) + T\,\dot{\underline{\hat{x}}}(n) \eqsp \underline{\hat{x}}(n-1) + T\,f[n,\underline{\hat{x}}(n),\underline{u}(n)] \protect$ (7.11)

Because the derivative is now evaluated at time $ n$ instead of $ n-1$, the backward Euler method is implicit. Notice, however, that if time were reversed, it would become explicit; in other words, backward Euler is implicit in forward time and explicit in reverse time.
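For the scalar LTI test equation $ \dot{x} = a\,x$, Eq.$ \,$(7.11) can be solved for $ x(n)$ in closed form, giving $ x(n) = x(n-1)/(1-aT)$. A Python sketch (ours) shows its unconditional stability for $ a<0$, even at step sizes where forward Euler would diverge:

```python
def backward_euler_linear(a, x0, T, n_steps):
    """Backward Euler for x' = a*x:
    x(n) = x(n-1) + T*a*x(n)  =>  x(n) = x(n-1)/(1 - a*T)."""
    x = x0
    for _ in range(n_steps):
        x = x / (1.0 - a * T)
    return x

# Stiff decay, a = -1000 with T = 0.01 (a*T = -10):
# forward Euler's growth factor would be 1 + a*T = -9 (explosive);
# backward Euler's is 1/(1 - a*T) = 1/11 (strongly damped, like the true
# solution).
print(backward_euler_linear(-1000.0, 1.0, T=0.01, n_steps=10))
```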

Trapezoidal Rule

The trapezoidal rule is defined by

$\displaystyle \underline{\hat{x}}(n) \isdefs \underline{\hat{x}}(n-1) + T\, \frac{\dot{\underline{\hat{x}}}(n) + \dot{\underline{\hat{x}}}(n-1)}{2}. \protect$ (7.12)

Thus, the trapezoidal rule is driven by the average of the derivative estimates at times $ n$ and $ n-1$. The method is implicit in either forward or reverse time.

The trapezoidal rule gets its name from the fact that it approximates an integral by summing the areas of trapezoids. This can be seen by writing Eq.$ \,$(7.12) as

$\displaystyle \underline{\hat{x}}(n) = \underline{\hat{x}}(n-1) + T\,\dot{\underline{\hat{x}}}(n-1) + \frac{T}{2}\,\left[\dot{\underline{\hat{x}}}(n) - \dot{\underline{\hat{x}}}(n-1)\right].$
Imagine a plot of $ \dot{\underline{\hat{x}}}(n)$ versus $ n$, and connect the samples with linear segments to form a sequence of trapezoids whose areas must be summed to yield an approximation to $ \underline{\hat{x}}(n)$. Then the integral at time $ n$, $ \underline{\hat{x}}(n)$, is given by the integral at time $ n-1$, $ \underline{\hat{x}}(n-1)$, plus the area of the next rectangle, $ T\,\dot{\underline{\hat{x}}}(n-1)$, plus the area of the new triangular piece atop the new rectangle, $ T\,[\dot{\underline{\hat{x}}}(n) - \dot{\underline{\hat{x}}}(n-1)]/2$. In other words, the integral at time $ n$ equals the integral at time $ n-1$ plus the area of the next trapezoid in the sum.

An interesting fact about the trapezoidal rule is that it is equivalent to the bilinear transform in the linear, time-invariant case. Carrying Eq.$ \,$(7.12) to the frequency domain gives

$\displaystyle X(z) \eqsp z^{-1}X(z) + T\, \frac{s X(z) + z^{-1}s X(z)}{2}
\quad\Longleftrightarrow\quad s \eqsp \frac{2}{T}\,\frac{1-z^{-1}}{1+z^{-1}}.$
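The unit-circle mapping of the bilinear transform implies exact energy conservation for an undamped oscillator under the trapezoidal rule. The $ 2\times2$ case can be verified directly in Python (a sketch of ours, with the linear solve written out by hand):

```python
def trapezoid_oscillator(x0, T, n_steps):
    """Trapezoidal rule for x' = A x with A = [[0, 1], [-1, 0]]
    (undamped unit oscillator):
        x(n) = (I - T*A/2)^{-1} (I + T*A/2) x(n-1),
    solved in closed form for this 2x2 system."""
    h = T / 2.0
    det = 1.0 + h * h  # det(I - h*A) for this A
    x1, x2 = x0
    for _ in range(n_steps):
        # right-hand side: (I + h*A) x
        b1 = x1 + h * x2
        b2 = -h * x1 + x2
        # solve (I - h*A) x_new = b; inverse of [[1,-h],[h,1]] is
        # [[1,h],[-h,1]]/det
        x1 = (b1 + h * b2) / det
        x2 = (-h * b1 + b2) / det
    return x1, x2

x1, x2 = trapezoid_oscillator((1.0, 0.0), T=0.1, n_steps=1000)
print(x1 * x1 + x2 * x2)  # stays at 1 to within roundoff
```

The per-step update matrix here has eigenvalues $ (1\pm jT/2)/(1\mp jT/2)$, of magnitude exactly 1, so the oscillation neither decays nor grows, matching the bilinear transform's exact frequency-axis mapping.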

Newton's Method of Nonlinear Minimization

Newton's method [162],[166, p. 143] finds the minimum of a nonlinear (scalar) function of several variables by locally approximating the function by a quadratic surface, and then stepping to the bottom of that ``bowl'', which generally requires a matrix inversion. Newton's method therefore requires the function to be ``close to quadratic'', and its effectiveness is directly tied to the accuracy of that assumption. For smooth functions, Newton's method gives very rapid quadratic convergence in the last stages of iteration. Quadratic convergence implies, for example, that the number of significant digits in the minimizer approximately doubles each iteration.

Newton's method may be derived as follows: Suppose we wish to minimize the real, positive function $ J(\underline{x})$ with respect to $ \underline{x}$. The vector Taylor expansion [546] of $ J(\underline{x})$ about $ \underline{\hat{x}}$ gives

$\displaystyle J(\underline{\hat{x}}^\ast ) = J(\underline{\hat{x}}) + J^\prime(\underline{\hat{x}})\left(\underline{\hat{x}}^\ast -\underline{\hat{x}}\right) + \frac{1}{2}\left(\underline{\hat{x}}^\ast -\underline{\hat{x}}\right)^T J''\!\left(\lambda\underline{\hat{x}}^\ast +\overline{\lambda}\underline{\hat{x}}\right) \left(\underline{\hat{x}}^\ast -\underline{\hat{x}}\right),$

for some $ 0\leq\lambda\leq 1$, where $ \overline{\lambda}\isdef
1-\lambda$. It is now necessary to assume that $ J''\left(
\lambda\underline{\hat{x}}^\ast +\overline{\lambda}\underline{\hat{x}}\right)\approx J''(\underline{\hat{x}})$. Differentiating with respect to $ \underline{\hat{x}}^\ast $, where $ J(\underline{\hat{x}}^\ast )$ is presumed to be minimized, this becomes

$\displaystyle 0 = 0 + J^\prime(\underline{\hat{x}}) + J''(\underline{\hat{x}})
\left(\underline{\hat{x}}^\ast -\underline{\hat{x}}\right).$

Solving for $ \underline{\hat{x}}^\ast $ yields

$\displaystyle \underline{\hat{x}}^\ast = \underline{\hat{x}}- [J''(\underline{\hat{x}})]^{-1} J^\prime(\underline{\hat{x}}).$ (7.13)

Applying Eq.$ \,$(7.13) iteratively, we obtain Newton's method:

$\displaystyle \underline{\hat{x}}_{n+1} = \underline{\hat{x}}_n - [J''(\underline{\hat{x}}_n)]^{-1} J^\prime(\underline{\hat{x}}_n), \quad n=0,1,2,\ldots\,,$ (7.14)

where $ \underline{\hat{x}}_0$ is given as an initial condition.

When $ J(\underline{\hat{x}})$ is any quadratic form in $ \underline{\hat{x}}$, then $ J''\left(\lambda\underline{\hat{x}}^\ast +\overline{\lambda}\underline{\hat{x}}\right)= J''(\underline{\hat{x}})$, and Newton's method produces $ \underline{\hat{x}}^\ast $ in one iteration; that is, $ \underline{\hat{x}}_1=\underline{\hat{x}}^\ast $ for every $ \underline{\hat{x}}_0$.
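The one-step property on quadratics is easy to check numerically. A scalar Python sketch of Eq.$ \,$(7.14) (function names ours):

```python
def newton_min(dJ, d2J, x0, n_iter):
    """Scalar Newton iteration, Eq.(7.14): x <- x - J'(x)/J''(x)."""
    x = x0
    for _ in range(n_iter):
        x = x - dJ(x) / d2J(x)
    return x

# Quadratic J(x) = (x - 3)^2: minimized exactly in a single iteration,
# regardless of the starting point.
print(newton_min(lambda x: 2.0 * (x - 3.0), lambda x: 2.0,
                 x0=100.0, n_iter=1))  # 3.0
```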

Semi-Implicit Methods

A semi-implicit method for numerical integration is based on an implicit method by using only one iteration of Newton's method [354,555]. This effectively converts the implicit method into a corresponding explicit method. Best results are obtained for highly oversampled systems (i.e., $ f_s=1/T$ is larger than typical audio sampling rates).

Semi-Implicit Backward Euler

The semi-implicit backward Euler method is defined by [555]

$\displaystyle \underline{\hat{x}}(n) \isdefs \underline{\hat{x}}(n-1) + T\, \frac{f[n,\underline{\hat{x}}(n-1)]}{1-T\,\ddot{\underline{\hat{x}}}(n-1)} \protect$ (7.15)

where $ \ddot{\underline{\hat{x}}}(n-1)$ denotes an estimate of the second time derivative $ \ddot{\underline{x}}(t)$.
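For a scalar autonomous ODE $ \dot{x}=f(x)$, one Newton iteration of backward Euler started from $ x(n-1)$ gives the update sketched below; here the denominator uses the Jacobian $ f'(x(n-1))$, which plays the role of the second-derivative estimate in Eq.$ \,$(7.15). This scalar form is our paraphrase, not the book's exact statement:

```python
def semi_implicit_backward_euler(f, fprime, x0, T, n_steps):
    """One Newton iteration of backward Euler ('semi-implicit') for a
    scalar autonomous ODE x' = f(x):
        x(n) = x(n-1) + T*f(x(n-1)) / (1 - T*f'(x(n-1)))."""
    x = x0
    for _ in range(n_steps):
        x = x + T * f(x) / (1.0 - T * fprime(x))
    return x

# Stiff decay x' = -100x with T = 0.1: the per-step factor is
# 1 - 10/11 = 1/11, so the method stays stable where plain forward
# Euler (factor 1 - 10 = -9) would explode.
print(semi_implicit_backward_euler(lambda x: -100.0 * x,
                                   lambda x: -100.0,
                                   1.0, T=0.1, n_steps=10))
```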

Semi-Implicit Trapezoidal Rule

The semi-implicit trapezoidal rule method is given by [555]

$\displaystyle \underline{\hat{x}}(n) \isdefs \underline{\hat{x}}(n-1) + \frac{T\,f[n,\underline{\hat{x}}(n-1)]}{1-\frac{T}{2}\,\ddot{\underline{\hat{x}}}(n-1)}. \protect$ (7.16)


We have looked at a number of methods for solving nonlinear ordinary differential equations, including explicit, implicit, and semi-implicit numerical integration methods. Specific methods included the explicit forward Euler (similar to the finite difference approximation of §7.3.1), backward Euler (implicit), trapezoidal rule (implicit, and equivalent to the bilinear transform of §7.3.2 in the LTI case), and semi-implicit variants of the backward Euler and trapezoidal methods.

As demonstrated and discussed further in [555], implicit methods are generally more accurate than explicit methods for nonlinear systems, with semi-implicit methods (§7.4.6) typically falling somewhere in between. Semi-implicit methods therefore provide a source of improved explicit methods. See [555] and the references therein for a discussion of accuracy and stability of such schemes, as well as applied examples.

Further Reading in Nonlinear Methods

Other well known numerical integration methods for ODEs include second-order backward difference formulas (commonly used in circuit simulation [555]), the fourth-order Runge-Kutta method [99], and their various explicit, implicit, and semi-implicit variations. See [555] for further discussion of these and related finite-difference schemes, and for application examples in the virtual analog area (digitization of musically useful analog circuits). Specific digitization problems addressed in [555] include electric-guitar distortion devices [553,556], the classic ``tone stack'' [552] (an often-used bass, midrange, and treble control circuit in guitar amplifiers), the Moog VCF, and other electronic components of amplifiers and effects. Also discussed in [555] is the ``K Method'' for nonlinear system digitization, with comparison to nonlinear wave digital filters (see Appendix F for an introduction to linear wave digital filters).

The topic of real-time finite difference schemes for virtual analog systems remains a lively research topic [554,338,293,84,264,364,397].

For Partial Differential Equations (PDEs), in which spatial derivatives are mixed with time derivatives, the finite-difference approach remains fundamental. An introduction and summary for the LTI case appear in Appendix D. See [53] for a detailed development of finite difference schemes for solving PDEs, both linear and nonlinear, applied to digital sound synthesis. Physical systems considered in [53] include bars, stiff strings, bow coupling, hammers and mallets, coupled strings and bars, nonlinear strings and plates, and acoustic tubes (voice, wind instruments). In addition to numerous finite-difference schemes, there are chapters on finite-element methods and spectral methods.

Summary of Lumped Modeling

In this chapter, we looked at the fundamentals of lumped modeling elements such as masses, springs, and dashpots. The important concept of driving-point impedance was defined and discussed, and electrical equivalent circuits were developed, along with associated elementary (circuit) network theory. Finally, we looked at basic ways of digitizing lumped elements and more complex ODEs and PDEs, including a first glance at some nonlinear methods.

Practical examples of lumped models begin in §9.3.1. In particular, piano-like models require a ``hammer'' to strike the string, and §9.3.1 explicates the simplest case of an ideal point-mass striking an ideal vibrating string. In that model, when the mass is in contact with the string, it creates a scattering junction on the string having reflection and transmission coefficients that are first-order filters. These filters are then digitized via the bilinear transform. The ideal string itself is of course modeled as a digital waveguide. A detailed development of wave scattering at impedance-discontinuities is presented for digital waveguide models in §C.8, and for wave digital filters in Appendix F.
