The term ``lumped'' comes from electrical engineering, and refers to lumped-parameter analysis, as opposed to distributed-parameter analysis. Examples of ``distributed'' systems in musical acoustics include ideal strings, acoustic tubes, and anything that propagates waves. In general, a lumped-parameter approach is appropriate when the physical object has dimensions that are small relative to the wavelength of vibration. Examples from musical acoustics include brass-players' lips (modeled using one or two masses attached to springs--see §9.7), and the piano hammer (modeled using a mass and nonlinear spring, as discussed in §9.4). In contrast to these lumped-modeling examples, the vibrating string is most efficiently modeled as a sampled distributed-parameter system, as discussed in Chapter 6, although lumped models of strings (using, e.g., a mass-spring chain) work perfectly well, albeit at a higher computational expense for a given model quality [69,145]. In the realm of electromagnetism, distributed-parameter systems include electric transmission lines and optical waveguides, while the typical lumped-parameter systems are ordinary RLC circuits (connecting resistors, capacitors, and inductors). Again, whenever the oscillation wavelength is large relative to the geometry of the physical component, a lumped approximation may be considered. As a result, there is normally a high-frequency limit on the validity of a lumped-parameter model. For the same reason, there is normally an upper limit on physical size as well.
We begin with the fundamental concept of impedance, and discuss the elementary lumped impedances associated with springs, masses, and dashpots. These physical objects are analogous to capacitors, inductors, and resistors in lumped-parameter electrical circuits. Next, we discuss general interconnections of such elements, characterized at a single input/output location by means of one-port network theory. In particular, we will see that all passive networks present a positive real impedance at any port (input/output point). A network diagram may be replaced by an impedance diagram, which may then be translated into its equivalent circuit (replacing springs by capacitors, masses by inductors, and dashpots by resistors).
Impedance is defined for mechanical systems as force divided by velocity, while the inverse (velocity/force) is called an admittance. For dynamic systems, the impedance of a ``driving point'' is defined for each frequency $\omega$, so that the ``force'' in the definition of impedance is best thought of as the peak amplitude of a sinusoidal applied force, and similarly for the velocity. Thus, if $F(\omega)$ denotes the Fourier transform of the applied force at a driving point, and $V(\omega)$ is the Fourier transform of the resulting velocity of the driving point, then the driving-point impedance is given by $R(\omega) = F(\omega)/V(\omega)$.
In acoustics [317,318], force takes the form of pressure (e.g., in physical units of newtons per meter squared), and velocity may be either particle velocity in open air (meters per second) or volume velocity in acoustic tubes (meters cubed per second) (see §B.7.1 for definitions). The wave impedance (also called the characteristic impedance) in open air is the ratio of pressure to particle velocity in a sound wave traveling through air, and it is given by $R = \rho c = \sqrt{\gamma p \rho}$, where $\rho$ is the density (mass per unit volume) of air, $c = \sqrt{\gamma p/\rho}$ is the speed of sound propagation, $p$ is ambient pressure, and $\gamma$ is the ratio of the specific heat of air at constant pressure to that at constant volume. In a vibrating string, the wave impedance is given by $R = \sqrt{K\epsilon}$, where $\epsilon$ is string density (mass per unit length) and $K$ is the tension of the string (stretching force), as discussed further in §C.1 and §B.5.2.
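As a quick numerical check of the open-air wave impedance $\rho c$, the following Python sketch uses commonly tabulated values for air at 20 °C (the exact constants are illustrative assumptions, not from the text):

```python
rho = 1.2041                   # density of air at 20 degrees C (kg/m^3), assumed value
c = 343.2                      # speed of sound at 20 degrees C (m/s), assumed value

R = rho * c                    # wave impedance of open air (rayls, i.e. N*s/m^3)
# R comes out near 413 rayls, the commonly quoted room-temperature figure.
```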
In circuit theory, force takes the form of electric potential in volts, and velocity manifests as electric current in amperes (coulombs per second). In an electric transmission line, the characteristic impedance is given by $R = \sqrt{L/C}$, where $L$ and $C$ are the inductance and capacitance, respectively, per unit length along the transmission line. In free space, the wave impedance for light is $\sqrt{\mu_0/\epsilon_0}$, where $\mu_0$ and $\epsilon_0$ are the permeability and permittivity, respectively, of free space. One might be led from this to believe that there must exist a medium, or `ether', which sustains wave propagation in free space; however, this is one instance in which ``obvious'' predictions from theory turn out to be wrong.
The elementary impedance element in mechanics is the dashpot, which may be approximated mechanically by a plunger in a cylinder of air or liquid, analogous to a shock absorber for a car. A constant impedance means that the velocity produced is always linearly proportional to the force applied, or $f(t) = \mu\, v(t)$, where $\mu$ is the dashpot impedance, $f(t)$ is the applied force at time $t$, and $v(t)$ is the velocity. A diagram is shown in Fig. 7.1.
In circuit theory, the element analogous to the dashpot is the resistor $R$, characterized by $v(t) = R\, i(t)$, where $v$ is voltage and $i$ is current. In an analog equivalent circuit, a dashpot can be represented using a resistor $R = \mu$.
Over a specific velocity range, friction force can also be characterized by the relation $f(t) = \mu\, v(t)$. However, friction is very complicated in general, and as the velocity goes to zero, the coefficient of friction may become much larger. The simple model often presented is to use a static coefficient of friction when starting at rest ($v(t) = 0$) and a dynamic coefficient of friction when in motion ($v(t) \neq 0$). However, these models are too simplified for many practical situations in musical acoustics, e.g., the frictional force between the bow and string of a violin [308,549], or the internal friction losses in a vibrating string.
The concept of impedance extends also to masses and springs. Figure 7.2 illustrates an ideal mass of $m$ kilograms sliding on a frictionless surface. From Newton's second law of motion, we know force equals mass times acceleration, or $f(t) = m\, a(t) = m\, \dot{v}(t)$.
The impulse response of a mass, for a force input and velocity output, is defined as the inverse Laplace transform of the transfer function: $h(t) = \mathcal{L}^{-1}\left\{\frac{1}{ms}\right\} = \frac{1}{m}\, u(t)$, where $u(t)$ denotes the unit step function.
Once the input and output signals are defined, a transfer function is defined, and therefore a frequency response is defined. The frequency response is given by the transfer function evaluated on the $j\omega$ axis in the $s$ plane, i.e., for $s = j\omega$. For the ideal mass, the force-to-velocity frequency response is $1/(j\omega m)$. The amplitude response rolls off $6$ dB per octave, and the phase shift is $-\pi/2$ radians at all frequencies.
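These statements can be verified numerically. The following Python sketch (with an illustrative mass value, not code from the text) evaluates the force-to-velocity frequency response $1/(j\omega m)$ at two frequencies one octave apart:

```python
import numpy as np

m = 0.1                        # illustrative mass (kg)
w = 2 * np.pi * np.array([100.0, 200.0])   # two frequencies one octave apart

H = 1.0 / (1j * w * m)         # force-to-velocity frequency response of a mass

gain_change_db = 20 * np.log10(abs(H[1]) / abs(H[0]))
# gain_change_db is about -6.02 dB (one octave up => half the gain),
# and the phase of H is -pi/2 at every frequency.
```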
Figure 7.4 depicts the ideal spring.
The frequency response of the ideal spring, given the applied force as input and resulting velocity as output, is $j\omega/k$, where $k$ is the spring constant. The amplitude response grows $6$ dB per octave, and the phase shift is $+\pi/2$ radians for all $\omega$. Clearly, there is no such thing as an ideal spring which can produce arbitrarily large gain as frequency goes to infinity; there is always some mass in a real spring.
We call $v(t)$ the compression velocity of the spring. In more complicated configurations, the compression velocity is defined as the difference between the velocities of the two spring endpoints, with positive velocity corresponding to spring compression.
In circuit theory, the element analogous to the spring is the capacitor, characterized by $i(t) = C\, \dot{v}(t)$, or $v(t) = \frac{1}{C}\int_0^t i(\tau)\, d\tau + v(0)$. In an equivalent analog circuit, we can use the value $C = 1/k$. The inverse $1/k$ of the spring stiffness is sometimes called the compliance of the spring.
Don't forget that the definition of impedance requires zero initial conditions for elements with ``memory'' (masses and springs). This means we can only use impedance descriptions for steady-state analysis. For a complete analysis of a particular system, including the transient response, we must go back to full-scale Laplace-transform analysis.
The basic idea of a one-port network is shown in Fig. 7.5. The one-port is a ``black box'' with a single pair of input/output terminals, referred to as a port. A force is applied at the terminals and a velocity ``flows'' in the direction shown. The admittance ``seen'' at the port is called the driving point admittance. Network theory is normally described in terms of circuit theory elements, in which case a voltage is applied at the terminals and a current flows as shown. However, in our context, mechanical elements are preferable.
Series Combination of One-Ports
Figure 7.6 shows the series combination of two one-ports.
In a physical situation, if two elements are connected in such a way that they share a common velocity, then they are in series. An example is a mass connected to one end of a spring, where the other end is attached to a rigid support, and the force is applied to the mass, as shown in Fig. 7.7.
Parallel Combination of One-Ports
Figure 7.10 shows the parallel combination of two one-ports.
Admittances add in parallel, so the combined admittance is $\Gamma = \Gamma_1 + \Gamma_2$, and the impedance of the parallel combination is $R = \frac{1}{\Gamma_1 + \Gamma_2} = \frac{1}{1/R_1 + 1/R_2} = \frac{R_1 R_2}{R_1 + R_2}.$
When two physical elements are driven by a common force (yet have independent velocities, as we'll soon see is quite possible), they are formally in parallel. An example is a mass connected to a spring in which the driving force is applied to one end of the spring, and the mass is attached to the other end, as shown in Fig.7.11. The compression force on the spring is equal at all times to the rightward force on the mass. However, the spring compression velocity $v_k(t)$ does not always equal the mass velocity $v_m(t)$. We do have that the sum of the mass velocity and spring compression velocity gives the velocity of the driving point, i.e., $v(t) = v_m(t) + v_k(t)$. Thus, in a parallel connection, forces are equal and velocities sum.
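The series and parallel combination rules can be checked numerically. The following Python sketch (illustrative $m$, $k$, and frequency values, not from the text) forms the mass and spring impedances and combines them both ways:

```python
import numpy as np

m, k = 0.01, 4000.0            # illustrative mass (kg) and spring constant (N/m)
w = 2 * np.pi * 440.0          # analysis frequency (rad/s)

Z_mass = 1j * w * m            # impedance of the ideal mass: j*w*m
Z_spring = k / (1j * w)        # impedance of the ideal spring: k/(j*w)

# Series (common velocity): impedances add.
Z_series = Z_mass + Z_spring
# Parallel (common force): admittances add, so impedances combine reciprocally.
Z_parallel = 1.0 / (1.0 / Z_mass + 1.0 / Z_spring)
# Both combinations are purely reactive (zero real part), as expected for
# lossless elements.
```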
Mechanical Impedance Analysis
As a simple application, let's find the motion of the mass $m$, after time zero, given that the input force is an impulse at time 0: $f(t) = \delta(t)$.
Thus, the impulse response of the mass oscillates sinusoidally with radian frequency $\omega_0 = \sqrt{k/m}$, and amplitude $1/m$. The velocity starts out at its maximum, $v(0) = 1/m$, at time $t = 0$, which makes physical sense. Also, the momentum transferred to the mass at time 0 is $m\, v(0) = 1$; this is also expected physically because the time-integral of the applied force is 1 (the area under any impulse is 1).
It is well known that the impedance of every passive one-port is positive real (see §C.11.2). The reciprocal of a positive real function is positive real, so every passive impedance corresponds also to a passive admittance.
A complex-valued function $\Gamma(s)$ of a complex variable $s$ is said to be positive real (PR) if
- $\Gamma(s)$ is real whenever $s$ is real
- $\mbox{re}\left\{\Gamma(s)\right\} \geq 0$ whenever $\mbox{re}\left\{s\right\} \geq 0$.
A particularly important property of positive real functions is that the phase is bounded between plus and minus $90$ degrees, i.e., $\left|\angle\Gamma(j\omega)\right| \leq \pi/2.$
Referring to Fig.7.14, consider the graphical method for computing the phase response of a reactance from the pole-zero diagram. Each zero on the positive $j\omega$ axis contributes a net $90$ degrees of phase at frequencies above the zero. As frequency crosses the zero going up, there is a switch from $-90$ to $+90$ degrees. For each pole, the phase contribution switches from $+90$ to $-90$ degrees as it is passed going up in frequency. In order to keep the phase in $[-90, 90]$ degrees, it is clear that the poles and zeros must strictly alternate. Moreover, all poles and zeros must be simple, since a multiple pole or zero would swing the phase by more than $180$ degrees, and the reactance could not be positive real.
The positive real property is fundamental to passive immittances and comes up often in the study of measured resonant systems. A practical modeling example (passive digital modeling of a guitar bridge) is discussed in §9.2.1.
Digitization of Lumped Models
Since lumped models are described by differential equations, they are digitized (brought into the digital-signal domain) by converting them to corresponding finite-difference equations (or simply ``difference equations''). General aspects of finite difference schemes are discussed in Appendix D. This chapter introduces a couple of elementary methods in common use: the finite difference approximation and the bilinear transform. A related approach is the Wave Digital Filter (WDF) method introduced in Appendix F. Section 9.3.1 discusses a bilinearly transformed mass colliding with a digital waveguide string (an idealized struck-string example).
Finite Difference Approximation
A finite difference approximation (FDA) approximates derivatives with finite differences, i.e., $\dot{x}(t) \approx \frac{x(t) - x(t-T)}{T},$
for sufficiently small $T$.
Equation (7.2) is also known as the backward difference approximation of differentiation.
Viewing Eq.(7.2) in the frequency domain, the ideal differentiator transfer-function is $H(s) = s$, which can be viewed as the Laplace transform of the operator $d/dt$ (left-hand side of Eq.(7.2)). Moving to the right-hand side, the z transform of the first-order difference operator is $(1 - z^{-1})/T$. Thus, in the frequency domain, the finite-difference approximation may be performed by making the substitution $s \leftarrow \frac{1 - z^{-1}}{T}$
in any continuous-time transfer function (Laplace transform of an integro-differential operator) to obtain a discrete-time transfer function (z transform of a finite-difference operator).
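As a small worked example of this substitution (a Python sketch under assumed values, not from the text), applying $s \leftarrow (1 - z^{-1})/T$ to the mass admittance $1/(ms)$ yields the digital integrator $v(n) = v(n-1) + (T/m)\,f(n)$:

```python
import numpy as np

m = 0.5                        # illustrative mass (kg)
T = 1.0 / 1000.0               # illustrative sampling interval (s)

# FDA: substituting s <- (1 - 1/z)/T into the mass admittance V/F = 1/(m*s)
# gives V(z)/F(z) = T/(m*(1 - 1/z)), i.e. v(n) = v(n-1) + (T/m)*f(n).
def mass_velocity(force):
    v, out = 0.0, []
    for f in force:
        v = v + (T / m) * f
        out.append(v)
    return np.array(out)

v = mass_velocity(np.ones(1000))   # constant 1 N force applied for 1 second
# Newton's law predicts v(1 s) = f*t/m = 2 m/s, matched by the recursion.
```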
The inverse of substitution Eq.(7.3) is $z^{-1} = 1 - sT$, or $z = \frac{1}{1 - sT}.$
It is convenient to think of the FDA in terms of time-domain difference operators using a delay operator notation. The delay operator $d$ is defined by $(d^k x)(n) \triangleq x(n - k).$
The obvious definition for the second derivative is $\ddot{x}(n) \approx \frac{x(n) - 2x(n-1) + x(n-2)}{T^2}.$
However, a better definition is the centered finite difference $\ddot{x}(n) \approx \frac{x(n+1) - 2x(n) + x(n-1)}{T^2},$
where $d^{-1}$ denotes a unit-sample advance. This definition is preferable as long as one sample of look-ahead is available, since it avoids an operator delay of one sample. Equation (7.5) is a zero-phase filter, meaning it has no delay at any frequency, while (7.4) is a linear-phase filter having a delay of one sample at all frequencies.
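The two second-difference definitions can be compared numerically. In this Python sketch (illustrative sampling interval and test frequency, not from the text), both are applied to a sampled sinusoid whose exact second derivative is known; the centered form is markedly more accurate:

```python
import numpy as np

T = 1e-3                       # illustrative sampling interval (s)
w0 = 2 * np.pi * 50.0          # test frequency (rad/s)
x = np.sin(w0 * np.arange(1001) * T)
n = np.arange(2, 1000)
t = n * T

# Backward second difference (one sample of delay):
d2_back = (x[n] - 2 * x[n - 1] + x[n - 2]) / T**2
# Centered second difference (zero phase, one sample of look-ahead):
d2_cent = (x[n + 1] - 2 * x[n] + x[n - 1]) / T**2

true = -w0**2 * np.sin(w0 * t)         # exact second derivative of sin(w0*t)
err_back = np.max(np.abs(d2_back - true))
err_cent = np.max(np.abs(d2_cent - true))
```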
The bilinear transform is defined by the substitution $s \leftarrow c\, \frac{1 - z^{-1}}{1 + z^{-1}},$
where $c$ is some positive constant [83,326]. That is, given a continuous-time transfer function $H_a(s)$, we apply the bilinear transform by defining $H_d(z) = H_a\!\left(c\, \frac{1 - z^{-1}}{1 + z^{-1}}\right).$
It can be seen that analog dc ($s = 0$) maps to digital dc ($z = 1$) and the highest analog frequency ($s = \infty$) maps to the highest digital frequency ($z = -1$). It is easy to show that the entire $j\omega$ axis in the $s$ plane (where $s = j\omega$) is mapped exactly once around the unit circle in the $z$ plane (rather than summing around it infinitely many times, or ``aliasing'' as it does in ordinary sampling). With $c$ real and positive, the left-half $s$ plane maps to the interior of the unit circle, and the right-half $s$ plane maps outside the unit circle. This means stability is preserved when mapping a continuous-time transfer function to discrete time.
Another valuable property of the bilinear transform is that order is preserved. That is, an $N$th-order $s$-plane transfer function carries over to an $N$th-order $z$-plane transfer function. (Order in both cases equals the maximum of the degrees of the numerator and denominator polynomials.)
The constant $c$ provides one remaining degree of freedom which can be used to map any particular finite frequency from the $j\omega$ axis in the $s$ plane to a particular desired location on the unit circle in the $z$ plane. All other frequencies will be warped. In particular, approaching half the sampling rate, the frequency axis compresses more and more. Note that at most one resonant frequency can be preserved under the bilinear transformation of a mass-spring-dashpot system. On the other hand, filters having a single transition frequency, such as lowpass or highpass filters, map beautifully under the bilinear transform; one simply uses $c$ to map the cut-off frequency where it belongs, and the response looks great. In particular, ``equal ripple'' is preserved for optimal filters of the elliptic and Chebyshev types because the values taken on by the frequency response are identical in both cases; only the frequency axis is warped.
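The cutoff-preserving choice of the constant can be sketched in Python (illustrative sampling rate and cutoff; the first-order lowpass prototype is an assumption for the example). Choosing $c = \omega_c/\tan(\omega_c T/2)$, the standard ``pre-warping'' choice, pins the analog cutoff to the same digital frequency:

```python
import numpy as np

fs = 48000.0                   # illustrative sampling rate (Hz)
T = 1.0 / fs
wc = 2 * np.pi * 1000.0        # desired cutoff: 1 kHz in rad/s

# Pre-warp: choose c so the analog cutoff wc lands exactly at digital
# frequency wc after the bilinear substitution.
c = wc / np.tan(wc * T / 2)

def H_analog(s):
    return wc / (s + wc)       # assumed first-order lowpass prototype (-3 dB at wc)

def H_digital(w):
    z = np.exp(1j * w * T)
    s = c * (1 - 1 / z) / (1 + 1 / z)   # bilinear-transform substitution
    return H_analog(s)

mag_at_cutoff = abs(H_digital(wc))      # exactly 1/sqrt(2), i.e. -3 dB
```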
Recall that the Finite Difference Approximation (FDA) defines the elementary differentiator by $y(n) = x(n) - x(n-1)$ (ignoring the scale factor $1/T$ for now), thus approximating the ideal transfer function $H(s) = s$ by $H_d(z) = 1 - z^{-1}$. The bilinear transform calls instead for the transfer function $H_d(z) = \frac{1 - z^{-1}}{1 + z^{-1}}$ (again dropping the scale factor), which introduces a pole at $z = -1$ and gives us the recursion $y(n) = -y(n-1) + x(n) - x(n-1)$. Note that this new pole is right on the unit circle and is therefore undamped. Any signal energy at half the sampling rate will circulate forever in the recursion, and due to round-off error, it will tend to grow. This is therefore a potentially problematic revision of the differentiator. To get something more practical, we need to specify that the filter frequency response approximate $H(j\omega) = j\omega$ over a finite range of frequencies $[0, \omega_c]$, where $\omega_c$ is less than half the sampling rate, above which we allow the response to ``roll off'' to zero. This is how we pose the differentiator problem in terms of general purpose filter design (see §8.6).
To understand the properties of the finite difference approximation in the frequency domain, we may look at the properties of its $s$-plane to $z$-plane mapping $s = \frac{1 - z^{-1}}{T}.$
Setting $T$ to 1 for simplicity and solving the FDA mapping for $z$ gives $z = \frac{1}{1 - s}.$
Under the FDA, analog and digital frequency axes coincide well enough at very low frequencies (high sampling rates), but at high frequencies relative to the sampling rate, artificial damping is introduced as the image of the $j\omega$ axis diverges away from the unit circle.
While the bilinear transform ``warps'' the frequency axis, we can say the FDA ``doubly warps'' the frequency axis: It has a progressive, compressive warping in the direction of increasing frequency, like the bilinear transform, but unlike the bilinear transform, it also warps normal to the frequency axis.
Consider a point traversing the upper half of the unit circle in the $z$ plane, starting at $z = 1$ and ending at $z = -1$. At dc, the FDA is perfect, but as we proceed out along the unit circle, we diverge from the $j\omega$-axis image and carve an arc somewhere out in the image of the right-half $s$ plane. This has the effect of introducing an artificial damping.
Consider, for example, an undamped mass-spring system. There will be a complex-conjugate pair of poles on the $j\omega$ axis in the $s$ plane. After the FDA, those poles will be inside the unit circle, and therefore damped in the digital counterpart. The higher the resonance frequency, the larger the damping. It is even possible for unstable $s$-plane poles to be mapped to stable $z$-plane poles.
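The frequency-dependent artificial damping can be seen directly from the FDA mapping $z = 1/(1 - sT)$. This Python sketch (illustrative sampling rate and resonance frequencies, not from the text) maps undamped analog poles $s = j\omega_0$ and inspects their digital pole radii:

```python
import numpy as np

T = 1.0 / 44100.0                  # illustrative sampling interval (s)
freqs = [100.0, 1000.0, 10000.0]   # undamped resonance frequencies (Hz)

# Each undamped analog pole s = j*w0 maps under z = 1/(1 - s*T)
# to a point strictly inside the unit circle.
mags = [abs(1.0 / (1.0 - 1j * 2 * np.pi * f0 * T)) for f0 in freqs]
# The pole radius shrinks as frequency rises: more artificial damping.
```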
In summary, both the bilinear transform and the FDA preserve order, stability, and positive realness. They are both free of aliasing, high frequencies are compressively warped, and both become ideal at dc, or as $T$ approaches $0$. However, at frequencies significantly above zero relative to the sampling rate, only the FDA introduces artificial damping. The bilinear transform maps the continuous-time frequency axis in the $s$ plane (the $j\omega$ axis) precisely to the discrete-time frequency axis in the $z$ plane (the unit circle).
This difference equation is diagrammed in Fig. 7.16. We recognize this recursive digital filter as the direct form I structure. The direct-form II structure is obtained by commuting the feedforward and feedback portions and noting that the two delay elements contain the same value and can therefore be shared . The two other major filter-section forms are obtained by transposing the two direct forms by exchanging the input and output, and reversing all arrows. (This is a special case of Mason's Gain Formula which works for the single-input, single-output case.) When a filter structure is transposed, its summers become branching nodes and vice versa. Further discussion of the four basic filter section forms can be found in .
While the digital mass simulator has the desirable properties of the bilinear transform, it is also not perfect from a practical point of view: (1) There is a pole at half the sampling rate ($z = -1$). (2) There is a delay-free path from input to output.
The first problem can easily be circumvented by introducing a loss factor $g$, moving the pole from the unit circle to a point of radius $g$ inside it, where $0 < g < 1$ and $g \approx 1$. This is sometimes called the ``leaky integrator''.
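The effect of such a loss factor can be sketched in Python (illustrative value of $g$; the recursion is the bilinear differentiator discussed earlier with its pole damped, used here purely as an example). With the pole on the unit circle, energy at half the sampling rate grows without bound; the lossy version settles to a bounded level:

```python
import numpy as np

def bilinear_diff(x, g):
    """y(n) = -g*y(n-1) + x(n) - x(n-1): differentiator with pole at z = -g."""
    y, ym1, xm1 = [], 0.0, 0.0
    for xn in x:
        yn = -g * ym1 + xn - xm1
        y.append(yn)
        ym1, xm1 = yn, xn
    return np.array(y)

x = np.array([(-1.0) ** n for n in range(4000)])  # energy at half the sampling rate

lossless = bilinear_diff(x, 1.0)   # pole on the unit circle: output grows linearly
lossy = bilinear_diff(x, 0.99)     # pole at radius 0.99: output stays bounded
```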
The second problem arises when making loops of elements (e.g., a mass-spring chain which forms a loop). Since the individual elements have no delay from input to output, a loop of elements is not computable using standard signal processing methods. The solution proposed by Alfred Fettweis is known as ``wave digital filters,'' a topic taken up in §F.1.
Model discretization by the FDA (§7.3.1) and bilinear transform (§7.3.2) methods is order preserving. As a result, these methods suffer from significant approximation error, especially at high frequencies relative to half the sampling rate. By allowing a larger order in the digital model, we may obtain arbitrarily accurate transfer-function models of LTI subsystems, as discussed in Chapter 8. Of course, in higher-order approximations, the state variables of the simulation no longer have a direct physical interpretation, and this can have implications, particularly when trying to extend to the nonlinear case. The benefits of a physical interpretation should not be given up lightly. For example, one may consider oversampling in place of going to higher-order element approximations.
More General Finite-Difference Methods
The FDA and bilinear transform of the previous sections can be viewed as first-order conformal maps from the analog $s$ plane to the digital $z$ plane. These maps are one-to-one and therefore non-aliasing. The FDA performs well at low frequencies relative to the sampling rate, but it introduces artificial damping at high frequencies. The bilinear transform preserves the frequency axis exactly, but over a warped frequency scale. Being first order, both maps preserve the number of poles and zeros in the model.
We may only think in terms of mapping the $s$ plane to the $z$ plane for linear, time-invariant systems. This is because Laplace transform analysis is not defined for nonlinear and/or time-varying differential equations (there is no $s$ plane). Therefore, such systems are instead digitized by some form of numerical integration to produce solutions that are ideally sampled versions of the continuous-time solutions. It is often necessary to work at sampling rates much higher than the desired audio sampling rate, due to the bandwidth-expanding effects of nonlinear elements in the continuous-time system.
A tutorial review of numerical solutions of Ordinary Differential Equations (ODEs), including nonlinear systems, with examples in the realm of audio effects (such as a diode clipper), is given in . Finite difference schemes specifically designed for nonlinear discrete-time simulation, such as the energy-preserving ``Kirchhoff-Carrier nonlinear string model'' and ``von Karman nonlinear plate model'', are discussed in .
The remaining sections here summarize a few of the more elementary techniques discussed in .
In state-space form, the ODE may be written $\dot{x}(t) = f(t, x, u)$, where $t$ denotes time in seconds, $x(t)$ denotes a vector of state variables at time $t$, and $u(t)$ denotes a vector of input signals at time $t$.
In this case, standard methods for converting a filter from continuous to discrete time may be used, such as the FDA (§7.3.1) and bilinear transform (§7.3.2).
The forward Euler method computes $\hat{x}(t_{n+1}) = \hat{x}(t_n) + T\, f(t_n, \hat{x}(t_n), u(t_n)),$ where $\hat{x}(t_n)$ denotes the approximation to $x(t_n)$ computed by the forward Euler method. Note that the ``driving function'' $f$ is evaluated at time $t_n$, not $t_{n+1}$. As a result, given $\hat{x}(t_0)$ and the input vector $u(t_n)$ for all $n \geq 0$, Eq.(7.10) can be iterated forward in time to compute $\hat{x}(t_n)$ for all $n > 0$. Since $f$ is an arbitrary function, we have a solver that is applicable to nonlinear, time-varying ODEs of the form Eq.(7.8).
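Here is a minimal Python sketch of the forward Euler iteration on an assumed nonlinear test ODE $\dot{x} = -x^3$, whose closed-form solution $x(t) = x(0)/\sqrt{1 + 2x(0)^2 t}$ allows the error to be checked:

```python
import numpy as np

def f(t, x):
    return -x**3               # assumed nonlinear test ODE: x' = -x^3

T = 1e-4                       # step size (well oversampled for accuracy)
x, t = 1.0, 0.0                # initial condition x(0) = 1
while t < 1.0 - T / 2:
    x = x + T * f(t, x)        # forward Euler: explicit, uses time t only
    t += T

exact = 1.0 / np.sqrt(3.0)     # closed form: x(1) = 1/sqrt(1 + 2*1)
```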
Because each iteration of the forward Euler method depends only on past quantities, it is termed an explicit method. In the LTI case, an explicit method corresponds to a causal digital filter . Methods that depend on current and/or future solution samples (i.e., on $\hat{x}(t_m)$ for $m \geq n+1$ when computing $\hat{x}(t_{n+1})$) are called implicit methods. When a nonlinear numerical-integration method is implicit, each step forward in time typically uses some number of iterations of Newton's Method (see §7.4.5 below).
An example of an implicit method is the backward Euler method: $\hat{x}(t_{n+1}) = \hat{x}(t_n) + T\, f(t_{n+1}, \hat{x}(t_{n+1}), u(t_{n+1})).$
Because the derivative is now evaluated at time $t_{n+1}$ instead of $t_n$, the backward Euler method is implicit. Notice, however, that if time were reversed, it would become explicit; in other words, backward Euler is implicit in forward time and explicit in reverse time.
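For a linear test ODE $\dot{x} = a x$, the implicit backward Euler update can be solved in closed form at each step, which makes a compact Python sketch possible (illustrative $a$ and step size; a nonlinear $f$ would require Newton iterations instead):

```python
import numpy as np

a = -5.0                       # assumed linear test ODE: x' = a*x
T = 1e-3
x = 1.0                        # x(0) = 1
for _ in range(1000):          # simulate out to t = 1
    # Backward Euler: x(n+1) = x(n) + T*a*x(n+1), solved here in closed form.
    x = x / (1 - a * T)

exact = np.exp(a)              # e^(-5), the true value of x(1)
```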
The trapezoidal rule is defined by $\hat{x}(t_{n+1}) = \hat{x}(t_n) + \frac{T}{2}\left[f(t_{n+1}, \hat{x}(t_{n+1}), u(t_{n+1})) + f(t_n, \hat{x}(t_n), u(t_n))\right].$
Thus, the trapezoidal rule is driven by the average of the derivative estimates at times $t_{n+1}$ and $t_n$. The method is implicit in either forward or reverse time.
The trapezoidal rule gets its name from the fact that it approximates an integral by summing the areas of trapezoids. This can be seen by writing Eq.(7.12) as $\hat{x}(t_{n+1}) - \hat{x}(t_n) = T\, \frac{f(t_{n+1}, \hat{x}(t_{n+1}), u(t_{n+1})) + f(t_n, \hat{x}(t_n), u(t_n))}{2},$ which is the area of a trapezoid with base $T$ and side heights given by the two derivative estimates.
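For the same linear test ODE $\dot{x} = a x$ used above, the implicit trapezoidal update also has a closed-form per-step solution, and its second-order accuracy shows in the small error. A Python sketch with illustrative values:

```python
import numpy as np

a = -5.0                       # assumed linear test ODE: x' = a*x
T = 1e-3
x = 1.0                        # x(0) = 1
for _ in range(1000):          # simulate out to t = 1
    # Trapezoidal rule: x(n+1) = x(n) + (T/2)*(a*x(n+1) + a*x(n)),
    # solved in closed form; this is the bilinear transform of 1/(s - a).
    x = x * (1 + a * T / 2) / (1 - a * T / 2)

exact = np.exp(a)              # e^(-5), the true value of x(1)
```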
Newton's Method of Nonlinear Minimization
Newton's method [166, p. 143] finds the minimum of a nonlinear (scalar) function of several variables by locally approximating the function by a quadratic surface, and then stepping to the bottom of that ``bowl'', which generally requires a matrix inversion. Newton's method therefore requires the function to be ``close to quadratic'', and its effectiveness is directly tied to the accuracy of that assumption. For smooth functions, Newton's method gives very rapid quadratic convergence in the final stages of iteration. Quadratic convergence implies, for example, that the number of significant digits in the minimizer approximately doubles each iteration.
Newton's method may be derived as follows: Suppose we wish to minimize the real, positive function $J(x)$ with respect to $x$. The vector Taylor expansion of $J$ about the current estimate $\hat{x}_n$ gives $J(x) \approx J(\hat{x}_n) + \nabla J(\hat{x}_n)^T (x - \hat{x}_n) + \frac{1}{2}(x - \hat{x}_n)^T \nabla^2 J(\hat{x}_n)(x - \hat{x}_n).$ Setting the gradient of this quadratic approximation to zero yields the minimizing step.
Applying Eq.(7.13) iteratively, we obtain Newton's method: $\hat{x}_{n+1} = \hat{x}_n - \left[\nabla^2 J(\hat{x}_n)\right]^{-1} \nabla J(\hat{x}_n),$
where $\hat{x}_0$ is given as an initial condition.
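A one-dimensional Python sketch of the iteration (the objective $J(x) = (x-2)^2 + (x-2)^4$ is an assumed test function with minimizer $x = 2$, not from the text):

```python
def Jp(x):
    """Gradient of the assumed test objective J(x) = (x-2)**2 + (x-2)**4."""
    return 2 * (x - 2) + 4 * (x - 2) ** 3

def Jpp(x):
    """Second derivative of J (the 1-D 'Hessian')."""
    return 2 + 12 * (x - 2) ** 2

x = 3.0                        # initial guess
for _ in range(8):
    x = x - Jp(x) / Jpp(x)     # Newton step: Hessian-inverse times gradient
# A handful of iterations drives x very close to the minimizer 2,
# with the error roughly squaring on each late-stage step.
```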
A semi-implicit method for numerical integration is based on an implicit method by using only one iteration of Newton's method [354,555]. This effectively converts the implicit method into a corresponding explicit method. Best results are obtained for highly oversampled systems (i.e., when the sampling rate $f_s = 1/T$ is larger than typical audio sampling rates).
The semi-implicit backward Euler method is defined by 
where $\hat{\ddot{x}}(t_n)$ denotes an estimate of the second time derivative $\ddot{x}(t_n)$.
Semi-Implicit Trapezoidal Rule
The semi-implicit trapezoidal rule method is given by 
We have looked at a number of methods for solving nonlinear ordinary differential equations, including explicit, implicit, and semi-implicit numerical integration methods. Specific methods included the explicit forward Euler (similar to the finite difference approximation of §7.3.1), backward Euler (implicit), trapezoidal rule (implicit, and equivalent to the bilinear transform of §7.3.2 in the LTI case), and semi-implicit variants of the backward Euler and trapezoidal methods.
As demonstrated and discussed further in , implicit methods are generally more accurate than explicit methods for nonlinear systems, with semi-implicit methods (§7.4.6) typically falling somewhere in between. Semi-implicit methods therefore provide a source of improved explicit methods. See  and the references therein for a discussion of accuracy and stability of such schemes, as well as applied examples.
Other well known numerical integration methods for ODEs include second-order backward difference formulas (commonly used in circuit simulation ), the fourth-order Runge-Kutta method , and their various explicit, implicit, and semi-implicit variations. See  for further discussion of these and related finite-difference schemes, and for application examples in the virtual analog area (digitization of musically useful analog circuits). Specific digitization problems addressed in  include electric-guitar distortion devices [553,556], the classic ``tone stack''  (an often-used bass, midrange, and treble control circuit in guitar amplifiers), the Moog VCF, and other electronic components of amplifiers and effects. Also discussed in  is the ``K Method'' for nonlinear system digitization, with comparison to nonlinear wave digital filters (see Appendix F for an introduction to linear wave digital filters).
For Partial Differential Equations (PDEs), in which spatial derivatives are mixed with time derivatives, the finite-difference approach remains fundamental. An introduction and summary for the LTI case appear in Appendix D. See  for a detailed development of finite difference schemes for solving PDEs, both linear and nonlinear, applied to digital sound synthesis. Physical systems considered in  include bars, stiff strings, bow coupling, hammers and mallets, coupled strings and bars, nonlinear strings and plates, and acoustic tubes (voice, wind instruments). In addition to numerous finite-difference schemes, there are chapters on finite-element methods and spectral methods.
In this chapter, we looked at the fundamentals of lumped modeling elements such as masses, springs, and dashpots. The important concept of driving-point impedance was defined and discussed, and electrical equivalent circuits were developed, along with associated elementary (circuit) network theory. Finally, we looked at basic ways of digitizing lumped elements and more complex ODEs and PDEs, including a first glance at some nonlinear methods.
Practical examples of lumped models begin in §9.3.1. In particular, piano-like models require a ``hammer'' to strike the string, and §9.3.1 explicates the simplest case of an ideal point-mass striking an ideal vibrating string. In that model, when the mass is in contact with the string, it creates a scattering junction on the string having reflection and transmission coefficients that are first-order filters. These filters are then digitized via the bilinear transform. The ideal string itself is of course modeled as a digital waveguide. A detailed development of wave scattering at impedance-discontinuities is presented for digital waveguide models in §C.8, and for wave digital filters in Appendix F.