
Sampled Traveling Waves

To carry the traveling-wave solution into the ``digital domain,'' it is necessary to sample the traveling-wave amplitudes at intervals of $ T$ seconds, corresponding to a sampling rate $ f_s \isdeftext 1/T$ samples per second. For CD-quality audio, we have $ f_s= 44.1$ kHz. The natural choice of spatial sampling interval $ X$ is the distance sound propagates in one temporal sampling interval $ T$, or $ X\isdeftext cT$ meters. In a lossless traveling-wave simulation, the whole wave moves left or right one spatial sample each time sample; hence, lossless simulation requires only digital delay lines. By lumping losses parsimoniously in a real acoustic model, most of the traveling-wave simulation can in fact be lossless even in a practical application.
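As a concrete illustration (a sketch added for this discussion, not part of the original text), the only signal-processing element required for such a lossless simulation is a digital delay line. The following C fragment implements one as a circular buffer; the names \texttt{DelayLine}, \texttt{delay\_new}, and \texttt{delay\_tick} are illustrative, and error checking is omitted.

\begin{verbatim}
/* Minimal circular-buffer delay line: the only computational element
   needed for lossless traveling-wave simulation. */
#include <stdlib.h>

typedef struct {
    float *buf;   /* sample storage              */
    int    len;   /* delay length M in samples   */
    int    ptr;   /* current read/write position */
} DelayLine;

DelayLine *delay_new(int len)
{
    DelayLine *d = (DelayLine *) malloc(sizeof(DelayLine));
    d->buf = (float *) calloc(len, sizeof(float));
    d->len = len;
    d->ptr = 0;
    return d;
}

/* Write one sample in; return the sample written len ticks earlier. */
float delay_tick(DelayLine *d, float input)
{
    float output = d->buf[d->ptr];
    d->buf[d->ptr] = input;
    d->ptr = (d->ptr + 1) % d->len;
    return output;
}
\end{verbatim}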

Formally, sampling is carried out by the change of variables

\begin{displaymath}
\begin{array}{rclcl}
x &\to& x_m &=& mX, \\
t &\to& t_n &=& nT.
\end{array}
\end{displaymath}

Substituting into the traveling-wave solution of the wave equation gives
\begin{displaymath}
\begin{array}{rcl}
y(t_n,x_m) &=& y_r(t_n - x_m/c) + y_l(t_n + x_m/c) \\
           &=& y_r(nT - mX/c) + y_l(nT + mX/c) \\
           &=& y_r\left[(n-m)T\right] + y_l\left[(n+m)T\right].
\end{array}
\end{displaymath}

Since $ T$ multiplies all arguments, let's suppress it by defining

$\displaystyle y^{+}(n) \isdefs y_r(nT), \qquad\qquad y^{-}(n) \isdefs y_l(nT). \protect$ (C.16)

This new notation also introduces a ``$ +$'' superscript to denote a traveling-wave component propagating to the right, and a ``$ -$'' superscript to denote propagation to the left. This notation is similar to that used for acoustic tubes [297].

Digital Waveguide Model

Figure C.3: Digital simulation of the ideal, lossless waveguide with observation points at $ x=0$ and $ x=3X=3cT$. (The symbol ``$ z^{-1}$'' denotes a one-sample delay.)
\includegraphics[scale=0.9]{eps/fideal}

In this section, we interpret the sampled d'Alembert traveling-wave solution of the ideal wave equation as a digital filtering framework. This is an example of what are generally known as digital waveguide models [430,431,433,437,442].

The term $ y_r\left[(n-m)T\right]\isdef y^{+}(n-m)$ in Eq.$ \,$(C.16) can be thought of as the output of an $ m$-sample delay line whose input is $ y^{+}(n)$. In general, subtracting a positive number $ m$ from a time argument $ n$ corresponds to delaying the waveform by $ m$ samples. Since $ y^{+}$ is the right-going component, we draw its delay line with input $ y^{+}(n)$ on the left and its output $ y^{+}(n-m)$ on the right. This can be seen as the upper ``rail'' in Fig.C.3.

Similarly, the term $ y_l\left[(n+m)T\right]\isdeftext y^{-}(n+m)$ can be thought of as the input to an $ m$-sample delay line whose output is $ y^{-}(n)$. Adding $ m$ to the time argument $ n$ produces an $ m$-sample waveform advance. Since $ y^{-}$ is the left-going component, it makes sense to draw the delay line with its input $ y^{-}(n+m)$ on the right and its output $ y^{-}(n)$ on the left. This can be seen as the lower ``rail'' in Fig.C.3.

Note that the position along the string, $ x_m = mX= m cT$ meters, is laid out from left to right in the diagram, giving a physical interpretation to the horizontal direction in the diagram. Finally, the left- and right-going traveling waves must be summed to produce a physical output according to the formula

$\displaystyle y(t_n,x_m) = y^{+}(n-m) + y^{-}(n+m). \protect$ (C.17)

We may compute the physical string displacement at any spatial sampling point $ x_m$ by simply adding the upper and lower rails together at position $ m$ along the delay-line pair. In Fig.C.3, ``transverse displacement outputs'' have been arbitrarily placed at $ x=0$ and $ x=3X$. The diagram is similar to that of well-known ladder and lattice digital filter structures (§C.9.4, [297]), except for the delays along the upper rail, the absence of scattering junctions, and the direct physical interpretation. (A scattering junction implements partial reflection and partial transmission in the waveguide.) We could proceed to ladder and lattice filters by (1) introducing a perfectly reflecting (rigid or free) termination at the far right, and (2) commuting the delays rightward from the upper rail down to the lower rail [432,434]. The absence of scattering junctions is due to the fact that the string has a uniform wave impedance. In acoustic tube simulations, such as for voice [87,297] or wind instruments [195], lossless scattering junctions are used at changes in cross-sectional tube area, and lossy scattering junctions are used to implement tone holes. In waveguide bowed-string synthesis (discussed in a later section), the bow itself creates an active, time-varying, and nonlinear scattering junction on the string at the bowing point.
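To make the two-rail structure of Fig.C.3 concrete, here is a C sketch (added for illustration; it is not the program referenced below, and the array layout and the helper name \texttt{waveguide\_step} are assumptions) which advances an ideal, lossless, unterminated waveguide section by one sampling interval. The upper rail shifts one sample to the right, the lower rail shifts one sample to the left, and physical displacement outputs are formed by summing the two rails at $ m=0$ and $ m=3$ as in Eq.$\,$(C.17).

\begin{verbatim}
#define M 8   /* spatial samples in this waveguide section (illustrative) */

/* One time step of an ideal lossless waveguide (cf. Fig. C.3).
   yp[m] holds the right-going wave y+(n-m); ym[m] holds the left-going
   wave y-(n+m).  in_left and in_right are the samples entering from
   outside the section (terminations are not modeled here). */
void waveguide_step(float yp[M], float ym[M],
                    float in_left, float in_right,
                    float *out0, float *out3)
{
    int m;

    /* Upper rail: right-going wave moves right one spatial sample. */
    for (m = M - 1; m > 0; m--)
        yp[m] = yp[m - 1];
    yp[0] = in_left;

    /* Lower rail: left-going wave moves left one spatial sample. */
    for (m = 0; m < M - 1; m++)
        ym[m] = ym[m + 1];
    ym[M - 1] = in_right;

    /* Physical outputs: sum of the rails, Eq. (C.17). */
    *out0 = yp[0] + ym[0];   /* observation point x = 0  */
    *out3 = yp[3] + ym[3];   /* observation point x = 3X */
}
\end{verbatim}

A rigid (fixed) termination on the right, for example, could be modeled by feeding the negated last sample of the upper rail back in as \texttt{in\_right}, which is the first step toward the ladder/lattice forms mentioned above.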

Any ideal, one-dimensional waveguide can be simulated in this way. It is important to note that the simulation is exact at the sampling instants, to within the numerical precision of the samples themselves. To avoid aliasing associated with sampling, we require all waveshapes traveling along the string to be initially bandlimited to less than half the sampling frequency. In other words, the highest frequencies present in the signals $ y_r(t)$ and $ y_l(t)$ may not exceed half the temporal sampling frequency $ f_s\isdeftext 1/T$; equivalently, the highest spatial frequencies in the shapes $ y_r(x/c)$ and $ y_l(x/c)$ may not exceed half the spatial sampling frequency $ \nu_s \isdeftext 1/X$.

A C program implementing a plucked/struck string model in the form of Fig.C.3 is available at http://ccrma.stanford.edu/~jos/pmudw/.


Digital Waveguide Interpolation

A more compact simulation diagram which stands for either sampled or continuous waveguide simulation is shown in Fig.C.4. The figure emphasizes that the ideal, lossless waveguide is simulated by a bidirectional delay line, and that bandlimited spatial interpolation may be used to construct a displacement output for an arbitrary $ x$ not a multiple of $ cT$, as suggested by the output drawn in Fig.C.4. Similarly, bandlimited interpolation across time serves to evaluate the waveform at an arbitrary time not an integer multiple of $ T$ (§4.4).

Figure C.4: Simplified picture of ideal waveguide simulation.
\includegraphics[scale=0.9]{eps/fcompact}

Ideally, bandlimited interpolation is carried out by convolving a continuous ``sinc function'' sinc$ (x)\isdeftext \sin(\pi x)/\pi x$ with the signal samples. Specifically, convolving a sampled signal $ x(t_n)$ with sinc$ [(t_n-t_0)/T]$ ``evaluates'' the signal at an arbitrary continuous time $ t_0$. The sinc function is the impulse response of the ideal lowpass filter which cuts off at half the sampling rate.

In practice, the interpolating sinc function must be windowed to a finite duration. This means the associated lowpass filter must be granted a ``transition band'' in which its frequency response is allowed to ``roll off'' to zero at half the sampling rate. The interpolation quality in the ``pass band'' can always be made perfect to within the resolution of human hearing by choosing a sufficiently large product of window length and transition bandwidth. Given ``audibly perfect'' quality in the pass band, increasing the transition bandwidth reduces the computational expense of the interpolation; in fact, the two are approximately inversely proportional. This is one reason why oversampling at rates higher than twice the highest audio frequency is helpful. For example, at a $ 44.1$ kHz sampling rate, the transition bandwidth above the nominal audio upper limit of $ 20$ kHz is only $ 2.05$ kHz, while at a $ 48$ kHz sampling rate (used in DAT machines) the guard band is $ 4$ kHz wide--nearly double. Since the required window length (impulse response duration) varies inversely with the provided transition bandwidth, we see that increasing the sampling rate by less than ten percent reduces the filter expense by almost fifty percent. Windowed-sinc interpolation is described further in §4.4. Many more techniques for digital resampling and delay-line interpolation are reviewed in [267].
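As a rough sketch of windowed-sinc interpolation (added for illustration; the 32-tap length, the Hann window, and the function names are arbitrary choices, and practical resamplers normally use precomputed, tabulated filter coefficients), the following C function reads a buffer at a fractional sample index:

\begin{verbatim}
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Normalized sinc: sinc(x) = sin(pi x)/(pi x), with sinc(0) = 1. */
static double sinc(double x)
{
    if (fabs(x) < 1e-12) return 1.0;
    return sin(M_PI * x) / (M_PI * x);
}

/* Evaluate buf[0..len-1] at fractional index t0 (in samples) using a
   Hann-windowed sinc kernel with HALF_WIDTH taps on each side.
   Samples outside the buffer are treated as zero. */
#define HALF_WIDTH 16
double interp_windowed_sinc(const double *buf, int len, double t0)
{
    int    n0  = (int) floor(t0);
    double sum = 0.0;
    for (int k = -HALF_WIDTH + 1; k <= HALF_WIDTH; k++) {
        int n = n0 + k;
        if (n < 0 || n >= len) continue;
        double x = t0 - n;                                   /* distance in samples */
        double w = 0.5 * (1.0 + cos(M_PI * x / HALF_WIDTH)); /* Hann window         */
        sum += buf[n] * sinc(x) * w;
    }
    return sum;
}
\end{verbatim}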


Relation to the Finite Difference Recursion

In this section we will show that the digital waveguide simulation technique is equivalent to the recursion produced by the finite difference approximation (FDA) applied to the wave equation [442, pp. 430-431]. A more detailed derivation, with examples and exploration of implications, appears in Appendix E. Recall from (C.6) that the time update recursion for the ideal string digitized via the FDA is given by

$\displaystyle y(n+1,m) = y(n,m+1) + y(n,m-1) - y(n-1,m). \protect$ (C.18)

To compare this with the waveguide description, we substitute the traveling-wave decomposition $ y(n,m) = y^{+}(n-m) + y^{-}(n+m)$ (which is exact in the ideal case at the sampling instants) into the right-hand side of the FDA recursion above and see how well it approximates the left-hand side $ y(n+1,m) = y^{+}(n+1-m) + y^{-}(n+1+m)$. Doing this gives
\begin{displaymath}
\begin{array}{rcl}
y(n+1,m) &=& y(n,m+1) + y(n,m-1) - y(n-1,m) \qquad\mbox{(C.19)}\\
         &=& \left[y^{+}(n-m-1) + y^{-}(n+m+1)\right]
           + \left[y^{+}(n-m+1) + y^{-}(n+m-1)\right]\\
         & & \quad -\; \left[y^{+}(n-m-1) + y^{-}(n+m-1)\right]\\
         &=& y^{+}(n-m+1) + y^{-}(n+m+1)\\
         &=& y^{+}\left[(n+1)-m\right] + y^{-}\left[(n+1)+m\right]\\
         &\isdef& y(n+1,m).
\end{array}
\end{displaymath}

Thus, we obtain the result that the FDA recursion is also exact in the lossless case, because it is equivalent to the digital waveguide method, which we know is exact at the sampling points. This is surprising, since the FDA introduces artificial damping when applied to lumped, mass-spring systems, as discussed earlier.

The last identity above can be rewritten as

\begin{displaymath}
\begin{array}{rcl}
y(n+1,m) &\isdef& y^{+}\left[(n+1)-m\right] + y^{-}\left[(n+1)+m\right] \qquad\mbox{(C.20)}\\
         &=& y^{+}\left[n-(m-1)\right] + y^{-}\left[n+(m+1)\right],
\end{array}
\end{displaymath}

which says the displacement at time $ n+1$, position $ m$, is the superposition of the right-going and left-going traveling wave components at positions $ m-1$ and $ m+1$, respectively, from time $ n$. In other words, the physical wave variable can be computed for the next time step as the sum of incoming traveling wave components from the left and right. This picture also underscores the lossless nature of the computation.
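The equivalence can also be checked numerically. The following self-contained C sketch (an illustration added here; the component shapes \texttt{yplus} and \texttt{yminus} are arbitrary, and the endpoints are fed from the closed-form solution in place of terminations) initializes two time rows from the exact traveling-wave sum, iterates the FDA recursion (C.18) on the interior points, and finds agreement with the exact solution to within rounding error.

\begin{verbatim}
#include <math.h>
#include <stdio.h>

#define NX 32   /* spatial samples */
#define NT 20   /* time steps      */

static double yplus(int k)  { return sin(0.30 * k); }  /* right-going shape */
static double yminus(int k) { return cos(0.17 * k); }  /* left-going shape  */
static double yexact(int n, int m) { return yplus(n - m) + yminus(n + m); }

int main(void)
{
    double prev[NX], cur[NX], next[NX];
    double maxerr = 0.0;

    /* Two consecutive rows taken from the exact solution. */
    for (int m = 0; m < NX; m++) { prev[m] = yexact(0, m); cur[m] = yexact(1, m); }

    for (int n = 1; n < NT; n++) {
        /* FDA time update, Eq. (C.18), for the interior points. */
        for (int m = 1; m < NX - 1; m++)
            next[m] = cur[m + 1] + cur[m - 1] - prev[m];
        /* Endpoints supplied by the exact solution (no terminations). */
        next[0]      = yexact(n + 1, 0);
        next[NX - 1] = yexact(n + 1, NX - 1);

        for (int m = 1; m < NX - 1; m++) {
            double e = fabs(next[m] - yexact(n + 1, m));
            if (e > maxerr) maxerr = e;
        }
        for (int m = 0; m < NX; m++) { prev[m] = cur[m]; cur[m] = next[m]; }
    }
    printf("max |FDA - exact| = %g\n", maxerr);  /* on the order of machine epsilon */
    return 0;
}
\end{verbatim}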

This result extends readily to the digital waveguide mesh (§C.14), which is essentially a lattice-work of digital waveguides for simulating membranes and volumes. The equivalence is important in higher dimensions because the finite-difference model requires fewer computations per node than the digital waveguide approach.

Even in one dimension, the digital waveguide and finite-difference methods have unique advantages in particular situations, and as a result they are often combined to form a hybrid traveling-wave/physical-variable simulation [351,352,222,124,123,224,263,223]. In such hybrid simulations, the traveling-wave variables are called ``W variables'' (where `W' stands for ``Wave''), while the physical variables are called ``K variables'' (where `K' stands for ``Kirchhoff''). Each K variable, such as displacement $ y(nT,mX)$ on a vibrating string, can be regarded as the sum of two traveling-wave components, or W variables:

\begin{displaymath}
y(nT,mX) = y_r(nT-mX/c) + y_l(nT+mX/c).
\end{displaymath}

Conversion between K variables and W variables can be non-trivial due to the non-local dependence of one set of state variables on the other, in general. A detailed examination of this issue is given in Appendix E.

