In the
ideal vibrating string, the only restoring
force for
transverse
displacement comes from the string tension (§
C.1 above);
specifically, the transverse restoring force equals the net
transverse component of the axial string tension. Consider in place
of the ideal string a
bundle of ideal strings, such as a
stranded cable. When the cable is bent, there is now a new restoring
force arising from some of the fibers being compressed and others
being stretched by the bending. This force sums with that due to
string tension. Thus, stiffness in a
vibrating string introduces a
new restoring force proportional to bending angle. It is important to
note that
string stiffness is a linear phenomenon resulting
from the finite diameter of the string.

In typical treatments, bending stiffness adds a new term to the
wave equation that is proportional to the fourth spatial derivative of
string displacement:

$$K y'' = \epsilon \ddot{y} + \kappa y'''' \qquad\qquad \hbox{(C.32)}$$

where the moment constant $\kappa \triangleq EI$ is the product of
Young's modulus $E$ (the ``relative-displacement spring constant per unit
cross-sectional area,'' discussed in §B.5.1) and the
area moment of inertia $I$ (§B.4.8); as
derived in §B.4.9, a cylindrical string of radius $a$ has area
moment of inertia equal to $I = \pi a^4/4$.
This
wave equation works well enough for small amounts of bending
stiffness, but it is clearly missing some terms because it predicts
that deforming the string into a parabolic shape will incur no
restoring force due to stiffness. See §
6.9 for
further discussion of
wave equations for
stiff strings.
To solve the stiff wave equation Eq. (C.32), we may set
$y(t,x) = e^{st+vx}$ to get

$$K v^2 = \epsilon s^2 + \kappa v^4.$$
At very low frequencies, or when stiffness is negligible in comparison
with the tension term $Kv^2$, we obtain again the non-stiff string:
$v = \pm s/c$, where $c \triangleq \sqrt{K/\epsilon}$.
At very high frequencies, or when the tension $K$ is negligible relative
to the stiffness term $\kappa v^2$, we obtain the
ideal bar (or rod) approximation:

$$\epsilon s^2 + \kappa v^4 = 0.$$
In an ideal bar, the only restoring force is due to
bending stiffness. Setting $s = j\omega$ gives solutions
$v = \pm j(\epsilon/\kappa)^{1/4}\sqrt{\omega}$ and
$v = \pm(\epsilon/\kappa)^{1/4}\sqrt{\omega}$. In the first case, the wave
velocity $\omega/|v| = (\kappa/\epsilon)^{1/4}\sqrt{\omega}$ becomes
proportional to $\sqrt{\omega}$. That is, waves
travel faster along the ideal bar as oscillation frequency increases,
going up as the square root of frequency. The second solution
corresponds to a change in the wave shape which prevents sharp corners
from forming due to stiffness [95,118].
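
To make the two limiting behaviors concrete, here is a minimal numpy sketch (with assumed, illustrative steel-music-wire parameters that are not taken from the text) which solves the dispersion relation $Kv^2 = \epsilon s^2 + \kappa v^4$ exactly as a quadratic in $v^2$ and prints the resulting phase velocity $\omega/|v|$ alongside the non-stiff speed $c$ and the ideal-bar limit $(\kappa/\epsilon)^{1/4}\sqrt{\omega}$:

```python
import numpy as np

# Assumed, illustrative parameters (not from the text): steel music wire.
a   = 0.5e-3                    # string radius [m]
E   = 2.0e11                    # Young's modulus of steel [Pa]
rho = 7.85e3                    # mass density of steel [kg/m^3]
K   = 800.0                     # tension [N]

eps   = rho * np.pi * a**2      # linear mass density epsilon [kg/m]
kappa = E * np.pi * a**4 / 4    # moment constant kappa = E*I [N*m^2]
c     = np.sqrt(K / eps)        # non-stiff wave speed [m/s]

for f in (100.0, 1e3, 1e4, 1e5):
    w = 2 * np.pi * f
    # With s = j*w, K v^2 = eps s^2 + kappa v^4 is a quadratic in v^2:
    #   kappa (v^2)^2 - K (v^2) - eps w^2 = 0.
    # Its negative root gives the propagating (purely imaginary v) branch.
    v2 = (K - np.sqrt(K**2 + 4 * kappa * eps * w**2)) / (2 * kappa)
    c_w   = w / np.sqrt(-v2)                  # exact phase velocity [m/s]
    c_bar = (kappa / eps)**0.25 * np.sqrt(w)  # ideal-bar limit [m/s]
    print(f"f = {f:8.0f} Hz   c(omega) = {c_w:9.1f}   c = {c:7.1f}   bar limit = {c_bar:9.1f}")
```

At low frequencies the exact phase velocity stays near $c$, while at high frequencies it approaches the $\sqrt{\omega}$ behavior of the bar.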
At intermediate frequencies, between the ideal string and the ideal bar,
the stiffness contribution can be treated as a correction term
[95]. This is the region of most practical interest because
it is the principal operating region for strings, such as piano strings,
whose stiffness has audible consequences (an inharmonic, stretched
overtone series). Assuming the stiffness correction is small, i.e.,
$\kappa v^2 \ll K$, the dispersion relation $Kv^2 = \epsilon s^2 + \kappa v^4$
gives

$$v^2 = \frac{s^2}{c^2} + \frac{\kappa}{K}v^4
     \approx \frac{s^2}{c^2}\left[1 + \frac{\kappa}{K}\left(\frac{s}{c}\right)^2\right],$$

so that, to first order in the stiffness,

$$v \approx \pm\frac{s}{c}\left[1 + \frac{\kappa s^2}{2Kc^2}\right].$$

Substituting for $v$ in terms of $s$ in $e^{st+vx}$ gives the general
eigensolution

$$y(t,x) = e^{st \pm (s/c)\left[1 + \kappa s^2/(2Kc^2)\right]x}.$$
Setting $s = j\omega$ as before, corresponding to driving the medium
sinusoidally over time at frequency $\omega$, the medium response is

$$y(t,x) = e^{j\omega\left[t \pm x/c(\omega)\right]},$$

where

$$c(\omega) \approx c\left[1 + \frac{\kappa\omega^2}{2Kc^2}\right].$$

Because the effective wave velocity depends on $\omega$, we cannot use
Fourier's theorem to construct arbitrary traveling shapes by superposition.
At $x = 0$, we can construct any function of time, but the waveshape
disperses as it propagates away from $x = 0$. The higher-frequency Fourier
components travel faster than the lower-frequency components.
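
One audible consequence of $c(\omega)$ increasing with frequency is the stretched overtone series mentioned above. As a brief numerical illustration (a sketch using assumed parameter values, not values from the text, and assuming simply supported ends so that the spatial eigenfunctions are $\sin(n\pi x/L)$), substituting $v = jn\pi/L$ and $s = j\omega_n$ into $Kv^2 = \epsilon s^2 + \kappa v^4$ gives $\epsilon\omega_n^2 = K(n\pi/L)^2 + \kappa(n\pi/L)^4$, so each mode comes out slightly sharp relative to a harmonic series:

```python
import numpy as np

# Assumed, illustrative parameters (not from the text).
L     = 1.0        # string length [m]
K     = 800.0      # tension [N]
eps   = 6.17e-3    # linear mass density [kg/m]
kappa = 9.8e-3     # moment constant E*I [N*m^2]

n  = np.arange(1, 11)        # mode numbers
vn = n * np.pi / L           # spatial eigenvalues for simply supported ends
# From the dispersion relation: eps * wn^2 = K*vn^2 + kappa*vn^4
fn = np.sqrt((K * vn**2 + kappa * vn**4) / eps) / (2 * np.pi)

f1 = fn[0]
for k, fk in zip(n, fn):
    print(f"mode {k:2d}: {fk:8.2f} Hz   vs. {k} x f1 = {k * f1:8.2f} Hz")
```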
Since the temporal and spatial sampling intervals are related by
$X = cT$, this must generalize to $X = c(\omega)T(\omega)$, where
$T = X/c$ is the size of a unit delay in the absence of stiffness. Thus, a
unit delay $z^{-1} = e^{-j\omega T}$ may be replaced by $e^{-j\omega T(\omega)}$,
where $T(\omega) \triangleq X/c(\omega)$ (for frequency-dependent wave velocity).
That is, each delay element becomes an allpass filter which
approximates the required delay versus frequency. A diagram appears in
Fig. C.8, where $H_a(z)$ denotes the allpass filter which
provides a rational approximation to $e^{-j\omega T(\omega)}$.
Figure C.8:
Section of a stiff string
where allpass filters play the role of unit delay elements.
![\includegraphics[scale=0.9]{eps/fstiffstring}](http://www.dsprelated.com/josimages_new/pasp/img3406.png)
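
As a small illustration of a delay element being replaced by an allpass filter (a sketch, not the book's design procedure), the following computes the phase delay of the first-order allpass $H_a(z) = (a + z^{-1})/(1 + a z^{-1})$ for a few example coefficients; unlike $z^{-1}$, whose phase delay is exactly one sample at every frequency, the allpass delay varies with frequency, which is exactly the degree of freedom needed to emulate a frequency-dependent $c(\omega)$:

```python
import numpy as np

def allpass_phase_delay(a, w):
    """Phase delay, in samples, of H(z) = (a + z^-1) / (1 + a z^-1)."""
    z = np.exp(1j * w)
    H = (a + 1 / z) / (1 + a / z)
    theta = np.unwrap(np.angle(H))     # continuous phase response
    return -theta / w                  # phase delay = -phase/frequency

w = np.linspace(0.05 * np.pi, 0.95 * np.pi, 7)   # radian frequencies in (0, pi)
for a in (-0.3, 0.0, 0.3):
    print(f"a = {a:+.1f}:  phase delay [samples] =", np.round(allpass_phase_delay(a, w), 3))
```

With $a = 0$ the filter reduces to a pure unit delay; nonzero coefficients make the delay vary smoothly with frequency while keeping unit gain.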
The general, order $N$, allpass filter is given by [449]

$$H_a(z) = \frac{\tilde{A}(z)}{A(z)}
 = \frac{a_N + a_{N-1}z^{-1} + \cdots + a_1 z^{-(N-1)} + z^{-N}}
        {1 + a_1 z^{-1} + \cdots + a_{N-1} z^{-(N-1)} + a_N z^{-N}},$$

where

$$A(z) \triangleq 1 + a_1 z^{-1} + \cdots + a_N z^{-N},
\qquad \tilde{A}(z) \triangleq z^{-N} A(1/z),$$

and the roots of $A(z)$ must all have modulus less than $1$.
That is, the numerator polynomial is just the reverse of the
denominator polynomial. This implies each pole $p_i$ is
gain-compensated by a zero at $1/p_i$.
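
A quick numerical check of this structure (using arbitrary example coefficients, assumed only for illustration): forming the numerator by reversing the denominator coefficients yields unit magnitude everywhere on the unit circle, and the zeros land at the reciprocals of the poles.

```python
import numpy as np

def evalz(coefs, x):
    """Evaluate sum_k coefs[k] * x**k, where x plays the role of z^{-1}."""
    return np.polyval(coefs[::-1], x)

# Arbitrary stable example: A(z) = 1 - 0.9 z^-1 + 0.4 z^-2 - 0.1 z^-3
a = np.array([1.0, -0.9, 0.4, -0.1])   # denominator coefficients (a0 = 1)
b = a[::-1]                             # numerator = reversed denominator

w = np.linspace(0.01, np.pi - 0.01, 512)
x = np.exp(-1j * w)                     # z^{-1} on the unit circle
H = evalz(b, x) / evalz(a, x)

print("max deviation of |H| from 1:", np.max(np.abs(np.abs(H) - 1.0)))
print("poles:  ", np.sort_complex(np.roots(a)))        # roots of z^3 A(z)
print("1/zeros:", np.sort_complex(1.0 / np.roots(b)))  # reciprocals of the zeros
```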
For computability of the string simulation in the presence of scattering
junctions, there must be at least one sample of pure delay along each
uniform section of string. This means for at least one allpass filter in
Fig. C.8, we must have $a_N = 0$, which implies $H_a(z)$ can be
factored as $H_a(z) = z^{-1} H_a'(z)$, where $H_a'(z)$ is an allpass
filter of order $N-1$. In a
systolic VLSI implementation, it is desirable to have at least one real
delay from the input to the output of
every allpass filter, in order
to be able to pipeline the computation of all of the allpass filters in
parallel. Computability can be arranged in practice by deciding on a
minimum delay (e.g., corresponding to the wave velocity at a maximum
frequency), and using an allpass filter to provide excess delay beyond the
minimum.
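
The factoring claim above is also easy to verify numerically (again with arbitrary example coefficients, assumed only for illustration): setting $a_N = 0$ in an order-$N$ allpass gives exactly $z^{-1}$ times the order-$(N-1)$ allpass built from the remaining coefficients.

```python
import numpy as np

def allpass_response(a, w):
    """Frequency response of the allpass with denominator A(z) = sum_k a[k] z^-k."""
    x = np.exp(-1j * w)                 # z^{-1} on the unit circle
    return np.polyval(a, x) / np.polyval(a[::-1], x)   # reversed numerator over A

w  = np.linspace(0.01, np.pi - 0.01, 256)
a3 = np.array([1.0, -0.9, 0.4, 0.0])    # order 3 with a_N = 0
a2 = np.array([1.0, -0.9, 0.4])         # remaining order-2 allpass

H3 = allpass_response(a3, w)
H2 = allpass_response(a2, w) * np.exp(-1j * w)   # z^{-1} times the order-2 allpass
print("max |H3 - z^{-1} H2|:", np.max(np.abs(H3 - H2)))
```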
Because allpass filters are
linear and time invariant, they commute
like gain factors with other linear, time-invariant components.
Fig.
C.9 shows a diagram equivalent to
Fig.
C.8 in which the allpass filters have been
commuted and consolidated at two points. For computability in all
possible contexts (
e.g., when looped on itself), a single sample of
delay is pulled out along each rail. The remaining
transfer function in the example of Fig. C.9 can be approximated
using any allpass filter design technique [1,2,267,272,551].
Alternatively, both gain and dispersion for a stretch of
waveguide can
be provided by a single filter which can be designed using any
general-purpose filter design method which is sensitive to
frequency-response phase as well as magnitude; examples include
equation-error methods (such as that used in the matlab invfreqz
function, §8.6.4) and Hankel norm methods [177,428,36].
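
As a concrete sketch of the equation-error idea (a minimal illustration in numpy, not matlab's invfreqz itself), the following fits a rational $B(z)/A(z)$ to a desired complex frequency response $D(\omega_k)$ by solving the linear least-squares problem $\min \sum_k |A(e^{j\omega_k})D(\omega_k) - B(e^{j\omega_k})|^2$; the target response below is an arbitrary assumed example, and equation-error fits do not by themselves guarantee a stable $A(z)$:

```python
import numpy as np

def eqnerror_fit(wk, D, nb, na):
    """Equation-error least-squares fit of B(z)/A(z) (orders nb, na) to D at frequencies wk."""
    x = np.exp(-1j * wk)                                 # z^{-1} on the unit circle
    # Unknowns theta = [b_0..b_nb, a_1..a_na]; with a_0 = 1 the linear system is
    #   B(e^{jw}) - D(w) * (A(e^{jw}) - 1) = D(w).
    Mb = np.vander(x, nb + 1, increasing=True)           # columns x^0 .. x^nb
    Ma = -D[:, None] * np.vander(x, na + 1, increasing=True)[:, 1:]   # -D * (x^1 .. x^na)
    M  = np.hstack([Mb, Ma])
    # Solve in real arithmetic by stacking real and imaginary parts.
    theta, *_ = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                                np.concatenate([D.real, D.imag]), rcond=None)
    return theta[:nb + 1], np.concatenate([[1.0], theta[nb + 1:]])   # b, a

# Assumed, illustrative target: 2.3 samples of delay with a mild high-frequency rolloff.
wk = np.linspace(0.02 * np.pi, 0.95 * np.pi, 200)
D  = np.exp(-1j * 2.3 * wk) * (1.0 - 0.2 * wk / np.pi)

b, a = eqnerror_fit(wk, D, nb=4, na=4)
x = np.exp(-1j * wk)
H = np.polyval(b[::-1], x) / np.polyval(a[::-1], x)
print("max |H - D| over the fit grid:", np.max(np.abs(H - D)))
```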
Figure C.9:
Section of a stiff string where the allpass
delay elements are consolidated at two points, and a sample of pure
delay is extracted from each allpass chain.
![\includegraphics[scale=0.9]{eps/flstiffstring}](http://www.dsprelated.com/josimages_new/pasp/img3413.png)
In the case of a lossless, stiff string, if $H_a(z)$ denotes the
consolidated allpass transfer function, it can be argued that the filter
design technique used should minimize the phase-delay error, where
phase delay is defined by [362]

$$P(\omega) \triangleq -\frac{\angle H_a(e^{j\omega T})}{\omega}.
\qquad \hbox{(Phase Delay)}$$
Minimizing the Chebyshev norm of the phase-delay error
approximates minimization of the error in mode tuning for the
freely vibrating string [428, pp. 182-184]. Since the
stretching of the
overtone series is typically what we hear most in a
stiff, vibrating string, the worst-case phase-delay error is a good
choice in such a case.
Alternatively, a lumped allpass filter can be designed by
minimizing group delay,

$$D(\omega) \triangleq -\frac{d}{d\omega}\angle H_a(e^{j\omega T}).
\qquad \hbox{(Group Delay)}$$

The group delay of a filter gives the delay experienced by the amplitude
envelope of a narrow frequency band centered at $\omega$, while the
phase delay applies to the ``carrier'' at $\omega$, or a sinusoidal
component at frequency $\omega$ [342]. As a result, for proper
tuning of overtones, phase delay is what matters, while
for precisely estimating (or controlling) the
decay time in a lossy
waveguide, group delay gives the effective filter delay ``seen'' by the
exponential decay envelope.
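
The two delay measures are easy to compute side by side; here is a minimal numerical sketch (using an arbitrary first-order allpass as the example and a finite-difference approximation for the group delay):

```python
import numpy as np

a = 0.3                                        # example first-order allpass coefficient
w = np.linspace(0.02 * np.pi, 0.98 * np.pi, 1000)
H = (a + np.exp(-1j * w)) / (1 + a * np.exp(-1j * w))

theta = np.unwrap(np.angle(H))                 # continuous phase response Theta(w)
phase_delay = -theta / w                       # P(w) = -Theta(w)/w           [samples]
group_delay = -np.gradient(theta, w)           # D(w) = -dTheta/dw (approx.)  [samples]

for i in (0, len(w) // 2, len(w) - 1):
    print(f"w = {w[i]:.3f} rad: phase delay = {phase_delay[i]:.3f}, "
          f"group delay = {group_delay[i]:.3f}")
```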
See §
9.4.1 for designing allpass filters with a
prescribed delay versus frequency.
To model stiff strings, the allpass filter must supply a
phase delay which
decreases as frequency increases. A good
approximation may require a fairly high-order filter, adding significantly
to the cost of simulation. (For low-
pitched piano strings, order 8
allpass filters work well perceptually [
1].)
To a large extent, the allpass order required
for a given error tolerance increases as the number of lumped
frequency-dependent delays is increased. Therefore, increased dispersion
consolidation is accompanied by larger required allpass filters, unlike the
case of resistive losses.
The function
piano_dispersion_filter in the
Faust
distribution (in
effect.lib) designs and implements an
allpass filter modeling the dispersion due to stiffness in a piano
string [
154,
170,
368].
The complete, linear, time-invariant generalization of the lossy,
stiff string is described by the differential equation

$$\sum_{k=0}^{M} \alpha_k \frac{\partial^k y}{\partial t^k}
 = \sum_{l=0}^{N} \beta_l \frac{\partial^l y}{\partial x^l},
 \qquad\qquad \hbox{(C.33)}$$

which, on setting $y(t,x) = e^{st+vx}$ (or taking the 2D Laplace transform
with zero initial conditions), yields the algebraic equation

$$\sum_{k=0}^{M} \alpha_k s^k = \sum_{l=0}^{N} \beta_l v^l.
 \qquad\qquad \hbox{(C.34)}$$

Solving for $v$ in terms of $s$ is, of course, nontrivial in general.
However, in specific cases, we can determine the appropriate
attenuation per sample $g(\omega)$ and wave propagation speed $c(\omega)$
by numerical means. For example, starting at $s = 0$, we
normally also have $v = 0$ (corresponding to the absence of static
deformation in the medium). Stepping $s$ forward by a small
differential $\Delta s$, the left-hand side can be approximated by
its lowest-order terms in $s$. Requiring the generalized wave
velocity to be continuous, a physically reasonable assumption, the
right-hand side can be approximated by its lowest-order terms in $v$, and
the solution is easy. As $s$ steps forward, higher order terms become
important one by one on both sides of the equation. Each new term in
$v$ spawns a new solution for $v$ in terms of $s$, since the order of
the polynomial in $v$ is incremented. It appears possible that
homotopy continuation methods [316] can be used to
keep track of the branching solutions of $v$ as a function of $s$.
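
For the lossy, stiff string as a concrete special case, the root-tracking idea can be sketched numerically (an illustration with assumed parameter values, not a general continuation method): adding a loss term $\mu\dot{y}$ to (C.32) turns the algebraic equation into $Kv^2 = \epsilon s^2 + \mu s + \kappa v^4$, a quartic in $v$; stepping $s = j\omega$ upward in frequency and following, at each step, the root closest to the previous one yields the attenuation per unit length from its real part and the wave speed from its imaginary part:

```python
import numpy as np

# Assumed, illustrative parameters (not from the text).
K, eps, kappa, mu = 800.0, 6.17e-3, 9.8e-3, 1.0

freqs = np.linspace(50.0, 5000.0, 100)                 # Hz
v_prev = None
for i, f in enumerate(freqs):
    s = 1j * 2 * np.pi * f
    # Quartic in v:  kappa v^4 - K v^2 + (eps s^2 + mu s) = 0
    roots = np.roots([kappa, 0.0, -K, 0.0, eps * s**2 + mu * s])
    if v_prev is None:
        v = max(roots, key=lambda r: r.imag)           # start on a propagating branch
    else:
        v = min(roots, key=lambda r: abs(r - v_prev))  # continuity: nearest root
    v_prev = v
    if i % 33 == 0 or i == len(freqs) - 1:
        sigma = abs(v.real)                            # attenuation per unit length
        c_w   = 2 * np.pi * f / abs(v.imag)            # frequency-dependent wave speed
        print(f"f = {f:7.1f} Hz   sigma = {sigma:.4e} 1/m   c(omega) = {c_w:7.1f} m/s")
```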
For each solution $v(j\omega)$, let $\sigma(\omega)$ denote the real part of
$v(j\omega)$ and let $j\omega/c(\omega)$ denote the imaginary part. Then the
eigensolution family can be seen in the form

$$y(t,x) = e^{j\omega t} e^{\pm v(j\omega) x}
         = e^{\pm\sigma(\omega) x}\, e^{j\omega[t \pm x/c(\omega)]}.$$

Defining $g(\omega) \triangleq e^{-\sigma(\omega) X}$, and
sampling according to $x \to x_m \triangleq mX$ and $t \to t_n \triangleq nT$,
with $X = cT$ as before (the spatial sampling
period is taken to be frequency invariant, while the temporal sampling
interval is modulated versus frequency using allpass filters), the
left- and right-going sampled eigensolutions become

$$y(t_n,x_m) = g^{\mp m}(\omega)\, e^{j\omega[nT \,\pm\, mX/c(\omega)]},$$

where the upper and lower signs correspond to the left- and right-going
components, respectively. Thus, a general map of $g(\omega)$ and $c(\omega)$
versus $\omega$, corresponding to a partial differential equation of any
order in the form (C.33), can be translated, in principle, into an
accurate, local, linear, time-invariant, discrete-time simulation.
The
boundary conditions and initial state determine the initial
mixture of the various solution branches as usual.
We see that a large class of
wave equations with constant
coefficients, of any order, admits a decaying, dispersive,
traveling-wave type solution. Even-order time derivatives give rise
to frequency-dependent dispersion and odd-order time derivatives
correspond to
frequency-dependent losses. The corresponding digital
simulation of an arbitrarily long (undriven and unobserved) section of
medium can be simplified via commutativity to at most two pure delays
and at most two linear, time-invariant
filters.
Every linear, time-invariant filter can be expressed as a
zero-phase
filter in series with an
allpass filter. The
zero-phase part can be
interpreted as implementing a frequency-dependent gain (damping in a
digital waveguide), and the allpass part can be seen as
frequency-dependent delay (dispersion in a digital
waveguide).
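
This factorization is easy to see numerically: given the frequency response $H(e^{j\omega})$ of any filter, the zero-phase part is $|H(e^{j\omega})|$ and the allpass part is $H(e^{j\omega})/|H(e^{j\omega})|$, and their product reconstructs $H$ exactly. A tiny sketch with an arbitrary example filter:

```python
import numpy as np

b = np.array([1.0, 0.5, 0.2])                  # arbitrary example FIR filter
w = np.linspace(0.01, np.pi - 0.01, 256)
H = np.polyval(b[::-1], np.exp(-1j * w))       # frequency response (powers of z^{-1})

zero_phase = np.abs(H)                         # frequency-dependent gain (damping) part
allpass    = H / np.abs(H)                     # unit-magnitude (dispersion) part

print("max | |allpass| - 1 |:   ", np.max(np.abs(np.abs(allpass) - 1.0)))
print("max reconstruction error:", np.max(np.abs(zero_phase * allpass - H)))
```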