## Convergence

A finite-difference scheme is said to be
*convergent* if all of its solutions
in response to initial conditions and excitations converge pointwise
to the corresponding solutions of the original differential equation
as the step size(s) approach zero.

In other words, as the step sizes shrink, the FDS solution must improve, ultimately converging to the corresponding solution of the original differential equation at every point of the domain.

In the vibrating string example, the limit is taken as the step sizes (the time and space sampling intervals) approach zero. Since the finite-difference approximations in Eq.(D.1) converge in the limit to the very definitions of the corresponding partial derivatives, we expect the FDS in Eq.(D.3) based on these approximations to be convergent (and it is).

In establishing convergence, it is necessary to verify that any initial conditions and boundary conditions in the finite-difference scheme converge, in the limit, to those of the continuous differential equation. See [481] for a more detailed discussion of this topic.
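As a concrete (if simplified) illustration of the definition, the following Python sketch shows a forward-Euler finite-difference scheme for the test ODE y' = -y converging pointwise to the exact solution as the step size shrinks. The ODE, step sizes, and evaluation point are illustrative choices, not taken from the text:

```python
import math

def euler_solution(t_final, steps):
    """Forward-Euler finite-difference scheme for y' = -y, y(0) = 1."""
    h = t_final / steps          # step size
    y = 1.0
    for _ in range(steps):
        y += h * (-y)            # y[n+1] = y[n] + h * f(y[n])
    return y

# As the step size shrinks, the FDS value at t = 1 converges
# pointwise to the exact solution y(1) = exp(-1).
exact = math.exp(-1.0)
errors = [abs(euler_solution(1.0, n) - exact) for n in (10, 100, 1000)]
print(errors)  # each tenfold refinement reduces the error
```

Each refinement of the step size brings the sampled FDS solution closer to the continuous solution at the chosen point, which is exactly the pointwise-convergence property defined above.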

The *Lax-Richtmyer equivalence theorem* provides a means of
showing convergence of a finite-difference scheme by showing it is
both *consistent* and *stable* (and that the initial-value
problem is *well posed*) [481]. The following
subsections give basic definitions for these terms as they apply to
our simplified scenario (linear, shift-invariant, fixed sampling
rates).

### Consistency

A finite-difference scheme is said to be
*consistent* with the original
partial differential equation if, for any sufficiently
differentiable function, the result of applying the finite-difference
operator to that function approaches the result of applying the
differential operator, as the time and space sampling intervals approach zero.

Thus, in the ideal string example, to show the consistency of Eq.(D.3) we must show that the finite-difference operator defining the scheme approaches the partial-differential operator of Eq.(D.2) when applied to any sufficiently differentiable function. This is conveniently carried out using shift-operator notation. In taking the limit as the time and space sampling intervals approach zero, we must keep them in the fixed proportion dictated by the wave propagation speed, and we must scale the FDS appropriately in order to obtain the differential operator exactly in the limit. Thus, the FDS is consistent.
See, *e.g.*, [481] for more examples.

In summary, consistency of a finite-difference scheme means that, in the limit as the sampling intervals approach zero, the original PDE is obtained from the FDS.
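The consistency property can be checked numerically. The sketch below (an illustrative Python check, not taken from the text) applies centered second-difference operators to a smooth test function that is deliberately *not* a wave-equation solution, and verifies that the difference operator approaches the differential operator as the step sizes shrink:

```python
import math

def y(t, x):
    """Smooth test function (deliberately NOT a wave-equation solution)."""
    return math.sin(t) * math.cos(2.0 * x)

def fds_operator(t, x, T, X, c=1.0):
    """Centered-difference approximation of y_tt - c^2 * y_xx."""
    ytt = (y(t + T, x) - 2.0 * y(t, x) + y(t - T, x)) / T ** 2
    yxx = (y(t, x + X) - 2.0 * y(t, x) + y(t, x - X)) / X ** 2
    return ytt - c ** 2 * yxx

# Exact value of the PDE operator at (t, x) = (1, 0.5):
#   y_tt = -sin(t)cos(2x),  y_xx = -4 sin(t)cos(2x),
#   so y_tt - y_xx = 3 sin(t)cos(2x).
exact = 3.0 * math.sin(1.0) * math.cos(1.0)
errors = [abs(fds_operator(1.0, 0.5, h, h) - exact) for h in (0.1, 0.01, 0.001)]
print(errors)  # decreases roughly as h^2 (second-order accuracy)
```

The error shrinks as the square of the step size, reflecting the second-order accuracy of the centered-difference approximations.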

### Well Posed Initial-Value Problem

For a proper authoritative definition of "well posed" in the field
of finite-difference schemes, see, *e.g.*, [481]. The
definition we will use here is less general in that it excludes
amplitude growth from initial conditions that is faster than
polynomial in time.

We will say that an initial-value problem is
*well posed*
if the linear system defined by the PDE, together with any bounded initial
conditions, is at least *marginally stable*.

As discussed in [449], a system is defined to be
*stable* when its response to bounded initial
conditions approaches zero as time goes to infinity. If the response
fails to approach zero but does not exponentially grow over time (the
*lossless* case), it is called *marginally stable*.

In the literature on finite-difference schemes, lossless systems are classified as stable [481]. However, in this book series, lossless systems are not considered stable, but only marginally stable.

When marginally stable systems are allowed, it is necessary to
accommodate
*polynomial growth* with respect to time. As is well known
in linear systems theory, repeated poles can yield polynomial growth
[449]. A very simple example is the ordinary differential
equation (ODE)
$$\ddot{y}(t) = 0,$$
which has a repeated pole at $s=0$. Its general solution $y(t) = y(0) + \dot{y}(0)\,t$ grows linearly with time whenever the initial velocity $\dot{y}(0)$ is nonzero.
When all poles of the system are strictly in the left-half of the
Laplace-transform plane, the system is *stable*, even when
the poles are repeated. This is because exponential decay is
ultimately faster than any polynomial growth, so that any amount of
exponential decay will eventually overtake the polynomial factor and
drag it to zero in the limit.
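This can be seen numerically. The decay rate and sample times below are arbitrary illustrative choices:

```python
import math

# A repeated stable pole at s = -a contributes a term of the form t * exp(-a*t).
# The polynomial factor dominates at first, but the exponential decay
# eventually overwhelms it:
a = 0.1                      # even a small decay rate suffices
samples = [t * math.exp(-a * t) for t in (10.0, 100.0, 1000.0)]
print(samples)               # rises near t = 1/a, then decays toward zero
```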

Marginally stable systems arise often in computational physical modeling. In particular, the ideal string is only marginally stable, since it is lossless. Even a simple unaccelerated mass, sliding on a frictionless surface, is described by a marginally stable PDE when the position of the mass is used as a state variable (see §7.1.2). Given any nonzero initial velocity, the position of the mass approaches either plus or minus infinity, exactly as in the example above. To avoid unbounded growth in practical systems, it is often preferable to avoid the use of displacement as a state variable. For ideal strings and freely sliding masses, force and velocity are usually good choices.
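A minimal simulation makes the point (step size and initial velocity are arbitrary illustrative choices):

```python
# Marginal stability of a freely sliding mass: with position as a state,
# any nonzero initial velocity makes the position grow without bound,
# while the velocity state itself stays constant (bounded).
T = 0.01                 # time step in seconds (illustrative choice)
x, v = 0.0, 1.0          # initial position (m) and velocity (m/s)
for _ in range(10_000):  # simulate 100 s
    x += T * v           # position integrates the velocity
    # v is unchanged: no friction, no applied force
print(x, v)              # position has drifted to ~100; velocity is still 1
```

Choosing velocity as the state variable instead keeps every state bounded, which is why it is preferred in practice.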

It should perhaps be emphasized that the term "well posed" normally allows for more general energy growth at a rate which can be bounded over all initial conditions [481]. In this book, however, the "marginally stable" case (at most polynomial growth) is what we need. The reason is simply that we wish to exclude unstable PDEs as a modeling target. Note, however, that unstable systems can be used profitably over carefully limited time durations (see §9.7.2 for an example).

In the ideal vibrating string, energy is conserved. Therefore, it is a marginally stable system. To show mathematically that the PDE Eq.(D.2) is marginally stable, we may show that its response to bounded initial conditions grows no faster than polynomially with time. Equivalently, we can show that all poles of the system lie on the $j\omega$ axis of the Laplace-transform plane, as expected for a lossless system.

Note that solutions on the ideal string are not bounded, since, for example, an infinitely long string (non-terminated) can be initialized with a constant positive transverse velocity everywhere along its length. This corresponds physically to a nonzero transverse momentum, which is conserved. Therefore, the string will drift off in the positive direction, with an average displacement that grows linearly with time.
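This momentum-driven drift is easy to demonstrate numerically. The sketch below approximates an unterminated string by a periodic (ring) topology and uses the standard ideal-string recursion in the form it takes when the spatial step equals the wave speed times the time step; the ring size, duration, and unit velocity are illustrative choices:

```python
# An (effectively) unterminated ideal string, approximated by a ring of
# M samples, initialized with a constant transverse velocity: the first
# two time rows are 0 and v, encoding velocity v per time step.
M, v = 16, 1.0
prev = [0.0] * M                 # displacement at time step n-1
curr = [v] * M                   # displacement at time step n
for _ in range(100):
    # standard leapfrog recursion for the ideal string (X = cT form):
    nxt = [curr[(m + 1) % M] + curr[(m - 1) % M] - prev[m] for m in range(M)]
    prev, curr = curr, nxt
mean = sum(curr) / M
print(mean)                      # average displacement grows linearly in time
```

With constant rows, the recursion reduces to a[n+1] = 2a[n] - a[n-1], so the average displacement after n steps is exactly n·v, mirroring the conserved transverse momentum of the continuous string.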

The well-posedness of a class of damped PDEs used in string modeling is analyzed in §D.2.2.

#### A Class of Well Posed Damped PDEs

A large class of well posed PDEs is described in [45]. To the ideal string wave equation Eq.(C.1) we may add any number of even-order partial derivatives in $x$, plus any number of mixed odd-order partial derivatives in $x$ and $t$ in which differentiation with respect to $t$ occurs only once. Because every member of this class of PDEs is only second-order in time, it is guaranteed to be *well posed*, as we now show.

To show Eq.(D.5) is well posed [45], we must
show that the roots of the characteristic polynomial equation
(§D.3) have negative real parts, *i.e.*, that they correspond to
decaying exponentials instead of growing exponentials. To do this, we
may insert the general eigensolution $y(t,x)=e^{st}e^{vx}$ into
Eq.(D.5) and solve the resulting
*characteristic polynomial equation* for the roots $s$.

Let's now set $v=j k$, where $k$
is radian spatial
frequency (called the "wavenumber" in acoustics) and of course
$j=\sqrt{-1}$, thereby converting the implicit spatial Laplace
transform to a spatial Fourier transform. Since there are only even
powers of the spatial Laplace transform variable $v$, the coefficient
polynomials of the characteristic equation are *real*. Therefore, the
roots of the characteristic polynomial equation (the natural
frequencies of the time response of the system) are given by the
quadratic formula, since the characteristic polynomial is only
second-order in $s$.

#### Proof that the Third-Order Time Derivative is Ill Posed

For its tutorial value, let's also show that the PDE of Ruiz
[392] is ill posed, *i.e.*, that at least one component of the
solution is a growing exponential. In this case, we proceed as above,
inserting the eigensolution into the third-order-in-time PDE of
Eq.(C.28) and examining the roots of the resulting characteristic
polynomial; at least one root turns out to have a positive real part.
It is interesting to note that Ruiz discovered the exponentially growing solution, but simply dropped it as being non-physical. In the work of Chaigne and Askenfelt [77], it is believed that the finite difference approximation itself provided the damping necessary to eliminate the unstable solution [45]. (See §7.3.2 for a discussion of how finite difference approximations can introduce damping.) Since the damping effect is sampling-rate dependent, there is an upper bound to the sampling rate that can be used before an unstable mode appears.

### Stability of a Finite-Difference Scheme

A finite-difference scheme is said to be
*stable*
if it forms a *digital filter* which is at least *marginally
stable* [449].

To distinguish between the stable and marginally stable cases, we may
classify a finite-difference scheme as *strictly stable*,
*marginally stable*, or *unstable*.
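This classification can be carried out directly on a digital filter's poles. The sketch below is an illustrative check (the tolerance, example coefficients, and the simplification of ignoring repeated unit-circle poles are assumptions of the sketch, not from the text):

```python
import numpy as np

def classify(den):
    """Classify a digital filter by its pole radii.
    den: denominator coefficients in descending powers of z."""
    poles = np.roots(den)
    r = np.abs(poles)
    if np.all(r < 1.0 - 1e-12):
        return "strictly stable"
    # NOTE: marginal stability also requires that unit-circle poles be
    # non-repeated (repeated unit-circle poles give polynomial growth);
    # this sketch checks pole radii only.
    if np.all(r <= 1.0 + 1e-12):
        return "marginally stable"
    return "unstable"

print(classify([1.0, -0.5]))      # pole at z = 0.5  -> strictly stable
print(classify([1.0, 0.0, 1.0]))  # poles at z = +/-j -> marginally stable
print(classify([1.0, -2.0]))      # pole at z = 2    -> unstable
```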

### Lax-Richtmyer Equivalence Theorem

The Lax-Richtmyer equivalence theorem states that *"a consistent
finite-difference scheme for a partial differential equation for which
the initial-value problem is well posed is convergent if and only if
it is stable."* For a proof, see [481, Ch. 10].

### Passivity of a Finite-Difference Scheme

A condition stronger than stability as defined above is
*passivity*. Passivity is not a traditional metric for
finite-difference scheme analysis, but it arises naturally in special
cases such as wave digital filters (§F.1) and digital waveguide
networks [55,35]. In such modeling frameworks, all
signals have a *physical interpretation* as wave variables, and
therefore a physical energy can be associated with them. Moreover,
each delay element can be associated with some real *wave
impedance*. In such situations, passivity can be defined as the case
in which all impedances are nonnegative. When complex, they must be
*positive* real (see §C.11.2).

To define passivity for all linear, shift-invariant finite difference schemes, irrespective of whether or not they are based on an impedance description, we will say that a finite-difference scheme is passive if all of its internal modes are stable. Thus, passivity is sufficient, but not necessary, for stability. In other words, there are finite difference schemes which are stable but not passive [55]. A stable FDS can have internal unstable modes which are not excited by initial conditions, or which always cancel out in pairs. A passive FDS cannot have such "hidden" unstable modes.
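A small state-space sketch illustrates a hidden unstable mode (the matrices here are an illustrative construction, not from the text):

```python
import numpy as np

# A discrete-time state-space model with a stable mode (pole at z = 0.5)
# and an unstable mode (pole at z = 2) that is unobservable from the
# output: the input/output behavior looks stable, but the scheme is not
# passive in the sense defined above.
A = np.array([[0.5, 0.0],
              [0.0, 2.0]])   # modes at z = 0.5 and z = 2
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])   # the output sees only the stable mode

# Observability matrix [C; C A] is rank-deficient -> a hidden mode exists.
obsv = np.vstack([C, C @ A])
print(np.linalg.matrix_rank(obsv))   # 1 < 2: the unstable mode is hidden

# The impulse response (which excites both modes) still looks stable:
h = [(C @ np.linalg.matrix_power(A, n) @ B).item() for n in range(8)]
print(h)  # decays like 0.5**n; the z = 2 mode never reaches the output
```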

The absence of hidden modes can be ascertained by converting the FDS
to a state-space model and checking that it is *controllable*
(from initial conditions and/or excitations) and *observable*
[449]. When the initial conditions can set the entire initial
state of the FDS, it is then controllable from initial conditions, and
only observability needs to be checked. A simple example of an
unobservable mode is the second harmonic of an ideal string (and all
even-numbered harmonics) when the only output observation is the
midpoint of the string.
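The midpoint example can be verified from the mode shapes of a fixed-fixed string (unit string length assumed for illustration):

```python
import math

# Mode shapes of an ideal string of length 1, fixed at both ends,
# are sin(k*pi*x). Observing only the midpoint x = 1/2 misses every
# even-numbered harmonic, since sin(k*pi/2) = 0 for even k.
midpoint = 0.5
amps = [math.sin(k * math.pi * midpoint) for k in range(1, 7)]
for k, a in zip(range(1, 7), amps):
    print(k, round(a, 12))   # zero for k = 2, 4, 6
```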

### Summary

In summary, we have defined the following terms from the analysis of finite-difference schemes for the linear shift-invariant case with constant sampling rates:

- PDE *well posed*: the PDE is at least marginally stable
- FDS *consistent*: the FDS shift operator approaches the PDE operator as the sampling intervals approach zero
- FDS *stable*: stable or marginally stable as a digital filter
- FDS *strictly stable*: stable as a digital filter
- FDS *marginally stable*: marginally stable as a digital filter

### Convergence in Audio Applications

Because the range of human hearing is bounded (nominally between 20 Hz and 20 kHz), spectral components of a signal outside this range are not audible. Therefore, when the solution to a differential equation is to be considered an audio signal, there are frequency regions over which convergence is not a requirement.

Instead of pointwise convergence, we may ask for the following two properties:

- *Superposition* holds.
- Convergence occurs *within the frequency band of human hearing*.

Error components outside the band of human hearing can then be tolerated, or removed by *bandlimited interpolator* design (see §4.4).

In many cases, such as in digital waveguide modeling of vibrating
strings, we can do better than convergence. We can construct finite
difference schemes which agree with the corresponding continuous
solutions *exactly* at the sample points. (See §C.4.1.)
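This exactness can be checked directly for the ideal-string recursion in the form it takes when the spatial step equals the wave speed times the time step. The traveling-wave shapes below are arbitrary illustrative choices:

```python
import math
import random

# d'Alembert solution: any y(n, m) = f(m - n) + g(m + n), for arbitrary
# functions f and g, satisfies the ideal-string recursion
#     y(n+1, m) = y(n, m+1) + y(n, m-1) - y(n-1, m)
# identically, so the FDS agrees with the sampled continuous solution
# exactly at the sample points.
f = lambda u: math.sin(0.3 * u)          # one traveling-wave component
g = lambda u: math.exp(-0.1 * u * u)     # the other traveling-wave component
y = lambda n, m: f(m - n) + g(m + n)

random.seed(0)
worst = 0.0
for _ in range(100):
    n, m = random.randint(-50, 50), random.randint(-50, 50)
    lhs = y(n + 1, m)
    rhs = y(n, m + 1) + y(n, m - 1) - y(n - 1, m)
    worst = max(worst, abs(lhs - rhs))
print(worst)   # ~0 up to rounding: the recursion is exact at the samples
```

Substituting y(n, m) = f(m - n) + g(m + n) into the right-hand side, the f(m + 1 - n) and g(m + n - 1) terms cancel, leaving f(m - (n + 1)) + g(m + (n + 1)), which is exactly y(n + 1, m).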
