Lagrange Interpolation

Lagrange interpolation is a well-known, classical technique for interpolation [193]. It is also called Waring-Lagrange interpolation, since Waring actually published it 16 years before Lagrange [309, p. 323]. More generally, the term polynomial interpolation normally refers to Lagrange interpolation. In the first-order case, it reduces to linear interpolation.

Given a set of $ N+1$ known samples $ f(x_k)$, $ k=0,1,2,\ldots,N$, the problem is to find the unique order $ N$ polynomial $ y(x)$ which interpolates the samples. The solution can be expressed as a linear combination of elementary $ N$th-order polynomials:

$\displaystyle y(x) = \sum_{k=0}^N l_k(x)f(x_k) \protect$ (5.6)


$\displaystyle l_k(x) \isdef \frac{(x - x_0) \cdots (x - x_{k-1}) (x - x_{k+1}) \cdots (x - x_N)}{(x_k - x_0) \cdots (x_k - x_{k-1}) (x_k - x_{k+1}) \cdots (x_k - x_N)}.

From the numerator of the above definition, we see that $ l_k(x)$ is an order $ N$ polynomial having zeros at all of the samples except the $ k$th. The denominator is simply the constant which normalizes to give $ l_k(x_k)=1$. Thus, we have

$\displaystyle l_k(x_j) = \delta_{kj} \isdef \left\{\begin{array}{ll}
1, & j=k, \\ [5pt]
0, & j\neq k. \\
\end{array}\right.

The polynomial $ l_k$ can be interpreted as the $ k$th basis polynomial for constructing a polynomial interpolation of order $ N$ over the $ N+1$ sample points $ x_k$. It is an order $ N$ polynomial having zeros at all of the samples except the $ k$th, where it is 1. An example of a set of eight basis functions $ l_k$ for randomly selected interpolation points $ x_k$ is shown in Fig.4.10.

Figure 4.10: Example Lagrange basis functions in the eighth-order case for randomly selected interpolation points (marked by dotted lines). The unit-amplitude points are marked by dashed lines.
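The defining properties above are easy to verify numerically. The following Python sketch (an illustration, not from the text; the function name `lagrange_basis` is ours, and NumPy is assumed) builds $ l_k$ for arbitrary nodes and checks $ l_k(x_j)=\delta_{kj}$:

```python
import numpy as np

def lagrange_basis(xk, k, x):
    """Evaluate the k-th Lagrange basis polynomial for nodes xk at points x."""
    lk = np.ones_like(np.asarray(x, dtype=float))
    for j, xj in enumerate(xk):
        if j != k:
            lk *= (x - xj) / (xk[k] - xj)   # zero at x = xj, normalized at xk[k]
    return lk

xk = np.array([0.0, 0.3, 1.1, 2.0])         # arbitrary (non-uniform) nodes
for k in range(len(xk)):
    vals = lagrange_basis(xk, k, xk)        # evaluate l_k at all nodes
    expected = np.eye(len(xk))[k]           # 1 at node k, 0 elsewhere
    assert np.allclose(vals, expected)
```

Since the basis interpolates the constant function 1 exactly, the basis polynomials also sum to 1 at any point.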

Interpolation of Uniformly Spaced Samples

In the uniformly sampled case ($ x_k=kT$ for some sampling interval $ T$), a Lagrange interpolator can be viewed as a Finite Impulse Response (FIR) filter [449]. Such filters are often called fractional delay filters [267], since they are filters providing a non-integer time delay, in general. Let $ h(n)$ denote the impulse response of such a fractional-delay filter. That is, assume the interpolation at point $ x$ is given by

$\displaystyle y(x) \eqsp h(0)\,f(x_N) + h(1)\,f(x_{N-1}) + \cdots + h(N)\,f(x_0)
\eqsp h(0)\,y(N) + h(1)\,y(N-1) + \cdots + h(N)\,y(0),

where we have set $ T=1$ for simplicity, and used the fact that $ y(x_k)=f(x_k)$ for $ k=0,1,\ldots,N$ in the case of ``true interpolators'' that pass through the given samples exactly. For best results, $ y(x)$ should be evaluated in a one-sample range centered about $ x=N/2$. For delays outside the central one-sample range, the coefficients can be shifted to translate the desired delay into that range.

Fractional Delay Filters

In fractional-delay filtering applications, the interpolator typically slides forward through time to produce a time series of interpolated values, thereby implementing a non-integer signal delay:

$\displaystyle \hat{y}\left(n-\frac{N}{2}-\eta\right)
\eqsp h(0)\,y(n) + h(1)\,y(n-1) + \cdots + h(N)\,y(n-N),

where $ \eta\in[-1/2,1/2]$ spans the central one-sample range of the interpolator. Equivalently, the interpolator may be viewed as an FIR filter having a linear phase response corresponding to a delay of $ N/2 +
\eta$ samples. Such filters are often used in series with a delay line in order to implement an interpolated delay line (§4.1) that effectively provides a continuously variable delay for discrete-time signals.

The frequency response [449] of the fractional-delay FIR filter $ h(n)$ is

$\displaystyle H(e^{j\omega}) \eqsp \sum_{n=0}^N h(n)e^{-j\omega n}.

For an ideal fractional-delay filter, the frequency response should be equal to that of an ideal delay

$\displaystyle H^\ast(e^{j\omega}) \eqsp e^{-j\omega\Delta}

where $ \Delta\isdeftext N/2 + \eta$ denotes the total desired delay of the filter. Thus, the ideal desired frequency response is a linear phase term corresponding to a delay of $ \Delta$ samples.
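The gap between this ideal response and an actual Lagrange interpolator can be measured directly. The following Python sketch (illustrative; `lagrange_fd` is our name for an implementation of Eq.(5.7), and the order and delay values are arbitrary) evaluates both responses:

```python
import numpy as np

def lagrange_fd(N, Delta):
    """Closed-form Lagrange fractional-delay coefficients (Eq.(5.7))."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (Delta - k) / (n[idx] - k)
    return h

N, Delta = 3, 1.6                     # Delta near the central value N/2 = 1.5
h = lagrange_fd(N, Delta)
w = np.linspace(0, np.pi, 512)        # dc up to half the sampling rate
H = np.exp(-1j * np.outer(w, np.arange(N + 1))) @ h   # H(e^{jw})
H_ideal = np.exp(-1j * w * Delta)                     # ideal delay response
err = np.abs(H - H_ideal)
assert err[0] < 1e-12                 # exact at dc (maximally flat there)
assert err[-1] > err[1]               # error grows toward Nyquist
```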

Lagrange Interpolation Optimality

As derived in §4.2.14, Lagrange fractional-delay filters are maximally flat in the frequency domain at dc. That is,

$\displaystyle \left.\frac{d^m E(e^{j\omega})}{d\omega^m}\right\vert _{\omega=0} = 0, \quad m=0,1,2,\ldots,N,

where $ E(e^{j\omega})$ is the interpolation error expressed in the frequency domain:

$\displaystyle E(e^{j\omega})\isdefs H^\ast(e^{j\omega}) - H(e^{j\omega}),

where $ H$ and $ H^\ast$ are defined in §4.2.2 above. This is the same optimality criterion used for the power response of (recursive) Butterworth filters in classical analog filter design [343,449]. It can also be formulated in terms of ``Padé approximation'' [373,374]. To summarize, the basic idea of maximally flat filter design is to match exactly as many leading terms as possible in the Taylor series expansion of the desired frequency response. Equivalently, we zero the maximum number of leading terms in the Taylor expansion of the frequency-response error.

Figure 4.11 compares Lagrange and optimal Chebyshev fractional-delay filter frequency responses. Optimality in the Chebyshev sense means minimizing the worst-case error over a given frequency band (in this case, $ \vert\omega\vert\in
[0,0.8\pi]$). While Chebyshev optimality is often the most desirable choice, we do not have closed-form formulas for such solutions, so they must be laboriously pre-calculated, tabulated, and interpolated to produce variable-delay filtering [358].

Figure 4.11: Comparison of Lagrange and Optimal Chebyshev Fractional-Delay Filter Frequency Responses

Explicit Lagrange Coefficient Formulas

Given a desired fractional delay of $ \Delta$ samples, the Lagrange fractional-delay impulse response can be written in closed form as

$\displaystyle h_\Delta(n) = \prod_{\stackrel{k=0}{k\neq n}}^{N} \frac{\Delta-k}{n-k}, \quad n=0,1,2,\ldots,N. \protect$ (5.7)

The following table gives specific examples for orders 1, 2, and 3:

$\displaystyle \begin{array}{\vert\vert c\vert\vert c\vert c\vert c\vert c\vert\vert}
\hline\hline
N & h_\Delta(0) & h_\Delta(1) & h_\Delta(2) & h_\Delta(3) \\ \hline\hline
1 & 1-\Delta & \Delta & & \\ \hline
2 & \frac{(\Delta-1)(\Delta-2)}{2} & -\Delta(\Delta-2) & \frac{\Delta(\Delta-1)}{2} & \\ \hline
3 & -\frac{(\Delta-1)(\Delta-2)(\Delta-3)}{6} & \frac{\Delta(\Delta-2)(\Delta-3)}{2} &
-\frac{\Delta(\Delta-1)(\Delta-3)}{2} & \frac{\Delta(\Delta-1)(\Delta-2)}{6} \\ \hline\hline
\end{array}

Note that, for $ N=1$, Lagrange interpolation reduces to linear interpolation, i.e., the interpolator impulse response is $ h = [1-\Delta,\Delta]$. Also, remember that, for order $ N$, the desired delay should be in a one-sample range centered about $ \Delta=N/2$.
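As a numerical cross-check of these formulas, the following Python sketch (illustrative; `lagrange_fd` is our name for an implementation of Eq.(5.7)) reproduces the first- and second-order rows of the table:

```python
import numpy as np

def lagrange_fd(N, Delta):
    """Coefficients h_Delta(n) from the closed-form product, Eq.(5.7)."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (Delta - k) / (n[idx] - k)
    return h

D = 0.3
assert np.allclose(lagrange_fd(1, D), [1 - D, D])         # linear interpolation
assert np.allclose(lagrange_fd(2, D),
                   [(D-1)*(D-2)/2, -D*(D-2), D*(D-1)/2])  # order-2 table row
assert np.allclose(lagrange_fd(2, 1.0), [0, 1, 0])        # integer delay is exact
```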

Lagrange Interpolation Coefficient Symmetry

As shown in [502, §3.3.3], directly substituting into Eq.$ \,$(4.7) derives the following coefficient symmetry property for the interpolation coefficients (impulse response) of a Lagrange fractional delay filter:

$\displaystyle h_\Delta(n) \eqsp h_{N-\Delta}(N-n), \quad n =0,1,\ldots,N, \protect$ (5.8)

where $ N$ is the order of the interpolator. Thus, the interpolation coefficients for delay $ N-\Delta$ are the ``flip'' (time reverse) of the coefficients for delay $ \Delta$.
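A quick numerical confirmation of this symmetry (a Python sketch; the helper name `lagrange_fd` and the example values are ours):

```python
import numpy as np

def lagrange_fd(N, Delta):
    """Closed-form Lagrange fractional-delay coefficients (Eq.(5.7))."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (Delta - k) / (n[idx] - k)
    return h

N, Delta = 4, 1.7                        # hypothetical example values
h_fwd = lagrange_fd(N, Delta)            # coefficients for delay Delta
h_rev = lagrange_fd(N, N - Delta)        # coefficients for delay N - Delta
assert np.allclose(h_fwd, h_rev[::-1])   # h_Delta(n) = h_{N-Delta}(N-n)
```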

Matlab Code for Lagrange Interpolation

A simple matlab function for computing the coefficients of a Lagrange fractional-delay FIR filter is as follows:

function h = lagrange( N, delay )
% LAGRANGE  Coefficients of an order-N Lagrange fractional-delay FIR filter.
n = 0:N;
h = ones(1,N+1);
for k = 0:N
    index = find(n ~= k);
    h(index) = h(index) *  (delay-k)./ (n(index)-k);
end

Maxima Code for Lagrange Interpolation

The maxima program is free and open-source, like Octave for matlab:

  (%i1) lagrange(N, n) :=
           product(if equal(k,n) then 1
                  else (D-k)/(n-k), k, 0, N);

  (%o1) lagrange(N, n) := product(if equal(k, n) then 1

                                       D - k
                                  else -----, k, 0, N)
                                       n - k
Usage examples in maxima:
  (%i2) lagrange(1,0);
  (%o2)                             1 - D

  (%i3) lagrange(1,1);
  (%o3)                               D

  (%i4) lagrange(4,0);
                           (1 - D) (D - 4) (D - 3) (D - 2)
  (%o4)                  - -------------------------------
                                         24
  (%i5) ratsimp(lagrange(4,0));
                            4       3       2
                           D  - 10 D  + 35 D  - 50 D + 24
  (%o5)                    ------------------------------
                                         24

  (%i6) expand(lagrange(4,0));
                            4      3       2
                           D    5 D    35 D    25 D
  (%o6)                    -- - ---- + ----- - ---- + 1
                           24    12     24      12

  (%i7) expand(lagrange(4,0)), numer;
                             4                     3
  (%o7) 0.041666666666667 D  - 0.41666666666667 D
                             2
        + 1.458333333333333 D  - 2.083333333333333 D + 1.0

Faust Code for Lagrange Interpolation

The Faust programming language for signal processing [453,450] includes support for Lagrange fractional-delay filtering, up to order five, in the library file filter.lib. For example, the fourth-order case is listed below:

// fourth-order (quartic) case, delay d in [1.5,2.5]
fdelay4(n,d,x) = delay(n,id,x)   * fdm1*fdm2*fdm3*fdm4/24
               + delay(n,id+1,x) * (0-fd*fdm2*fdm3*fdm4)/6
               + delay(n,id+2,x) * fd*fdm1*fdm3*fdm4/4
               + delay(n,id+3,x) * (0-fd*fdm1*fdm2*fdm4)/6
               + delay(n,id+4,x) * fd*fdm1*fdm2*fdm3/24
with {
  o = 1.49999;
  dmo = d - o; // assumed nonnegative
  id = int(dmo);
  fd = o + frac(dmo);
  fdm1 = fd-1;
  fdm2 = fd-2;
  fdm3 = fd-3;
  fdm4 = fd-4;
};

An example calling program is shown in Fig.4.12.

Figure: Faust program tlagrange.dsp used to generate Figures 4.13 through 4.16.

// tlagrange.dsp - test Lagrange interpolation in Faust


N = 16; // Allocated delay-line length

// Compare various orders:
D = 5.4;
process = 1-1' <: fdelay1(N,D), fdelay2(N,D), fdelay3(N,D),
                  fdelay4(N,D), fdelay5(N,D);
// To see results:
// [in a shell]:
//   faust2octave tlagrange.dsp
// [at the Octave command prompt]:
//   plot(db(fft(faustout,1024)(1:512,:)));

// Alternate example for testing a range of 4th-order cases
// (change name to "process" and rename "process" above):
process2  = 1-1' <: fdelay4(N, 1.5),
                    fdelay4(N, 1.6),
                    fdelay4(N, 1.7),
                    fdelay4(N, 1.8),
                    fdelay4(N, 1.9),
                    fdelay4(N, 2.0),
                    fdelay4(N, 2.1),
                    fdelay4(N, 2.2),
                    fdelay4(N, 2.3),
                    fdelay4(N, 2.4),
                    fdelay4(N, 2.499),
                    fdelay4(N, 2.5);

Lagrange Frequency Response Examples

The following examples were generated using Faust code similar to that in Fig.4.12 and the faust2octave command distributed with Faust.

Orders 1 to 5 on a fractional delay of 0.4 samples

Figure 4.13 shows the amplitude responses of Lagrange interpolation, orders 1 through 5, for the case of implementing an interpolated delay line of length $ 5.4$ samples. In all cases the interpolator follows a delay line of appropriate length so that the interpolator coefficients operate over their central one-sample interval. Figure 4.14 shows the corresponding phase delays. As discussed in §4.2.10, the amplitude response of every odd-order case is constrained to be zero at half the sampling rate when the delay is half-way between integers, which this example is near. As a result, the curves for the two even-order interpolators lie above the three odd-order interpolators at high frequencies in Fig.4.13. It is also interesting to note that the 4th-order interpolator, while showing a wider ``pass band,'' exhibits more attenuation near half the sampling rate than the 2nd-order interpolator.

Figure 4.13: Amplitude responses, Lagrange interpolation, orders 1 to 5, for an interpolated delay of $ 5.4$ samples. From the bottom-right corner along the right edge, the curves represent orders 1,3,5,4,2.

Figure 4.14: Phase delays, Lagrange interpolation, orders 1 to 5, for an interpolated delay of $ 5.4$ samples. From bottom to top, the curves represent orders 2,4,1,3,5.

In the phase-delay plots of Fig.4.14, all cases are exact at frequency zero. At half the sampling rate they all give 5 samples of delay.

Note that all three odd-order phase delay curves look generally better in Fig.4.14 than both of the even-order phase delays. Recall from Fig.4.13 that the two even-order amplitude responses outperformed all three odd-order cases. This illustrates a basic trade-off between gain accuracy and delay accuracy. The even-order interpolators show generally less attenuation at high frequencies (because they are not constrained to approach a gain of zero at half the sampling rate for a half-sample delay), but they pay for that with a relatively inferior phase-delay performance at high frequencies.

Order 4 over a range of fractional delays

Figures 4.15 and 4.16 show amplitude response and phase delay, respectively, for 4th-order Lagrange interpolation evaluated over a range of requested delays from $ 1.5$ to $ 2.5$ samples in increments of $ 0.1$ samples. The amplitude response is ideal (flat at 0 dB for all frequencies) when the requested delay is $ 2$ samples (as it is for any integer delay), while there is maximum high-frequency attenuation when the fractional delay is half a sample. In general, the closer the requested delay is to an integer, the flatter the amplitude response of the Lagrange interpolator.

Figure 4.15: Amplitude responses, Lagrange interpolation, order 4, for the range of requested delays $ [1.5 : 0.1 : 2.5]$, with $ 2.499$ thrown in as well (see next plot for why). From bottom to top, ignoring the almost invisible split in the bottom curve, the curves represent requested delays $ 1.5, 1.6, 1.7, 1.8, 1.9$, and $ 2.0$. Then, because the curve for requested delay $ 2+\eta $ is the same as the curve for delay $ 2-\eta $, for $ \vert\eta \vert<1/2$, the same curves, from top to bottom, represent requested delays $ 2.0, 2.1, 2.2, 2.3, 2.4$ and $ 2.5$ (which is nearly indistinguishable from $ 2.499$).

Figure 4.16: Phase delays, Lagrange interpolation, order 4, for the range of requested delays $ [1.5 : 0.1 : 2.5]$, and additionally $ 2.499$.

Note in Fig.4.16 how the phase-delay jumps discontinuously, as a function of delay, when approaching the desired delay of $ 2.5$ samples from below: The top curve in Fig.4.16 corresponds to a requested delay of 2.5 samples, while the next curve below corresponds to 2.499 samples. The two curves roughly coincide at low frequencies (being exact at dc), but diverge to separate integer limits at half the sampling rate. Thus, the ``capture range'' of the integer 2 at half the sampling rate is numerically suggested to be the half-open interval $ [1.5,2.5)$.

Order 5 over a range of fractional delays

Figures 4.17 and 4.18 show amplitude response and phase delay, respectively, for 5th-order Lagrange interpolation, evaluated over a range of requested delays between $ 2$ and $ 3$ samples in steps of $ 0.1$ samples. Note that the vertical scale in Fig.4.17 spans $ 100$ dB while that in Fig.4.15 needed less than $ 9$ dB, again due to the constrained zero at half the sampling rate for odd-order interpolators at the half-sample point.

Figure 4.17: Amplitude responses, Lagrange interpolation, order 5, for the range of requested delays $ [2.0 : 0.1 : 3.0]$, with $ 2.495$ and $ 2.505$ included as well (see next plot for why).

Figure 4.18: Phase delays, Lagrange interpolation, order 5, for the range of requested delays $ [2.0 : 0.1 : 3.0]$, with $ 2.495$ and $ 2.505$ included as well.

Notice in Fig.4.18 how suddenly the phase-delay curves near 2.5 samples delay jump to an integer number of samples as a function of frequency near half the sample rate. The curve for $ 2.495$ samples swings down to 2 samples delay, while the curve for $ 2.505$ samples goes up to 3 samples delay at half the sample rate. Since the gain is zero at half the sample rate when the requested delay is $ 2.5$ samples, the phase delay may be considered to be exactly $ 2.5$ samples at all frequencies in that special case.

Avoiding Discontinuities When Changing Delay

We have seen examples (e.g., Figures 4.16 and 4.18) of the general fact that every Lagrange interpolator provides an integer delay at frequency $ \omega T = \pi$, except when the interpolator gain is zero at $ \omega T = \pi$. This is true for any interpolator implemented as a real FIR filter, i.e., as a linear combination of signal samples using real coefficients.5.4Therefore, to avoid a relatively large discontinuity in phase delay (at high frequencies) when varying the delay over time, the requested interpolation delay should stay within a half-sample range of some fixed integer, irrespective of interpolation order. This provides that the requested delay stays within the ``capture zone'' of a single integer at half the sampling rate. Of course, if the delay varies by more than one sample, there is no way to avoid the high-frequency discontinuity in the phase delay using Lagrange interpolation.

Even-order Lagrange interpolators have an integer delay at the midpoint of their central one-sample range, so they naturally provide a one-sample variable delay free of high-frequency discontinuities.

Odd-order Lagrange interpolators, on the other hand, must be shifted by $ 1/2$ sample in either direction in order to be centered about an integer delay. This can result in stability problems if the interpolator is used in a feedback loop, because the interpolation gain can exceed 1 at some frequency when venturing outside the central one-sample range (see §4.2.11 below).

In summary, discontinuity-free interpolation ranges include

$\displaystyle \frac{N}{2}-\frac{1}{2} < D < \frac{N}{2}+\frac{1}{2} \quad \mbox{($N$\ even)},
\qquad
\frac{N\pm 1}{2}-\frac{1}{2} < D < \frac{N\pm 1}{2}+\frac{1}{2} \quad \mbox{($N$\ odd).}

Wider delay ranges, and delay ranges not centered about an integer delay, will include a phase discontinuity in the delay response (as a function of delay) which is largest at frequency $ \omega T = \pi$, as seen in Figures 4.16 and 4.18.

Lagrange Frequency Response Magnitude Bound

The amplitude response of fractional delay filters based on Lagrange interpolation is observed to be bounded by 1 when the desired delay $ \Delta$ lies within half a sample of the midpoint of the coefficient span [502, p. 92], as was the case in all preceding examples above. Moreover, even-order interpolators are observed to have this boundedness property over a two-sample range centered on the coefficient-span midpoint [502, §3.3.6]. These assertions are easily proved for orders 1 and 2. For higher orders, a general proof appears not to be known, and the conjecture is based on numerical examples. Unfortunately, it has been observed that the gain of some odd-order Lagrange interpolators does exceed 1 at some frequencies when used outside of their central one-sample range [502, §3.3.6].
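The boundedness observation can be spot-checked numerically (a check, not a proof; a Python sketch with an arbitrary sampling of the central delay range, using our helper `lagrange_fd` for Eq.(5.7)):

```python
import numpy as np

def lagrange_fd(N, Delta):
    """Closed-form Lagrange fractional-delay coefficients (Eq.(5.7))."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (Delta - k) / (n[idx] - k)
    return h

w = np.linspace(0, np.pi, 256)          # frequency grid up to Nyquist
for N in (1, 2, 3, 4, 5):
    for eta in np.linspace(-0.5, 0.5, 11):
        h = lagrange_fd(N, N/2 + eta)   # delay in the central one-sample range
        H = np.exp(-1j * np.outer(w, np.arange(N + 1))) @ h
        assert np.max(np.abs(H)) <= 1 + 1e-9   # gain bounded by 1 (observed)
```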

Even-Order Lagrange Interpolation Summary

We may summarize some characteristics of even-order Lagrange fractional delay filtering as follows:

  • Two-sample bounded-by-1 delay-range instead of only one-sample
  • No gain zero at half the sampling rate for the middle delay
  • No phase-delay discontinuity when crossing the middle delay
  • Optimal (central) delay range is centered about an integer
To stay within the central, one-sample delay range for even-order Lagrange interpolators, the delay range should be

$\displaystyle \Delta\in\left(\frac{N}{2}-\frac{1}{2},\frac{N}{2}+\frac{1}{2}\right),

where $ N$ is the order of the interpolator.

Odd-Order Lagrange Interpolation Summary

In contrast to even-order Lagrange interpolation, the odd-order case has the following properties (in fractional delay filtering applications):

  • Improved phase-delay accuracy at the expense of decreased amplitude-response accuracy (low-order examples in Figures 4.13 and 4.14)
  • Optimal (centered) delay range lies between two integers
The usual centered delay range is

$\displaystyle \Delta\in\left(\frac{N}{2}-\frac{1}{2},\frac{N}{2}+\frac{1}{2}\right),

which is between integers, and in this range, the amplitude response is observed to be bounded by 1.

To avoid a discontinuous phase-delay jump at high frequencies when crossing the middle delay, the delay range can be shifted to

$\displaystyle \Delta\in\left(\frac{N\pm 1}{2}-\frac{1}{2},\frac{N\pm 1}{2}+\frac{1}{2}\right),

but then the gain may exceed 1 at some frequencies.

Proof of Maximum Flatness at DC

The maximally flat fractional-delay FIR filter is obtained by equating to zero all $ N+1$ leading terms in the Taylor (Maclaurin) expansion of the frequency-response error at dc:

\begin{eqnarray*}
0 &=& \left.\frac{d^k}{d\omega^k} E(e^{j\omega}) \right\vert_{\omega=0}
\;=\; \left.\frac{d^k}{d\omega^k}\left[e^{-j\omega\Delta}
- \sum_{n=0}^N h(n)e^{-j\omega n}\right]\right\vert_{\omega=0}\\
&=& (-j\Delta)^k - \sum_{n=0}^N (-jn)^k h(n)
\end{eqnarray*}

$\displaystyle \,\,\Rightarrow\,\,\zbox {\sum_{n=0}^N n^k h(n) = \Delta^k, \; k=0,1,\ldots,N}

This is a linear system of equations of the form $ V\underline{h}=\underline{\Delta}$, where $ V$ is a Vandermonde matrix. The solution can be written as a ratio of Vandermonde determinants using Cramer's rule [329]. As shown by Cauchy (1812), the determinant of a Vandermonde matrix $ [p_i^{j-1}]$, $ i,j=1,\ldots,N$ can be expressed in closed form as

$\displaystyle \left\vert\left[p_i^{j-1}\right]\right\vert \eqsp \prod_{j>i}(p_j - p_i).

Making this substitution in the solution obtained by Cramer's rule yields that the impulse response of the order $ N$, maximally flat, fractional-delay FIR filter may be written in closed form as

$\displaystyle h(n) = \prod_{\stackrel{k=0}{k\ne n}}^N \frac{D-k}{n-k}, \quad n=0,1,\ldots,N,

which is the formula for Lagrange-interpolation coefficients (Eq.$ \,$(4.6)) adapted to this problem (in which the abscissae are equally spaced on the integers from 0 to $ N$).
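The boxed moment conditions can be verified directly against the closed-form Lagrange coefficients (an illustrative Python sketch; the order, delay, and helper name `lagrange_fd` are ours):

```python
import numpy as np

def lagrange_fd(N, Delta):
    """Closed-form Lagrange fractional-delay coefficients (Eq.(5.7))."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (Delta - k) / (n[idx] - k)
    return h

N, Delta = 5, 2.3                       # hypothetical order and delay
h = lagrange_fd(N, Delta)
n = np.arange(N + 1)
for k in range(N + 1):                  # sum_n n^k h(n) = Delta^k, k = 0..N
    assert np.isclose(np.sum(n**k * h), Delta**k)
```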

Further details regarding the theory of Lagrange interpolation can be found (online) in [502, Ch. 3, Pt. 2, pp. 82-84].

Variable Filter Parametrizations

In practical applications of Lagrange Fractional-Delay Filtering (LFDF), it is typically necessary to compute the FIR interpolation coefficients $ h_\Delta(n)$ as a function of the desired delay $ \Delta$, which is usually time varying. Thus, LFDF is a special case of FIR variable filtering in which the FIR coefficients must be time-varying functions of a single delay parameter $ \Delta(t)$.

Table Look-Up

A general approach to variable filtering is to tabulate the filter coefficients as a function of the desired variables. In the case of fractional delay filters, the impulse response $ h_\Delta(n)$ is tabulated as a function of delay $ \Delta = N/2+\eta$, $ \eta\in[-1/2,1/2)$, $ n=0,1,2,\ldots,N$, where $ N$ is the interpolation-filter order. For each $ n$, $ \Delta$ may be sampled sufficiently densely so that linear interpolation will give a sufficiently accurate ``interpolated table look-up'' of $ h_\Delta(n)$ for each $ n$ and (continuous) $ \Delta$. This method is commonly used in the closely related problem of sampling-rate conversion [462].
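The interpolated table look-up can be sketched as follows (Python; the table resolution and helper names are hypothetical, and `lagrange_fd` implements Eq.(5.7)):

```python
import numpy as np

def lagrange_fd(N, Delta):
    """Closed-form Lagrange fractional-delay coefficients (Eq.(5.7))."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (Delta - k) / (n[idx] - k)
    return h

N, R = 4, 64                          # interpolator order, table resolution
etas = np.linspace(-0.5, 0.5, R + 1)  # sampled eta values
table = np.array([lagrange_fd(N, N/2 + e) for e in etas])  # (R+1) x (N+1)

def h_lookup(eta):
    """Linearly interpolated table look-up of h_Delta for eta in [-1/2, 1/2]."""
    pos = (eta + 0.5) * R             # fractional table index
    i = min(int(pos), R - 1)
    frac = pos - i
    return (1 - frac) * table[i] + frac * table[i + 1]

exact = lagrange_fd(N, N/2 + 0.123)
assert np.max(np.abs(h_lookup(0.123) - exact)) < 1e-3
```

Denser tables (larger `R`) reduce the linear-interpolation error at the cost of memory.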

Polynomials in the Delay

A more parametric approach is to formulate each filter coefficient $ h_\Delta(n)$ as a polynomial in the desired delay $ \Delta$:

$\displaystyle h_\Delta(n) \isdefs \sum_{m=0}^M c_n(m)\Delta^m, \quad n=0,1,2,\ldots,N \protect$ (5.9)

Taking the z transform of this expression leads to the interesting and useful Farrow structure for variable FIR filters [134].

Farrow Structure

Taking the z transform of Eq.$ \,$(4.9) yields

$\displaystyle \begin{array}{rcl}
h_\Delta(n) &\isdef& \displaystyle\sum_{m=0}^M c_n(m)\Delta^m, \quad n=0,1,2,\ldots,N \\ [8pt]
\Longleftrightarrow \quad
H_\Delta(z) &\isdef& \displaystyle\sum_{n=0}^N h_\Delta(n)z^{-n} \\ [8pt]
&=& \displaystyle\sum_{n=0}^N \left[\sum_{m=0}^M c_n(m)\Delta^m\right]z^{-n} \\ [8pt]
&=& \displaystyle\sum_{m=0}^M \left[\sum_{n=0}^N c_n(m) z^{-n}\right]\Delta^m \\ [8pt]
&\isdef& \displaystyle\sum_{m=0}^M C_m(z) \Delta^m
\end{array} \protect$ (5.10)

Since $ H_\Delta(z)$ is an $ N$th-order FIR filter, at least one of the $ C_m(z)$ must be $ N$th order, so that we need $ M\ge N$. A typical choice is $ M=N$.

Such a parametrization of a variable filter as a polynomial in fixed filters $ C_m(z)$ is called a Farrow structure [134,502]. When the polynomial Eq.$ \,$(4.10) is evaluated using Horner's rule,5.5 the efficient structure of Fig.4.19 is obtained. Derivations of Farrow-structure coefficients for Lagrange fractional-delay filtering are introduced in [502, §3.3.7].

Figure 4.19: Farrow structure for implementing parametrized filters as a fixed-filter polynomial in the varying parameter.

As we will see in the next section, Lagrange interpolation can be implemented exactly by the Farrow structure when $ M=N$. For $ M<N$, approximations that do not satisfy the exact interpolation property can be computed [148].
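For $ M=N$, the Farrow coefficients can be obtained exactly by fitting each $ h_\Delta(n)$ as a polynomial in $ \Delta$, after which the structure is evaluated by Horner's rule. A Python sketch (illustrative; the names and example values are ours, and `lagrange_fd` implements Eq.(5.7)):

```python
import numpy as np

def lagrange_fd(N, Delta):
    """Closed-form Lagrange fractional-delay coefficients (Eq.(5.7))."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (Delta - k) / (n[idx] - k)
    return h

N = 3                                              # order; take M = N
deltas = np.linspace(1.0, 2.0, N + 1)              # N+1 sample delays: exact fit
Hmat = np.array([lagrange_fd(N, d) for d in deltas])
V = np.vander(deltas, N + 1, increasing=True)      # rows [1, d, ..., d^N]
C = np.linalg.solve(V, Hmat)                       # C[m, n] = c_n(m)

def farrow(taps, Delta):
    """Horner evaluation: y = (...((C_N.x)Delta + C_{N-1}.x)Delta + ...) + C_0.x"""
    y = 0.0
    for m in range(N, -1, -1):
        y = y * Delta + C[m] @ taps                # one fixed filter per power of Delta
    return y

taps = np.array([0.2, -1.0, 0.7, 0.4])             # hypothetical input samples
Delta = 1.37
assert np.isclose(farrow(taps, Delta), lagrange_fd(N, Delta) @ taps)
```

Since each $ h_\Delta(n)$ is exactly a degree-$ N$ polynomial in $ \Delta$, the fit at $ N+1$ delays recovers the Farrow coefficients exactly, and the Horner evaluation agrees with direct Lagrange interpolation.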

Farrow Structure Coefficients

Beginning with a restatement of Eq.$ \,$(4.9),

$\displaystyle h_\Delta(n) \isdefs \sum_{m=0}^M c_n(m)\Delta^m, \quad n=0,1,2,\ldots,N,

we can express each FIR coefficient $ h_\Delta(n)$ as a vector expression:

$\displaystyle h_\Delta(n) \eqsp
\left[\begin{array}{ccccc} 1 & \Delta & \Delta^2 & \cdots & \Delta^M \end{array}\right]
\left[\begin{array}{c} c_n(0) \\ [2pt] c_n(1) \\ [2pt] \vdots \\ [2pt] c_n(M)\end{array}\right].

Making a row-vector out of the FIR coefficients gives

$\displaystyle \underbrace{\left[\begin{array}{cccc}h_\Delta(0)\!&\!h_\Delta(1)\!&\!\cdots\!&\!h_\Delta(N)\end{array}\right]}_{\underline{h}_\Delta^T}
\eqsp
\underbrace{\left[\begin{array}{cccc}1 & \Delta & \cdots & \Delta^M\end{array}\right]}_{\underline{V}_\Delta^T}
\underbrace{\left[\begin{array}{cccc}
c_0(0) & c_1(0) & \cdots & c_N(0) \\ [2pt]
c_0(1) & c_1(1) & \cdots & c_N(1) \\ [2pt]
\vdots & \vdots & & \vdots \\ [2pt]
c_0(M) & c_1(M) & \cdots & c_N(M)
\end{array}\right]}_{\mathbf{C}},$

or, in matrix notation,

$\displaystyle \underline{h}_\Delta^T \eqsp \underline{V}_\Delta^T \mathbf{C}.

We may now choose a set of parameter values $ {\underline{\Delta}}^T=[\Delta_0,\Delta_1,\ldots,\Delta_L]$ over which an optimum approximation is desired, yielding the matrix equation

$\displaystyle \mathbf{H}_{\underline{\Delta}}\eqsp \mathbf{V}_{\underline{\Delta}}\mathbf{C}, \protect$ (5.11)


where

$\displaystyle \mathbf{H}_{\underline{\Delta}}\isdefs \left[\begin{array}{c} \underline{h}_{\Delta_0}^T \\ [2pt] \vdots \\ [2pt] \underline{h}_{\Delta_L}^T\end{array}\right]$   and$\displaystyle \qquad
\mathbf{V}_{\underline{\Delta}}\isdefs \left[\begin{array}{c} \underline{V}_{\Delta_0}^T \\ [2pt] \vdots \\ [2pt] \underline{V}_{\Delta_L}^T\end{array}\right].

Equation (4.11) may be solved for the polynomial-coefficient matrix $ \mathbf{C}$ by usual least-squares methods. For example, in the unweighted case, with $ L\ge M$, we have

$\displaystyle \zbox {\mathbf{C}\eqsp \left(\mathbf{V}_{\underline{\Delta}}^T\mathbf{V}_{\underline{\Delta}}\right)^{-1}
\mathbf{V}_{\underline{\Delta}}^T \mathbf{H}_{\underline{\Delta}}.}

Note that this formulation is valid for finding the Farrow coefficients of any $ N$th-order variable FIR filter parametrized by a single variable $ \Delta$. Lagrange interpolation is a special case corresponding to a particular choice of $ \mathbf{H}_{\underline{\Delta}}$.
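A Python sketch of this least-squares solution (illustrative; `np.linalg.lstsq` stands in for the normal-equations formula above, and `lagrange_fd` implements Eq.(5.7)):

```python
import numpy as np

def lagrange_fd(N, Delta):
    """Closed-form Lagrange fractional-delay coefficients (Eq.(5.7))."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (Delta - k) / (n[idx] - k)
    return h

N = M = 2
deltas = np.linspace(0.5, 1.5, 9)                  # L+1 = 9 >= M+1 sample delays
H = np.array([lagrange_fd(N, d) for d in deltas])  # rows: target h at each delay
V = np.vander(deltas, M + 1, increasing=True)      # rows [1, d, ..., d^M]
C, *_ = np.linalg.lstsq(V, H, rcond=None)          # least-squares Farrow coeffs
# Each h_Delta(n) is exactly a degree-N polynomial in Delta, and M = N here,
# so the least-squares fit reproduces the targets exactly:
assert np.allclose(V @ C, H)
```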

Differentiator Filter Bank

Since, in the time domain, a Taylor series expansion of $ x(n-\Delta)$ about time $ n$ gives

\begin{eqnarray*}
x(n-\Delta) &=& x(n) -\Delta\, x^\prime(n)
+ \frac{\Delta^2}{2}\,x^{\prime\prime}(n) - \cdots \\
\;\Longleftrightarrow\quad
X_\Delta(z) &=& X(z)\left[1 - \Delta\,D(z) + \frac{\Delta^2}{2}\,D^2(z) + \cdots
+ \frac{(-\Delta)^k}{k!}\,D^k(z) + \cdots \right]
\end{eqnarray*}

where $ D(z)$ denotes the transfer function of the ideal differentiator, we see that the $ m$th filter in Eq.$ \,$(4.10) should approach

$\displaystyle C_m(z) \eqsp \frac{(-1)^m}{m!}D^m(z), \protect$ (5.12)

in the limit, as the number of terms $ M$ goes to infinity. In other words, the coefficient $ C_m(z)$ of $ \Delta^m$ in the polynomial expansion Eq.$ \,$(4.10) must become proportional to the $ m$th-order differentiator as the polynomial order increases. For any finite $ N$, we expect $ C_m(z)$ to be close to some scaling of the $ m$th-order differentiator. Choosing $ C_m(z)$ as in Eq.$ \,$(4.12) for finite $ N$ gives a truncated Taylor series approximation of the ideal delay operator in the time domain [184, p. 1748]. Such an approximation is ``maximally smooth'' in the time domain, in the sense that the first $ N$ derivatives of the interpolation error are zero at $ x(n)$. The approximation error in the time domain can be said to be maximally flat.

Farrow structures such as Fig.4.19 may be used to implement any one-parameter filter variation in terms of several constant filters. The same basic idea of polynomial expansion has been applied also to time-varying filters ( $ \Delta\leftarrow t$).

Recent Developments in Lagrange Interpolation

Franck (2008) [148] provides a nice overview of the various structures being used for Lagrange interpolation, along with their computational complexities and depths (relevant to parallel processing). He moreover proposes a novel computation having linear complexity and log depth that is especially well suited for parallel processing architectures.

Relation of Lagrange to Sinc Interpolation

For an infinite number of equally spaced samples, with spacing $ x_{k+1}-x_k = \Delta$, the Lagrangian basis polynomials converge to shifts of the sinc function, i.e.,

$\displaystyle l_k(x) \eqsp \mbox{sinc}\left(\frac{x-k\Delta}{\Delta}\right), \quad k=\ldots,-2,-1,0,1,2,\ldots,

where

$\displaystyle \mbox{sinc}(x) \isdef \frac{\sin(\pi x)}{\pi x}.
A simple argument is based on the fact that any analytic function is determined by its zeros and its value at one point. Since sinc$ (x)$ is zero on all the integers except 0, and since sinc$ (0)=1$, it must coincide with the infinite-order Lagrangian basis polynomial for the sample at $ x=0$ which also has its zeros on the nonzero integers and equals $ 1$ at $ x=0$.

The equivalence of sinc interpolation to Lagrange interpolation was apparently first published by the mathematician Borel in 1899, and has been rediscovered many times since [309, p. 325].

A direct proof can be based on the equivalence between Lagrange interpolation and windowed-sinc interpolation using a ``scaled binomial window'' [262,502]. That is, for a fractional sample delay of $ D$ samples, multiply the shifted-by-$ D$, sampled, sinc function

$\displaystyle h_s(n) =$   sinc$\displaystyle (n-D) = \frac{\sin[\pi(n-D)]}{\pi(n-D)}

by a binomial window

$\displaystyle w(n) = \left(\begin{array}{c}N\\ n\end{array}\right), \quad n=0,1,2,\ldots,N,

and normalize by [502]

$\displaystyle C(D) = (-1)^N\frac{\pi(N+1)}{\sin(\pi D)}\left(\begin{array}{c}D\\ N+1\end{array}\right),

which scales the interpolating filter to have a unit $ L_2$ norm, to obtain the $ N$th-order Lagrange interpolating filter

$\displaystyle h_D(n)=C(D)w(n)h_s(n), \quad n=0,1,2,\ldots,N

Since the binomial window converges to the Gaussian window as $ N\to\infty$, and since the window gets wider and wider, approaching a unit constant in the limit, the convergence of Lagrange to sinc interpolation can be seen.
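The scaled-binomial-window construction can be confirmed numerically. A Python sketch (illustrative; note that NumPy's `np.sinc` computes $ \sin(\pi x)/(\pi x)$, matching the definition above, and `lagrange_fd` implements Eq.(5.7)):

```python
import numpy as np
from math import comb, pi, sin

def lagrange_fd(N, D):
    """Closed-form Lagrange fractional-delay coefficients (Eq.(5.7))."""
    n = np.arange(N + 1)
    h = np.ones(N + 1)
    for k in range(N + 1):
        idx = n != k
        h[idx] *= (D - k) / (n[idx] - k)
    return h

def binom_general(D, k):
    """Generalized binomial coefficient (D choose k) for real D."""
    out = 1.0
    for i in range(k):
        out *= (D - i) / (k - i)
    return out

N, D = 3, 1.3                            # D non-integer so that sin(pi*D) != 0
n = np.arange(N + 1)
hs = np.sinc(n - D)                      # shifted, sampled sinc
w = np.array([comb(N, k) for k in n])    # binomial window
C = (-1)**N * pi * (N + 1) / sin(pi * D) * binom_general(D, N + 1)
assert np.allclose(C * w * hs, lagrange_fd(N, D))   # matches Lagrange exactly
```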

A more recent alternate proof appears in [557].
