
Taylor Series with Remainder

We repeat the derivation of the preceding section, but this time we treat the error term more carefully.

Again we want to approximate $ f(x)$ with an $ n$th-order polynomial:

$\displaystyle f(x) = f_0 + f_1 x + f_2 x^2 + \cdots + f_n x^n + R_{n+1}(x)
$

$ R_{n+1}(x)$ is the ``remainder term,'' which we will no longer assume is zero.

Our problem is to find $ \{f_i\}_{i=0}^{n}$ so as to minimize $ R_{n+1}(x)$ over some interval $ I$ containing $ x$. There are many ``optimality criteria'' we could choose. The one that falls out naturally here is called Padé approximation. Padé approximation sets the error value and its first $ n$ derivatives to zero at a single chosen point, which we take to be $ x=0$. Since all $ n+1$ ``degrees of freedom'' in the polynomial coefficients $ f_i$ are used to set derivatives to zero at one point, the approximation is termed maximally flat at that point. In other words, as $ x\to0$, the $ n$th order polynomial approximation approaches $ f(x)$ with an error that is proportional to $ \vert x\vert^{n+1}$.
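As a rough numerical illustration of this maximal flatness (a sketch only; the choice of $ f(x)=e^x$, the order $ n=3$, and the evaluation points are illustrative assumptions, not part of the derivation), one can check in Octave or Matlab that the truncation error tracks $ \vert x\vert^{n+1}$ as $ x\to0$:

% Sketch: error of the 3rd-order Taylor polynomial for exp(x) near x = 0
% behaves like |x|^(n+1).  All names and values here are illustrative.
n   = 3;
fk  = 1 ./ factorial(0:n);          % Taylor coefficients of exp(x) about x = 0
x   = logspace(-3, 0, 50);          % points approaching 0 from above
p   = polyval(fliplr(fk), x);       % f0 + f1*x + ... + fn*x^n
err = abs(exp(x) - p);              % remainder magnitude |R_{n+1}(x)|
loglog(x, err, x, x.^(n+1), '--');  % error parallels |x|^(n+1) near 0
legend('|R_{n+1}(x)|', '|x|^{n+1}');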

Padé approximation comes up elsewhere in signal processing. For example, it is the sense in which Butterworth filters are optimal [53]. (Their frequency responses are maximally flat in the center of the pass-band.) Also, Lagrange interpolation filters (which are nonrecursive, while Butterworth filters are recursive) can be shown to be maximally flat at dc in the frequency domain [82,36].

Setting $ x=0$ in the above polynomial approximation produces

$\displaystyle f(0) = f_0 + R_{n+1}(0) = f_0
$

where we have used the fact that the error is to be exactly zero at $ x=0$ in Padé approximation.

Differentiating the polynomial approximation and setting $ x=0$ gives

$\displaystyle f^\prime(0) = f_1 + R^\prime_{n+1}(0) = f_1
$

where we have used the fact that we also want the slope of the error to be exactly zero at $ x=0$.
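Writing out the derivative of the polynomial approximation makes this explicit:

$\displaystyle f^\prime(x) = f_1 + 2 f_2 x + 3 f_3 x^2 + \cdots + n f_n x^{n-1} + R^\prime_{n+1}(x),
$

so that at $ x=0$ every term vanishes except $ f_1$ and $ R^\prime_{n+1}(0)$.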

In the same way, we find

$\displaystyle f^{(k)}(0) = k! \cdot f_k + R^{(k)}_{n+1}(0) = k! \cdot f_k
$

for $ k=2,3,4,\dots,n$, where we have used the fact that the first $ n$ derivatives of the remainder term are likewise zero at $ x=0$. (The factor $ k!$ arises because differentiating $ f_k x^k$ exactly $ k$ times yields the constant $ k!\,f_k$, while lower-order terms vanish under repeated differentiation and higher-order terms vanish at $ x=0$.) Solving these relations for the desired constants yields the $ n$th-order Taylor series expansion of $ f(x)$ about the point $ x=0$:

$\displaystyle f(x) = \sum_{k=0}^n \frac{f^{(k)}(0)}{k!} x^k + R_{n+1}(x)
$

as before, but now we better understand the remainder term.
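For example, taking $ f(x)=e^x$ gives $ f^{(k)}(0)=1$ for all $ k$, so that

$\displaystyle e^x = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} + R_{n+1}(x).
$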

From this derivation, it is clear that the approximation error (remainder term) is smallest in the vicinity of $ x=0$. All degrees of freedom in the polynomial coefficients were devoted to setting the approximation error and its first $ n$ derivatives to zero at $ x=0$. As you might expect, the approximation error generally worsens as $ x$ gets farther away from 0.

To obtain a more uniform approximation over some interval $ I$ in $ x$, other kinds of error criteria may be employed. Classically, this topic has been called ``economization of series,'' or simply polynomial approximation under different error criteria. In Matlab or Octave, the function polyfit(x,y,n) will find the coefficients of a polynomial $ p(x)$ of degree n that fits the data y over the points x in a least-squares sense. That is, it minimizes

$\displaystyle \left\Vert\,R_{n+1}\,\right\Vert^2 \isdef \sum_{i=1}^{n_x} \left\vert y(i) - p(x(i))\right\vert^2
$

where $ n_x \isdef {\tt length(x)}$.
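As a sketch of how such a fit might be compared with the Taylor polynomial (the function $ f(x)=e^x$, the interval $ I=[-1,1]$, and the number of sample points below are illustrative assumptions):

% Sketch: least-squares fit via polyfit versus the Taylor polynomial
% for exp(x) over I = [-1, 1].  All choices here are illustrative.
n = 3;
x = linspace(-1, 1, 100);            % sample points spanning the interval I
y = exp(x);                          % function samples
c = polyfit(x, y, n);                % least-squares coefficients (descending powers)
t = fliplr(1 ./ factorial(0:n));     % Taylor coefficients about x = 0 (descending)
plot(x, y - polyval(c, x), x, y - polyval(t, x), '--');
legend('least-squares error', 'Taylor error');

The least-squares fit distributes the error more evenly over $ I$, whereas the Taylor polynomial concentrates its accuracy near $ x=0$.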

