Taylor Series with Remainder
We repeat the derivation of the preceding section, but this time we treat the error term more carefully.
Again we want to approximate $f(x)$ with an $n$th-order polynomial:

$$f(x) = f_0 + f_1 x + f_2 x^2 + \cdots + f_n x^n + R_{n+1}(x)$$

where $R_{n+1}(x)$ denotes the approximation error. Our problem is to find the coefficients $f_0, f_1, \ldots, f_n$ so as to minimize $R_{n+1}(x)$ over some interval containing $x = 0$. There are many
``optimality criteria'' we could choose. The one that falls out
naturally here is called Padé approximation. Padé
approximation sets the error value and its first $n$ derivatives to
zero at a single chosen point, which we take to be $x = 0$. Since all
$n+1$ ``degrees of freedom'' in the polynomial coefficients
$f_0, f_1, \ldots, f_n$ are
used to set derivatives to zero at one point, the approximation is
termed maximally flat at that point. In other words, as
$x \to 0$, the $n$th-order polynomial approximation approaches $f(x)$
with an error that is proportional to $x^{n+1}$.
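As a quick numerical sketch of this scaling behavior (using $f(x) = e^x$ as an illustrative example, not a function from the text): if the error is proportional to $x^{n+1}$, then halving $x$ should divide the error by roughly $2^{n+1}$.

```python
import math

def taylor_exp(x, n):
    """Evaluate the n-th order Taylor polynomial of exp(x) about x = 0."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# "Maximally flat" means the error behaves like x**(n+1) near x = 0,
# so halving x should divide the error by about 2**(n+1) = 16 for n = 3.
n = 3
e1 = abs(math.exp(0.1) - taylor_exp(0.1, n))
e2 = abs(math.exp(0.05) - taylor_exp(0.05, n))
ratio = e1 / e2
print(ratio)  # close to 16
```

The ratio is not exactly 16 because the remainder also contains higher-order terms ($x^{n+2}$ and beyond), but it approaches $2^{n+1}$ as $x \to 0$.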
Padé approximation comes up elsewhere in signal processing. For example, it is the sense in which Butterworth filters are optimal [53]. (Their frequency responses are maximally flat in the center of the pass-band.) Also, Lagrange interpolation filters (which are nonrecursive, while Butterworth filters are recursive) can be shown to be maximally flat at dc in the frequency domain [82,36].
Setting $x = 0$ in the above polynomial approximation produces

$$f(0) = f_0 + R_{n+1}(0) = f_0,$$

where we have used the requirement that the error be zero at $x = 0$. Differentiating the polynomial approximation and setting $x = 0$ gives

$$f^\prime(0) = f_1 + R^\prime_{n+1}(0) = f_1,$$

using the constraint that the first derivative of the error is also zero at $x = 0$.
In the same way, we find

$$f^{(k)}(0) = k! \cdot f_k + R^{(k)}_{n+1}(0) = k! \cdot f_k$$

for $k = 2, 3, 4, \dots, n$. Solving for the polynomial coefficients gives $f_k = f^{(k)}(0)/k!$. Thus, by forcing the error and its first $n$ derivatives to zero at $x = 0$, we obtain the $n$th-order Taylor series expansion of $f(x)$ about $x = 0$:

$$f(x) = \sum_{k=0}^n \frac{f^{(k)}(0)}{k!} x^k + R_{n+1}(x)$$
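The maximal-flatness conditions can be checked numerically. The following sketch (again using $e^x$ as an illustrative example) forms the remainder $R_{n+1}(x) = f(x) - \sum_{k=0}^n f_k x^k$ for $n = 3$ and estimates its value and first two derivatives at $x = 0$ by finite differences:

```python
import math

n = 3

def p(x):
    """3rd-order Taylor polynomial of exp(x) about x = 0 (f_k = 1/k!)."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def R(x):
    """Remainder R_{n+1}(x) = f(x) - p(x), here with f = exp."""
    return math.exp(x) - p(x)

# The remainder and its first n derivatives should vanish at x = 0.
# Estimate the first two derivatives by central differences with step h.
h = 1e-2
r0 = R(0.0)                               # exactly zero
r1 = (R(h) - R(-h)) / (2 * h)             # ~ R'(0)
r2 = (R(h) - 2 * R(0.0) + R(-h)) / h**2   # ~ R''(0)
print(r0, r1, r2)
```

All three values are zero up to the truncation error of the finite differences themselves, consistent with the derivation above.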
From this derivation, it is clear that the approximation error (remainder
term) is smallest in the vicinity of $x = 0$. All degrees of freedom
in the polynomial coefficients were devoted to minimizing the approximation
error and its derivatives at $x = 0$. As you might expect, the approximation
error generally worsens as $x$ gets farther away from 0.
To obtain a more uniform approximation over some interval in $x$,
other kinds of error criteria may be employed. Classically,
this topic has been called ``economization of series,'' or simply
polynomial approximation under different error criteria. In
Matlab or Octave, the function
polyfit(x,y,n) will find the coefficients of a polynomial $p(x)$ of
degree n that fits the data y over the points x in a
least-squares sense. That is, it minimizes

$$\left\Vert\,R_{n+1}\,\right\Vert^2 \triangleq \sum_{i=1}^{n_x} \left\vert y(i) - p(x(i))\right\vert^2,$$

where $n_x \triangleq \texttt{length}(x)$.
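As a sketch of the trade-off (in Python, using numpy.polyfit as a stand-in for the Matlab/Octave polyfit, and $e^x$ as an illustrative example): a least-squares fit spreads the error over the whole interval, while the Taylor polynomial concentrates its accuracy near $x = 0$, so the least-squares fit has a smaller worst-case error over the interval.

```python
import numpy as np

# Degree-3 least-squares polynomial fit to exp(x) over [-1, 1],
# analogous to polyfit(x, y, n) in Matlab/Octave.
n = 3
x = np.linspace(-1.0, 1.0, 101)
y = np.exp(x)

p = np.polyfit(x, y, n)           # least-squares coefficients, highest power first
taylor = [1/6, 1/2, 1.0, 1.0]     # Taylor coefficients of exp about 0, same ordering

err_ls = np.max(np.abs(y - np.polyval(p, x)))
err_taylor = np.max(np.abs(y - np.polyval(taylor, x)))
print(err_ls, err_taylor)  # least-squares error is smaller over the full interval
```

Near $x = 0$ the comparison reverses: there the Taylor polynomial is the more accurate of the two, which is exactly the maximally-flat property.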