
Roll Your Own Differentiation Filters

Matt McDonald
November 11, 2015

Introduction

In digital signal processing, it is often necessary to estimate the derivative of some signal or process from discretely sampled values. Such numerical derivatives are useful in applications such as edge detection, rate-of-change estimation, and optimization, and can even be used as part of quick and efficient DC-blocking filters.

Suppose, then, that we want to approximate the derivative of a function at a set of points $\{x_i:i=1,...,N\}$ from a set of sampled values $\{f(x_i):i=1,...,N\}$. A common approach is to use Taylor series to build a few approximations around a point, then combine them. For example, if the $x_i$ are evenly spaced with $h = x_{i+1}-x_i$, then we may write $$f(x_{i+1}) \approx f(x_i) + h f'(x_i) + \frac{h^2}{2}f''(x_i)$$ and $$f(x_{i-1}) \approx f(x_i) - h f'(x_i) + \frac{h^2}{2} f''(x_i).$$ If we drop the $h^2$ term in the first expansion, we can solve for $f'(x_i)$ approximately as $$f'(x_i) \approx \frac{f(x_{i+1}) - f(x_i)}{h}.$$ If we instead subtract $f(x_{i-1})$ from $f(x_{i+1})$, the even-order terms cancel, leaving $$f'(x_i) \approx \frac{f(x_{i+1}) - f(x_{i-1})}{2 h}.$$
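As a quick sanity check, here is a small Matlab sketch (the test function $\sin(x)$ and the spacing $h$ are my own choices) comparing the two formulas against the exact derivative:

% Compare forward and central differences on f(x) = sin(x),
% whose exact derivative is cos(x).
h = 0.1;                             % sample spacing (arbitrary)
x = 1.0;                             % point of interest
fwd = (sin(x+h) - sin(x))/h;         % forward difference, O(h) error
ctr = (sin(x+h) - sin(x-h))/(2*h);   % central difference, O(h^2) error
err_fwd = abs(fwd - cos(x))
err_ctr = abs(ctr - cos(x))

The central estimate should come out substantially more accurate, reflecting its second-order error term.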

Lagrange Polynomials

Building differentiation filters using Taylor series is all well and good, but what if we want higher order approximations, or maybe even formulas that are not central? We could continue to use Taylor series, but the math gets pretty messy. A more concise way is to make use of the Lagrange polynomials

$$l_j(x) = \prod_{i \neq j}\frac{(x - x_i)}{(x_j - x_i)}$$ and write $$f(x) \approx \sum_{i=1}^N f(x_i)l_i(x).$$ Substituting three points $\{x_{i-1},x_i,x_{i+1}\}$ into the above formula gives us the following approximation $$f(x) \approx \frac{(x-x_i)(x-x_{i+1})}{(x_{i-1}-x_i)(x_{i-1}-x_{i+1})}f(x_{i-1}) +\frac{(x-x_{i-1})(x-x_{i+1})}{(x_i-x_{i-1})(x_i-x_{i+1})}f(x_i)$$ $$+\frac{(x-x_{i-1})(x-x_i)}{(x_{i+1}-x_{i-1})(x_{i+1}-x_i)}f(x_{i+1}).$$ Differentiating yields the first-derivative approximation $$f'(x) \approx \frac{ 2x - x_i - x_{i+1}}{(x_{i-1} - x_i)(x_{i-1}-x_{i+1})}f(x_{i-1}) + \frac{2x-x_{i-1}-x_{i+1}}{(x_i-x_{i-1})(x_i-x_{i+1})}f(x_i)$$ $$+ \frac{2x-x_{i-1}-x_i}{(x_{i+1}-x_{i-1})(x_{i+1}-x_i)}f(x_{i+1}).$$ If we evaluate at $x = x_i$ and choose a uniform grid spacing $h = x_{i+1}-x_i$, then our formula simplifies into the familiar form $$f'(x_i) \approx \frac{f(x_{i+1}) - f(x_{i-1})}{2 h}.$$ There's no particular reason why we need to evaluate at $x_i$, though; we could instead have chosen $x_{i+1}$ and obtained the formula $$f'(x_{i+1}) \approx \frac{f(x_{i-1}) - 4f(x_i) + 3f(x_{i+1})}{2 h}$$

which allows us to approximate the derivative at the current sample using only the current and past samples, making it a causal (backward) formula.
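Here is a quick Matlab check of this backward formula (the test function $e^x$ and the step size are my own choices):

% Verify the one-sided three-point formula on f(x) = exp(x),
% whose derivative is also exp(x).
h = 0.01;
x = 0.5;    % estimate f'(x) from samples at x-2h, x-h, and x
bwd = (exp(x-2*h) - 4*exp(x-h) + 3*exp(x))/(2*h);
err = abs(bwd - exp(x))   % second-order accurate: error shrinks like h^2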

A More General Formula for Interpolating Polynomials

A nice property of polynomials is that a polynomial of degree $n$ is entirely determined by its values at $n+1$ distinct points: if two degree-$n$ polynomials agreed at $n+1$ points, their difference would be a degree-$n$ polynomial with $n+1$ roots, and hence identically zero. This means that no matter what form you write your interpolating polynomial in, any other form of the same degree passing through the same points defines the same polynomial. Because of this, we can assume that our polynomial has the simpler monomial form $$p(x) = \sum_{i=0}^n a_i x^i.$$ If $p(x)$ interpolates some function $f(x)$ at $n+1$ nodes so that $p(x_i) = f(x_i)$, then we can write the matrix equation $$ \begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} f(x_0) \\ f(x_1) \\ \vdots \\ f(x_n) \end{bmatrix}. $$ The matrix on the left is the Vandermonde matrix. Let's just write this as $$X \mathbf{a} = \mathbf{f},$$ so that $$\mathbf{a} = X^{-1} \mathbf{f}.$$ If we differentiate the interpolating polynomial analytically, we can write the matrix equation $$ \begin{bmatrix} 0 & 1 & 2x_0 & \cdots & nx_0^{n-1} \\ 0 & 1 & 2x_1 & \cdots & nx_1^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 1 & 2x_n & \cdots & nx_n^{n-1} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} f'(x_0) \\ f'(x_1) \\ \vdots \\ f'(x_n) \end{bmatrix} $$ or, more compactly, as $$\mathbf{f}' = X' \mathbf{a}.$$ Combining the two, we can write the matrix equation that maps a vector of function values to a vector of approximate derivative values: $$\mathbf{f}' = X' X^{-1} \mathbf{f}.$$ The matrix $D = X' X^{-1}$ is commonly called the pseudospectral differentiation matrix in the literature. Here's a little snippet of Matlab code to compute $D$:

function [D,V,Vx] = getDiffMat(x)
% Build the pseudospectral differentiation matrix D = X'*inv(X) for the
% nodes in x, along with the Vandermonde matrix V and its derivative
% counterpart Vx.
x = x(:).';          % force a row vector of nodes
N = numel(x)-1;      % polynomial degree
V = zeros(N+1);
Vx = zeros(N+1);
for j=0:N
    V(j+1,:)  = x(j+1).^(0:N);                % [1, x, x^2, ..., x^N]
    Vx(j+1,:) = [0, (1:N).*x(j+1).^(0:N-1)];  % [0, 1, 2x, ..., N*x^(N-1)]
end
D = Vx/V;            % solve D*V = Vx without forming inv(V) explicitly

Now, say we want to define a few differentiation filters for three evenly-spaced points. We could then call the above function like so:

D = getDiffMat(1:3)
D =
   -1.5000    2.0000   -0.5000
   -0.5000         0    0.5000
    0.5000   -2.0000    1.5000
The rows of the resulting matrix are our differentiation filters, which should look familiar if you've been following along: the second row is the central-difference formula and the third row is the backward formula we derived above. We can make $D$ as large as we want now. For example, we could use 5 points and get something like:
D = getDiffMat(1:5)
D =
   -2.0833    4.0000   -3.0000    1.3333   -0.2500
   -0.2500   -0.8333    1.5000   -0.5000    0.0833
    0.0833   -0.6667         0    0.6667   -0.0833
   -0.0833    0.5000   -1.5000    0.8333    0.2500
    0.2500   -1.3333    3.0000   -4.0000    2.0833
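Each row of $D$ is just an FIR filter, so we can slide it along a sampled signal to estimate the derivative everywhere. Below is a minimal sketch (the test signal and spacing are my own choices) that applies the central row of the 5-point matrix with conv; note that getDiffMat(1:5) assumes unit spacing, so the weights must be divided by the actual sample spacing $h$:

% Differentiate a sampled sinusoid with the central 5-point filter.
h = 0.05;
t = 0:h:1;
f = sin(2*pi*t);                    % test signal
D = getDiffMat(1:5);
w = D(3,:)/h;                       % central row, rescaled for spacing h
df = conv(f, fliplr(w), 'valid');   % derivative estimates at t(3:end-2)
exact = 2*pi*cos(2*pi*t(3:end-2));
max(abs(df - exact))                % small; error shrinks like h^4

Near the ends of the record there are not enough neighbors for the central filter, which is exactly where the one-sided rows of $D$, such as the first and last, come in handy.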

Note that the matrix $D$ is entirely defined by the choice of the $x_i$ used to build it; essentially, you can choose whatever $x_i$ you like and define a pseudospectral differentiation matrix, but some choices are better than others. Evenly spaced values are the most convenient when defining differentiation filters, but for large $N$ accuracy becomes an increasing concern as the Vandermonde matrix becomes ill-conditioned. A way around this problem is to choose the $x_i$ to be nodes derived from orthogonal polynomials that arise in Sturm-Liouville problems; common choices are the Legendre or Chebyshev Gauss-Lobatto points. Unfortunately, these points are not equidistant and so usually require another interpolation step.
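As a rough illustration of the conditioning issue (the interval, size, and comparison below are my own choices), the Chebyshev Gauss-Lobatto nodes $x_j = \cos(\pi j / N)$ are easy to generate, and their Vandermonde matrix is typically far better conditioned than the equispaced one:

% Compare Vandermonde conditioning: equispaced vs. Chebyshev
% Gauss-Lobatto nodes, both on [-1,1].
N = 20;
x_eq  = linspace(-1, 1, N+1);   % equispaced nodes
x_cgl = cos(pi*(0:N)/N);        % Chebyshev Gauss-Lobatto nodes
[~, V_eq]  = getDiffMat(x_eq);
[~, V_cgl] = getDiffMat(x_cgl);
cond(V_eq)    % grows very quickly with N
cond(V_cgl)   % grows much more slowly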


