# Derivative of a 2D function with FFTW

Started by charvellano, October 15, 2006
```I want to calculate the derivative of the function f(x,y)=sin(x)*cos(y) on
[0,6.28]x[0,6.28] (or any other function).

I take the FFT of f, multiply the coefficients by the frequencies, and
compute the inverse FFT, but there are strange oscillations in the
derivative near the boundary of the domain.
I think I don't understand the layout of the Fourier coefficients (real
and imaginary) in the array. What is the right layout?

```
```charvellano wrote:
> I want to calculate the derivative of the function f(x,y)=sin(x)*cos(y) on
> [0,6.28]x[0,6.28] (or any other function).
>
> I take the FFT of f, multiply the coefficients by the frequencies, and
> compute the inverse FFT, but there are strange oscillations in the
> derivative near the boundary of the domain.
> I think I don't understand the layout of the Fourier coefficients (real
> and imaginary) in the array. What is the right layout?

I suspect that you are making a common error.

When you take the derivative, you multiply the amplitude by the
frequency. Because of aliasing, however, you have a choice of how you
assign the frequencies, and you want the choice that has equal numbers
of positive and negative frequencies in order to minimize the slope of
the trigonometric interpolation.

It is easier to explain in 1d.

Suppose you have a DFT of length N.  By inspecting the DFT definition,
you could think of the N outputs as corresponding to (angular)
frequencies 2*pi * [0:N-1] / T, in Matlab notation, where T is your
total sampling time N*dt (dt = sampling interval).  Then, you would
take the derivative by multiplying the amplitudes by these frequencies
(times i).  This is wrong, however.

Because of aliasing, you can take the coefficients [0:N-1] and add any
integer multiple of N to them, so you have a choice of frequencies.
The best choice, for Fourier differentiation and trigonometric
interpolation, is to write them as [ [0:N/2-1], [-N/2:-1] ] (i.e. shift
the second half by -N to make it negative).  (Actually, the N/2
(Nyquist element) should be handled specially: half of its amplitude
should be "assigned" to N/2 and the other half to -N/2; from the point
of view of differentiation, this implies the N/2 term of the derivative
is simply set to zero.)  This choice of frequencies has several nice
properties: it corresponds to a trigonometric interpolation with
minimal mean-square slope, and it also interpolates real inputs to real
outputs.

Probably, you are using the [0:N-1] choice, which leads to erroneous
high-frequency terms.

Regards,
Steven G. Johnson

```
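For concreteness, a minimal MATLAB sketch of the 1-D recipe Steven describes (a sketch, assuming N is even and the domain has length 2*pi, so the angular frequencies are just the integers; the grid and variable names are illustrative):

```
N  = 64;
x  = 2*pi*(0:N-1)'/N;            % periodic grid, endpoint excluded
f  = sin(x);
k  = [0:N/2-1, 0, -N/2+1:-1]';   % aliased frequencies, Nyquist term zeroed
df = real(ifft(1i*k.*fft(f)));   % matches cos(x) to machine precision
```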
```this is my code in MATLAB (derivative with respect to the variable x of a
function). I think the product with the frequencies is OK, but there are
always oscillations at the boundary of the domain. Where is the error?

xmin=0;
ymin=0;
M=200;
N=200;
dx=6.28/M;
dy=6.28/N;
x=linspace(0,6.2832,M);
y=linspace(0,6.2832,N);

%% Assign the function %%

for j = 1:M
  for j2 = 1:N
    funz(j,j2) = sin(xmin+j*dx-dx)*cos(ymin+j2*dy-dy) + cos(xmin+j*dx-dx)*sin(ymin+j2*dy-dy);
  end
end

fftw('planner','measure')
funzfft=fft2(funz);

% frequency multipliers along x (note: "prod" shadows MATLAB's built-in prod)
for j = 1:M/2
  prod(j) = i*(j-1);
end
for j = M/2+1:M
  prod(j) = -prod(M-j+1);
end

% multiply each row of the transform by its frequency
for j = 1:M
  for j2 = 1:N
    h(j,j2) = funzfft(j,j2)*prod(j);
  end
end

deriv=real(ifft2(h));

```
```charvellano wrote:
> this is my code in MATLAB (derivative with respect to the variable x of a
> function). I think the product with the frequencies is OK, but there are
> always oscillations at the boundary of the domain. Where is the error?
>
> xmin=0;
> ymin=0;
> M=200;
> N=200;
> dx=6.28/M;
> dy=6.28/N;
> x=linspace(0,6.2832,M);
> y=linspace(0,6.2832,N);
>

Hello Charvellano,

I'm not a MATLAB guy, but one of the things to be careful about is
having a whole number of periods in the sample space. Otherwise the
leakage will show up as high frequencies, and this may be leading to
your problem. That is, it brings out the aliasing Steven mentioned.

For example, with just 5 points you would sample at

0, 72, 144, 216, and 288 degrees,

not at

0, 90, 180, 270, and 360 degrees.

So try using a more precise value for 2 pi, and make sure the highest
sample point is (N-1)/N * 2 pi.

Also try working with a 1-D case until it works properly.

IHTH,

Clay

```
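In MATLAB terms, Clay's grid amounts to the following (a sketch; N = 200 matches the poster's code):

```
N = 200;
x = 2*pi*(0:N-1)/N;   % a whole number of periods, endpoint excluded;
                      % unlike linspace(0,6.2832,N), which truncates 2*pi
                      % and (nearly) repeats the first sample at the far end
```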
```charvellano wrote:
> for j=1:M/2
> prod(j)=i*(j-1) ;
> end
> for j=M/2+1:M
> prod(j)=-prod(M-j+1);
> end

This is wrong.  Try:

prod = [ i*[0:M/2-1], 0, i*[-M/2+1:-1] ];

which matches what I suggested (for the derivative along 1 dimension of
length M).  (If you don't see the difference, try printing out the
result of your expression for M=16 compared to my expression...you have
an off-by-1 error.)

```
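A sketch of the comparison Steven suggests, for M = 16 ("mine" reproduces the poster's loops, "his" is Steven's vector; the names are illustrative):

```
M = 16;
mine = zeros(1, M);
for j = 1:M/2,   mine(j) = 1i*(j-1);     end
for j = M/2+1:M, mine(j) = -mine(M-j+1); end
his = [ 1i*(0:M/2-1), 0, 1i*(-M/2+1:-1) ];
disp([mine; his].')   % the second halves are shifted by one slot:
                      % the off-by-1 error
```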
```>
>This is wrong.  Try:
>
>prod = [ i*[0:M/2-1], 0, i*[-M/2+1:-1] ];
>

I made the changes, but these oscillations are still there.
See the images of the derivative of f(x,y)=sin(x) (I know there is only one
variable, but it is perfect for seeing the oscillations):

http://utenti.lycos.it/fragarghy/sinx2.jpg
http://utenti.lycos.it/fragarghy/sinx.jpg

It seems the function is the correct derivative cos(x), apart from the
oscillations. I also plot the original function sin(x) and it is perfect.

I notice that if I use the code

for j=1:M/2
prod(j)=2*i*(j-1) ;
end
for j=M/2+1:M
prod(j)=prod(M-j+1);
end

everything is OK, in the sense that the oscillations disappear, but I
don't know why, because I think that product of frequencies is not
mathematically correct.

```
```On 2006-10-16 17:11:56 -0300, "charvellano" <francesco_gargano@yahoo.it> said:

>
>>
>> This is wrong.  Try:
>>
>> prod = [ i*[0:M/2-1], 0, i*[-M/2+1:-1] ];
>>
>
>
> I made the changes, but these oscillations are still there.
> See the images of the derivative of f(x,y)=sin(x) (I know there is only one
> variable, but it is perfect for seeing the oscillations):
> http://utenti.lycos.it/fragarghy/sinx2.jpg
> http://utenti.lycos.it/fragarghy/sinx.jpg
>
> It seems the function is the correct derivative cos(x), apart from the
> oscillations. I also plot the original function sin(x) and it is perfect.
>
> I notice that if I use the code
> for j=1:M/2
> prod(j)=2*i*(j-1);
> end
> for j=M/2+1:M
> prod(j)=prod(M-j+1);
> end
>
> everything is OK, in the sense that the oscillations disappear, but I
> don't know why, because I think that product of frequencies is not
> mathematically correct.

The operational calculus for the real line is not the same as the
operational calculus for the period indices. It is a minor algebraic
exercise to work out the Fourier-transform pair for the forward
difference.

The multiplier you are looking for will be of the nature of

exp( - 2 pi i k / N ) - 1

where i is the imaginary unit (often j for those who prefer i for current),
k is a "frequency" index and N is the size of the finite discrete Fourier
transform.

```
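A sketch checking Gordon's pairing numerically. Note the sign convention: with MATLAB's fft (forward transform uses exp(-2*pi*i*k*n/N)), the forward difference f(n+1) - f(n) pairs with exp(+2*pi*i*k/N) - 1; the sign in Gordon's formula corresponds to the opposite transform convention:

```
N = 8;
f = randn(N, 1);
k = (0:N-1)';
g = ifft(fft(f) .* (exp(2i*pi*k/N) - 1));   % difference via the DFT
max(abs(g - (circshift(f, -1) - f)))        % ~ machine epsilon
```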
```Gordon Sande wrote:
> The operational calculus for the real line is not the same as the
> operational calculus for the period indices. It is a minor algebraic
> exercise to work out the Fourier-transform pair for the forward
> difference.
>
> The multiplier you are looking for will be of the nature of
>
>   exp( - 2 pi i k / N ) - 1
>
> where i is the imaginary unit (often j for those who prefer i for current),
> k is a "frequency" index and N is the size of the finite discrete Fourier
> transform.

Hi Gordon, despite your experience in DSP, I'm inclined to disagree
with your advice.

The formula you gave is equivalent to doing a first-order
finite-difference approximation for the derivative.  If this is what
you want, you should just do it in the time-domain where it is O(N).

The advantage of doing the derivative by multiplying the DFT by i*k
(appropriately aliased), i.e. differentiating the trigonometric
interpolation polynomial, is that it can achieve "spectral accuracy"
for smooth functions (i.e. for a sampled version of a smooth function,
the error in the derivative decreases at least exponentially fast with
the density of sample points), and is much more accurate even if your
function is only a few times differentiable.  The formula you give is
only first-order accurate regardless of how smooth the function is.
This difference in accuracy is almost the whole point of doing
FFT-based differentiation (e.g. in spectral methods for PDEs).

Regards,
Steven G. Johnson

PS. Regarding the "ripples" the poster keeps seeing: maybe this is just
the Gibbs phenomenon, of course, especially if the poster has a
discontinuous image (in which case he should think hard about whether
FFT-based differentiation is appropriate).  It seemed prudent to
eliminate the blatant bugs first, however.

```
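A small sketch of the accuracy gap Steven describes, using the smooth periodic test function exp(sin(x)) (the function and grid sizes are illustrative):

```
for N = [8 16 32 64]
  x  = 2*pi*(0:N-1)'/N;
  f  = exp(sin(x));
  fp = cos(x).*exp(sin(x));              % exact derivative
  k  = [0:N/2-1, 0, -N/2+1:-1]';
  ds = real(ifft(1i*k.*fft(f)));         % spectral: error falls exponentially
  dd = (circshift(f,-1) - f)/(2*pi/N);   % forward difference: error ~ 1/N
  fprintf('N=%3d  spectral %.1e  fwd-diff %.1e\n', ...
          N, max(abs(ds-fp)), max(abs(dd-fp)));
end
```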
```stevenj@alum.mit.edu wrote:
> The advantage of doing the derivative by multiplying the DFT by i*k
> (appropriately aliased), i.e. differentiating the trigonometric
> interpolation polynomial, is that it can achieve "spectral accuracy"
> for smooth functions (i.e. for a sampled version of a smooth function,
> the error in the derivative decreases at least exponentially fast with
> the density of sample points), and is much more accurate even if your
> function is only a few times differentiable.

I should mention that this smoothness must include the periodic
boundaries; if you have non-periodic data, all bets are off.  That's
why FFT differentiation is most useful in spectral methods for PDEs
with periodic or related boundary conditions.

(And Gauss himself first developed an FFT in the context of periodic
astronomical observations that he wanted to interpolate.)

Steven

```
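A sketch of the periodicity caveat: spectrally differentiating f(x) = x, which jumps at the periodic boundary, produces exactly the kind of ripples the poster describes:

```
N  = 64;
x  = 2*pi*(0:N-1)'/N;
k  = [0:N/2-1, 0, -N/2+1:-1]';
df = real(ifft(1i*k.*fft(x)));   % exact derivative is 1, but the ends
plot(x, df)                      % oscillate wildly (Gibbs)
```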
```On 2006-10-16 23:11:28 -0300, stevenj@alum.mit.edu said:

> Gordon Sande wrote:
>> The operational calculus for the real line is not the same as the
>> operational calculus for the period indices. It is a minor algebraic
>> exercise to work out the Fourier-transform pair for the forward
>> difference.
>>
>> The multiplier you are looking for will be of the nature of
>>
>> exp( - 2 pi i k / N ) - 1
>>
>> where i is the imaginary unit (often j for those who prefer i for current),
>> k is a "frequency" index and N is the size of the finite discrete Fourier
>> transform.
>
> Hi Gordon, despite your experience in DSP, I'm inclined to disagree
> with your advice.
>
> The formula you gave is equivalent to doing a first-order
> finite-difference approximation for the derivative.  If this is what
> you want, you should just do it in the time-domain where it is O(N).

Given that I said it was the forward difference it is hardly surprising
that it is the forward difference.

> The advantage of doing the derivative by multiplying the DFT by i*k
> (appropriately aliased), i.e. differentiating the trigonometric
> interpolation polynomial, is that it can achieve "spectral accuracy"
> for smooth functions (i.e. for a sampled version of a smooth function,
> the error in the derivative decreases at least exponentially fast with
> the density of sample points), and is much more accurate even if your
> function is only a few times differentiable.  The formula you give is
> only first-order accurate regardless of how smooth the function is.
> This difference in accuracy is almost the whole point of doing
> FFT-based differentiation (e.g. in spectral methods for PDEs).

If you want to use some other difference approximation, it is a
straightforward algebraic exercise to determine what the Fourier-transform
pair will be. The result will be more complicated, but of the same general
form, since the first-order term will always be there.

The point is that while the operational calculus of the real continuous
case has many similarities to that of the discrete periodic case, the two
situations are not the same. The modifications needed to go from one to
the other are usually not profound, but they are not absent either.

The original poster wanted to go from one to the other without
modification, and compounded it by ignoring the most obvious difference,
that of aliasing. The advantage of doing the forward difference as an
exercise is that it shows how the algebra does the aliasing "automatically".
Once past the tutorial aspects, it is then time to worry about whether a
first-order approximation is appropriate. The impression I got from the
original posting was that this was an attempt at seeing the operational
calculus in operation, not one of worrying about the details of optimizing
some calculation. It seemed to be a case of wanting to walk before
attempting to run.

> Regards,
> Steven G. Johnson
>
> PS. Regarding the "ripples" the poster keeps seeing: maybe this is just
> the Gibbs phenomenon, of course, especially if the poster has a
> discontinuous image (in which case he should think hard about whether
> FFT-based differentiation is appropriate).  It seemed prudent to
> eliminate the blatant bugs first, however.

```