Reply by charvellano October 21, 2006
I found the error. It was very stupid, because my code was of course correct
(I work with FFTs often), but I was using an imprecise value for the Greek
letter pi (only 6.28). In Matlab I set the value to 'pi' and the oscillations
disappear. Thanks to all for your interest.
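A quick NumPy sketch (added for illustration; the thread's own code is Matlab) of why the truncated constant matters: building the periodic grid from 6.28 instead of 2*pi makes the sampled sine slightly non-periodic over the grid, and FFT differentiation then produces exactly the kind of oscillations described in the thread.

```python
import numpy as np

def spectral_deriv(f):
    # FFT differentiation on a uniform grid covering one period:
    # multiply the DFT by i*k with properly aliased integer wavenumbers
    # (the Nyquist mode is zeroed, as appropriate for a derivative).
    N = len(f)
    k = 1j * np.concatenate([np.arange(0, N // 2), [0],
                             np.arange(-N // 2 + 1, 0)])
    return np.fft.ifft(np.fft.fft(f) * k).real

N = 64
errs = {}
for two_pi in (2 * np.pi, 6.28):       # exact constant vs. the truncated "6.28"
    x = np.arange(N) * two_pi / N      # grid built from the chosen constant
    errs[two_pi] = np.max(np.abs(spectral_deriv(np.sin(x)) - np.cos(x)))
# errs[2*np.pi] is at machine-precision level; errs[6.28] is many orders
# of magnitude larger -- the "oscillations" in the thread.
```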


Reply by Scott Seidman October 20, 2006
stevenj@alum.mit.edu wrote in news:1161362721.191554.44400@f16g2000cwb.googlegroups.com:

> This strongly suggests that you are using the wrong sign for your
> imaginary part, or that you have the wrong symmetry in your
> frequencies, or something of that sort.
Exactly why I suggested going over the code for transpose operators. A '
will give you a complex conjugate in Matlab, while a .' won't. I think
this bites everyone in the butt once and only once.

--
Scott
Reverse name to reply
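For anyone following along in NumPy rather than Matlab, the same trap can be sketched like this (an illustrative example, not from the thread): accidentally conjugating a spectrum, which is what Matlab's ' does to a complex array, time-reverses the recovered signal.

```python
import numpy as np

# Matlab's ' is the conjugate transpose; .' is the plain transpose.
# Conjugating a spectrum by accident flips the DFT's sign convention,
# so the inverse transform returns a time-reversed signal.
rng = np.random.default_rng(0)
f = rng.standard_normal(8)
F = np.fft.fft(f)

ok = np.fft.ifft(F).real           # round-trips back to f
oops = np.fft.ifft(F.conj()).real  # conjugated spectrum: f reversed (mod N)
```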
Reply by October 20, 2006
charvellano wrote:
> I notice that the problem of oscillations is bound only to certain
> functions.
> For example, for the functions f(x,y)=sin(x)^2 and f(x,y)=cos(x)^2 it's
> all OK. But for the functions f(x,y)=sin(x) and f(x,y)=sin(x)*sin(y)
> there are problems. It's very strange.
Notice that you have a difference between even and odd functions. An even
function like sin(x)^2 has a purely real and even DFT, whereas an odd
function is purely imaginary and odd. This strongly suggests that you are
using the wrong sign for your imaginary part, or that you have the wrong
symmetry in your frequencies, or something of that sort.

Anyway, rest assured that it is a bug in your code, and ultimately you
have to learn how to debug things yourself (remote debugging is offtopic
for this newsgroup). Since you have one working code using Numerical
Recipes, and Numerical Recipes and Matlab or FFTW compute exactly the same
thing with exactly the same data layout, you simply have to step through
your code line by line and figure out where it begins to differ.

Good luck,
Steven G. Johnson
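The even/odd symmetry Steven describes is easy to check numerically; here is a NumPy sketch (illustrative; the thread's code is Matlab):

```python
import numpy as np

N = 32
x = 2 * np.pi * np.arange(N) / N

# Even function on the periodic grid (f(-x) = f(x)): DFT is purely real.
F_even = np.fft.fft(np.sin(x) ** 2)
# Odd function (f(-x) = -f(x)): DFT is purely imaginary.
F_odd = np.fft.fft(np.sin(x))
```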
Reply by charvellano October 20, 2006
I notice that the problem of oscillations is bound only to certain
functions.
For example, for the functions f(x,y)=sin(x)^2 and f(x,y)=cos(x)^2 it's
all OK. But for the functions f(x,y)=sin(x) and f(x,y)=sin(x)*sin(y)
there are problems. It's very strange.
Reply by Scott Seidman October 18, 2006
"charvellano" <francesco_gargano@yahoo.it> wrote in news:zc2dnXGEcdTFVqjYnZ2dnUVZ_oydnZ2d@giganews.com:

> Thanks Mr. Steven, but in the 1d case there's no problem, while in 2d
> there is always the same problem! It's incredible! I compared the
> amplitude of each mode in 2d with both Numerical Recipes and FFTW. I
> notice that for the function sin(x)*cos(y)+sin(y)*cos(x) (corresponding
> to the imaginary part of exp(ix)*exp(iy), the 1-1 Fourier mode),
> Numerical Recipes sets all the modes other than 1-1 to 0 (more
> precisely, they are of order 10^(-14)), while FFTW does not set these
> modes exactly to 0. Could this be the problem? I know the layout of the
> modes in 2d and higher dimensions, but I have problems only when coding
> with FFTW. Thanks for your help.
I haven't looked at the problem in depth, but I advise you to make a
thorough pass through your Matlab code and make sure you're not doing
anything silly, like using the ' transpose operator, which also takes a
complex conjugate.

--
Scott
Reverse name to reply
Reply by charvellano October 18, 2006
Thanks Mr. Steven, but in the 1d case there's no problem, while in 2d
there is always the same problem! It's incredible! I compared the
amplitude of each mode in 2d with both Numerical Recipes and FFTW. I
notice that for the function sin(x)*cos(y)+sin(y)*cos(x) (corresponding
to the imaginary part of exp(ix)*exp(iy), the 1-1 Fourier mode),
Numerical Recipes sets all the modes other than 1-1 to 0 (more precisely,
they are of order 10^(-14)), while FFTW does not set these modes exactly
to 0. Could this be the problem?
I know the layout of the modes in 2d and higher dimensions, but I have
problems only when coding with FFTW. Thanks for your help.



Reply by October 17, 2006
charvellano wrote:
> Is there a solution to my problem? Your help (thanks) wasn't successful.
> I have no problem with another famous FFT (Numerical Recipes, slower but
> perfect), but I need to program with FFTW.
If you have no problem with Numerical Recipes, then you screwed up in
porting your code to Matlab, because Numerical Recipes and Matlab (and
FFTW) use exactly the same layout of the frequency components, and compute
exactly the same thing. The only possible differences are:

a) you are using a different sign convention for the forward/inverse
transforms in Numerical Recipes. (The Matlab fft function corresponds to
the Numerical Recipes four1 function with isign=-1.) This is easy to fix
(just flip the sign of i when you multiply the frequencies).

b) you are using the real-input FFT from Numerical Recipes, which only
computes half the spectrum compared to the full complex FFT (as in
Matlab's fft function), and so you didn't have to think about the aliasing
of the negative half of the spectrum before.

I would suggest debugging the 1d case first, before doing the 2d case.
For example, the following should work:

N = 50; % number of sample points
x = linspace(0,2*pi,N+1); x = x(1:N); % periodic x coordinate (note that 2*pi == 0)
f = 1 ./ (2 + cos(x)); % example function to be differentiated
k = i * [ 0:N/2-1, 0, -N/2+1:-1 ]; % frequencies (properly aliased), scaled by 2*pi*i/(N*dx)
dfdx = ifft(fft(f) .* k); % the derivative

Then, if you compare to the analytical derivative, you see the error is
pretty darn small (since this is a smooth function, FFT differentiation is
extremely accurate):

max(abs(dfdx - sin(x) ./ (2 + cos(x)).^2))
ans = 8.5432e-14

Regards,
Steven G. Johnson
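For reference, Steven's snippet ports directly to NumPy (an illustrative translation, not part of the original post); the error comes out at the same machine-precision scale:

```python
import numpy as np

N = 50                                         # number of sample points
x = 2 * np.pi * np.arange(N) / N               # periodic x coordinate (2*pi == 0)
f = 1.0 / (2.0 + np.cos(x))                    # example function to differentiate
k = 1j * np.concatenate([np.arange(0, N // 2), [0],
                         np.arange(-N // 2 + 1, 0)])  # properly aliased frequencies
dfdx = np.fft.ifft(np.fft.fft(f) * k).real     # the derivative

# Error vs. the analytic derivative sin(x)/(2+cos(x))^2 is ~1e-13.
err = np.max(np.abs(dfdx - np.sin(x) / (2.0 + np.cos(x)) ** 2))
```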
Reply by charvellano October 17, 2006
Is there a solution to my problem? Your help (thanks) wasn't successful.
I have no problem with another famous FFT (Numerical Recipes, slower but
perfect), but I need to program with FFTW.


Reply by October 17, 2006
Gordon Sande wrote:
> If you want to use some other difference approximation then it is a
> straightforward algebraic exercise to determine what the Fourier
> transform pair will be. The result will be more complicated but of the
> same general form, as the first order term will always be there.
No fixed-order finite difference approximation will match Fourier
differentiation in accuracy, given a smooth function to be differentiated.
That is the fundamental difference between the two formulations.
> The point is that while the real continuous operational calculus has
> many similarities to the operational calculus of the discrete periodic
> case, the two situations are not the same. The modifications to go from
> one to the other are usually not profound but are not absent.
The point is that the poster asked about the derivative, not the discrete calculus. It's fine to *explicitly* change the subject or recommend a different approach, of course; difference approximations are, naturally, simpler (although there's not much point in doing them by FFTs). What I object to is your implication that there is something inherently *wrong* about using ordinary calculus to do FFT differentiation, when the opposite is the case.
> The original poster wanted to go from one to the other without
> modification and compounded it by ignoring the most obvious difference,
> that of aliasing. The advantage of doing the forward difference as an
> exercise is that it shows how the algebra does the aliasing
> "automatically".
It's fine to give exercises, but only when you explain what the point of
the exercise is, and don't imply that the answer to exercise A is given by
the quite different exercise B.

Steven
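Steven's accuracy claim is easy to verify numerically; a NumPy sketch (illustrative, assuming the smooth periodic test function exp(sin(x))):

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.exp(np.sin(x))                 # smooth periodic test function
exact = np.cos(x) * f                 # its analytic derivative

# Spectral derivative: multiply the DFT by i*k (Nyquist mode zeroed).
k = 1j * np.concatenate([np.arange(0, N // 2), [0],
                         np.arange(-N // 2 + 1, 0)])
spec = np.fft.ifft(np.fft.fft(f) * k).real

# First-order forward difference on the same grid.
h = 2 * np.pi / N
fd = (np.roll(f, -1) - f) / h

spec_err = np.max(np.abs(spec - exact))
fd_err = np.max(np.abs(fd - exact))
# spec_err is at machine precision; fd_err is O(h), vastly larger.
```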
Reply by Gordon Sande October 17, 2006
On 2006-10-16 23:11:28 -0300, stevenj@alum.mit.edu said:

> Gordon Sande wrote:
>> The operational calculus for the real line is not the same as the
>> operational calculus for the period indices. It is a minor algebraic
>> exercise to figure out the Fourier transform pair for the forward
>> difference.
>>
>> The multiplier you are looking for will be of the nature of
>>
>> exp( - 2 pi i k / N ) - 1
>>
>> where i is the imaginary unit (often j for those who prefer i for
>> current), k is a "frequency" index and N is the size of the finite
>> discrete Fourier transform.
>
> Hi Gordon, despite your experience in DSP, I'm inclined to disagree
> with your advice.
>
> The formula you gave is equivalent to doing a first-order
> finite-difference approximation for the derivative. If this is what
> you want, you should just do it in the time-domain where it is O(N).
Given that I said it was the forward difference it is hardly surprising that it is the forward difference.
> The advantage of doing the derivative by multiplying the DFT by i*k
> (appropriately aliased), i.e. differentiating the trigonometric
> interpolation polynomial, is that it can achieve "spectral accuracy"
> for smooth functions (i.e. for a sampled version of a smooth function,
> the error in the derivative decreases at least exponentially fast with
> the density of sample points), and is much more accurate even if your
> function is only a few times differentiable. The formula you give is
> only first-order accurate regardless of how smooth the function is.
> This difference in accuracy is almost the whole point of doing
> FFT-based differentiation (e.g. in spectral methods for PDEs).
If you want to use some other difference approximation then it is a
straightforward algebraic exercise to determine what the Fourier transform
pair will be. The result will be more complicated but of the same general
form, as the first order term will always be there.

The point is that while the real continuous operational calculus has many
similarities to the operational calculus of the discrete periodic case,
the two situations are not the same. The modifications to go from one to
the other are usually not profound but are not absent.

The original poster wanted to go from one to the other without
modification and compounded it by ignoring the most obvious difference,
that of aliasing. The advantage of doing the forward difference as an
exercise is that it shows how the algebra does the aliasing
"automatically". Once past the tutorial aspects it is then time to worry
about issues of whether a first order approximation is what is
appropriate.

The impression I got from the original posting was that this was an
attempt at seeing the operational calculus in operation and not one of
worrying about the details of optimizing some calculation. It seemed to be
a case of wanting to walk before attempting to run.
> Regards,
> Steven G. Johnson
>
> PS. Regarding the "ripples" the poster keeps seeing: maybe this is just
> a Gibbs phenomenon, of course, especially if the poster has a
> discontinuous image (in which case he should think hard about whether
> FFT-based differentiation is appropriate). It seemed prudent to
> eliminate the blatant bugs first, however.
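As a footnote, Gordon's frequency-domain multiplier can be checked directly; here is a NumPy sketch (illustrative; note that NumPy's forward transform uses exp(-2*pi*i*k*n/N), so the sign in the exponent flips relative to the formula quoted above):

```python
import numpy as np

# Multiplying the DFT by exp(2*pi*i*k/N) - 1 implements the forward
# difference f[n+1] - f[n]. The sign of the exponent depends on the FFT's
# sign convention; with the opposite (e.g. NR four1, isign=+1) convention
# it would be exp(-2*pi*i*k/N) - 1, as in the quoted formula.
N = 16
rng = np.random.default_rng(1)
f = rng.standard_normal(N)

kk = np.arange(N)
mult = np.exp(2j * np.pi * kk / N) - 1
via_dft = np.fft.ifft(np.fft.fft(f) * mult).real
direct = np.roll(f, -1) - f            # forward difference, periodic wraparound
```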