DSPRelated.com
Forums

Phase of FFT compared to phase of Sinusoid

Started by Unknown March 21, 2006
Thanks guys, I have worked out the problem.

When I did fft(cos(2 Pi x/16)) with x = 0 to 15, the phase was correct.

My problem came from the fact that my frequency was not an integer
number of cycles over the record.

This gives spectral leakage into neighbouring bins and the phase is no
longer correct.

But say the correct frequency is at f = 15.125, and there is a pi phase
jump between bins 15 and 16. If I interpolate the phase by doing 0.125 x pi =
0.393, then add the phase I would expect (phi), this gives the correct
phase!


Thanks for your help Mike, sometimes I forget the best advice is to keep
it simple!

Cheers,


Peter
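Peter's fix can be illustrated with a small self-contained sketch (pure Python with a direct DFT, not his actual C++/FFTW setup; the sizes N = 16 and N = 64 and the phases are made-up illustration values). For the off-bin case the sketch uses a complex exponential rather than a real cosine, so that the negative-frequency image does not also leak into the bin; with a rectangular window the measured phase at bin k is then offset by exactly pi*delta*(N-1)/N, which for delta = 0.125 is (up to sign conventions) the 0.125 x pi correction Peter describes.

```python
import cmath
import math

def dft(x):
    """Direct O(N^2) DFT -- fine for these small illustrative sizes."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Case 1: an integer number of cycles over the record.
# The phase at the peak bin is exactly the generating phase phi.
N, k, phi = 16, 1, 0.5
x = [math.cos(2 * math.pi * k * n / N + phi) for n in range(N)]
phase1 = cmath.phase(dft(x)[k])
print(phase1)                    # ~0.5, i.e. phi

# Case 2: frequency k + delta bins (complex exponential).  The
# rectangular window offsets the phase measured at bin k by
# pi*delta*(N-1)/N, which for small delta is roughly pi*delta.
N, k, delta, phi = 64, 15, 0.125, 0.7
x = [cmath.exp(1j * (2 * math.pi * (k + delta) * n / N + phi))
     for n in range(N)]
measured = cmath.phase(dft(x)[k])
recovered = measured - math.pi * delta * (N - 1) / N
print(recovered)                 # ~0.7, i.e. phi
```

For a real cosine the correction is only approximate near the edges of the spectrum, since the leakage from the negative-frequency image also perturbs the bin.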


David Kirkland wrote:
> peter.moreton@gmail.com wrote:
>> Hi everyone,
>>
>> I'm generating a sinusoid of the form cos(wt + phi), then taking the
>> FFT.
>>
>> I was expecting to find the phase of the FFT (at the frequency peak)
>> allows me to determine 'phi' in the generating sinusoid, but there
>> appears to be little correlation!
>>
>> For example,
>>
>> plotting cos(2 Pi x/10) from x=0 to 2 million (with step 1), the phase
>> at the FFT is 2.82?!! Surely the phase should be zero since the initial
>> phase at x=0 was zero?
>>
>> Thanks for your help!
>>
>> Peter Moreton
>
> So you're saying you took a 2 million point FFT? I have my doubts about
> this. Could you describe exactly what you did.
>
> Cheers,
> David
David,

How old is your computer?

Just curious,

John
john wrote:
> David Kirkland wrote:
>> peter.moreton@gmail.com wrote:
>>> Hi everyone,
>>>
>>> I'm generating a sinusoid of the form cos(wt + phi), then taking the
>>> FFT.
>>>
>>> I was expecting to find the phase of the FFT (at the frequency peak)
>>> allows me to determine 'phi' in the generating sinusoid, but there
>>> appears to be little correlation!
>>>
>>> For example,
>>>
>>> plotting cos(2 Pi x/10) from x=0 to 2 million (with step 1), the phase
>>> at the FFT is 2.82?!! Surely the phase should be zero since the initial
>>> phase at x=0 was zero?
>>>
>>> Thanks for your help!
>>>
>>> Peter Moreton
>>
>> So you're saying you took a 2 million point FFT? I have my doubts about
>> this. Could you describe exactly what you did.
>>
>> Cheers,
>> David
>
> David,
>
> How old is your computer?
>
> Just curious,
>
> John
I'm not questioning that it can be done, but rather why? Alarm bells ring
when someone says "I'm doing a 2 million point FFT."

Cheers,
David
in article q%_Tf.3050$qX6.117333@news20.bellglobal.com, David Kirkland at
spam@netscape.net wrote on 03/21/2006 17:21:

> john wrote:
>> David Kirkland wrote:
...
>>> So you're saying you took a 2 million point FFT? I have my doubts about
>>> this. Could you describe exactly what you did.
>>
>> How old is your computer?
>>
>> Just curious,
>
> I'm not questioning that it can be done, but rather why? Alarm bells
> ring when someone says I'm doing a 2 million point FFT.
it's not just the time it takes to perform a 2M FFT (i don't see that as
implausible) but also the buildup of quantization error (there are
log2(2M) = 21 passes in a 2M FFT) and the quantization error of the
twiddle factor coefficients (lessee, cos(2*pi/2^21) = ?). i can see lots
of problems with a 2M FFT unless something special is done to deal with
numerical problems.

StevenJ, is FFTW okay with a size of 2^21? what are the limits of FFTW?
i presume we're using double-precision floating point (8 bytes).

--
r b-j    rbj@audioimagination.com

"Imagination is more important than knowledge."
The time is pretty negligible to be honest (at least for what I'm
doing); I'm using the FFTW libraries in C++ with double precision and it
takes a second or two to do the FFT, much less than the time it takes
to read in the data from disk.

I haven't worried about any quantisation errors due to the size of the
data set or the like, but if you have any advice or information I'd be
very grateful. I'm working at the 'proof of concept' stage, and this is
my first foray into discrete Fourier transforms!

Cheers 


Peter

> StevenJ, is FFTW okay with size of 2^21? what are the limits of FFTW? i
> presume we're using double-precision floating point (8 bytes).
A 2^21 FFT is no problem. Regarding time: on my 2GHz P-IV, with FFTW
3.1, in double precision with SSE2, it takes less than 1/3 of a second.

Regarding accuracy, I think we've had this discussion before--perhaps
you're too pessimistic because you're used to fixed-point FFTs, where
the rms error grows as O(sqrt(N)). A floating-point FFT has rms errors
that grow only as O(sqrt(log(N))). This is borne out in practice by our
accuracy benchmarks (www.fftw.org/accuracy). I just ran our accuracy
benchmark for size 2^21 on a P-IV in double precision, and the rms (L2)
relative error (for random inputs compared to an arbitrary-precision
FFT; see our web page for methodology) was only 3.03e-16.

There are people around who regularly do billion-point FFTs in double
precision for analyzing astronomical data. Those kinds of sizes are
also not uncommon in multidimensional FFTs for spectral PDE solvers.

Cordially,
Steven G. Johnson
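The flavor of this accuracy claim can be checked without FFTW, using a throwaway pure-Python radix-2 FFT. This is not FFTW's benchmark methodology (which compares against an arbitrary-precision reference); a forward/inverse round trip is a cruder proxy, and the size here is 2^12 rather than 2^21 only to keep interpreted Python fast enough. Even so, after 24 butterfly passes the L2 round-trip error stays within a small multiple of machine epsilon rather than growing like sqrt(N):

```python
import cmath
import math
import random

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (len(x) must be a power of 2)."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    t = [cmath.exp(-2j * math.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + t[k] for k in range(N // 2)] +
            [even[k] - t[k] for k in range(N // 2)])

def ifft(X):
    """Inverse FFT via the conjugation trick."""
    N = len(X)
    y = fft([z.conjugate() for z in X])
    return [z.conjugate() / N for z in y]

random.seed(0)
N = 1 << 12                      # 4096 points: 12 passes forward, 12 back
x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
y = ifft(fft(x))

# L2 relative round-trip error after 24 passes of double-precision
# rounding in both transforms.
err = math.sqrt(sum(abs(a - b) ** 2 for a, b in zip(x, y)) /
                sum(abs(a) ** 2 for a in x))
print(err)                       # on the order of 1e-15
```

A round trip can in principle hide a systematic error common to both directions, which is why FFTW's published benchmarks compare against an arbitrary-precision transform instead.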
in article 1143049687.866699.309660@e56g2000cwe.googlegroups.com,
stevenj@alum.mit.edu at stevenj@alum.mit.edu wrote on 03/22/2006 12:48:

>> StevenJ, is FFTW okay with size of 2^21? what are the limits of FFTW? i
>> presume we're using double-precision floating point (8 bytes).
>
> A 2^21 FFT is no problem.
...
> Regarding accuracy, I think we've had this discussion before--perhaps
> you're too pessimistic because you're used to fixed-point FFTs where
> the rms error grows as O(sqrt(N)). A floating-point FFT has rms errors
> that grow only as O(sqrt(log(N))).
no, i figured that out a year ago when we were talking about it:
http://groups.google.com/group/comp.dsp/msg/7b05b24e56c4f445

it's just that double-precision still has a finite number of bits and
cos(theta) gets very close to 1 when theta -> 0.

...
> There are people around who regularly do billion-point FFTs in double
> precision for analyzing astronomical data. Those kinds of sizes are
> also not uncommon in multidimensional FFTs for spectral PDE solvers.
what do they do for the twiddle factors for these billion-point FFTs?
they need better than double precision. do they manipulate the
cos(theta) and sin(pi/2-theta) twiddles to be something like
1-2*sin^2(theta/2) for small theta?

on my computer, the difference between cos(2*pi*2^(-26)) and 1 is
nothing. that is for a 64M FFT. how do they do 2^30? triple-precision
floats?

--
r b-j    rbj@audioimagination.com

"Imagination is more important than knowledge."
> what do they do for the twiddle factors for these billion-point FFTs? they
> need better than double precision.
No, they don't. As long as the twiddle factors are computed with a
precision that is independent of N, the logarithmic error scaling
applies. And if you use library sin/cos routines you get essentially
machine precision for all N.

There is no special problem with cosine. Yes, cos(2*pi/N) = 1 for
N >= 2^30 or so, but so what? It is still computed accurately to machine
precision (14-16 decimal places), and the corresponding sin(2*pi/N) is
accurate and is certainly not zero, so if \omega is the root of unity
exp(2*pi*i/N), then \omega-1 is computed to nearly machine precision for
essentially all N.

You only really run into problems if you are using trigonometric
recurrences rather than library routines (although there is one
rarely-used trig recurrence with logarithmic error, albeit requiring
logarithmic storage as well).
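The recurrence problem is easy to exhibit with a stdlib-only sketch (a made-up size of 2^16; the exact error magnitudes are machine-dependent): generating twiddles by repeatedly multiplying by exp(-2*pi*i/N) compounds rounding error step by step, while a fresh library sin/cos call per twiddle stays near machine precision.

```python
import cmath
import math

N = 1 << 16
w1 = cmath.exp(-2j * math.pi / N)

# Naive trigonometric recurrence: w_k = w_{k-1} * w_1.
# Each complex multiply adds rounding error, and the tiny angle and
# magnitude errors in w1 itself accumulate over the N steps.
w = 1 + 0j
rec_err = 0.0
for k in range(N):
    exact = cmath.exp(-2j * math.pi * k / N)  # library call, ~machine precision
    rec_err = max(rec_err, abs(w - exact))
    w *= w1
print(rec_err)   # drifts orders of magnitude beyond machine epsilon
```

The stable recurrences alluded to above trade extra storage or extra arithmetic to keep this drift logarithmic instead of (roughly) linear in N.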
> on my computer, the difference between cos(2*pi*2^(-26)) and 1 is
> nothing.
You need to get a new computer (or a new math library). 1 - cos(x)
should be about x^2/2 for very small x. For x = 2*pi/2^26, x^2/2 is
~4.3830e-15, while 1 - cos(x) on my computer (a PowerPC with no
extended-precision registers) gives ~4.3299e-15 in double precision --
quite good agreement.

Not that it matters, since for small x the accuracy of sin(x) ~ x is
much more important than that of 1 - cos(x) in determining the accuracy
of the complex twiddle factor cos + i*sin.

Overall, though, it's simply hard to wrap one's mind around the
extraordinary accuracy of an FFT in floating-point arithmetic...

Steven
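This arithmetic is easy to reproduce with nothing but the standard math library, and the same few lines also touch the earlier question about the 1 - 2*sin^2(theta/2) identity: the 2*sin^2(x/2) form does sidestep the cancellation in 1 - cos(x), even though it is the sin term that actually governs the twiddle's accuracy.

```python
import math

x = 2 * math.pi / 2 ** 26        # twiddle angle for a 64M-point FFT

# x^2/2 is ~4.3830e-15; a decent libm's 1 - cos(x) lands within a few
# ulps-of-1 of that value (the subtraction quantizes the result in
# steps of about 1.1e-16, hence a ~1% discrepancy is expected).
approx = x * x / 2
direct = 1 - math.cos(x)
print(approx, direct)

# The identity 1 - cos(x) = 2*sin(x/2)**2 avoids the cancellation
# entirely and agrees with x^2/2 to nearly full double precision.
stable = 2 * math.sin(x / 2) ** 2
print(stable)

# sin(x) ~ x dominates the twiddle's accuracy and is nowhere near zero.
print(math.sin(x))
```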
By the way, as a further demonstration that the relative accuracy of
1-cos does not determine the accuracy of the FFT, consider a *single*
precision FFT of size 2^21, i.e. with a machine precision of about
1e-7.

In single precision, FFTW precomputes the twiddle factors and stores
them in an array of floats, which truncates them to single precision
regardless of how they are computed.  Many of the cos terms are then
exactly equal to 1.  And yet the rms relative error on my P-IV is less
than 2e-7.

Steven
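The single-precision observation above can also be reproduced without FFTW. Python's float is a double, so a sketch can simulate the float array's truncation by round-tripping values through the struct module's IEEE single-precision format:

```python
import math
import struct

def to_float32(v):
    """Round a Python double to IEEE single precision and back."""
    return struct.unpack('f', struct.pack('f', v))[0]

x = 2 * math.pi / 2 ** 21        # twiddle angle for a 2^21-point FFT

# In double precision, cos(x) is about 1 - 4.5e-12, comfortably
# distinguishable from 1.  Rounded to single precision (machine
# epsilon ~1.2e-7), it collapses to exactly 1.0 ...
c32 = to_float32(math.cos(x))
print(c32)                       # 1.0

# ... while the sin part, which is what matters for the accuracy of
# the complex twiddle near theta = 0, survives just fine.
s32 = to_float32(math.sin(x))
print(s32)                       # ~3.0e-06
```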