DSPRelated.com
Forums

Some Common FFT Misconceptions

Started by Ron N. March 28, 2008
Just for fun, I thought I'd try and list many of the misconceptions
that people seem to have regarding using the FFT, as seen
in various previous posts to comp.dsp.  Feel free to disagree,
or to add your own favorites.

The FFT is a widely used algorithm because the result has
something (not precisely understood) to do with frequencies.

Some Common FFT Misconceptions:

1.
The FFT will directly produce the frequency of some
phenomena, without regard to the relationship between FFT
length and the period of any oscillations.  (e.g. what
rectangular window?)
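
[Editorial aside: a minimal numpy sketch of this point; the signal and parameters are invented for illustration. A tone completing an integer number of cycles in the frame lands in one bin; a tone half a bin away smears across the spectrum, courtesy of the implicit rectangular window.]

```python
import numpy as np

N = 64
fs = 64.0                      # Hz, so the bin spacing is exactly 1 Hz
t = np.arange(N) / fs

# 10 Hz: an integer number of cycles fits the frame -> one clean bin.
on_bin = np.abs(np.fft.rfft(np.sin(2 * np.pi * 10.0 * t)))
# 10.5 Hz: the period does not divide the frame -> energy smears
# across many bins (the "leakage" of the implicit rectangular window).
off_bin = np.abs(np.fft.rfft(np.sin(2 * np.pi * 10.5 * t)))

frac_on = on_bin[10] ** 2 / np.sum(on_bin ** 2)         # essentially 1.0
frac_off = np.max(off_bin) ** 2 / np.sum(off_bin ** 2)  # well under 1.0
```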

2.
The FFT will find the correct frequency content without
regard to the relationship between the sample rate and the
highest frequency content present in the sampled phenomena,
or any other need for bandlimiting (for example, of closing
price data).
2b.
The sample rate only needs to be twice the highest frequency
present, or twice the highest frequency of interest, for an
FFT to "find" that frequency.
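
[Editorial aside: a small numpy sketch of the aliasing behind item 2; sample rate and tone frequency are invented for the example.]

```python
import numpy as np

fs = 100.0                        # sample rate: Nyquist is 50 Hz
t = np.arange(256) / fs

# A 70 Hz tone sampled at 100 Hz is NOT adequately bandlimited;
# it aliases down to 100 - 70 = 30 Hz, and the FFT dutifully
# reports a strong component near 30 Hz, not 70 Hz.
x = np.sin(2 * np.pi * 70.0 * t)
mag = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
peak_hz = freqs[np.argmax(mag)]   # lands near 30 Hz
```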

3.
Since the FFT always produces "frequencies", there must
be some periodic phenomena within the sampled data.

4.
The perceived pitch of a musical or speech waveform is the
same as the frequency represented by the FFT magnitude peak.

5.
Windowing (non-rectangular) will always result in a more
accurate frequency estimation.

6.
One can perfectly filter out a signal band by just zeroing
out some bins between an FFT and IFFT.

7.
One can filter in the frequency domain without regard to the
length of the (significant) impulse response of the filter.
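
[Editorial aside: item 7 in numpy terms, with invented signal lengths. Multiplying same-length FFTs implements *circular* convolution, so unless the FFT length covers the signal length plus the filter tail, the impulse response wraps around.]

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(64)       # signal block
h = rng.standard_normal(16)       # 16-tap FIR impulse response

# Same-length FFTs: circular convolution -- the filter tail wraps
# around and corrupts the first len(h)-1 output samples.
circ = np.fft.ifft(np.fft.fft(x, 64) * np.fft.fft(h, 64)).real

# Pad to len(x) + len(h) - 1: now it is ordinary linear convolution.
n = len(x) + len(h) - 1
lin = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

ref = np.convolve(x, h)
err_lin = np.max(np.abs(lin - ref))         # tiny (roundoff only)
err_circ = np.max(np.abs(circ - ref[:64]))  # large: the wrapped tail
```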

8.
One can reconstruct a signal from just the FFT magnitude
result.

9.
One can reconstruct a real signal by just feeding the
"positive frequency" bins to a generic IFFT.
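
[Editorial aside: a numpy sketch of item 9, with an invented test signal. The half-spectrum must be expanded back to a conjugate-symmetric full spectrum before inversion; numpy's irfft does exactly that, while a generic ifft of the half-spectrum does not recover the signal.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)           # a real signal

pos_bins = np.fft.rfft(x)             # the "positive frequency" half

# A generic complex IFFT of the half-spectrum is NOT the answer:
# it has the wrong length and no conjugate-symmetric negative bins.
wrong = np.fft.ifft(pos_bins)

# irfft rebuilds the negative-frequency bins by conjugate symmetry
# first, then inverts -- recovering x exactly.
right = np.fft.irfft(pos_bins, n=len(x))
```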

10.
One can extrapolate a signal from an FFT without inferring
circular boundary conditions.

11.
The (complex) FFT contains no information about the time
domain envelope.

11b.
The phase of a sinusoid (or how far that sinusoid is away
from Fs/4) does not affect the magnitude result of an FFT.

12.
Information from preceding, overlapping, or different length
FFT frames is of no interest in analyzing the spectrum of
some waveform.

13.
The "resolution" of a frequency estimation depends only
on the FFT length, and has nothing to do with anything
known about the signal-to-noise ratio in the sampled
data.
13b.
More "resolution" of frequency estimation is created by
zero-padding (...beyond that provided by other methods
of interpolation).
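
[Editorial aside: a numpy sketch of item 13b, with invented tone spacings. Zero-padding gives a finely interpolated spectrum, but two tones closer than the reciprocal of the observation length still merge into one lobe; the padding adds no new information.]

```python
import numpy as np

N = 64
t = np.arange(N)

# Two equal tones 0.4 bins apart: closer than the ~1-bin limit
# set by the 64-sample observation window.
x = np.sin(2 * np.pi * 10.0 / N * t) + np.sin(2 * np.pi * 10.4 / N * t)

# Zero-pad 64x: a smooth, finely sampled spectrum...
mag = np.abs(np.fft.rfft(x, n=64 * N))

# ...in which the two tones still fuse into a single mainlobe.
# Count contiguous runs of bins above half power: expect one lobe.
above = mag > 0.5 * mag.max()
n_lobes = np.count_nonzero(np.diff(above.astype(int)) == 1) + int(above[0])
```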

14.
An FFT is not just an efficient implementation of a DFT.
(Or: "What's a DFT?")

15.
Using an entire FFT is the best way to check for even a
very small number of frequencies (say a DTMF tone).
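
[Editorial aside: for a handful of frequencies, the Goertzel recurrence evaluates single DFT bins in O(N) each, with no full FFT. A sketch; the 205-sample block is just a common DTMF choice at 8 kHz, and the bin index is invented for the example.]

```python
import numpy as np

def goertzel_power(x, k):
    """Squared magnitude of DFT bin k of x via the Goertzel recurrence."""
    N = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / N)
    s1 = s2 = 0.0
    for v in x:
        s0 = v + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# Cross-check one bin against a full FFT.
x = np.random.default_rng(4).standard_normal(205)
p = goertzel_power(x, 18)
ref = np.abs(np.fft.fft(x)[18]) ** 2
```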

16.
The FFT can only operate on vector lengths that are an
integer power of 2.  (Or FFT's of vectors not a power of
2 are "really" slow.)
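
[Editorial aside: a quick numpy check of item 16; the length 1000 is an invented example. Modern FFT libraries handle arbitrary composite (and even prime) lengths.]

```python
import numpy as np

# 1000 = 2^3 * 5^3: not a power of two, still fast in practice.
# (numpy's pocketfft backend, like FFTW, has kernels for many radices
# and an O(N log N) method even for prime sizes.)
x = np.random.default_rng(1).standard_normal(1000)
X = np.fft.fft(x)

# Cross-check against the O(N^2) DFT definition.
n = np.arange(len(x))
W = np.exp(-2j * np.pi * np.outer(n, n) / len(x))
X_ref = W @ x

max_err = np.max(np.abs(X - X_ref))   # agreement to near machine precision
```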

17.
The performance of an FFT implementation (on contemporary
PC's) depends mostly on the number of arithmetic operations
(or MACs or FMACs).

18.
An FFT is less accurate than using the defining DFT
formula for computation.

19.
???


IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
 http://www.nicholson.com/rhn/dsp.html
On Mar 28, 1:02 pm, "Ron N." <rhnlo...@yahoo.com> wrote:
> [full list snipped]
At first I thought that they were all either misconceptions or ill- posed, then I realized that the ones that are ill-posed are therefore misconceptions, too. Dale B. Dalrymple http://dbdimages.com
Ron N. wrote:

> Just for fun, I thought I'd try and list many of the misconceptions
> that people seem to have regarding using the FFT, as seen
> in various previous posts to comp.dsp.  Feel free to disagree,
> or to add your own favorites.
"Just for fun" ?????? Does that make you a masochist or a sadist?
> The FFT is a widely used algorithm because the result has
> something (not precisely understood) to do with frequencies.
>
> Some Common FFT Misconceptions:
> [snipping]
>
> 5.
> Windowing (non-rectangular) will always result in a more
> accurate frequency estimation.
I'll direct my next thread to you, as I was forced to add an "Ooops" to the subject line over this. But first I'll have to figure out how to intelligently ask my question.
> 6.
> One can perfectly filter out a signal band by just zeroing
> out some bins between an FFT and IFFT.
I resemble that remark ;/
On Mar 28, 4:02 pm, "Ron N." <rhnlo...@yahoo.com> wrote:
> Just for fun, I thought I'd try and list many of the misconceptions
> that people seem to have regarding using the FFT, as seen
> in various previous posts to comp.dsp.  Feel free to disagree,
> or to add your own favorites.
>
> The FFT is a widely used algorithm because the result has
> something (not precisely understood) to do with frequencies.
Seeing as I am trying to better understand this topic, I will bite:
> Some Common FFT Misconceptions:
>
> 1.
> The FFT will directly produce the frequency of some
> phenomena, without regard to the relationship between FFT
> length and the period of any oscillations.  (e.g. what
> rectangular window?)
pass
> 2.
> The FFT will find the correct frequency content without
> regard to the relationship between the sample rate and the
> highest frequency content present in the sampled phenomena,
> or any other need for bandlimiting (for example, of closing
> price data).
> 2b.
> The sample rate only needs to be twice the highest frequency
> present, or twice the highest frequency of interest, for an
> FFT to "find" that frequency.
no - 2.01 times - ok just kidding - pass
> 3.
> Since the FFT always produces "frequencies", there must
> be some periodic phenomena within the sampled data.
White noise has frequency content without any periodic phenomenon.
> 4.
> The perceived pitch of a musical or speech waveform is the
> same as the frequency represented by the FFT magnitude peak.
I don't know how the ear works relative to the spectrum, but I could imagine that a musical instrument could play a note at a certain frequency yet have higher energy content in a higher harmonic, and the ear may still perceive the fundamental tone.
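
[Editorial aside: exactly this "missing fundamental" effect is easy to show numerically; the harmonic amplitudes below are invented for the example. Listeners report a 100 Hz pitch for such a signal, but the FFT magnitude peak sits at 300 Hz.]

```python
import numpy as np

fs = 1024.0
t = np.arange(1024) / fs

# Harmonics of 100 Hz with the 3rd harmonic strongest and the
# fundamental entirely absent.
x = (0.2 * np.sin(2 * np.pi * 200 * t)
     + 1.0 * np.sin(2 * np.pi * 300 * t)
     + 0.3 * np.sin(2 * np.pi * 400 * t))

freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(x)))]   # 300, not 100
```
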
> 5.
> Windowing (non-rectangular) will always result in a more
> accurate frequency estimation.
If you were only trying to estimate a single dominant frequency, without regard to any other lower-level frequency energy, I would think that windowing would make things worse. Windowing widens the main lobe (less accuracy) in exchange for more dynamic range (finding other frequency content at lower levels).
> 6.
> One can perfectly filter out a signal band by just zeroing
> out some bins between an FFT and IFFT.
Not sure on this, but is it because of the leakage of energy spilled into other bins? If you had a perfect frequency tone that was exactly on a bin center, then zeroing out the bin would perfectly filter it, but as soon as the single tone is off center it has spilled into other bins.
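
[Editorial aside: that intuition checks out numerically; bin choices below are invented for the example. A tone exactly on a bin center vanishes when its bin is zeroed, while a tone halfway between bins leaves a large residue even when several neighboring bins are zeroed.]

```python
import numpy as np

N = 256
t = np.arange(N)

def zero_bins(x, lo, hi):
    """Zero rFFT bins lo..hi inclusive and transform back."""
    X = np.fft.rfft(x)
    X[lo:hi + 1] = 0.0
    return np.fft.irfft(X, n=len(x))

on = np.cos(2 * np.pi * 32.0 / N * t)    # exactly on bin 32
off = np.cos(2 * np.pi * 32.5 / N * t)   # halfway between bins 32 and 33

resid_on = np.max(np.abs(zero_bins(on, 31, 33)))    # essentially zero
resid_off = np.max(np.abs(zero_bins(off, 31, 33)))  # substantial residue
```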
> 7.
> One can filter in the frequency domain without regard to the
> length of the (significant) impulse response of the filter.
pass
> 8.
> One can reconstruct a signal from just the FFT magnitude
> result.
Each frequency component must be described by both a magnitude and a phase; the magnitude result alone has discarded the phase.
> 9.
> One can reconstruct a real signal by just feeding the
> "positive frequency" bins to a generic IFFT.
You have to reconstruct the real signal with both positive and negative frequencies. For a real signal, the positive and negative bins are conjugates of each other.
> 10.
> One can extrapolate a signal from an FFT without inferring
> circular boundary conditions.
pass
> 11.
> The (complex) FFT contains no information about the time
> domain envelope.
pass
> 11b.
> The phase of a sinusoid (or how far that sinusoid is away
> from Fs/4) does not affect the magnitude result of an FFT.
pass
> 12.
> Information from preceding, overlapping, or different length
> FFT frames is of no interest in analyzing the spectrum of
> some waveform.
pass
> 13.
> The "resolution" of a frequency estimation depends only
> on the FFT length, and has nothing to do with anything
> known about the signal-to-noise ratio in the sampled
> data.
Of course noise will screw things up.
> 13b.
> More "resolution" of frequency estimation is created by
> zero-padding (...beyond that provided by other methods
> of interpolation).
>
> 14.
> An FFT is not just an efficient implementation of a DFT.
> (Or: "What's a DFT?")
Your wording is confusing, but an FFT *is* an efficient implementation of a DFT. The result is the same; it had better be.
> 15.
> Using an entire FFT is the best way to check for even a
> very small number of frequencies (say a DTMF tone).
pass
> 16.
> The FFT can only operate on vector lengths that are an
> integer power of 2.  (Or FFT's of vectors not a power of
> 2 are "really" slow.)
Radix-2 FFTs need power-of-two lengths. Other FFT routines exist for other integer lengths.
> 17.
> The performance of an FFT implementation (on contemporary
> PC's) depends mostly on the number of arithmetic operations
> (or MACs or FMACs).
pass (I thought it was predominantly dependent on the number of complex multiplications)
> 18.
> An FFT is less accurate than using the defining DFT
> formula for computation.
Should be the same.
> 19.
> ???
>
> IMHO. YMMV.
> --
> rhn A.T nicholson d.0.t C-o-M
> http://www.nicholson.com/rhn/dsp.html
Ron N. wrote:
(snip)

> Some Common FFT Misconceptions:
> 2.
> The FFT will find the correct frequency content without
> regard to the relationship between the sample rate and the
> highest frequency content present in the sampled phenomena,
> or any other need for bandlimiting (for example, of closing
> price data).
> 2b.
> The sample rate only needs to be twice the highest frequency
> present, or twice the highest frequency of interest, for an
> FFT to "find" that frequency.
I suppose these are reasonable misconceptions, but not really of the FFT. The problems occur in the data before it even gets to the FFT.

(snip)
> 7.
> One can filter in the frequency domain without regard to the
> length of the (significant) impulse response of the filter.
A misconception if the FFT is used for non-periodic signals. Again, not the FFT's fault, but an indirect result of misconception by the user.

(snip)
> 12.
> Information from preceding, overlapping, or different length
> FFT frames is of no interest in analyzing the spectrum of
> some waveform.
This probably should not have the word 'some' in it. It is a misconception if one thinks it will work for the particular waveform, but it might work for some waveforms.

Otherwise, I pretty much agree with the list, and am glad that you posted it.

-- glen
bulegoge@columbus.rr.com wrote:

   ...

>> 6.
>> One can perfectly filter out a signal band by just zeroing
>> out some bins between an FFT and IFFT.
It makes the equivalent filter's impulse response longer than the FFT, introducing artifacts. Those show up as ringing near the stop-band edges, and more perniciously, response -- sometimes strong -- in the zeroed bins away from their centers.
> Not sure on this, but because of leakage the energy spilled into
> other bins?
> If you had a perfect frequency tone that was exactly on the center of
> a lobe, then zeroing out the bin would perfectly filter it, but as
> soon as the single tone is off center then it has spilled into other
> bins.
Yes.
>> 7.
>> One can filter in the frequency domain without regard to the
>> length of the (significant) impulse response of the filter.
See above. ...
>> An FFT is less accurate than using the defining DFT
>> formula for computation.
>
> should be the same
It would be the same if the math were exact. The FFT's being more efficient means that it needs fewer computations. There are therefore fewer roundoff errors, so in fact, it usually is better.

Jerry
--
Engineering is the art of making what you want from things you can get.
On Fri, 28 Mar 2008 22:40:21 -0400, Jerry Avins wrote:

>>> An FFT is less accurate than using the defining DFT formula for
>>> computation.
>>
>> should be the same
>
> It would be the same if the math were exact. The FFT's being more
> efficient means that it needs fewer computations. There are therefore
> fewer roundoff errors, so in fact, it usually is better.
I wouldn't have thought so, although it probably depends on the arithmetic precision and model involved.

FFTs usually have to round as they store the results of every stage, which means that each bin gets on the order of log(N) roundings (for an FFT of length N). It's worse than that, though, because the rounding noise propagates from every input into every result bin, so you normally wind up with rounding noise proportional to N in every result bin.

On the other hand, a direct DFT implementation could arrange to accumulate all of the terms for each bin at once, and only round once, when the result is written, at the end. Of course, if you're using non-widening arithmetic (e.g., 32-bit floating point), then you generally can't avoid a rounding for every term, so you'll still wind up with O(N) rounding error per bin, same as for FFT.

Cheers,
--
Andrew
Andrew Reilly wrote:
> I wouldn't have thought so, although it probably depends on the
> arithmetic precision and model involved.
> [rest of Andrew's rounding analysis snipped]
I should have added "according to Steven G. Johnson." While I deplore appeal to authority over reason, I think he ought to know. I took it that he wrote from experience, not merely a gedanken experiment. It might also have been good to qualify my statement with "in the usual way (non-widening arithmetic) that those computations are arranged."

Jerry
--
Engineering is the art of making what you want from things you can get.
On Mar 28, 10:40 pm, Jerry Avins <j...@ieee.org> wrote:
> >> An FFT is less accurate than using the defining DFT
> >> formula for computation.
> >
> > should be the same
>
> It would be the same if the math were exact. The FFT's being more
> efficient means that it needs fewer computations. There are therefore
> fewer roundoff errors, so in fact, it usually is better.
Right, but not really for the right reason.

First, at least in floating-point arithmetic, the Cooley-Tukey FFT (and most other FFT algorithms, with one or two exceptions) is a *lot* more accurate than evaluating the naive DFT formula directly by two nested loops. As was first shown by Gentleman and Sande in 1966, the FFT's roundoff error bounds grow as O(log N), whereas for the naive DFT they grow as O(N) (or O(N^{3/2}) for absolute rather than relative error, IIRC). The mean (rms) relative errors of the C-T FFT grow as O(sqrt(log N)), whereas for the naive DFT they grow as O(sqrt N); see e.g. Manfred Tasche and Hansmartin Zeuner, Handbook of Analytic-Computational Methods in Applied Mathematics, chapter 8.

However, the reason for this is not really the fact that the FFT has fewer operations. For example, take an FFT of size N, and throw out (prune) all of the computations except for those to compute a single one of the outputs. That is, that output is computed *exactly* as in the full FFT, but we eliminate any operations that don't go into computing that output. Then you will find that, to compute that output, exactly N-1 complex additions and N complex multiplications are required (although some of the multiplications may be by 1's), *exactly* as for the naive DFT formula. So, it's not so much that the FFT requires fewer operations between a given input and a given output; it's that the FFT shares operations between outputs. The reason for the improved accuracy is the *arrangement* of the operations rather than the *number* of operations.

To see this concretely, let's just look at the computation of a single output, the DC (0-frequency) term. In the naive DFT formula, this is just a sum of N-1 additions to add all N inputs. The roundoff errors of this sum grow as O(N) in the worst case and O(sqrt(N)) in the average (random-walk) case.

If you take a radix-2 Cooley-Tukey FFT and again throw away all operations except those to compute the DC term, it also requires N-1 additions (there is no way to add N numbers in fewer operations), but it does them in a different order: first it adds all the even-indexed inputs, then it adds all the odd-indexed inputs, and then it adds the two halves (and repeats this process recursively). This is called "cascade summation", and it is a well-known fact that the roundoff error in cascade summation grows only as O(log N) in the worst case and O(sqrt(log N)) in the average case (for random inputs). See e.g. Higham (http://citeseer.ist.psu.edu/higham93accuracy.html).

Regards,
Steven G. Johnson

PS. In fixed-point arithmetic, matters are quite different. There, the errors from a Cooley-Tukey FFT grow as O(sqrt N). (I'm not sure if this is average or worst case.) See P. Welch, IEEE Trans. Audio Electroacoust. 17, p. 151 (1969).
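
[Editorial aside: the cascade-summation point above can be poked at directly. A sketch in float32 with random data; the array size and seed are invented for the example. The left-to-right sum accumulates far more rounding noise than the pairwise order an FFT effectively uses.]

```python
import numpy as np

def cascade_sum(x):
    """Pairwise (cascade) summation, the order a radix-2 FFT uses."""
    x = np.asarray(x)
    while len(x) > 1:
        if len(x) % 2:                    # odd length: carry the last element
            x = np.concatenate([x[:-1:2] + x[1::2], x[-1:]])
        else:
            x = x[0::2] + x[1::2]
    return x[0]

rng = np.random.default_rng(2)
x = rng.standard_normal(2 ** 18).astype(np.float32)
exact = np.sum(x.astype(np.float64))      # float64 reference

naive = np.float32(0.0)
for v in x:                               # left-to-right float32 sum
    naive = np.float32(naive + v)

casc = cascade_sum(x)                     # float32 pairwise sum

# Compare err_naive and err_casc: the cascade error is typically
# orders of magnitude smaller for inputs this long.
err_naive = abs(float(naive) - exact)
err_casc = abs(float(casc) - exact)
```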
"Steven G. Johnson" <stevenj@alum.mit.edu> writes:
> [...]
> PS. In fixed-point arithmetic, matters are quite different.  There,
> the errors from a Cooley-Tukey FFT grow as O(sqrt N).  (I'm not sure
> if this is average or worst case.)  See P. Welch, IEEE Trans. Audio
> Electroacoust. 17, p. 151 (1969).
Hi Steven,

Thanks very much for that enlightening post. I find the following interesting. Welch gives the upper bound of the ratio p of the RMS error to the RMS result as

  p = 2^((M+3)/2) * 2^(-B+1) * (0.3) / (RMS result),

where M = log2(N), N is the FFT size, and B is the number of bits used to represent the data. That means that for a 1024-pt FFT using 16-bit numbers, the roundoff error is about 57 dB down. In other words, you've degraded your SQNR from the ideal 96 dB to 57 dB, or about 10 bits.

That means that when you're using a fixed-point FFT for frequency-domain filtering at these parameter values, you're losing at least 40 dB of SNR, especially if you account for both the forward and the inverse FFT that's required.

Hmmm. All of a sudden fixed-point FDF doesn't look so attractive, at least not for high-resolution applications like audio.
--
% Randy Yates           % "The dreamer, the unwoken fool -
%% Fuquay-Varina, NC    %  in dreams, no pain will kiss the brow..."
%%% 919-577-9882        %
%%%% <yates@ieee.org>   % 'Eldorado Overture', *Eldorado*, ELO
http://www.digitalsignallabs.com