
Some Common FFT Misconceptions

Started by Ron N. March 28, 2008
Andrew Reilly wrote:

(snip regarding precision in FFT computation)

> I wouldn't have thought so, although it probably depends on the
> arithmetic precision and model involved.
> FFTs usually have to round as they store the results of every stage,
> which means that each bin gets on the order of log(N) roundings (for an
> FFT of length N). It's worse than that, though, because the rounding
> noise propagates from every input into every result bin, so you normally
> wind up with rounding noise proportional to N in every result bin.
> On the other hand, a direct DFT implementation could arrange to
> accumulate all of the terms for each bin at once, and only round once,
> when the result is written, at the end. Of course, if you're using non-
> widening arithmetic (e.g., 32-bit floating point), then you generally
> can't avoid a rounding for every term, so you'll still wind up with O(N)
> rounding error per bin, same as for FFT.
There has been much work done on the best way to do sums, though it is
not usually applied to DFT computation. The implementation of 80-bit
(64-bit significand) arithmetic in floating-point hardware helps, but
from high-level languages you usually can't depend on it: intermediate
results may stay in registers, or may be stored at reduced precision.

-- glen
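A small Python sketch makes glen's point concrete (hypothetical code, not
from the thread): the same single-precision data summed with a
non-widening float32 accumulator, a widening float64 accumulator, and
Python's exactly-rounded math.fsum as the reference.

import math
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(100_000).astype(np.float32)

# Non-widening accumulation: one rounding per addition, as in a
# 32-bit-everywhere implementation.
acc32 = np.float32(0.0)
for v in x:
    acc32 = np.float32(acc32 + v)

# Widening accumulation: each term rounds only on entry to float64.
acc64 = np.sum(x, dtype=np.float64)

# Exactly rounded reference sum.
exact = math.fsum(float(v) for v in x)

print(abs(float(acc32) - exact))  # error grows with N
print(abs(float(acc64) - exact))  # orders of magnitude smaller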
bulegoge@columbus.rr.com wrote:
(snip)

> I don't know how the ear works relative to spectrum, but I could
> imagine that a musical instrument could play a note at a certain
> frequency, but has higher energy content on a higher harmonic, and the
> ear may still perceive the fundamental tone.
This is supposed to be true. You can electrically generate them with no amplitude in the fundamental and still hear it. I once had a book (with a recording included) that did a study with tones having 'harmonics' at factors of 2.1, called stretched. They sound horrible. -- glen
Steven G. Johnson wrote:
> On Mar 28, 10:40 pm, Jerry Avins <j...@ieee.org> wrote:
>>>> An FFT is less accurate than using the defining DFT
>>>> formula for computation.
>>> should be the same
>> It would be the same if the math were exact. The FFT's being more
>> efficient means that it needs fewer computations. There are therefore
>> fewer roundoff errors, so in fact, it usually is better.
>
> Right, but not really for the right reason.
>
> First, at least in floating-point arithmetic, the Cooley-Tukey FFT
> (and most other FFT algorithms, with one or two exceptions) is a *lot*
> more accurate than evaluating the naive DFT formula directly by two
> nested loops. As was first shown by Gentleman and Sande in 1966, the
> FFT's roundoff error bounds grow as O(log N), whereas for the naive
> DFT they grow as O(N) (or O(N^{3/2}) for absolute rather than relative
> error, IIRC). The mean (rms) relative errors of the C-T FFT grow as
> O(sqrt(log N)), whereas for the naive DFT they are O(sqrt N); see e.g.
> Manfred Tasche and Hansmartin Zeuner, Handbook of Analytic-
> Computational Methods in Applied Mathematics, chapter 8.
>
> However, the reason for this is not really the fact that the FFT has
> fewer operations. For example, take an FFT of size N, and throw out
> (prune) all of the computations except for those to compute a single
> one of the outputs. That is, that output is computed *exactly* as in
> the full FFT, but we eliminate any operations that don't go into
> computing that output. Then you will find that, to compute that
> output, exactly N-1 complex additions and N complex multiplications
> are required (although some of the multiplications may be by 1's),
> *exactly* as for the naive DFT formula. So, it's not so much that the
> FFT requires fewer operations between a given input and a given
> output; it's that the FFT shares operations between outputs.
>
> The reason for the improved accuracy is the *arrangement* of the
> operations rather than the *number* of operations. To see this
> concretely, let's just look at the computation of a single output, the
> DC (0-frequency) term. In the naive DFT formula, this is just a sum
> of N-1 additions to add all N inputs. The roundoff errors of this sum
> grow as O(N) in the worst case and O(sqrt(N)) in the average (random-
> walk) case. If you take a radix-2 Cooley-Tukey FFT and again throw
> away all operations except those to compute the DC term, it also
> requires N-1 additions (there is no way to add N numbers in fewer
> operations), but it does them in a different order: first it adds all
> the even-indexed inputs, then it adds all the odd-indexed inputs, and
> then it adds the two halves (and repeats this process recursively).
> This is called "cascade summation", and it is a well-known fact that
> the roundoff error in cascade summation grows only as O(log N) in the
> worst case and O(sqrt(log N)) in the average case (for random inputs).
> See e.g. Higham
> (http://citeseer.ist.psu.edu/higham93accuracy.html).
>
> Regards,
> Steven G. Johnson
>
> PS. In fixed-point arithmetic, matters are quite different. There,
> the errors from a Cooley-Tukey FFT grow as O(sqrt N). (I'm not sure
> if this is average or worst case.) See P. Welch, IEEE Trans. Audio
> Electroacoust. 17, p. 151 (1969).
Thanks for the clarification. I evidently misinterpreted your earlier
remarks.

Jerry
--
Engineering is the art of making what you want from things you can get.
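Steven's cascade-summation point is easy to check numerically. Here is a
hypothetical Python sketch (not from any post above) that sums the same
float32 data in naive-DFT order and in the pruned-FFT order for the DC
bin, comparing both against a float64 reference:

import numpy as np

def sequential_sum(x):
    # Naive DFT order: one long left-to-right accumulation.
    # Worst-case error O(N), average (random-walk) error O(sqrt N).
    acc = np.float32(0.0)
    for v in x:
        acc = np.float32(acc + v)
    return acc

def cascade_sum(x):
    # Pruned-FFT order for the DC bin: add the two halves recursively.
    # Worst-case error O(log N), average error O(sqrt(log N)).
    if len(x) == 1:
        return x[0]
    mid = len(x) // 2
    return np.float32(cascade_sum(x[:mid]) + cascade_sum(x[mid:]))

x = np.random.default_rng(0).random(1 << 16).astype(np.float32)
ref = np.sum(x, dtype=np.float64)
print(abs(float(sequential_sum(x)) - ref))  # sequential: much larger error
print(abs(float(cascade_sum(x)) - ref))     # cascade: much smaller error

Both routines perform exactly N-1 additions; only the grouping differs,
which is precisely the point about arrangement versus count.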
glen herrmannsfeldt wrote:
> bulegoge@columbus.rr.com wrote:
> (snip)
>
>> I don't know how the ear works relative to spectrum, but I could
>> imagine that a musical instrument could play a note at a certain
>> frequency, but has higher energy content on a higher harmonic, and the
>> ear may still perceive the fundamental tone.
>
> This is supposed to be true. You can electrically generate
> them with no amplitude in the fundamental and still hear it.
>
> I once had a book (with a recording included) that did a study
> with tones having 'harmonics' at factors of 2.1, called
> stretched. They sound horrible.
By smoothly changing its relative harmonic energies in the right way as a tone is swept, it is possible to create a tone which appears to rise or fall in frequency forever. An auditory equivalent of an optical illusion. It sounds really odd. This depends on the ear being tricked about the musical fundamental it is hearing amongst a set of harmonics. Steve
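For anyone who wants to hear this at home, here is a minimal sketch of
the construction Steve describes (my own toy parameters throughout):
octave-spaced partials slide upward together while their amplitudes
follow a fixed bell curve over log-frequency, so each partial fades out
at the top of the sweep as a new one fades in at the bottom.

import numpy as np

fs, dur = 44100, 10.0
t = np.arange(int(fs * dur)) / fs
base = 27.5 * 2.0 ** (t / dur)        # lowest partial sweeps up one octave

y = np.zeros_like(t)
for k in range(9):                    # nine octave-spaced partials
    f = base * 2.0 ** k               # instantaneous frequency of partial k
    # Fixed spectral envelope, peaked near 440 Hz: partials are quiet at
    # the bottom and top of the audible range, loud in the middle.
    amp = np.exp(-0.5 * (np.log2(f / 440.0) / 1.5) ** 2)
    phase = 2.0 * np.pi * np.cumsum(f) / fs   # integrate frequency to phase
    y += amp * np.sin(phase)

y /= np.max(np.abs(y))   # normalize; loop the file for an endless glissando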
On Mar 29, 9:06 pm, Steve Underwood <ste...@dis.org> wrote:
> glen herrmannsfeldt wrote:
>> buleg...@columbus.rr.com wrote:
>> (snip)
>>
>>> I don't know how the ear works relative to spectrum, but I could
>>> imagine that a musical instrument could play a note at a certain
>>> frequency, but has higher energy content on a higher harmonic, and the
>>> ear may still perceive the fundamental tone.
>>
>> This is supposed to be true. You can electrically generate
>> them with no amplitude in the fundamental and still hear it.
>>
>> I once had a book (with a recording included) that did a study
>> with tones having 'harmonics' at factors of 2.1, called
>> stretched. They sound horrible.
>
> By smoothly changing its relative harmonic energies in the right way as
> a tone is swept, it is possible to create a tone which appears to rise
> or fall in frequency forever. An auditory equivalent of an optical
> illusion. It sounds really odd. This depends on the ear being tricked
> about the musical fundamental it is hearing amongst a set of harmonics.
>
> Steve
The ASA has an Auditory Demonstrations CD that includes a demo of the
"constantly rising" effect.

http://asa.aip.org/discs.html

Dale B. Dalrymple
Randy Yates wrote:

(snip)

> That means that for a 1024-pt FFT using 16-bit numbers, the roundoff
> error is about 57 dB down. In other words, you've degraded your SQNR
> from the ideal 96 dB to 57 dB, or about 10 bits. That means that when
> you're using a fixed-point FFT for frequency-domain filtering at these
> parameter values, you're losing at least 40 dB of SNR, especially if
> you account for both the forward and the inverse FFT that's required.
Do you shift right (divide by two) at each stage?
Maybe every other stage? I think it depends a lot on the
statistics of your data.

-- glen
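For readers who haven't met these scaling schemes, below is a
hypothetical Q15 sketch (the function name and structure are mine, not
from the thread) of a radix-2 decimation-in-time FFT that shifts right
at every stage, the first option glen mentions. A block-floating-point
FFT would instead test for growth and shift only when needed, preserving
more SNR when the data statistics allow.

import numpy as np

def fft_q15(re, im):
    """Radix-2 DIT FFT on lists of Q15 integers (length a power of two),
    scaling by 1/2 at every stage so butterfly sums never exceed 16 bits.
    Hypothetical sketch of the per-stage-shift scheme, not production
    code; the output is scaled by 1/N relative to the unscaled DFT."""
    n = len(re)
    stages = n.bit_length() - 1
    # In-place bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            re[i], re[j] = re[j], re[i]
            im[i], im[j] = im[j], im[i]
    half = 1
    for _ in range(stages):
        ang = -np.pi * np.arange(half) / half
        wr = np.round(32767 * np.cos(ang)).astype(int)   # Q15 twiddles
        wi = np.round(32767 * np.sin(ang)).astype(int)
        for start in range(0, n, 2 * half):
            for k in range(half):
                a, b = start + k, start + k + half
                # Complex multiply by the twiddle, rounded back to Q15.
                tr = (re[b] * wr[k] - im[b] * wi[k] + (1 << 14)) >> 15
                ti = (re[b] * wi[k] + im[b] * wr[k] + (1 << 14)) >> 15
                # Butterfly with the per-stage right shift (divide by two).
                re[a], re[b] = (re[a] + tr) >> 1, (re[a] - tr) >> 1
                im[a], im[b] = (im[a] + ti) >> 1, (im[a] - ti) >> 1
        half *= 2
    return re, im

Shifting every stage guarantees headroom at the cost of a rounding per
stage; shifting every other stage, or conditionally, trades that
guarantee for lower roundoff noise, which is why the answer depends on
the statistics of the data.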
Hi Jerry,

On Sat, 29 Mar 2008 09:01:09 -0400, Jerry Avins wrote:

> I should have added "according to Steven G. Johnson." While I deplore
> appeal to authority over reason, I think he ought to know. I took it
> that he wrote from experience, not merely a gedanken experiment.
's OK: I'm quite prepared to accept an appeal to Steven Johnson on this argument. I must have missed him on the subject, earlier. (Not surprising: my reading of c.dsp is spotty these days.) On the other hand, the arithmetic models used do make a difference, and in that regard, we probably have different focusses...
> It might also have been good to qualify my statement with "in the usual
> way (non-widening arithmetic) that those computations are arranged."
See: my "usual way" is with properly-widening arithmetic, as provided by
most fixed-point and integer implementations. Certainly there are swings
and roundabouts, and it seems that FFTs are one area where floating
point might have an edge.

It was good to read Steven's follow-up post. I'll have to read up on
cascade summation: I wasn't aware of that result. Seems plausible, I
suppose.

Cheers,

-- Andrew
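Andrew's properly-widening fixed-point case is easy to sketch as well
(hypothetical Python, with made-up Q15 data): 16-bit products accumulate
exactly in 64-bit integers, so the only rounding in an entire dot
product is the final store, which is how a DSP's widening MAC behaves.

import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(-32768, 32768, size=4096)   # Q15 samples (int64)
h = rng.integers(-32768, 32768, size=4096)   # Q15 coefficients (int64)

# Widening multiply-accumulate: every 16x16 product and every partial
# sum is held exactly in 64-bit integers; nothing is rounded yet.
acc = int(np.dot(x, h))

# The one and only rounding: convert the Q30 accumulator back to Q15.
y_q15 = (acc + (1 << 14)) >> 15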
Steve Underwood schrieb:

> By smoothly changing its relative harmonic energies in the right way as
> a tone is swept, it is possible to create a tone which appears to rise
> or fall in frequency forever.
These are called Shepard tones.

Direct link to a nice example:

http://en.wikipedia.org/wiki/Image:DescenteInfinie.ogg

Cheers,
  Nils
Nils <n.pipenbrinck@cubic.org> writes:

> Steve Underwood schrieb:
>
>> By smoothly changing its relative harmonic energies in the right way
>> as a tone is swept, it is possible to create a tone which appears to
>> rise or fall in frequency forever.
>
> These are called Shepard tones.
>
> Direct link to a nice example:
>
> http://en.wikipedia.org/wiki/Image:DescenteInfinie.ogg
That is REALLY strange! Thanks for the link, Nils!
--
%  Randy Yates                  % "Though you ride on the wheels of tomorrow,
%% Fuquay-Varina, NC            % you still wander the fields of your
%%% 919-577-9882                % sorrow."
%%%% <yates@ieee.org>           % '21st Century Man', *Time*, ELO
http://www.digitalsignallabs.com

glen herrmannsfeldt wrote:
(snip)

> Do you shift right (divide by two) at each stage?
> Maybe every other stage? I think it depends a lot on the
> statistics of your data.
>
> -- glen
I wrote an FHT for the 80386 some years ago. I used the 32-bit registers
and didn't do any scaling until the end. For an 8192-point FHT (I think
an FFT would be the same), this would have meant 13 right shifts with
16-bit registers, so using 32-bit registers meant that at the end I
shifted 3 to the left and just used the upper 16 bits of the registers.
Seemed to work OK, although I didn't formally calculate the round-off
errors. I sort of felt that it would have been less than shifting at
every stage.

Alan
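Alan's headroom arithmetic can be checked in a few lines; this
back-of-envelope sketch (mine, not Alan's) assumes the usual rule of
thumb of about one bit of growth per unscaled radix-2 stage.

import math

N = 8192
stages = int(math.log2(N))                 # 13 butterfly stages
sample_bits, register_bits = 16, 32
guard_bits = register_bits - sample_bits   # 16 bits of headroom

# About one bit of growth per unscaled stage, so the data stays inside
# the 32-bit registers for all 13 stages:
assert stages <= guard_bits

# Renormalize at the end: shift left by the unused headroom and keep
# the top 16 bits, as described above.
print(guard_bits - stages)   # -> 3 left shifts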