DSPRelated.com
Forums

fft

Started by adriano meis February 17, 2008
Hi,
I am an Italian student (excuse me for my bad English).
I have to evaluate the numerical accuracy of an FFT algorithm. How can I do it?
Is there any standard method to follow? What paper describes this standard
method?
I had the idea to follow this process:
1) I start from a random signal,
2) I calculate the FFT,
3) I calculate the IFFT (inverse FFT),
4) at the end, I compare the RMS of the starting signal with the RMS of the
last signal.

Is it a good idea?

Thank you.
Adriano
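Adriano's proposed round trip can be sketched in a few lines of NumPy (a sketch; np.fft stands in here for whatever FFT implementation is actually under test):

```python
import numpy as np

# Round-trip test: FFT followed by inverse FFT should reproduce the input
# up to rounding error.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)          # 1) random starting signal

X = np.fft.fft(x)                      # 2) forward FFT
y = np.fft.ifft(X).real                # 3) inverse FFT

def rms(v):
    return np.sqrt(np.mean(np.abs(v) ** 2))

# 4) Comparing rms(x) against rms(y) alone can hide errors that happen to
#    preserve energy, so also measure the RMS of the difference signal.
rel_err = rms(y - x) / rms(x)
print(rms(x), rms(y), rel_err)
```

Note that the relative RMS of the difference signal is a stricter check than comparing the two RMS values, since an energy-preserving error (e.g. a sign flip in one bin) would pass the RMS-only comparison.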



On 2008-02-17 15:20:01 -0400, "adriano meis" <doreciakgulp@tin.invalid> said:

> Hi,
> I am an Italian student (excuse me for my bad English).
> I have to evaluate the numerical accuracy of an FFT algorithm. How can I do it?
> Is there any standard method to follow? What paper describes this standard
> method?
> I had the idea to follow this process:
> 1) I start from a random signal,
> 2) I calculate the FFT,
> 3) I calculate the IFFT (inverse FFT),
> 4) at the end, I compare the RMS of the starting signal with the RMS of the
> last signal.
>
> Is it a good idea?
>
> Thank you.
> Adriano
That is one good thing to do. Another is to try the calculation from the
definition, so there are N^2 terms. Another is to find signals with known
FTs, like half-, third-, or quarter-length constant values. Or ramps. Or
triangles. Or ... well, you are the student, so you need to do some work
yourself. There are error analysis papers for FFTs as old as forty-plus
years, so this is a well-worn topic.
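Gordon's suggestion of computing the transform straight from the definition, plus one of his known-transform signals, can be sketched like this (NumPy assumed; naive_dft is an illustrative name, not a standard routine):

```python
import numpy as np

def naive_dft(x):
    """DFT evaluated straight from the definition: N^2 complex terms."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)    # DFT matrix W[k, n]
    return W @ x

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
diff = np.abs(naive_dft(x) - np.fft.fft(x)).max()   # the two should agree closely
print(diff)

# One of the "known FT" signals: a half-length constant. In exact
# arithmetic, every even-numbered bin except k=0 is zero for this signal.
N = 16
half = np.zeros(N)
half[:N // 2] = 1.0
even_bins = np.abs(np.fft.fft(half)[2::2]).max()
print(even_bins)
```

As discussed later in the thread, agreement between the two algorithms bounds neither one's error individually, since the N^2 version has its own (larger) error growth.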
adriano meis wrote:
> Hi,
> I am an Italian student (excuse me for my bad English).
> I have to evaluate the numerical accuracy of an FFT algorithm. How can I do it?
> Is there any standard method to follow? What paper describes this standard
> method?
> I had the idea to follow this process:
> 1) I start from a random signal,
> 2) I calculate the FFT,
> 3) I calculate the IFFT (inverse FFT),
> 4) at the end, I compare the RMS of the starting signal with the RMS of the
> last signal.
>
> Is it a good idea?
Adriano,

Your English is more than good enough. Why not start with a simple known
signal instead of a random one? Departures from theory will be more evident
that way. You may even be able to evaluate the accuracy without the IFFT,
so you won't have two algorithms to evaluate. You will see two types of
departures: the amplitudes of the components you know will be slightly off,
and components that should be zero will have small values.

Jerry
--
Engineering is the art of making what you want from things you can get.
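Jerry's suggestion can be sketched with a bin-centered cosine, whose transform is known exactly, so both kinds of departure he describes can be measured directly (NumPy assumed):

```python
import numpy as np

N = 64
k0 = 5                                  # place the test tone exactly on bin 5
x = np.cos(2 * np.pi * k0 * np.arange(N) / N)

X = np.fft.fft(x)
mag = np.abs(X)

# Theory: bins k0 and N-k0 have magnitude N/2; every other bin is zero.
# Departure type 1: known component amplitudes slightly off.
amp_err = max(abs(mag[k0] - N / 2), abs(mag[N - k0] - N / 2))
# Departure type 2: "should-be-zero" bins with small nonzero values.
leakage = np.delete(mag, [k0, N - k0]).max()
print(amp_err, leakage)
```

Both numbers come out near machine precision for a good double-precision FFT, and no IFFT is involved, which is exactly Jerry's point.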
Hi Adriano,
Here's a link on the method (see Fig. 12-9):

http://www.dspguide.com/ch12/4.htm
http://www.dspguide.com/ch12/5.htm

Regards,
Steve
Hi Adriano,

   My suggestion is like this. Take a sine-wave signal. Quantize it to a
   known resolution. With this you will know the input SNR.

   Compute the FFT (in the finite-precision sense).

   From the FFT samples, compute the SNR by applying Parseval's theorem.

   Now just compare the SNR degradation that has happened by means
   of finite precision. And yes, do not forget to window the time-
   domain data (use a Blackman-Harris 7-term windowing approach).

   Also, the gain of the FFT can be split into individual stages and
   applied at the butterfly outputs.

   In this approach you do not need the IFFT to calculate the degradation
   in SNR.

Regards
Bharat Pathak

bharat@arithos.com

www.arithos.com
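Bharat's Parseval-based SNR check can be sketched as follows (a sketch, NumPy assumed; the tone is placed exactly on a bin so there is no leakage and the Blackman-Harris windowing step he recommends can be omitted here, and NumPy's double-precision FFT serves only as the baseline against which a finite-precision FFT would then be compared):

```python
import numpy as np

N = 4096
k0 = 129                               # tone exactly on a bin: no leakage
x = np.sin(2 * np.pi * k0 * np.arange(N) / N)

bits = 12
scale = 2 ** (bits - 1) - 1
q = np.round(x * scale) / scale        # quantize: input SNR ~ 6.02*bits + 1.76 dB

X = np.fft.fft(q)
power = np.abs(X) ** 2                 # Parseval: sum |X|^2 = N * sum |q|^2

signal = power[k0] + power[N - k0]     # the tone lives in two conjugate bins
noise = power.sum() - signal           # everything else is quantization noise
snr_db = 10 * np.log10(signal / noise)
print(snr_db)
```

With a double-precision FFT the measured SNR should sit near the textbook 6.02·bits + 1.76 ≈ 74 dB for 12 bits; running the same measurement through a fixed-point FFT would then expose the SNR degradation Bharat describes, without ever computing an IFFT.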


> Hi,
> I am an Italian student (excuse me for my bad English).
> I have to evaluate the numerical accuracy of an FFT algorithm. How can I do it?
> Is there any standard method to follow? What paper describes this standard
> method?
> I had the idea to follow this process:
> 1) I start from a random signal,
> 2) I calculate the FFT,
> 3) I calculate the IFFT (inverse FFT),
> 4) at the end, I compare the RMS of the starting signal with the RMS of the
> last signal.
>
> Is it a good idea?
>
> Thank you.
> Adriano
On Feb 17, 3:23 pm, Gordon Sande <g.sa...@worldnet.att.net> wrote:
> On 2008-02-17 15:20:01 -0400, "adriano meis" <doreciakg...@tin.invalid> said:
> > I have to evaluate the numerical accuracy of an fft algorithm. How can I do?
> > Is there any standard method to follow?
>
> That is one good thing to do. Another is to try the calculation from
> the definition, so there are N^2 terms.
Gordon, as you know from your 1966 paper, that's a terrible way to evaluate
the numerical accuracy of an FFT, because the O(N^2) algorithm has far worse
floating-point error growth than most FFT algorithms (on average, O(sqrt(N))
vs. O(sqrt(log N))). Perhaps you are assuming that the poster wanted to
evaluate the correctness of some FFT implementation, rather than the
numerical accuracy per se?

(A nice way to rigorously evaluate the correctness of an FFT implementation
in N log N time was described in a paper by Funda Ergun a few years ago;
basically, you check the linearity, impulse response, and time-shift
properties for random inputs. See http://dx.doi.org/10.1145/225058.225167)

Probably the simplest and most general thing one can do, for numerical
accuracy, is to perform an FFT in single precision and compare it to the
result in double precision (or compare double precision to an
arbitrary-precision arithmetic FFT, which is how we perform our accuracy
benchmarks in FFTW, fftw.org/accuracy/).

Regards,
Steven G. Johnson
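The Ergun-style property checks mentioned above (linearity, impulse response, time shift) can be sketched for random inputs like this (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 512
F = np.fft.fft
x = rng.standard_normal(N)
y = rng.standard_normal(N)
a, b = 1.7, -0.3

# Linearity: F(a*x + b*y) = a*F(x) + b*F(y)
lin_err = np.abs(F(a * x + b * y) - (a * F(x) + b * F(y))).max()

# Impulse response: F(delta) is the all-ones vector
delta = np.zeros(N)
delta[0] = 1.0
imp_err = np.abs(F(delta) - 1.0).max()

# Time shift: a circular shift by m multiplies bin k by exp(-2j*pi*k*m/N)
m = 37
k = np.arange(N)
shift_err = np.abs(F(np.roll(x, m)) - F(x) * np.exp(-2j * np.pi * k * m / N)).max()

print(lin_err, imp_err, shift_err)
```

Each property holds exactly in exact arithmetic, so for a correct implementation all three residuals are tiny; a buggy implementation typically fails at least one of them catastrophically for random inputs.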
On Feb 17, 4:32 pm, "SteveSmith" <Steve.Smi...@SpectrumSDI.com> wrote:
> http://www.dspguide.com/ch12/4.htm
> http://www.dspguide.com/ch12/5.htm
Steve,

A couple of constructive criticisms of your book from glancing at that
chapter:

1) First, you say that the FFT is more accurate "because the fewer number
of calculations results in less round-off error." This is a tempting way to
understand the difference, but it is not correct. The easiest way to see
that it is not correct is to look at the calculation of a *single* term,
the DC (zero-frequency) term, and throw away all the other calculations.
If you look at only the calculations that go into computing the DC term in
a radix-2 FFT, you see that it works by dividing the inputs into two
(interleaved) halves, and adding each half recursively. This process,
called "cascade summation" or "pairwise summation", requires N-1 additions,
exactly the same number of additions as a simple loop (called "recursive
summation," counterintuitively) to add up the numbers. However, summing N
numbers by cascade summation has O(log N) error bounds, compared to O(N)
error bounds for a simple loop to add them up, even though the two
processes have exactly the same number of operations. See eq. (3.6) in
http://citeseer.ist.psu.edu/higham93accuracy.html

2) In the next section, you write "There are several techniques for making
the FFT even faster; however, the improvements are only about 20-40%."
This is a mistake that has been circulating for at least 20 years now,
apparently popularized by Numerical Recipes. The correct statement is that
there are techniques for reducing the *number of floating-point operations*
compared to a radix-2 FFT, but these reductions are only (as of 2008) about
25% (34/45) asymptotically. The problem is that the number of
floating-point operations no longer determines the relative speed of two
FFT implementations, and hasn't for many years now, although it may have
been true in 1980.

Even though the number of floating-point operations differs by at most 25%
or so, the most highly optimized FFTs (using larger radices and other
tricks to placate the memory hierarchy) are 3--40(!) times faster than a
textbook radix-2 implementation on general-purpose CPUs these days
(depending on the machine and the transform size). See e.g. fftw.org/speed,
and look through the single-precision graphs for "nr-c" (Numerical Recipes
in C, a typical radix-2 FFT in the traditional style).

Regards,
Steven G. Johnson
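The pairwise-summation point in 1) can be demonstrated numerically: the two summation orders below perform exactly the same number of float32 additions, yet their errors differ by orders of magnitude (a sketch, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.random(2 ** 18).astype(np.float32)

def recursive_sum(v):
    """Plain left-to-right loop: worst-case error grows like O(N)."""
    s = np.float32(0.0)
    for t in v:
        s = s + t                      # every add rounds to float32
    return s

def pairwise_sum(v):
    """Cascade (pairwise) summation: same N-1 adds, O(log N) error bound."""
    if len(v) == 1:
        return v[0]
    mid = len(v) // 2
    return pairwise_sum(v[:mid]) + pairwise_sum(v[mid:])

exact = a.astype(np.float64).sum()     # double precision as the reference
err_loop = abs(float(recursive_sum(a)) - exact)
err_pair = abs(float(pairwise_sum(a)) - exact)
print(err_loop, err_pair)              # same operation count, very different error
```

This is precisely the summation pattern the DC term of a radix-2 FFT follows, which is why the FFT's accuracy advantage is about operation *ordering*, not operation *count*.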
On 2008-02-18 12:57:32 -0400, "Steven G. Johnson" <stevenj@alum.mit.edu> said:

> On Feb 17, 3:23 pm, Gordon Sande <g.sa...@worldnet.att.net> wrote:
> > On 2008-02-17 15:20:01 -0400, "adriano meis" <doreciakg...@tin.invalid> said:
> > > I have to evaluate the numerical accuracy of an fft algorithm. How can I do?
> > > Is there any standard method to follow?
> >
> > That is one good thing to do. Another is to try the calculation from
> > the definition, so there are N^2 terms.
>
> Gordon, as you know from your 1966 paper, that's a terrible way to
> evaluate the numerical accuracy of an FFT because the O(N^2) algorithm
> has far worse floating-point error growth than most FFT algorithms (on
> average, O(sqrt(N)) vs. O(sqrt(log N))).
The OP wanted to evaluate the numerical accuracy of an FFT. One thing that
would be instructive is to see that it is more accurate than the naive
definition. A nice easy way is to do it forward and back, as he proposed
for the FFT. I assumed that if he had the sense to ask for comments, he was
likely to try his proposed forward-and-back method on the other ways.

Once the round trip showed that the naive route had more error, it would be
rather obvious that treating the naive result as a reference for error
evaluation was not a good way to go. Fine for checking the results to
ensure that there was not an obscure error that made the round trip
otherwise seem to work. The OP implied, or at least I assumed, that the
implementation was presumed to work and the interest was more in looking at
the accumulated error. The other approach is to compare the calculation to
a known expression.

Since the OP stated that he is a student, one might have thought that
suggesting a student do other things to illustrate the background might
have helped the student's general education. The student is more likely to
be helped if the initial suggestions are easily doable. It is not really
necessary for the student's first step towards understanding to be at the
frontiers of the state of the art. With a reasonable basic understanding,
those frontiers will be less intimidating for the student at later stages.
> Perhaps you are assuming that the poster wanted to evaluate the
> correctness of some FFT implementation, rather than the numerical
> accuracy per se?
No. I was more interested in reinforcing the notion that the FFT provided both lower computational cost and lower accumulated error. And in suggesting things that were simple enough that they might reasonably be done.
> (A nice way to rigorously evaluate the correctness of an FFT
> implementation in N log N time was described in a paper by Funda Ergun
> a few years ago; basically, you check the linearity, impulse response,
> and time-shift properties for random inputs. See
> http://dx.doi.org/10.1145/225058.225167)
>
> Probably the simplest and most general thing one can do, for numerical
> accuracy, is to perform an FFT in single precision and compare it to
> the result in double precision (or compare double precision to an
> arbitrary-precision arithmetic FFT, which is how we perform our
> accuracy benchmarks in FFTW, fftw.org/accuracy/).
Fine if you have all that infrastructure. What should an under-resourced
student, probably without a lot of supervision, away from a major research
university, do? Remember, this is a student just starting out, not some
junior professor trying to generate lots of papers.
> Regards,
> Steven G. Johnson
On Feb 18, 12:46 pm, Gordon Sande <g.sa...@worldnet.att.net> wrote:
> The OP wanted to evaluate the numerical accuracy of an FFT. One thing
> that would be instructive is to see that it is more accurate than the
> naive definition. A nice easy way is to do it forward and back as he
> proposed for the FFT. I assumed that if he had the sense to ask for
> comments he was likely to try his proposed forward-and-back method
> on the other ways.
In my experience, it's unreliable to assume that students will check their answers in multiple redundant ways. They find one way that seems to work, and stop there. So if you tell students to compare to the O(N^2) algorithm, without warning them that the latter is less accurate, it's quite likely they will just compare the two algorithms for random inputs and stop there.
> Since the OP stated that he is a student, one might have thought that
> suggesting a student do other things to illustrate the background
> might have helped the student's general education.
The problem is, if a student asks how to measure the "numerical accuracy of
an fft" and you tell him to compare to the O(N^2) algorithm, it is a bit
deceptive, because that comparison is measuring the accuracy of the O(N^2)
algorithm rather than the fft. Of course, given enough time, hints, and
prodding, the student may figure out for himself that your answer didn't
mean what it seemed, but posting trick answers on an internet forum is a
bit dubious from a pedagogical perspective, in my opinion.
> It is not really necessary for the student's first step towards
> understanding to be at the frontiers of the state of the art.
I agree, but a forum like this has a wide readership; it's nice to give a
pointer or two to more advanced/rigorous approaches in addition to the
simplest answers (and often students appreciate seeing a broader context as
well). Also, this student *explicitly* asked for "papers" describing
methods to check accuracy, which indicates an attempt to go beyond the
level of the traditional homework problem.
> > Probably the simplest and most general thing one can do, for numerical
> > accuracy, is to perform an FFT in single precision and compare it to
> > the result in double precision (or compare double precision to an
> > arbitrary-precision arithmetic FFT, which is how we perform our
> > accuracy benchmarks in FFTW, fftw.org/accuracy/).
>
> Fine if you have all that infrastructure. What should an under-resourced
> student, probably without a lot of supervision, away from a major
> research university, do?
s/double/float/ is pretty easy even for a student. (Also, our accuracy
benchmark code is available to download.)

Regards,
Steven G. Johnson
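The single-vs-double comparison can be sketched as follows. NumPy's own np.fft always computes in double precision, so to mimic the s/double/float/ trick, a textbook radix-2 FFT (fft_radix2 here is a hypothetical illustration, not anyone's actual code from the thread) is written out by hand and run in single precision against the double-precision reference:

```python
import numpy as np

def fft_radix2(x):
    """Textbook recursive radix-2 DIT FFT, carried out in the dtype of x."""
    N = len(x)
    if N == 1:
        return x.copy()
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N).astype(x.dtype)
    t = tw * odd                                 # twiddled odd half
    return np.concatenate([even + t, even - t])  # butterfly combine

rng = np.random.default_rng(4)
x = rng.standard_normal(1024)

ref = np.fft.fft(x)                          # double-precision reference
low = fft_radix2(x.astype(np.complex64))     # same transform in single precision

rel = np.linalg.norm(low - ref) / np.linalg.norm(ref)
print(rel)                                   # a small multiple of float32 epsilon
```

The relative L2 error of the single-precision run directly exposes the rounding behavior of the low-precision transform, with no IFFT and no arbitrary-precision infrastructure needed.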
> On Feb 17, 4:32 pm, "SteveSmith" <Steve.Smi...@SpectrumSDI.com> wrote:
> > http://www.dspguide.com/ch12/4.htm
> > http://www.dspguide.com/ch12/5.htm
>
> Steve,
>
> A couple of constructive criticisms of your book from glancing at that
> chapter:
>
> 1) First, you say that the FFT is more accurate "because the fewer
> number of calculations results in less round-off error." This is a
> tempting way to understand the difference, but it is not correct.
> ...
> 2) In the next section, you write "There are several techniques for
> making the FFT even faster; however, the improvements are only about
> 20-40%."
> ...
>
> Regards,
> Steven G. Johnson
Hi Steven,

Very interesting; thanks!

Regards,
Steve