DSPRelated.com
Forums

fft

Started by adriano meis February 17, 2008
On 19 Feb, 16:31, "Steven G. Johnson" <stev...@alum.mit.edu> wrote:
> On Feb 19, 9:13 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> > On 19 Feb, 14:54, "Steven G. Johnson" <stev...@alum.mit.edu> wrote:
> > > On Feb 19, 6:11 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> > > > I agree with you that it is dangerous to assume anything from
> > > > students (or anybody else, for that matter). On the other hand,
> > > > one can almost never *force* other people to do things in a
> > > > certain way
> > >
> > > I'm confused. Who is talking about "forcing" someone to do anything?
> > > We were just discussing how to answer a certain online question in a
> > > clear and useful way. How would it even be possible to "force"
> > > someone to do something in an online forum?
> >
> > Gordon's first post seems to have been based on pedagogical
> > principles similar to those outlined by me. You responded
> > with a very technical reply based on what might be expected
> > from students. The only contexts where such a highly technical
> > reply might be relevant are
>
> I responded that Gordon's answer was misleading, because the student
> asked how to measure X and Gordon gave a suggestion that would
> actually measure Y, not X.
...
> I gave additional information and references in response to a
> question, and I clarified a point that was arguably misleading; I'm
> really baffled regarding how this led to a long lecture about not
> "forcing" students.
Here is what you wrote previously, and what I reacted to:

"In my experience, it's unreliable to assume that students will check
their answers in multiple redundant ways. They find one way that seems
to work, and stop there. So if you tell students to compare to the
O(N^2) algorithm, without warning them that the latter is less
accurate, it's quite likely they will just compare the two algorithms
for random inputs and stop there."

To me, this is not text about Gordon giving wrong or misleading
answers. This text is about your expectation that students will not do
a complete investigation unless 'encouraged' (in practice, that means
'forced') to do so.

In my experience, there is no point in suggesting how to go beyond the
basic ('incomplete' or 'misleading') tests until one sees that the
person in question is ready to go there.

Rune
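Steven's caveat quoted above -- that the O(N^2) algorithm is the *less*
accurate one -- is easy to see numerically. A hypothetical numpy sketch
(sizes and seed are illustrative, not from the thread): run the direct
summation in single precision and compare it against the same summation
in double precision, whose rounding error is vastly smaller. The error
of the direct method grows with N, roughly like sqrt(N) times the
machine epsilon:

```python
import numpy as np

def naive_dft32(x):
    # Direct O(N^2) DFT summation with all arithmetic in single precision.
    n = len(x)
    k = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n).astype(np.complex64)
    return W @ x.astype(np.complex64)

def ref_dft64(x):
    # The same summation in double precision, used as the reference here;
    # its rounding error is ~1e9 times smaller than the float32 version's.
    n = len(x)
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) @ x.astype(np.complex128)

rng = np.random.default_rng(1)
errs = {}
for n in (64, 256, 1024):
    x = rng.standard_normal(n).astype(np.float32)
    ref = ref_dft64(x)
    errs[n] = np.linalg.norm(naive_dft32(x).astype(np.complex128) - ref) \
              / np.linalg.norm(ref)
    print(n, errs[n])   # the error grows with n
```

An FFT's error, by contrast, grows only like sqrt(log N), which is why
comparing an FFT against the direct summation mostly measures the
direct summation's error.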
On Feb 19, 11:16 am, "Steven G. Johnson" <stev...@alum.mit.edu> wrote:
> One interesting problem that occurs to me is to ask what is the
> "worst" input, i.e. the input that gives the worst errors in some
> norm.
To be more precise, since you could always get a big error just by
making your numbers so large they approach overflow or something like
that, let me pose the question (for fun and profit): can you find a
vector of inputs x (exactly representable in your floating-point) with
L2 norm |x| = 1 that maximizes the root-mean-square relative error

    |FFT(x) - exactDFT(x)| / |exactDFT(x)|

for a given FFT implementation, where all expressions except FFT(x)
are in exact arithmetic? (Of course, assuming a unitary DFT
normalization, the denominator is |exactDFT(x)| = |x| = 1.)

Steven
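For concreteness, here is a hypothetical sketch of how one might
measure that error metric for a toy FFT. Since numpy's own FFT always
computes in double precision, the "FFT implementation" under test is a
minimal radix-2 routine forced to run in complex64, and numpy's
double-precision FFT stands in for the exact DFT (its error, ~1e-16,
is negligible next to the ~1e-7 single-precision error being measured,
though it is of course not truly exact arithmetic):

```python
import numpy as np

def fft_c64(x):
    # Recursive radix-2 Cooley-Tukey FFT carried out entirely in
    # complex64, so every intermediate suffers single-precision rounding.
    x = np.asarray(x, dtype=np.complex64)
    n = len(x)
    if n == 1:
        return x
    even = fft_c64(x[0::2])
    odd = fft_c64(x[1::2])
    k = np.arange(n // 2)
    tw = np.exp(-2j * np.pi * k / n).astype(np.complex64)  # twiddle factors
    t = (tw * odd).astype(np.complex64)
    return np.concatenate([even + t, even - t]).astype(np.complex64)

def rms_relative_error(x):
    # |FFT(x) - exactDFT(x)| / |exactDFT(x)| for a unit-norm input x.
    x = np.asarray(x, dtype=np.float64)
    x = (x / np.linalg.norm(x)).astype(np.float32)  # unit norm, representable
    approx = fft_c64(x).astype(np.complex128)
    exact = np.fft.fft(x.astype(np.float64))        # near-exact reference
    return np.linalg.norm(approx - exact) / np.linalg.norm(exact)

rng = np.random.default_rng(0)
err = rms_relative_error(rng.standard_normal(1024))
print(err)   # a few float32 ulps for a random input
```

Finding the input that *maximizes* this quantity over all unit-norm
vectors is the hard part of the question, of course; the sketch only
evaluates the objective for a given x.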
Steven G. Johnson wrote:
> On Feb 19, 11:16 am, "Steven G. Johnson" <stev...@alum.mit.edu> wrote:
>> One interesting problem that occurs to me is to ask what is the
>> "worst" input, i.e. the input that gives the worst errors in some
>> norm.
>
> To be more precise, since you could always get a big error just by
> making your numbers so large they approach overflow or something like
> that, let me pose the question (for fun and profit): can you find a
> vector of inputs x (exactly representable in your floating-point) with
> L2 norm |x| = 1 that maximizes the root-mean-square relative error
> |FFT(x) - exactDFT(x)| / |exactDFT(x)|
> for a given FFT implementation, where all expressions except FFT(x)
> are in exact arithmetic? (Of course, assuming a unitary DFT
> normalization, the denominator is |exactDFT(x)| = |x| = 1.)
>
> Steven
Steven,

That's a very interesting question. Even more so considering it comes
from someone who is so well versed in the state of the FFT art.

I agree it would be very implementation-specific. You'd need to
consider not only the immediate precision of a single addition, but
also the cumulative effect it has on subsequent operations.

... for floating-point FFTs ...

To find such a worst-case vector, you'd need to maximize the energy
that gets lost due to a) limits of precision in addition/subtraction
and b) the amplification of those errors in multiplication. When two
floats get added or subtracted, whichever has the greater of the two
exponents drives the precision of the result. Multiplications do not
*add* much error per se, but they do amplify errors from previous
steps.

... for fixed-point FFTs ...

Finding such a vector may be pretty easy. For many libraries (kissfft
included), to maximize the error-to-signal ratio, simply set the least
significant bit of one input sample and leave the rest of the input
zeros -- call that your signal. That single lsb is not likely to
contribute enough on its own to make any non-zero output, making the
error energy equal to the signal energy -- the maximum relative error
of 1.0.

If the FFT library allows the possibility of overflow/saturation, you
can craft vectors to cause such overflow. If the FFT algorithm uses
block floating point, then figure out ways to make the blocks always
have very few points of maximal magnitude, with the rest small enough
to suffer precision loss, but big enough to contribute handily to the
overall error.

Again -- very implementation-specific.

-- Mark Borgerding
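Both mechanisms are easy to demonstrate in a few lines. The sketch
below is illustrative only: part (a) shows exponent-driven precision
loss in floating-point addition, and part (b) models a scaled
fixed-point FFT as nothing but its per-stage right shifts (a crude
stand-in -- real butterflies add and subtract too, but they cannot
resurrect a value the scaling has already driven to zero):

```python
import numpy as np

# (a) floating point: when operands with very different exponents are
# added, the smaller operand's low-order bits fall off the significand.
big = np.float32(1.0)
tiny = np.float32(1e-9)        # far below float32's ~7 digits of precision at 1.0
lost = (big + tiny) - big      # the addition swallowed 'tiny' entirely
kept = (big - big) + tiny      # regrouped, the small value survives
print(lost, kept)              # 0.0 1e-09

# (b) fixed point: a one-lsb input fed through log2(N) stages of
# right-shift-by-one scaling vanishes before reaching the output,
# so all of the signal energy becomes error (relative error 1.0).
x = np.zeros(16, dtype=np.int16)
x[3] = 1                       # single least-significant bit set
acc = x.astype(np.int32)
for _ in range(4):             # 4 stages for N = 16
    acc >>= 1                  # per-stage scaling swallows the lsb
print(acc.any())               # False: nothing of the signal survives
```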
On Feb 19, 4:36 pm, Mark Borgerding <m...@borgerding.net> wrote:
> For many libraries (kissfft included), to maximize the error-to-signal
> ratio, simply set the least significant bit of one input sample and
> leave the rest of the input zeros -- call that your signal. That single
> lsb is not likely to contribute enough on its own to make any non-zero
> output, making the error energy equal to the signal energy -- the
> maximum relative error of 1.0.
Remember, I specified that you should maximize the error over a fixed L2 norm=1 of the signal, so your suggested signal doesn't have that property. What you are doing is loosely analogous to choosing a signal that is close to overflow/underflow in floating-point; it was precisely to exclude such obvious cases that I suggested one constrain the signal norm. Steven
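In floating point the point is immediate: renormalizing the one-lsb
signal to |x| = 1 (the constraint only rescales by a power of two here)
turns it into a plain unit impulse, whose DFT is exactly all ones, and
the pathology vanishes. A small illustrative numpy check (sizes are
hypothetical):

```python
import numpy as np

# The one-lsb signal, renormalized to |x| = 1 as the problem requires:
x = np.zeros(64, dtype=np.float32)
x[0] = np.float32(2.0 ** -24)      # a single tiny sample
x = x / np.linalg.norm(x)          # now just a unit impulse, x[0] == 1.0

# The exact DFT of a unit impulse at sample 0 is all ones, so the
# relative error can be computed directly:
X = np.fft.fft(x.astype(np.float64))
err = np.linalg.norm(X - np.ones(64)) / np.linalg.norm(np.ones(64))
print(err)                         # ~0: the pathology disappears
```

Fixed point is different, of course: there the quantization grid does
not rescale with the signal, which is exactly why the fixed-norm
version of the question is the interesting one.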
Steven G. Johnson wrote:

> Certainly that is practical to do as a simple check; Gordon made a
> similar suggestion. (Of course, the error that many students make in
> applying that suggestion is that they compare to the analytical
> Fourier integral that is tabulated in their textbooks, not to the
> analytical DFT; I can't count how many times someone has posted on
> Usenet that their FFT routine is not working because the transform of
> a Gaussian is not a Gaussian, etc.)
I thought I did that once. The transform has to be long enough that it
actually looks like a Gaussian, and it has to be centered on zero.
Also, those negative frequencies come out again.

-- glen
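Both of glen's conditions can be checked in a short numpy sketch
(the length 256 and width 8 are arbitrary illustrative choices).
Centered on sample 0 with wraparound, a well-decayed Gaussian
transforms to a real, zero-frequency-peaked bump; centered mid-array,
the spectrum picks up a (-1)^k-like phase and its raw real part
oscillates in sign -- the classic "my FFT is broken" report:

```python
import numpy as np

n = 256
d = np.arange(n)
d = np.minimum(d, n - d)               # distance from sample 0, wrapping around
g = np.exp(-0.5 * (d / 8.0) ** 2)      # Gaussian centered on sample 0
G = np.fft.fft(g)
print(np.max(np.abs(G.imag)))          # negligible: input is even about sample 0
print(np.argmax(np.abs(G)))            # 0: spectrum peaks at zero frequency

# Same pulse centered mid-array instead:
g_mid = np.exp(-0.5 * ((np.arange(n) - n // 2) / 8.0) ** 2)
G_mid = np.fft.fft(g_mid)
print(np.any(G_mid.real < 0))          # True: sign flips from the shift phase
```

And the "negative frequencies come out again" remark is visible in the
first case too: the Gaussian bump wraps around, with the upper half of
the output bins holding the negative-frequency tail.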