Newbie question on FFT

Started by May 19, 2005
Andor wrote:
> Rune Allnor wrote:
> > stevenj@alum.mit.edu wrote:
> > > If the inputs are zero, the outputs should be *exactly* zero.
> >
> > Are you sure about this? No numerical noise is introduced anywhere?
> > I sincerely hope you are right, but I don't remember having seen
> > such claims before.
>
> I don't know about other standards, but this is stated about IEEE 754
> arithmetic:
>
> "Each of the [...] operations must deliver to its destination the
> exact result, unless there is no such result or that result does not
> fit in the destination's format. In the latter case, the operation
> must minimally modify the exact result according to the rules of
> prescribed rounding modes [...] and deliver the result so modified to
> the operation's destination."
>
> As addition and multiplication of an IEEE 754 value with zero always
> yields another IEEE 754 value, the result is exact.
I haven't seen the IEEE 754 standard itself. For some reason I believe
one has to pay to get a copy, which to me is a contradiction in terms:
a standard should be distributed as widely and as easily as possible to
become effective.

I *have* seen somewhere that the number 0 has an exact bit
representation in the IEEE 754 format. So if you specify

  float a = 0;

in your C code, the initial bit pattern in the variable 'a' is given.
If, however, you do a computation like

  float b = 2;
  float c = 10;
  float d = 5;
  float e;

  e = c/b - d;   /****/

the analytic answer is 0, but the numerical answer deviates from the
exact representation because of representation issues in the numerical
format. That's why you can't make a test like

  if (e == 0)

but have to use something like

  if (fabs(e) < 1e-5)

when testing for vanishing numbers.

I might be misunderstanding what Steven means, but I interpret his post
as saying that a statement like

  float f;
  f = b*0;   /* multiplying with the exact value of 0 */

where the answer is 0 also makes the variable 'f' contain the exact bit
representation of 0, and not a numerical approximation as in /****/
above. I don't know the inner workings of floating-point functions well
enough to know whether this is correct or not.

Even if Steven is correct, it seems very dangerous to rely on a
variable containing the exact representation of 0 when it contains a
result from computations.
>  Sun Microsystems "Numerical Computation Guide"
Rune
Rune wrote:

> I *have* seen somewhere that the number 0 has an exact bit
> representation in the IEEE 754 format.
Yes, there are two representations for zero (+ / - 0).
> If, however, you do a computation like
>
>   float b = 2;
>   float c = 10;
>   float d = 5;
>   float e;
>
>   e = c/b - d;
You chose a bad example :-). This computation can be performed with no
error on any 32-bit IEEE 754 machine. However,

  float a = 0.1;
  float b = 10.0;
  float c = 1.0;
  float e;

  e = c - a * b;

poses a real (haha, nice pun) problem, as the number "0.1" has no exact
representation.

Note, however, that the standard states that if the numbers involved
have exact representations, then the result must be exact. As I said,
multiplying zero by, or adding zero to, any floating-point number can
be represented exactly, therefore no error occurs when 0 is an input to
an operation (as contrasted with when we expect 0 to be the output of
an operation).

There are tons of references available on the net if you are interested
in the standard (Google for IEEE 754 floating-point).

Regards,
Andor
I somewhat clumsily wrote:

> Note however, that the standard states that if the
> numbers involved have exact representation, then the result must be
> exact.
This is bad wording (or actually plain wrong). The standard says that
if the _result_ has an exact representation, then this must be the
value delivered by the arithmetic operation. In my example, the result
of the operation "float a = 0.1" has no exact representation, and must
therefore be appropriately rounded.
oh yeah, those things were due to a stupid error in my code. i was
freeing my arrays at the end of my function... people, don't copy code
from tutorials without thinking about what it does, or you're gonna
end up with stupid bugs like this

by the way, in the IEEE standard, no matter whether it's a float or a
double, positive zero is the all-zero bit pattern (every one of the 32
or 64 bits is 0), and negative zero differs only in the sign bit, so
yeah, 0 always has an exact representation in IEEE

This is one of the common myths about floating-point, that every
floating-point number somehow has some intrinsic "uncertainty" (due to
quantum effects? ;-), or somehow represents a "range" of values (ala
interval arithmetic).  Floating-point is just a set of rational
numbers, no more (well, excepting -0, Inf, NaN, ...), no less, and the
only error creeps in when the result of an operation is not exactly
representable in this set.

A fun and informative read is "How Java's floating-point hurts
everyone everywhere" on William Kahan's web site
(http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf); contrary to the
title, it's not (mostly) about Java, and gives lots of examples of
common floating-point misconceptions and pitfalls.

(A lot of people recommend the "What every computer scientist should
know about floating-point" essay as well.  I find that useful, but
Kahan's slides are less dense and jump right to the stupid mistakes.)

Cordially,
Steven G. Johnson

PS. IEEE 754 aside, I would be astonished if there were ever any
floating-point implementation, anywhere, where multiplying by 0 didn't
give you exactly (+/-)0 and adding x+0 didn't give you exactly x.
(Except for NaN * 0, etc., of course.)

stevenj@alum.mit.edu wrote:
> This is one of the common myths about floating-point, that every
> floating-point number somehow has some intrinsic "uncertainty" (due to
> quantum effects? ;-), or somehow represents a "range" of values (ala
> interval arithmetic).
Isn't this to turn the question upside-down? What you say is correct if
one restricts the discussion to floating-point numbers, i.e. the
numbers that can be represented by, say, the IEEE 754 format or
something similar.

But no one is interested in those numbers for their own sake. Everybody
who uses the floating-point formats is interested in *real* numbers.
Not integers. Not natural numbers. Not rational numbers. Not
transcendental numbers. Real numbers.

The problems occur because the floating-point format is the only (well,
the least impractical) non-integer number format that can be
implemented on a computer, and it can only represent a very small
subset of the real numbers. That's the sole reason floating-point
numbers are discussed at all. Not because they are particularly well
suited for their use, but because FP numbers are available. Beggars
can't be choosers.
> Floating-point is just a set of rational
> numbers, no more (well, excepting -0, Inf, NaN, ...), no less,
Exactly. It is a very limited set, at that.
> and the only error creeps in when the result of an operation
> is not exactly representable in this set.
Eh... this may very well be "the only error" but it's a pretty important one, if you ask me. [...]
> PS. IEEE 754 aside, I would be astonished if there were ever any
> floating-point implementation, anywhere, where multiplying by 0 didn't
> give you exactly (+/-)0 and adding x+0 didn't give you exactly x.
> (Except for NaN * 0, etc., of course.)
I don't know. While I would agree that one would expect this to happen, I don't know enough about how FP processors are actually implemented to see what would happen in practice. Rune
> PS. IEEE 754 aside, I would be astonished if there were ever any
> floating-point implementation, anywhere, where multiplying by 0 didn't
> give you exactly (+/-)0 and adding x+0 didn't give you exactly x.
> (Except for NaN * 0, etc., of course.)
A slight exception: (-0) + (+0) = (+0) in IEEE 754, not that this matters here.
If you are doing arbitrary calculations on arbitrary inputs, of course
errors can creep in.  However, this process should be viewed rationally
(no pun intended), not in terms of fanciful superstitions like the idea
that "noise" will magically result from operations on zeros. In
particular, the representable set of floating-point numbers (i.e. the
precision) has to be viewed as an independent concept from the accuracy
when they are used as approximations for other values...otherwise
people tend towards silly conclusions such as the fallacy that the
error bar is always the machine epsilon (or always at least the machine
epsilon).

As Knuth said in the introduction to _The Art of Computer Programming_
(paraphrasing), "You need to have at least some idea of what is going
on inside a computer, or the programs you write will be pretty weird."

Steven

stevenj@alum.mit.edu wrote on 05/20/2005 22:12:

> IEEE 754 aside, I would be astonished if there were ever any
> floating-point implementation, anywhere, where multiplying by 0 didn't
> give you exactly (+/-)0 and adding x+0 didn't give you exactly x.
i thought the standard would mandate what comes out in those cases. as
far as i knew, the only "optional" feature you could have and still
call your FP implementation "IEEE 754 compliant" was the denorms.
> (Except for NaN * 0, etc., of course.)
what does +Inf * 0 get you? a NaN? do NaNs have signs in IEEE 754?

i'm also curious whether the +/- multiplication conventions apply to
IEEE zero. does +0 * -0 get you -0? and -0 * -0 = +0? (just a
curiosity.)

here's another curiosity: what is +0 + -0? + or - zero? or, to put it
more concretely, is 1.0/(+0.0 + -0.0) = +Inf or -Inf in IEEE 754?

one last thing: does IEEE 754 define compliance down to the LSB? is the
method of rounding specified by the standard?

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Rune Allnor wrote:

> stevenj@alum.mit.edu wrote:
>> This is one of the common myths about floating-point, that every
>> floating-point number somehow has some intrinsic "uncertainty"
>> (due to quantum effects? ;-), or somehow represents a
>> "range" of values (ala interval arithmetic). (snip)
> But no one is interested in those numbers for their own sake.
> Everybody who uses the floating-point formats is interested
> in *real* numbers. Not integers. Not natural numbers. Not
> rational numbers. Not transcendental numbers.
> Real numbers.
How about the other way around: that floating-point (computer)
arithmetic was designed for use by scientists who were used to working
with numbers with uncertainties.

The first machines with floating point were vacuum-tube machines, so I
am sure they wanted to minimize the logic. Otherwise, they might have
implemented a representation with explicit uncertainty.

-- glen