DSPRelated.com
Forums

Newbie question on FFT

Started by Michel Rouzic May 19, 2005
glen herrmannsfeldt wrote:
> Rune Allnor wrote:
>
> > stevenj@alum.mit.edu wrote:
>
> >>This is one of the common myths about floating-point, that every
> >>floating-point number somehow has some intrinsic "uncertainty"
> >> (due to quantum effects? ;-), or somehow represents a
> >> "range" of values (ala interval arithmetic).
>
> (snip)
>
> > But no one are interested in those numbers for their own sake.
> > Everybody who use the floating point formats are interested
> > in *real* numbers. Not integers. Not natural numbers. Not
> > rational numbers. Not trancedental numbers.
>
> > Real numbers.
>
> How about the other way around. That floating point (computer)
> arithmetic was designed for use by scientists used to working
> with numbers with uncertainties.
So... the "obvious" corollary from such a claim would be that there exists an exact representation of real numbers, that can be implemented on a computer, but for some reason has not been used. Sorry, I don't buy that one. I've always thought that the floating point format was the closest, least impractical implementation that was possible to use in a computer that can only use finite- length binary words internally. I think I'll stick with that view for yet some time...
> The first machines with floating > point were vacuum tube machines, so I am sure they wanted to minimize > the logic.
I'd think so, yes.
> Otherwise, they might have implemented a representation > with explicit uncertainty.
...like the fixed-point formats?

Rune
stevenj@alum.mit.edu wrote:
> If you are doing arbitrary calculations on arbitrary inputs, of course
> errors can creep in. However, this process should be viewed rationally
> (no pun intended), not in terms of fanciful superstitions like the idea
> that "noise" will magically result from operations on zeros.
Agreed. However, if numerical noise does *not* affect the floating point computations on (exact representations of) the number 0, it is because the number 0 is an exceptional case.

If you are right, one could disregard any concerns about underflow or overflow, which would need to be addressed when computing on those non-zero numbers that *can* be represented exactly in a floating point format. Likewise, one could disregard numerical inaccuracies that would need to be addressed when computing on those numbers that can only be *approximated* by the floating point format. If you continue to claim such concerns are caused by belief in "superstition" and "magic", I am sorry to say we live in quite different worlds.

It might well be that the exact representation of 0 takes these considerations into account and solves them, or that they are simply not a concern for these representations. As I have said before, and now say again for at least the third time, I don't know the internal workings of the floating point processor sufficiently well to judge your (implicit) claim that they are of no concern.
> In particular, the representable set of floating-point numbers (i.e. the
> precision) has to be viewed as an independent concept from the accuracy
> when they are used as approximations for other values...
Agreed. However, to me the value to be approximated is the starting point. The floating point number is the approximation that contains the error. For some reason I suspect we approach this particular question from quite the opposite sides.
> otherwise people tend towards silly conclusions such as the fallacy that
> the error bar is always the machine epsilon (or always at least the
> machine epsilon).
I'm not sure what you mean here. I know of one or two numerical algorithms that fail to produce a single significant digit. These failures are caused by approximation errors due to the inexact floating point format. It is by no means a "silly conclusion" that the "error bar" that *is* due to the inexact number representation, within machine precision, causes those routines to go bonkers.

One famous example is a particular matrix whose name I can't remember off the top of my head; I believe it is "Wilkinson's matrix". It's been a few years since I looked that example over, but the basic conclusion is that if you change one 6th-order digit in one coefficient a_ij of the matrix, i.e. apply the correction a_ij*(1 + 1e-6), all the digits, even the most significant one, change in the inverse of the matrix. You'll find it in any introductory text on numerical methods. I think there is a similar example concerning roots of polynomials.
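A toy illustration of that kind of sensitivity (not the specific matrix mentioned above, just an assumed nearly singular 2x2 matrix): a relative change of 1e-6 in one coefficient shifts even the leading digit of the inverse.

#include <stdio.h>

static void invert2x2(const double a[2][2], double inv[2][2])
{
    /* closed-form inverse of a 2x2 matrix */
    double det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    inv[0][0] =  a[1][1] / det;
    inv[0][1] = -a[0][1] / det;
    inv[1][0] = -a[1][0] / det;
    inv[1][1] =  a[0][0] / det;
}

int main(void)
{
    /* a nearly singular matrix, then the same matrix with one entry
       perturbed by a relative 1e-6 */
    double a[2][2] = { { 1.0, 1.0 }, { 1.0, 1.0 + 1e-5 } };
    double b[2][2] = { { 1.0, 1.0 }, { 1.0, (1.0 + 1e-5) * (1.0 + 1e-6) } };
    double ia[2][2], ib[2][2];

    invert2x2(a, ia);
    invert2x2(b, ib);
    printf("inv(A)[0][0]           = %.6g\n", ia[0][0]);   /* about 100001 */
    printf("inv(A perturbed)[0][0] = %.6g\n", ib[0][0]);   /* about 90910  */
    return 0;
}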
> As Knuth said in the introduction to _The Art of Computer Programming_
> (paraphrasing), "You need to have at least some idea of what is going
> on inside a computer, or the programs you write will be pretty weird."

Writing programs is one thing. Understanding and using the results is
quite another.
> Steven
Rune
wow, i know about that floating point representation thing, about
numbers that can't be represented precisely, but you know, i'm using
doubles for representing 16-bit sounds.

in case anyone forgot, doubles have one bit for sign, 11 for the exponent
and 52 for the number (i think it's called the mantissa or something like
that)

It means that basically i'm representing 16-bit values with a precision
of 52 bits, so for any given value i've got, i can hardly have any error.
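A minimal check of that claim, assuming the 16-bit samples are plain integer PCM values:

/* Every 16-bit integer sample value is exactly representable in a
 * double (53-bit significand), so the conversion itself introduces
 * no rounding error. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int exact = 1;
    for (int32_t s = -32768; s <= 32767; s++) {
        double d = (double)s;          /* 16-bit sample -> double */
        if ((int32_t)d != s)           /* round-trip must be lossless */
            exact = 0;
    }
    printf("all 16-bit values round-trip exactly: %s\n",
           exact ? "yes" : "no");
    return 0;
}

Of course, an exact conversion says nothing about rounding in the arithmetic that follows (e.g. inside an FFT); it only means the 16-bit values themselves start out exact.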

In article <1116647116.867407.186320@g14g2000cwa.googlegroups.com>,
Rune Allnor <allnor@tele.ntnu.no> wrote:
>But no one are interested in those numbers for their own sake.
>Everybody who use the floating point formats are interested
>in *real* numbers. Not integers. Not natural numbers. Not
>rational numbers. Not trancedental numbers.
>
>Real numbers.
That's a mathematician's point of view. But long before there was floating point computer hardware, there was the common use of log tables, slide rules, and scientific notation. What was of interest (outside of accounting practice) were not the numbers (real or otherwise), but the results of approximate calculations given measurements and scientific "constants" limited to a given accuracy (e.g. how many digits I can get out of a slide rule calculation depends on which glasses prescription I'm wearing :). Slide rule calculation with the results recorded in scientific notation did have "fuzzy" LSB's (least significant digits of questionable (in)significance). For a large class of users, that is what computer floating point emulates.

The fact that the KCS/IEEE representation is exact for an infinitely tiny subset of the real number line (but including zero) makes the floating point representation "accidentally" more useful for other applications, IMHO. YMMV.

-- 
Ron Nicholson    rhn AT nicholson DOT com    http://www.nicholson.com/rhn/
#include <canonical.disclaimer>      // only my own opinions, etc.
I agree with Rune.

How could you have any better representation of real numbers?

"a representation with explicit uncertainty"

does it mean that IEEE 754's precision is uncertain to you? cuz if you
think so, well, take any number you want, and add 1 to it
(hexadecimally). for example, if you wanna know what's the precision of
1.00000 on a 32-bit float, add 1 to the last byte and you get
1.00000011920929. i think the uncertainty is explicit enough
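In C terms, what is being described looks roughly like this (a minimal sketch; incrementing the stored bit pattern by one steps to the next representable float above 1.0):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 1.0f;
    uint32_t bits;

    memcpy(&bits, &f, sizeof bits);   /* reinterpret the float's bits */
    bits += 1;                        /* "add 1 to the last byte"     */
    memcpy(&f, &bits, sizeof f);

    printf("%.17g\n", (double)f);     /* prints 1.0000001192092896 */
    return 0;
}

The gap of about 1.19e-7 is one unit in the last place (ULP) at 1.0 for single precision, i.e. 2^-23.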

Michel Rouzic wrote:
> I agree with Rune.
>
> How could you have any better representation of real numbers?
>
> "a representation with explicit uncertainty"
>
> does it mean that IEEE 754's precision is uncertain to you? cuz if you
> think so, well, take any number you want, and add 1 to it
> (hexadecimally). for example, if you wanna know what's the precision of
> 1.00000 on a 32-bit float, add 1 to the last byte and you get
> 1.00000011920929. i think the uncertainty is explicit enough
1 added to the exponent creates an even greater error; so what? What is
the result of adding 1.00000 and 1.00000? I suspect it will not differ
much from 2.00000. Did you want to make a point?

Jerry

-- 
Engineering is the art of making what you want from things you can get.
in article 1116647116.867407.186320@g14g2000cwa.googlegroups.com, Rune
Allnor at allnor@tele.ntnu.no wrote on 05/20/2005 23:45:

> stevenj@alum.mit.edu wrote:
>> This is one of the common myths about floating-point, that every
>> floating-point number somehow has some intrinsic "uncertainty" (due to
>> quantum effects? ;-), or somehow represents a "range" of values (ala
>> interval arithmetic).
>
> Isn't this to turn the question upside-down? What you say is
> correct, if one restricts the discussion to floating-point
> numbers, i.e. the numbers that can be represented by, say,
> the IEEE 754 format or something similar.
>
> But no one are interested in those numbers for their own sake.
> Everybody who use the floating point formats are interested
> in *real* numbers. Not integers. Not natural numbers. Not
> rational numbers.
gee, i guess a lot of us are interested in irrational numbers (dunno what you mean by "trancedental numbers").
> The problems occur because the floating point format is the
> only (well, the least impractical) non-integer number format
> that can be implemented on a computer,
when i was at Fostex R&D, they had a class called "real" or maybe it was "rational" that contained two long ints, one for numerator and the other for denominator. we were implementing in a fixed-point target (some 68030 variant) and they wanted to be able to represent 1/3 and such numbers exactly.
> and it can only represent a very small subset of the real numbers.
i guess, expressed as a percentage of the uncountably infinite set of real numbers, that's true. that very small subset of IEEE doubles is 0% (in the limit) of the entire set of reals. but, somehow, i'm still satisfied with the coverage.
> That's the sole reason
> for floating point numbers being discussed at all. Not because
> they are particularly well suited for their use.
i'm not too hot about the IEEE 754 format, but i think that floating-point numbers with sufficient bits for both exponent and mantissa are pretty well suited for nearly all computer computation that normal people do. there are times that i think implementation of DSP algorithms (like filtering) can make more sense in fixed-point (for the same number of bits in the word), but the floating-point format works pretty well.
> Because FP numbers are available.
>
> Beggars can't be choosers.
ya never know. i've run into a few pretty choosy beggars. literally.

-- 
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
added to the exponent? i'm afraid we're not talking about the same
thing. i was talking about the margin of error that a number like 1 can
have when represented as a 32-bit float.

Michel Rouzic wrote:
> added to the exponent? i'm afraid we're not talking about the same
> thing. i was talking about the margin of error that a number like 1 can
> have when represented as a 32-bit float.
"Add 1 to the last byte" isn't a valid way to treat a floating-point number. Assuming that the original and final numbers have the same exponent and that they don't have the same representation, the difference between them will be precisely one. Of course, adding 1 to 1E35 will have no effect at all. Jerry -- Engineering is the art of making what you want from things you can get. &#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;&#4294967295;
In article <BEB43192.77A2%rbj@audioimagination.com>,
 robert bristow-johnson <rbj@audioimagination.com> wrote:


|  i thought the standard would mandate what would come out in those cases.  as
|  far as i thought, the only "optional" feature you could have and still call
|  your FP implementation "IEEE 754 compliant" was the denorms.

Denorms are not optional in IEEE-754.  Many floating-point 
implementations, however, have a non-IEEE mode where denorms are flushed 
to zero.


|  what does +Inf * 0 get you?  a NaN?  do NaNs have signs in IEEE 754?

Yes, you get a NaN (if floating-point exceptions aren't enabled).  NaNs 
are neither signed nor ordered.

|  
|  i'm also curious if the +/- multiplication conventions apply to IEEE zero.
|  does +0 * -0 get you -0?  and -0 * -0 = +0?  (just a curiousity.)

+0 * -0 gives you -0, and -0 * -0 gives you +0; the sign of a product 
is the exclusive OR of the signs of the operands.

 
|  here's another curiousity:  what is +0 + -0 = ?    + or - zero?

   +0.
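A small C check of these special cases (assuming the platform follows IEEE 754 defaults; the program and its names are mine, not part of the standard):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double pz = 0.0, nz = -0.0;
    double inf = INFINITY;

    printf("+Inf * 0 -> %g (NaN? %d)\n",     inf * pz, isnan(inf * pz));
    printf("+0 * -0  -> %g (signbit %d)\n",  pz * nz,  signbit(pz * nz));
    printf("-0 * -0  -> %g (signbit %d)\n",  nz * nz,  signbit(nz * nz));
    printf("+0 + -0  -> %g (signbit %d)\n",  pz + nz,  signbit(pz + nz));
    return 0;
}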


|  one last thing:  does IEEE 754 define compliance down to the LSB?

Results for defined operations must be accurate to within 1/2 ULP (unit 
in the last place) of the infinitely-precise result.  This value 
depends upon whether the computation is done in single, double, 
extended-single or extended-double precision.

| is the
|  method of rounding specified by the standard?

Yes -- round to:
   nearest (even) ; default
   zero (truncate)
   +Inf (ceiling)
   -Inf (floor)
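These four modes can be selected at run time from C99 <fenv.h>; a rough sketch (whether the dynamic mode is honored depends on the compiler, which strictly requires #pragma STDC FENV_ACCESS ON):

#include <stdio.h>
#include <fenv.h>

int main(void)
{
    const int   modes[] = { FE_TONEAREST, FE_TOWARDZERO, FE_UPWARD, FE_DOWNWARD };
    const char *names[] = { "nearest",    "toward zero", "+Inf",    "-Inf" };
    volatile float one = 1.0f, three = 3.0f;   /* keep the division at run time */

    for (int i = 0; i < 4; i++) {
        fesetround(modes[i]);
        /* 1/3 is not exactly representable, so the rounding direction shows */
        float f = one / three;
        printf("%-12s 1/3 = %.9f\n", names[i], f);
    }
    fesetround(FE_TONEAREST);   /* restore the default */
    return 0;
}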

   -- Tim Olson