
Newbie question on FFT

Started by Michel Rouzic May 19, 2005
Rune Allnor wrote:

(snip)

> What makes the floating point format so useful, is that the interval
> between consecutive floating point numbers vary over the range between
> -MAX_FLOAT and +MAX_FLOAT. In fact, half the floating point numbers
> exist in the interval [-1,1]. Just check what happens when you
> toggle the sign of the exponent. Which, in turn, means that the
> computations become increasingly inaccurate the larger the magnitude
> of the numbers.
It depends on what you mean by inaccurate. The relative error is
approximately constant; the absolute error changes. Some calculations
need one, some the other.

-- glen
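
A few lines of C make glen's distinction concrete (a sketch using the
standard C99 nextafterf; the sample values are arbitrary):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The gap to the next representable float (one ulp) grows with
       magnitude, but the gap *relative* to the value stays roughly
       constant -- around 1e-7 for single precision. */
    float x[] = { 1.0f, 1.0e3f, 1.0e6f };
    for (int i = 0; i < 3; i++) {
        float ulp = nextafterf(x[i], INFINITY) - x[i];
        printf("x = %g   ulp = %g   ulp/x = %g\n", x[i], ulp, ulp / x[i]);
    }
    return 0;
}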
In article <BEB62FB5.7835%rbj@audioimagination.com>,
 robert bristow-johnson <rbj@audioimagination.com> wrote:

|  in article ogailx502-7C0166.09200522052005@news.isp.giganews.com, Tim Olson
|  at ogailx502@NOSPAMsneakemail.com wrote on 05/22/2005 10:20:
|  
|  > |  i'm also curious if the +/- multiplication conventions apply to IEEE 
|  > |  zero.
|  > |  does +0 * -0 get you -0?  and -0 * -0 = +0?  (just a curiosity.)
|  > 
|  > Both give you -0.
|  
|  it seems odd that -0 * -0 gives you -(0^2) when for any other non-negative
|  x,  -x * -x = +(x^2).

My mistake -- I didn't see the negation of the second 0 in the second 
case (font problem); I thought you were asking about commutativity of 
-0.  Yes, -0 * -0 does give +0.


|  which means it has to be bit accurate.  all IEEE 754 compliant
|  implementations must yield precisely the same result word for the same
|  operation and same word size.  then, it seems to me, the only difference
|  between implementations are the code they get compiled to.

Well, almost.  There is the problem of "double rounding" when an 
implementation performs all operations with higher precision internally 
in registers, then rounds to the destination precision.  This happens 
frequently when comparing x86 results (which are performed to 80-bit 
extended precision, then rounded again to single or double precision) vs 
most RISC processors, which perform the operation directly in single or 
double precision, with a single rounding.
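
A small C sketch of double rounding in action (hedged: you only see the
double-rounded answer if the compiler actually keeps the sum in an x87
extended register; the constants are chosen so the exact sum lies just
above a rounding halfway point):

#include <stdio.h>

int main(void)
{
    volatile double a = 1.0;
    volatile double b = 0x1p-53 + 0x1p-78;  /* exact sum needs 79 bits */
    volatile double s = a + b;

    /* Rounded once to double: 1 + 2^-52 (0x1.0000000000001p+0).
       Rounded first to 80-bit extended, then to double: exactly 1.0,
       because the first rounding lands on the halfway point and the
       second rounding breaks the tie to even. */
    printf("%a\n", s);
    return 0;
}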

 
|  > | is the  method of rounding specified by the standard?
|  > 
|  > Yes -- round to:
|  >    nearest (even) ; default
|  >    zero (truncate)
|  >    +Inf (ceiling)
|  >    -Inf (floor)
|  
|  so the user can decide which way it goes?  what is the function call, say in
|  the standard C library, to set this?

That is one of the biggest problems with the standard; system software 
has been lax in providing the hooks needed to control the rounding and 
the exceptions in a standard way.  C9x does implement this in a standard 
fashion.

On Mac OS X, here is how you do it:

#include <fenv.h>

#pragma STDC FENV_ACCESS ON  /* tell the compiler we touch the FP environment */

// set rounding mode to round-to-zero
fesetround(FE_TOWARDZERO);
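
And a minimal self-contained sketch of the mode taking effect (the
volatiles keep the division from being folded at compile time; how
faithfully the mode is honored depends on the compiler's FENV_ACCESS
support):

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    volatile float one = 1.0f, three = 3.0f;

    fesetround(FE_TOWARDZERO);
    printf("toward zero: %.9g\n", one / three);  /* last bit truncated */

    fesetround(FE_TONEAREST);                    /* restore the default */
    printf("to nearest:  %.9g\n", one / three);
    return 0;
}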


   -- Tim Olson
In article <1116823652.733699.199490@g44g2000cwa.googlegroups.com>,
 stevenj@alum.mit.edu wrote:

|  Tim, was that a typo?
|  
|  I'm pretty sure that -0 * -0 gives you +0 in IEEE 754, just like you
|  would expect. 

Yes, you are right.  I missed the second minus sign in the original 
question, and thought it was a question on the commutativity of -0 * 0 
vs 0 * -0.

   -- Tim Olson
Rune Allnor wrote:
> Jerry Avins wrote:
>
>>Michel Rouzic wrote:
>>
>>>I agree with Rune.
>>>
>>>How could you have any better representation of real numbers?
>>>
>>>"a representation with explicit uncertainty"
>>>
>>>does it mean that IEEE 754's precision is uncertain to you? cuz if you
>>>think so well, take any number you want, and add 1 to it
>>>(hexadecimaly). for example, if you wanna know whats the precision of
>>>1.00000 on a 32-bit float, add 1 to the last byte and you get
>>>1.00000011920929. i think the uncertainity is explicit enough
>>
>>1 added to the exponent creates an even greater error; so what? What is
>>the result of adding 1.00000 and 1.00000? I suspect it will not differ
>>much from 2.00000.
>
> Not in this simple example, no. There are some acoustic modeling
> schemes that involve expressions of the sort
>
>    c = exp(a)*exp(-b)     [1]
>
> where both 'a' and 'b' are large, so they almost cancel in the
> analytic computations. In numerical schemes, however, 'a' and
> 'b' are computed separately and inserted into [1]. In these
> schemes the numerical errors do not cancel, and they completely
> dominate the computed number 'c'.
>
> The basic schemes were proposed in the early 1950s, but stable
> numerical implementations were not available until the mid 1990s.
>
>>Did you want to make a point?
>
> The point was that in a finite representation of real numbers,
> there is finite uncertainty. By toggling the LSB of the number A,
> Michel showed that the effect is on the order of A*1e-7.
> Which is the very reason why people insist that single-precision
> floating point numbers have six significant digits and not seven.
I know your point; I asked Rouzic. (I understand what he was getting at now.)
> Which is the whole question behind my arguing in this thread:
> Can it be guaranteed unconditionally that even the LSB of the
> mantissa in the answer is 0 after having multiplied a non-zero
> number with the exact representation for +/-0?
As far as I know, it is guaranteed. But it doesn't help. For example,
the term sqrt(b^2 - 4ac) arises in factoring a quadratic of the form
ax^2 + bx + c. When the two factors are equal, that term becomes zero
_by_subtraction_. At least one serious design flaw was caused by the
resulting numerical instability.
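
A minimal C sketch of that failure mode, together with the usual
workaround of computing the small root from the product of the roots
(the coefficients are invented for illustration):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Roots of x^2 + 10000x + 1: approximately -1e4 and -1e-4.
       In single precision b*b - 4ac rounds to b*b, so the small
       root evaporates by subtraction. */
    float a = 1.0f, b = 1.0e4f, c = 1.0f;
    float d = sqrtf(b*b - 4.0f*a*c);

    float x1 = (-b - d) / (2.0f*a);  /* large root: no cancellation */
    float x2 = (-b + d) / (2.0f*a);  /* small root: cancels to 0    */
    float x2_stable = c / (a*x1);    /* from Vieta: x1*x2 = c/a     */

    printf("naive  x2 = %g\n", x2);        /* prints 0       */
    printf("stable x2 = %g\n", x2_stable); /* prints -0.0001 */
    return 0;
}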
> If not, there is the possibility that LSB errors can accumulate
> throughout a sequence of computations in such a way that there
> is non-zero numerical garbage contained in variables that formally
> should contain the number 0.
The difference of two large numbers can have an absolute error equal to
the sum of their absolute errors. The difference between two numbers on
the order of a billion that differ by a hundred or so and are accurate
to one ppm is entirely random. This is not a flaw limited to computer
representations. It applies to all calculations on measured quantities;
i.e., to all those that matter to engineers.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
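
To see Jerry's billion-with-a-hundred example in single precision (a
sketch; the numbers are arbitrary):

#include <stdio.h>

int main(void)
{
    /* Near 1e9 a float's grid spacing is 64, so a true difference
       of 100 comes out as a multiple of 64 -- here, 128. */
    volatile float p = 1000000100.0f;  /* stored as 1000000128 */
    volatile float q = 1000000000.0f;  /* exactly representable */
    printf("%g\n", p - q);             /* prints 128, not 100  */
    return 0;
}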
Rune Allnor wrote:

   ...

> computations become increasingly inaccurate the larger the magnitude
> of the numbers.
Only in an absolute sense. The relative error remains more-or-less
constant over the gamut. That's a very useful attribute. Perhaps those
of us who did real engineering with slide rules are in the best position
to appreciate that. I have worked with both microvolts and kilovolts. I
can't imagine needing to work with their sum.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
Jon Harris wrote:
> "Jerry Avins" <jya@ieee.org> wrote in message > news:P9WdnavHPr57Ww3fRVn-sQ@rcn.net... > >>Michel Rouzic wrote: >> >>>ok, im not sure i made what i meant very clear, and i hardly understand >>>what you're talkin about >> >>Unfortunately, I think that you hardly know what you're talking about. >> >> >>>in IEEE representation, 1.00000's decimal value is 1065353216 >> >>No the value of 1.00000 is 1.00000. Why do you call it a "decimal" >>value" it's just bits that represent something. Those bits, construed as >>two's complement may be 1065353216. If you construe them as packed BCD, >>excess-three, ASCII, EBCDIC, signed binary, or reflected-binary Gray, >>they will represent other numbers or symbols. So what? >> >> >>>If you add one to that decimal value, 1065353217, you obtain >>>1.00000011920929, thus, you see clearly the unprecision. >> >>No. Adding 1 to 1065353217 gives 1065353218. Performing two's complement >>addition on an object that is not two's complement yields an invalid >>result. Would you expect otherwise? > > > > Well, Mike may have gone about things a strange way (although it kind of makes > sense to me), but he did get the right answer in the end. The smallest value > larger than 1 that can be represented in IEE 754 single-precision (32-bit) > floating point format is indeed 1.00000011920929. Maybe that is imprecise for > some, but in my work, that level of precision is sufficient for the vast > majority of tasks.
OK. I grant you that it might be a language problem. IEEE floats, like
all other floats except possibly bignums, are quantized on a grid that
changes size when the exponent does. I'm used to making physical
measurements; my tools are folding rules, vernier calipers, micrometers,
and rarely Jo blocks. With interferometry, one sodium line is about 11
microinches; I can estimate maybe a third of a line. So single-precision
floats are on the edge of good enough for anything I might need. Doubles
are for preserving accuracy, not expressing it.
> I sometimes look at IEEE 754 floats as hex numbers, in which case 1.0 =
> 0x3F800000. The next largest float does happen to be 0x3F800000 + 1 =
> 0x3F800001, which interpreted as a float is 1.00000011920929. After all,
> an IEEE float is just 32 bits of data, and you can choose to interpret
> that data as a float, hex value, or decimal value, or anything else you
> like as convenient. The SHARC tools let you view registers as floats,
> signed/unsigned integers, hex, etc. so I am kind of used to interpreting
> things different ways. I often find the hex notation useful, because you
> can easily separate the sign bit, exponent, and mantissa. The decimal
> interpretation that Mike used is rarely illuminating for me.
Mike's expression of quantization size in decimal might be useful for
some purposes. It would have been more useful had it not been expressed
as the result of adding one to a float's LSB in such a way that I
thought he implied that the result is an error. That's not to say that
the separate parts of a float aren't useful or interesting in
themselves. The exponent -- with attention to offset if needed -- gives
the high bits of a number's logarithm, and the mantissa can be an index
to a small table for the rest. Half the exponent (again, offset
considered) and a mantissa of 3 is an excellent starting point for an
iterative square-root routine.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
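
A hedged sketch of the square-root idea: halve the biased exponent by
bit manipulation to seed a Newton iteration. The shift-and-add constant
below is a common folk recipe, not necessarily Jerry's exact one, and it
assumes a positive normal input:

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* First guess at sqrt(x): shifting the bit pattern right by one
   roughly halves the exponent; the added constant restores the bias. */
static float sqrt_seed(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);     /* view the float as raw bits */
    bits = (bits >> 1) + (127u << 22);  /* halve exponent, re-bias    */
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void)
{
    float x = 2.0f;
    float g = sqrt_seed(x);             /* gives 1.5 for x = 2        */
    for (int i = 0; i < 3; i++)         /* a few Newton-Raphson steps */
        g = 0.5f * (g + x / g);
    printf("seeded: %.7g   libm: %.7g\n", g, sqrtf(x));
    return 0;
}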
"Jerry Avins" <jya@ieee.org> wrote in message
news:jK-dnQ07sMZgpA_fRVn-uw@rcn.net...
> Rune Allnor wrote:
>
>   ...
>
>> computations become increasingly inaccurate the larger the magnitude
>> of the numbers.
>
> Only in an absolute sense. The relative error remains more-or-less
> constant over the gamut. That's a very useful attribute. Perhaps those
> of us who did real engineering with slide rules are in the best position
> to appreciate that. I have worked with both microvolts and kilovolts. I
> can't imagine needing to work with their sum.
Or said another way, when you are working with kilovolts, you probably aren't concerned with microvolt accuracy. But when measuring millivolts, you probably are.
Jon Harris wrote:

   ...

> Or said another way, when you are working with kilovolts, you probably
> aren't concerned with microvolt accuracy. But when measuring millivolts,
> you probably are.
Precisely! (Or should I say, "Accurate enough"?) :-)

Jerry
-- 
Engineering is the art of making what you want from things you can get.
Jerry Avins wrote:
> Rune Allnor wrote:
>
>   ...
>
>> computations become increasingly inaccurate the larger the magnitude
>> of the numbers.
>
> Only in an absolute sense. The relative error remains more-or-less
> constant over the gamut. That's a very useful attribute. Perhaps those
> of us who did real engineering with slide rules are in the best position
> to appreciate that. I have worked with both microvolts and kilovolts. I
> can't imagine needing to work with their sum.
I don't know if I did "real engineering" at the time (or have done ever
since, for that matter), but before I went to School of Engineering I
had a vacation internship at a metal factory. My job was to see that
the furnaces ran OK, that they held their set points etc. These
furnaces trotted along at 30 - 40 megawatts, the currents being on the
order of 110 - 140 kiloamps.

During the first electronics lab exercise we went to in School of
Engineering, we got a handful of 7400 ICs (the ICs contained four or
six 'NOT' logical gates), and were assigned to set up some network of
logical gates. My network didn't work, and when the lab engineer came
and measured the circuit, he found currents on the order of 0.5 amps.
I didn't understand the problem. In my world, a circuit drawing 0.5
amps was hardly even switched on.

Rune
Rune Allnor wrote:
> Jerry Avins wrote:
>
>>Rune Allnor wrote:
>>
>>  ...
>>
>>>computations become increasingly inaccurate the larger the magnitude
>>>of the numbers.
>>
>>Only in an absolute sense. The relative error remains more-or-less
>>constant over the gamut. That's a very useful attribute. Perhaps those
>>of us who did real engineering with slide rules are in the best position
>>to appreciate that. I have worked with both microvolts and kilovolts. I
>>can't imagine needing to work with their sum.
>
> I don't know if I did "real engineering" at the time (or have done
> ever since, for that matter), but before I went to School of
> Engineering I had a vacation internship at a metal factory. My job
> was to see that the furnaces ran OK, that they held their set
> points etc. These furnaces trotted along at 30 - 40 megawatts,
> the currents being on the order of 110 - 140 kiloamps.
>
> During the first electronics lab exercise we went to in School of
> Engineering, we got a handful of 7400 ICs (the ICs contained four
> or six 'NOT' logical gates), and were assigned to set up some network
> of logical gates. My network didn't work, and when the lab engineer
> came and measured the circuit, he found currents on the order of
> 0.5 amps. I didn't understand the problem. In my world, a circuit
> drawing 0.5 amps was hardly even switched on.
Ah! but that's a story about a ratio, not a sum.

Jerry
-- 
Engineering is the art of making what you want from things you can get.