Newbie question on FFT

Started by Michel Rouzic May 19, 2005
OK, I'm not sure I made what I meant very clear, and I hardly understand
what you're talking about.

In IEEE representation, 1.00000's decimal value is 1065353216.

If you add one to that decimal value, 1065353217, you obtain
1.00000011920929; thus, you see the imprecision clearly.
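
In C, that trick can be written like this (a minimal sketch; the variable
names are mine, and memcpy avoids the undefined behavior of pointer punning):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float f = 1.0f;
        uint32_t bits;

        memcpy(&bits, &f, sizeof bits);  /* read the raw bit pattern */
        printf("bits of 1.0f  = %u\n", (unsigned)bits);  /* 1065353216 */

        bits += 1;                       /* bump the mantissa's LSB */
        memcpy(&f, &bits, sizeof f);     /* reinterpret as a float */
        printf("next float up = %.14f\n", f);  /* 1.00000011920929 */

        return 0;
    }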

in article ogailx502-7C0166.09200522052005@news.isp.giganews.com, Tim Olson
at ogailx502@NOSPAMsneakemail.com wrote on 05/22/2005 10:20:

> | i'm also curious if the +/- multiplication conventions apply to IEEE zero.
> | does +0 * -0 get you -0? and -0 * -0 = +0? (just a curiosity.)
>
> Both give you -0.
it seems odd that -0 * -0 gives you -(0^2) when for any other non-negative x, -x * -x = +(x^2).
> | here's another curiosity: what is +0 + -0 = ?  + or - zero?
>
> +0.
i guess you have to pick one or the other.
> | one last thing: does IEEE 754 define compliance down to the LSB?
>
> Results for defined operations must be accurate to within 1/2 ULP (unit
> of least precision) of the infinitely-precise result.  This value
> depends upon whether the computation is done in single, double,
> extended-single or extended-double precision.
which means it has to be bit-accurate.  all IEEE 754 compliant implementations must yield precisely the same result word for the same operation and same word size.  then, it seems to me, the only difference between implementations is the code they get compiled to.
> | is the method of rounding specified by the standard?
>
> Yes -- round to:
>    nearest (even)  ; default
>    zero  (truncate)
>    +Inf  (ceiling)
>    -Inf  (floor)
so the user can decide which way it goes?  what is the function call, say in
the standard C library, to set this?

--
r b-j    rbj@audioimagination.com

"Imagination is more important than knowledge."
Michel Rouzic wrote:
> OK, I'm not sure I made what I meant very clear, and I hardly understand
> what you're talking about.
Unfortunately, I think that you hardly know what you're talking about.
> in IEEE representation, 1.00000's decimal value is 1065353216
No, the value of 1.00000 is 1.00000. Why do you call it a "decimal"
value? It's just bits that represent something. Those bits, construed as
two's complement, may be 1065353216. If you construe them as packed BCD,
excess-three, ASCII, EBCDIC, signed binary, or reflected-binary Gray,
they will represent other numbers or symbols. So what?
> If you add one to that decimal value, 1065353217, you obtain
> 1.00000011920929; thus, you see the imprecision clearly.
No. Adding 1 to 1065353217 gives 1065353218. Performing two's complement
addition on an object that is not two's complement yields an invalid
result. Would you expect otherwise?

Jerry
--
Engineering is the art of making what you want from things you can get.
robert bristow-johnson wrote:
> in article 1116647116.867407.186320@g14g2000cwa.googlegroups.com, Rune
> Allnor at allnor@tele.ntnu.no wrote on 05/20/2005 23:45:
>
> > The problems occur because the floating point format is the
> > only (well, the least impractical) non-integer number format
> > that can be implemented on a computer,
>
> when i was at Fostex R&D, they had a class called "real" or maybe it was
> "rational" that contained two long ints, one for numerator and the other
> for denominator.  we were implementing in a fixed-point target (some 68030
> variant) and they wanted to be able to represent 1/3 and such numbers
> exactly.
Well, yes, that's an exact representation. But hardly practical, in the long run.
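
Such a type is easy enough to sketch in C (my own illustration, not the
Fostex class):

    #include <stdio.h>

    typedef struct { long num; long den; } rational;

    static long gcd(long a, long b)
    {
        while (b != 0) { long t = a % b; a = b; b = t; }
        return a < 0 ? -a : a;
    }

    static rational rat_mul(rational x, rational y)
    {
        rational r = { x.num * y.num, x.den * y.den };  /* can overflow! */
        long g = gcd(r.num, r.den);
        if (g != 0) { r.num /= g; r.den /= g; }         /* reduce to lowest terms */
        return r;
    }

    int main(void)
    {
        rational third = { 1, 3 };        /* exactly 1/3, no rounding */
        rational three = { 3, 1 };
        rational one = rat_mul(third, three);
        printf("%ld/%ld\n", one.num, one.den);   /* prints 1/1 */
        return 0;
    }

The impracticality shows up quickly: numerators and denominators grow under
repeated arithmetic until the long ints overflow.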
> > and it can only represent a very small subset of the real numbers.
>
> i guess, expressed as a percentage of the uncountably infinite set of real
> numbers, that's true.  that very small subset of IEEE doubles is 0% (in the
> limit) of the entire set of reals.
True. The 64 bit floating point format can represent at most 2^64 different numbers. Some numbers are represented twice (+/-0), and other bit patterns are reserved for stuff like +/-Inf, NaN and so on, so there are slightly fewer distinct double precision floating point numbers.
> but, somehow, i'm still satisfied with the coverage.
So am I. The floating point formats are perfectly usable, as long as we all agree that they are only a crude representation of a subset of the real numbers, and know how to deal with the shortcomings.
> > That's the sole reason
> > for floating point numbers being discussed at all. Not because
> > they are particularly well suited for their use.
>
> i'm not too hot about the IEEE 754 format, but i think that floating-point
> numbers with sufficient bits for both exponent and range are pretty well
> suited for nearly all computer computation that normal people do.
Agreed. But are they interesting to anyone other than computer programmers? Are they studied in maths for reasons other than numerical computing tasks on digital computers? I would be very surprised if they were.
> there are times that i think implementation of DSP algorithms (like
> filtering) can make more sense in fixed-point (for the same number of bits
> in the word), but floating-point format works pretty good.
I never said they are useless.
> > Because FP numbers are available.
> >
> > Beggars can't be choosers.
>
> ya never know.  i've run into a few pretty choosy beggars.

Literally.

Rune
On Sun, 22 May 2005 12:46:45 -0400, robert bristow-johnson wrote:
>> | is the method of rounding specified by the standard?
>>
>> Yes -- round to:
>>    nearest (even)  ; default
>>    zero  (truncate)
>>    +Inf  (ceiling)
>>    -Inf  (floor)
>
> so the user can decide which way it goes?  what is the function call, say in
> the standard C library, to set this?
The C standard libraries (and the language) predate IEEE arithmetic, but I
think that most implementations give you the fpsetround() and fpgetround()
functions from SysV.

The BSD math(3) man pages (at least the one on my FreeBSD box) have a nice
description of some of the same sorts of issues associated with DEC D and G
floating point formats, which were pretty common at the time the BSD systems
were growing up.

http://www.freebsd.org/cgi/man.cgi?query=math&apropos=0&sektion=3&manpath=FreeBSD+4.11-stable&format=html

Hmm. That's interesting... I tried the default man page for math at the
on-line site, to get that link, and discovered that the doco maintainers
have edited the Vax D format floating point stuff out of the 5.3 series. I
guess that's fair, since there isn't a Vax port in the FreeBSD tree. The
NetBSD-current on-line manual pages still have the Vax stuff in them, too.
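C99 did eventually standardize this in <fenv.h>, so on a reasonably current
compiler something like the following should work (a minimal sketch):

    /* C99 <fenv.h> rounding-mode control */
    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void)
    {
        volatile double three = 3.0;  /* volatile keeps the division at run time */

        printf("%.20f\n", 1.0 / three);      /* round-to-nearest (the default) */

        if (fesetround(FE_UPWARD) == 0)      /* returns 0 on success */
            printf("%.20f\n", 1.0 / three);  /* same division, rounded toward +Inf */

        fesetround(FE_TONEAREST);            /* restore the default */
        return 0;
    }

Cheers,
--
Andrew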
Tim, was that a typo?

I'm pretty sure that -0 * -0 gives you +0 in IEEE 754, just like you
would expect.  A little C program to print out the result, on two
different IEEE-compliant CPUs (a P-III and a G4), agrees:

0 * 0 = 0
0 * -0 = -0
-0 * 0 = -0
-0 * -0 = 0
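
The program itself was nothing fancy; a minimal equivalent (my
reconstruction, since the original wasn't posted) is:

    #include <stdio.h>

    int main(void)
    {
        double z[2] = { 0.0, -0.0 };
        int i, j;

        /* all four sign combinations of zero times zero */
        for (i = 0; i < 2; i++)
            for (j = 0; j < 2; j++)
                printf("%g * %g = %g\n", z[i], z[j], z[i] * z[j]);

        return 0;
    }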

Steven

"Jerry Avins" <jya@ieee.org> wrote in message
news:P9WdnavHPr57Ww3fRVn-sQ@rcn.net...
> Michel Rouzic wrote:
> > OK, I'm not sure I made what I meant very clear, and I hardly understand
> > what you're talking about.
>
> Unfortunately, I think that you hardly know what you're talking about.
>
> > In IEEE representation, 1.00000's decimal value is 1065353216.
>
> No, the value of 1.00000 is 1.00000. Why do you call it a "decimal"
> value? It's just bits that represent something. Those bits, construed as
> two's complement, may be 1065353216. If you construe them as packed BCD,
> excess-three, ASCII, EBCDIC, signed binary, or reflected-binary Gray,
> they will represent other numbers or symbols. So what?
>
> > If you add one to that decimal value, 1065353217, you obtain
> > 1.00000011920929; thus, you see the imprecision clearly.
>
> No. Adding 1 to 1065353217 gives 1065353218. Performing two's complement
> addition on an object that is not two's complement yields an invalid
> result. Would you expect otherwise?
Well, Mike may have gone about things a strange way (although it kind of
makes sense to me), but he did get the right answer in the end. The smallest
value larger than 1 that can be represented in IEEE 754 single-precision
(32-bit) floating point format is indeed 1.00000011920929. Maybe that is
imprecise for some, but in my work, that level of precision is sufficient
for the vast majority of tasks.

I sometimes look at IEEE 754 floats as hex numbers, in which case 1.0 =
0x3F800000. The next largest float does happen to be 0x3F800000 + 1 =
0x3F800001, which interpreted as a float is 1.00000011920929. After all, an
IEEE float is just 32 bits of data, and you can choose to interpret that
data as a float, hex value, decimal value, or anything else you like, as
convenient.

The SHARC tools let you view registers as floats, signed/unsigned integers,
hex, etc., so I am kind of used to interpreting things different ways. I
often find the hex notation useful, because you can easily separate the sign
bit, exponent, and mantissa. The decimal interpretation that Mike used is
rarely illuminating for me.
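Outside the SHARC tools, the same view is easy to get in C (a sketch; the
field widths are those of IEEE 754 single precision):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float f = 1.0f;
        uint32_t u;

        memcpy(&u, &f, sizeof u);  /* grab the raw 32 bits */
        printf("%f = 0x%08X  sign=%u exp=0x%02X mant=0x%06X\n",
               f, (unsigned)u, (unsigned)(u >> 31),
               (unsigned)((u >> 23) & 0xFF), (unsigned)(u & 0x7FFFFF));
        /* prints: 1.000000 = 0x3F800000  sign=0 exp=0x7F mant=0x000000 */
        return 0;
    }

--
Jon Harris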
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message
news:BEB62FB5.7835%rbj@audioimagination.com...
> in article ogailx502-7C0166.09200522052005@news.isp.giganews.com, Tim Olson
> at ogailx502@NOSPAMsneakemail.com wrote on 05/22/2005 10:20:
>
> > | is the method of rounding specified by the standard?
> >
> > Yes -- round to:
> >    nearest (even)  ; default
> >    zero  (truncate)
> >    +Inf  (ceiling)
> >    -Inf  (floor)
>
> so the user can decide which way it goes?  what is the function call, say in
> the standard C library, to set this?
On the SHARC, only the first 2 are supported (at least natively in hardware). There is a bit in a system register (MODE1) to choose nearest vs. truncate.
Jerry Avins wrote:
> Michel Rouzic wrote:
> > I agree with Rune.
> >
> > How could you have any better representation of real numbers?
> >
> > "a representation with explicit uncertainty"
> >
> > does it mean that IEEE 754's precision is uncertain to you? cuz if you
> > think so, well, take any number you want, and add 1 to it
> > (hexadecimally). for example, if you wanna know what's the precision of
> > 1.00000 on a 32-bit float, add 1 to the last byte and you get
> > 1.00000011920929. i think the uncertainty is explicit enough
>
> 1 added to the exponent creates an even greater error; so what? What is
> the result of adding 1.00000 and 1.00000? I suspect it will not differ
> much from 2.00000.
Not in this simple example, no. There are some acoustic modeling schemes
that involve expressions of the sort

   c = exp(a)*exp(-b)            [1]

where both 'a' and 'b' are large, so they almost cancel in the analytic
computations. In numerical schemes, however, 'a' and 'b' are computed
separately and inserted into [1]. In these schemes the numerical errors do
not cancel, and they completely dominate the computed number 'c'. The basic
schemes were proposed in the early 1950s, but stable numerical
implementations were not available until the mid 1990s.
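A toy illustration of that failure mode (my own numbers, nothing to do with
the actual acoustics schemes):

    /* exp(a)*exp(-b) vs. exp(a-b) for large, nearly-cancelling a and b */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double a = 750.0, b = 740.0;       /* a - b = 10 */

        double naive  = exp(a) * exp(-b);  /* exp(750) overflows to +Inf,
                                              so the product is +Inf      */
        double stable = exp(a - b);        /* exp(10) = 22026.46..., fine */

        printf("naive:  %g\n", naive);
        printf("stable: %g\n", stable);
        return 0;
    }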
> Did you want to make a point?
The point was that in a finite representation of real numbers, there is
finite uncertainty. By toggling the LSB of the number A, Michel showed that
the effect is on the order of A*1e-7. Which is the very reason why people
insist that single-precision floating point numbers have six significant
digits and not seven.

Which is the whole question behind my arguing in this thread: Can it be
guaranteed unconditionally that even the LSB of the mantissa in the answer
is 0 after having multiplied a non-zero number with the exact representation
for +/-0? If not, there is the possibility that LSB errors can accumulate
throughout a sequence of computations in such a way that there is non-zero
numerical garbage contained in variables that formally should contain the
number 0.
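A quick way to probe this empirically (a sketch; for finite x the exact
product with +/-0 is +/-0, so under correct rounding the answer should come
out exactly zero):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double xs[] = { 1.0, -3.14159, 1e300, 5e-324 };
        size_t i;

        for (i = 0; i < sizeof xs / sizeof xs[0]; i++) {
            double p = xs[i] * 0.0;
            printf("%g * 0.0 = %g (is zero: %d)\n", xs[i], p, p == 0.0);
        }

        /* the exceptions: Inf * 0 and NaN * 0 are NaN, not 0 */
        printf("Inf * 0.0 = %g\n", INFINITY * 0.0);
        return 0;
    }

Rune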
Ronald H. Nicholson Jr. wrote:
> In article <1116647116.867407.186320@g14g2000cwa.googlegroups.com>,
> Rune Allnor <allnor@tele.ntnu.no> wrote:
> > But no one is interested in those numbers for their own sake.
> > Everybody who uses the floating point formats is interested
> > in *real* numbers. Not integers. Not natural numbers. Not
> > rational numbers. Not transcendental numbers.
> >
> > Real numbers.
>
> That's a mathematician's point-of-view.
While I have worked a bit with maths over the years, I'm not a mathematician. I get digital data that represent measurements of physical quantities. The data could be available in floating point, fixed point or integer formats. While each format has its own quirks and idiosyncrasies, they are *not* exact representations of whatever was measured (e.g. the voltage in some electrical sensor), which ought to be represented by the elusive "real numbers".
> But, long before there was
> floating point computer hardware, there was the common use of log
> tables, slide rules, and scientific notation. What was of interest
> (outside of the accounting practice) were not the numbers (real
> or otherwise), but the results of approximate calculations given
> measurements and scientific "constants" limited to a given accuracy
> (e.g. how many digits I can get out of a slide rule calculation depends
> on which glasses prescription I'm wearing :).
Agreed.
> Slide rule calculation
> with the results recorded in scientific notation did have "fuzzy" LSBs
> (least significant digits of questionable (in)significance). For a
> large class of users, that is what computer floating point emulates.
Agreed.
> The fact that the KCS/IEEE representation is exact for an infinitely tiny
> subset of the real number line (but including zero) makes the floating
> point representation "accidentally" more useful for other applications,
> IMHO.
Can't agree here. There is only a finite set of numbers that can be
represented by a 32 bit binary variable on floating point format, IEEE 754
or otherwise. There are 2^32 possible bit patterns, and there can be no more
numbers represented on the format. The same as a long integer in C.

What makes the floating point format so useful is that the interval between
consecutive floating point numbers varies over the range between -MAX_FLOAT
and +MAX_FLOAT. In fact, half the floating point numbers exist in the
interval [-1,1]; just check what happens when you toggle the sign of the
exponent. Which, in turn, means that the computations become increasingly
inaccurate (in absolute terms) the larger the magnitude of the numbers.
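That varying spacing is easy to see directly in C (a sketch; nextafterf has
been standard since C99):

    /* the gap between consecutive floats grows with magnitude */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float xs[] = { 0.001f, 1.0f, 1000.0f, 1e9f };
        size_t i;

        for (i = 0; i < sizeof xs / sizeof xs[0]; i++) {
            float x   = xs[i];
            float gap = nextafterf(x, INFINITY) - x;  /* one ULP at x */
            printf("ulp(%g) = %g\n", x, gap);
        }
        return 0;
    }

Rune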