Forums

Newbie question on FFT

Started by Michel Rouzic May 19, 2005
Rune Allnor wrote:

> [snip]
>
> I don't know if I did "real engineering" at the time (or have done
> ever since, for that matter), but before I went to School of
> Engineering I had a vacation internship at a metal factory. My job
> was to see that the furnaces went OK, that they held their set
> points etc. These furnaces trotted along at 30 - 40 megawatts,
> the currents being on the order of 110 - 140 kiloamps.
>
> During the first electronics lab exercise we went to in School of
> Engineering, we got a handful of 7400 ICs (the ICs contained four
> or six 'NOT' logical gates), and were assigned to set up some network
> of logical gates. My network didn't work, and when the lab engineer
> came and measured the circuit, he found currents on the order of
> 0.5 amps. I didn't understand the problem. In my world, a circuit
> drawing 0.5 amps was hardly even switched on.
My first real-world job (if you ignore ABMAR Teleserve, which repaired air conditioners *and* TVs) introduced me to the concept that a fingerprint is a short circuit ;)

PS: I can probably document being asked to count individual electrons/ions as they arrived. The post-doc involved did not understand the implications of a gain-bandwidth product. My perverse nature demands someone deduce where I was working; for this menagerie I think I've given enough clues ;/
Jerry Avins wrote:

(snip)

>>> to appreciate that. I have worked with both microvolts and kilovolts.
>>> I can't imagine needing to work with their sum.
(someone else wrote)
>> I don't know if I did "real engineering" at the time (or have done
>> ever since, for that matter), but before I went to School of
>> Engineering I had a vacation internship at a metal factory. My job
>> was to see that the furnaces went OK, that they held their set
>> points etc. These furnaces trotted along at 30 - 40 megawatts,
>> the currents being on the order of 110 - 140 kiloamps.
>>
>> During the first electronics lab exercise we went to in School of
>> Engineering, we got a handful of 7400 ICs (the ICs contained four
>> or six 'NOT' logical gates), and were assigned to set up some network
>> of logical gates. My network didn't work, and when the lab engineer
>> came and measured the circuit, he found currents on the order of
>> 0.5 amps. I didn't understand the problem. In my world, a circuit
>> drawing 0.5 amps was hardly even switched on.
> Ah! but that's a story about a ratio, not a sum.
One application that may require unusual sensitivity to currents is cryptography. There are systems for learning the keys of cryptographic equipment by monitoring the current patterns it draws. Most likely still not such a large dynamic range, but small signals against a large background, maybe.

-- glen
"No. Adding 1 to 1065353217 gives 1065353218"

*pats* it was about adding 1 to 1065353216, 1065353217 being the
result. It's all about showing how precise it is, and no need to be
cocky about all the "BCD, excess-three, ASCII, EBCDIC, signed binary,
or reflected-binary Gray" representations that you know of.

ah, someone understood me :). And yeah, choosing the decimal value of
how it's represented was to show what I was doing to find out what the
precision is. Anyway, I don't see why this forum topic is getting so
big; we all know what float representation is all about.
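The trick being discussed is easy to reproduce: 1065353216 is the raw bit pattern of 1.0f, and adding 1 to that integer yields the next representable single-precision value. A quick sketch in Python (the `struct` module just reinterprets the bits; any language with type punning would do the same):

```python
import struct

def float_from_bits(i):
    # Reinterpret a 32-bit unsigned integer as an IEEE 754 single-precision float.
    return struct.unpack('<f', struct.pack('<I', i))[0]

print(float_from_bits(1065353216))  # 1.0
print(float_from_bits(1065353217))  # 1.0000001192092896, i.e. 1 + 2**-23

# The gap between consecutive floats at 1.0 is 2**-23 -- exactly the
# single-precision "precision" the experiment was measuring.
assert float_from_bits(1065353217) - 1.0 == 2.0**-23
```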

On 20 May 2005 21:08:49 -0700, stevenj@alum.mit.edu wrote:

>> PS. IEEE 754 aside, I would be astonished if there were ever any
>> floating-point implementation, anywhere, where multiplying by 0
>> didn't give you exactly (+/-)0 and adding x+0 didn't give you exactly x.
>> (Except for NaN * 0, etc., of course.)
>
> A slight exception: (-0) + (+0) = (+0) in IEEE 754, not that this
> matters here.
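The signed-zero corner case mentioned above can be checked directly; a small sketch (under IEEE 754's default round-to-nearest mode, which is what Python uses):

```python
import math

# Under IEEE 754 default rounding, the sum of two zeros of opposite sign is +0.
s = (-0.0) + (+0.0)
print(s)                         # 0.0
print(math.copysign(1.0, s))     # 1.0  -> the sum carries a + sign
print(math.copysign(1.0, -0.0))  # -1.0 -> -0.0 itself is negatively signed
```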
I remember the good old days when everything was coded in BCD (Binary Coded Decimal). There was one digit for 4 bits (called a nybble), and everything was converted to integers before calculation. The programmer just worked out what accuracy they wanted and wrote the program to handle that many digits. None of this new-fangled airy-fairy floating point.

No silly questions like: does 1/3 * 3 = (0.3 recurring) * 3 = 1? or does 2 * 22/7 * r*r = 2 * (PI) * r^2? (removes tongue from cheek).

--
Work saves us from three great evils: boredom, vice and need. -Voltaire, philosopher (1694-1778)
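The nibble-per-digit packing described above is easy to sketch; a minimal illustration (the function name `to_bcd` is mine, not from any particular machine):

```python
def to_bcd(n):
    # Pack a non-negative integer into packed BCD: one decimal digit per
    # 4-bit nybble, least significant digit in the lowest nybble.
    out, shift = 0, 0
    while n:
        out |= (n % 10) << shift
        n //= 10
        shift += 4
    return out

print(hex(to_bcd(1234)))  # 0x1234 -- the hex nybbles ARE the decimal digits
```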
> I remember the good old days when everything was coded in BCD (Binary
> Coded Decimal). There was one digit for 4 bits (called a nybble),
> everything was converted to integers before calculation. The
> programmer just worked out what accuracy they wanted and wrote the
> program to handle that many digits. None of this new-fangled
> airy-fairy floating point.
maybe, but the only silly thing about IEEE floating-point representation is the questions people can ask, because if you understand it well (and I'm sure you do) you realize that there is no better way to represent floats (unless there's a smarter way I haven't heard of)
"Michel Rouzic" <Michel0528@yahoo.fr> wrote in message
news:1117819203.135271.290510@g44g2000cwa.googlegroups.com...
> > I remember the good old days when everything was coded in BCD (Binary
> > Coded Decimal). There was one digit for 4 bits (called a nybble),
> > everything was converted to integers before calculation. The
> > programmer just worked out what accuracy they wanted and wrote the
> > program to handle that many digits. None of this new-fangled
> > airy-fairy floating point.
>
> maybe, but the only silly thing about IEEE floating point
> representation is the question people can ask because if you understand
> it good (and im sure you do) you realize that there is no better way to
> represent floats (unless there's a smarter way i havent heard of)
Even though some of the features of IEEE floating point make designing hardware to use it more complicated, those same features make it a pretty nice way to represent floating point values. I'm thinking of the things like the "hidden bit" which increases your precision by ~6dB for "free" and denormals.
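To make the "hidden bit" concrete: for normal numbers, IEEE 754 stores only the fraction bits, and the leading 1 of the significand is implied, so a 23-bit field buys 24 bits (~6 dB extra) of precision. A decoding sketch:

```python
import struct

def decode_f32(x):
    # Split a single-precision float into (sign, biased exponent, significand).
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = bits >> 31
    exp  = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    if 0 < exp < 255:
        # Normal number: the significand is 1.frac -- the leading 1 is hidden.
        significand = 1.0 + frac / 2.0**23
    else:
        # Denormal/zero (exp == 0) or inf/NaN (exp == 255): no hidden 1.
        significand = frac / 2.0**23
    return sign, exp, significand

print(decode_f32(1.0))   # (0, 127, 1.0)
print(decode_f32(-6.5))  # (1, 129, 1.625), since -6.5 = -1.625 * 2**(129-127)
```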
Michel Rouzic wrote:
>> I remember the good old days when everything was coded in BCD (Binary
>> Coded Decimal). There was one digit for 4 bits (called a nybble),
>> everything was converted to integers before calculation. The
>> programmer just worked out what accuracy they wanted and wrote the
>> program to handle that many digits. None of this new-fangled
>> airy-fairy floating point.
>
> maybe, but the only silly thing about IEEE floating point
> representation is the question people can ask because if you understand
> it good (and im sure you do) you realize that there is no better way to
> represent floats (unless there's a smarter way i havent heard of)
There is no absolute best floating-point format. IEEE 754 is a good compromise between conflicting needs. Note that the exponent is offset, rather than signed. That speeds certain implementations at the cost of efficiency for square-root and logarithm estimators. With fewer exponent bits, the range would be reduced and the number of significant bits increased. Most DSP applications would benefit from the increased precision and not be inconvenienced by the range reduction.

Jerry
--
Engineering is the art of making what you want from things you can get.
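One concrete payoff of the offset (biased) exponent mentioned above: the bit patterns of positive floats sort in the same order as the values themselves, so comparisons can run on plain integer hardware. A quick check:

```python
import struct

def bits(x):
    # Raw 32-bit pattern of a single-precision float.
    return struct.unpack('<I', struct.pack('<f', x))[0]

# Because the exponent field is biased rather than two's-complement,
# sorting positive floats by bit pattern matches sorting by value.
vals = [0.5, 1.0, 1.5, 2.0, 3.14, 1e10]
print(sorted(vals) == sorted(vals, key=bits))  # True
```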
in article 11a1bf2ta619674@corp.supernews.com, Jon Harris at
jon_harrisTIGER@hotmail.com wrote on 06/03/2005 15:23:

> "Michel Rouzic" <Michel0528@yahoo.fr> wrote in message > news:1117819203.135271.290510@g44g2000cwa.googlegroups.com... >> >> maybe, but the only silly thing about IEEE floating point >> representation is the question people can ask because if you understand >> it good (and im sure you do) you realize that there is no better way to >> represent floats (unless there's a smarter way i havent heard of) > > Even though some of the features of IEEE floating point make designing > hardware to use it more complicated, those same features make it a pretty nice > way to represent floating point values. I'm thinking of the things like the > "hidden bit" which increases your precision by ~6dB for "free" and denormals.
it's also a bitch to deal with in software (a floating-point library running on an integer machine), largely for the same reasons. i've complained about this some years ago here on comp.dsp. i have to admit now, that although i then hated the "hidden 1 MSB", it's probably a good thing they put it in. but it *is* ugly.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message
news:BEC63948.7EE5%rbj@audioimagination.com...
> in article 11a1bf2ta619674@corp.supernews.com, Jon Harris at
> jon_harrisTIGER@hotmail.com wrote on 06/03/2005 15:23:
>
>> "Michel Rouzic" <Michel0528@yahoo.fr> wrote in message
>> news:1117819203.135271.290510@g44g2000cwa.googlegroups.com...
>>>
>>> maybe, but the only silly thing about IEEE floating point
>>> representation is the question people can ask because if you understand
>>> it good (and im sure you do) you realize that there is no better way to
>>> represent floats (unless there's a smarter way i havent heard of)
>>
>> Even though some of the features of IEEE floating point make designing
>> hardware to use it more complicated, those same features make it a pretty
>> nice way to represent floating point values. I'm thinking of the things
>> like the "hidden bit" which increases your precision by ~6dB for "free"
>> and denormals.
>
> it's also a bitch to deal with in software (a floating-point library running
> on an integer machine), largely for the same reasons. i've complained about
> this some years ago here on comp.dsp. i have to admit now, that although i
> then hated the "hidden 1 MSB", it's probably a good thing they put it in.
> but it *is* ugly.
Yep, it's a double-edged sword: nice for the user, but more difficult to implement in either hardware or software emulation. When I was in college, a lab problem we had in our microprocessor design class was to write IEEE floating-point addition and multiplication routines (in assembler) on a Motorola integer chip (68000 family maybe?). It was surprisingly difficult and took quite a bit of code for such seemingly simple tasks.
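For a feel of what such an exercise involves, here is a heavily simplified single-precision multiply done entirely with integer operations. This is a sketch of the technique only, covering normal numbers: no zeros, denormals, infinities, NaNs, or proper rounding, which are exactly the cases that make the real job hard:

```python
import struct

def f32_mul(a, b):
    # Multiply two floats given as raw 32-bit patterns, using integer ops only.
    sign = (a ^ b) & 0x80000000
    ea, eb = (a >> 23) & 0xFF, (b >> 23) & 0xFF
    # Restore the hidden leading 1 to get the full 24-bit significands.
    ma = (a & 0x7FFFFF) | 0x800000
    mb = (b & 0x7FFFFF) | 0x800000
    m = ma * mb                  # up-to-48-bit product
    e = ea + eb - 127            # add exponents, remove one copy of the bias
    if m & (1 << 47):            # product in [2, 4): renormalize down
        m >>= 24
        e += 1
    else:                        # product in [1, 2): already normalized
        m >>= 23
    return sign | (e << 23) | (m & 0x7FFFFF)

def bits(x):
    return struct.unpack('<I', struct.pack('<f', x))[0]

def val(i):
    return struct.unpack('<f', struct.pack('<I', i))[0]

print(val(f32_mul(bits(1.5), bits(2.0))))   # 3.0
print(val(f32_mul(bits(-1.5), bits(1.5))))  # -2.25
```

Even with all the special cases stripped out, the hidden bit, the bias bookkeeping, and the renormalization step are all visible, which is why the assembler version ran to so much code.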