
Guidance on Low Frequency Oscillator (LFO) implementation

Started by Jaime Andres Aranguren Cardona February 28, 2004
I just returned from a snowboarding weekend in Davos, so here is my
late reply:

robert bristow-johnson wrote:
> hi Andor. i didn't really expect this to get this far, but i think we have
> a tangible, testable (which means it can get settled one way or another),
> not merely semantic, difference about a very small and arcane thing: a
> single bit of information.
Well, at least this discussion is more sensible than others in comp.dsp about the bit (or fractions thereof) :).
> i'll tell you what: please offer a specific counter example of a 25-bit 2's
> complement integer that you believe does not get hit SPOT ON with a 32-bit
> IEEE 754 float. you give me that number (hex or binary would save me work),
> i will check it for range (it must be between -(2^24) and +(2^24) - 1,
> inclusive), and then give you the exact bit pattern for the IEEE 754 that
> exactly hits it.
OK. Admittedly, 25-bit two's complement format is a bad one for counter
examples. But 25-bit unsigned integer is a good one (I'm representing
32-bit IEEE 754 float in binary as sign.exponent.mantissa):

2^25 = 33554432 -> 0.10011000.00000000000000000000000
The exponent is 152, which after un-biasing is 25, so this
floating-point number is in fact 2^25 * (1 + 0) = 2^25.

2^25 - 1 = 33554431 -> 0.10010111.11111111111111111111111
This is the next lowest number representable in 32-bit IEEE 754, and it
is 33554430.0 = 2^25 - 2.

So 33554431.0 _cannot_ be represented in this format. The same goes for
all other odd numbers above 2^24. For these numbers, the exponent is 24
but the least significant bit in the mantissa represents 2^{-23}, which
means that one can only increment or decrement in multiples of 2 for all
numbers above 2^24.
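Andor's counterexample is easy to reproduce on any machine with IEEE 754
single precision; a minimal C sketch (mine, not from the thread):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t n = 33554431u;   /* 2^25 - 1: odd, and above 2^24 */
        float f = (float)n;       /* squeezed into a 24-bit significand */

        /* Prints "33554431 -> 33554432.0": the conversion rounded to the
           nearest representable float, so 2^25 - 1 has no exact single. */
        printf("%u -> %.1f\n", (unsigned)n, f);
        return 0;
    }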
> > I know I defined the term "equal resolution" in a way which makes me win
> > the argument :) - I'm however open to other (sensible) definitions.
>
> sorry, Andor. i don't think you win the argument.
> [friendly assertion :-) ] even given your definition.
My definition of a match was in fact too general. By arithmetically
converting 25-bit unsigned to 25-bit signed, you get an injective map. It
needs to be formulated differently for me to "win".
> but for everything between integer fixed-point all the way to unity
> normalized fractional fixed-point, that "hidden 1 bit" really does add a
> bit, making an N-bit float ("N" not counting the "hidden 1 bit") at least
> as good as any (N-E+1)-bit fixed representation (where E is the number of
> exponent bits in the float).
A constant "bit" adding the full resolution of a moving "bit" sounds like free lunch to me.
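For what it's worth, r b-j's claim for the signed case can be checked by
brute force in a few seconds; a quick C sketch of mine (the loop bounds are
the 25-bit two's complement range from his earlier post):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Round-trip every 25-bit two's complement integer through a
           float. If the float were lossy anywhere in this range, the
           count would be nonzero. */
        long misses = 0;
        for (int32_t i = -(1 << 24); i <= (1 << 24) - 1; i++) {
            if ((int32_t)(float)i != i)
                misses++;
        }
        printf("values not hit exactly: %ld\n", misses);   /* prints 0 */
        return 0;
    }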
> > Regards,
>
> and also to you.
>
> r b-j
I'm glad we're chewing this through - it has always bothered me :).

Andor
"Andor" <an2or@mailcircuit.com> wrote in message
news:ce45f9ed.0403080036.2bcbd117@posting.google.com...
> I just returned from a snowboarding weekend in Davos, so here is my
> late reply:
>
> > i'll tell you what: please offer a specific counter example of a 25-bit 2's
> > complement integer that you believe does not get hit SPOT ON with a 32-bit
> > IEEE 754 float. you give me that number (hex or binary would save me work),
> > i will check it for range (it must be between -(2^24) and +(2^24) - 1,
> > inclusive), and then give you the exact bit pattern for the IEEE 754 that
> > exactly hits it.
>
> OK. Admittedly, 25-bit two's complement format is a bad one for counter
> examples. But 25-bit unsigned integer is a good one (I'm representing
> 32-bit IEEE 754 float in binary as sign.exponent.mantissa):
Well, you've hit on the fact that for fixed point, there are both signed and
unsigned interpretations of the data. But for floating point, signed is your
only option. So I guess if you are using your fixed point numbers to
represent unsigned integers (i.e. there are no negative numbers in your
world), then 32-bit floating point has the same precision as 24-bit unsigned
fixed point. In this case, the sign bit of floating point is wasted, always
being set to 0.

However, that seems a bit of a contrived example when we're talking about
common DSP operations. All the DSP I've done uses signed numbers for either
the data or the coefficients, and usually both.

Of course, someone could invent a new "unsigned 32-bit floating point" data
format where the existing sign bit is used for an extra bit of resolution in
the mantissa. This new format would have the resolution of 25-bit unsigned
integers and still retain 8 bits of exponent.
> 2^25 = 33554432 -> 0.10011000.00000000000000000000000
> The exponent is 152, which after un-biasing is 25, so this
> floating-point number is in fact 2^25 * (1 + 0) = 2^25.
>
> 2^25 - 1 = 33554431 -> 0.10010111.11111111111111111111111
> This is the next lowest number representable in 32-bit IEEE 754, and it
> is 33554430.0 = 2^25 - 2.
>
> So 33554431.0 _cannot_ be represented in this format. The same goes for
> all other odd numbers above 2^24. For these numbers, the exponent is 24
> but the least significant bit in the mantissa represents 2^{-23}, which
> means that one can only increment or decrement in multiples of 2 for all
> numbers above 2^24.
<snip>
> A constant "bit" adding the full resolution of a moving "bit" sounds
> like free lunch to me.
As long as you compare "apples to apples", i.e. signed fixed point (integer
or fractional) to signed floating point, floating point does have the extra
bit of resolution.

-Jon
One must be careful when comparing the resolution of signed integers
with floating point numbers.  The absolute resolution of an integer
measurement is of fixed size, whereas the resolution of a floating
point value changes with its absolute size in a very non-linear manner:
every jump to the next lower exponent suddenly doubles the absolute
resolution (until you get to the bottom of the lowest exponent, where what
happens depends on whether or not IEEE underflow is supported).

So with floats containing a hidden bit plus 23-bit mantissa: if a positive
value is above 2^24, a signed 32-bit integer representation would have more
resolution; below 2^23, the floating representation would have more
resolution; between 2^23 and 2^24, about the same.
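Ron's three regimes can be tabulated with C99's nextafterf(), which returns
the adjacent representable float. A small sketch of mine comparing the float
step size (ULP) against an integer's constant step of 1 (link with -lm):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Step size between adjacent floats at several magnitudes; a
           32-bit integer's step size is always exactly 1. */
        float probes[] = { 1000.0f,
                           8388608.0f,    /* 2^23 */
                           16777216.0f,   /* 2^24 */
                           33554432.0f }; /* 2^25 */

        for (int i = 0; i < 4; i++) {
            float ulp = nextafterf(probes[i], INFINITY) - probes[i];
            printf("near %10.0f the float step is %g\n", probes[i], ulp);
        }
        /* Prints steps of 6.10352e-05, 1, 2, 4: finer than an integer
           below 2^23, the same from 2^23 to 2^24, coarser above 2^24. */
        return 0;
    }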

In article <BC6CBA1D.9215%rbj@surfglobal.net>,
robert bristow-johnson  <rbj@surfglobal.net> wrote:
>In article 40472035$1@pfaff2.ethz.ch, Andor Bariska at andor@nospam.net
>wrote on 03/04/2004 07:25:
>
>> Jon Harris wrote:
>>> I would disagree with this. With 40-bit floating point, you actually have
>>> 33 bits of mantissa because of the "hidden bit". So it's slightly better
>>> than 32-bit fixed point.
>>
>> We still only have 31 bits of resolution (ie. 2^31 distinct states of
>> the mantissa). Hidden "bit" or no hidden "bit", it doesn't make a
>> difference to the resolution.
>
>no, if the hidden one bit was in there explicitly and the word size for the
>mantissa remained the same, you would lose one bit of resolution.
>
>BTW, i always include the sign bit, too, when comparing floating-point to
>fixed. a 32 bit float with 8 exp bits has 25 bits of resolution compared to
>a properly normalized fixed point of the same range. so i would bet that
>the 40 bit has 33 bits of resolution in the signed mantissa.
>
>r b-j
IMHO. YMMV.
--
Ron Nicholson    rhn AT nicholson DOT com    http://www.nicholson.com/rhn/
#include <canonical.disclaimer>      // only my own opinions, etc.
rhn@mauve.rahul.net (Ronald H. Nicholson Jr.) wrote in message news:<c2iv6i$vnh$1@blue.rahul.net>...
> One must be careful when comparing the resolution of signed integers
> with floating point numbers.
agreed.
> The absolute resolution of an integer
> measurement is of fixed size, whereas the resolution of a floating
> point value changes with its absolute size in a very non-linear manner:
> every jump to the next lower exponent suddenly doubles the absolute
> resolution (until you get to the bottom of the lowest exponent, where what
> happens depends on whether or not IEEE underflow is supported).
agreed. if it's IEEE-754 compliant, it must be able to deal with denormalized floats.
> So with floats containing a hidden bit plus 23-bit mantissa: if a positive
> value is above 2^24, a signed 32-bit integer representation would have
> more resolution;
of course. that is another (and very good) topic of discussion: When is an
N-bit fixed point number system better than an N-bit floating point number
system?

i was comparing 32-bit IEEE floats to 25-bit 2's complement fixed and said
that the 32-bit float will *always* have at least as much resolution as the
25-bit fixed. (that would not be the case if it were a 26-bit fixed, and i
thought that the dispute between Andor and myself was whether or not it
would work for 25-bit fixed and not require going to 24-bit signed fixed.)
> below 2^23, the floating representation would have more
> resolution; between 2^23 and 2^24, about the same.
yes. you can include negative numbers by adding the qualifier "below 2^23
[in magnitude] the floating representation..."

r b-j
Jon Harris wrote:
...
> Well, you've hit on the fact that for fixed point, there are both signed and
> unsigned interpretations of the data.
I guess that is an advantage you have when working with fixed-point format: if you don't need the sign bit, you can use it for further resolution, which you can't do with the floating-point format. ...
> Of course, someone could invent a new "unsigned 32-bit floating point" data
> format where the existing sign bit is used for an extra bit of resolution in
> the mantissa. This new format would have the resolution of 25-bit unsigned
> integers and still retain 8 bits of exponent.
Instead, I'd much rather that the ADI SHARC "float" instruction could be set
to either signed or unsigned (just as the "fix" instruction can be set to
round or truncate). ...
> As long as you compare "apples to apples", i.e. signed fixed point (integer
> or fractional) to signed floating point, floating point does have the extra
> bit of resolution.
I really think that the 32-bit floating-point format is far superior to
25-bit fixed. You _can_ have all the resolution of the latter (by
offsetting), and the stepwise doubling of resolution as the magnitude is
halved seems ideally suited for audio work.

Regards,
Andor
In article <ce45f9ed.0403090013.7aa34bbe@posting.google.com>,
Andor <an2or@mailcircuit.com> wrote:
>Jon Harris wrote:
>...
>> Well, you've hit on the fact that for fixed point, there are both signed and
>> unsigned interpretations of the data.
>
>I guess that is an advantage you have when working with fixed-point
>format: if you don't need the sign bit, you can use it for further
>resolution, which you can't do with the floating-point format.
Actually, you can do the same thing in floating-point as with fixed-point to
get further resolution. Just negate and offset the all-positive FP data by
half the range, so that the sign bit becomes the MSB.

IMHO. YMMV.
--
Ron Nicholson    rhn AT nicholson DOT com    http://www.nicholson.com/rhn/
#include <canonical.disclaimer>      // only my own opinions, etc.
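A C sketch of Ron's offset trick (helper names are mine, purely
illustrative): re-center the 25-bit unsigned data around zero before
converting, and undo the offset on the way back. Every re-centered value
fits in the 24-bit significand plus sign, so nothing is rounded.

    #include <stdio.h>
    #include <stdint.h>

    #define HALF_RANGE (1 << 24)   /* half of the 25-bit unsigned range */

    /* A 25-bit unsigned value is re-centered around zero so the float's
       sign bit carries the 25th bit of information. */
    static float pack_u25(uint32_t u)
    {
        return (float)((int32_t)u - HALF_RANGE);  /* in [-2^24, 2^24): exact */
    }

    static uint32_t unpack_u25(float f)
    {
        return (uint32_t)((int32_t)f + HALF_RANGE);
    }

    int main(void)
    {
        uint32_t u = 33554431u;   /* 2^25 - 1: not exact as a plain float */
        printf("%u -> %u (lossless)\n",
               (unsigned)u, (unsigned)unpack_u25(pack_u25(u)));
        return 0;
    }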
In article <14a86f87.0402281659.a95957b@posting.google.com>,
Jaime Andres Aranguren Cardona <jaime.aranguren@ieee.org> wrote:
>Hi, guys.
>
>I need your advice on the implementation of an LFO, with variable Fo
>(oscillation frequency), in the range 0.01Hz <= Fo <= 2.0 Hz, sampling
>at Fs = 44.1kHz.
>
>To be done with an ADSP-21065L in floating point (EzKit).
>
>Due to these ranges, a LUT (look up table) doesn't seem to be a viable
>solution.
Whether or not a LUT is suitable depends on how many bits of accuracy (or
how low a noise level) you need from your LFO. For 5 bits of accuracy, a
256-entry LUT should do for any frequency down to DC. Just convert your Fo
into a period Po, and use (256.0 * t/Po) mod 256 to get a table index.

Bigger tables and various interpolation schemes (linear, polynomial,
windowed sinc) will lower the noise level further (e.g. increase the number
of usable bits). But even just a simple low-pass filter on the 5-bit output
may be enough for some applications.

e.g. one way of looking at an LFO is as a resampler for a repeated sin (cos,
or whatever) wave. (this is how a 3d graphics hardware designer looks at
such problems.)

IMHO. YMMV.
--
Ron Nicholson    rhn AT nicholson DOT com    http://www.nicholson.com/rhn/
#include <canonical.disclaimer>      // only my own opinions, etc.
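A minimal C sketch of the table-lookup scheme Ron describes (structure and
names are mine; the post only gives the (256.0 * t/Po) mod 256 index
formula):

    #include <stdio.h>
    #include <math.h>

    #define TABLE_SIZE 256    /* ~5 bits of accuracy, per the estimate above */
    #define FS 44100.0        /* sample rate in Hz */

    static double table[TABLE_SIZE];

    /* LFO value at sample n for frequency fo, using the index formula
       (256.0 * t/Po) mod 256 with t and Po measured in samples. */
    static double lfo(long n, double fo)
    {
        double po = FS / fo;   /* period Po in samples */
        int idx = (int)fmod(TABLE_SIZE * (double)n / po, TABLE_SIZE);
        return table[idx];
    }

    int main(void)
    {
        /* Fill the table with one cycle of a sine wave. */
        for (int i = 0; i < TABLE_SIZE; i++)
            table[i] = sin(2.0 * 3.14159265358979 * i / TABLE_SIZE);

        /* Fo = 2 Hz, the top of the 0.01..2.0 Hz range in the question. */
        for (long n = 0; n < 8; n++)
            printf("%f\n", lfo(n, 2.0));
        return 0;
    }

Doing the index arithmetic in double keeps it accurate even at Fo = 0.01 Hz,
where one period spans 4.41 million samples; a single-precision phase
accumulator can accumulate rounding error over that many steps.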