
Guidance on Low Frequency Oscillator (LFO) implementation

Started by Jaime Andres Aranguren Cardona February 28, 2004
Jon Harris wrote:

> I would highly recommend using extended precision (40-bit) calculation _AND_
> storage for this problem. I made an audio-frequency oscillator (as low as
> 20 Hz at 48kHz sample rate) and it didn't work properly until I used 40-bit
> storage for the feedback elements. (My main problem was "shrinking"
> amplitude.)
By applying the correction factor I posted this morning, the amplitude remains rock solid. Once a quadrant is enough, but applying it each iteration costs no more than testing to determine when.
> I also highly recommend the paper you've found by Clay.
Yes!

Jerry
--
Engineering is the art of making what you want from things you can get.
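[Jerry's correction factor itself isn't reproduced in this excerpt. For readers who want to try the idea, here is a minimal C sketch of a coupled (quadrature) recursive oscillator with one common renormalization, g = (3 - (s^2 + c^2))/2, applied every sample; the structure, names and the particular correction are illustrative assumptions, not necessarily what Jerry posted.]

#include <math.h>

/* Coupled (quadrature) recursive oscillator with amplitude correction.
 * The exact correction Jerry used is not shown in this thread; the
 * factor g = (3 - (s*s + c*c)) / 2 below is one common choice that
 * nudges the state back onto the unit circle.  Names are illustrative. */
typedef struct {
    double s, c;    /* sine and cosine components of the state */
    double k1, k2;  /* sin(w) and cos(w), w = 2*pi*f0/fs        */
} quad_osc;

void quad_osc_init(quad_osc *o, double f0, double fs)
{
    double w = 2.0 * M_PI * f0 / fs;
    o->s = 0.0;            /* start at phase 0 */
    o->c = 1.0;
    o->k1 = sin(w);
    o->k2 = cos(w);
}

double quad_osc_tick(quad_osc *o)
{
    /* Rotate the state vector by w radians. */
    double s = o->s * o->k2 + o->c * o->k1;
    double c = o->c * o->k2 - o->s * o->k1;

    /* First-order renormalization toward unit amplitude; it could also
     * be applied only once per quadrant, as Jerry notes. */
    double g = (3.0 - (s * s + c * c)) * 0.5;
    o->s = s * g;
    o->c = c * g;
    return o->s;
}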
In comp.dsp, Chris Carlen <crcarle@BOGUS.sandia.gov> wrote:

>Jon Harris wrote:
>> "Chris Carlen" <crcarle@BOGUS.sandia.gov> wrote in message
>> news:c1vu4a01b97@enews2.newsguy.com...
>>>I still wonder why not DDS. Is it because the 44100/2^32=0.00001027Hz
>>>frequency resolution of a 32-bit DDS isn't sufficient? I suppose a
>>>floating point calculation would give the *exact* frequency. So a DDS
>>>wouldn't be the right thing for exact frequencies, huh?
>>>
>>>Good day!
>>
>> A DDS is a separate chip. If the hardware is already designed, there may be
>> no DDS available. As Jerry says, "Engineering is the art of making what you
>> want from things you can get."
>
>Thanks for the reply.
>
>I meant a software DDS. I have implemented a DDS in software on an AVR
>microcontroller, and it is not terribly difficult. I would think that
>the DSP could do this, instead of attempting to compute the waveforms.
I've done this too, but I think in software it's usually just called a 'phase accumulator.'
>The only reasons I can think of not to use DDS would be that the desired
>phase resolution requires too lengthy a lookup table.
No, you just shift the upper bits down to get whatever resolution you want. The upper 8 bits give a 256-word lookup table. It just gets stepped through very slowly for low frequencies.
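[A minimal C sketch of the software DDS / phase accumulator Ben describes, assuming a 32-bit accumulator whose top 8 bits index a 256-entry sine table; the sizes and names are illustrative, not from the thread.]

#include <math.h>
#include <stdint.h>

#define LUT_BITS 8
#define LUT_SIZE (1u << LUT_BITS)      /* 256-entry sine table         */

static float    sine_lut[LUT_SIZE];
static uint32_t phase;                 /* 32-bit phase accumulator     */
static uint32_t phase_inc;             /* f0/fs scaled by 2^32         */

void dds_init(double f0, double fs)    /* assumes 0 <= f0 < fs/2       */
{
    for (uint32_t i = 0; i < LUT_SIZE; i++)
        sine_lut[i] = (float)sin(2.0 * M_PI * i / LUT_SIZE);
    phase     = 0;
    phase_inc = (uint32_t)(f0 / fs * 4294967296.0);   /* 2^32 */
}

float dds_tick(void)
{
    /* Only the top LUT_BITS of the accumulator address the table; the
     * remaining 24 bits provide the fine frequency resolution, so low
     * frequencies simply step through the table very slowly. */
    float out = sine_lut[phase >> (32 - LUT_BITS)];
    phase += phase_inc;                /* wraps modulo 2^32 */
    return out;
}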
>This is
>probably the problem in this case. If one wanted a 16-bit signal
>resolution, that would warrant an 18-bit address range for the LUT. Ouch!
There are other ways to get good sine waves, such as using a smaller LUT and interpolating. Just how low does the distortion of this sine wave need to be, anyway? Elsewhere in the thread, it appears the OP got his oscillator working, and it does have programming simplicity going for it, but I still like the phase accumulator approach. It's like a gearbox, everything is in sync, and there are no 'corrections' needed. I just like the deterministic aspect of the DDS/LUT solution.
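[And a sketch of the smaller-LUT-plus-interpolation variant Ben mentions, reusing the sine_lut, phase and phase_inc globals from the sketch above. The bits just below the table index are treated as a fraction and used to interpolate linearly between adjacent entries; again, the bit split is an illustrative assumption.]

/* Linear interpolation between adjacent entries of the table above.
 * The bits below the table index form a fraction in [0, 1). */
float dds_tick_interp(void)
{
    uint32_t idx  = phase >> (32 - LUT_BITS);
    uint32_t next = (idx + 1) & (LUT_SIZE - 1);   /* wrap at end of table */
    uint32_t lo   = phase & ((1u << (32 - LUT_BITS)) - 1);
    float    frac = (float)lo / (float)(1u << (32 - LUT_BITS));
    float    out  = sine_lut[idx] + frac * (sine_lut[next] - sine_lut[idx]);
    phase += phase_inc;
    return out;
}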
>Maybe I've answered my own question.
>
>Good day!
----- http://mindspring.com/~benbradley
"Chris Carlen" <crcarle@BOGUS.sandia.gov> wrote in message
news:c20ihe08i8@enews1.newsguy.com...
> Jon Harris wrote:
> > "Chris Carlen" <crcarle@BOGUS.sandia.gov> wrote in message
> > news:c1vu4a01b97@enews2.newsguy.com...
> >
> > A DDS is a separate chip. If the hardware is already designed, there may be
> > no DDS available. As Jerry says, "Engineering is the art of making what you
> > want from things you can get."
>
> Thanks for the reply.
>
> I meant a software DDS. I have implemented a DDS in software on an AVR
> microcontroller, and it is not terribly difficult. I would think that
> the DSP could do this, instead of attempting to compute the waveforms.
> The only reasons I can think of not to use DDS would be that the desired
> phase resolution requires too lengthy a lookup table. This is
> probably the problem in this case. If one wanted a 16-bit signal
> resolution, that would warrant an 18-bit address range for the LUT. Ouch!
>
> Maybe I've answered my own question.
>
> Good day!
Got it. I think you did answer the question with the LUT-size issue. On the hardware available, there is plenty of horsepower to calculate sine sampled at 44.1 kHz, but not enough memory for a table large enough for high accuracy.
Jerry Avins <jya@ieee.org> wrote in message news:<4043ced8$0$3101$61fed72c@news.rcn.com>...
> Jon Harris wrote:
>
> > I would highly recommend using extended precision (40-bit) calculation _AND_
> > storage for this problem. I made an audio-frequency oscillator (as low as
> > 20 Hz at 48kHz sample rate) and it didn't work properly until I used 40-bit
> > storage for the feedback elements. (My main problem was "shrinking"
> > amplitude.)
Thank you very much to all of you for these very informative and constructive comments. I did the implementation in 32-bit fixed point, plus the correction factor pointed out by Jerry in 32-bit floating point (maybe it should be done with 40-bit floating point, to keep up the 32-bit precision?), and it works fine. I must say I like this approach: it's fast and stable (when done with enough arithmetic precision).

Could you give me some good references on the LUT-interpolation approach? I'd like to play around with that too, and experience the "It's like a gearbox, everything is in sync, and there are no 'corrections' needed" features of it, as Ben Bradley wrote. I mean, I understand the underlying process, but want to understand it more thoroughly.

Regards,
> By applying the correction factor I posted this morning, the amplitude
> remains rock solid. Once a quadrant is enough, but applying it each
> iteration costs no more than testing to determine when.
>
> > I also highly recommend the paper you've found by Clay.
>
> Yes!
>
> Jerry
Jon Harris wrote:
> I would disagree with this. With 40-bit floating point, you actually 33
> bits of mantissa because of the "hidden bit". So it's slightly better than
> 32-bit fixed point.
Jon,

I don't understand how the SHARC extended precision floating-point format with 31 mantissa bits, 8 exponent bits and 1 sign bit ends up with 33 mantissa bits.

The format of a 40bit floating-point number is s[1].e[8].m[31] (. denotes bit concatenation). This is a bit field of size 40.

(This format is described in http://www.analog.com/processors/processors/sharc/technicalLibrary/manuals/pdf/ADSP-21065L/65L_tr_numeric_ref.pdf )

Leaving out the exponent for now, we know that the mantissa is calculated as

\sum_{i=30}^0 m[i] * 2^{31-i}
 = m[30] * 1/2 + m[29] * 1/4 + ... + m[0] * 2^{-31}

This sum can represent 2^31 different values in the interval [0.0, 1.0[. The hidden "bit" just means we add the value 1.0 to the mantissa; it is simply an offset and does not add any resolution. It moves the value of the significand into the range [1.0, 2.0[.

We still only have 31 bits of resolution (ie. 2^31 distinct states of the mantissa). Hidden "bit" or no hidden "bit", it doesn't make a difference to the resolution.

The point: I argue there is a 31bit mantissa, and together with the sign bit we can say we have 32bit resolution. By this I mean that there is a bijection between the integers from 1 to 2^32 and the numbers representable by the mantissa bits and the sign bit.

If we had 33bit resolution, then we should get a bijection between the integers from 1 to 2^33 and the numbers representable by the mantissa bits and sign bit (if you will, together with the hidden "bit"). Obviously, this is not possible, even with the hidden "bit" (because the hidden "bit" is not really a bit, since it takes on only one value, and not two values, as any self-respecting bit should :).

So where do you get the 33 bits of resolution?
> Also, with floating point, you will always get the full
> precision mantissa, even if the value is very small. With fixed point, if
> the signal value is substantially smaller than the maximum value, you lose
> precision.
Yes, definitely a point that is often overlooked when comparing floating- to fixed-point. Regards, Andor
I erroneously wrote:
...
> \sum_{i=30}^0 m[i] * 2^{31-i}
Invert the sign of the exponent of 2 for the correct formula, ie:

\sum_{i=30}^0 m[i] * 2^{i-31}

Otherwise it won't turn out as:
> = m[30] * 1/2 + m[29] * 1/4 + ... + m[0] * 2^{-31}
Regards, Andor
In article 40472035$1@pfaff2.ethz.ch, Andor Bariska at andor@nospam.net
wrote on 03/04/2004 07:25:

> Jon Harris wrote:
>> I would disagree with this. With 40-bit floating point, you actually 33
>> bits of mantissa because of the "hidden bit". So it's slightly better than
>> 32-bit fixed point.
>
> We still only have 31 bits of resolution (ie. 2^31 distinct states of
> the mantissa). Hidden "bit" or no hidden "bit", it doesn't make a
> difference to the resolution.
no, if the hidden one bit were included explicitly and the word size for the mantissa remained the same, you would lose one bit of resolution.

BTW, i always include the sign bit, too, when comparing floating-point to fixed. a 32 bit float with 8 exp bits has 25 bits of resolution compared to a properly normalized fixed point of the same range. so i would bet that the 40 bit has 33 bits of resolution in the signed mantissa.

r b-j
Hmm.  I've never analyzed it (though maybe I should), I was just basing my
statement off what ADI says in the Numeric Formats appendix for their SHARC
DSPs:

"The hidden bit effectively increases the precision of the floating-point
significand to 24 bits from the 23 bits actually stored in the data format.
It also insures that the significand of any number in the IEEE normalized
number format is always greater than or equal to 1 and less than 2."

This is of course referencing the standard 32-bit floating point, but by
extension, the same would apply to the 40-bit format which just adds 8 more
mantissa bits.

BTW, I was counting the sign bit as part of the mantissa, because when we
speak of 24-bit fixed point, if we say it has a 24-bit mantissa, then the
sign bit is also being counted as part of the mantissa when using signed
notation (two's complement).

"Andor Bariska" <andor@nospam.net> wrote in message
news:40472035$1@pfaff2.ethz.ch...
> Jon Harris wrote:
> > I would disagree with this. With 40-bit floating point, you actually 33
> > bits of mantissa because of the "hidden bit". So it's slightly better than
> > 32-bit fixed point.
>
> Jon,
>
> I don't understand how the SHARC extended precision floating-point
> format with 31 mantissa bits, 8 exponent bits and 1 sign bit ends up
> with 33 mantissa bits.
>
> The format of a 40bit floating-point number is s[1].e[8].m[31] (. denotes
> bit concatenation). This is a bit field of size 40.
>
> (This format is described in
> http://www.analog.com/processors/processors/sharc/technicalLibrary/manuals/pdf/ADSP-21065L/65L_tr_numeric_ref.pdf
> )
>
> Leaving out the exponent for now, we know that the mantissa is calculated as
>
> \sum_{i=30}^0 m[i] * 2^{31-i}
>  = m[30] * 1/2 + m[29] * 1/4 + ... + m[0] * 2^{-31}
>
> This sum can represent 2^31 different values in the interval [0.0, 1.0[.
> The hidden "bit" just means we add the value 1.0 to the mantissa; it is
> simply an offset and does not add any resolution. It moves the value of
> the significand into the range [1.0, 2.0[.
>
> We still only have 31 bits of resolution (ie. 2^31 distinct states of
> the mantissa). Hidden "bit" or no hidden "bit", it doesn't make a
> difference to the resolution.
>
> The point: I argue there is a 31bit mantissa, and together with the sign
> bit we can say we have 32bit resolution. By this I mean that there is
> a bijection between the integers from 1 to 2^32 and the numbers
> representable by the mantissa bits and the sign bit.
>
> If we had 33bit resolution, then we should get a bijection between the
> integers from 1 to 2^33 and the numbers representable by the mantissa
> bits and sign bit (if you will, together with the hidden "bit").
> Obviously, this is not possible, even with the hidden "bit" (because the
> hidden "bit" is not really a bit, since it takes on only one value, and
> not two values, as any self-respecting bit should :).
>
> So where do you get the 33 bits of resolution?
>
> > Also, with floating point, you will always get the full
> > precision mantissa, even if the value is very small. With fixed point, if
> > the signal value is substantially smaller than the maximum value, you lose
> > precision.
>
> Yes, definitely a point that is often overlooked when comparing
> floating- to fixed-point.
>
> Regards,
> Andor
robert bristow-johnson wrote:
...
> [A] 32 bit float with 8 exp bits has 25 bits of resolution compared to
> a properly normalized fixed point of the same range.
I think we have to define what we mean by "resolution". It is in fact senseless to say that a 32bit float has the same resolution as a 24 or 25 bit fixed number. 32bit floating-point usually has a much larger resolution for some normalized intervals, simply because it has more bits.

So I would define "equal resolution" in two steps:

i) A floating-point format _matches_ a fixed-point format if there is an arithmetic *) 1-to-1 (injective) map from the set of the numbers representable with the fixed-point format to the set of numbers representable with the floating-point format.

ii) k bit floating-point has _equal (in fact equal or better) resolution_ like n bit fixed-point if it matches all n bit fixed-point formats.

In this sense, 32bit IEEE 754 floating-point has equal (or better) resolution like 24bit fixed-point, but does not have equal resolution like 25bit fixed-point (notice the different usage of "fixed-point" and "fixed-point format").

The reason for this is: As you mention correctly, 32bit IEEE 754 float format matches 25bit fixed-point signed fractionals with binary point at position 1 (eg. -1.001011001 ... with 23 binary places after the binary point). But it does not match the 25bit fixed-point two's complement integer format. Similarly for 40bit extended-precision floating-point and 33bit fixed-point.

I know I defined the term "equal resolution" in a way which makes me win the argument :) - I'm however open to other (sensible) definitions.

Regards,
Andor

*) The term arithmetic means we can't just assign bits (this would mean that only 32bit fixed matches 32bit float), but we have to create the map with arithmetic operations addition and multiplication.
hi Andor.  i didn't really expect this to get this far, but i think we have
a tangible, testable (which means it can get settled one way or another),
not merely semantic, difference about a very small and arcane thing: a
single bit of information.

In article 40487b0a$1@pfaff2.ethz.ch, Andor Bariska at andor@nospam.net
wrote on 03/05/2004 08:05:

> robert bristow-johnson wrote:
> ...
>> [A] 32 bit float with 8 exp bits has 25 bits of resolution compared to
>> a properly normalized fixed point of the same range.
>
> I think we have to define what we mean by "resolution". It is in fact
> senseless to say that a 32bit float has the same resolution as a 24 or
> 25 bit fixed number. 32bit floating-point usually has a much larger
> resolution for some normalized intervals, simply because it has more bits.
>
> So I would define "equal resolution" in two steps:
>
> i) A floating-point format _matches_ a fixed-point format if there is an
> arithmetic *) 1-to-1 (injective) map from the set of the numbers
> representable with the fixed-point format to the set of numbers
> representable with the floating-point format.
i wouldn't call it 1-to-1. my assertion is that *all* 25 bit fixed-point numbers can be represented exactly in a 32-bit IEEE (hidden 1) format, but not the other way around. there are many more numbers that the 32-bit floating format can represent that a 25-bit fixed cannot do without truncation or rounding. ...
> *) The term arithmetic means we can't just assign bits (this would mean
> that only 32bit fixed matches 32bit float), but we have to create the
> map with arithmetic operations addition and multiplication.
i don't think this makes any difference in our argument. i presume we both mean the ability of whatever format to represent some particular set of numbers. ...
> ii) k bit floating-point has _equal (in fact equal or better)
> resolution_ like n bit fixed-point if it matches all n bit fixed-point
> formats.
>
> In this sense, 32bit IEEE 754 floating-point has equal (or better)
> resolution like 24bit fixed-point, but does not have equal resolution
> like 25bit fixed-point (notice the different usage of "fixed-point" and
> "fixed-point format").
sure it does. that "Hidden 1" bit *does* add a bit. you have 8 exponent bits, 1 sign bit, 1 "hidden one" bit, and 23 explicit mantissa bits. since we're counting the sign bit (MSB) as one of the bits of resolution in the fixed-point format, to compare apples to apples, we need to count it for the floating-point, too. 1 + 1 + 23 = 25. every one of those 2^25 fixed point values can be represented exactly in a 32-bit IEEE 754 format (and then some).
> The reason for this is:
> As you mention correctly, 32bit IEEE 754 float format matches 25bit
> fixed-point signed fractionals with binary point at position 1 (eg.
> -1.001011001 ... with 23 binary places after the binary point). But it
> does not match the 25bit fixed-point two's complement integer format.
sure it does. it's just a different range of exponent, and i'm sure the 8-bit exponent can cover it. i'll tell you what: please offer a specific counter example of a 25-bit 2's complement integer that you believe does not get hit SPOT ON with a 32-bit IEEE 754 float. you give me that number (hex or binary would save me work), i will check it for range (it must be between -(2^24) and +(2^24) - 1, inclusive), and then give you the exact bit pattern for the IEEE 754 that exactly hits it.
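[The challenge can also be settled by brute force. The sketch below is not from the thread; it assumes the host's float is IEEE 754 single precision and round-trips every 25-bit two's complement integer through a float, reporting any value that changes. Since every integer of magnitude up to 2^24 fits in the 24-bit significand, no counterexample should appear; widening the loop to 33 bits against double (or the SHARC 40-bit format) tests the extended-precision claim the same way.]

#include <stdint.h>
#include <stdio.h>

/* Round-trip every 25-bit two's complement integer through a
 * single-precision float and report any value that changes.
 * Assumes 'float' is IEEE 754 single precision on the host. */
int main(void)
{
    long failures = 0;
    for (int32_t x = -(1 << 24); x <= (1 << 24) - 1; x++) {
        float f = (float)x;                /* 25-bit integer -> float */
        if ((int32_t)f != x) {             /* exact round trip?       */
            printf("counterexample: %ld\n", (long)x);
            failures++;
        }
    }
    printf("%ld counterexamples found\n", failures);
    return 0;
}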
> Similarly for 40bit extended-precision floating-point and 33bit fixed-point.
and the SHArC 40-bit format *can* exactly represent *every* number representable in a 33-bit two's complement fixed-point format, whether it is a signed integer (Q33.0) or a signed fractional (Q1.32).
> I know I defined the term "equal resolution" in a way which makes me win
> the argument :) - I'm however open to other (sensible) definitions.
sorry, Andor. i don't think you win the argument [friendly assertion :-) ], even given your definition. now if you wanna get wild with the implied scaling for a fixed-point number, then maybe you can come up with a definition where you are correct. but for everything between integer fixed-point all the way to unity normalized fractional fixed-point, that "hidden 1 bit" really does add a bit, making an N-bit float ("N" not counting the "hidden 1 bit") at least as good as any (N-E+1)-bit fixed representation (where E is the number of exponent bits in the float).
> Regards,
and also to you. r b-j