DSPRelated.com
Forums

Sine Lookup Table with Linear Interpolation

Started by rickman March 16, 2013
In comp.dsp rickman <gnuarm@gmail.com> wrote:
> I've been studying an approach to implementing a lookup table (LUT) to implement a sine function.
(snip)
> If you assume a straight line between the two endpoints, the midpoint of each interpolated segment will have an error of ((sin(high)+sin(low))/2) - sin(mid)
Seems what they instead do is implement sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less significant bits of theta. Also, cos(L) tends to be almost 1, so they just say it is 1. (snip)
> There are two issues I have not come across regarding these LUTs. One is adding a bias to the index before converting to the sin() value for the table. Some would say the index 0 represents the phase 0 and the index 2^n represents 90 degrees. But this is 2^n+1 points, which makes a LUT inefficient, especially in hardware. If a bias of half the lsb is added to the index before converting to a sin() value, the values 0 to 2^n-1 become symmetrical with 2^n to 2^(n+1)-1, fitting a binary sized table properly. I assume this is commonly done and I just can't find a mention of it.
Well, you can fix it in various ways, but you definitely want 2^n.
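In code, the half-LSB bias looks like the minimal C sketch below, using rickman's sizes (8 address bits per quadrant, 18-bit output). The quadrant-folding details are an assumed reading of the scheme, not something spelled out in the thread.

#include <math.h>
#include <stdint.h>

#define N 8                     /* address bits per quadrant */
#define TSIZE (1 << N)          /* 256 entries, no extra 90-degree point */
#define AMPL ((1 << 17) - 1)    /* 18-bit signed full scale */

static int32_t sine_lut[TSIZE];

/* Entry i holds sin at phase (i + 0.5)/2^N * 90 degrees. The half-LSB
 * bias means the table never stores exactly 0 or 90 degrees, so one
 * quadrant mirrors cleanly into the other three. */
static void build_lut(void)
{
    for (int i = 0; i < TSIZE; i++) {
        double phase = (i + 0.5) * (M_PI / 2.0) / TSIZE;
        sine_lut[i] = (int32_t)lround(AMPL * sin(phase));
    }
}

/* Fold a full-circle index (2 quadrant bits + N address bits) into the
 * single-quadrant table using that symmetry. */
static int32_t sine_from_lut(uint32_t idx)
{
    uint32_t quad = (idx >> N) & 3;
    uint32_t addr = idx & (TSIZE - 1);
    if (quad & 1)                   /* quadrants 1 and 3 run backward */
        addr = TSIZE - 1 - addr;
    int32_t v = sine_lut[addr];
    return (quad & 2) ? -v : v;     /* quadrants 2 and 3 are negative */
}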
> The other issue is how to calculate the values for the table to give the best advantage to the linear interpolation. Rather than storing the exact endpoint values in the table, an adjustment could be made to minimize the deviation over each interpolated segment. Without this, the errors are always in the same direction. With an adjustment the errors become bipolar, which reduces the magnitude by about half. Is this commonly done? It will require a bit of computation to get these values, but even a rough approximation should improve the max error by a factor of two, to around 2-3 ppm.
Seems like that is one of the suggestions, but not done in the ROMs they were selling. Then the interpolation has to add or subtract, which is slightly (in TTL) harder. The interpolated sine was done in 1970 with 128x8 ROMs. With larger ROMs, like usual today, you shouldn't need it unless you want really high resolution.
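A rough C cut at the endpoint adjustment rickman proposes, ignoring the half-LSB bias for clarity. Since the chord of the concave sin() always sags below the curve, raising each stored endpoint by about half the local midpoint sag re-centers the error band from roughly (-sag, 0) to roughly (-sag/2, +sag/2). This is a sketch of the idea, not a true minimax fit.

#include <math.h>

#define TSIZE 256

static void build_adjusted_lut(double t[TSIZE])
{
    double step = (M_PI / 2.0) / TSIZE;

    for (int i = 0; i < TSIZE; i++) {
        /* midpoint sag of the two segments adjoining endpoint i */
        double sag_prev = (i > 0)
            ? sin((i - 0.5) * step) - 0.5 * (sin((i - 1) * step) + sin(i * step))
            : 0.0;
        double sag_next =
            sin((i + 0.5) * step) - 0.5 * (sin(i * step) + sin((i + 1) * step));

        /* raise the endpoint by half the average adjoining sag */
        t[i] = sin(i * step) + (sag_prev + sag_next) / 4.0;
    }
}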
> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take advantage of this resolution! lol
> One thing I learned while doing this is that near 0 degrees the sin() function is linear (we all knew that, right?) but near 90 degrees, the sin() function is essentially quadratic. Who would have thunk it?
-- glen
On 3/16/2013 8:55 PM, Robert Baer wrote:
> rickman wrote:
>> I've been studying an approach to implementing a lookup table (LUT) to implement a sine function. The two msbs of the phase define the quadrant. I have decided that an 8 bit address for a single quadrant is sufficient with an 18 bit output. Another 11 bits of phase will give me sufficient resolution to interpolate the sin() to 18 bits. (snip)
>
> Sounds like you are making excellent improvements on the standard ho-hum algorithms; the net result will be superior to anything done out there (commercially).
> With the proper offsets, one needs only 22.5 degrees of lookup ("bounce" off each multiple of 45 degrees).
I can't say I follow this. How do I get a 22.5 degree sine table to expand to 90 degrees? As to my improvements, I can see where most implementations don't bother pushing the limits of the hardware, but I can't imagine no one has done this before. I'm sure there are all sorts of apps, especially in older hardware where resources were constrained, where they pushed for every drop of precision they could get. What about space apps? My understanding is they *always* push their designs to be the best they can be. -- Rick
On 3/17/2013 12:35 PM, Nobody wrote:
> On Sat, 16 Mar 2013 18:30:51 -0400, rickman wrote:
>> There are two issues I have not come across regarding these LUTs. One is adding a bias to the index before converting to the sin() value for the table. Some would say the index 0 represents the phase 0 and the index 2^n represents 90 degrees. But this is 2^n+1 points which makes a LUT inefficient, especially in hardware.
>
> An n-bit table has 2^n+1 entries for 2^n ranges. Range i has endpoints of table[i] and table[i+1]. The final range has i=(1<<n)-1, so the last entry in the table is table[1<<n], not table[(1<<n)-1].
>
>> One thing I learned while doing this is that near 0 degrees the sin() function is linear (we all knew that, right?) but near 90 degrees, the sin() function is essentially quadratic. Who would have thunk it?
>
> sin((pi/2)+x) = sin((pi/2)-x) = cos(x), and the Maclaurin series for cos(x) is:
>
> cos(x) = 1 - (x^2)/2! + (x^4)/4! - ...
Yeah, a friend of mine knows these polynomials inside and out. I never learned that stuff so well. At the time it didn't seem to have a lot of use, and later there were always better ways of getting the answer. From looking at the above I see the x^2 term will dominate the higher order terms near 0, but of course the lowest order term, 1, will be the truly dominant one... lol In other words, cos(x) near 0 is just the constant 1 to a first order approximation. -- Rick
On 3/17/2013 6:02 PM, glen herrmannsfeldt wrote:
> In comp.dsp rickman<gnuarm@gmail.com> wrote:
>> I've been studying an approach to implementing a lookup table (LUT) to implement a sine function.
>
> (snip)
>
>> If you assume a straight line between the two endpoints, the midpoint of each interpolated segment will have an error of ((sin(high)+sin(low))/2) - sin(mid)
>
> Seems what they instead do is implement
>
> sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less significant bits of theta. Also, cos(L) tends to be almost 1, so they just say it is 1.
Interesting. This tickles a few grey cells. Two values based solely on the MS portion of x and a value based solely on the LS portion. Two tables, three lookups, a multiply and an add. That could work. The value for sin(M) would need to be full precision, but I assume the values of sin(L) could have a smaller range because sin(L) will always be a small value...
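A minimal double-precision sketch of the factorization glen describes, with the cos(L)-is-1 shortcut. The table sizes and the trick of reading cos(M) out of the sine table backward are assumptions for illustration.

#include <math.h>
#include <stdint.h>

#define MBITS 8   /* coarse (MS) phase bits, one quadrant */
#define LBITS 8   /* fine (LS) phase bits */

static double sin_m[1 << MBITS];  /* sin of the coarse angle, full range */
static double sin_l[1 << LBITS];  /* sin of the fine angle, always small */

static void build_tables(void)
{
    double quad = M_PI / 2.0;
    for (int m = 0; m < (1 << MBITS); m++)
        sin_m[m] = sin(quad * m / (1 << MBITS));
    for (int l = 0; l < (1 << LBITS); l++)
        sin_l[l] = sin(quad * l / (double)(1 << (MBITS + LBITS)));
}

/* sin(M+L) = sin(M)cos(L) + cos(M)sin(L); with cos(L) taken as 1 this is
 * sin(M) + cos(M)sin(L): two tables, three lookups, one multiply, one
 * add. cos(M) = sin(90deg - M), i.e. the sine table read backward. */
static double sine_coarse_fine(uint32_t m, uint32_t l)
{
    double cos_m = (m == 0) ? 1.0 : sin_m[(1 << MBITS) - m];
    return sin_m[m] + cos_m * sin_l[l];
}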
>> There are two issues I have not come across regarding these LUTs. One is adding a bias to the index before converting to the sin() value for the table. (snip) I assume this is commonly done and I just can't find a mention of it.
>
> Well, you can fix it in various ways, but you definitely want 2^n.
I think one of the other posters was saying to add an entry for 90 degrees. I don't like that. It could be done, but it complicates the process of using a single table for 0-90 degrees.
>> The other issue is how to calculate the values for the table to give the best advantage to the linear interpolation. (snip) It will require a bit of computation to get these values, but even a rough approximation should improve the max error by a factor of two, to around 2-3 ppm.
>
> Seems like that is one of the suggestions, but not done in the ROMs they were selling. Then the interpolation has to add or subtract, which is slightly (in TTL) harder.
>
> The interpolated sine was done in 1970 with 128x8 ROMs. With larger ROMs, like usual today, you shouldn't need it unless you want really high resolution.
I'm not using an actual ROM chip. This is block ram in a rather small FPGA with only six blocks. I need two channels and may want to use some of the blocks for other functions.
Thanks for the advice to everyone. -- Rick
On Sun, 17 Mar 2013 18:02:27 -0400, rickman <gnuarm@gmail.com> wrote:

> On 3/16/2013 8:55 PM, Robert Baer wrote:
>> rickman wrote:
>>> I've been studying an approach to implementing a lookup table (LUT) to implement a sine function. (snip)
>> Sounds like you are making excellent improvements on the standard ho-hum algorithms; the net result will be superior to anything done out there (commercially).
>> With the proper offsets, one needs only 22.5 degrees of lookup ("bounce" off each multiple of 45 degrees).
>
> I can't say I follow this. How do I get a 22.5 degree sine table to expand to 90 degrees?
>
> As to my improvements, I can see where most implementations don't bother pushing the limits of the hardware, but I can't imagine no one has done this before. I'm sure there are all sorts of apps, especially in older hardware where resources were constrained, where they pushed for every drop of precision they could get. What about space apps? My understanding is they *always* push their designs to be the best they can be.
We have a few products that do DDS with 4K point, 16-bit waveform tables (full 360 degrees if we do a sine wave) and optional linear interpolation at every DAC clock, usually 128 MHz. At lower frequencies, when straight DDS would normally output the same table point many times, interpolation gives us a linear ramp of points, one every 8 ns. It helps frequency domain specs a little. We didn't try fudging the table to reduce the max midpoint error... too much thinking.

It would be interesting to distort the table entries to minimize THD. The problem there is how to measure very low THD to close the loop.

-- John Larkin
Highland Technology Inc
www.highlandtechnology.com
jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
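The interpolated-DDS step John describes, in miniature C. The 12/20 split of a 32-bit phase accumulator matches his 4K table; the bit widths and wrap handling are assumptions.

#include <math.h>
#include <stdint.h>

#define TABLE_BITS 12            /* 4K-point waveform table */
#define FRAC_BITS  20            /* remaining phase-accumulator bits */

static int16_t wave[1 << TABLE_BITS];   /* 16-bit table, full 360 degrees */

static void build_wave(void)
{
    for (int i = 0; i < (1 << TABLE_BITS); i++)
        wave[i] = (int16_t)lround(32767.0 * sin(2.0 * M_PI * i / (1 << TABLE_BITS)));
}

/* One output sample: top accumulator bits address the table, low bits
 * blend between adjacent entries, so at low frequencies the output ramps
 * through intermediate values instead of holding one table point. */
static int16_t dds_step(uint32_t *phase, uint32_t tuning_word)
{
    uint32_t p = *phase;
    *phase = p + tuning_word;            /* wraps naturally mod 2^32 */

    uint32_t idx  = p >> FRAC_BITS;
    uint32_t frac = p & ((1u << FRAC_BITS) - 1);
    int32_t a = wave[idx];
    int32_t b = wave[(idx + 1) & ((1 << TABLE_BITS) - 1)];  /* wrap at end */

    /* a + frac * (b - a), in integer arithmetic */
    return (int16_t)(a + (int32_t)(((int64_t)(b - a) * frac) >> FRAC_BITS));
}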
On Mar 17, 10:56 pm, rickman <gnu...@gmail.com> wrote:
> On 3/16/2013 8:02 PM, langw...@fonz.dk wrote:
>> On Mar 16, 11:30 pm, rickman<gnu...@gmail.com> wrote:
>>> I've been studying an approach to implementing a lookup table (LUT) to implement a sine function. (snip)
>>
>> why not skip the lut and just do a full sine approximation
>>
>> something like this, page 53-54
>>
>> http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/AD...
>
> I'm not sure this would be easier. The LUT and interpolation require a reasonable amount of logic. This requires raising X to the 5th power and five constant multiplies. My FPGA doesn't have multipliers and this may be too much for the poor little chip. I suppose I could do some of this with a LUT and linear interpolation... lol
I see, I assumed you had an MCU (or FPGA) with multipliers. Considered CORDIC? -Lasse
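For reference, the shape of the polynomial approach being suggested: an odd 5th-order polynomial in Horner form. The Taylor coefficients below are illustration only; the linked app note would supply optimized (minimax) coefficients with smaller error. With plain Taylor terms, the worst-case error is bounded by the first omitted term, about 4.7e-3 at x = pi/2.

#include <math.h>

/* sin(x) on [-pi/2, pi/2] via an odd polynomial through x^5:
 * one multiply for x^2, two Horner steps, one multiply by x. */
static double sin_poly5(double x)
{
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0)));
}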
On 3/17/2013 6:23 PM, langwadt@fonz.dk wrote:
> On Mar 17, 10:56 pm, rickman<gnu...@gmail.com> wrote:
>> On 3/16/2013 8:02 PM, langw...@fonz.dk wrote:
>>> why not skip the lut and just do a full sine approximation
>>>
>>> something like this, page 53-54
>>>
>>> http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/AD...
>>
>> I'm not sure this would be easier. The LUT and interpolation require a reasonable amount of logic. This requires raising X to the 5th power and five constant multiplies. My FPGA doesn't have multipliers and this may be too much for the poor little chip. (snip)
>
> I see, I assumed you had an MCU (or FPGA) with multipliers. Considered CORDIC?
I should learn that. I know it has been used to good effect in FPGAs. Right now I am happy with the LUT, but I would like to learn more about CORDIC. Any useful references? -- Rick
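The usual starting points are Volder's 1959 paper and Ray Andraka's FPGA-oriented survey, "A survey of CORDIC algorithms for FPGA based computers." The rotation-mode kernel itself is small; here is a C sketch where the Q16 scaling and 16 iterations are arbitrary choices made just to keep it runnable.

#include <math.h>
#include <stdint.h>

#define ITER 16
#define SCALE (1 << 16)          /* Q16 fixed point */

static int32_t atan_tab[ITER];   /* atan(2^-i) in Q16 radians */
static int32_t inv_gain;         /* 1/K in Q16, K = prod sqrt(1 + 2^-2i) */

static void cordic_init(void)
{
    double k = 1.0;
    for (int i = 0; i < ITER; i++) {
        atan_tab[i] = (int32_t)lround(atan(pow(2.0, -i)) * SCALE);
        k *= sqrt(1.0 + pow(2.0, -2 * i));
    }
    inv_gain = (int32_t)lround(SCALE / k);
}

/* Rotation mode: rotate (1/K, 0) by angle z (Q16 radians, |z| < ~1.74),
 * leaving cos(z) in *c and sin(z) in *s. Shift-and-add only -- no
 * multipliers, which is the whole attraction for a small FPGA. */
static void cordic_sincos(int32_t z, int32_t *c, int32_t *s)
{
    int32_t x = inv_gain, y = 0;
    for (int i = 0; i < ITER; i++) {
        int32_t xs = x >> i, ys = y >> i;   /* assumes arithmetic shift */
        if (z >= 0) { x -= ys; y += xs; z -= atan_tab[i]; }
        else        { x += ys; y -= xs; z += atan_tab[i]; }
    }
    *c = x;
    *s = y;
}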
On 3/17/2013 6:20 PM, John Larkin wrote:
> It would be interesting to distort the table entries to minimize THD. The problem there is how to measure very low THD to close the loop.
It could also be a difficult problem to solve without some sort of exhaustive search. Each point you fudge affects two curve segments, and each curve segment is affected by two points, so there is some degree of interaction.

Anyone know if there is a method to solve such a problem easily? I am sure that my current approach gets each segment end point to within ±1 lsb, and I suppose once you measure THD in some way it would not be an excessive amount of work to tune the 256 end points in a few passes. This sounds like a tree search with pruning.

I'm assuming there would be no practical closed form solution. But couldn't the THD be calculated in a simulation rather than having to be measured on the bench? -- Rick
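Closing the loop in simulation might look like this C sketch: sample one full cycle of the table-plus-interpolation output, project onto each harmonic with a direct DFT, and take the harmonic-to-fundamental power ratio. The sine_interp() function is hypothetical, standing in for whatever LUT/interpolation model is under test.

#include <math.h>

/* Hypothetical: returns the interpolated LUT output for phase in [0,1). */
extern double sine_interp(double phase);

/* THD as a ratio (not dB); nsamp should comfortably exceed 2*nharm. */
static double simulated_thd(int nsamp, int nharm)
{
    double fund = 0.0, harm_pow = 0.0;
    for (int k = 1; k <= nharm; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < nsamp; n++) {
            double ph = (double)n / nsamp;
            double v = sine_interp(ph);
            re += v * cos(2.0 * M_PI * k * ph);
            im += v * sin(2.0 * M_PI * k * ph);
        }
        double mag2 = re * re + im * im;   /* power at harmonic k */
        if (k == 1) fund = mag2;
        else        harm_pow += mag2;
    }
    return sqrt(harm_pow / fund);
}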
On Sun, 17 Mar 2013 17:49:23 -0400, rickman <gnuarm@gmail.com> wrote:

> On 3/16/2013 7:13 PM, robert bristow-johnson wrote:
>> On 3/16/13 6:30 PM, rickman wrote:
>>> (snip)
>>> One thing I learned while doing this is that near 0 degrees the sin() function is linear (we all knew that, right?) but near 90 degrees, the sin() function is essentially quadratic. Who would have thunk it?
>>
>> Newton? Leibniz? Gauss?
>>
>> sin(t + pi/2) = cos(t)
>
> How does that imply a quadratic curve at 90 degrees? At least I think like the greats!
You already know why. Just not wearing your hat right now. The first two terms of the Taylor series at 0 for cos(x) are 1 - x^2/2; for sin(x) they are x - x^3/6. One is quadratic, one is linear. Jon
On Sun, 17 Mar 2013 18:44:07 -0400, rickman <gnuarm@gmail.com> wrote:

> On 3/17/2013 6:20 PM, John Larkin wrote:
>> It would be interesting to distort the table entries to minimize THD. The problem there is how to measure very low THD to close the loop.
>
> It could also be a difficult problem to solve without some sort of exhaustive search. (snip)
>
> Anyone know if there is a method to solve such a problem easily? I am sure that my current approach gets each segment end point to within ±1 lsb, and I suppose once you measure THD in some way it would not be an excessive amount of work to tune the 256 end points in a few passes. This sounds like a tree search with pruning.
If I could measure the THD accurately, I'd just blast in a corrective harmonic adder to the table for each of the harmonics. That could be scientific or just iterative. For the 2nd harmonic, for example, add a 2nd harmonic sine component into all the table points, many of which wouldn't even change by an LSB, to null out the observed distortion. Gain and phase, of course.
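A sketch of that corrective adder in C, assuming a full-cycle table and a measured (or simulated) amplitude and phase for the offending harmonic. The amplitude and phase arguments would come from the distortion measurement; as John notes, many entries won't move by even an LSB.

#include <math.h>
#include <stdint.h>

#define TSIZE 4096

/* Subtract an amp*sin(k*t + phase) component, in LSB units, from every
 * table point to null the observed k-th harmonic. */
static void add_harmonic_correction(int16_t table[TSIZE], int k,
                                    double amp, double phase)
{
    for (int i = 0; i < TSIZE; i++) {
        double t = 2.0 * M_PI * i / TSIZE;
        table[i] = (int16_t)lround(table[i] - amp * sin(k * t + phase));
    }
}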
> I'm assuming there would be no practical closed form solution. But couldn't the THD be calculated in a simulation rather than having to be measured on the bench?
Our experience is that the output amplifiers, after the DAC and lowpass filter, are the nasty guys for distortion, at least in the 10s of MHz. Lots of commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table correction would unfortunately be amplitude dependent, so it gets messy fast.

-- John Larkin
Highland Technology Inc