# Sine Lookup Table with Linear Interpolation

Started by rickman, March 16, 2013
```glen herrmannsfeldt wrote:
> In comp.dsp rickman<gnuarm@gmail.com>  wrote:
>> I've been studying an approach to implementing a lookup table (LUT) to
>> implement a sine function.  The two msbs of the phase define the
>> sufficient with an 18 bit output.  Another 11 bits of phase will give me
>> sufficient resolution to interpolate the sin() to 18 bits.
>
>> If you assume a straight line between the two endpoints the midpoint of
>> each interpolated segment will have an error of
>> ((sin(high)+sin(low))/2)-sin(mid)
>
> See:
>
> http://ia601506.us.archive.org/8/items/bitsavers_nationaldaMOSIntegratedCircuits_20716690/1972_National_MOS_Integrated_Circuits.pdf
>
> The description starts on page 273.
>
> -- glen
Problem with that..could see only front and back cover.
Different source perhaps?

```
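The interpolation-error figure under discussion is easy to check numerically. A minimal sketch, assuming a 256-entry quarter-wave table (the table size is not stated explicitly above): compare the straight-line midpoint against the true sine on the last segment before 90 degrees, where the curvature is worst.

```python
import math

# Midpoint error of linear interpolation on the last table segment before
# 90 degrees, where sin() is most curved.  The 256-entry quarter-wave
# table size is an assumption for illustration.
N = 256                            # table entries per quadrant (assumed)
lo = (N - 1) / N * math.pi / 2     # phase of the last table entry
hi = math.pi / 2                   # 90 degrees
mid = (lo + hi) / 2

linear_mid = (math.sin(lo) + math.sin(hi)) / 2   # straight-line midpoint
error = linear_mid - math.sin(mid)               # interpolation error

print(f"worst-segment midpoint error: {abs(error) * 1e6:.2f} ppm")
```

This lands at roughly 4.7 ppm, consistent with the 4-5 ppm figure quoted in the post.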
```In comp.dsp Robert Baer <robertbaer@localnet.com> wrote:

(snip, I wrote)
>> See:

>> http://ia601506.us.archive.org/8/items/bitsavers_nationaldaMOSIntegratedCircuits_20716690/1972_National_MOS_Integrated_Circuits.pdf

>> The description starts on page 273.

I just tried it, copy and paste from the above link, and it worked.

Also, it is page 273 in the document, page 282 in the PDF.

(snip)

>   Problem with that..could see only front and back cover.
>   Different source perhaps?

```
```rickman wrote:
> On 3/16/2013 8:55 PM, Robert Baer wrote:
>> rickman wrote:
>>> I've been studying an approach to implementing a lookup table (LUT) to
>>> implement a sine function. The two msbs of the phase define the
>>> sufficient with an 18 bit output. Another 11 bits of phase will give me
>>> sufficient resolution to interpolate the sin() to 18 bits.
>>>
>>> If you assume a straight line between the two endpoints the midpoint of
>>> each interpolated segment will have an error of
>>> ((sin(high)+sin(low))/2)-sin(mid)
>>>
>>> Without considering rounding, this reaches a maximum at the last segment
>>> before the 90 degree point. I calculate about 4-5 ppm which is about the
>>> same as the quantization of an 18 bit number.
>>>
>>> There are two issues I have not come across regarding these LUTs. One is
>>> adding a bias to the index before converting to the sin() value for the
>>> table. Some would say the index 0 represents the phase 0 and the index
>>> 2^n represents 90 degrees. But this is 2^n+1 points which makes a LUT
>>> inefficient, especially in hardware. If a bias of half the lsb is added
>>> to the index before converting to a sin() value the value 0 to 2^n-1
>>> becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized table
>>> properly. I assume this is commonly done and I just can't find a mention
>>> of it.
>>>
>>> The other issue is how to calculate the values for the table to give the
>>> best advantage to the linear interpolation. Rather than using the exact
>>> match to the end points stored in the table, an adjustment could be done
>>> to minimize the deviation over each interpolated segment. Without this,
>>> the errors are always in the same direction. With an adjustment the
>>> errors become bipolar and so will reduce the magnitude by half (approx).
>>> Is this commonly done? It will require a bit of computation to get these
>>> values, but even a rough approximation should improve the max error by a
>>> factor of two to around 2-3 ppm.
>>>
>>> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
>>> advantage of this resolution! lol
>>>
>>> One thing I learned while doing this is that near 0 degrees the sin()
>>> function is linear (we all knew that, right?) but near 90 degrees, the
>>> sin() function is essentially quadratic. Who would have thunk it?
>>>
>> Sounds like you are making excellent improvements on the standard ho-hum
>> algorithms; the net result will be superior to anything done out there
>> (commercially).
>> With the proper offsets, one needs only 22.5 degrees of lookup ("bounce"
>> off each multiple of 45 degrees).
>
> I can't say I follow this. How do I get a 22.5 degree sine table to
> expand to 90 degrees?
>
> As to my improvements, I can see where most implementations don't bother
> pushing the limits of the hardware, but I can't imagine no one has done
> this before. I'm sure there are all sorts of apps, especially in older
> hardware where resources were constrained, where they pushed for every
> drop of precision they could get. What about space apps? My
> understanding is they *always* push their designs to be the best they
> can be.
>
Once upon a time, about 30 years ago, i fiddled with routines to
multiply a millions-of-digits number by another millions-of-digits number.
Naturally, i used the FFT and convolution (and bit-reordering) to do
this.
As the number of digits grew, i needed to use a larger N (in 2^N
samples), so different algorithms were needed in different ranges to
maximize speed.
So i had to try a goodly number of published, standard, accepted,
used routines.
ALL algorithms for N larger than 5 worked but i was able to speed
them up anywhere from 10% to 30% by using "tricks" which were the
equivalent of "folding" around 22.5 degrees.
"Fold" in half: sin(-x)=-sin(x) or 180 degrees.
Half again on the imaginary axis for what you are doing for 90
degrees: sin(90-x) = sin(90)*cos(x)-cos(90)*sin(x) = -cos(x) exact.
Half again: sin(45)=cos(45)=2^(3/2)~~.707 carry out as far as you
want; sin/cos to 45 degrees. sin(x=0..45) direct, sin(x=45..90)
=cos(45-x) accuracy depending on value of sin/cos(45).
That is prolly as far as you want to go with a FPGA, because (as you
indicated) you do not want to attempt multiplies in the FPGA.
That last step of "folding" might give better results or better FPGA
/ LUT calcs.

Calculating actual values for the LUT for maximum accuracy: the trig
series x-(x^3/3!)+(x^5/5!) etc should give better results from 0..90
than the approx on page 51.
Do not hesitate to try alternate schemes: sin(2x) = 2*sin(x)*cos(x)
and sin(0.5*x) = +/- sqrt((1-cos(x))/2), where that sign depends on the
quadrant of x/2.
```
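The folding identities above can be sketched in software. This is an illustrative reduction of any angle to 0..45 degrees followed by short Taylor series; the series lengths are arbitrary choices, not tuned to any particular bit width.

```python
import math

# Reduce any angle to 0..45 degrees with the folds described above, then
# evaluate a short Taylor series on the reduced range.

def taylor_sin(x):
    # x - x^3/3! + x^5/5! - x^7/7!, adequate on |x| <= pi/4
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

def taylor_cos(x):
    # 1 - x^2/2! + x^4/4! - x^6/6!
    return 1 - x**2 / 2 + x**4 / 24 - x**6 / 720

def folded_sin(deg):
    deg = deg % 360
    if deg > 180:                    # fold: sin(x) = -sin(x - 180)
        return -folded_sin(deg - 180)
    if deg > 90:                     # fold: sin(x) = sin(180 - x)
        deg = 180 - deg
    rad = math.radians(deg)
    if deg <= 45:
        return taylor_sin(rad)       # direct on 0..45
    return taylor_cos(math.pi / 2 - rad)   # sin(x) = cos(90 - x) on 45..90
```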
```On 2013-03-18, Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
> On Sun, 17 Mar 2013 18:35:37 -0400, rickman <gnuarm@gmail.com> wrote:

> If you are implementing in an FPGA with no multipliers, then CORDIC is
> the first thing you should be looking at.

?? CORDIC requires even more multiplies

But you can multiply two short lookup tables to simulate a longer lookup
table using the identity:

sin(a+b) = sin(a)*cos(b) + cos(a)*sin(b)

so you do a coarse lookup table with sin(a) (and you can get cos(a) from
it too if a is some integer factor of 90 degrees)

and do two fine lookup tables for sin(b) and cos(b)

eg: you can do 16 bits of angle at 16 bit resolution in 512 bytes of
table and ~80 lines of assembler code
(but the computation needs two multiplies)

the first two bits determine the quadrant.

the next 7 bits are on a coarse lookup table
128 entries x 16 bits  is 256 bytes

next bit defines the sign of b (if set b is negative,add one to a)

and the final 6 are the rest of b

from a 64 entry table you can get sin(|b|)
and another 64 for cos(|b|)

treat b=0 as a special case - have entries for 1..64 in the table
correct sin(|b|) for negative b

a few branches and bit-shifts, two multiplies, one add and you have
your sine.

--
⚂⚃ 100% natural
```
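The scheme above can be sketched in floating point (an adaptation: the post packs 16-bit integer tables and uses two integer multiplies; the 7-bit coarse / 6-bit fine split within one quadrant is kept).

```python
import math

# Coarse/fine factoring of one quadrant: 13 bits of phase, split as 7
# coarse bits (a) and 6 fine bits (b), combined with
#   sin(a+b) = sin(a)*cos(b) + cos(a)*sin(b)
# cos(a) is read from the coarse sin() table backwards, so only three
# tables are stored.
QUAD = math.pi / 2
SIN_A = [math.sin(QUAD * i / 128) for i in range(129)]    # coarse sin(a)
SIN_B = [math.sin(QUAD * j / 8192) for j in range(64)]    # fine sin(b)
COS_B = [math.cos(QUAD * j / 8192) for j in range(64)]    # fine cos(b)

def quadrant_sin(idx13):
    """sin() of a 13-bit phase index covering 0..90 degrees."""
    a, b = idx13 >> 6, idx13 & 0x3F
    # cos(a) = sin(90 deg - a) = SIN_A[128 - a]
    return SIN_A[a] * COS_B[b] + SIN_A[128 - a] * SIN_B[b]

# The identity is exact, so the only error left is float/table rounding.
err = max(abs(quadrant_sin(i) - math.sin(QUAD * i / 8192))
          for i in range(8192))
print(err)
```

The remaining two phase bits select the quadrant by the usual sign and reversal rules, as in the post.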
```On 18 Mar 2013 07:35:28 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

>On 2013-03-18, Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>> On Sun, 17 Mar 2013 18:35:37 -0400, rickman <gnuarm@gmail.com> wrote:
>
>> If you are implementing in an FPGA with no multipliers, then CORDIC is
>> the first thing you should be looking at.
>
>?? CORDIC requires even more multiplies

CORDIC requires shift and add. That's why it was used in early
scientific calculators- very little hardware was required.

```
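A minimal rotation-mode CORDIC sketch, supporting the point above: each iteration needs only shifts, adds, and one small arctan table entry (the `* 2.0 ** -i` below stands in for a hardware right-shift, and the gain factor K is a precomputed constant, not a per-iteration multiply). The 24-iteration count is an arbitrary choice.

```python
import math

ITER = 24                                    # iterations (illustrative)
ANGLES = [math.atan(2.0 ** -i) for i in range(ITER)]  # small arctan ROM
K = 1.0
for i in range(ITER):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))    # cumulative CORDIC gain

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC; theta in [-pi/2, pi/2] radians."""
    x, y, z = K, 0.0, theta
    for i in range(ITER):
        d = 1.0 if z >= 0.0 else -1.0        # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i  # shift-and-add
        z -= d * ANGLES[i]
    return y, x                              # (sin, cos)

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)   # ~0.5, ~0.866
```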
```On 03/17/2013 08:27 PM, John Larkin wrote:
> On Sun, 17 Mar 2013 19:31:38 -0400, Phil Hobbs
> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>
>> On 3/17/2013 6:44 PM, rickman wrote:
>>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>>
>>>> It would be interesting to distort the table entries to minimize THD. The
>>>> problem there is how to measure very low THD to close the loop.
>>>
>>> It could also be a difficult problem to solve without some sort of
>>> exhaustive search.  Each point you fudge affects two curve segments and
>>> each curve segment is affected by two points.  So there is some degree
>>> of interaction.
>>>
>>> Anyone know if there is a method to solve such a problem easily?  I am
>>> sure that my current approach gets each segment end point to within ±1
>>> lsb and I suppose once you measure THD in some way it would not be an
>>> excessive amount of work to tune the 256 end points in a few passes
>>> through.  This sounds like a tree search with pruning.
>>>
>>> I'm assuming there would be no practical closed form solution.  But
>>> couldn't the THD be calculated in a simulation rather than having to be
>>> measured on the bench?
>>>
>>
>> The simple method would be to compute a zillion-point FFT.  Since the
>> function is really truly periodic, the FFT gives you the right answer
>> for the continuous-time Fourier transform, provided you have enough
>> points that you can see all the relevant harmonics.
>>
>
> All you need now is a digitizer or spectrum analyzer that has PPM distortion!
>
> We've bumped into this problem, trying to characterize some of our ARBs. The
> best thing is to use a passive lowpass or notch filter (with very good Ls and
> Cs!) to remove the fundamental and analyze what's left.
>
>

I have an HP 339A you can borrow. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
```
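The THD-by-simulation idea can be sketched with the standard library alone: synthesize exactly one period of the table-plus-interpolation sine, take the fundamental from a single DFT bin, and get the total harmonic power by Parseval's theorem instead of a full FFT. The 64-entry table size is an assumption, chosen small enough that the distortion is easy to see.

```python
import math

N = 64                      # quarter-wave table entries (assumed)
M = 4096                    # samples in exactly one period
TABLE = [math.sin(math.pi / 2 * i / N) for i in range(N + 1)]

def lut_sin(frac):
    """Full-period sine from the quarter-wave table, linear interpolation."""
    q, t = divmod(frac * 4.0, 1.0)   # quadrant 0..3 and position within it
    q = int(q) % 4
    if q % 2:
        t = 1.0 - t                  # odd quadrants run backwards
    pos = t * N
    i = min(int(pos), N - 1)
    v = TABLE[i] + (pos - i) * (TABLE[i + 1] - TABLE[i])
    return -v if q >= 2 else v

wave = [lut_sin(k / M) for k in range(M)]

# Fundamental amplitude from a single DFT bin
re = sum(w * math.cos(2 * math.pi * k / M) for k, w in enumerate(wave))
im = sum(w * math.sin(2 * math.pi * k / M) for k, w in enumerate(wave))
fund = 2 * math.hypot(re, im) / M

# Parseval: total AC power minus the fundamental leaves the harmonics
total_sq = 2 * sum(w * w for w in wave) / M
thd = math.sqrt(max(total_sq - fund * fund, 0.0)) / fund
print(f"simulated THD with a {N}-entry table: {thd:.2e}")
```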
```On Mon, 18 Mar 2013 10:12:45 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

>On 03/17/2013 08:27 PM, John Larkin wrote:
>> On Sun, 17 Mar 2013 19:31:38 -0400, Phil Hobbs
>> <pcdhSpamMeSenseless@electrooptical.net> wrote:
>>
>>> On 3/17/2013 6:44 PM, rickman wrote:
>>>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>>>
>>>>> It would be interesting to distort the table entries to minimize THD. The
>>>>> problem there is how to measure very low THD to close the loop.
>>>>
>>>> It could also be a difficult problem to solve without some sort of
>>>> exhaustive search.  Each point you fudge affects two curve segments and
>>>> each curve segment is affected by two points.  So there is some degree
>>>> of interaction.
>>>>
>>>> Anyone know if there is a method to solve such a problem easily?  I am
>>>> sure that my current approach gets each segment end point to within ±1
>>>> lsb and I suppose once you measure THD in some way it would not be an
>>>> excessive amount of work to tune the 256 end points in a few passes
>>>> through.  This sounds like a tree search with pruning.
>>>>
>>>> I'm assuming there would be no practical closed form solution.  But
>>>> couldn't the THD be calculated in a simulation rather than having to be
>>>> measured on the bench?
>>>>
>>>
>>> The simple method would be to compute a zillion-point FFT.  Since the
>>> function is really truly periodic, the FFT gives you the right answer
>>> for the continuous-time Fourier transform, provided you have enough
>>> points that you can see all the relevant harmonics.
>>>
>>
>> All you need now is a digitizer or spectrum analyzer that has PPM distortion!
>>
>> We've bumped into this problem, trying to characterize some of our ARBs. The
>> best thing is to use a passive lowpass or notch filter (with very good Ls and
>> Cs!) to remove the fundamental and analyze what's left.
>>
>>
>
>I have an HP 339A you can borrow. ;)
>
>Cheers
>
>Phil Hobbs

That's for audio. Who cares about distortion for audio? And it gets nowhere near
PPM turf.

--

John Larkin                  Highland Technology Inc
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME  analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
```
```>I've been studying an approach to implementing a lookup table (LUT) to
>implement a sine function.  The two msbs of the phase define the
>sufficient with an 18 bit output.  Another 11 bits of phase will give me
>sufficient resolution to interpolate the sin() to 18 bits.
>

This thesis should help you out a lot, especially chapters 6 and 7. Easy to
read, with good pointers and analysis of what you can easily do.

https://aaltodoc.aalto.fi/bitstream/handle/123456789/2308/isbn9512253186.pdf?sequence=1

I think noise-shaping (error feedback) of both phase/amplitude gives you
very good bang-for-the-buck.

Dave

```
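A minimal sketch of amplitude error feedback, one form of the noise shaping suggested above: the rounding error of each output sample is carried into the next, which leaves the total error power unchanged but pushes it toward high frequencies where the reconstruction filter (or the CODEC) removes it. The 6-bit output width and the moving-average "in-band" measure are illustrative choices.

```python
import math

SCALE = 2 ** 5 - 1                   # 6-bit signed output, illustrative

def quantize(samples, shaped):
    """Round samples to integer codes, optionally with error feedback."""
    out, fb = [], 0.0
    for x in samples:
        v = x * SCALE + (fb if shaped else 0.0)
        q = math.floor(v + 0.5)      # round to nearest code
        if shaped:
            fb = v - q               # residue fed into the next sample
        out.append(q)
    return out

period = [math.sin(2 * math.pi * k / 1024) for k in range(1024)]

def lowpassed_error_rms(codes, window=32):
    """RMS quantization error after a crude moving-average low-pass."""
    err = [c - x * SCALE for c, x in zip(codes, period)]
    means = [sum(err[i:i + window]) / window for i in range(0, 1024, window)]
    return math.sqrt(sum(m * m for m in means) / len(means))

plain = lowpassed_error_rms(quantize(period, shaped=False))
shaped = lowpassed_error_rms(quantize(period, shaped=True))
print(f"in-band error RMS: plain {plain:.3f} LSB, shaped {shaped:.3f} LSB")
```

With feedback, the errors inside any 32-sample window telescope, so the window mean is bounded by 1/32 LSB; without it, the error dwells near the sine peaks and leaks into the passband.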
```
rickman schrieb:

> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function.  The two msbs of the phase define the
> sufficient with an 18 bit output.  Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.
>
> If you assume a straight line between the two endpoints the midpoint of
> each interpolated segment will have an error of
> ((sin(high)+sin(low))/2)-sin(mid)

Hello,

you may use the well known formula:
sin(a+b) = sin(a)cos(b) + cos(a)sin(b)

It looks like you will need four tables, one coarse sin(), one coarse
cos(), one fine sin() and one fine cos() table, but you only need the
coarse sin() for one quadrant and the fine sin() and cos() tables.
The coarse sin() table for one quadrant will also deliver the values
for cos().

Bye

```