
Sine Lookup Table with Linear Interpolation

Started by rickman March 16, 2013
In comp.dsp robert bristow-johnson <rbj@audioimagination.com> wrote:

(snip, I wrote)
>>>> It requires a 12V power supply, though the outputs are still TTL
>>>> compatible, using both -12V and +5V power supplies.
>>> my goodness we're old, glen.
>> I'm glad it's just you guys. ;)
> what was that EPROM? a 2716 or 2708 or something like that?
As I remember it, the 2708 requires +12, +5, and -5V. Also, the programmer has to put pulses of just the right current/voltage and timing into the data pins when writing. The Intel 2716 is 5V only, and for writing the data pins are at TTL levels. Only one pin needs to have an appropriate timed pulse. But the TI 2716 is more like the 2708. -- glen
On 3/19/13 1:35 AM, dbd wrote:
> On Monday, March 18, 2013 9:44:51 PM UTC-7, robert bristow-johnson wrote:
>
>    sin(w + deltaw) = sin(w) * cos(deltaw) + cos(w) * sin(deltaw)
>    cos(w + deltaw) = cos(w) * cos(deltaw) - sin(w) * sin(deltaw)
>
> by tables where:
>
>    #1 one quadrant sin(w) or cos(w) table of 2^M entries
>    #2 sin(deltaw) table of 2^N entries
>    #3 cos(deltaw) table of 2^N entries
>
>> i still don't get it.  you only need a table or tables for the delta's,
>> #2 and #3.  i don't get what table #1 is about.
>
> #1 is the simple sine or cosine lookup table used to generate sin(w) or
> cos(w) for 2^M values of w in pi/2 (and extended to 2*pi by symmetries)
>
>> you start out with sin(w)=0 and cos(w)=1 and pick the sin(deltaw) and
>> cos(deltaw) outa 2 tables (or, if you're a teeny bit clever, a single
>> table), and then have at it with the recursion equations above.
>
> There is no recursion in the equations.
>
> Tables #2 and #3 represent values of sin(deltaw) and cos(deltaw) for 2^N
> values of deltaw in the interval between the first two values of w that
> address table #1, say the interval from 0.0 to 0.5*pi*((2^N)-1)/(2^(M+N)).
i get it now.  you're doing something with a table and a sorta CORDIC
interpolation.  i was doing a quadrature oscillator:

   x[n+1] = a*x[n] - b*y[n]
   y[n+1] = b*x[n] + a*y[n]

where a = cos(2*pi*f0/Fs), b = sin(2*pi*f0/Fs)

and you kick it off with x[0]=1 and y[0]=0.

another Emily Litella moment.

--
r b-j    rbj@audioimagination.com
"Imagination is more important than knowledge."
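r b-j's quadrature-oscillator recursion can be sketched in a few lines of Python (f0, Fs, and the sample count below are arbitrary illustrative values, not from the thread):

```python
import math

def quadrature_osc(f0, fs, n):
    """Recursive quadrature oscillator: rotate (x, y) by 2*pi*f0/fs per sample."""
    a = math.cos(2 * math.pi * f0 / fs)
    b = math.sin(2 * math.pi * f0 / fs)
    x, y = 1.0, 0.0                      # kick off with x[0]=1, y[0]=0
    out = []
    for _ in range(n):
        out.append((x, y))
        # x[n+1] = a*x[n] - b*y[n],  y[n+1] = b*x[n] + a*y[n]
        x, y = a * x - b * y, b * x + a * y
    return out

# x[n] tracks cos(2*pi*f0*n/fs) and y[n] tracks sin(2*pi*f0*n/fs)
samples = quadrature_osc(1000.0, 48000.0, 48)
```

In infinite precision x[n] and y[n] stay exactly on the unit circle; in fixed point the amplitude drifts unless it is periodically renormalized, which is why the FFT-twiddle variant later in the thread resets to (1,0) each time around.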
On 3/19/13 3:25 AM, robert bristow-johnson wrote:
> On 3/19/13 1:35 AM, dbd wrote:
>> On Monday, March 18, 2013 9:44:51 PM UTC-7, robert bristow-johnson wrote:
>>
>>    sin(w + deltaw) = sin(w) * cos(deltaw) + cos(w) * sin(deltaw)
>>    cos(w + deltaw) = cos(w) * cos(deltaw) - sin(w) * sin(deltaw)
>>
>> by tables where:
>>
>>    #1 one quadrant sin(w) or cos(w) table of 2^M entries
>>    #2 sin(deltaw) table of 2^N entries
>>    #3 cos(deltaw) table of 2^N entries
>>
>>> i still don't get it.  you only need a table or tables for the delta's,
>>> #2 and #3.  i don't get what table #1 is about.
>>
>> #1 is the simple sine or cosine lookup table used to generate sin(w)
>> or cos(w) for 2^M values of w in pi/2 (and extended to 2*pi by
>> symmetries)
>>
>>> you start out with sin(w)=0 and cos(w)=1 and pick the sin(deltaw) and
>>> cos(deltaw) outa 2 tables (or, if you're a teeny bit clever, a single
>>> table), and then have at it with the recursion equations above.
>>
>> There is no recursion in the equations.
>>
>> Tables #2 and #3 represent values of sin(deltaw) and cos(deltaw) for
>> 2^N values of deltaw in the interval between the first two values of w
>> that address table #1, say the interval from 0.0 to
>> 0.5*pi*((2^N)-1)/(2^(M+N)).
>
> i get it now.  you're doing something with a table and a sorta CORDIC
> interpolation.  i was doing a quadrature oscillator:
>
>    x[n+1] = a*x[n] - b*y[n]
>    y[n+1] = b*x[n] + a*y[n]
>
> where a = cos(2*pi*f0/Fs), b = sin(2*pi*f0/Fs)
>
> and you kick it off with x[0]=1 and y[0]=0.
>
> another Emily Litella moment.
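dbd's coarse/fine scheme quoted above can be sketched in Python. Table sizes M and N below are illustrative, and the tables hold doubles where a hardware version would hold fixed-point words:

```python
import math

M, N = 8, 8                      # coarse and fine index widths (illustrative)
QUAD = math.pi / 2

# #1: coarse one-quadrant sin and cos tables, 2^M entries each
sin_c = [math.sin(QUAD * k / (1 << M)) for k in range(1 << M)]
cos_c = [math.cos(QUAD * k / (1 << M)) for k in range(1 << M)]
# #2, #3: fine sin/cos tables covering one coarse step, 2^N entries each
sin_f = [math.sin(QUAD * j / (1 << (M + N))) for j in range(1 << N)]
cos_f = [math.cos(QUAD * j / (1 << (M + N))) for j in range(1 << N)]

def sine(phase):
    """First-quadrant sine for an (M+N)-bit phase via the angle-sum identity:
    sin(w + deltaw) = sin(w)*cos(deltaw) + cos(w)*sin(deltaw)."""
    k, j = phase >> N, phase & ((1 << N) - 1)   # coarse w / fine deltaw split
    return sin_c[k] * cos_f[j] + cos_c[k] * sin_f[j]
```

Because the identity is exact, the only error is table quantization and the rounding of the two products, which is the point of the scheme: 2^(M+N) output angles from tables of total size 4 * 2^max(M,N)-ish instead of one 2^(M+N)-entry table.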
BTW, Dale, i think i once asked you if you were related to this pretty well-to-do farming family from my hometown in North Dakota (and i think you replied you ain't). i just realized that the current governor *is* that farmer (or maybe his son, closer to my generation). way too many governors come from my boyhood home town and also from my present home town. -- r b-j rbj@audioimagination.com "Imagination is more important than knowledge."
On 3/18/2013 9:44 PM, Robert Baer wrote:
> One of many reasons i hate Adobe Reader 7.0: along near the top is a
> number of icons, one is the so-called hand tool, and it defaults as
> "selected" - BUT you see the "hand" only while moving the cursor, and
> when you stop, that changes to the "select tool" so that it is IMPOSSIBLE
> to move the page around.
> When moving the hand, the mouse will highlight text if you push the left
> button; CANNOT grab!
> That always works in 4.0.
> Newer versions are worse.
I have always felt that in terms of the UI, Acrobat is one of the *worst*
commercial software packages I have ever seen, and every new version just
gets worse.  On top of that, they seem to produce much buggier code than
most.  I used it at one job as part of our "ISO" process for document
sign-off.  This part of the tool was so bad that it actually impacted our
productivity, and not in a positive way.  Yet big companies continue to
buy the tool because it is the one with the most professional image.
It's not like companies actually test this stuff...

--

Rick
Jamie wrote:
> Robert Baer wrote:
>
>> glen herrmannsfeldt wrote:
>>
>>> In comp.dsp Robert Baer<robertbaer@localnet.com> wrote:
>>>
>>> (snip, I wrote)
>>>
>>>>> See:
>>>>> http://ia601506.us.archive.org/8/items/bitsavers_nationaldaMOSIntegratedCircuits_20716690/1972_National_MOS_Integrated_Circuits.pdf
>>>>>
>>>>> The description starts on page 273.
>>>
>>> I just tried it, copy and paste from the above link, and it worked.
>>>
>>> You might be sure to use Adobe reader if that matters.
>>>
>>> Also, it is page 273 in the document, page 282 in the PDF.
>>>
>>> (snip)
>>>
>>>> Problem with that..could see only front and back cover.
>>>> Different source perhaps?
>>
>> The basic approach given there is very good; use of puny 4 or 5 digit
>> tables is a major weakness: sqrt(2)/2 as 0.7071 is junk; CRC in the
>> squares, cubes and roots section gives sqrt(2) as 1.414214 (6 digits)
>> and sqrt(50) as 7.071068 (also 6 digits).
>> And if one needs more, use the Taylor series (for reasonably small
>> angles) as it converges fast (Nth odd power and Nth odd factorial).
>> --or, use the FPU in the computer to get SIN and COS at the same time!
>>
>> Reduction from 360 degrees helps:
>>    "Split" 180 degrees:  sin(-x) = -sin(x)
>>    "Split" 90 degrees:   sin(180-x) = sin(x)
>>    "Split" 45 degrees:   sin(90-x) = cos(x)
>>    "Split" 22.5 degrees: sin(45-x) = sqrt(0.5)*(cos(x)-sin(x))
>> Small angles require fewer iterations for a given accuracy.
>
> But, but, but, but,.... we work in radians!
>
> Except maybe a carpenter!
>
> Jamie
So?? Convert already - or are you one of those heathens or worse, infidels?
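The degree-based reductions Robert Baer lists fold any angle into [0, 45] degrees before the series (or table) is ever consulted. A minimal Python sketch of that folding, using exactly the symmetries quoted above:

```python
import math

def reduce_sin(deg):
    """Fold an angle in degrees into [0, 45] using
    sin(-x) = -sin(x), sin(180-x) = sin(x), sin(90-x) = cos(x).
    Returns (func, sign, angle) with sin(deg) == sign * func(radians(angle))."""
    deg = deg % 360.0
    sign = 1.0
    if deg > 180.0:                  # sin(x) = -sin(x - 180)
        deg, sign = deg - 180.0, -1.0
    if deg > 90.0:                   # sin(180 - x) = sin(x)
        deg = 180.0 - deg
    if deg > 45.0:                   # sin(90 - x) = cos(x)
        return math.cos, sign, 90.0 - deg
    return math.sin, sign, deg

def sine_deg(deg):
    f, s, a = reduce_sin(deg)
    return s * f(math.radians(a))
```

With the argument confined to [0, 45] degrees (about 0.785 radians), a short Taylor or minimax series converges quickly, which is the point of the reduction.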
On 16/03/2013 22:30, rickman wrote:
> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function.  The two msbs of the phase define the
> quadrant.  I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output.  Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.
>
> If you assume a straight line between the two endpoints, the midpoint of
> each interpolated segment will have an error of
>
>    ((sin(high)+sin(low))/2) - sin(mid)
>
> Without considering rounding, this reaches a maximum at the last segment
> before the 90 degree point.  I calculate about 4-5 ppm, which is about
> the same as the quantization of an 18 bit number.
>
> There are two issues I have not come across regarding these LUTs.  One
> is adding a bias to the index before converting to the sin() value for
> the table.  Some would say the index 0 represents the phase 0 and the
> index 2^n represents 90 degrees.  But this is 2^n+1 points, which makes
> a LUT inefficient, especially in hardware.  If a bias of half the lsb is
> added to the index before converting to a sin() value, the values 0 to
> 2^n-1 become symmetrical with 2^n to 2^(n+1)-1, fitting a binary sized
> table properly.  I assume this is commonly done and I just can't find a
> mention of it.
>
> The other issue is how to calculate the values for the table to give the
> best advantage to the linear interpolation.  Rather than using the exact
> match to the end points stored in the table, an adjustment could be done
> to minimize the deviation over each interpolated segment.  Without this,
> the errors are always in the same direction.  With an adjustment the
> errors become bipolar and so the magnitude will be reduced by about
> half.  Is this commonly done?  It will require a bit of computation to
> get these values, but even a rough approximation should improve the max
> error by a factor of two, to around 2-3 ppm.
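rickman's scheme (8-bit quadrant table, half-LSB index bias, linear interpolation on the lower 11 phase bits) can be sketched in floating point; the fixed-point scaling of a real 18-bit implementation is omitted here:

```python
import math

A, F = 8, 11                       # table-index bits, interpolation-fraction bits
QUAD = math.pi / 2
STEP = 1 << F

# Half-LSB bias: entry k holds sin at index k + 0.5, so the 2^A entries sit
# symmetrically inside the quadrant and no 2^A + 1st point is needed.
lut = [math.sin(QUAD * (k + 0.5) / (1 << A)) for k in range(1 << A)]

def sin_quadrant(phase):
    """First-quadrant sine for an (A+F)-bit phase, linearly interpolated
    between biased table entries.  The virtual entries just past each end
    come from the symmetries sin(-x) = -sin(x) and sin(90+x) = sin(90-x)."""
    t = phase - (STEP >> 1)            # measure from the first biased entry
    k, frac = t >> F, (t & (STEP - 1)) / STEP
    lo = -lut[0] if k < 0 else lut[k]
    hi = lut[(2 << A) - 1 - (k + 1)] if k + 1 >= (1 << A) else lut[k + 1]
    return lo + (hi - lo) * frac
```

Sweeping the quadrant, the worst error of this sketch lands in the last segments before 90 degrees at roughly 4.7e-6, matching the 4-5 ppm figure in the post.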
The other question is why bother with linear interpolation when you can have quadratic accuracy with only a little more effort?
> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
> advantage of this resolution!  lol
>
> One thing I learned while doing this is that near 0 degrees the sin()
> function is linear (we all knew that, right?) but near 90 degrees, the
> sin() function is essentially quadratic.  Who would have thunk it?
It is obvious from the shape of the sine curve!

Classic sin(A+B) = sin(A)cos(B) + cos(A)sin(B).

If A is the actual nearest lookup and B << 1 then you can approximate

   cos(B) = 1 - B^2/2
   sin(B) = B

For known fixed B these can also be tabulated (exactly or approx).  Then

   sin(A+B) = sin(A) + cos(A).B - sin(A).B^2/2

sin(A) is obtained by indexing forwards in your first quadrant LUT,
cos(A) by indexing backwards from the end of the table.  More lookups
needed but about the same number of multiplies.  Sorting out phase is
left as an exercise for the reader.

The other method, popular in FFT codes, is to store tabulated the best
approximation to the discrete Nth roots of unity (1,0)^(1/N).  In
practice you store slightly tweaked values of

   s  = sin(B)
   s2 = 2*(sin(B/2))^2

where B = 2pi/N, adjusted so that to within acceptable tolerance the
iteration comes back to (1,0) after N steps.  Then reset to exactly (1,0)
each time around.

   sn' = sn - s2*sn + s*cn
   cn' = cn - s2*cn - s*sn

(subject to my memory, typos and sign convention errors)

Probably only worth it if you need both sin and cos.

--
Regards,
Martin Brown
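Martin's quadratic refinement can be sketched in Python (floating point; the table size is illustrative, and one extra entry is kept so cos(A) really can be read by indexing backwards from the end):

```python
import math

M = 8                                  # coarse index bits (illustrative)
QUAD = math.pi / 2
# 2^M + 1 entries: tab[k] = sin of the k-th quadrant step, tab[-1] = sin(90) = 1
tab = [math.sin(QUAD * k / (1 << M)) for k in range((1 << M) + 1)]

def sine_ab(k, b):
    """sin(A+B) ~= sin(A) + cos(A)*B - sin(A)*B^2/2 for small B (radians),
    where A is the k-th table angle and cos(A) = sin(90 deg - A) is the
    same table indexed backwards."""
    sin_a = tab[k]
    cos_a = tab[(1 << M) - k]          # backwards index gives cos(A)
    return sin_a + cos_a * b - sin_a * b * b / 2
```

Truncating cos(B) at B^2/2 and sin(B) at B leaves a residual of order B^3/6, so with B no larger than half a table step (~0.003 rad here) the error is a few parts in 10^9, comfortably below 18-bit quantization.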
On Sat, 16 Mar 2013 18:30:51 -0400, rickman <gnuarm@gmail.com> wrote:

> I've been studying an approach to implementing a lookup table (LUT) to
> implement a sine function.  The two msbs of the phase define the
> quadrant.  I have decided that an 8 bit address for a single quadrant is
> sufficient with an 18 bit output.  Another 11 bits of phase will give me
> sufficient resolution to interpolate the sin() to 18 bits.
>
> If you assume a straight line between the two endpoints, the midpoint of
> each interpolated segment will have an error of
>
>    ((sin(high)+sin(low))/2) - sin(mid)
>
> Without considering rounding, this reaches a maximum at the last segment
> before the 90 degree point.  I calculate about 4-5 ppm, which is about
> the same as the quantization of an 18 bit number.
>
> There are two issues I have not come across regarding these LUTs.  One
> is adding a bias to the index before converting to the sin() value for
> the table.  Some would say the index 0 represents the phase 0 and the
> index 2^n represents 90 degrees.  But this is 2^n+1 points, which makes
> a LUT inefficient, especially in hardware.  If a bias of half the lsb is
> added to the index before converting to a sin() value, the values 0 to
> 2^n-1 become symmetrical with 2^n to 2^(n+1)-1, fitting a binary sized
> table properly.  I assume this is commonly done and I just can't find a
> mention of it.
>
> The other issue is how to calculate the values for the table to give the
> best advantage to the linear interpolation.  Rather than using the exact
> match to the end points stored in the table, an adjustment could be done
> to minimize the deviation over each interpolated segment.  Without this,
> the errors are always in the same direction.  With an adjustment the
> errors become bipolar and so the magnitude will be reduced by about
> half.  Is this commonly done?  It will require a bit of computation to
> get these values, but even a rough approximation should improve the max
> error by a factor of two, to around 2-3 ppm.
>
> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
> advantage of this resolution!  lol
>
> One thing I learned while doing this is that near 0 degrees the sin()
> function is linear (we all knew that, right?) but near 90 degrees, the
> sin() function is essentially quadratic.  Who would have thunk it?
Without reading any other replies, i would suggest either the CORDIC
algorithm or Simpson's rule interpolation.  From the rule for integration,
to find any intermediate value you evaluate the curve value within the
segment covered by the rule.  4 intervals on the step size you have will
be limited by the 18 bits of input.  Do 20 bits of input and round to 18,
or not.  About 1 ppm.

?-)
On 3/20/2013 7:05 PM, josephkk wrote:
> Without reading any other replies, i would suggest either the CORDIC
> algorithm or Simpson's rule interpolation.  From the rule for
> integration, to find any intermediate value you evaluate the curve value
> within the segment covered by the rule.  4 intervals on the step size
> you have will be limited by the 18 bits of input.  Do 20 bits of input
> and round to 18, or not.  About 1 ppm.
I can't say I follow what you mean about Simpson's rule interpolation.
Interpolation is used to derive Simpson's Rule, but I don't see a way to
use an integral to obtain an interpolation.  Isn't Simpson's Rule about
approximating integrals?  What am I missing?

--

Rick
On Mon, 18 Mar 2013 19:53:17 -0400, rickman <gnuarm@gmail.com> wrote:

>>> No, not quite right.  There is a LUT with points spaced 90/255 degrees
>>> apart starting at just above 0 degrees.  The values between points in
>>> the table are interpolated with a maximum deviation near the center of
>>> the interpolation.  Next to 90 degrees the interpolation is using the
>>> maximum interpolation factor, which will result in a value as close as
>>> you can get to the correct value if the end points are used to
>>> construct the interpolation line.  90 degrees itself won't actually be
>>> represented, but rather points on either side, 90 +/- delta, where
>>> delta is 360 degrees / 2^(n+1), with n being the number of bits in the
>>> input to the sin function.
>>
>> i read this over a couple times and cannot grok it quite.  need to be
>> more explicit (mathematically, or with C or pseudocode) with what your
>> operations are.
>
> It is simple, the LUT gives values for each point in the table, precise
> to the resolution of the output.  The sin function between these points
> varies from close to linear to close to quadratic.  If you just
> interpolate between the "exact" points, you have an error that is
> *always* negative anywhere except the end points of the segments.  So it
> is better to bias the end points to center the linear approximation
> somewhere in the middle of the real curve.  Of course, this has to be
> tempered by the resolution of your LUT output.
>
> If you really want to see my work it is all in a spreadsheet at the
> moment.  I can send it to you or I can copy some of the formulas here.
> My "optimization" is not nearly optimal, but I believe it is an
> improvement over doing nothing and is not so hard.  I may drop it when I
> go to VHDL, as it is iterative and may be a PITA to code in a function.
>
>>>> if you assume an approximate quadratic behavior over that short
>>>> segment, you can compute the straight line where the error in the
>>>> middle is equal in magnitude (and opposite in sign) to the error at
>>>> the end points.  that's a closed form solution, i think.
>>>
>>> Yes, it is a little tricky because at this point we are working with
>>> integer math (or technically fixed point I suppose).
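The closed form r b-j alludes to is simple: if the segment is approximately quadratic, lowering the chord by half the midpoint sag makes the midpoint error equal in magnitude and opposite in sign to the endpoint errors. A Python sketch (floating point; a real table would then round these adjusted endpoints to 18 bits):

```python
import math

def equal_ripple_endpoints(f, lo, hi):
    """Adjusted endpoint values for a straight line over [lo, hi] whose
    midpoint error is equal and opposite to its endpoint errors
    (exact when f is quadratic on the segment)."""
    mid = (lo + hi) / 2
    sag = (f(lo) + f(hi)) / 2 - f(mid)   # chord minus curve at the midpoint
    # Shift the whole line by half the sag to split the error evenly.
    return f(lo) - sag / 2, f(hi) - sag / 2

# e.g. the last segment of a 256-entry quadrant table, just below 90 degrees:
lo, hi = (math.pi / 2) * 255 / 256, math.pi / 2
y_lo, y_hi = equal_ripple_endpoints(math.sin, lo, hi)
```

For a truly quadratic f the three error extrema then have identical magnitude, which is what halves the worst-case error compared with the exact-endpoint table.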
Hmm.  I would like to take a crack at Simpson's rule interpolation for
this, to measure the error budget.  I may not be timely, but this is
interesting.

joseph underscore barrett at sbcglobal daht cahm

?-)
On Sun, 17 Mar 2013 17:56:21 -0400, rickman <gnuarm@gmail.com> wrote:

> On 3/16/2013 8:02 PM, langwadt@fonz.dk wrote:
>> On Mar 16, 11:30 pm, rickman<gnu...@gmail.com> wrote:
>>> I've been studying an approach to implementing a lookup table (LUT) to
>>> implement a sine function.  The two msbs of the phase define the
>>> quadrant.  I have decided that an 8 bit address for a single quadrant
>>> is sufficient with an 18 bit output.  Another 11 bits of phase will
>>> give me sufficient resolution to interpolate the sin() to 18 bits.
>>>
>>> If you assume a straight line between the two endpoints, the midpoint
>>> of each interpolated segment will have an error of
>>>
>>>    ((sin(high)+sin(low))/2) - sin(mid)
>>>
>>> Without considering rounding, this reaches a maximum at the last
>>> segment before the 90 degree point.  I calculate about 4-5 ppm, which
>>> is about the same as the quantization of an 18 bit number.
>>>
>>> There are two issues I have not come across regarding these LUTs.  One
>>> is adding a bias to the index before converting to the sin() value for
>>> the table.  Some would say the index 0 represents the phase 0 and the
>>> index 2^n represents 90 degrees.  But this is 2^n+1 points, which
>>> makes a LUT inefficient, especially in hardware.  If a bias of half
>>> the lsb is added to the index before converting to a sin() value, the
>>> values 0 to 2^n-1 become symmetrical with 2^n to 2^(n+1)-1, fitting a
>>> binary sized table properly.  I assume this is commonly done and I
>>> just can't find a mention of it.
>>>
>>> The other issue is how to calculate the values for the table to give
>>> the best advantage to the linear interpolation.  Rather than using the
>>> exact match to the end points stored in the table, an adjustment could
>>> be done to minimize the deviation over each interpolated segment.
>>> Without this, the errors are always in the same direction.  With an
>>> adjustment the errors become bipolar and so the magnitude will be
>>> reduced by about half.  Is this commonly done?  It will require a bit
>>> of computation to get these values, but even a rough approximation
>>> should improve the max error by a factor of two, to around 2-3 ppm.
>>>
>>> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
>>> advantage of this resolution!  lol
>>>
>>> One thing I learned while doing this is that near 0 degrees the sin()
>>> function is linear (we all knew that, right?) but near 90 degrees, the
>>> sin() function is essentially quadratic.  Who would have thunk it?
>>>
>>> --
>>>
>>> Rick
>>
>> why not skip the lut and just do a full sine approximation
>>
>> something like this, page 53-54
>>
>> http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/ADSP-2181/Using%20the%20ADSP-2100%20Family%20Volume%201/Using%20the%20ADSP-2100%20Family%20Volume%201.pdf
>
> I'm not sure this would be easier.  The LUT and interpolation require a
> reasonable amount of logic.  This requires raising X to the 5th power
> and five constant multiplies.  My FPGA doesn't have multipliers and this
> may be too much for the poor little chip.  I suppose I could do some of
> this with a LUT and linear interpolation... lol
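For comparison, the full polynomial route langwadt points to is an odd polynomial through x^5. A generic sketch with plain Taylor coefficients (the handbook linked above uses its own optimized coefficient set, not these):

```python
def sin_poly(x):
    """Odd 5th-order Taylor approximation of sin(x), x in radians:
    x - x^3/6 + x^5/120, evaluated Horner-style so only x^2 is formed.
    Range-reduce to roughly [-pi/4, pi/4] first for full-circle use."""
    x2 = x * x
    return x * (1 - x2 / 6 * (1 - x2 / 20))
```

The truncation error is about x^7/5040, i.e. a few parts in 10^5 at the edge of [-pi/4, pi/4], which is why a minimax fit (or more terms) is needed to reach 18-bit accuracy; either way it trades the LUT for several general multiplies, exactly the resource the FPGA here lacks.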
No multipliers(?); ouch.  My Simpson's rule method may fit, but it will
need a trapezoidal strip out of a full Wallace tree multiplier.
Preferably three (a speed issue, which may not be that big for you).

How do you do the linear interpolation without a multiplier?

?-)