DSPRelated.com
Forums

Sine Lookup Table with Linear Interpolation

Started by rickman March 16, 2013
On 3/17/2013 6:57 PM, John Larkin wrote:
> On Sun, 17 Mar 2013 18:44:07 -0400, rickman<gnuarm@gmail.com> wrote:
>
>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>
>>> It would be interesting to distort the table entries to minimize THD. The
>>> problem there is how to measure very low THD to close the loop.
>>
>> It could also be a difficult problem to solve without some sort of
>> exhaustive search. Each point you fudge affects two curve segments and
>> each curve segment is affected by two points. So there is some degree
>> of interaction.
>>
>> Anyone know if there is a method to solve such a problem easily? I am
>> sure that my current approach gets each segment end point to within ±1
>> lsb and I suppose once you measure THD in some way it would not be an
>> excessive amount of work to tune the 256 end points in a few passes
>> through. This sounds like a tree search with pruning.
>
> If I could measure the THD accurately, I'd just blast in a corrective harmonic
> adder to the table for each of the harmonics. That could be scientific or just
> iterative. For the 2nd harmonic, for example, add a 2nd harmonic sine component
> into all the table points, many of which wouldn't even change by an LSB, to null
> out the observed distortion. Gain and phase, of course.
I don't think you are clear on this. The table and linear interp are as good as they can get with the exception of one or possibly two lsbs to minimize the error in the linear interp. There is no way to correct for this by adding a second harmonic, etc. Remember, this is not a pure table lookup, it is a two step approach.
>> I'm assuming there would be no practical closed form solution. But
>> couldn't the THD be calculated in a simulation rather than having to be
>> measured on the bench?
>
> Our experience is that the output amplifiers, after the DAC and lowpass filter,
> are the nasty guys for distortion, at least in the 10s of MHz. Lots of
> commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table
> correction would unfortunately be amplitude dependent, so it gets messy fast.
Ok, if you are compensating for the rest of the hardware, that's a different matter... -- Rick
On 3/17/2013 6:44 PM, rickman wrote:
> On 3/17/2013 6:20 PM, John Larkin wrote:
>>
>> It would be interesting to distort the table entries to minimize THD. The
>> problem there is how to measure very low THD to close the loop.
>
> It could also be a difficult problem to solve without some sort of
> exhaustive search. Each point you fudge affects two curve segments and
> each curve segment is affected by two points. So there is some degree
> of interaction.
>
> Anyone know if there is a method to solve such a problem easily? I am
> sure that my current approach gets each segment end point to within ±1
> lsb and I suppose once you measure THD in some way it would not be an
> excessive amount of work to tune the 256 end points in a few passes
> through. This sounds like a tree search with pruning.
>
> I'm assuming there would be no practical closed form solution. But
> couldn't the THD be calculated in a simulation rather than having to be
> measured on the bench?
>
The simple method would be to compute a zillion-point FFT. Since the
function is really truly periodic, the FFT gives you the right answer
for the continuous-time Fourier transform, provided you have enough
points that you can see all the relevant harmonics.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics
160 North State Road #203
Briarcliff Manor NY 10510 USA
+1 845 480 2058
hobbs at electrooptical dot net
http://electrooptical.net
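[A minimal sketch of that kind of simulation check, in plain C. All names here are my own, and it assumes y[] holds exactly one period of the interpolated DDS output, so the correlation sums give the Fourier-series coefficients exactly and no windowing is needed. Compile with -lm.]

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* magnitude of harmonic k of one exact period of y[], length N */
static double harmonic_mag(const double *y, int N, int k)
{
    double re = 0.0, im = 0.0;
    for (int n = 0; n < N; n++) {
        double w = 2.0 * M_PI * k * n / N;
        re += y[n] * cos(w);
        im += y[n] * sin(w);
    }
    return 2.0 * hypot(re, im) / N;      /* amplitude of the k-th harmonic */
}

/* THD relative to the fundamental, in dB, summing harmonics 2..hmax */
double thd_db(const double *y, int N, int hmax)
{
    double fund = harmonic_mag(y, N, 1);
    double sum2 = 0.0;
    for (int k = 2; k <= hmax; k++) {
        double h = harmonic_mag(y, N, k);
        sum2 += h * h;
    }
    return 10.0 * log10(sum2 / (fund * fund));
}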
"Jon Kirwan" <jonk@infinitefactors.org> wrote in message 
news:r2ick8d61stgkekaelt9m5k4sqh0qc0sgm@4ax.com...
> You already know why. Just not wearing your hat right now.
> Two terms of taylors at 0 of cos(x) is x + x^2/2. Two at
> sin(x) is 1 + x. One is quadratic, one is linear.
Oops, should be:

cos(x) ~= 1 - x^2/2
sin(x) ~= x (- x^3/6)

Mixed up your odd/even powers :)

Of course, the cubic is less important than the quadratic, so sin(x) ~= x
is sufficient for most purposes.

I suppose you could potentially synthesize the functions using low power
approximations (like these) and interpolating between them over modulo pi/4
segments. Probably wouldn't save many clock cycles relative to a CORDIC,
higher order polynomial, or other, for a given accuracy, so the standard
space-time tradeoff (LUT vs. computation) isn't changed.

Tim

--
Deep Friar: a very philosophical monk.
Website: http://www.seventransistorlabs.com/
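[A rough sketch of that per-segment idea: reduce to the nearest quarter turn and use the two-term polynomials above. This is my own arrangement in C; with only two terms the worst-case error is a couple of percent at the pi/4 edges, so it illustrates the structure, not an 18-bit method.]

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

double sin_approx(double x)
{
    long   k = lround(x / (M_PI / 2.0));   /* nearest quadrant           */
    double r = x - k * (M_PI / 2.0);       /* residual in [-pi/4, pi/4]  */
    double s = r - r * r * r / 6.0;        /* sin(r) ~= r - r^3/6        */
    double c = 1.0 - r * r / 2.0;          /* cos(r) ~= 1 - r^2/2        */
    switch (((k % 4) + 4) % 4) {           /* quadrant picks the form    */
    case 0:  return  s;                    /* sin(r)                     */
    case 1:  return  c;                    /* sin(r + pi/2)  =  cos(r)   */
    case 2:  return -s;                    /* sin(r + pi)    = -sin(r)   */
    default: return -c;                    /* sin(r + 3pi/2) = -cos(r)   */
    }
}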
On Sun, 17 Mar 2013 19:03:21 -0400, rickman <gnuarm@gmail.com> wrote:

>On 3/17/2013 6:57 PM, John Larkin wrote:
>> On Sun, 17 Mar 2013 18:44:07 -0400, rickman<gnuarm@gmail.com> wrote:
>>
>>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>>
>>>> It would be interesting to distort the table entries to minimize THD. The
>>>> problem there is how to measure very low THD to close the loop.
>>>
>>> It could also be a difficult problem to solve without some sort of
>>> exhaustive search. Each point you fudge affects two curve segments and
>>> each curve segment is affected by two points. So there is some degree
>>> of interaction.
>>>
>>> Anyone know if there is a method to solve such a problem easily? I am
>>> sure that my current approach gets each segment end point to within ±1
>>> lsb and I suppose once you measure THD in some way it would not be an
>>> excessive amount of work to tune the 256 end points in a few passes
>>> through. This sounds like a tree search with pruning.
>>
>> If I could measure the THD accurately, I'd just blast in a corrective harmonic
>> adder to the table for each of the harmonics. That could be scientific or just
>> iterative. For the 2nd harmonic, for example, add a 2nd harmonic sine component
>> into all the table points, many of which wouldn't even change by an LSB, to null
>> out the observed distortion. Gain and phase, of course.
>
>I don't think you are clear on this. The table and linear interp are as
>good as they can get with the exception of one or possibly two lsbs to
>minimize the error in the linear interp. There is no way to correct for
>this by adding a second harmonic, etc. Remember, this is not a pure
>table lookup, it is a two step approach.
Why can't some harmonic component be added to the table points to improve downstream distortion? Of course, you'd need a full 360 degree table, not a folded one.
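[As a sketch of what Larkin describes: fold a small 2nd-harmonic component into an existing full-cycle table. The table length, scaling, and the amplitude/phase arguments are placeholders; in practice they would come from the measured residual.]

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

void predistort_2nd(long table[], int n_entries, double amp_lsb, double phase_rad)
{
    for (int n = 0; n < n_entries; n++) {
        double theta = 2.0 * M_PI * n / n_entries;
        /* subtract the harmonic the downstream chain is measured to add;
           many entries won't change by even one LSB, as noted above */
        table[n] -= lround(amp_lsb * sin(2.0 * theta + phase_rad));
    }
}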
>
>
>>> I'm assuming there would be no practical closed form solution. But
>>> couldn't the THD be calculated in a simulation rather than having to be
>>> measured on the bench?
>>
>> Our experience is that the output amplifiers, after the DAC and lowpass filter,
>> are the nasty guys for distortion, at least in the 10s of MHz. Lots of
>> commercial RF generators have amazing 2nd harmonic specs, like -20 dBc. A table
>> correction would unfortunately be amplitude dependent, so it gets messy fast.
>
>Ok, if you are compensating for the rest of the hardware, that's a
>different matter...
For everything, including the DAC, but especially downstream stuff. The amps
and stuff may have 100x more distortion than the lookup table does.

--
John Larkin
Highland Technology Inc
www.highlandtechnology.com
jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
On Sun, 17 Mar 2013 19:31:38 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

>On 3/17/2013 6:44 PM, rickman wrote:
>> On 3/17/2013 6:20 PM, John Larkin wrote:
>>>
>>> It would be interesting to distort the table entries to minimize THD. The
>>> problem there is how to measure very low THD to close the loop.
>>
>> It could also be a difficult problem to solve without some sort of
>> exhaustive search. Each point you fudge affects two curve segments and
>> each curve segment is affected by two points. So there is some degree
>> of interaction.
>>
>> Anyone know if there is a method to solve such a problem easily? I am
>> sure that my current approach gets each segment end point to within ±1
>> lsb and I suppose once you measure THD in some way it would not be an
>> excessive amount of work to tune the 256 end points in a few passes
>> through. This sounds like a tree search with pruning.
>>
>> I'm assuming there would be no practical closed form solution. But
>> couldn't the THD be calculated in a simulation rather than having to be
>> measured on the bench?
>>
>
>The simple method would be to compute a zillion-point FFT. Since the
>function is really truly periodic, the FFT gives you the right answer
>for the continuous-time Fourier transform, provided you have enough
>points that you can see all the relevant harmonics.
>
All you need now is a digitizer or spectrum analyzer that has PPM distortion!
We've bumped into this problem, trying to characterize some of our ARBs. The
best thing is to use a passive lowpass or notch filter (with very good Ls and
Cs!) to remove the fundamental and analyze what's left.

--
John Larkin
Highland Technology Inc
www.highlandtechnology.com
jlarkin at highlandtechnology dot com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
On Sun, 17 Mar 2013 19:03:54 -0500, "Tim Williams"
<tmoranwms@charter.net> wrote:

>"Jon Kirwan" <jonk@infinitefactors.org> wrote in message
>news:r2ick8d61stgkekaelt9m5k4sqh0qc0sgm@4ax.com...
>> You already know why. Just not wearing your hat right now.
>> Two terms of taylors at 0 of cos(x) is x + x^2/2. Two at
>> sin(x) is 1 + x. One is quadratic, one is linear.
>
>Oops, should be:
>cos(x) ~= 1 - x^2/2
>sin(x) ~= x (- x^3/6)
>Mixed up your odd/even powers :)
hehe. yes. But it gets the point across, either way. Detail I didn't care a lot about at the moment.
>Of course, the cubic is less important than the quadratic, so sin(x) ~= x is >sufficient for most purposes.
Yup. Jon
>I suppose you could potentially synthesize the functions using low power
>approximations (like these) and interpolating between them over modulo pi/4
>segments. Probably wouldn't save many clock cycles relative to a CORDIC,
>higher order polynomial, or other, for a given accuracy, so the standard
>space-time tradeoff (LUT vs. computation) isn't changed.
>
>Tim
In comp.dsp rickman <gnuarm@gmail.com> wrote:
(snip)
>>> I've been studying an approach to implementing a lookup table (LUT) to >>> implement a sine function.
>> (snip)
>>> If you assume a straight line between the two endpoints the midpoint of >>> each interpolated segment will have an error of >>> ((Sin(high)-sin(low))/2)-sin(mid)
>> Seems what they instead do is implement
>> sin(M)cos(L)+cos(M)sin(L) where M and L are the more and less
>> significant bits of theta. Also, cos(L) tends to be almost 1,
>> so they just say it is 1.
> Interesting. This tickles a few grey cells. Two values based solely on
> the MS portion of x and a value based solely on the LS portion. Two
> tables, three lookups a multiply and an add. That could work. The
> value for sin(M) would need to be full precision, but I assume the
> values of sin(L) could have less range because sin(L) will always be a
> small value...
The cos(M)sin(L) is one table, with some bits from each.

Well, when you do the interpolation, you first need to know the spacing
between points in the sine table. That spacing is proportional to cos(x).
So, the interpolation table is indexed by the high four bits of M and the
four bits of L.

(snip on getting to 90 degrees, or 2^n in binary.)
>> Well, you can fix it in various ways, but you definitely want 2^n.
(snip)
> I think one of the other posters was saying to add an entry for 90
> degrees. I don't like that. It could be done but it complicates the
> process of using a table for 0-90 degrees.
I am not sure yet how they do that. I did notice that when adding the interpolation value they propagate the carry, so the output can go to the full 1.0000 instead of 0.99999. But it will do that before 90 degrees. (snip)
>> Seems like that is one of the suggestions, but not done in the ROMs
>> they were selling. Then the interpolation has to add or subtract,
>> which is slightly (in TTL) harder.
>> The interpolated sine was done in 1970 with 128x8 ROMs. With larger
>> ROMs, like usual today, you shouldn't need it unless you want really
>> high resolution.
> I'm not using an actual ROM chip. This is block ram in a rather small
> FPGA with only six blocks. I need two channels and may want to use some
> of the blocks for other functions.
So, the one referenced uses three 256x4 ROMs to get 12 bits of sin(M) from 8 bits of M, and then a fourth 128x8 ROM to get five bits to add to the low bits of the 12 bit sin(M). -- glen
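[A floating-point sketch of that coarse/fine decomposition, sin(M+L) ~= sin(M) + cos(M)*sin(L) with cos(L) taken as 1 and sin(L) taken as L. The bit widths below are only to show the structure; they are not the exact partition of the 1970 ROMs.]

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define M_BITS 8                       /* coarse phase bits (one quadrant) */
#define L_BITS 4                       /* fine phase bits                  */
#define M_SIZE (1 << M_BITS)
#define L_SIZE (1 << L_BITS)

static double coarse[M_SIZE];          /* sin(M)                           */
static double fine[16][L_SIZE];        /* cos(M_high) * L, the correction  */

void build_tables(void)
{
    double m_step = (M_PI / 2.0) / M_SIZE;   /* coarse angle step          */
    double l_step = m_step / L_SIZE;         /* fine angle step            */
    for (int m = 0; m < M_SIZE; m++)
        coarse[m] = sin(m * m_step);
    /* cos() changes slowly, so the correction table only looks at the top
       four bits of M -- that is what keeps the second ROM small           */
    for (int mh = 0; mh < 16; mh++)
        for (int l = 0; l < L_SIZE; l++)
            fine[mh][l] = cos((mh * 16 + 8) * m_step) * (l * l_step);
}

/* first quadrant only: phase = m*m_step + l*l_step */
double sin_lookup(int m, int l)
{
    /* sin(M)*cos(L) + cos(M)*sin(L), with cos(L) ~= 1 and sin(L) ~= L */
    return coarse[m] + fine[m >> 4][l];
}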
On 3/17/13 5:49 PM, rickman wrote:
> On 3/16/2013 7:13 PM, robert bristow-johnson wrote:
>>
>> this *should* be a relatively simple issue, but i am confused
>>
>> On 3/16/13 6:30 PM, rickman wrote:
>>> I've been studying an approach to implementing a lookup table (LUT) to
>>> implement a sine function. The two msbs of the phase define the
>>> quadrant. I have decided that an 8 bit address for a single quadrant is
>>> sufficient with an 18 bit output.
>>
>> 10 bits or 1024 points. since you're doing linear interpolation, add one
>> more, copy the zeroth point x[0] to the last x[1024] so you don't have
>> to do any modulo (by ANDing with (1023-1) on the address of the second
>> point. (probably not necessary for hardware implementation.)
>>
>>
>> x[n] = sin( (pi/512)*n ) for 0 <= n <= 1024
>
> So you are suggesting a table with 2^n+1 entries? Not such a great idea
> in some apps, like hardware.
i think about hardware as a technology, not so much an app. i think that apps can have either hardware or software realizations.
> What is the advantage?
in *software* when using a LUT *and* linear interpolation for sinusoid
generation, you are interpolating between x[n] and x[n+1] where n is from
the upper bits of the word that comes outa the phase accumulator:

#define PHASE_BITS       10
#define PHASE_MASK       0x001FFFFF   // 2^21 - 1
#define FRAC_BITS        11
#define FRAC_MASK        0x000007FF   // 2^11 - 1
#define ROUNDING_OFFSET  0x000400

long phase, phase_increment, int_part, frac_part;
long y, x[1025];

...

phase = phase + phase_increment;
phase = phase & PHASE_MASK;

int_part  = phase >> FRAC_BITS;
frac_part = phase & FRAC_MASK;

y = x[int_part];
y = y + (((x[int_part+1]-y)*frac_part + ROUNDING_OFFSET)>>FRAC_BITS);

now if the lookup table is 1025 long with the last point a copy of the
first (that is x[1024] = x[0]), then you need not mask int_part+1.
x[int_part+1] always exists.
> Also, why 10 bit > address for a 1024 element table?
uh, because 2^10 = 1024? that came from your numbers: 8-bit address for a single quadrant and there are 4 quadrants, 2 bits for the quadrant.
> My calculations indicate a linear
> interpolation can be done with 4 ppm accuracy with a 256 element LUT.
> I'm not completely finished with my simulation, but I'm pretty confident
> this much is correct.
i'm agnostic about the specs for your numbers. you are saying that 8 bits per quadrant is enough and i am stipulating to that.
>>> Another 11 bits of phase will give me
>>> sufficient resolution to interpolate the sin() to 18 bits.
>>
>> so a 21 bit total index. your frequency resolution would be 2^(-21) in
>> cycles per sampling period or 2^(-21) * Fs. those 2 million values would
>> be the only frequencies you can meaningful
>
> No, that is the phase sent to the LUT. The total phase accumulator can
> be larger as the need requires.
why wouldn't you use those extra bits in the linear interpolation if they are part of the phase word? not doing so makes no sense to me at all. if you have more bits of precision in the fractional part of the phase word, use them.
>>> If you assume a straight line between the two endpoints the midpoint of
>>> each interpolated segment will have an error of
>>> ((Sin(high)-sin(low))/2)-sin(mid)
>>>
>>
>> do you mean "+" instead of the first "-"? to be explicit:
>>
>> ( sin((pi/512)*(n+1)) + sin((pi/512)*n) )/2 - sin((pi/512)*(n+0.5))
>>
>> that's the error in the middle. dunno if it's the max error, but it
>> might be.
>
> Yes, thanks for the correction. The max error? I'm not so worried about
> that exactly. The error is a curve with the max magnitude near the
> middle if nothing further is done to minimize it.
>
>
>>> Without considering rounding, this reaches a maximum at the last segment
>>> before the 90 degree point.
>>
>> at both the 90 and 270 degree points. (or just before and after those
>> points.)
>
> I'm talking about the LUT. The LUT only considers the first quadrant.
not in software. i realize that in hardware, where you're worried about real estate, you might want only one quadrant in ROM, but in software you would have an entire cycle of the waveform (plus one repeated point at the end for the linear interpolation). and if you *don't* have the size constraint in your hardware target and you can implement a 1024 point LUT as easily as a 256 point LUT, then why not? so you don't have to fiddle with reflecting the single quadrant around a couple of different ways.
>
>
>>> I calculate about 4-5 ppm which is about the
>>> same as the quantization of an 18 bit number.
>>>
>>> There are two issues I have not come across regarding these LUTs. One is
>>> adding a bias to the index before converting to the sin() value for the
>>> table. Some would say the index 0 represents the phase 0 and the index
>>> 2^n represents 90 degrees. But this is 2^n+1 points which makes a LUT
>>> inefficient, especially in hardware. If a bias of half the lsb is added
>>> to the index before converting to a sin() value the value 0 to 2^n-1
>>> becomes symmetrical with 2^n to 2^(n+1)-1 fitting a binary sized table
>>> properly. I assume this is commonly done and I just can't find a mention
>>> of it.
>>
>> do you mean biasing by 1/2 of a point? then your max error will be *at*
>> the 90 and 270 degree points and it will be slightly more than what you
>> had before.
>
> No, not quite right. There is a LUT with points spaced at 90/255 degrees
> apart starting at just above 0 degrees. The values between points in the
> table are interpolated with a maximum deviation near the center of the
> interpolation. Next to 90 degrees the interpolation is using the maximum
> interpolation factor which will result in a value as close as you can
> get to the correct value if the end points are used to construct the
> interpolation line. 90 degrees itself won't actually be represented, but
> rather points on either side, 90±delta where delta is 360° / 2^(n+1)
> with n being the number of bits in the input to the sin function.
>
i read this over a couple times and cannot grok it quite. need to be more explicit (mathematically, or with C or pseudocode) with what your operations are.
>
>>> The other issue is how to calculate the values for the table to give the
>>> best advantage to the linear interpolation. Rather than using the exact
>>> match to the end points stored in the table, an adjustment could be done
>>> to minimize the deviation over each interpolated segment. Without this,
>>> the errors are always in the same direction. With an adjustment the
>>> errors become bipolar and so will reduce the magnitude by half (approx).
>>> Is this commonly done? It will require a bit of computation to get these
>>> values, but even a rough approximation should improve the max error by a
>>> factor of two to around 2-3 ppm.
>>
>> if you assume an approximate quadratic behavior over that short segment,
>> you can compute the straight line where the error in the middle is equal
>> in magnitude (and opposite in sign) to the error at the end points.
>> that's a closed form solution, i think.
>
> Yes, it is a little tricky because at this point we are working with
> integer math (or technically fixed point I suppose).
not with defining the points of the table. you do that in MATLAB or in C or something. your hardware gets its LUT spec from something you create with a math program.
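[For instance, a little C program along these lines could spit out the LUT contents. The 256 entries, 18-bit unsigned scaling, and the half-LSB index offset discussed earlier are assumptions about the numbers in this thread, not anyone's actual code.]

#include <math.h>
#include <stdio.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TABLE_BITS  8
#define TABLE_SIZE  (1 << TABLE_BITS)       /* 256 entries, first quadrant */
#define OUT_BITS    18
#define OUT_MAX     ((1 << OUT_BITS) - 1)   /* 262143                      */

int main(void)
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        /* the +0.5 centers each entry between 0 and 90 degrees, so the
           table is symmetric and no separate 90-degree entry is needed */
        double phase = (i + 0.5) * (M_PI / 2.0) / TABLE_SIZE;
        long   value = lround(sin(phase) * OUT_MAX);
        printf("%3d %6ld\n", i, value);
    }
    return 0;
}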
> Rounding errors are
> what this is all about. I've done some spreadsheet simulations and I
> have some pretty good results. I updated it a bit to generalize it to
> the LUT size and I keep getting the same max error counts (adjusted to
> work with integers rather than fractions), ±3, no matter what the size of
> the interpolation factor. I don't expect this and I think I have
> something wrong in the calculations. I'll need to resolve this.
>
>
>> dunno if that is what you actually want for a sinusoidal waveform
>> generator. i might think you want to minimize the mean square error.
>
> We are talking about the lsbs of a 20+ bit word. Do you think there will
> be much of a difference in result?
i'm just thinking that if you were using LUT to generate a sinusoid that is a *signal* (not some parameter that you use to calculate other stuff), then i think you want to minimize the mean square error (which is the power of the noise relative to the power of the sinusoid). so the LUT values might not be exactly the same as the sine function evaluated at those points.
> I need to actually be able to do the
> calculations and get this done rather than continue to work on the
> process. Also, each end point affects two lines,
what are "lines"? not quite following you.
> so there are tradeoffs,
> make one better and the other worse? It seems to get complicated very
> quickly.
if you want to define your LUT to minimize the mean square error, assuming linear interpolation between LUT entries, that can be a little complicated, but i don't think harder than what i remember doing in grad school. want me to show you? you get a big system of linear equations to get the points. if you want to minimize the maximum error (again assuming linear interpolation), then it's that fitting a straight line to that little snippet of quadratic curve bit that i mentioned.
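[For one short table segment [a, b], that straight-line fit has a simple closed form. A double-precision sketch with my own naming; it treats the segment in isolation, ignoring the sharing of endpoints between adjacent segments discussed above.]

#include <math.h>

/* best straight line for sin() over one short segment [a,b], assuming
   sin() is close to quadratic there (as it is near 90 degrees) */
void best_line(double a, double b, double *slope, double *intercept)
{
    double chord = (sin(b) - sin(a)) / (b - a);               /* chord slope    */
    double sag   = sin(0.5*(a + b)) - 0.5*(sin(a) + sin(b));  /* midpoint error */
    /* lifting the chord by half the sag makes the error +sag/2 at both
       endpoints and -sag/2 in the middle -- equal and opposite            */
    *slope     = chord;
    *intercept = sin(a) - chord*a + 0.5*sag;
}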
>
>
>>> Now if I can squeeze another 16 dB of SINAD out of my CODEC to take
>>> advantage of this resolution! lol
>>>
>>> One thing I learned while doing this is that near 0 degrees the sin()
>>> function is linear (we all knew that, right?) but near 90 degrees, the
>>> sin() function is essentially quadratic. Who would have thunk it?
>>
>> Newton? Leibnitz? Gauss?
>>
>> sin(t + pi/2) = cos(t)
>
> How does that imply a quadratic curve at 90 degrees?
sin(t) = t + ... small terms when t is small

cos(t) = 1 - (t^2)/2 + ... small terms when t is small
> At least I think like the greats!
but they beat you by a few hundred years. :-) -- r b-j rbj@audioimagination.com "Imagination is more important than knowledge."
On Sun, 17 Mar 2013 18:35:37 -0400, rickman <gnuarm@gmail.com> wrote:

>On 3/17/2013 6:23 PM, langwadt@fonz.dk wrote:
>> On Mar 17, 10:56 pm, rickman<gnu...@gmail.com> wrote:
>>> On 3/16/2013 8:02 PM, langw...@fonz.dk wrote:
>>>
>>>> why not skip the lut and just do a full sine approximation
>>>
>>>> something like this, page 53-54
>>>
>>>> http://www.emt.uni-linz.ac.at/education/Inhalte/pr_mderf/downloads/AD...
>>>
>>> I'm not sure this would be easier. The LUT and interpolation require a
>>> reasonable amount of logic. This requires raising X to the 5th power
>>> and five constant multiplies. My FPGA doesn't have multipliers and this
>>> may be too much for the poor little chip. I suppose I could do some of
>>> this with a LUT and linear interpolation... lol
>>>
>>
>> I see, I assumed you had an MCU (or fpga) with multipliers
>>
>> considered CORDIC?
>
>I should learn that. I know it has been used to good effect in FPGAs.
>Right now I am happy with the LUT, but I would like to learn more about
>CORDIC. Any useful references?
If you are implementing in an FPGA with no multipliers, then CORDIC is the first thing you should be looking at. Do a web search on "CORDIC" for references. I know that's not at all obvious to you, but it works to try. Or you could keep coming back here for more spoonfeeding.
> >-- > >Rick
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
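[As a starting point on CORDIC, a minimal fixed-point rotation-mode sine/cosine model in C: the multiplier-free shift-and-add iteration. The 2.30 format, 16 iterations, and the |z| <= pi/2 input range are choices made for this sketch, not taken from any particular design.]

#include <stdint.h>
#include <math.h>

#define ITER 16
#define FRAC 30                    /* values scaled by 2^30                   */

static int32_t atan_tab[ITER];     /* atan(2^-i), scaled                      */
static int32_t inv_gain;           /* prod cos(atan(2^-i)) ~= 0.6073, scaled  */

void cordic_init(void)
{
    double k = 1.0;
    for (int i = 0; i < ITER; i++) {
        atan_tab[i] = (int32_t)lround(atan(pow(2.0, -i)) * (1 << FRAC));
        k *= cos(atan(pow(2.0, -i)));
    }
    inv_gain = (int32_t)lround(k * (1 << FRAC));
}

/* rotate the vector (inv_gain, 0) by angle z (radians scaled by 2^FRAC,
   |z| <= pi/2); x converges to cos(z) and y to sin(z), both scaled by 2^FRAC */
void cordic_sincos(int32_t z, int32_t *c, int32_t *s)
{
    int32_t x = inv_gain, y = 0;
    for (int i = 0; i < ITER; i++) {
        int32_t xs = x >> i, ys = y >> i;   /* only shifts and adds below */
        if (z >= 0) { x -= ys; y += xs; z -= atan_tab[i]; }
        else        { x += ys; y -= xs; z += atan_tab[i]; }
    }
    *c = x;
    *s = y;
}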
For an approach with easier error estimation you could use the three table sine/cosine generator that implements the trig identities:

Sum formulas for sine and cosine
    sin(w + deltaw) = sin(w) * cos(deltaw) + cos(w) * sin(deltaw)
    cos(w + deltaw) = cos(w) * cos(deltaw) - sin(w) * sin(deltaw)

The tables required are:
#1 one quadrant sin(w)/cos(w) table of 2^M entries
#2 sin(deltaw) table of 2^N entries
#3 cos(deltaw) table of 2^N entries
(#3 only if cosine is generated as well as sine)

This will calculate the equivalent of a sine/cosine table with 2+M+N address bits (2^(2+M+N) entries). The only error sources in the calculated results are the errors in generating/representing the values in the tables and the accuracy of implementing the adds and multiplies in the trig identity.
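[A double-precision model of that structure might look like the following; table sizes and names are illustrative, and a hardware version would quantize the tables and use fixed-point multiplies.]

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define MBITS 8
#define NBITS 8
#define MSIZE (1 << MBITS)
#define NSIZE (1 << NBITS)

static double sin_c[MSIZE], cos_c[MSIZE];   /* table #1 (one quadrant) */
static double sin_f[NSIZE], cos_f[NSIZE];   /* tables #2 and #3        */

void build(void)
{
    double cstep = (M_PI / 2.0) / MSIZE;    /* coarse step             */
    double fstep = cstep / NSIZE;           /* fine step               */
    for (int m = 0; m < MSIZE; m++) {
        sin_c[m] = sin(m * cstep);
        cos_c[m] = cos(m * cstep);
    }
    for (int n = 0; n < NSIZE; n++) {
        sin_f[n] = sin(n * fstep);
        cos_f[n] = cos(n * fstep);
    }
}

/* first-quadrant phase split into coarse index m and fine index n */
void sincos_lookup(int m, int n, double *s, double *c)
{
    *s = sin_c[m] * cos_f[n] + cos_c[m] * sin_f[n];
    *c = cos_c[m] * cos_f[n] - sin_c[m] * sin_f[n];
}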

Dale B. Dalrymple