DSPRelated.com
Forums

DDS LUT Size Calculation

Started by rickman February 7, 2017
Cedron <103185@DSPRelated> wrote:

>Since you intend on having two lookup tables, one for the value, one for
>the slope, the optimal solution in terms of a least squares fit of a
>uniformly distributed input variable is to do linear regression of your
>function on each interval.  This will only improve your accuracy by about
>a factor of two, or one bit's worth.
>
>Now, I'm dropping the mic and walking away.
What I read Rick as saying is that he was constructing this flow chart
with the idea of initially populating the second LUT with values that
imply use of the tangent method, but then revisiting exactly what values
are optimal in his case.

This is important because the optimality criteria might not be least
squares and might depend on running some datasets.

Having this flexibility is an important reason for using LUT's in the
first place.

Steve
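For anyone who wants to see the size of the effect being discussed, here is a small sketch (my own code, not anything from the thread; the table depth of 256 and the sample counts are arbitrary choices) comparing three ways of turning a coarse sine table into a line segment on each interval: the tangent method, a line through the two endpoints, and a per-interval least-squares fit.

```python
import numpy as np

# Illustrative sketch: three line-segment approximations of sin() on each
# interval of a coarse table over the first quadrant.
#   tangent: value sin(a), slope cos(a)       (Taylor at the left edge)
#   interp:  line through sin(a) and sin(b)   (classic linear interpolation)
#   lsq:     per-interval least-squares line  (regression, as suggested)
N = 256                               # coarse table depth (arbitrary)
edges = np.linspace(0.0, np.pi / 2, N + 1)
u = np.linspace(0.0, 1.0, 101)        # sample positions within one interval

rms = {"tangent": [], "interp": [], "lsq": []}
for a, b in zip(edges[:-1], edges[1:]):
    t = a + (b - a) * u
    truth = np.sin(t)
    tangent = np.sin(a) + np.cos(a) * (t - a)
    interp = np.sin(a) + (np.sin(b) - np.sin(a)) * u
    lsq = np.polyval(np.polyfit(t, truth, 1), t)
    for name, approx in (("tangent", tangent), ("interp", interp), ("lsq", lsq)):
        rms[name].append(np.sqrt(np.mean((approx - truth) ** 2)))

for name in rms:
    print(name, max(rms[name]))
```

On this comparison the least-squares line improves on endpoint interpolation by roughly the factor of two (one bit's worth) estimated above, and beats the one-sided tangent line by somewhat more, since the tangent's error is all on one side of the interval.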
On 2/10/2017 9:59 AM, Cedron wrote:
>> On 2/9/2017 4:01 PM, Cedron wrote:
>>>> On 2/9/2017 10:27 AM, Cedron wrote:
>>>>>> cos(a) is proportional to the slope of sin(a).  So cos(a)*b is simply a
>>>>>> linear interpolation using the slope of the curve at (a).
>>>>>>
>>>>>
>>>>> Small pedantic correction:  interpolation --> approximation
>>>>
>>>> What exactly is your point about this?
>>>>
>>>
>>> Nice pun.
>>>
>>> The point is you are using one point and a slope to define a line segment.
>>> Linear interpolation means using two end points to define the line
>>> segment.
>>>
>>> https://en.wikipedia.org/wiki/Linear_interpolation
>>>
>>> When it comes to finding the values between your table entries you aren't
>>> interpolating, you are approximating.
>>
>> Linear interpolation uses two points connected by a straight line.
>> There are many forms of interpolation.
>>
>> https://en.wikipedia.org/wiki/Interpolation
>>
>> --
>>
>> Rick C
>
> Rick,
>
> Honestly, I don't understand what you aren't understanding.  I sure hope
> you are one in ten and not ten in ten.  I am also considering the
> possibility that you are just toying with me.
>
> Oh well.
>
> Strictly speaking, you are interpolating, even interpolating linearly, but
> you are not doing a linear interpolation.  In the numerical example I
> provided, "Linear Interpolation" corresponds to the column labeled
> "Interpolation" and "interpolation using the slope of the curve at (a)"
> corresponds to the column labeled "Taylor".  They are not the same, nor
> can they be made the same by shifting where the sampled points are taken.
>
> I misspoke earlier about interpolating with a polynomial and matching the
> endpoints and their first derivatives.  Generally, this requires a third
> degree equation and not a second degree one as I stated.
>
> Since you intend on having two lookup tables, one for the value, one for
> the slope, the optimal solution in terms of a least squares fit of a
> uniformly distributed input variable is to do linear regression of your
> function on each interval.  This will only improve your accuracy by about
> a factor of two, or one bit's worth.
>
> Now, I'm dropping the mic and walking away.
Yes, you said "linear interpolation".  I didn't.

--

Rick C
On 2/10/2017 2:58 PM, Steve Pope wrote:
> Cedron <103185@DSPRelated> wrote:
>
>> Since you intend on having two lookup tables, one for the value, one for
>> the slope, the optimal solution in terms of a least squares fit of a
>> uniformly distributed input variable is to do linear regression of your
>> function on each interval.  This will only improve your accuracy by about
>> a factor of two, or one bit's worth.
>>
>> Now, I'm dropping the mic and walking away.
>
> What I read Rick as saying is that he was constructing this flow
> chart with the idea of initially populating the second LUT with
> values that imply use of the tangent method, but then revisiting
> exactly what values are optimal in his case.
>
> This is important because the optimality criteria might not be
> least squares and might depend on running some datasets.
>
> Having this flexibility is an important reason for using LUT's
> in the first place.
Actually, both tables are optimized.  While you might think rounding
would always give the best result for the coarse table, that is not
necessarily true depending on which way the value at the other end of
the line rounded, up or down.  If both ends would be rounded up, or both
ends rounded down, it can make sense to move one end the other way to
get a better fit to the data across the line segment.  In general, I
believe the values chosen in the coarse sine table are more important
than the values in the slope table, and blindly rounding doesn't get the
"best" result.

This whole conversation started in another group regarding the use of
the trig method vs. CORDIC.  It was claimed that CORDIC gave *exact*
results.  I recalled from work I had done a few years ago that I could
get accuracy as good as I wish by adjusting the depth of the LUT and
using a multiplier.  What I should do is explore the CORDIC algorithm
and find out just how "exact" the calculation is.

I don't think the trig method can get the exact same result as you would
get by doing the calculations in infinite precision and rounding.  There
will always be some values that are off in the last bit.  In some cases
there may be errors of two lsb, but I'm not sure of that.  The
spreadsheet analyzing this is rather complex and I'd need to spend time
with it to make sure of the results.

--

Rick C
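The rounding-direction point can be demonstrated with a small sketch (my own illustration and my own parameter choices, not anyone's actual spreadsheet): for each coarse-table line segment, compare blind nearest-rounding of the two endpoint values against choosing floor-or-ceiling independently at each endpoint so as to minimize the worst error over the whole segment.

```python
import math

# Endpoint quantization for a coarse sine table: nearest rounding vs a
# per-segment search over floor/ceil at each end.  B and N are arbitrary.
B = 10                        # fractional bits in the table
N = 16                        # coarse table depth over the first quadrant
scale = 1 << B
edges = [i * (math.pi / 2) / N for i in range(N + 1)]

def seg_err(y0, y1, a, b, steps=64):
    """Worst |line - sin| for the line joining (a, y0) and (b, y1)."""
    worst = 0.0
    for j in range(steps + 1):
        u = j / steps
        worst = max(worst, abs(y0 + (y1 - y0) * u - math.sin(a + (b - a) * u)))
    return worst

e_rounds, e_bests = [], []
for i in range(N):
    a, b = edges[i], edges[i + 1]
    # blind nearest rounding of both endpoints
    e_rounds.append(seg_err(round(math.sin(a) * scale) / scale,
                            round(math.sin(b) * scale) / scale, a, b))
    # search: each endpoint may round up or down, best combination wins
    combos = [(y0 / scale, y1 / scale)
              for y0 in (math.floor(math.sin(a) * scale), math.ceil(math.sin(a) * scale))
              for y1 in (math.floor(math.sin(b) * scale), math.ceil(math.sin(b) * scale))]
    e_bests.append(min(seg_err(y0, y1, a, b) for y0, y1 in combos))

improved = sum(eb < er - 1e-15 for eb, er in zip(e_bests, e_rounds))
print(improved, "of", N, "segments fit better with a 'wrong-way' rounding")
```

Because nearest rounding is always among the candidates, the searched table can never be worse; whether a segment is strictly better depends on how the two rounding directions happen to line up against the curvature of the sine over that segment, which is exactly the effect described above.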
>
>Yes, you said "linear interpolation".  I didn't.
>
>--
>
>Rick C
I guess I'm picking up the mic again.

Yes you did.  This was in your reply to Tim Wescott yesterday:

"That is *exactly* what is going on.  I realized there are a number of
interesting relationships that allow this to work.  One is that the
cos(a) is proportional to the slope of sin(a).  So cos(a)*b is simply a
linear interpolation using the slope of the curve at (a)."

Right there in the last sentence you said "linear interpolation".  I
merely suggested "linear approximation" would be more correct.  Steve
Pope's term "tangent method" would be even better.  I didn't mean for it
to be a big deal.

Ced

---------------------------------------

Posted through http://www.DSPRelated.com
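A small worked number (my own example, not from Cedron's original table) makes the distinction between the two columns concrete: approximate sin(0.55) from hypothetical table points at a = 0.50 and b = 0.60.

```python
import math

a, b, t = 0.50, 0.60, 0.55
exact = math.sin(t)

# "Interpolation" column: line through the two endpoints
interp = math.sin(a) + (math.sin(b) - math.sin(a)) * (t - a) / (b - a)

# "Taylor" column: tangent line at a, using slope cos(a)
taylor = math.sin(a) + math.cos(a) * (t - a)

print(f"exact={exact:.6f}  interp={interp:.6f}  taylor={taylor:.6f}")
```

The interpolated value comes out a little below the exact value and the tangent-line value a little above it; the two are genuinely different approximations, and no shifting of the sample points makes the columns agree.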
rickman  <gnuarm@gmail.com> wrote:

>This whole conversation started in another group regarding the use of
>the trig method vs. CORDIC.  It was claimed that CORDIC gave *exact*
>results.  I recalled from work I had done a few years ago that I could
>get accuracy as good as I wish by adjusting the depth of the LUT and
>using a multiplier.  What I should do is explore the CORDIC algorithm
>and find out just how "exact" the calculation is.
There usually isn't much point to using CORDIC in DSP.  It does have
application in math libraries.  If a CORDIC routine is using double
double types internally, then it can probably give an exact result for
the double case.  But I'll leave such determinations for the unfortunate
designers who actually have to worry about such things.

I've been in DSP for decades and have never once used CORDIC in one of
my own designs.

S.
On 2/11/2017 12:24 AM, Steve Pope wrote:
> rickman <gnuarm@gmail.com> wrote:
>
>> This whole conversation started in another group regarding the use of
>> the trig method vs. CORDIC.  It was claimed that CORDIC gave *exact*
>> results.  I recalled from work I had done a few years ago that I could
>> get accuracy as good as I wish by adjusting the depth of the LUT and
>> using a multiplier.  What I should do is explore the CORDIC algorithm
>> and find out just how "exact" the calculation is.
>
> There usually isn't much point to using CORDIC in DSP.  It does have
> application in math libraries.  If a CORDIC routine is using
> double double types internally, then it can probably give an
> exact result for the double case.  But I'll leave such determinations for
> the unfortunate designers who actually have to worry about such things.
>
> I've been in DSP for decades and have never once used CORDIC in
> one of my own designs.
I don't know what type of designs you do, but CORDIC was very popular
for RADAR work among others back in the day when FPGAs didn't have
multipliers.  I believe Ray Andraka was a big user of them.  It was
often touted that CORDIC didn't require a multiplier, although the
circuitry for CORDIC is of the same level of complexity as a multiplier.
I didn't figure this out until I tried to learn about CORDIC once.  By
that time multipliers were not uncommon in FPGAs, so I didn't pursue it
further.

BTW, what is a "double"?  I've seen that used with integers as well as
floating point data types, so I'm not sure which you are referring to.

--

Rick C
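For readers who haven't met it, the algorithm under discussion can be sketched in a few lines.  This is the textbook rotation-mode CORDIC (my own sketch, not code from the thread), valid for angles in [-pi/2, pi/2]: each iteration rotates the vector by a fixed micro-angle using only additions and multiplications by 2**-i, which in hardware are just arithmetic shifts, and the constant gain of the cascade is folded into the starting value.

```python
import math

ITERS = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(ITERS)]
K = 1.0
for ang in ANGLES:
    K *= math.cos(ang)          # 1/gain of the rotation cascade

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC for theta in [-pi/2, pi/2]."""
    x, y, z = K, 0.0, theta     # pre-scaled so the final vector has length 1
    for i in range(ITERS):
        d = 1.0 if z >= 0.0 else -1.0
        # the 2**-i products would be arithmetic shifts in hardware
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x                  # (sin(theta), cos(theta))

s, c = cordic_sin_cos(0.7)
print(abs(s - math.sin(0.7)), abs(c - math.cos(0.7)))
```

With 32 iterations the residual angle is on the order of 2**-32, so the result converges toward the true value but is not guaranteed to be the correctly rounded one; that is the nuance behind whether CORDIC results are "exact".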
In article <o7mirg$u3v$1@dont-email.me>, rickman  <gnuarm@gmail.com> wrote:

>On 2/11/2017 12:24 AM, Steve Pope wrote:
>> There usually isn't much point to using CORDIC in DSP.  It does have
>> application in math libraries.  If a CORDIC routine is using
>> double double types internally, then it can probably give an
>> exact result for the double case.  But I'll leave such determinations for
>> the unfortunate designers who actually have to worry about such things.
>> I've been in DSP for decades and have never once used CORDIC in
>> one of my own designs.
>I don't know what type of designs you do, but CORDIC was very popular
>for RADAR work among others back in the day when FPGAs didn't have
>multipliers.  I believe Ray Andraka was a big user of them.  It was
>often touted that CORDIC didn't require a multiplier, although the
>circuitry for CORDIC is of the same level of complexity as a multiplier.
>I didn't figure this out until I tried to learn about the CORDIC once.
>By that time multipliers were not uncommon in FPGAs, so I didn't
>pursue it further.
Thanks for this insight.  Yes, CORDIC has been popular, and while I
haven't used it in a design I have inherited designs that use it.

It happened that the first job I had in DSP included a product where we
could afford to include a TRW multiplier chip in the design.  (This was
around 1976-1979.)

After that I was at UC Berkeley, and we were doing VLSI designs so we
could include multipliers as required.  (I usually used half-parallel
multipliers, sometimes with Booth's recoding, and I recall some animated
discussions with Rick Lyons on the topic of multiplier logic design.)

I re-entered the industry in 1984 and multipliers continued to become
more and more available.  Although, it has been only in the last decade
or so that you could count on just synthesizing the * operator in
Verilog, as opposed to using something like Designware to instantiate a
vendor multiplier from their library, or perhaps rolling your own.
>BTW, what is a "double"?  I've seen that used with integers as well as
>floating point data types, so I'm not sure which you are referring to.
double is 64-bit floating point; double double is 128 bits, in many
(most?) flavors of C or C++.

Steve
On 2/11/2017 8:47 AM, Steve Pope wrote:
> In article <o7mirg$u3v$1@dont-email.me>, rickman <gnuarm@gmail.com> wrote:
>
>> On 2/11/2017 12:24 AM, Steve Pope wrote:
>
>>> There usually isn't much point to using CORDIC in DSP.  It does have
>>> application in math libraries.  If a CORDIC routine is using
>>> double double types internally, then it can probably give an
>>> exact result for the double case.  But I'll leave such determinations for
>>> the unfortunate designers who actually have to worry about such things.
>
>>> I've been in DSP for decades and have never once used CORDIC in
>>> one of my own designs.
>
>> I don't know what type of designs you do, but CORDIC was very popular
>> for RADAR work among others back in the day when FPGAs didn't have
>> multipliers.  I believe Ray Andraka was a big user of them.  It was
>> often touted that CORDIC didn't require a multiplier, although the
>> circuitry for CORDIC is of the same level of complexity as a multiplier.
>> I didn't figure this out until I tried to learn about the CORDIC once.
>> By that time multipliers were not uncommon in FPGAs, so I didn't
>> pursue it further.
>
> Thanks for this insight.  Yes, CORDIC has been popular and while
> I haven't used it in a design I have inherited designs that use it.
>
> It happened that the first job I had in DSP included a product
> where we could afford to include a TRW multiplier chip in the design.
> (This was around 1976-1979.)
>
> After that I was at UC Berkeley, and we were doing VLSI designs
> so we could include multipliers as required (I usually used
> half-parallel multipliers, sometimes with Booth's recoding, and
> I recall some animated discussions with Rick Lyons on the topic
> of multiplier logic design.)
>
> I re-entered the industry in 1984 and multipliers continued to
> become more and more available.  Although, it has been only
> in the last decade or so that you could count on just synthesizing
> the * operator in Verilog, as opposed to using something like
> Designware to instantiate a vendor multiplier from their library,
> or perhaps rolling your own.
>
>> BTW, what is a "double"?  I've seen that used with integers as well as
>> floating point data types, so I'm not sure which you are referring to.
>
> double is 64 bits floating point, double double is 128 bits,
> in many (most?) flavors of C or C++.
I didn't assume you were talking about C.  In HDL, bit widths are
explicitly defined rather than using names for specific widths.

A double in a sine calculation would already be far more precision than
required by any app I can think of.  In fact, the other person I was
discussing this with was using a 32 bit CORDIC sine generator, which is
far more resolution than I would have ever expected to need.

--

Rick C
On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote:

>On 2/11/2017 12:24 AM, Steve Pope wrote:
>> rickman <gnuarm@gmail.com> wrote:
>>
>>> This whole conversation started in another group regarding the use of
>>> the trig method vs. CORDIC.  It was claimed that CORDIC gave *exact*
>>> results.  I recalled from work I had done a few years ago that I could
>>> get accuracy as good as I wish by adjusting the depth of the LUT and
>>> using a multiplier.  What I should do is explore the CORDIC algorithm
>>> and find out just how "exact" the calculation is.
>>
>> There usually isn't much point to using CORDIC in DSP.  It does have
>> application in math libraries.  If a CORDIC routine is using
>> double double types internally, then it can probably give an
>> exact result for the double case.  But I'll leave such determinations for
>> the unfortunate designers who actually have to worry about such things.
>>
>> I've been in DSP for decades and have never once used CORDIC in
>> one of my own designs.
>
>I don't know what type of designs you do, but CORDIC was very popular
>for RADAR work among others back in the day when FPGAs didn't have
>multipliers.  I believe Ray Andraka was a big user of them.  It was
>often touted that CORDIC didn't require a multiplier, although the
>circuitry for CORDIC is of the same level of complexity as a multiplier.
>I didn't figure this out until I tried to learn about the CORDIC once.
>By that time multipliers were not uncommon in FPGAs, so I didn't
>pursue it further.
Yes, in the early-to-mid 90s putting demodulators in FPGAs with no
multipliers required the use of tricks like CORDIC rotators for carrier
mixing, DDS, etc.  You could also use LUTs, but memory on FPGAs (or
anywhere) in those days was also a very precious resource.

We did a LOT of complexity tradeoff studies because we'd always try to
pick an FPGA just big enough to hold the design, and if you could step
down an FPGA size it usually meant saving a bunch of money on every
unit.  When the FPGA vendors started populating very usable multipliers
on the die, the CORDIC fell out of favor pretty quickly.

I never saw a reason to look at CORDICs after that except for a few very
nichey applications that needed FPGAs that didn't have multipliers for
various reasons.  It's still a good thing to have in the bag of tricks,
though, for when it is useful.
On 2/11/2017 5:32 PM, eric.jacobsen@ieee.org wrote:
> On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote:
>
>> On 2/11/2017 12:24 AM, Steve Pope wrote:
>>> rickman <gnuarm@gmail.com> wrote:
>>>
>>>> This whole conversation started in another group regarding the use of
>>>> the trig method vs. CORDIC.  It was claimed that CORDIC gave *exact*
>>>> results.  I recalled from work I had done a few years ago that I could
>>>> get accuracy as good as I wish by adjusting the depth of the LUT and
>>>> using a multiplier.  What I should do is explore the CORDIC algorithm
>>>> and find out just how "exact" the calculation is.
>>>
>>> There usually isn't much point to using CORDIC in DSP.  It does have
>>> application in math libraries.  If a CORDIC routine is using
>>> double double types internally, then it can probably give an
>>> exact result for the double case.  But I'll leave such determinations for
>>> the unfortunate designers who actually have to worry about such things.
>>>
>>> I've been in DSP for decades and have never once used CORDIC in
>>> one of my own designs.
>>
>> I don't know what type of designs you do, but CORDIC was very popular
>> for RADAR work among others back in the day when FPGAs didn't have
>> multipliers.  I believe Ray Andraka was a big user of them.  It was
>> often touted that CORDIC didn't require a multiplier, although the
>> circuitry for CORDIC is of the same level of complexity as a multiplier.
>> I didn't figure this out until I tried to learn about the CORDIC once.
>> By that time multipliers were not uncommon in FPGAs, so I didn't
>> pursue it further.
>
> Yes, in the early-to-mid 90s putting demodulators in FPGAs with no
> multipliers required the use of tricks like CORDIC rotators for
> carrier mixing, DDS, etc.  You could also use LUTs, but memory on FPGAs
> (or anywhere) in those days was also a very precious resource.
>
> We did a LOT of complexity tradeoff studies because we'd always try to
> pick an FPGA just big enough to hold the design, and if you could step
> down an FPGA size it usually meant saving a bunch of money on every
> unit.  When the FPGA vendors started populating very usable
> multipliers on the die, the CORDIC fell out of favor pretty quickly.
>
> I never saw a reason to look at CORDICs after that except for a few
> very nichey applications that needed FPGAs that didn't have
> multipliers for various reasons.  It's still a good thing to have in
> the bag of tricks, though, for when it is useful.
What about the claim that the CORDIC algorithm produces an "exact"
result?  That was defined as "Exact would mean all the bits are correct,
though typically an error of 1 ULP (unit in the last place) is ok."

I'm not certain the trig with table lookup approach can get to no more
than ±1 lsb.  I think the combination of errors might result in ±2.  But
then all the work I've done so far was aimed at keeping the hardware
reduced as much as possible, so no extra bits are calculated and rounded
off.

--

Rick C
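A rough experiment in the spirit of this question (my own sketch with my own arbitrary bit widths, not the spreadsheet mentioned above): evaluate sine as a coarse-table value plus slope*offset on an integer datapath with no guard bits, and measure how far the result lands from the correctly rounded value.

```python
import math

B = 16                        # output precision in bits (arbitrary)
COARSE = 10                   # coarse address bits (arbitrary)
FINE = 10                     # residual phase bits (arbitrary)
scale = 1 << B
step = (math.pi / 2) / (1 << COARSE)

worst = 0
for addr in range(1 << COARSE):
    a = addr * step
    sin_q = round(math.sin(a) * scale)            # coarse LUT entry
    slope_q = round(math.cos(a) * step * scale)   # slope LUT entry, pre-scaled
    for frac in (0, (1 << FINE) // 2, (1 << FINE) - 1):   # spot-check offsets
        t = a + step * frac / (1 << FINE)
        approx = sin_q + ((slope_q * frac) >> FINE)       # truncating shift
        ideal = round(math.sin(t) * scale)        # "infinite precision, then round"
        worst = max(worst, abs(approx - ideal))

print("worst deviation from correct rounding:", worst, "lsb")
```

With no extra internal bits, the two table roundings can each contribute half an lsb and the truncating shift nearly a full lsb, so a worst case of a couple of lsb is what the error budget predicts; rounding the product instead of truncating, or carrying guard bits internally, would tighten this.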