
DDS LUT Size Calculation

Started by rickman February 7, 2017
On Sat, 11 Feb 2017 18:01:04 -0500, rickman <gnuarm@gmail.com> wrote:

>On 2/11/2017 5:32 PM, eric.jacobsen@ieee.org wrote:
>> On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote:
>>
>>> On 2/11/2017 12:24 AM, Steve Pope wrote:
>>>> rickman <gnuarm@gmail.com> wrote:
>>>>
>>>>> This whole conversation started in another group regarding the use of
>>>>> the trig method vs. CORDIC. It was claimed that CORDIC gave *exact*
>>>>> results. I recalled from work I had done a few years ago that I could
>>>>> get accuracy as good as I wish by adjusting the depth of the LUT and
>>>>> using a multiplier. What I should do is explore the CORDIC algorithm
>>>>> and find out just how "exact" the calculation is.
>>>>
>>>> There usually isn't much point to using CORDIC in DSP. It does have
>>>> application in math libraries. If a CORDIC routine is using
>>>> double double types internally, then it can probably give an
>>>> exact result for the double case. But I'll leave such determinations
>>>> for the unfortunate designers who actually have to worry about such
>>>> things.
>>>>
>>>> I've been in DSP for decades and have never once used CORDIC in
>>>> one of my own designs.
>>>
>>> I don't know what type of designs you do, but CORDIC was very popular
>>> for RADAR work among others back in the day when FPGAs didn't have
>>> multipliers. I believe Ray Andraka was a big user of them. It was
>>> often touted that CORDIC didn't require a multiplier, although the
>>> circuitry for CORDIC is of the same level of complexity as a multiplier.
>>> I didn't figure this out until I tried to learn about the CORDIC once.
>>> By that time multipliers were not uncommon in FPGAs, so I didn't
>>> pursue it further.
>>
>> Yes, in the early-to-mid 90s putting demodulators in FPGAs with no
>> multipliers required the use of tricks like CORDIC rotators for
>> carrier mixing, DDS, etc. You could also use LUTs, but memory on FPGAs
>> (or anywhere) in those days was also a very precious resource. We did
>> a LOT of complexity tradeoff studies because we'd always try to place
>> an FPGA just big enough to hold the designs, and if you could step
>> down an FPGA size it usually meant saving a bunch of money on every
>> unit. When the FPGA vendors started populating very usable
>> multipliers on the die the CORDIC fell out of favor pretty quickly.
>>
>> I never saw a reason to look at CORDICs after that except for a few
>> very nichey applications that needed FPGAs that didn't have
>> multipliers for various reasons. It's still a good thing to have in
>> the bag of tricks, though, for when it is useful.
>
>What about the claim that the CORDIC algorithm produces an "exact"
>result? That was defined as "Exact would mean all the bits are correct,
>though typically an error of 1 ULP (unit in the last place) is ok." I'm
>not certain the trig with table lookup approach can get to no more than
>±1. I think the combination of errors might result in ±2. But then all
>the work I've done so far was looking at keeping the hardware reduced as
>much as possible, so no extra bits are calculated and rounded off.
I don't recall accuracy being an issue with CORDIC to within the
precision that we used, anyway. It's certainly not better than using
multipliers or LUTs if you pay attention to what you're doing, IMHO.
On 2/12/2017 1:26 AM, eric.jacobsen@ieee.org wrote:
> On Sat, 11 Feb 2017 18:01:04 -0500, rickman <gnuarm@gmail.com> wrote:
>
>> What about the claim that the CORDIC algorithm produces an "exact"
>> result? That was defined as "Exact would mean all the bits are correct,
>> though typically an error of 1 ULP (unit in the last place) is ok."
>
> I don't recall accuracy being an issue with CORDIC to within the
> precision that we used, anyway. It's certainly not better than using
> multipliers or LUTs if you pay attention to what you're doing, IMHO.
No, the sine and cosine have no error. It is the angle that has the
error. In other words, the sine and cosine are exact, just for the
wrong angle.

The angle is provided, and the algorithm adds and subtracts angles from
a lookup table to reduce the angle to zero while rotating the vector
(sine and cosine) to match. The angles are chosen to result in simple
binary arithmetic (adds and subtracts) in rotating the vector. The
angles are not perfectly represented, so every iteration involves a
potential error in the angle of up to ±0.5 lsb. This is in addition to
the final error in resolution of up to ±1 lsb. So for a 32 bit
calculation this can lead to some 3 lsbs of error on average, and a
larger maximum. Maybe this is high, as the actual errors in the LUT
values will not all be 0.5. But the point is the result is *NOT* exact.

Right sine, wrong angle.

--
Rick C
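Since the iteration itself never gets spelled out in the thread, here is a
minimal sketch in C of a rotation-mode CORDIC producing sine and cosine.
The 16 iterations, 14 fractional bits, fixed-point scaling and the 0.7 rad
test angle are illustrative assumptions, not anything a poster here
actually built; the point is the atan(2^-i) angle table whose rounded
entries cause the "right sine, wrong angle" effect described above
(compile with -lm):

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define ITERS 16              /* number of micro-rotations               */
#define FRAC  14              /* fixed-point fractional bits             */
#define ONE   (1 << FRAC)     /* fixed-point representation of 1.0       */

int main(void)
{
    /* Angle LUT: atan(2^-i), rounded to the fixed-point grid.  Each
       entry carries up to 0.5 lsb of rounding error, which is the
       angle error being discussed.                                      */
    int32_t atan_lut[ITERS];
    for (int i = 0; i < ITERS; i++)
        atan_lut[i] = (int32_t)lround(atan(pow(2.0, -i)) * ONE);

    /* CORDIC gain: the shift-and-add rotations stretch the vector by a
       known constant (about 1.6468), so start with x = 1/gain instead
       of multiplying at the end.                                        */
    double gain = 1.0;
    for (int i = 0; i < ITERS; i++)
        gain *= sqrt(1.0 + pow(2.0, -2.0 * i));

    double angle = 0.7;                        /* radians, < pi/2        */
    int32_t x = (int32_t)lround(ONE / gain);   /* converges to cos       */
    int32_t y = 0;                             /* converges to sin       */
    int32_t z = (int32_t)lround(angle * ONE);  /* residual angle         */

    for (int i = 0; i < ITERS; i++) {
        int32_t xs = x >> i;                   /* shifts replace the     */
        int32_t ys = y >> i;                   /* multiplies             */
        if (z >= 0) { x -= ys; y += xs; z -= atan_lut[i]; }
        else        { x += ys; y -= xs; z += atan_lut[i]; }
    }

    printf("CORDIC: sin = %9.6f  cos = %9.6f\n",
           y / (double)ONE, x / (double)ONE);
    printf("libm  : sin = %9.6f  cos = %9.6f\n", sin(angle), cos(angle));
    return 0;
}

Widening the angle accumulator z relative to the output width (guard
bits) is the usual way to push the accumulated angle error below the
output lsb, which is essentially the trade-off discussed in the next
posts.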
On Sun, 12 Feb 2017 01:37:52 -0500, rickman wrote:

> No, the sine and cosine have no error. It is the angle that has the
> error. In other words, the sine and cosine are exact, just for the
> wrong angle.
>
> The angles are not perfectly represented, so every iteration involves
> a potential error in the angle of up to ±0.5 lsb. This is in addition
> to the final error in resolution of up to ±1 lsb. So for a 32 bit
> calculation this can lead to some 3 lsbs of error on average, and a
> larger maximum. But the point is the result is *NOT* exact.
>
> Right sine, wrong angle.
Which means that the sine and cosine are still wrong.

Presumably you could extend the accuracy arbitrarily, by making the angle
computation wider -- but that runs you right into your original complaint
about the algorithm not really being much of a savings over
multiplication.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
On 2/12/2017 12:55 PM, Tim Wescott wrote:
>> Right sine, wrong angle.
>
> Which means that the sine and cosine are still wrong.
>
> Presumably you could extend the accuracy arbitrarily, by making the
> angle computation wider -- but that runs you right into your original
> complaint about the algorithm not really being much of a savings over
> multiplication.
No savings at all. It is like three multiplications.

As to the accuracy, it is right in the same camp as calculating sin(a+b)
by evaluating sin(a) + cos(a)*b (the small-angle approximation, where b
is the residual angle). The only trade-off is that CORDIC is done with a
very small lookup table while the above trig equation uses a lookup
table for sin(a). At 32 bits the lookup table becomes rather large for
some FPGA sizes.

But I still don't know why anyone needs 32 bit sines. Real world
measurements are typically limited to around 24 bits.

--
Rick C
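For comparison, here is a minimal double-precision sketch of that
trig-identity method: a coarse sine/cosine LUT addressed by the upper
phase bits, plus one multiply to fold in the residual angle b. The
256-entry table, the [0, 1) phase convention and the use of doubles are
illustrative assumptions for showing the idea, not a description of
rickman's fixed-point hardware:

#include <stdio.h>
#include <math.h>

#define LUT_BITS 8
#define LUT_SIZE (1 << LUT_BITS)

static const double PI = 3.14159265358979323846;
static double sin_lut[LUT_SIZE];
static double cos_lut[LUT_SIZE];

/* phase is a fraction of a full circle in [0, 1) */
static double dds_sin(double phase)
{
    double scaled = phase * LUT_SIZE;
    int    idx    = (int)scaled;                    /* coarse angle a     */
    double frac   = scaled - idx;                   /* leftover phase     */
    double b      = frac * (2.0 * PI / LUT_SIZE);   /* residual angle b   */
    return sin_lut[idx] + b * cos_lut[idx];         /* sin(a) + b*cos(a)  */
}

int main(void)
{
    for (int i = 0; i < LUT_SIZE; i++) {
        double a = 2.0 * PI * i / LUT_SIZE;
        sin_lut[i] = sin(a);
        cos_lut[i] = cos(a);
    }

    /* Sweep the phase and record the worst deviation from libm's sin(). */
    double worst = 0.0;
    for (int i = 0; i < 100000; i++) {
        double phase = i / 100000.0;
        double err = fabs(dds_sin(phase) - sin(2.0 * PI * phase));
        if (err > worst) worst = err;
    }
    printf("worst-case error with a %d-entry sine/cosine LUT: %g\n",
           LUT_SIZE, worst);
    return 0;
}

Because the first-order error grows as the square of the residual angle,
each extra LUT address bit buys roughly two more bits of accuracy, which
is the "accuracy as good as I wish by adjusting the depth of the LUT"
knob mentioned at the top of the thread.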
Okay, I can't resist.

Who is this DD SLUT you are talking about?  Doesn't "DD" already tell you
the size?

Snicker.

Ced
On 2/12/2017 7:54 PM, Cedron wrote:
> Okay, I can't resist.
>
> Who is this DD SLUT you are talking about?  Doesn't "DD" already tell
> you the size?
>
> Snicker.
DD?

--
Rick C
On 2/12/2017 7:54 PM, Cedron wrote:

> Doesn't "DD" already tell you the size?
Nope, only the cup size. There is still the numerical size.

This is a trick question, right?

S.