Reply by Steve Pope February 13, 2017
On 2/12/2017 7:54 PM, Cedron wrote:

> Doesn't "DD" already tell you the size?
Nope, only the cup size. There is still the numerical size.

This is a trick question, right?

S.
Reply by rickman February 12, 2017
On 2/12/2017 7:54 PM, Cedron wrote:
> Okay, I can't resist.
>
> Who is this DD SLUT you are talking about?  Doesn't "DD" already tell you
> the size?
>
> Snicker.
DD?

--
Rick C
Reply by Cedron February 12, 2017
Okay, I can't resist.

Who is this DD SLUT you are talking about?  Doesn't "DD" already tell you
the size?

Snicker.

Ced
---------------------------------------
Posted through http://www.DSPRelated.com
Reply by rickman February 12, 2017
On 2/12/2017 12:55 PM, Tim Wescott wrote:
> On Sun, 12 Feb 2017 01:37:52 -0500, rickman wrote: > >> On 2/12/2017 1:26 AM, eric.jacobsen@ieee.org wrote: >>> On Sat, 11 Feb 2017 18:01:04 -0500, rickman <gnuarm@gmail.com> wrote: >>> >>>> On 2/11/2017 5:32 PM, eric.jacobsen@ieee.org wrote: >>>>> On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote: >>>>> >>>>>> On 2/11/2017 12:24 AM, Steve Pope wrote: >>>>>>> rickman <gnuarm@gmail.com> wrote: >>>>>>> >>>>>>>> This whole conversation started in another group regarding the use >>>>>>>> of the trig method vs. CORDIC. It was claimed that CORDIC gave >>>>>>>> *exact* results. I recalled from work I had done a few years ago >>>>>>>> that I could get accuracy as good as I wish by adjusting the depth >>>>>>>> of the LUT and using a multiplier. What I should do is explore >>>>>>>> the CORDIC algorithm and find out just how "exact" the calculation >>>>>>>> is. >>>>>>> >>>>>>> There usually isn't much point to using CORDIC in DSP. It does >>>>>>> have application in math libraries. If a CORDIC routine is using >>>>>>> double double types internally, then it can probably give an exact >>>>>>> result for the double case. But I'll leave such determinations for >>>>>>> the unfortunate designers who actually have to worry about such >>>>>>> things. >>>>>>> >>>>>>> I've been in DSP for decades and have never once used CORDIC in one >>>>>>> of my own designs. >>>>>> >>>>>> I don't know what type of designs you do, but CORDIC was very >>>>>> popular for RADAR work among others back in the day when FPGAs >>>>>> didn't have multipliers. I believe Ray Andraka was a big user of >>>>>> them. It was often touted that CORDIC didn't require a multiplier, >>>>>> although the circuitry for CORDIC is of the same level of complexity >>>>>> as a multiplier. >>>>>> I didn't figure this out until I tried to learn about the CORDIC >>>>>> once. >>>>>> By that time multipliers were not uncommon in FPGAs, so I didn't >>>>>> pursue it further. >>>>> >>>>> Yes, in the early-to-mid 90s putting demodulators in FPGAs with no >>>>> multipliers required the use of tricks like CORDIC rotators for >>>>> carrier mixing, DDS, etc. You could also use LUTs, but memory on >>>>> FPGAs (or anywhere) in those days was also a very precious resource. >>>>> We did a LOT of complexity tradeoff studies because we'd always try >>>>> to place an FPGA just big enough to hold the designs, and if you >>>>> could step down an FPGA size it usually meant saving a bunch of money >>>>> on every unit. When the FPGA vendors started populating very usable >>>>> multipliers on the die the CORDIC fell out of favor pretty quickly. >>>>> >>>>> I never saw a reason to look at CORDICs after that except for a few >>>>> very nichey applications that needed FPGAs that didn't have >>>>> multipliers for various reasons. It's still a good thing to have in >>>>> the bag of tricks, though, for when it is useful. >>>> >>>> What about the claim that the CORDIC algorithm produces an "exact" >>>> result? That was defined as "Exact would mean all the bits are >>>> correct, though typically an error of 1 ULP (unit in the last place) >>>> is ok." I'm not certain the trig with table lookup approach can get >>>> to no more than &plusmn;1. I think the combination of errors might result in >>>> &plusmn;2. Bu then all the work I've done so far was looking at keeping the >>>> hardware reduced as much as possible so no extra bits calculated and >>>> rounded off. 
>>> >>> I don't recall accuracy being an issue with CORDIC to within the >>> precision that we used, anyway. It's certainly not better than using >>> multipliers or LUTs if you pay attention to what you're doing, IMHO, >>> anyway. >> >> No, the sine and cosine have no error. It is the angle that has the >> error. In other words, the sine and cosine are exact, just for the >> wrong angle. >> >> The angle is provided, the algorithm adds and subtracts angles from a >> lookup table to reduce the angle to zero while rotating the vector (sine >> and cosine) to match. The angles are chosen to result in simple binary >> arithmetic (adds and subtracts) in rotating the vector. The angles are >> not perfectly represented. So every iteration involves a potential >> error in the angle up to &plusmn;0.5 lsb. This is in addition to the final >> error in resolution of up to &plusmn;1 lsb. So for a 32 bit calculation this >> can lead to some 3 lsbs of error on the average and a larger maximum. >> Maybe this is high as the actual errors in the LUT values will not all >> be 0.5. But the point is the result is *NOT* exact. >> >> Right sine, wrong angle. > > Which means that the sine and cosine are still wrong. > > Presumably you could extend the accuracy arbitrarily, by making the angle > computation wider -- but that runs you right into your original complaint > about the algorithm not really being much of a savings over > multiplication.
No savings at all. It is roughly the cost of three multiplications. As to accuracy, it is in the same camp as calculating sin(a+b) by evaluating sin(a) + cos(a)*b. The only trade-off is that CORDIC uses a very small lookup table, while the trig identity above needs a lookup table for sin(a). At 32 bits that lookup table becomes rather large for some FPGA sizes. But I still don't know why anyone needs 32-bit sines. Real-world measurements are typically limited to around 24 bits.

--
Rick C
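A minimal C sketch of the table-plus-multiplier method described above (an illustration, not code from the thread): the phase addresses a coarse sine/cosine table at angle a, and the small residual b is folded in with the first-order identity sin(a+b) ~= sin(a) + cos(a)*b. The 1024-entry depth, the phase-in-turns convention, and the use of double are assumptions for illustration; a hardware version would use fixed-point words and round the single product.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define LUT_BITS 10                     /* 1024-entry table; depth sets the accuracy */
#define LUT_SIZE (1 << LUT_BITS)

static double sin_lut[LUT_SIZE];
static double cos_lut[LUT_SIZE];

static void init_tables(void)
{
    for (int i = 0; i < LUT_SIZE; i++) {
        double a = (2.0 * M_PI * i) / LUT_SIZE;   /* coarse angle for entry i */
        sin_lut[i] = sin(a);
        cos_lut[i] = cos(a);
    }
}

/* phase is in turns: 0.0 .. 1.0 maps to 0 .. 2*pi radians */
static double sine_lut_mult(double phase)
{
    double scaled = phase * LUT_SIZE;
    int    idx    = (int)scaled & (LUT_SIZE - 1);                    /* coarse angle a */
    double b      = (scaled - (int)scaled) * (2.0 * M_PI / LUT_SIZE);/* residual, radians */
    return sin_lut[idx] + cos_lut[idx] * b;       /* sin(a+b) ~= sin(a) + cos(a)*b */
}

int main(void)
{
    init_tables();
    double phase = 0.123;                         /* arbitrary test phase */
    printf("approx %.9f  exact %.9f\n",
           sine_lut_mult(phase), sin(2.0 * M_PI * phase));
    return 0;
}

Each extra address bit roughly quarters the worst-case error of the linear term, which is the knob referred to earlier as adjusting the depth of the LUT.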
Reply by Tim Wescott February 12, 2017
On Sun, 12 Feb 2017 01:37:52 -0500, rickman wrote:

> On 2/12/2017 1:26 AM, eric.jacobsen@ieee.org wrote: >> On Sat, 11 Feb 2017 18:01:04 -0500, rickman <gnuarm@gmail.com> wrote: >> >>> On 2/11/2017 5:32 PM, eric.jacobsen@ieee.org wrote: >>>> On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote: >>>> >>>>> On 2/11/2017 12:24 AM, Steve Pope wrote: >>>>>> rickman <gnuarm@gmail.com> wrote: >>>>>> >>>>>>> This whole conversation started in another group regarding the use >>>>>>> of the trig method vs. CORDIC. It was claimed that CORDIC gave >>>>>>> *exact* results. I recalled from work I had done a few years ago >>>>>>> that I could get accuracy as good as I wish by adjusting the depth >>>>>>> of the LUT and using a multiplier. What I should do is explore >>>>>>> the CORDIC algorithm and find out just how "exact" the calculation >>>>>>> is. >>>>>> >>>>>> There usually isn't much point to using CORDIC in DSP. It does >>>>>> have application in math libraries. If a CORDIC routine is using >>>>>> double double types internally, then it can probably give an exact >>>>>> result for the double case. But I'll leave such determinations for >>>>>> the unfortunate designers who actually have to worry about such >>>>>> things. >>>>>> >>>>>> I've been in DSP for decades and have never once used CORDIC in one >>>>>> of my own designs. >>>>> >>>>> I don't know what type of designs you do, but CORDIC was very >>>>> popular for RADAR work among others back in the day when FPGAs >>>>> didn't have multipliers. I believe Ray Andraka was a big user of >>>>> them. It was often touted that CORDIC didn't require a multiplier, >>>>> although the circuitry for CORDIC is of the same level of complexity >>>>> as a multiplier. >>>>> I didn't figure this out until I tried to learn about the CORDIC >>>>> once. >>>>> By that time multipliers were not uncommon in FPGAs, so I didn't >>>>> pursue it further. >>>> >>>> Yes, in the early-to-mid 90s putting demodulators in FPGAs with no >>>> multipliers required the use of tricks like CORDIC rotators for >>>> carrier mixing, DDS, etc. You could also use LUTs, but memory on >>>> FPGAs (or anywhere) in those days was also a very precious resource. >>>> We did a LOT of complexity tradeoff studies because we'd always try >>>> to place an FPGA just big enough to hold the designs, and if you >>>> could step down an FPGA size it usually meant saving a bunch of money >>>> on every unit. When the FPGA vendors started populating very usable >>>> multipliers on the die the CORDIC fell out of favor pretty quickly. >>>> >>>> I never saw a reason to look at CORDICs after that except for a few >>>> very nichey applications that needed FPGAs that didn't have >>>> multipliers for various reasons. It's still a good thing to have in >>>> the bag of tricks, though, for when it is useful. >>> >>> What about the claim that the CORDIC algorithm produces an "exact" >>> result? That was defined as "Exact would mean all the bits are >>> correct, though typically an error of 1 ULP (unit in the last place) >>> is ok." I'm not certain the trig with table lookup approach can get >>> to no more than &plusmn;1. I think the combination of errors might result in >>> &plusmn;2. Bu then all the work I've done so far was looking at keeping the >>> hardware reduced as much as possible so no extra bits calculated and >>> rounded off. >> >> I don't recall accuracy being an issue with CORDIC to within the >> precision that we used, anyway. 
It's certainly not better than using >> multipliers or LUTs if you pay attention to what you're doing, IMHO, >> anyway. > > No, the sine and cosine have no error. It is the angle that has the > error. In other words, the sine and cosine are exact, just for the > wrong angle. > > The angle is provided, the algorithm adds and subtracts angles from a > lookup table to reduce the angle to zero while rotating the vector (sine > and cosine) to match. The angles are chosen to result in simple binary > arithmetic (adds and subtracts) in rotating the vector. The angles are > not perfectly represented. So every iteration involves a potential > error in the angle up to &plusmn;0.5 lsb. This is in addition to the final > error in resolution of up to &plusmn;1 lsb. So for a 32 bit calculation this > can lead to some 3 lsbs of error on the average and a larger maximum. > Maybe this is high as the actual errors in the LUT values will not all > be 0.5. But the point is the result is *NOT* exact. > > Right sine, wrong angle.
Which means that the sine and cosine are still wrong.

Presumably you could extend the accuracy arbitrarily, by making the angle computation wider -- but that runs you right into your original complaint about the algorithm not really being much of a savings over multiplication.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Reply by rickman February 12, 2017
On 2/12/2017 1:26 AM, eric.jacobsen@ieee.org wrote:
> On Sat, 11 Feb 2017 18:01:04 -0500, rickman <gnuarm@gmail.com> wrote: > >> On 2/11/2017 5:32 PM, eric.jacobsen@ieee.org wrote: >>> On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote: >>> >>>> On 2/11/2017 12:24 AM, Steve Pope wrote: >>>>> rickman <gnuarm@gmail.com> wrote: >>>>> >>>>>> This whole conversation started in another group regarding the use of >>>>>> the trig method vs. CORDIC. It was claimed that CORDIC gave *exact* >>>>>> results. I recalled from work I had done a few years ago that I could >>>>>> get accuracy as good as I wish by adjusting the depth of the LUT and >>>>>> using a multiplier. What I should do is explore the CORDIC algorithm >>>>>> and find out just how "exact" the calculation is. >>>>> >>>>> There usually isn't much point to using CORDIC in DSP. It does have >>>>> application in math libraries. If a CORDIC routine is using >>>>> double double types internally, then it can probably give an >>>>> exact result for the double case. But I'll leave such determinations for >>>>> the unfortunate designers who actually have to worry about such things. >>>>> >>>>> I've been in DSP for decades and have never once used CORDIC in >>>>> one of my own designs. >>>> >>>> I don't know what type of designs you do, but CORDIC was very popular >>>> for RADAR work among others back in the day when FPGAs didn't have >>>> multipliers. I believe Ray Andraka was a big user of them. It was >>>> often touted that CORDIC didn't require a multiplier, although the >>>> circuitry for CORDIC is of the same level of complexity as a multiplier. >>>> I didn't figure this out until I tried to learn about the CORDIC once. >>>> By that time multipliers were not uncommon in FPGAs, so I didn't >>>> pursue it further. >>> >>> Yes, in the early-to-mid 90s putting demodulators in FPGAs with no >>> multipliers required the use of tricks like CORDIC rotators for >>> carrier mixing, DDS, etc. You could also use LUTs, but memory on FPGAs >>> (or anywhere) in those days was also a very precious resource. We did >>> a LOT of complexity tradeoff studies because we'd always try to place >>> an FPGA just big enough to hold the designs, and if you could step >>> down an FPGA size it usually meant saving a bunch of money on every >>> unit. When the FPGA vendors started populating very usable >>> multipliers on the die the CORDIC fell out of favor pretty quickly. >>> >>> I never saw a reason to look at CORDICs after that except for a few >>> very nichey applications that needed FPGAs that didn't have >>> multipliers for various reasons. It's still a good thing to have in >>> the bag of tricks, though, for when it is useful. >> >> What about the claim that the CORDIC algorithm produces an "exact" >> result? That was defined as "Exact would mean all the bits are correct, >> though typically an error of 1 ULP (unit in the last place) is ok." I'm >> not certain the trig with table lookup approach can get to no more than >> &#4294967295;1. I think the combination of errors might result in &#4294967295;2. Bu then all >> the work I've done so far was looking at keeping the hardware reduced as >> much as possible so no extra bits calculated and rounded off. > > I don't recall accuracy being an issue with CORDIC to within the > precision that we used, anyway. It's certainly not better than using > multipliers or LUTs if you pay attention to what you're doing, IMHO, > anyway.
No, the sine and cosine have no error. It is the angle that has the error. In other words, the sine and cosine are exact, just for the wrong angle.

The angle is provided, and the algorithm adds and subtracts angles from a lookup table to drive the residual angle to zero while rotating the vector (sine and cosine) to match. The angles are chosen so that rotating the vector requires only simple binary arithmetic (adds and subtracts). But those table angles are not perfectly represented, so every iteration introduces a potential angle error of up to ±0.5 lsb, on top of the final resolution error of up to ±1 lsb. For a 32-bit calculation this can lead to some 3 lsbs of error on average and a larger maximum. Maybe that estimate is high, since the actual errors in the LUT values will not all be 0.5 lsb. But the point is the result is *NOT* exact.

Right sine, wrong angle.

--
Rick C
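For reference, a minimal floating-point sketch of the rotation-mode CORDIC loop being described (an illustration, not the poster's code): a small table of atan(2^-i) values is added to or subtracted from the residual angle z while the (x, y) vector is rotated using only scaling by 2^-i and add/subtract. The 16-iteration count and the up-front gain compensation are assumptions; in a fixed-point implementation the ldexp() calls become arithmetic shifts, and the finite-width table entries are where the angle error discussed above enters.

#include <math.h>
#include <stdio.h>

#define ITERS 16                          /* illustrative; one iteration per output bit is typical */

int main(void)
{
    double atan_tab[ITERS];
    double gain = 1.0;
    for (int i = 0; i < ITERS; i++) {
        atan_tab[i] = atan(ldexp(1.0, -i));   /* the "LUT": atan(2^-i) */
        gain *= cos(atan_tab[i]);             /* total CORDIC gain correction */
    }

    double target = 0.6;                  /* requested angle, radians; |target| < ~1.74 */
    double x = gain, y = 0.0, z = target; /* pre-scale so the final vector has unit length */

    for (int i = 0; i < ITERS; i++) {
        double dx = ldexp(x, -i);         /* x * 2^-i: a right shift in fixed point */
        double dy = ldexp(y, -i);
        if (z >= 0.0) {                   /* rotate toward whatever angle remains */
            x -= dy;  y += dx;  z -= atan_tab[i];
        } else {
            x += dy;  y -= dx;  z += atan_tab[i];
        }
    }

    printf("cos: %.6f (ref %.6f)  sin: %.6f (ref %.6f)  leftover angle: %+.2e\n",
           x, cos(target), y, sin(target), z);
    return 0;
}

Each pass either adds or subtracts exactly one table entry, so once those entries are quantized to a finite word the leftover z, and hence the effective angle of the result, carries the accumulated rounding described above.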
Reply by eric.jacobsen@ieee.org February 12, 2017
On Sat, 11 Feb 2017 18:01:04 -0500, rickman <gnuarm@gmail.com> wrote:

>On 2/11/2017 5:32 PM, eric.jacobsen@ieee.org wrote: >> On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote: >> >>> On 2/11/2017 12:24 AM, Steve Pope wrote: >>>> rickman <gnuarm@gmail.com> wrote: >>>> >>>>> This whole conversation started in another group regarding the use of >>>>> the trig method vs. CORDIC. It was claimed that CORDIC gave *exact* >>>>> results. I recalled from work I had done a few years ago that I could >>>>> get accuracy as good as I wish by adjusting the depth of the LUT and >>>>> using a multiplier. What I should do is explore the CORDIC algorithm >>>>> and find out just how "exact" the calculation is. >>>> >>>> There usually isn't much point to using CORDIC in DSP. It does have >>>> application in math libraries. If a CORDIC routine is using >>>> double double types internally, then it can probably give an >>>> exact result for the double case. But I'll leave such determinations for >>>> the unfortunate designers who actually have to worry about such things. >>>> >>>> I've been in DSP for decades and have never once used CORDIC in >>>> one of my own designs. >>> >>> I don't know what type of designs you do, but CORDIC was very popular >>> for RADAR work among others back in the day when FPGAs didn't have >>> multipliers. I believe Ray Andraka was a big user of them. It was >>> often touted that CORDIC didn't require a multiplier, although the >>> circuitry for CORDIC is of the same level of complexity as a multiplier. >>> I didn't figure this out until I tried to learn about the CORDIC once. >>> By that time multipliers were not uncommon in FPGAs, so I didn't >>> pursue it further. >> >> Yes, in the early-to-mid 90s putting demodulators in FPGAs with no >> multipliers required the use of tricks like CORDIC rotators for >> carrier mixing, DDS, etc. You could also use LUTs, but memory on FPGAs >> (or anywhere) in those days was also a very precious resource. We did >> a LOT of complexity tradeoff studies because we'd always try to place >> an FPGA just big enough to hold the designs, and if you could step >> down an FPGA size it usually meant saving a bunch of money on every >> unit. When the FPGA vendors started populating very usable >> multipliers on the die the CORDIC fell out of favor pretty quickly. >> >> I never saw a reason to look at CORDICs after that except for a few >> very nichey applications that needed FPGAs that didn't have >> multipliers for various reasons. It's still a good thing to have in >> the bag of tricks, though, for when it is useful. > >What about the claim that the CORDIC algorithm produces an "exact" >result? That was defined as "Exact would mean all the bits are correct, >though typically an error of 1 ULP (unit in the last place) is ok." I'm >not certain the trig with table lookup approach can get to no more than >&#4294967295;1. I think the combination of errors might result in &#4294967295;2. Bu then all >the work I've done so far was looking at keeping the hardware reduced as >much as possible so no extra bits calculated and rounded off.
I don't recall accuracy being an issue with CORDIC, at least to within the precision that we used. It's certainly not better than using multipliers or LUTs if you pay attention to what you're doing, IMHO.
Reply by rickman February 11, 2017
On 2/11/2017 5:32 PM, eric.jacobsen@ieee.org wrote:
> On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote: > >> On 2/11/2017 12:24 AM, Steve Pope wrote: >>> rickman <gnuarm@gmail.com> wrote: >>> >>>> This whole conversation started in another group regarding the use of >>>> the trig method vs. CORDIC. It was claimed that CORDIC gave *exact* >>>> results. I recalled from work I had done a few years ago that I could >>>> get accuracy as good as I wish by adjusting the depth of the LUT and >>>> using a multiplier. What I should do is explore the CORDIC algorithm >>>> and find out just how "exact" the calculation is. >>> >>> There usually isn't much point to using CORDIC in DSP. It does have >>> application in math libraries. If a CORDIC routine is using >>> double double types internally, then it can probably give an >>> exact result for the double case. But I'll leave such determinations for >>> the unfortunate designers who actually have to worry about such things. >>> >>> I've been in DSP for decades and have never once used CORDIC in >>> one of my own designs. >> >> I don't know what type of designs you do, but CORDIC was very popular >> for RADAR work among others back in the day when FPGAs didn't have >> multipliers. I believe Ray Andraka was a big user of them. It was >> often touted that CORDIC didn't require a multiplier, although the >> circuitry for CORDIC is of the same level of complexity as a multiplier. >> I didn't figure this out until I tried to learn about the CORDIC once. >> By that time multipliers were not uncommon in FPGAs, so I didn't >> pursue it further. > > Yes, in the early-to-mid 90s putting demodulators in FPGAs with no > multipliers required the use of tricks like CORDIC rotators for > carrier mixing, DDS, etc. You could also use LUTs, but memory on FPGAs > (or anywhere) in those days was also a very precious resource. We did > a LOT of complexity tradeoff studies because we'd always try to place > an FPGA just big enough to hold the designs, and if you could step > down an FPGA size it usually meant saving a bunch of money on every > unit. When the FPGA vendors started populating very usable > multipliers on the die the CORDIC fell out of favor pretty quickly. > > I never saw a reason to look at CORDICs after that except for a few > very nichey applications that needed FPGAs that didn't have > multipliers for various reasons. It's still a good thing to have in > the bag of tricks, though, for when it is useful.
What about the claim that the CORDIC algorithm produces an "exact" result? That was defined as "Exact would mean all the bits are correct, though typically an error of 1 ULP (unit in the last place) is ok." I'm not certain the trig-with-table-lookup approach can get to no more than ±1; I think the combination of errors might result in ±2. But then all the work I've done so far was aimed at keeping the hardware as small as possible, so no extra bits were calculated and rounded off.

--
Rick C
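One way to settle the ±1 versus ±2 LSB question is to measure it directly. The harness below is a sketch of how such a check might look (an assumption, not something posted in the thread): it quantizes the output of a sine routine to a fixed number of bits and records the worst disagreement with the correctly rounded reference over a sweep of angles. sine_under_test() is a hypothetical placeholder where a CORDIC or LUT-plus-multiplier implementation would be dropped in.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define OUT_BITS 16                              /* assumed output word size */
#define SCALE ((double)((1 << (OUT_BITS - 1)) - 1))
#define STEPS 100000                             /* number of test angles */

/* hypothetical placeholder: swap in the implementation being evaluated */
static double sine_under_test(double angle)
{
    return sin(angle);
}

int main(void)
{
    long worst = 0;
    for (int i = 0; i < STEPS; i++) {
        double angle = (2.0 * M_PI * i) / STEPS;
        long got = lround(sine_under_test(angle) * SCALE);
        long ref = lround(sin(angle) * SCALE);   /* correctly rounded reference */
        long err = labs(got - ref);
        if (err > worst)
            worst = err;
    }
    printf("worst-case error over %d angles: %ld LSB at %d output bits\n",
           STEPS, worst, OUT_BITS);
    return 0;
}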
Reply by eric.jacobsen@ieee.org February 11, 2017
On Sat, 11 Feb 2017 03:45:17 -0500, rickman <gnuarm@gmail.com> wrote:

>On 2/11/2017 12:24 AM, Steve Pope wrote:
>> rickman <gnuarm@gmail.com> wrote:
>>
>>> This whole conversation started in another group regarding the use of
>>> the trig method vs. CORDIC. It was claimed that CORDIC gave *exact*
>>> results. I recalled from work I had done a few years ago that I could
>>> get accuracy as good as I wish by adjusting the depth of the LUT and
>>> using a multiplier. What I should do is explore the CORDIC algorithm
>>> and find out just how "exact" the calculation is.
>>
>> There usually isn't much point to using CORDIC in DSP. It does have
>> application in math libraries. If a CORDIC routine is using
>> double double types internally, then it can probably give an
>> exact result for the double case. But I'll leave such determinations for
>> the unfortunate designers who actually have to worry about such things.
>>
>> I've been in DSP for decades and have never once used CORDIC in
>> one of my own designs.
>
>I don't know what type of designs you do, but CORDIC was very popular
>for RADAR work among others back in the day when FPGAs didn't have
>multipliers. I believe Ray Andraka was a big user of them. It was
>often touted that CORDIC didn't require a multiplier, although the
>circuitry for CORDIC is of the same level of complexity as a multiplier.
> I didn't figure this out until I tried to learn about the CORDIC once.
> By that time multipliers were not uncommon in FPGAs, so I didn't
>pursue it further.
Yes, in the early-to-mid 90s putting demodulators in FPGAs with no multipliers required the use of tricks like CORDIC rotators for carrier mixing, DDS, etc. You could also use LUTs, but memory on FPGAs (or anywhere) in those days was also a very precious resource. We did a LOT of complexity tradeoff studies because we'd always try to place an FPGA just big enough to hold the designs, and if you could step down an FPGA size it usually meant saving a bunch of money on every unit. When the FPGA vendors started populating very usable multipliers on the die, the CORDIC fell out of favor pretty quickly.

I never saw a reason to look at CORDICs after that except for a few very niche applications that needed FPGAs that didn't have multipliers for various reasons. It's still a good thing to have in the bag of tricks, though, for when it is useful.
Reply by rickman February 11, 2017
On 2/11/2017 8:47 AM, Steve Pope wrote:
> In article <o7mirg$u3v$1@dont-email.me>, rickman <gnuarm@gmail.com> wrote:
>
>> On 2/11/2017 12:24 AM, Steve Pope wrote:
>
>>> There usually isn't much point to using CORDIC in DSP. It does have
>>> application in math libraries. If a CORDIC routine is using
>>> double double types internally, then it can probably give an
>>> exact result for the double case. But I'll leave such determinations for
>>> the unfortunate designers who actually have to worry about such things.
>
>>> I've been in DSP for decades and have never once used CORDIC in
>>> one of my own designs.
>
>> I don't know what type of designs you do, but CORDIC was very popular
>> for RADAR work among others back in the day when FPGAs didn't have
>> multipliers. I believe Ray Andraka was a big user of them. It was
>> often touted that CORDIC didn't require a multiplier, although the
>> circuitry for CORDIC is of the same level of complexity as a multiplier.
>> I didn't figure this out until I tried to learn about the CORDIC once.
>> By that time multipliers were not uncommon in FPGAs, so I didn't
>> pursue it further.
>
> Thanks for this insight. Yes, CORDIC has been popular and while
> I haven't used it in a design I have inherited designs that use it.
>
> It happened that the first job I had in DSP included
> a product where we could afford to include a TRW multiplier
> chip in the design. (This was around 1976-1979).
>
> After that I was at UC Berkeley, and we were doing VLSI designs
> so we could include multipliers as required (I usually used
> half-parallel multipliers, sometimes with Booth's recoding, and
> I recall some animated discussions with Rick Lyons on the topic
> of multiplier logic design.)
>
> I re-entered the industry in 1984 and multipliers continued to
> become more and more available. Although, it has been only
> in the last decade or so that you would count on just synthesizing
> the * operator in Verilog, as opposed to using something like
> Designware to instantiate a vendor multiplier from their library,
> or perhaps rolling your own.
>
>> BTW, what is a "double"? I've seen that used with integers as well as
>> floating point data types, so I'm not sure which you are referring to.
>
> double is 64 bits floating point, double double is 128 bits,
> in many (most?) flavors of C or C++.
I didn't assume you were talking about C. In an HDL, bit widths are explicitly defined rather than referred to by names for specific widths. A double in a sine calculation would already be far more precision than required by any app I can think of. In fact, the other person I was discussing this with was using a 32-bit CORDIC sine generator, which is far more resolution than I would ever have expected to need.

--
Rick C