
CORDIC

Started by rickman February 11, 2017
On Sun, 12 Feb 2017 15:58:02 -0500, rickman <gnuarm@gmail.com> wrote:

>On 2/12/2017 11:26 AM, Tim Wescott wrote:
>> On Sun, 12 Feb 2017 00:36:52 -0500, rickman wrote:
>>>
>>> If there is something CORDIC is better at, I'm not getting it. If it
>>> only used a single add per iteration or even two I would say yes, it has
>>> its uses. But with three adds per iteration it sounds more costly than
>>> having to do either a multiply or a divide.
>>
>> If you're thinking in terms of raw gates, or chip area, perhaps the
>> memory savings is worth the extra effort. Certainly in 1956 or whatever
>> it was the coolest of the cool -- but then, in 1956, memory was either a
>> few transistors per bit, or it was a little toroid woven into a mess of
>> wires by some soon-to-be nearsighted young lady.
>
>I'm not sure how they would have done this in 1956. Wasn't this for a
>B-58 navigation computer? What kind of digital electronics did they
>have then? Looking at a book on "Pulse and Digital Circuits" I see the
>only mention of transistors is in a special chapter at the end of the
>book. Wasn't the transistor brand new then, invented in '51? Even
>though they were available they were expensive and an entire FF would
>occupy a small PCB.
>
>In high school (circa '69) we had been given an old computer from the
>weather service. It had PCBs about 3-4 inches square which would hold a
>single FF, or a gate. Even an 8 bit CORDIC would be tough to implement
>in this technology unless the adds were bit serial. I guess that could
>work in a box a foot on each side. It can be hard to think in terms of
>bit serial. Everything has to be a shift register.
>
>Here is a reference by Volder about the invention of CORDIC with a
>picture of the navigation computer.
>
>http://late-dpedago.urv.cat/site_media/papers/fulltext_2.pdf
>
>It's a lot bigger than a foot cube and I think I see some tubes.
>
>As an aside, the B-58 was a supersonic bomber. I don't think we have
>any of those. The paper says it was scrapped in favor of ballistic
>missiles which makes perfect sense.
B-1B are supersonic bombers that are still in service. B-58s have been retired for more than fifty years.
On Sun, 12 Feb 2017 15:58:02 -0500, rickman wrote:

> On 2/12/2017 11:26 AM, Tim Wescott wrote:
>> On Sun, 12 Feb 2017 00:36:52 -0500, rickman wrote:
>>>
>>> If there is something CORDIC is better at, I'm not getting it. If it
>>> only used a single add per iteration or even two I would say yes, it
>>> has its uses. But with three adds per iteration it sounds more
>>> costly than having to do either a multiply or a divide.
>>
>> If you're thinking in terms of raw gates, or chip area, perhaps the
>> memory savings is worth the extra effort. Certainly in 1956 or
>> whatever it was the coolest of the cool -- but then, in 1956, memory
>> was either a few transistors per bit, or it was a little toroid woven
>> into a mess of wires by some soon-to-be nearsighted young lady.
>
> I'm not sure how they would have done this in 1956. Wasn't this for a
> B-58 navigation computer? What kind of digital electronics did they
> have then? Looking at a book on "Pulse and Digital Circuits" I see the
> only mention of transistors is in a special chapter at the end of the
> book. Wasn't the transistor brand new then, invented in '51? Even
> though they were available they were expensive and an entire FF would
> occupy a small PCB.
Point-contact transistors were working in the lab in December of 1947. Junction transistors came along in 1950. Your article below says the computer was implemented with DTL, so -- transistors.

I think the cylindrical things you see are probably relays, but it could have had toobs in there, too.

Given that the thing is _called_ the CORDIC, I think the algorithm is a central part of the computer.

<snip>
> Here is a reference by Volder about the invention of CORDIC with a
> picture of the navigation computer.
>
> http://late-dpedago.urv.cat/site_media/papers/fulltext_2.pdf
>
> It's a lot bigger than a foot cube and I think I see some tubes.
Thanks for digging that up!

<more snip>

--
Tim Wescott
Control systems, embedded software and circuit design

I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
On Mon, 13 Feb 2017 01:16:40 +0000, eric.jacobsen wrote:

> On Sun, 12 Feb 2017 15:58:02 -0500, rickman <gnuarm@gmail.com> wrote:
>
>>On 2/12/2017 11:26 AM, Tim Wescott wrote:
>>> On Sun, 12 Feb 2017 00:36:52 -0500, rickman wrote:
>>>>
>>>> If there is something CORDIC is better at, I'm not getting it. If it
>>>> only used a single add per iteration or even two I would say yes, it
>>>> has its uses. But with three adds per iteration it sounds more
>>>> costly than having to do either a multiply or a divide.
>>>
>>> If you're thinking in terms of raw gates, or chip area, perhaps the
>>> memory savings is worth the extra effort. Certainly in 1956 or
>>> whatever it was the coolest of the cool -- but then, in 1956, memory
>>> was either a few transistors per bit, or it was a little toroid woven
>>> into a mess of wires by some soon-to-be nearsighted young lady.
>>
>>I'm not sure how they would have done this in 1956. Wasn't this for a
>>B-58 navigation computer? What kind of digital electronics did they
>>have then? Looking at a book on "Pulse and Digital Circuits" I see the
>>only mention of transistors is in a special chapter at the end of the
>>book. Wasn't the transistor brand new then, invented in '51? Even
>>though they were available they were expensive and an entire FF would
>>occupy a small PCB.
>>
>>In high school (circa '69) we had been given an old computer from the
>>weather service. It had PCBs about 3-4 inches square which would hold a
>>single FF, or a gate. Even an 8 bit CORDIC would be tough to implement
>>in this technology unless the adds were bit serial. I guess that could
>>work in a box a foot on each side. It can be hard to think in terms of
>>bit serial. Everything has to be a shift register.
>>
>>Here is a reference by Volder about the invention of CORDIC with a
>>picture of the navigation computer.
>>
>>http://late-dpedago.urv.cat/site_media/papers/fulltext_2.pdf
>>
>>It's a lot bigger than a foot cube and I think I see some tubes.
>>
>>As an aside, the B-58 was a supersonic bomber. I don't think we have
>>any of those. The paper says it was scrapped in favor of ballistic
>>missiles which makes perfect sense.
>
> B-1B are supersonic bombers that are still in service. B-58s have been
> retired for more than fifty years.
I wonder if they're still used in supersonic, or if the wings just stay out while they fly high & drop smart bombs.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
On Saturday, February 11, 2017 at 10:19:29 PM UTC-5, Tim Wescott wrote:

>
> In a day when multiplies were much more expensive than additions, CORDIC
> made oodles of sense, because you were just shifting and adding. Now we
> have block RAM and DSP blocks with single-cycle multiplies in our FPGAs
> and single-cycle multiply instructions in our DSP chips, and CORDIC
> doesn't make so much sense.
>
> --
>
> Tim Wescott
> Wescott Design Services
> http://www.wescottdesign.com
>
> I'm looking for work -- see my website!
I used CORDIC 10 years ago in an FPGA for the Atan2 function, since a LUT for 16-bit X and 16-bit Y inputs would have required a huge (or yuge if you prefer) external memory. AFAIK, for Atan2, CORDIC is still the best way to go. Maybe a hybrid LUT with linear interpolation would yield good results too.
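For reference, below is a minimal C model of that vectoring-mode use of CORDIC (atan2). The Q13 angle format, the 16-iteration count, and the function name are assumptions made for this sketch, not details of the poster's FPGA design; a hardware version would typically unroll and pipeline the loop and keep the arctangent table in a small ROM.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define ANGLE_SCALE 8192            /* Q13: 1.0 radian == 8192 counts */

/* 16-iteration vectoring-mode CORDIC: returns atan2(y, x) in Q13 radians.
 * Assumes arithmetic >> on negative values, as on typical compilers.
 */
static int32_t cordic_atan2_q13(int32_t y, int32_t x)
{
    static const int32_t atan_tab[16] = {   /* round(atan(2^-i) * 8192) */
        6434, 3798, 2007, 1019, 511, 256, 128, 64,
        32, 16, 8, 4, 2, 1, 1, 0
    };
    int32_t angle = 0;

    /* Pre-rotate into the right half-plane so the iterations converge. */
    if (x < 0) {
        angle = (y >= 0) ?  (int32_t)(M_PI * ANGLE_SCALE)
                         : -(int32_t)(M_PI * ANGLE_SCALE);
        x = -x;
        y = -y;
    }

    for (int i = 0; i < 16; i++) {
        int32_t dx = x >> i, dy = y >> i;
        if (y > 0) {     /* rotate clockwise, driving y toward zero */
            x += dy;  y -= dx;  angle += atan_tab[i];
        } else {         /* rotate counter-clockwise */
            x -= dy;  y += dx;  angle -= atan_tab[i];
        }
    }
    return angle;        /* x now holds ~1.647 * sqrt(x0^2 + y0^2) */
}

int main(void)
{
    /* Quick sanity checks against the library atan2(). */
    printf("%d (expect ~%d)\n", (int)cordic_atan2_q13(100, 100),
           (int)(atan2(100.0, 100.0) * ANGLE_SCALE));
    printf("%d (expect ~%d)\n", (int)cordic_atan2_q13(-3000, 4000),
           (int)(atan2(-3000.0, 4000.0) * ANGLE_SCALE));
    return 0;
}

Each pass costs a shifted add for x, a shifted add for y, and a table add for the accumulated angle, which is the "three adds per iteration" mentioned earlier in the thread.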
On Saturday, February 11, 2017 at 4:44:28 PM UTC-8, rickman wrote:
> I don't think I've ever really dug into the CORDIC algorithm enough to
> appreciate the logic involved. I do recall coming to the realization
> that the shift and add/subtract algorithm is not overly different from a
> multiplication, so I had trouble understanding why it was touted as a
> way to avoid the use of a multiplier.
I first knew CORDIC from the implementations in scientific calculators. Calculators commonly do digit-serial BCD arithmetic, and I believe that the algorithm is a little different in that case, but not much.

If you want to do a 10 decimal digit, or maybe 36 bit binary calculation, multiplication is a lot of work. With 36 bit binary, you do shift and conditional add, 36 times. Each add is one loop through the serial adder, but you might do it four bits at a time. But in any case, with a serial adder (binary or BCD) addition is O(N), multiply is O(N**2).

Now, say you do atan() using a 10 term polynomial, and you now have about 10 multiplies. But the whole CORDIC to compute sine, cosine, tangent, or arctan takes only about as much as one multiply. Actually, I believe that there is an additional multiply in the last or first step of CORDIC, but it is still much faster than the polynomials used in software on most computer systems.

This leaves out the argument reduction needed before the polynomial, and I suspect before CORDIC. But given a bit or digit serial adder, it works very well.
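To illustrate the point about the one "extra" multiply: in rotation mode the scale correction 1/K ~= 0.6073 (K ~= 1.647 being the accumulated CORDIC gain) can be pre-applied to the starting x, after which every iteration really is just shifts, adds, and a table lookup. Here is a minimal fixed-point C sketch under those assumptions; the Q14 format, iteration count, and names are illustrative, not taken from any of the posts above.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define ONE_Q14 16384               /* Q14 fixed point: 1.0 == 16384 */

/* Rotation-mode CORDIC for sin/cos of an angle given in Q14 radians
 * (valid for |angle| up to about 1.74 rad; other quadrants need a
 * pre-rotation). Every iteration is shifts and adds only; the scale
 * correction 1/K ~= 0.607253 is folded into the starting x, so the
 * "extra multiply" costs nothing at run time. Assumes arithmetic >>
 * on negative values, as on typical compilers.
 */
static void cordic_sincos_q14(int32_t angle_q14, int32_t *s, int32_t *c)
{
    static const int32_t atan_tab[16] = {   /* round(atan(2^-i) * 2^14) */
        12868, 7596, 4014, 2037, 1023, 512, 256, 128,
        64, 32, 16, 8, 4, 2, 1, 1
    };
    int32_t x = 9949;               /* 0.607253 * 2^14, correction pre-applied */
    int32_t y = 0;
    int32_t z = angle_q14;          /* residual angle, driven toward zero */

    for (int i = 0; i < 16; i++) {
        int32_t dx = x >> i, dy = y >> i;
        if (z >= 0) {               /* rotate counter-clockwise */
            x -= dy;  y += dx;  z -= atan_tab[i];
        } else {                    /* rotate clockwise */
            x += dy;  y -= dx;  z += atan_tab[i];
        }
    }
    *c = x;                         /* cos(angle) in Q14 */
    *s = y;                         /* sin(angle) in Q14 */
}

int main(void)
{
    int32_t s, c;
    cordic_sincos_q14((int32_t)(0.5 * ONE_Q14), &s, &c);   /* 0.5 rad */
    printf("sin: %d (expect ~%d)  cos: %d (expect ~%d)\n",
           (int)s, (int)(sin(0.5) * ONE_Q14),
           (int)c, (int)(cos(0.5) * ONE_Q14));
    return 0;
}

A bit- or digit-serial machine would do the same x, y, z updates with serial adders and shift registers, one digit per pass, which is why the whole function costs only on the order of a single serial multiply.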