DSPRelated.com
Forums

IQmath library for conversion of floating to fixed

Started by Sunanda February 14, 2012
Hello, 
   I have a 10-page floating point C code that has to be converted to fixed
point. I wanted to know whether using shortcuts, like the IQmath library, is
out of the question.
Has anyone used this library for fixed point conversion?
The processor is the TMS320C64x+ fixed point processor.
(I am a newbie in this field; this is the first time I am doing a floating
to fixed point conversion. This is part of our undergraduate project.)


On 14.2.12 4:25, Sunanda wrote:
> Hello,
> I have a 10 page floating point C code that has to be converted to fixed
> point. I wanted to know whether using shortcuts, like IQmath libraries is
> out of question..
> Has anyone used this library for fixed point conversion?
> the processor is TMS32064x+ fixed point processor.
> (I am a newbie in this field,this is the first time i am doing a floating
> to fixed point conversion. This is a part of our undergraduate project)
If this is the question of a navigation solution, the responses stay the
same: first you need to understand the math and the dynamics of the
calculation before even trying to port the code to fixed point.

--
Tauno Voipio
On Tue, 14 Feb 2012 08:25:17 -0600
"Sunanda" <sunanda6691@n_o_s_p_a_m.gmail.com> wrote:

> Hello,
> I have a 10 page floating point C code that has to be converted to fixed
> point. I wanted to know whether using shortcuts, like IQmath libraries is
> out of question..
> Has anyone used this library for fixed point conversion?
> the processor is TMS32064x+ fixed point processor.
> (I am a newbie in this field,this is the first time i am doing a floating
> to fixed point conversion. This is a part of our undergraduate project)
TI also makes floating point processors (65xx series if I recall, but
others here would know better). What's your reason for trying to make this
code fixed point, in light of the fact that a serious control systems
expert is telling you that he wouldn't?

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
On Tue, 14 Feb 2012 08:49:35 -0800, Rob Gaddi wrote:

> On Tue, 14 Feb 2012 08:25:17 -0600
> "Sunanda" <sunanda6691@n_o_s_p_a_m.gmail.com> wrote:
>
>> Hello,
>> I have a 10 page floating point C code that has to be converted to
>> fixed point. I wanted to know whether using shortcuts, like IQmath
>> libraries is out of question..
>> Has anyone used this library for fixed point conversion? the processor
>> is TMS32064x+ fixed point processor. (I am a newbie in this field,this
>> is the first time i am doing a floating to fixed point conversion. This
>> is a part of our undergraduate project)
>
> TI also makes floating point processors (65xx series if I recall, but
> others here would know better). What's your reason for trying to make
> this code fixed point, in light of the fact that a serious control
> systems expert is telling you that he wouldn't?
Well, I didn't exactly say that:

"Now, if you had a well-defined nav solution algorithm in hand, and you
wanted to convert it to fixed point -- that might be doable."

I probably should have said "well-defined and simple enough", but maybe
not.

In fact, I think it'd be oodles of fun to do fixed-point nav solutions. I
could probably make a career out of it, because in order to make one that
fit I'd have to start by hacking out swaths of functionality, which would
mean that I could always do valid work, and there would always be a
never-ending upward path to traverse as time goes by.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
On Tue, 14 Feb 2012 08:25:17 -0600, Sunanda wrote:

> Hello,
> I have a 10 page floating point C code that has to be converted to
> fixed point. I wanted to know whether using shortcuts, like IQmath
> libraries is out of question..
> Has anyone used this library for fixed point conversion? the processor
> is TMS32064x+ fixed point processor. (I am a newbie in this field,this
> is the first time i am doing a floating to fixed point conversion. This
> is a part of our undergraduate project)
The real difficulty in converting to fixed point math is not in the actual
mechanics of doing the number-crunching. That part certainly isn't
trivial, and the libraries are probably going to be a big help if you
haven't done much mathematical programming in assembly language.

The real difficulty in converting to fixed point is that for every data
path in the entire algorithm, you need to calculate the maximum value that
will ever be attained and the minimum resolution, then you need to scale
every variable that you use from whatever "meaningful" value it had to
something that fits into the available fixed-point type while neither
overflowing nor injecting too much quantization error.

If this is for your nav solution, and if you're implementing a Kalman
filter, then you'll find that you'll have an extra-special hard time of
it, because while it is within the realm of possibility to verify that
the fractional precision of all your numbers fits into double-precision
floating point, knowing the ranges that all your numbers may take on
beforehand can be pretty challenging, indeed.

Having said all that -- yes, IQmath may help. I have used TI processors
and did not use the IQmath stuff, because I had already developed some
nifty block floating point* matrix math algorithms for the ADSP-21xx
parts that ported over fairly well and worked with the larger algorithms
that I was porting.

I would also like to mention at this juncture that unless you make heavy
use of vector dot products and other math that takes advantage of the MAC
instruction, you may find that using a DSP chip is no better than finding
a nice fast general-purpose part and using that.

* Note that _block_ floating point is _way_ different in detail from
regular ol' floating point.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
Tim Wescott <tim@seemywebsite.com> wrote:

(snip)
> The real difficulty in converting to fixed point is that for every data
> path in the entire algorithm, you need to calculate the maximum value
> that will ever be attained and the minimum resolution, then you need to
> scale every variable that you use from whatever "meaningful" value it had
> to something that fits into the available fixed-point type while neither
> overflowing nor injecting too much quantization error.
Presumably easier if done while the algorithm is being implemented, instead of going back through a floating-point one done by someone else.
> If this is for your nav solution, and if you're implementing a Kalman
> filter, then you'll find that you'll have an extra-special hard time of
> it because while it is within the realm of possibility to verify that the
> fractional precision of all your numbers fits into double-precision
> floating point, knowing the ranges that all your numbers may take on
> before hand can be pretty challenging, indeed.
Not knowing at all the details of the algorithm, if you used 128 bit
fixed point with 64 bits after the binary point, would that be enough?

Presumably there are some processors where 128 bit fixed point would be
easier and faster than 64 bit floating point.

-- glen
>Tim Wescott <tim@seemywebsite.com> wrote:
>
>(snip)
>> The real difficulty in converting to fixed point is that for every data
>> path in the entire algorithm, you need to calculate the maximum value
>> that will ever be attained and the minimum resolution, then you need to
>> scale every variable that you use from whatever "meaningful" value it had
>> to something that fits into the available fixed-point type while neither
>> overflowing nor injecting too much quantization error.
>
>Presumably easier if done while the algorithm is being implemented,
>instead of going back through a floating-point one done by
>someone else.
>
>> If this is for your nav solution, and if you're implementing a Kalman
>> filter, then you'll find that you'll have an extra-special hard time of
>> it because while it is within the realm of possibility to verify that the
>> fractional precision of all your numbers fits into double-precision
>> floating point, knowing the ranges that all your numbers may take on
>> before hand can be pretty challenging, indeed.
>
>Not knowing at all the details of the algorithm, if you used 128 bit
>fixed point with 64 bits after the binary point, would that be enough?
>
>Presumably there are some processors where 128 bit fixed point
>would be easier and faster than 64 bit floating point.
>
>-- glen
The algorithm is such that it definitely requires 128-bit fixed point.

But the processor we're working on is 32-bit. Will it be possible to
perform 128-bit fixed point operations on it?

- Divya
On Sun, 26 Feb 2012 10:14:28 -0600, divya_divee wrote:

>>Tim Wescott <tim@seemywebsite.com> wrote:
>>
>>(snip)
>>> The real difficulty in converting to fixed point is that for every
>>> data path in the entire algorithm, you need to calculate the maximum
>>> value that will ever be attained and the minimum resolution, then you
>>> need to scale every variable that you use from whatever "meaningful"
>>> value it had to something that fits into the available fixed-point
>>> type while neither overflowing nor injecting too much quantization
>>> error.
>>
>>Presumably easier if done while the algorithm is being implemented,
>>instead of going back through a floating-point one done by someone else.
>>
>>> If this is for your nav solution, and if you're implementing a Kalman
>>> filter, then you'll find that you'll have an extra-special hard time
>>> of it because while it is within the realm of possibility to verify
>>> that the fractional precision of all your numbers fits into
>>> double-precision floating point, knowing the ranges that all your
>>> numbers may take on before hand can be pretty challenging, indeed.
>>
>>Not knowing at all the details of the algorithm, if you used 128 bit
>>fixed point with 64 bits after the binary point, would that be enough?
>>
>>Presumably there are some processors where 128 bit fixed point would be
>>easier and faster than 64 bit floating point.
>>
>>-- glen
>
> The algorithm is such that it definately requires 128 fixed point.
>
> But the processor we re working on is 32 bit. Will it be possible to
> perform 128 bit fixed point operations on it?
Yes, but it's almost one of those "if you have to ask you can't figure it
out" kinds of things.

Individual add instructions will give you a carry-out bit, and allow you
to use the carry-out bit to add with (if you look in the instruction set
you should see an add and an add-with-carry). So the process to add two
128-bit numbers is to add the least significant words without carry, then
add the next two more significant words with carry, then the next two
with carry, then the two most significant words with carry, saving your
intermediate results each time.

You can do something similar with multiplication (think of doing
multiplication long hand, and treat each 32-bit word as a "digit"). And
if the MAC instruction has enough features, you can do a 128-bit MAC on
two vectors with 16 passes of a 32-bit MAC -- with a lot of bookkeeping.

Do you need 128 bits in both the coefficients and the data, or just one?
And what the heck are you doing that requires 128 bits at all? That seems
extreme, and makes me wonder if perhaps a little bit of algorithm
redesign will save you from a whole lot of extra arithmetic in the
processor.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
On 02/26/2012 06:18 PM, Tim Wescott wrote:
> Individual add instructions will give you a carry-out bit, and allow you
> to use the carry-out bit to add with (if you look in the instruction set
> you should see an add and an add-with-carry). So the process to add two
> 128-bit numbers is to add the lowest significant words without carry,
> then add the next two more significant words with carry, then the next
> two with carry, then the two most significant words with carry, saving
> your intermediate results each time.
Just to make sure the OP does not try to hunt down an instruction that
does not exist: the C64x+ DSP does not have a carry flag; it uses
conditional execution instead.

Other than that your post is spot on, especially the remarks on whether
128-bit fixed point is needed at all.

Overall I'd also like to add that the C64x+ DSP is a questionable choice
for such algorithms. The DSP is fast for tight number-crunching loops
only.
>On 02/26/2012 06:18 PM, Tim Wescott wrote:
>> Individual add instructions will give you a carry-out bit, and allow you
>> to use the carry-out bit to add with (if you look in the instruction set
>> you should see an add and an add-with-carry). So the process to add two
>> 128-bit numbers is to add the lowest significant words without carry,
>> then add the next two more significant words with carry, then the next
>> two with carry, then the two most significant words with carry, saving
>> your intermediate results each time.
>
>Just to make sure the OP does not try to hunt down a instruction that
>not exist: The C64x+ DSP does not has a carry flag, it uses conditional
>execution instead.
>
>Other than that your post is spot on, especially the remarks why 128 bit
>fixed-points are needed or not.
>
>Overall I'd also like to add that the C64x+ DSP is a questionable choice
>for such algorithms. The DSP is fast for tight number crunching loops
>only.
The reason behind choosing the C64x+ is that we would like to run the
Kalman filter navigation algorithm on the DSP core of the BeagleBoard-xM,
which contains a C64x+. In the algorithm, I am getting values like
4.7*10^-10, which have to be processed further. If I have to represent
these in fixed point, I will have to use something of the order of Q50
format, which I doubt the processor will be able to handle. Can you
suggest a proper optimization method out of the ones you mentioned, so
that I will be able to run the algorithm on the DSP core of the
BeagleBoard?