
Suggestions on book for "Fixed Point DSP with C/C++ "

Started by cogwsn 7 years ago · 9 replies · latest reply 7 years ago · 703 views

Hello, 

I am working with C and assembly DSP code for some real-time SDR applications and frequently get stuck on the basics. 

Can anyone suggest a reference book on this topic?

I found this book through searching, but I am not sure whether or not to order it.

http://www.morganclaypool.com/doi/abs/10.2200/S002...

Regards

Sumit 

Reply by napierm, July 22, 2017

Not sure what level you are at.  The basics of fixed-point math can be found online in white papers.

If you are doing SDR and communication design, then hands down the best nuts-and-bolts book I've seen is Michael Rice's textbook, "Digital Communications: A Discrete-Time Approach."

https://www.amazon.com/Digital-Communications-Disc...

The grey market paperback "international edition" is the same book but a lot cheaper:

https://www.abebooks.com/book-search/title/digital...


Reply by cogwsn, July 22, 2017

Thanks napierm for your reply :)

However, I was looking for suggestions on fixed-point DSP books :) 

I am used to doing floating-point DSP, so I often get stuck with fixed point. 

Reply by napierm, July 22, 2017

OK.  I haven't seen that book so I can't really comment on it.  If it is a cookbook with examples then it might be worthwhile.

As far as fixed point goes, you are processing a signal that has some maximum level; say the range is [-2,2).  That means it can be exactly -2 but only just less than +2.  The asymmetry comes from two's complement: the 2^n codes have to represent 0 as well, which leaves one extra value on the negative side.  The smallest value, or LSB, sets the noise floor.  So the number of bits and the signal level are a balancing act between dynamic range and the noise floor.  This is very similar to analog signal processing.

Radix.  The binary point is fixed.  So an arithmetic value A(1.14) is a sixteen-bit number with a range of [-2,2).  The bits are sx.xxxx_xxxx_xxxx_xx.  It is evaluated with normal 2's complement arithmetic.  To add another number to it, that number has to have the same radix, which may require shifting (multiplying or dividing by a power of 2) or sign extension.  Also, after an add or subtract the result may require limiting to stay within the original range.
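
As a minimal C sketch of the alignment and limiting described above (the function names and the choice of an A(2.13) second operand are mine, purely for illustration):

    #include <stdint.h>

    /* A(1.14): 16-bit signed value, 1 integer bit + 14 fractional bits,
     * range [-2.0, +2.0).  1.0 is represented as 1 << 14 = 16384.       */

    /* Saturating A(1.14) add: compute in a wider type, then limit the
     * result back into the 16-bit range.                                */
    static int16_t q1_14_add_sat(int16_t a, int16_t b)
    {
        int32_t s = (int32_t)a + (int32_t)b;
        if (s > INT16_MAX) s = INT16_MAX;   /* clip just below +2.0 */
        if (s < INT16_MIN) s = INT16_MIN;   /* clip at exactly -2.0 */
        return (int16_t)s;
    }

    /* Aligning radix points: an A(2.13) value has to be scaled by 2
     * (one bit to the left) before it shares the 2^-14 LSB weight of an
     * A(1.14) value.  The scaling is done in 32 bits so it cannot
     * overflow before the saturating add.                               */
    static int16_t q1_14_add_q2_13(int16_t a_1_14, int16_t b_2_13)
    {
        int32_t b_aligned = (int32_t)b_2_13 * 2;   /* now at 2^-14 weight */
        int32_t s = (int32_t)a_1_14 + b_aligned;
        if (s > INT16_MAX) s = INT16_MAX;
        if (s < INT16_MIN) s = INT16_MIN;
        return (int16_t)s;
    }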

Unsigned.  An unsigned value U(2.14) is a sixteen-bit number with a range of [0,4).  The bits are xx.xxxx_xxxx_xxxx_xx.  It is evaluated with normal unsigned arithmetic.

Noise.  So each additional bit can represent another 6 dB of dynamic range.  In a processing path, Fred Harris claims that the smallest data pipe sets the noise floor.  He says that statistically it works out to 5 dB per bit.  So if you have a [-1,1) range full-scale signal then you need a 16-bit data path for an 80 dB noise floor (A(0.15), or s.xxxx_xxxx_xxxx_xxx).  In my models I've found this to be true.
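
(Not from Mark's post, but for reference: the textbook figure behind the 6-dB-per-bit rule is the quantization SNR of an ideal N-bit quantizer driven by a full-scale sinusoid,

    SNR ≈ 6.02·N + 1.76 dB,

which gives roughly 98 dB for 16 bits in the ideal case.  Fred Harris's 5 dB per bit is the more conservative, statistical figure for a whole processing chain, which is where the 80 dB for a 16-bit path comes from.)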

Simulation.  I always need a working model to code from.  I use Simulink with a fixed-point library.  Every part of the data path and every coefficient has a set data width.  The advantage of a bit-accurate model is that the simulation and the implementation can be made to match end to end.  That way the model is accurate enough to use for validation.

Hope some of this helps.

Have fun,

Mark Napier


Reply by dudelsound, July 22, 2017

Hi

I find Mark's summary very useful - I've done quite a lot of work in fixed point and found that the biggest danger is getting confused by all the effects that can happen. I think that to get a good grip on the subject you need to do a lot of experimenting.

Start with the thought that the fixed point is not really there - you merely imagine it to be there. Your processor sees these values as integers and treats them as such (that's not exactly true - many processors can do fixed-point calculations without extra shifting operations, but it's a good way to build intuition). So a multiplication of two unsigned 16-bit numbers will give you a 32-bit result - where would you have to put the fixed point for the result to be correct? Try it!

A decimal example:

1000 * 200 = 200000

If somebody interpreted these numbers and said there was a virtual decimal point in there somewhere, like this:

1.000 * 0.200

You'd have to divide your result by 10^6 (shift it right by six decimal places) to get the true value. Why six? Why not three? Ponder these things and you will get a feel for it...
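
To make the binary version of that concrete in C (a minimal sketch using signed Q15 rather than the unsigned case above; the function name and the rounding choice are mine):

    #include <stdint.h>

    /* Signed Q15 (s.xxxxxxxxxxxxxxx): 15 fractional bits, range [-1, 1). */

    /* Multiply two Q15 numbers.  The raw 16x16 product is a 32-bit value
     * with 30 fractional bits (Q30) -- the fractional bits add, exactly
     * like the six decimal places in the 1.000 * 0.200 example.  Shifting
     * right by 15 (with rounding) brings the result back to Q15.
     * Assumes >> on negative values is an arithmetic shift, which DSP
     * toolchains provide.                                                */
    static int16_t q15_mul(int16_t a, int16_t b)
    {
        int32_t p = (int32_t)a * (int32_t)b;   /* Q15 * Q15 = Q30 */
        p += 1 << 14;                          /* round to nearest */
        p >>= 15;                              /* back to Q15      */
        if (p > INT16_MAX) p = INT16_MAX;      /* only -1.0 * -1.0 lands here */
        return (int16_t)p;
    }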

Reply by Tim Wescott, July 22, 2017

I don't know that it's a book-length subject.  I could see getting a medium-sized chapter out of it in a larger practical DSP book, but not an entire book.

If you have my control systems book, there's about half a chapter's worth in the chapter on practical applications, but I leave out block floating point, which can be hugely useful for some sorts of DSP applications.
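
For readers who haven't met the term, here is a rough C sketch of the block floating point idea (my own illustration with made-up names, not anything from Tim's book): an entire buffer shares a single exponent, chosen so that the largest sample nearly fills the word.

    #include <stddef.h>
    #include <stdint.h>

    /* Block floating point: every sample in the block is a 16-bit
     * mantissa and the whole block shares one exponent.  Normalizing
     * shifts all samples up until the largest one nearly fills the
     * word, which keeps the quantization noise floor as low as the
     * block's peak allows.                                            */
    typedef struct {
        int16_t *data;
        size_t   len;
        int      exp;    /* true value of data[i] is data[i] * 2^exp */
    } bfp_block;

    static void bfp_normalize(bfp_block *blk)
    {
        int32_t peak = 0;
        for (size_t i = 0; i < blk->len; i++) {
            int32_t m = blk->data[i];
            if (m < 0) m = -m;
            if (m > peak) peak = m;
        }
        if (peak == 0) return;                 /* all-zero block */

        int shift = 0;
        while ((peak << 1) <= INT16_MAX) {     /* headroom left: scale up */
            peak <<= 1;
            shift++;
        }
        for (size_t i = 0; i < blk->len; i++)
            blk->data[i] = (int16_t)(blk->data[i] * (1 << shift));
        blk->exp -= shift;                     /* overall value unchanged */
    }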

My introduction to fixed-point DSP came from the "ADSP-2100 Family Assembler Manual & Simulator Manual", and a lot of practical work with plain old processors in the days before floating point was a practical option in embedded work.


Reply by Y(J)S, July 22, 2017

Sumit, 

I've never seen the book you referenced, but you are quite correct that the subject deserves a full-sized book. Having worked for over 30 years on large commercial fixed-point real-time DSP systems, many of which involved tens of thousands of lines of parallel DSP assembler, I can attest that this is one of the most important yet least-covered issues in the trade.

Do not believe people who try to tell you that it is all a matter of picking a Q format (i.e., how many bits on each side of the "binary" point) or a "scaling factor". Even relying on the saturation arithmetic of most DSPs, this simplistic approach will get you into trouble.

I used to give new hires an initial exercise to understand the problem - to compute a large FFT in fixed point (say 16 bits) of a fixed-point input with minimal worst-case loss of SNR. At each stage of the FFT the butterfly outputs can require more bits than their inputs, so the simple idea of prescaling the input to avoid overflows will force you to lose almost all of the input information. Of course, there are much better solutions, but it gives you an idea of how (not) to handle the worst-case scenario. 
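
To give a flavor of why the per-stage growth matters, here is a hedged C sketch (my own illustration - not necessarily one of the better solutions Y(J)S has in mind) of a radix-2 butterfly that scales by 1/2 at each stage instead of prescaling the whole input by 1/N:

    #include <stdint.h>

    typedef struct { int16_t re, im; } cq15;   /* complex sample, Q15 */

    static int16_t sat16(int32_t x)
    {
        if (x > INT16_MAX) return INT16_MAX;
        if (x < INT16_MIN) return INT16_MIN;
        return (int16_t)x;
    }

    /* One radix-2 DIT butterfly with a scale-by-1/2 built in:
     *
     *   top    = (a + w*b) / 2
     *   bottom = (a - w*b) / 2
     *
     * w is assumed to be a unit-magnitude twiddle factor.  |a + w*b|
     * can be up to twice the input magnitude, i.e. the data can grow
     * by about one bit per FFT stage.  Halving at every stage keeps
     * the data in [-1, 1) and loses only a little SNR per stage,
     * instead of the log2(N) bits thrown away up front by prescaling
     * the input by 1/N.                                                */
    static void bfly_scaled(cq15 *a, cq15 *b, cq15 w)
    {
        /* complex twiddle product t = w * b, rounded back to Q15 */
        int32_t tre = ((int32_t)w.re * b->re - (int32_t)w.im * b->im + (1 << 14)) >> 15;
        int32_t tim = ((int32_t)w.re * b->im + (int32_t)w.im * b->re + (1 << 14)) >> 15;

        int32_t are = a->re, aim = a->im;

        /* sum and difference, halved with rounding and saturated */
        a->re = sat16((are + tre + 1) >> 1);
        a->im = sat16((aim + tim + 1) >> 1);
        b->re = sat16((are - tre + 1) >> 1);
        b->im = sat16((aim - tim + 1) >> 1);
    }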

The next steps are to understand how to allow very occasional saturation, how to rescale at various stages of the computation, how to exploit accumulators with guard bits, how to redesign your floating-point algorithm to be fixed-point oriented, etc.
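
As a small illustration of the "accumulators with guard bits" point (again my own sketch, not Y(J)S's code): a Q15 dot product where every product is summed at full width and the rounding, saturation and narrowing happen exactly once, at the output.

    #include <stddef.h>
    #include <stdint.h>

    /* Dot product of a Q15 coefficient vector with the current window
     * of Q15 samples.  Each product is Q30; accumulating in 64 bits
     * leaves plenty of high-order headroom (the software equivalent of
     * a hardware accumulator's guard bits), so the running sum never
     * needs to be clipped mid-loop.                                    */
    static int16_t dot_q15(const int16_t *x, const int16_t *h, size_t n)
    {
        int64_t acc = 0;                         /* Q30 plus guard bits */
        for (size_t i = 0; i < n; i++)
            acc += (int32_t)x[i] * h[i];         /* Q15 * Q15 = Q30 */

        acc += 1 << 14;                          /* round to nearest */
        acc >>= 15;                              /* back to Q15      */
        if (acc > INT16_MAX) acc = INT16_MAX;    /* saturate once, at the end */
        if (acc < INT16_MIN) acc = INT16_MIN;
        return (int16_t)acc;
    }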

I never found a book that covered all of this (although there are thousands of papers on specific cases) and considered writing one myself, but the addressable audience did not merit the effort. In a DSP team you may need 5 theoreticians working on paper, 10 algorithmic people doing MATLAB, and 25 good DSP and real-time programmers, but you only need one or two people who are really good at floating-point to fixed-point conversion.

Y(J)S
Reply by napierm, July 22, 2017

Wow, I would love to work with that team!  I've always had to go it alone or maybe have one other person to work with.  All the rest are typically digital guys.

Cheers,

Mark

Reply by Y(J)S, July 22, 2017

I headed such a team for about 10 years. It was indeed great fun, but that all ended about 5 years ago.

The opportunity window on a lot of our products closed, and it is no longer considered economical to do the full development cycle in-house. We now either purchase packages or chips from specialty companies, or do the high-level design and farm out the work to places where manpower is cheaper (Ukraine, India, etc.).

It might be cheaper, but it is nowhere near as enjoyable.

Y(J)S

Reply by JOS, July 22, 2017