Reply by somd...@gmail.com September 4, 2007
I read the paper by Hogenauer, the treatment by Donadio, and Rick
Lyons's article at http://www.eetimes.com/showArticle.jhtml?articleID=160400592
and still have a basic question about DC gains and bit-allocations for
CIC decimators. Please bear with me if this is too basic. I am new to
digital implementation/bit allocation stuff.

For an N-th order CIC decimator with decimation factor R, the bit-
width requirement at the output stage (and at all intermediate stages)
is ceil(log2(R^N)) + Bin, where Bin is the bit-width of the input
signal. The DC gain of the filter is R^N, so to normalize the DC gain
to '1' I should divide the output stage by R^N.
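To make the numbers concrete, here is a throwaway sanity check (plain Python, the function name is mine):

```python
import math

# Register width for an N-th order CIC decimator with decimation
# factor R and a Bin-bit input (differential delay assumed to be 1):
def cic_width(R, N, Bin):
    return math.ceil(math.log2(R ** N)) + Bin

print(cic_width(16, 3, 2))  # -> 14
print(16 ** 3)              # DC gain -> 4096, i.e. 12 bits of growth
```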

So is it true that the bit-width required AFTER the division is
smaller than ceil(log2(R^N)) + Bin?

In a typical example, I have R=16, N=3 and Bin=2 (coming from the
output of a 2-bit SDM). This gives a bit-width requirement of 14 bits
for the CIC stages, and hence a 14-bit output.

I then divide by 16^3 to normalize the DC gain to '1'. This is
equivalent to right-shifting by 12 bits, which leaves me with a 2-bit
output. Is this the typical design of a CIC after incorporating the
gain factor? Something doesn't add up here: it implies that the output
bit-width of the CIC, after normalization, collapses back to that of
the input signal, which is typically the low-bit-width output of a
sigma-delta modulator.
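For what it's worth, a quick behavioural model (my own throwaway code, integrators and combs in plain Python, differential delay 1) reproduces exactly this collapse for a constant input:

```python
# 3rd-order CIC decimator, R=16, fed a constant 2-bit input,
# to show the DC gain of R**N = 4096 and the effect of the shift.
R, N = 16, 3

def cic_decimate(x, R, N):
    # N cascaded integrators at the input rate
    for _ in range(N):
        acc, out = 0, []
        for s in x:
            acc += s
            out.append(acc)
        x = out
    # decimate by R
    x = x[::R]
    # N cascaded combs (differential delay 1) at the output rate
    for _ in range(N):
        prev, out = 0, []
        for s in x:
            out.append(s - prev)
            prev = s
        x = out
    return x

y = cic_decimate([1] * (16 * R), R, N)
print(y[-1])        # settles at the DC gain: 16**3 = 4096
print(y[-1] >> 12)  # right-shifting by 12 bits collapses it to 1
```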

What am I missing? Should I instead implement the division as a
multiplication by 1/(16^3) and retain all 14 bits after the
multiplication (the bits now being split into integer bits and
fractional bits)?
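If it helps to see it spelled out, that second option is just a reinterpretation of the same bits: keep the full 14-bit word and move the binary point 12 places (a Q2.12 reading) instead of physically shifting. A sketch with a made-up output value:

```python
raw = 0b01_1001_0000_0001    # hypothetical 14-bit CIC output word

as_q2_12 = raw / 2**12       # same bits read as Q2.12: unity DC gain,
                             # full 14-bit resolution retained
truncated = raw >> 12        # what a plain right shift would keep

print(as_q2_12)   # -> 1.562744140625
print(truncated)  # -> 1
```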