Question about the output bit width of a decimation filter

Started by pcchen May 6, 2009
Hi All,

I'm a HW engineer and I need to design a decimation filter for a sigma-delta
ADC. After reading some articles and discussions on the internet, I understand
that a decimation filter has two major functions: downsampling and 1-bit to
multi-bit data conversion. According to the replies I have seen, the output bit
width seems to be set by the maximum value of accumulating 1s, e.g. 12 bits for
a maximum count of 4095.
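To show what I mean, here is a rough Python sketch of that simple counting idea (my own toy model of a first-order accumulate-and-dump, not code from any datasheet):

```python
import math

def accumulate_and_dump(bits, R):
    """First-order (sinc1) decimation: count the 1s in each block of R samples."""
    return [sum(bits[i:i + R]) for i in range(0, len(bits), R)]

R = 4095
out = accumulate_and_dump([1] * R, R)   # worst case: every sample is 1
print(max(out))                          # 4095, the largest possible count
print(math.ceil(math.log2(R + 1)))       # 12 bits are enough to hold 0..4095
```

By this reasoning the accumulator only ever needs enough bits to hold the oversampling ratio, which is where my confusion below comes from.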
But I found sample code for a decimation filter in the datasheet of the AD7401,
which is a sigma-delta ADC. That decimation filter is implemented as a sinc3
filter with a decimation rate of 256, and the accumulator in that filter is
24 bits wide. I don't understand why it needs 24 bits. Even if all 256 samples
are 1s, the maximum count is only 256.
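To make my confusion concrete, I tried a quick behavioral model of a sinc3 decimator in Python (my own sketch of the standard three-integrator, three-comb CIC structure, not the datasheet's actual Verilog):

```python
def sinc3_decimate(bits, R=256):
    """Sinc3 (CIC) decimator: three integrators at the input rate,
    decimate by R, then three differentiators (combs) at the output rate."""
    i1 = i2 = i3 = 0          # integrator states
    d1 = d2 = d3 = 0          # comb (delay) states
    out = []
    for n, b in enumerate(bits, 1):
        i1 += b
        i2 += i1
        i3 += i2
        if n % R == 0:        # keep every R-th integrator output
            c1 = i3 - d1; d1 = i3
            c2 = c1 - d2; d2 = c1
            c3 = c2 - d3; d3 = c2
            out.append(c3)
    return out

R = 256
out = sinc3_decimate([1] * (4 * R), R)
print(out[-1])                # 16777216 == 256**3 == 2**24
```

The steady-state output for an all-ones input comes out to 256^3 = 2^24, far larger than the 256 I expected, so I am clearly missing something about how the accumulation grows through the cascaded stages.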
If I change the decimation rate to 512 or 128, should I change the
accumulator bit width? Is there any rule for choosing the bit width of the
decimator output? This question confuses me a lot... Thanks in advance for
your kind advice!

Best Regards,