DC gain?

sum(coeff)?

In other words, what should be taken care of in the FPGA design, in terms of selecting bit widths at each filter stage, to maintain unity gain?

There are two places to control FIR filter gain:

1) coefficient scaling

2) final sum truncation.

For a single-rate low-pass FIR filter (or FIR decimator), unity gain is achieved by scaling so that the sum of coefficients = 2^n, then truncating n LSBs.
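As a sketch of this rule (a Python model, not HDL; the coefficient values here are made up for illustration):

```python
import numpy as np

# Illustrative sketch: scale unit-DC-gain coefficients so their sum is 2^n,
# filter in integer arithmetic, then drop n LSBs to restore unity gain.
n = 15
h = np.array([0.25, 0.5, 0.25])              # example coeffs, sum = 1 (DC gain 1)
h_int = np.round(h * 2**n).astype(np.int64)  # scaled so sum(h_int) == 2^n

x = np.array([1000] * 8, dtype=np.int64)     # a DC input
y_full = np.convolve(x, h_int)               # full-precision accumulator
y = y_full >> n                              # discard n LSBs

# In steady state the output equals the input level: DC gain is 1.
print(int(y[len(h_int) - 1]))                # -> 1000
```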

For an FIR upsampler by (I) you need to increase the gain (I) times to achieve unity.

For CIC there are two formulas, one for the decimator and one for the interpolator.

dsplearn, how on Earth can anyone answer your question when you provided __no__ information about your filters?

Rick, maybe I've been trying to get the golden information applicable to filters in general, from which I can derive the specific info I need.

Let's say I have an interpolate-by-2 FIR filter with a coefficient length of 81 and a coefficient sum of 1. With an input of (16,15), it will have bit growth to (33,15). I assume the gain is 1/2 of the input level because of the interpolation, so in order to maintain a gain of 1, the FIR output is multiplied by 2.

If the output after multiplication is truncated back to (16,15), what effect will that have on the gain and on signal performance such as SNR?

Since FPGA is your platform, don't expect useful logic-level points here; the mindset of the guys here is geared to high-level software functions, and they don't appreciate the issue of filter gain control nodes on FPGAs.

I already mentioned that there are only two FPGA filter nodes (no more) for controlling gain. You may be thinking that for a 16-bit input you get an output of, say, 33 bits that must be dealt with. Correct, and as I explained, for unity gain you remove (n) LSBs from the output if you have scaled your coefficients (of sum 1) by 2^n. You may also remove an MSB or so if the final output after truncation ends up wider than 16 bits.

For the upsample-by-2 case, assuming you target unity gain, either scale your coefficients by 2^(n+1) and discard (n) LSBs, OR keep the coefficients scaled by 2^n but discard (n-1) LSBs. For noise you need to consider rounding when discarding the LSBs. At the high end you may have to remove an MSB or so and then consider clipping, but try to avoid that in the first place. So the aim is 16 bits in, 16 bits out; FPGAs cannot keep respecting bit growth forever. You are free to change gain or bit widths, but I am talking about general practical concepts.
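A small Python model of the two upsample-by-2 options (zero-stuffing assumed; the illustrative coefficients are chosen so that each polyphase branch sums to half the total, which keeps both output phases clean):

```python
import numpy as np

# Sketch of two equivalent scalings for an interpolate-by-2 FIR
# (zero-stuff then filter). Coefficients sum to 1; interpolation
# by 2 halves the DC gain, which the LSB-discard count compensates.
h = np.array([0.25, 0.5, 0.25])
n = 15
x = np.array([1000] * 8, dtype=np.int64)
xz = np.zeros(2 * len(x), dtype=np.int64)
xz[::2] = x                                    # zero-stuffing by 2

# Option A: scale coeffs by 2^(n+1), discard n LSBs.
hA = np.round(h * 2**(n + 1)).astype(np.int64)
yA = np.convolve(xz, hA) >> n

# Option B: scale coeffs by 2^n, discard (n-1) LSBs.
hB = np.round(h * 2**n).astype(np.int64)
yB = np.convolve(xz, hB) >> (n - 1)

# Both restore unity DC gain: steady-state output equals the input level.
print(int(yA[4]), int(yB[4]))                  # -> 1000 1000
```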

got it. makes sense. thanks kaz!

Hi,

CIC interpolator DC gain = (R*M)^N / R, where R = rate change, M = comb delay stages, N = number of stages.

Hence in your case gain = (92 * 1)^5 / 92 = 92^4, which is > 2^26 and < 2^27.

So if you target DC unity, discard 27 bits and take the 20 MSBs.

But you had better check directly using your input range; you may have to discard more or fewer bits for the best dynamic range, especially if you have limited bandwidth.

If you discard 26 bits your gain approaches unity: gain = above gain/2^26

Kaz

Thanks Kaz! So the gain is (92^4) * (2^-26) = 1.0675, and in dB, 20*log10(1.0675) is ~0.57 dB.

If you discard 26 LSBs then the DC gain approaches unity (within ~0.6 dB).
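The numbers above can be checked directly with a quick sketch:

```python
import math

# Check of the CIC interpolator figures above: gain = (R*M)**N / R,
# with R = 92, M = 1, N = 5.
R, M, N = 92, 1, 5
gain = (R * M)**N / R                      # = 92**4

bits = math.floor(math.log2(gain))         # 26: gain lies between 2^26 and 2^27
residual = gain / 2**bits                  # ~1.0675 left after discarding 26 LSBs
residual_db = 20 * math.log10(residual)    # ~0.57 dB deviation from unity
print(bits, round(residual, 4), round(residual_db, 2))  # -> 26 1.0675 0.57
```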

I would like to add that "discarding" is not your only option when you want to divide a fixed-point number by $2^n$. Discarding in this context is referred to as truncation, which, on the plus side, costs zero FPGA resources, but does have disadvantages that can be anywhere between completely tolerable and completely disastrous, depending on your requirements.

More generally, I would encourage you to think of truncation as just one method of rounding within a family of rounding methods. Many rounding methods differ only in the way that they handle "ties" (e.g. 0.5, 1.5, 2.5 etc). Should 0.5 round to 0 or 1? Should 1.5 round to 1 or 2? In some practical applications, it is crucially important for the rounding method to be unbiased (in which case half-to-even or half-to-odd rounding often make good choices, and are remarkably cheap to implement in hardware).
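As an illustration, here is a sketch (helper names are my own) of truncation, round-half-up, and round-half-to-even on integers, showing how each treats the ties mentioned above:

```python
# Three ways to drop n LSBs, differing in how they treat the tie
# at exactly half an LSB.
def truncate(x, n):
    return x >> n                          # always rounds toward -inf

def round_half_up(x, n):
    return (x + (1 << (n - 1))) >> n       # floor(x / 2^n + 0.5)

def round_half_even(x, n):                 # ties go to the nearest even result
    q, r = x >> n, x & ((1 << n) - 1)
    half = 1 << (n - 1)
    if r > half or (r == half and (q & 1)):
        q += 1
    return q

# Dropping 1 LSB (divide by 2): the ties 0.5, 1.5, 2.5 land differently.
print([round_half_up(v, 1) for v in (1, 3, 5)])    # -> [1, 2, 3]
print([round_half_even(v, 1) for v in (1, 3, 5)])  # -> [0, 2, 2]
```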

The second thing I would recommend would be to have a crystal clear understanding of your fixed-point representation. That means having a clear view of the number of "integer" bits and "fractional" bits (and "sign" bit if applicable), at each stage of your computation. In my experience, this is where many people make mistakes that they don't realise they're making. Working with fixed-point numbers can be trickier than it first appears.
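For example, a minimal sketch (the `to_real` helper is hypothetical) of how the declared number of fractional bits changes what a raw bit pattern means:

```python
# The same raw integer means different real values depending on the
# declared fixed-point format (number of fractional bits).
def to_real(raw, frac_bits):
    """Interpret a raw (already sign-extended) integer as fixed-point."""
    return raw / (1 << frac_bits)

raw = 0b0101_0000_0000_0000          # one 16-bit pattern
print(to_real(raw, 15))              # Q1.15 view -> 0.625
print(to_real(raw, 12))              # Q4.12 view -> 5.0
```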

The reason I say this is because your initial question is in danger of being meaningless, and it is important to understand why:

How do I calculate the gain of fixed point decimation/interpolation filters for both FIR and CIC ?

The reason this could be viewed as a meaningless question is that the scaling of a fixed point number is *implicit*. Let's say, for example, that you have a 1-tap FIR filter (i.e. just a multiplier). Let's say you input this 8-bit number:

"00000001"

and the corresponding output is this 5-bit number:

"01010"

What is the gain? Ten? Maybe, maybe not. Maybe the output has two more fractional bits than the input, then the gain is 010.10b = 2.5d. Maybe the output has 100,000 more fractional bits, in which case the gain is very small and only a very small range of values can be represented in 5 bits.
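In Python terms, the same bit patterns give different gains depending on the assumed format:

```python
# The "gain" of the 1-tap example above depends entirely on the implicit
# binary point: same input/output bit patterns, different gains.
x_raw = 0b00000001                    # the 8-bit input pattern
y_raw = 0b01010                       # the 5-bit output pattern

# If input and output share the same format, the gain is 10.
print(y_raw / x_raw)                  # -> 10.0

# If the output carries two extra fractional bits, the gain is 2.5.
print((y_raw / 4) / x_raw)            # -> 2.5
```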

The possibilities are theoretically infinite. In practice, it is up to you as the designer to make sound decisions in order to achieve the functionality that you require. The answer to any question like:

If I want to keep only 20 bits at the output after final section, how do I calculate the resulting gain in dB ?

will always be that it depends on *which* 20 bits you choose to retain.

Direct truncation will lead to a DC bias but may be tolerated; otherwise, basic rounding (floor(x + 0.5)) is enough in this case. Midpoint issues are trivial here, as so many LSBs are removed.

This is not correct in general, and I don't think @dsplearn has provided enough information to be sure that it is definitely suitable for them.

If, for example, your filter sits somewhere in the feedback loop of a control system, then this seemingly insignificant bias can be disastrous. You might not see any effects within 24 hours of simulation, then everything goes horribly wrong within 1 second of running in hardware.

I would certainly advise anyone to consider why they would choose this "basic" (round-half-up) rounding over unbiased rounding. I would say the hardware cost is about the same in each case. There may be applications in which you wouldn't want to distort the ratio of odd values to even values, but I haven't encountered any in a signal processing context.
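To make the bias argument concrete, here is a contrived sketch (helper names are my own) in which every sample lands exactly on a tie, so the choice of rounding method fully determines how an accumulator drifts:

```python
# A zero-mean +/-1 input scaled by 1/2 in fixed point (multiply by 2,
# drop 2 LSBs) lands on a tie every sample. Accumulating the rounded
# results shows the drift each method produces.
def truncate(x, n):
    return x >> n

def round_half_up(x, n):
    return (x + (1 << (n - 1))) >> n

def round_half_even(x, n):
    q, r = x >> n, x & ((1 << n) - 1)
    half = 1 << (n - 1)
    if r > half or (r == half and (q & 1)):
        q += 1
    return q

def drift(fn, steps=1000):
    acc = 0
    for i in range(steps):
        x = 1 if i % 2 == 0 else -1       # zero-mean toggling input
        acc += fn(2 * x, 2)               # x/2 in fixed point: always a tie
    return acc

print(drift(truncate), drift(round_half_up), drift(round_half_even))
# -> -500 500 0
```

In a feed-forward datapath a constant offset like this may be harmless; inside an integrator or feedback loop it grows without bound, which is the failure mode described above.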

26 LSBs are to be discarded; that is what gives DC unity gain and is what we are talking about. The chance of hitting a midpoint is 1 in 2^26.

Unless it is meant for a moon-travel application, I don't see any issue here.

Checking 26 bits for the midpoint is very costly.