How do I select the optimal filter lineup for a Digital Down Converter (DDC) implementation in an FPGA? The input data rate is 100 MSPS and the required decimation factor is 10400. Right now it's implemented as CIC (decimate by 2600) + FIR1 (decimate by 2) + FIR2 (decimate by 2). How can I make sure this is the best implementation? The Matlab model shows that the design meets the expected requirements, such as frequency and magnitude response. For example, why not CIC (1300) + FIR (2) + FIR (2) + FIR (2)? I know this requires an additional filter and more resource consumption. Are there any other candidates for the filter lineup?
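As a starting point, a small sketch can sanity-check that each candidate lineup actually multiplies out to 10400 and show the sample rate entering each stage (the stage names and the CIC(5200)+CFIR(2) and CIC(1300)+FIR(8) variants below are just illustrative candidates, not a recommendation):

```python
# Hedged sketch: verify candidate decimation lineups for a 100 MSPS -> /10400
# DDC, and print the rate entering each stage (where the work is done).
FS_IN = 100e6          # input rate, 100 MSPS
TOTAL_DECIM = 10400    # required overall decimation

candidates = {
    "CIC(2600)+FIR(2)+FIR(2)":        [2600, 2, 2],
    "CIC(1300)+FIR(2)+FIR(2)+FIR(2)": [1300, 2, 2, 2],
    "CIC(1300)+FIR(8)":               [1300, 8],
    "CIC(5200)+CFIR(2)":              [5200, 2],
}

for name, stages in candidates.items():
    prod = 1
    rates = []
    for r in stages:
        rates.append(FS_IN / prod)   # sample rate entering this stage
        prod *= r
    assert prod == TOTAL_DECIM, f"{name} does not reach {TOTAL_DECIM}"
    print(name,
          "-> output rate %.3f kHz" % (FS_IN / prod / 1e3),
          "| stage input rates (kHz):",
          [round(r / 1e3, 2) for r in rates])
```

All four reach the same ~9.615 kHz output rate; they differ only in where the filtering effort is spent.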
Have you heard the saying "Perfection is the enemy of done"?
If you have a solution that has acceptable performance and resource consumption, why spend more time trying to obtain something better?
Time for wisdom
Two opposing forces in nature:
==> "Perfection is the enemy of done."==>
<== "There is a better way to design it, find out" <==
Yeah, I understand it's a never-ending process of improvement and has to stop at some point, considering cost, time, etc. I thought it would be good learning for me to understand the decision, to help in my next project!
Wisdom aside, remember that unlike software, each FPGA module takes effort and time to design and test. Fewer modules, less work.
I would go for a CIC (decimating by 1300) followed by a FIR (decimating by 8).
I think the first step is determining what "optimal" and "best" mean to you in the context of your project. Until you do that you have no way of assessing whether one is "better" than the other or not.
Some potential optimization criteria may be:
- Minimizes complexity
- Maximizes stopband attenuation
- Minimizes passband ripple/distortion
- Minimizes latency
Some of these may be mutually exclusive, so it really is up to you to pick appropriate optimization criteria. Once you do that it will likely be easier to determine whether or how to "improve" the current system. Until you do that people can only guess at what "better" means for your system.
It seems like, more and more, "optimal" means you used some Matlab function to figure it out.
In general the criterion is more in the form of:
- passband ripple is less than ...
- stopband attenuation is more than ...
Given the downsampling ratio, a single DSP slice will be enough for each filter following the CIC (or even fewer, if you share one among multiple filters).
I would have started with a CIC (decimating by 5200) followed by a CFIR (decimating by 2).
Like everybody said, you need to define what is optimal to you (area, power, complexity, reusability...).
But big picture, you want to decimate in multiple stages so you can use smaller-order, simpler filters for the first stages (which run at the high rate, so they are costly). Typically that means you want to do as much as possible with the CIC filter. Common practice is to go down to 4x or 8x the final output rate, then use a 'better' filter to finish the job (normally also in stages, since you only want to pay for the large-order filter in the last decimate-by-2).
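The "relaxed early stages" effect can be quantified with the common FIR tap-count rule of thumb N ≈ A_dB / (22 · Δf/fs). The sketch below uses made-up example rates (80 kHz in, 20 kHz out, 8 kHz passband, 60 dB stopband, none of which come from the thread) to compare one decimate-by-4 FIR against two decimate-by-2 stages:

```python
import math

def fir_taps(atten_db, f_pass, f_stop, fs):
    """Rule-of-thumb tap estimate: N ~ atten_dB / (22 * (f_stop - f_pass)/fs)."""
    return math.ceil(atten_db / (22.0 * (f_stop - f_pass) / fs))

# Example numbers (assumptions, not from the thread).
atten, f_pass = 60.0, 8e3

# Option A: one FIR decimating 80 kHz -> 20 kHz (stopband at final Nyquist, 10 kHz).
taps_a = fir_taps(atten, f_pass, 10e3, 80e3)
macs_a = taps_a * 20e3           # polyphase: taps evaluated at the output rate

# Option B: two FIRs decimating by 2 each. The first only needs to stop what
# would alias onto the final band, i.e. above 40 kHz - 10 kHz = 30 kHz, so its
# transition band is wide and its tap count tiny.
taps_b1 = fir_taps(atten, f_pass, 30e3, 80e3)
taps_b2 = fir_taps(atten, f_pass, 10e3, 40e3)
macs_b = taps_b1 * 40e3 + taps_b2 * 20e3

print("single stage: %d taps, %.2f MMAC/s" % (taps_a, macs_a / 1e6))
print("two stages:   %d + %d taps, %.2f MMAC/s" % (taps_b1, taps_b2, macs_b / 1e6))
```

With these numbers the two-stage version needs fewer total multiply-accumulates even though it has two filters, which is exactly why the heavy lifting is pushed to the last, slowest stage.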
Now that reasoning also applies to the CIC filter. Since your ratios are SO high, I bet you could decimate by 100 with an integrate-and-dump (pretty much free) without any performance hit. Follow that with a decimate-by-26 Nth-order CIC (pick N so you meet your stopband spec), and then two stages of FIR, down by 2 each.
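Whether the integrate-and-dump is really "free" depends on how much alias rejection your narrow final band needs, and that is easy to check from the standard CIC magnitude response |sin(πfR/fs) / (R·sin(πf/fs))|^N (an integrate-and-dump is just the N = 1 case). The passband edge below is an assumption based on the ~9.615 kHz output rate of this design:

```python
import math

def cic_mag_db(f, fs, R, N):
    """DC-normalized magnitude (dB) of an order-N, decimate-by-R CIC at f."""
    x = math.pi * f / fs
    if abs(math.sin(x)) < 1e-15:
        return 0.0                       # unity (normalized) gain at DC
    return 20 * math.log10(abs(math.sin(R * x) / (R * math.sin(x))) ** N)

fs = 100e6        # input rate
R, N = 100, 1     # integrate-and-dump = first-order CIC, decimate by 100
f_pass = 4.8e3    # assumed passband edge, ~Nyquist of the 9.615 kHz output

# Worst-case alias into the passband: the image of f_pass around the first
# null of the CIC, at fs/R - f_pass.
worst = cic_mag_db(fs / R - f_pass, fs, R, N)
droop = cic_mag_db(f_pass, fs, R, N)
print("alias rejection at fs/R - f_pass: %.1f dB" % worst)
print("passband droop at f_pass: %.4f dB" % droop)
```

For this narrow band the passband droop is negligible, and even a single integrator stage already buys tens of dB of alias protection; compare the printed rejection against your stopband spec to decide whether N = 1 is enough.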
There are many variations to this scheme, the 'optimal' depends on your technology, goal, power requirements, area requirements... The search space is huge so I'd suggest trying a few ideas on each extreme to see where things are before going too deep.
Maybe you can try out the following toolbox.
I got the following graph based on your design parameters using the above-mentioned toolbox. It selects the decimation factors per stage based on minimum computational effort.
Hopefully, it can help.
Keeping in mind that you can spend too much engineering time on things, I would just try each way and see how it works out. Then you'll have a feel for it next time.
Why not a CIC (10400) followed by a FIR that runs at the (much lower) output rate and corrects for the amplitude rolloff of the CIC?
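One thing to quantify before going single-stage is how much droop the compensating FIR has to undo, since a CIC's sin(x)/x rolloff gets steep near the edge of its (relatively) wide passband. A sketch, where the 5-stage CIC and the 40%-of-output-rate passband edge are both assumptions, not values from the post:

```python
import math

def cic_gain(f, fs, R, N):
    """DC-normalized gain of an order-N, decimate-by-R CIC at frequency f."""
    x = math.pi * f / fs
    if abs(math.sin(x)) < 1e-15:
        return 1.0
    return abs(math.sin(R * x) / (R * math.sin(x))) ** N

fs, R, N = 100e6, 10400, 5     # N = 5 stages is an assumption
fs_out = fs / R                # ~9.615 kHz output rate
f_pass = 0.4 * fs_out          # assumed passband edge, 40% of output rate

droop = cic_gain(f_pass, fs, R, N)
print("CIC droop at f_pass: %.2f dB" % (20 * math.log10(droop)))
print("compensating FIR must provide %.2f dB of gain there"
      % (-20 * math.log10(droop)))
```

With these assumptions the compensator has to lift the band edge by over 12 dB, which is doable but makes the inverse-sinc FIR less trivial; note also that a single decimate-by-10400 CIC implies large internal register growth (roughly N·log2(R), i.e. on the order of 65+ extra bits for N = 5 before pruning), which is part of why the ratio is usually split.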
When making these "perfection vs. done" decisions, three things to look at are the monetary cost of the extra engineering time, the total savings in any hardware that will be realized (through, for example, smaller or slower FPGA hardware), and the cost of the schedule hit (web search on "opportunity cost"). Spending $100,000 of engineering time to save $0.10 per piece only breaks even if you're making a million pieces.
Often, when your production volumes are small, the best and cheapest solution is to use Really Expensive parts paired with large, inelegant, can't-be-wrong engineered solutions that take a minimum of brain-time to contrive. It's not necessarily the most satisfying approach in the short term, but if I'm in such a situation and worried about clunky, I just soothe myself with the mantra that if it works, it is automatically beautiful.
Thanks to everyone. This gave me a lot of insights!