I'm simulating 65 MHz RF data: I sampled that sine wave at fs = 120 MHz and passed it through a two-stage decimation filter, so the final-stage output rate is 2 MHz. The final output rate is configurable, so I changed the decimation rates and observed the signal-to-noise ratio. The SNR improves as the output rate increases (for example, if I observe 28 dB SNR at a 2 MHz output rate, I get 29 dB SNR at 4 MHz).
Can anyone suggest how to sort out this issue?
Thanks in advance.
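For reference, here is a minimal sketch of the chain as described (the two-stage factors of 15 and 4, the FIR filter choice, and the noise level are my own assumptions, not the actual code). It also shows where the 65 MHz tone actually lands at a 120 MHz sample rate:

```python
import numpy as np
from scipy import signal

fs = 120e6                      # sample rate
f_tone = 65e6                   # above fs/2, so it folds to 120 - 65 = 55 MHz
n = 1 << 16
t = np.arange(n) / fs
rng = np.random.default_rng(0)
x = np.cos(2 * np.pi * f_tone * t) + 0.01 * rng.standard_normal(n)

# Where did the tone land? Peak of the spectrum of the sampled data:
freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
print(peak / 1e6)               # ~55 MHz, not 65 MHz

# Two-stage decimation 120 MHz -> 8 MHz -> 2 MHz (factors 15 and 4)
y = signal.decimate(signal.decimate(x, 15, ftype='fir'), 4, ftype='fir')
print(np.std(x), np.std(y))     # almost nothing survives the lowpass chain
```

Because the tone aliases to 55 MHz, the lowpass decimation filters remove it almost entirely, which is why the replies below focus on the sampling setup rather than the filters.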
You will have aliasing unless you increase your sampling frequency to > 130 MHz.
As a practical matter, you will need to go beyond that to make up for the transition band of the downsampling filters.
The post isn't clear, but it seems to be about a single tone, so its alias can be used. The signal bandwidth appears small, judging by the decimation to 2 MHz.
Sampling at twice the bandwidth may be enough.
Decimation increases noise because more of the noise is condensed into a smaller bandwidth due to noise aliasing.
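To make the noise-aliasing point concrete, here is a small sketch (white noise and a decimation factor of 4 are my own choices): dropping samples without a filter keeps all of the noise power in a quarter of the bandwidth, while a proper decimating filter rejects the out-of-band noise first.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
x = rng.standard_normal(1 << 18)             # white noise, total power ~1

naive = x[::4]                               # decimate by 4 with no filter
proper = signal.decimate(x, 4, ftype='fir')  # lowpass, then downsample

# Naive keeps the full noise power (the density rises 4x after folding);
# the filtered version keeps only the in-band quarter of it.
print(np.var(naive), np.var(proper))
```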
Actually, I'm doing undersampling, where fs is less than 2f.
So your main question is the SNR of decimation.
If you decimate by half, then all of the noise floor (minus the filter's effect) will alias into your band, so expect a lower SNR.
Moreover, the FFT may also show a false effect if its resolution is halved, because the power gets spread over half the number of bins, lifting the noise floor. Leakage of the single tone is another issue.
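The bin-count effect can be checked with a quick sketch (the tone frequency, noise level, and median-based floor estimate are all illustrative choices): doubling the FFT length drops the per-bin noise floor by about 3 dB relative to a coherent tone.

```python
import numpy as np

rng = np.random.default_rng(2)

def tone_to_floor_db(nfft, f0=0.125, sigma=0.1):
    """Peak-to-noise-floor ratio (dB) of a unit tone on an exact FFT bin."""
    t = np.arange(nfft)
    x = np.cos(2 * np.pi * f0 * t) + sigma * rng.standard_normal(nfft)
    spec = np.abs(np.fft.rfft(x)) ** 2
    floor = np.median(np.delete(spec, spec.argmax()))  # typical noise bin
    return 10 * np.log10(spec.max() / floor)

# ~3 dB more apparent SNR with twice the bins at the same noise density
print(tone_to_floor_db(4096), tone_to_floor_db(8192))
```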
Doesn't a 65 MHz signal require a sample rate of at least 130 MHz to avoid aliasing, per Nyquist?
Even simulated data needs to follow Nyquist rules (not just a good idea, "it's the law"). Simulated data still has quantization error, since it's not continuous but truncated to whatever resolution is used. If you didn't add noise, then the quantization error is the noise, albeit small. But it's much bigger if you're not doing (truly) synchronous sampling, and obviously you're not sync sampling. In general, a small data set's noise is not Gaussian and can vary a lot with phase (play with data sets at different phases to see). Decimation is not the same as a decimating filter, which takes into account the effects of reducing bandwidth with a reduced number of samples. Nothing's free.
Apparently you are doing IF sampling. The 120 MHz sample rate is a full trip around the periodic circle; 60 MHz is halfway around the circle, so an additional 5 MHz past 60 is 65 MHz, or -55 MHz. Is the data real or complex? Did you examine the noise floor prior to the decimation process? Was there an anti-aliasing filter applied to the analog signal prior to sampling? What is the modulation bandwidth? What is the modulation?
How much attenuation and what kind of filters have you applied to the input series? An example of your code would be nice, so we're more likely to be able to reply intelligently to your question.
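For illustration, the folding arithmetic above can be written down directly (this is a hypothetical helper, assuming real-valued sampling):

```python
def folded_freq(f_hz, fs_hz):
    """Apparent frequency of a real tone at f_hz when sampled at fs_hz."""
    f = f_hz % fs_hz
    return min(f, fs_hz - f)

print(folded_freq(65e6, 120e6) / 1e6)   # the 65 MHz tone lands at 55 MHz
```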
My opinion is that decimation cannot work here. If you want to bring the analog signal at 65 MHz down to 2 MHz at baseband, you may simply need to sample the analog signal at a sampling frequency of 63 MHz (or 67 MHz). Try to understand the sampling of analog bandpass signals (blog: J. Hoffmann, 'Sampling bandpass signals').
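A quick sketch of that suggestion (a pure tone with no anti-alias filtering; the parameters are illustrative): sampling a 65 MHz tone at 63 MHz puts it directly at 2 MHz.

```python
import numpy as np

fs = 63e6                       # deliberately undersample the 65 MHz tone
n = 1 << 14
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 65e6 * t)

freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
print(peak / 1e6)               # ~2 MHz: the tone has folded to baseband
```

In a real system the bandpass signal would of course need analog filtering to keep other bands from folding onto the same 2 MHz.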