
Does decimation of a signal result in a shift in the baseline noise?

Started by tomb18 • 8 years ago • 11 replies • latest reply 8 years ago • 972 views

Hi, I am collecting samples at a rate of 2,000,000 per second from a software defined radio (#SDR).  I am plotting the output of a 16384-point FFT and observing a spectrum of signals, with the baseline noise sitting at a certain level.

I have a zoom control on the graphics, and as the user zooms in to a span of around 250,000 Hz I decimate the signal by a factor of 8, down to 250,000 samples per second.  This is done with FIR low-pass filters followed by discarding 7 out of every 8 points. Now the spectrum looks great.  The decimation is doing its job, and the increased resolution is good as well (many thanks to all who helped out in the past!)

However, I notice that the baseline noise has jumped up by about 10 dB, while the signal levels stay roughly the same.  Is this common with decimation?  And if so, what is the usual way to deal with it?


Thanks, Tom

Reply by JOS, June 13, 2016

The signal-to-noise ratio in the surviving band should not change provided (1) your FFT length does not change its duration in seconds (giving same number of cycles under the FFT), and (2) you don't have a lot of aliasing due to a poor antialiasing filter or mistuned cutoff frequency.
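
As a rough sanity check of condition (1), here is a small Python/NumPy sketch (not from the post): a 16384-point FFT at 2 Msps and a 2048-point FFT of the properly decimated 250-ksps signal cover the same 8.192 ms, so their peak-to-noise-floor ratios should come out about equal. The tone frequency, noise level, and 65-tap firwin lowpass below are placeholders, not the poster's actual settings.

    import numpy as np
    from scipy import signal

    fs1, N1 = 2_000_000, 16384            # original rate and FFT length (8.192 ms)
    f0 = 100 * fs1 / N1                   # test tone placed exactly on a bin

    rng = np.random.default_rng(0)
    t = np.arange(N1) / fs1
    x = (np.exp(2j*np.pi*f0*t)
         + rng.standard_normal(N1) + 1j*rng.standard_normal(N1))

    h = signal.firwin(65, 125e3, fs=fs1)  # stand-in anti-alias lowpass (64th-order FIR)
    y = signal.lfilter(h, 1.0, x)[::8]    # 2048 samples at 250 ksps -> same duration

    def peak_to_floor_db(z):
        p = np.abs(np.fft.fft(z))**2
        return 10*np.log10(p.max() / np.median(p))   # median ~ average noise floor

    print(peak_to_floor_db(x))            # these two come out ...
    print(peak_to_floor_db(y))            # ... within a dB or so of each other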

Reply by tomb18, June 13, 2016

Ok, so how can I tell if aliasing is occurring? I am using a 64th-order FIR filter. The signals appear the same as far as I can tell. And the FFT duration will be longer (I think), since I now have to wait a longer time to collect all of the samples.

So assuming that the issue is the FFT, can one just multiply the data by an appropriate factor?

Thanks (As you can probably see, I'm quite new to this)

Reply by JOS, June 13, 2016

In Matlab or Octave, you can use freqz() to look at your FIR filter magnitude response.  You can set the lowpass cut-off frequency low enough so that its stop-band begins at the folding frequency where aliasing begins.  Then you know that the aliasing is bounded by the stop-band gain, and then you can easily calculate the worst-case piling up of that in the pass-band.  It should not be 10 dB in any decent filter.
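
For example, here is the equivalent check in Python/SciPy (same idea as freqz() in Matlab/Octave; the 65-tap firwin design is only a stand-in for whatever FIR is actually in use):

    import numpy as np
    from scipy import signal

    fs = 2_000_000                       # input sample rate
    fold = fs / 8 / 2                    # 125 kHz: folding frequency after decimating by 8
    h = signal.firwin(65, 100e3, fs=fs)  # stand-in 64th-order lowpass; cutoff set below
                                         # 125 kHz so the stop-band can begin by the fold

    w, H = signal.freqz(h, worN=8192, fs=fs)
    stop = np.abs(H[w >= fold])          # response over the band that will alias in
    print("worst-case gain beyond 125 kHz: %.1f dB" % (20*np.log10(stop.max())))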

If your FFT lengthens (in terms of sinusoidal cycles), then you'll see the SNR go up, not down.  The FFT magnitude goes up 6 dB for each doubling of length for sinusoids, but only 3 dB per doubling for uncorrelated stationary noise, so you should see a net 3 dB gain in SNR for each doubling of the FFT size.
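
A quick numerical illustration of that rule (the tone and noise levels below are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 2_000_000
    f0 = 100 * fs / 16384                    # lands exactly on a bin for all three sizes
    for N in (4096, 8192, 16384):
        t = np.arange(N) / fs
        x = (np.exp(2j*np.pi*f0*t)
             + rng.standard_normal(N) + 1j*rng.standard_normal(N))
        X = np.abs(np.fft.fft(x))
        print(N, 20*np.log10(X.max()),       # sinusoid peak: +6 dB per doubling
                 20*np.log10(np.median(X)))  # noise floor:  ~+3 dB per doubling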

Reply by Rick Lyons, June 13, 2016

tomb18, the answer to your post’s title question finally “hit” me. And the answer is: “In spectral plots, yes, decimation results in an increase in the average baseline noise spectral level relative to a signal's spectral peak value.” Are you familiar with the process of pulling “weak” signals up out of the average background spectral noise floor by performing extended-length DFTs? (I cover that topic in Chapter 3 of my DSP textbook.) That notion explains what you’re seeing.

If you compute an N-point DFT of a noisy narrowband signal (like a noisy sine wave) you might have a spectral peak value of, say, 17 dB above the average noise floor level. If you then perform an 8N-point DFT of that noisy narrowband signal you will see a spectral peak that’s

  17 dB + 10·log10(8) dB = 17 + 9 = 26 dB

above the average noise floor level. But you’re doing the opposite of this. You’re performing an N-point DFT and looking at the spectral peak value above the average noise floor. Then you perform a decimated N/8-point DFT and notice that the spectral peak level has decreased by 10·log10(8) = 9 dB relative to the average noise floor value.

tomb18, you’re doing the right thing and you are seeing correct results. But be aware, when dealing with noisy signals you might see a decrease in decimated ‘peak-to-noise-floor’ difference of 8.8 dB, or 10 dB, or 9.4 dB, etc. But if you run the test 1,000 times you’ll see that the ‘average’ decrease in decimated ‘peak-to-noise-floor’ difference will be 9 dB.
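
As an illustration of that averaging point, here is a small Monte Carlo sketch (not from the post; the sample rate, DFT sizes, and trial count are arbitrary). It averages the 'peak-to-noise-floor' difference for N-point and 8N-point DFTs of a noisy sine, and the gap settles near 10·log10(8) = 9 dB:

    import numpy as np

    rng = np.random.default_rng(2)
    fs, N = 250_000, 2048                # arbitrary rate and "short" DFT size
    f0 = 16 * fs / N                     # tone lands exactly on a bin for both sizes

    def peak_to_floor_db(n):
        t = np.arange(n) / fs
        x = np.sin(2*np.pi*f0*t) + rng.standard_normal(n)
        P = np.abs(np.fft.rfft(x))**2
        return 10*np.log10(P.max() / np.mean(P[P < P.max()]))   # peak vs. noise bins

    d1 = np.mean([peak_to_floor_db(N)     for _ in range(200)])
    d8 = np.mean([peak_to_floor_db(8 * N) for _ in range(200)])
    print(d8 - d1)                       # averages out to roughly 9 dB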

Reply by tomb18, June 13, 2016

Hi, been away on a fishing trip with no internet.

Ok, so to get this right: I am doing a 16384-point FFT on the 2 Msps signal and then doing a 16384-point FFT on the 250 ksps signal.

However, I am seeing an 18 dB increase in the baseline noise value. So I should just be able to multiply the signal by 0.8 before converting to dB and all should be visibly similar.  Is this correct?

Thanks

Reply by Rick Lyons, June 13, 2016

Hi, I've been away from my computer for a few days.

tomb18, I'm confused. If you had 16384 samples of the 2-Msps signal, then after decimation you should only have 2048 samples of the decimated 250-kHz (not 250 Msps!) signal. I don't know what you're doing in your processing, but I'll say this: "If you have N samples of a 2-Msps signal and N samples of the decimated 250-kHz signal, those two signals should have the same SNR and the same background noise baseline level."

Reply by tomb18, June 13, 2016

Hi,

Ok, just to be clear: the SDRPlay device has a minimum sampling rate of 2 Msps.  I start with this and then perform 16384-sample FFTs.  This gives a resolution of about 122 Hz, which is perfectly fine when you are looking at a wide spectrum; you don't need finer resolution.  However, as you zoom in, it starts to get too coarse.  So, in my case, as soon as you zoom in to a span narrower than 250 kHz, I decimate the 2 Msps down to 250 ksps and then perform a 16384-point FFT on those samples, which brings the resolution to about 15 Hz.  Much better for HF work.

When I do so, I see that the SNR is the same (I think) but the background noise baseline level has gone up.

This is not a problem, since I can subtract a fixed value from the data points before converting to dB.

This all seems to work quite well.  Am I doing something wrong?

Reply by Tim Wescott, June 13, 2016

When you decimate you don't change the total noise power (I'm assuming the noise is white).  In frequency-domain terms this means that you've just taken the noise energy that was distributed over 2 MHz of spectrum and aliased it eight times, so that it is now distributed over 250 kHz of spectrum.  That should increase the noise spectral density by a factor of 8, or a bit less than 10 dB.

If you were to anti-alias filter and then decimate, the noise spectral density should remain the same.
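
To illustrate the difference, here is a short Python/SciPy sketch (all parameters invented for illustration): white noise at 2 Msps is downsampled by 8, once by simply keeping every 8th sample and once through scipy.signal.decimate's built-in FIR anti-alias filter.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(3)
    fs = 2_000_000
    x = rng.standard_normal(1_000_000)             # white noise, unit variance

    naive    = x[::8]                              # just throw away 7 of every 8 samples
    filtered = signal.decimate(x, 8, ftype='fir')  # lowpass first, then downsample

    f, P0 = signal.welch(x,        fs,      nperseg=4096)
    f, P1 = signal.welch(naive,    fs // 8, nperseg=4096)
    f, P2 = signal.welch(filtered, fs // 8, nperseg=4096)

    print(10*np.log10(np.median(P1) / np.median(P0)))  # ~ +9 dB: aliased noise density
    print(10*np.log10(np.median(P2) / np.median(P0)))  # ~  0 dB: density unchanged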

Reply by tomb18, June 13, 2016

Hi,

Ok, but I do pass the I and Q signals through FIR filters with a cutoff frequency of 125 kHz and then decimate.  Isn't that supposed to be an anti-aliasing filter?


Reply by Tim Wescott, June 13, 2016

You didn't mention that part!  The next thing to consider is that getting the scaling right in this FFT stuff is always confusing.  I'm sure there are people who do it all the time and just know what's what -- I have to get out pencil and paper and crank through it each time.

Maybe it's because the bin is effectively 8 times wider, too?  In which case my original numbers were off.

If it were me, I'd crank through the numbers.  It builds character -- and, more importantly, DSP math skills.

Reply by Rick Lyons, June 13, 2016

tomb18,

I agree with Prof. Smith (JOS). The average signal-to-noise ratio (SNR) of your decimated signal should be equal to the average SNR of your original input signal. My guess as to why you’re seeing a rise in the decimated signal’s noise floor is that (1) your pre- and post-decimation FFT sizes are different, and (2) you’re plotting your spectra using a logarithmic vertical axis.