I asked this question before, but I thought I would rephrase it a bit with a little more background.
I have a software defined radio that provides a quadrature signal. After performing an FFT and plotting the resulting spectrum, there is a strong spike at the center of the spectrum (at DC).
The solution is supposed to be simple. According to this article,
https://www.rtl-sdr.com/removing-that-center-frequ... the solution is to calculate a weighted average over "ongoing" samples and subtract it from each "future" sample. The author also provides Python source code, but I don't know anything about Python.
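The article's "weighted average over ongoing samples" amounts to an exponentially weighted running average that is updated as each new sample arrives and then subtracted from the samples that follow. A minimal sketch of that idea (the function name and the `alpha` smoothing parameter are my own choices, not from the article):

```python
import numpy as np

def remove_dc_running(iq, alpha=0.99):
    """Track DC with an exponentially weighted running average and
    subtract it. alpha near 1 means a long memory, so the average
    follows only the very-low-frequency (DC) content."""
    avg = 0.0 + 0.0j
    out = np.empty_like(iq)
    for n, x in enumerate(iq):
        avg = alpha * avg + (1.0 - alpha) * x  # updated on each "ongoing" sample
        out[n] = x - avg                       # subtracted from the current sample
    return out

# A tone riding on a complex DC offset: the offset gets tracked and removed.
n = np.arange(4096)
z = (0.5 + 0.25j) + np.exp(1j * 2 * np.pi * 0.1 * n)
cleaned = remove_dc_running(z)
```

This is "ongoing" in the sense that the average keeps adapting as samples arrive, and each "future" sample has the latest estimate subtracted from it.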
So I tried the following: I have 16384 i samples and 16384 q samples. I took the average of the i samples and the average of the q samples separately, and then subtracted each average from its corresponding samples.
Now this did result in a small reduction of the spike, but it didn't really get rid of it. Software that does this properly completely eliminates both the spike and the hump around DC.
So what am I doing wrong? I know I am not using a weighted average, but weighted with what? And what's with the "ongoing" and "future" samples?
Thanks for any help.
In the time domain, DC can be of two types:
1) a DC offset common to all values, or
2) a DC bias within some of the signal values, unrelated to any offset.
Removing a DC offset is easy, by subtracting the mean, as you know.
The DC element in 2) above could be due to the nature of the signal, e.g. an excessive run of zeros or ones in a stream. Standard man-made signals tend to spread their energy so that all components, including DC, are roughly equal, which eliminates that cause. A designer like yourself can eliminate type 1) above by subtracting the mean.
Regarding your 'i' samples: if you computed the average of the 16384 'i' samples (a single number) and subtracted that average from each of the 'i' samples, then your new sequence of 'i' samples should have an EXTREMELY low average value. If that process isn't essentially removing the average of your original 'i' samples, then my guess is that you're not performing that "average computation & subtract" process correctly.
I suggest you carefully test the above DC removal scheme on just 5 or 10 'i' samples to see if it's doing exactly what you intend.
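For instance, a five-sample check (made-up numbers) makes it obvious whether the subtraction is doing what it should:

```python
import numpy as np

# A tiny sanity check: 5 made-up 'i' samples hovering around an offset of 100.
i = np.array([101.0, 99.0, 102.0, 98.0, 100.0])
i_centered = i - i.mean()   # the mean is 100.0

print(i_centered)           # [ 1. -1.  2. -2.  0.]
print(i_centered.mean())    # 0.0
```

If the centered sequence doesn't come out with an essentially zero mean, the implementation of the subtraction is at fault, not the method.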
It sure looks that way. I just did this again like you suggested and I got
I guess that qualifies as extremely low...
You might want to try the following 45 coefficients in a delay-line FIR filter to implement a DC blocking filter:
-0.001, -0.002, -0.003, -0.004, -0.006, -0.008, -0.010, -0.012, -0.014, -0.017, -0.020, -0.022, -0.025, -0.028, -0.030, -0.033, -0.035, -0.037, -0.039, -0.040, -0.041, -0.042, 0.958, -0.042, -0.041, -0.040, -0.039, -0.037, -0.035, -0.033, -0.030, -0.028, -0.025, -0.022, -0.020, -0.017, -0.014, -0.012, -0.010, -0.008, -0.006, -0.004, -0.003, -0.002, -0.001
If the passband gain ripple of the above taps is greater than you can tolerate, try the following 61 coefficients:
0.001, 0.001, 0.001, 0.001, 0.001, 0.001, 0, 0, -0.001, -0.002, -0.003, -0.004, -0.006, -0.008, -0.01, -0.012, -0.014, -0.017, -0.02, -0.022, -0.025, -0.028, -0.03, -0.033, -0.035, -0.037, -0.039, -0.04, -0.041, -0.042, 0.958, -0.042, -0.041, -0.04, -0.039, -0.037, -0.035, -0.033, -0.03, -0.028, -0.025, -0.022, -0.02, -0.017, -0.014, -0.012, -0.01, -0.008, -0.006, -0.004, -0.003, -0.002, -0.001, 0, 0, 0.001, 0.001, 0.001, 0.001, 0.001, 0.001
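To sanity-check and apply the quoted taps, one can build the symmetric coefficient array and convolve it with a test signal; a sketch using NumPy (note the small DC residue that comes from the taps being rounded to three decimals):

```python
import numpy as np

# One side of the symmetric 45-tap DC blocker quoted above.
side = [-0.001, -0.002, -0.003, -0.004, -0.006, -0.008, -0.010, -0.012,
        -0.014, -0.017, -0.020, -0.022, -0.025, -0.028, -0.030, -0.033,
        -0.035, -0.037, -0.039, -0.040, -0.041, -0.042]
taps = np.array(side + [0.958] + side[::-1])

# The DC gain of an FIR filter is just the sum of its taps; for a DC
# blocker it should be near zero (the 3-decimal rounding leaves a
# small residue here).
print(taps.sum())

# Filter a test signal: a tone riding on a DC offset of 0.5.
n = np.arange(4096)
x = 0.5 + np.cos(2 * np.pi * 0.1 * n)
y = np.convolve(x, taps, mode='same')
print(np.mean(y[100:-100]))  # residual DC, far smaller than the original 0.5
```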
Using a "folded FIR" block diagram implementation you can reduce the number of FIR filter 'multiplies per output sample' to roughly half the number of coefficients used in the DC blocking filter.
By the way, both of the above filters have guaranteed linear phase.
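A sketch of the folded idea, exploiting the tap symmetry (the name `folded_fir` is my own, and the loop is written for clarity rather than speed):

```python
import numpy as np

def folded_fir(x, taps):
    """Symmetric-FIR ('folded') filtering: because taps[k] == taps[N-1-k],
    pre-add the two delay-line samples that share a coefficient, roughly
    halving the multiplies per output sample. Matches np.convolve
    (mode='valid') for odd-length symmetric taps."""
    taps = np.asarray(taps, dtype=float)
    N = len(taps)
    assert N % 2 == 1 and np.allclose(taps, taps[::-1]), "odd symmetric taps"
    half = N // 2
    out = np.empty(len(x) - N + 1)
    for n in range(len(out)):
        seg = x[n:n + N]
        # Pre-add each pair of samples that shares a coefficient,
        # then do one multiply per pair plus one for the center tap.
        out[n] = np.dot(taps[:half], seg[:half] + seg[:half:-1]) \
                 + taps[half] * seg[half]
    return out

# Agrees with an ordinary convolution on a small symmetric example:
x = np.random.default_rng(0).normal(size=200)
taps = [-0.1, -0.2, 0.6, -0.2, -0.1]
same = np.allclose(folded_fir(x, taps), np.convolve(x, taps, mode='valid'))
print(same)  # True
```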
Regarding weights: with a plain average the term is really meaningless jargon, since you just add n samples and divide by n to get the mean, i.e. every sample gets the same weight.
You should also note that there is a difference between a running average and a block-based average.
If you compute the average of the current block and apply it to the next block, you will get some surprises.
When you are talking about a block, what exactly do you mean?
The SDR delivers 200 i samples and 200 q samples in each callback. I continue acquiring samples until 16384 samples of each are obtained and then I average them separately and send them off to the FFT code.
I don't apply the same average to the next set of 16384 samples; it is computed all over again for each block.
One obvious question is whether your samples are signed or unsigned numbers. If your ADC provides unsigned numbers, you need to convert to signed values before removing the DC component.
From your description it's unclear if you're mixing I and Q statistics.
I had success removing the DC component by taking the mean of the I samples and subtracting that value from each of them, then doing the same separately for the Q samples.
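That procedure, in a short sketch (function names are mine):

```python
import numpy as np

def remove_dc_block(i, q):
    """Block-based DC removal: subtract each component's own mean,
    computed over the same block that is about to be FFT'd."""
    return i - np.mean(i), q - np.mean(q)

# The same thing expressed on complex samples z = i + 1j*q,
# since mean(z) = mean(i) + 1j*mean(q):
def remove_dc_complex(z):
    return z - np.mean(z)

# Example: noisy I/Q with distinct offsets on each component.
rng = np.random.default_rng(1)
i = 3.0 + rng.normal(size=16384)
q = -2.0 + rng.normal(size=16384)
i0, q0 = remove_dc_block(i, q)
```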