I've recently tried to analyze the math behind a simple moving-average filter followed by a decimator, but I was unable to find any paper on the web.
A practical implementation in a real-time embedded system could simply be based on a circular buffer of size N: when the buffer is full, the average is computed over the N samples.
So, letting Ts be the sample period, we get an updated output value every N*Ts seconds.
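To make the scheme concrete, here is a small Python sketch of that circular-buffer averager (my own illustration, not from any embedded code; the class name is made up). It stores N samples and emits one averaged output every N inputs, i.e. one output per N*Ts seconds:

```python
import numpy as np

class MovingAverageDecimator:
    """Length-N moving average that emits one output every N input samples."""
    def __init__(self, N):
        self.buf = np.zeros(N)   # circular buffer of the last N samples
        self.idx = 0             # next write position

    def push(self, sample):
        """Store a sample; return the block average when the buffer fills, else None."""
        self.buf[self.idx] = sample
        self.idx += 1
        if self.idx == len(self.buf):
            self.idx = 0
            return self.buf.mean()
        return None

dec = MovingAverageDecimator(4)
outputs = [y for y in (dec.push(s) for s in [1, 2, 3, 4, 5, 6, 7, 8]) if y is not None]
# outputs == [2.5, 6.5]: one output per 4 inputs
```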
The frequency response of the moving-average LPF is a section of the well-known sinc() shape. But I was wondering what the overall response of the filter is after decimation.
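That sinc()-like shape is easy to check numerically. A quick NumPy sketch (mine, not from the notebook) compares the directly evaluated DTFT of the length-N averaging kernel with the closed-form aliased-sinc expression:

```python
import numpy as np

N = 8
h = np.ones(N) / N                        # length-N moving-average kernel
f = np.linspace(0, 0.5, 1001)             # normalized frequency (cycles/sample)
# DTFT of the kernel, evaluated directly
H = np.abs(np.array([np.sum(h * np.exp(-2j * np.pi * fk * np.arange(N)))
                     for fk in f]))
# Closed form: |sin(pi f N) / (N sin(pi f))|, a periodic ("aliased") sinc
with np.errstate(invalid='ignore', divide='ignore'):
    H_closed = np.abs(np.sin(np.pi * f * N) / (N * np.sin(np.pi * f)))
H_closed[0] = 1.0                         # limit at f = 0
print(np.max(np.abs(H - H_closed)))       # should be ~0 (machine precision)
```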
So I've read some papers, especially here on dsprelated.com, and I've written an IPython notebook in order to simulate such a response and improve my understanding of the practical phenomena I've observed in some embedded systems implementing this kind of filtering.
You can find it here:
In the notebook the decimation factor is equal to N, the number of samples used for averaging.
Note that GitHub shows a static preview of the rendered notebook: the "Unknown character" in the first formula is simply a '-' (minus) sign.
I kindly ask the opinion of the experts here, in order to understand whether or not the assumptions and the result are correct.
Could you please share with me your knowledge?
Thank you very much indeed in advance,
A moving average followed by a downsampler fits in with general DSP integer downsampling. The lowpass filter is a rectangular filter which (as you say) turns into a sinc function in the frequency domain, with some rolloff of the passband (frequencies < fs_new/2), and it will cause some aliasing due to the extended (infinite) series of "bumps" in the stopband (frequencies >= fs_new/2).
Intuitively, the critically sampled rectangular prefiltering means that a sample that is (the continuous-time equivalent of) N/2 samples old will fall either into "this" time frame (being weighted equally with everything else) or into the "previous" time frame, depending on a minuscule time difference. That seems like amplifying (usually) irrelevant differences.
fs_org = 1/Ts;
fs_new = 1/(N*Ts);
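To put numbers on that rolloff and aliasing, here is a NumPy sketch (my own, in normalized frequency so fs_org = 1) evaluating the closed-form response at the decimated band edge fs_new/2 and at the center of the first stopband lobe, which folds back onto the passband after decimation:

```python
import numpy as np

N = 8
# Band edge of the decimated system, in cycles/sample at the original rate
f_edge = 1 / (2 * N)
H_edge = abs(np.sin(np.pi * f_edge * N) / (N * np.sin(np.pi * f_edge)))
print(20 * np.log10(H_edge))   # about -3.9 dB of passband droop at fs_new/2
# Center of the first stopband lobe (aliases onto the passband after decimation)
f_lobe = 1.5 / N
H_lobe = abs(np.sin(np.pi * f_lobe * N) / (N * np.sin(np.pi * f_lobe)))
print(20 * np.log10(H_lobe))   # roughly -13 dB: limited alias rejection
```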
The pseudo-MATLAB code below is sloppy (non-functional), partly to highlight the steady-state behaviour in a few lines, and also because I did not test it :-)
1. This function can perhaps be implemented most intuitively (but least efficiently) as a general FIR filter followed by sample dropping:
x = randn(1000, 1);
kernel = ones(N, 1)/N;   % divide by N so the output is an average, not a sum
x_lp = conv(x, kernel);
x_dec = x_lp(N:N:end);   % keep every N-th output, starting at the first full block
2. A more efficient implementation moves the decimation earlier in the pipeline (avoiding calculations that won't be used anyway):
x_dec(1) = sum(x(1:N))/N;
x_dec(2) = sum(x((1:N)+N))/N;
3. If your data is floating point, you could use a variable-cutoff first-order IIR filter (exponential smoother) to approximate the desired response
n = 0; tmp = 0; d = 1;
for k = 1:length(x)
    tmp = (n*tmp + x(k))/(n + 1);   % running mean of the block so far (alpha = n/(n+1))
    n = rem(n + 1, N);
    if n == 0
        x_dec(d) = tmp;  d = d + 1; % emit one output per N input samples
    end
end
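A quick Python check (my own sketch) that the variable-alpha first-order recursion, restarted every N samples, reproduces the exact block average:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
x = rng.standard_normal(20)

# Running mean via a first-order recursion with varying alpha = n/(n+1),
# restarted at each block boundary
x_dec = []
tmp, n = 0.0, 0
for sample in x:
    tmp = (n * tmp + sample) / (n + 1)   # tmp = mean of the current block so far
    n += 1
    if n == N:
        x_dec.append(tmp)
        n = 0                            # restart the smoother for the next block
print(np.allclose(x_dec, x.reshape(-1, N).mean(axis=1)))   # True
```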
4. You could use a CIC filter.
tmp = cumsum([0; x]);                     % integrator: tmp(k+1) = running sum of x(1:k)
d = 1;
for k = N:N:length(x)
    x_dec(d) = (tmp(k+1) - tmp(k+1-N))/N; % comb, evaluated only at the decimated rate
    d = d + 1;
end
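The integrator-comb structure can be verified in a few lines of Python (my own sketch): a running sum with a leading zero, differenced N samples apart at the low rate, equals the moving average followed by N:1 decimation:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
x = rng.standard_normal(100)

# Integrator: running sum with a leading zero so tmp[k] = sum of x[:k]
tmp = np.concatenate(([0.0], np.cumsum(x)))
# Comb at the decimated rate: difference of running sums N apart, scaled by 1/N
x_dec = (tmp[N::N] - tmp[:-N:N]) / N
print(np.allclose(x_dec, x.reshape(-1, N).mean(axis=1)))   # True
```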
Thank you very much indeed, KnutInge, for the feedback and the suggestions you gave me. Thank you for your time.
I just wanted to better understand a practical implementation that a colleague of mine usually does with your method number (2).
Hello Maurizio_Malaspina. I'm not sure if it's true, but I think you may be implementing what is called a cascaded integrator-comb (CIC) filter. If that's true, please know that in a few days I'm going to post a detailed blog on the dsprelated.com web site discussing the behavior, mathematics, and implementation of CIC filters.
Hello Rick Lyons, and thanks for your advice!
I suppose you're right, as suggested by KnutInge too in his implementation suggestion no. 4.
My goal is to go deeper into this fantastic theory, so I can't wait to read your article... it promises to be really interesting!
Read you soon!