I need to downsample a signal from 384 kHz to 192 kHz. I have read that you first apply an FIR filter and then throw away one out of every two samples. Rather than programming the filter, can you use an approximation: average each pair of consecutive samples and use the result in place of the two samples? That would be very easy and quick. Would it provide reasonable filtering?

Thanks

Low-pass filtering before decimation is required to limit the bandwidth of the signal being processed and to avoid the aliasing that can occur during downsampling.

Averaging is a kind of low-pass filtering; however, the frequency response of such a filter might be far from what you really want.

Thus, I believe coding a proper filter is the better solution. Since your sampling rates suggest an audio application, you could easily hear the difference.

To save processing time, notice that every second filtered sample is thrown away, so you don't need to calculate those samples at all.
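To make that concrete, here is a minimal NumPy sketch (my own illustration, not from the thread) of a decimate-by-2 that only computes the outputs that survive, by splitting the filter into its two polyphase branches:

```python
import numpy as np

def decimate2_polyphase(x, h):
    """FIR filter + decimate-by-2, computing only the samples that
    are kept (polyphase form). Equivalent to np.convolve(x, h)[::2]
    but with roughly half the multiplications."""
    he, ho = h[0::2], h[1::2]    # even- and odd-indexed taps
    xe, xo = x[0::2], x[1::2]    # even- and odd-indexed inputs
    ye = np.convolve(xe, he)     # branch 1
    yo = np.convolve(xo, ho)     # branch 2 (delayed by one output)
    y = np.zeros(max(len(ye), len(yo) + 1))
    y[:len(ye)] += ye
    y[1:1 + len(yo)] += yo       # align the delayed branch and sum
    return y
```

Each branch runs at the low rate, which is where the savings come from.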

So, first (and the part that everyone else left out) -- what's the expected signal content between 192 and 384kHz? If the answer is "nothing", then just decimate: you won't lose any information (i.e., you won't alias what's not there).

Second, and assuming you actually do have content to filter out, don't you know how to calculate the frequency response of a filter? It would be instructive to do it for your suggested filter: I think it would become immediately clear to you why it may not be the best approach.

Third, the Really Good way to do this would be to identify the band of frequencies that you want to keep and what doesn't matter. If the spectral content of the signal to be decimated is fairly flat, and if you want to keep everything below some frequency, then you want to pass everything from 0 to \( f_{keep} \), you want to block everything from \( 192 \mathrm{kHz} - f_{keep} \) to \( 192 \mathrm{kHz} \), and it doesn't much matter what you do to the stuff in between. The higher that \( f_{keep} \) is, the harder your job gets.
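As a rough sanity check on how hard the job gets, here is a back-of-the-envelope sketch using Kaiser's FIR length estimate (the 80 kHz passband and 74 dB rejection are example numbers, not from the thread):

```python
import numpy as np

fs = 384_000          # input rate
f_keep = 80_000       # example passband edge; must be < 96 kHz
atten_db = 74         # example stopband rejection

# Band plan for decimation by 2: pass 0..f_keep, block
# (fs/2 - f_keep)..fs/2; the gap in between is "don't care".
f_stop = fs / 2 - f_keep          # 112 kHz
trans = f_stop - f_keep           # 32 kHz transition band

# Kaiser's estimate: N ~ (A - 7.95) / (2.285 * delta_omega)
delta_omega = 2 * np.pi * trans / fs
ntaps = int(np.ceil((atten_db - 7.95) / (2.285 * delta_omega)))
print(ntaps)
```

Raising `f_keep` shrinks the transition band and the tap count grows accordingly, which is the "the higher `f_keep` is, the harder your job gets" point in numbers.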

This paper may help clarify things for you:

Lots of good comments here. Thanks.

As for signal content between 192 and 384 kHz, there will potentially be quite a bit, so a simple filter will not do.

This is the actual situation: I can sample from a software defined radio (SDR) at 384 kHz. The IQ signals eventually are plotted after an FFT. There is no demodulation done. The application is strictly a panadapter.

However, there are other amateur radio packages out there that will not work with this SDR, since they expect the IQ signals to be delivered as audio and thus work with sound devices running at 192 kHz. What I want to do is downsample the 384 kHz IQ signals to 192 kHz and then make them available as an audio output, which those packages could connect to with a Virtual Audio Cable.

So I guess I need a decent FIR filter to do the job. But is there another option? What about starting from the FFT: can't I just take 192 kHz worth of bins, perform an inverse FFT, and then send that out to the sound card? The reason I ask is that the tools I use have a limited selection of filters.

Thanks

If you start with the FFT and zero out all the bins outside the 192 kHz band you keep, that's similar to using a half-band filter. At higher downsampling ratios this can actually be a practical approach. But going down by only 2 means you will be inserting a possibly sharp transition at 192 kHz. Not a good idea.
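For illustration, here is what block-wise FFT decimation of a complex IQ block looks like (my own sketch); the hard spectral cut is exactly the sharp transition warned about above, and it rings at block edges:

```python
import numpy as np

def fft_decimate2(x):
    """Block FFT-based decimate-by-2 for a complex IQ block: keep
    the central half of the spectrum (|f| < fs/4) and inverse-
    transform at half the length. The brick-wall cut causes edge
    ringing, so this is an illustration, not a production method."""
    N = len(x)
    X = np.fft.fft(x)
    # Keep the first N/4 (positive) and last N/4 (negative) bins.
    Y = np.concatenate([X[:N // 4], X[-N // 4:]]) / 2
    return np.fft.ifft(Y)
```

For a tone well inside the kept band the result matches time-domain decimation exactly; the trouble shows up near the cut and at block boundaries.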

You might consider a half-band filter. Just a type of FIR filter....

"I have read that you first apply an FIR filter then you throw away 1 out of every 2 samples. Rather than programming the filter can you use an approximation by averaging each two consecutive samples and using the result in the place of the two samples?"

Those are exactly equivalent; your averaging filter is an FIR with coefficients [0.5, 0.5].

The short answer is: it kind of depends. What do you mean by "first apply an FIR filter"? Did you mean "low pass" rather than "finite impulse response"? As rrlagic indicated, an antialiasing low-pass filter is required to prevent, well, aliasing. An averaging filter is a type of LPF, but it is far from ideal. Try plotting the frequency responses in Matlab.
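Plotting it takes only a few lines with SciPy (a sketch, not from the original post). The two-tap averager is only about 3 dB down at 96 kHz, the new Nyquist after going from 384 kHz to 192 kHz, so it gives essentially no anti-alias protection at the band edge:

```python
import numpy as np
from scipy.signal import freqz

fs = 384_000
w, H = freqz([0.5, 0.5], fs=fs)   # response of the 2-tap averager
mag = np.abs(H)                   # analytically |cos(pi * f / fs)|

# Attenuation at 96 kHz, the post-decimation Nyquist frequency:
i96 = np.argmin(np.abs(w - 96_000))
print(20 * np.log10(mag[i96]))    # about -3 dB: far too little
```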

I arrive well after all these nice answers, but: downsampling by 2 using a half-band filter is very often (almost always) a good solution. The center tap is 1/2 and the filter is symmetric; furthermore, every other tap is 0, which reduces the amount of computation.

This kind of filter is pretty easy to build: you just specify the bandwidth you want to keep "untouched" (less than a quarter of the initial sampling rate, i.e. < 96 kHz) and the quality of that "untouched" spectrum, i.e. the ripple in this band. The ripple also determines the rejection on the other side of the spectrum (the part that will be aliased onto the wanted spectrum).

I have just tried it in Matlab: at a 384 kHz sampling rate, keeping 80 kHz with a ripple of 0.005 dB in the band (-74 dB on the rejected band) gives a filter of order 50 (51 taps). Taking the symmetry, the zero taps, and the downsampling into account, this means you would have to perform 13 pre-add/multiply/accumulate operations for every 2 input samples (plus the shift of the center tap).
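Roughly the same experiment can be done in SciPy (a sketch; the symmetric band edges around fs/4 with equal weights are what force the half-band structure):

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 384_000
f_keep = 80_000     # passband edge, as in the Matlab run above
numtaps = 51        # order 50

# Symmetric bands around fs/4 (pass 0..80 kHz, stop 112..192 kHz)
# with equal weights yield a half-band equiripple design.
taps = remez(numtaps, [0, f_keep, fs / 2 - f_keep, fs / 2],
             [1, 0], fs=fs)

center = numtaps // 2
# Half-band structure: center tap ~0.5, every other off-center tap
# ~0, so after folding the symmetry only ~13 multiplies remain.
print(taps[center], taps[center - 2], taps[center + 2])

# Check the stopband rejection actually achieved:
w, H = freqz(taps, worN=4096, fs=fs)
stop_db = 20 * np.log10(np.abs(H[w >= fs / 2 - f_keep]).max())
print(stop_db)
```

With equal weights the passband and stopband ripples come out equal, so the exact dB figures differ slightly from the weighted Matlab design, but the structure and the operation count are the same.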

The frequency response of the moving-average filter ([0.5 0.5]) is very poor, and you will get a lot of unwanted signal aliased on top of your signal of interest.