
using IIR or FIR to implement the lowpass filter for downsampling?

Started by Nasser M. Abbasi March 5, 2012
On Tue, 06 Mar 2012 11:47:03 -0500, robert bristow-johnson wrote:

> On 3/6/12 12:23 AM, Tim Wescott wrote:
>> On Mon, 05 Mar 2012 12:49:31 -0500, robert bristow-johnson wrote:
>>
>>> On 3/5/12 11:35 AM, Tim Wescott wrote:
>>>>
>>>> One does get some computational improvement in an FIR because one
>>>> can skip computations where one can't in an IIR --
>>>
>>> kinda hard to do efficient polyphase filtering with IIR.
>>
>> I was kinda snippy the last time,
>
> don't sweat it. i am on and off here spottily. and i don't think
> you're one of the snippy's here at comp.dsp. whether or not i'm in
> that group is for other people to judge.
>
>> but: I'm not so sure that it's all that bad. If you use a filter
>> form that realizes all poles in the states and only applies zeros on
>> the output side, then I think you could do polyphase filtering with
>> IIR, and do it fairly efficiently.
>
> well, you don't have to compute the feedforward paths for output
> samples you don't keep, but you do have to do it for the feedback
> paths for *every* intermediate sample.
>
> the issue is: what if the upsample ratio is very large? like say,
> what if you're converting from 48000 to 44100? you have to upsample
> by 137 (stuffing in 136 zeros), LPF, and then throw away 159 out of
> 160 samples. with FIR you just drop anchor at, pick the correct set
> of coefficients (that's the "polyphase" thingie), and do a, say,
> 32-tap FIR. with IIR you have to do *something* for each of those
> 160 samples, including the 159 you don't keep.
No you don't. You only need to run the IIR filter kernel at 48 ksps,
then for the output samples you can "drop anchor" (I like that phrase)
at the appropriate times and read out the best estimate of the output at
that point. You do need 147 (your 137 was a typo, I think) different
sets of output coefficients to cycle through as you read out at 44100,
but this is akin to the 147 different FIR kernels that you need. So all
in all, "polyphase filtering with IIR" vs. polyphase with FIR should
work out to a very similar tradeoff to "regular old" filtering with IIR
vs. FIR.
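To make the polyphase bookkeeping above concrete, here is a minimal
Python sketch of the FIR side of the comparison. The function name, the
32-taps-per-phase figure, and the SciPy windowed-sinc prototype are just
illustrative choices, not anything from the posts above. For 48000 ->
44100 the factors are L = 147 up and M = 160 down, and because the
phase index picks which subset of taps to use, only taps_per_phase
multiplies happen per output sample; the zero-stuffed samples are never
actually computed.

import numpy as np
from math import gcd
from scipy import signal

def resample_poly_fir(x, fs_in=48000, fs_out=44100, taps_per_phase=32):
    # Rational resampling by L/M with a polyphase FIR.
    # For 48 kHz -> 44.1 kHz: L = 147, M = 160 (48000 * 147 / 160 = 44100).
    g = gcd(fs_in, fs_out)
    L, M = fs_out // g, fs_in // g

    # Prototype lowpass designed for the (virtual) upsampled rate L*fs_in;
    # the cutoff must respect both the interpolation images and the new
    # Nyquist, hence 1/max(L, M).  Gain L makes up for the zero-stuffing.
    h = signal.firwin(taps_per_phase * L, 1.0 / max(L, M),
                      window=('kaiser', 9.0)) * L

    n_out = (len(x) * L) // M
    y = np.zeros(n_out)
    for m in range(n_out):
        n_up = m * M        # position of this output in the virtual upsampled stream
        p = n_up % L        # which polyphase branch -- the "drop anchor" step
        i = n_up // L       # newest input sample this output depends on
        acc = 0.0
        for k in range(taps_per_phase):   # only taps_per_phase MACs per output
            j = i - k
            if 0 <= j < len(x):
                acc += h[p + k * L] * x[j]
        y[m] = acc
    return y

(In practice scipy.signal.resample_poly does this kind of polyphase
resampling for you; the explicit loop is only there to show where the
per-output phase selection happens.)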
> but if the OP is simply downsampling by a factor of 2 or 3, i think
> that an IIR LPF (and tossing out 1 or 2 samples) might be more
> efficient than a 32-tap FIR LPF.
>
>> I'm pretty sure that this would mean that you'd have to implement
>> the filter stages in parallel, rather than in cascade;
>
> dunno what you mean here, Tim.
>
>> this, in turn, would mean that the coefficients for going from the
>> states to the output would have obscure (and possibly lengthy)
>> derivations. But I think it could be done.
>>
>> But then -- the OP didn't mention polyphase filtering, just
>> decimation.
>
> right. which is the *latter* portion of the chore of polyphase
> filtering. essentially what the OP is doing is polyphase filtering
> without the step of upsampling and zero-stuffing. everything else is
> the same: the LPFing and picking out the samples you wanna keep.
>
>> If you're decimating by an integer amount, then you're not going to
>> be doing polyphase filtering at all -- just ignoring some of the
>> output.
>
> well, you're doing *part* of what polyphase filtering is about.
> conceptually, with polyphase filtering, you are upsampling, then
> LPFing, then ignoring some of the output. doing it with an FIR saves
> you from unnecessary computations of output samples that you know a
> priori are ignored. but the IIR still has to go through those ignored
> samples because it's needed to update its states.
See above -- update the states, yes. Do the whole upsample/downsample
thing -- no. I'm not even sure that the IIR polyphase filtering suffers
much disadvantage in cases of extreme downsampling: yes, in the IIR case
you have to update the kernel every time, but in the FIR case you have a
LONG filter kernel to compute each time.

And besides, if you're doing extreme downsampling then you can probably
come pretty close by preceding your polyphase filter with a CIC stage or
three, then use a final filter to clean up the signal the way you want
it as you downsample.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
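For what it's worth, here is a minimal Python sketch of the "CIC stage
or three" idea from the post above -- a hedged illustration only, with
made-up names and defaults, not anyone's actual code. N integrators run
at the input rate, the decimation happens, then N combs run at the
output rate, with no multipliers anywhere.

import numpy as np

def cic_decimate(x, R=16, N=3, M=1):
    # N-stage CIC decimator: N integrators at the input rate, decimate
    # by R, then N combs (differential delay M) at the output rate.
    # Multiplier-free; the DC gain is (R*M)**N, which a fixed-point
    # implementation has to budget for in its register widths.
    y = np.asarray(x, dtype=np.int64)   # integer input assumed

    for _ in range(N):                  # integrators at the high rate
        y = np.cumsum(y)

    y = y[::R]                          # keep one sample in R

    for _ in range(N):                  # combs at the low rate: y[n] - y[n-M]
        d = np.zeros_like(y)
        d[M:] = y[:-M]
        y = y - d

    return y

The passband droop and the gentle stopband of the CIC are exactly why a
clean-up FIR or IIR follows it, as suggested above.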
my generic answer to the generic question:

If you're doing communications, audio, etc., where real-time response on
the time scale of a FIR filter's group delay isn't critical: consider
FIR first.

In control systems where the added group delay affects a feedback loop,
consider IIR first.

If you're struggling with the computational effort, look for an efficient
implementation that gets the job done (for example, CIC is recursive but
FIR). There are a few well-known optimizations for FIR, such as exploiting
a symmetric impulse response, a half-band filter with every second
coefficient zero, or a [1/2 1 1/2] three-tap FIR at the highest rate
using bit-shift multipliers.
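As a rough illustration of two of those optimizations in one place (the
half-band zeros plus the folded symmetric taps), here is a Python sketch
of a decimate-by-2. The function name, tap count, and the windowed-sinc
design from firwin are placeholders; a real design would be spec-driven.

import numpy as np
from scipy import signal

def halfband_decimate2(x, numtaps=31):
    # Decimate by 2 with a half-band FIR: every second coefficient is
    # exactly zero (except the center), and the impulse response is
    # symmetric, so the two matching input samples are added before the
    # single multiply.  9 multiplies per output here instead of 31.
    assert numtaps % 4 == 3, "length 4K+3 keeps the half-band zero pattern"
    h = signal.firwin(numtaps, 0.5)      # cutoff at fs/4 -> half-band zeros
    c = numtaps // 2                     # center tap index, h[c] ~ 0.5

    ks = np.arange(1, c + 1, 2)          # odd offsets from the center are nonzero
    hk = h[c + ks]

    y = []
    for n in range(c, len(x) - c, 2):    # compute every second output only
        acc = h[c] * x[n]
        for k, hv in zip(ks, hk):
            acc += hv * (x[n - k] + x[n + k])   # folded symmetric taps
        y.append(acc)
    return np.array(y)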
> I'm not even sure that the IIR polyphase filtering suffers much
> disadvantage in cases of extreme downsampling: yes, in the IIR case you
> have to update the kernel every time, but in the FIR case you have a
> LONG filter kernel to compute each time.
Here is a great example of a polyphase IIR filter used for decimation. I
doubt that one could do better with any form of FIR filter, considering
all aspects of the implementation such as data word length, coefficient
quantization, amount of memory, etc.:

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4645807

I have a copy here if anyone is interested.
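For readers who just want to see the general shape of that kind of
structure, here is a rough Python sketch of a generic two-path polyphase
all-pass IIR half-band decimator. To be clear, this is the textbook
structure, not the quantized design from the paper, and the coefficient
values below are only binary-friendly placeholders, not values meeting
any particular spec; a proper half-band design procedure supplies them.

import numpy as np

def allpass1(x, a):
    # First-order all-pass A(z) = (a + z^-1) / (1 + a*z^-1),
    # implemented with a single multiply per sample.
    y = np.zeros(len(x))
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = a * (xn - y1) + x1
        x1, y1 = xn, y[n]
    return y

def polyphase_iir_decimate2(x, a0=0.125, a1=0.5625):
    # H(z) = 0.5 * [A0(z^2) + z^-1 * A1(z^2)].  After the polyphase
    # split, the even input samples drive A0 and the odd samples drive
    # A1, both running at the *output* rate, so the whole decimator
    # costs two multiplies per output sample.
    x = np.asarray(x, dtype=float)
    even = x[0::2]                                        # x[2m]
    odd = np.concatenate(([0.0], x[1::2]))[:len(even)]    # x[2m-1]
    return 0.5 * (allpass1(even, a0) + allpass1(odd, a1))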