## Understanding the concept of Polyphase Filters

Started 1 year ago · 8 replies · 924 views

Hey guys,

I am trying to understand the concept of polyphase filters. I'm starting with the concept, not with the exact means of implementation or the mathematics involved.

So, let's consider a downsampling scenario: the downsampling operation shifts parts of the spectral images, which can result in aliasing. That's why an anti-aliasing filter should be applied before downsampling. Polyphase filters, however, allow for an implementation in which the anti-aliasing filter is applied after downsampling without introducing aliasing. So, why would we want that? I think this is favourable because applying the anti-aliasing filter before the downsampling operation bandlimits the original audio signal and hence results in loss of information. The polyphase filter would then give a way of downsampling without losing information from the original audio signal. Am I correct?
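To make the aliasing part concrete, here is a small pure-Python sketch (made-up sample rate, tone frequency, and decimation factor, not taken from the thread) showing what happens when you decimate *without* any anti-aliasing filter: a tone above the new Nyquist frequency folds down to a lower frequency.

```python
import math

fs = 48000.0                # original sample rate (made-up example)
M = 4                       # decimate by 4 -> new Nyquist = fs/(2M) = 6 kHz
f_tone = 10000.0            # above the new Nyquist: this tone will alias
N = 4800

x = [math.cos(2 * math.pi * f_tone * n / fs) for n in range(N)]
y = x[::M]                  # naive decimation, no anti-alias filter

# After decimation the 10 kHz tone appears at |f_tone - fs/M| = 2 kHz,
# i.e. it is indistinguishable from a genuine 2 kHz tone at the new rate.
f_alias = abs(f_tone - fs / M)
expected = [math.cos(2 * math.pi * f_alias * n / (fs / M))
            for n in range(len(y))]
assert all(abs(a - b) < 1e-9 for a, b in zip(y, expected))
```

Once the samples are folded like this, no filter at the low rate can separate the alias from a real 2 kHz component, which is why some filtering has to happen (conceptually) at the high rate.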

Here is a figure of how the polyphase filter could be implemented: either with the downsampling after the filtering, or before it.

Now, the same can be done for upsampling: upsampling creates unwanted spectral images, which need to be filtered out after the upsampling operation. A polyphase filter, however, gives the possibility to apply the upsampling before or after the filter. I really don't see why we would want to apply upsampling after the filter, though. Can anybody explain this to me, please?

I'm not sure your question really has much to do with polyphase structures. I'm also not sure your statements about anti-aliasing are strictly true about polyphase architectures that are upsampling or downsampling.

In my experience "polyphase" means the filter coefficients are subsets of a larger set and the "phase" of the subset is being selected dynamically (indicated by the M or L superscripts in your examples, which should probably really be subscripts, unless whoever drew that actually did mean exponentiation, in which case I have no idea what's going on there). Changing the phase of the filter coefficient subset allows adjustment of the sampling instant of the input or output samples.
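The "phase selects the sampling instant" idea can be demonstrated with a toy pure-Python sketch. It uses a triangle (linear-interpolation) prototype filter and a ramp input, both chosen here just so the expected fractional-delay values can be checked exactly; none of it comes from the thread's figures.

```python
L = 4
# Triangle prototype of a linear interpolator: h[k] = 1 - |k - L| / L.
h = [1 - abs(k - L) / L for k in range(2 * L)]
x = [float(2 * n + 1) for n in range(8)]   # a ramp: delays are easy to check

def phase_filter(m, x):
    """Run only the phase-m coefficient subset h[m], h[m+L], h[m+2L], ..."""
    hm = h[m::L]                           # two taps for a linear interpolator
    return [sum(hm[k] * x[n - k] for k in range(len(hm)) if n - k >= 0)
            for n in range(len(x))]

# Selecting phase m shifts the effective sampling instant by m/L of a
# sample: for the ramp, output[n] equals the ramp evaluated at n - 1 + m/L.
for m in range(L):
    y = phase_filter(m, x)
    for n in range(1, len(x)):             # skip the start-up transient
        t = n - 1 + m / L                  # fractional sampling instant
        assert abs(y[n] - (2 * t + 1)) < 1e-9
```

Swapping `m` from sample to sample is exactly the dynamic phase selection described above: same prototype filter, different coefficient subset, different output timing.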

Whether the anti-alias filter needs to be (or can be) applied before or after doesn't really follow from it being a polyphase filter; it just depends on the amount of decimation being done and the filter response. This is the same for any decimating filter. If aliasing happens during a decimating polyphase filter process, you still won't be able to remove it afterwards.

Not sure whether any of that is helpful or not.

This is a small chapter of a book on filter banks.

It is a nice introduction to polyphase filters. Also a supplement that contains some matlab code.

might also want to chase down the book "Multirate Signal Processing for Communication Systems"

fred harris

FBMC_book_ch_6_text_5.pdf

Hi Dr. Harris,

Regarding the book on filter banks - what is it? Is there a place to purchase the complete volume?

Thanks!

Code_warrior,

The book in which my chapter appears is

"Orthogonal Waveforms and Filter banks for Future Communication Systems"

Editors: Markku Renfors, Xavier Mestre, Eleftherios Kofidis, Faouzi Bader

Elsevier, Academic Press, 2017, ISBN 978-0-12-810384-5

Might also have a look at my book

"Multirate Signal processing for Communication Systems"

Pearson, 2004

fred

Excellent, thanks! I'll check it out.

Your Multirate DSP for Comms book is always nearby, own two copies for home and office :)

The two filtering topologies (before vs. after) give exactly the same results in both the upconversion and downconversion cases. So for your question about downsampling, the reason is not the one you invoke.

Polyphase architecture is intimately linked to processing power.

Downconversion: here you will have to throw away M-1 samples out of M. Why should you bother computing them?

Upconversion: during the upsampling process, we introduce L-1 zeros between each pair of original samples. Why should you multiply them by filter taps, when you already know the result?
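The savings can be counted directly. A tiny pure-Python sketch (filter length, signal length, and factors are made-up examples) tallying one multiply per tap per computed output sample:

```python
M = 4                # decimation factor (example)
L = 4                # interpolation factor (example)
h = [1.0] * 16       # 16 arbitrary taps (example)
x = [1.0] * 64       # 64 input samples (example)

# Downconversion, direct: compute EVERY high-rate output, then discard
# M-1 out of M of them.
mults_down_direct = len(h) * len(x)

# Downconversion, polyphase: only the kept len(x)//M outputs are ever
# computed; the discarded ones cost nothing.
mults_down_poly = len(h) * (len(x) // M)
assert mults_down_poly == mults_down_direct // M    # exactly M times cheaper

# Upconversion, direct: zero-stuffing makes the filter run at the high
# rate, multiplying taps by known zeros (L-1)/L of the time.
mults_up_direct = len(h) * (len(x) * L)

# Upconversion, polyphase: the subfilters run at the low input rate and
# never touch the inserted zeros.
mults_up_poly = len(h) * len(x)
assert mults_up_poly == mults_up_direct // L        # exactly L times cheaper
```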

hey,

thx very much for your reply!

So, are you saying: For downconversion and for upconversion we should use the left of the given figures?

And the reason is that for downconversion the transfer function H(z^M) indicates that we do not compute filter coefficients for the M-1 samples that we throw away and for upconversion H(z^L) indicates we do not compute coefficients for the L-1 zeros that we have inserted?

No, you should use the figures on the right.

The left figure in the downconversion case encourages you to compute all the samples and then throw away M-1 out of M.

The left figure in the upconversion case makes you introduce the zeros before filtering.

On the right downconversion graph, the combination of the Z^-1 and the downsampler (left part of the graph) is actually performing a distribution of the samples (like dealing cards) to the filters, and the right part of the graph is just a summation of all outputs.
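That card-dealing structure can be written out literally in a short pure-Python sketch (taps and input values are arbitrary examples): deal sample x[nM - m] to branch m, run each short subfilter h[m::M] at the low rate, and sum the branch outputs. The result matches filtering at the high rate and then keeping every M-th sample.

```python
M = 3
h = [0.5, 1.0, 1.5, 1.0, 0.5, 0.25]      # arbitrary taps (example)
x = [float((3 * n) % 5) for n in range(30)]

def fir(h, x):
    """Direct-form FIR with zero initial conditions."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

# "Deal the cards": the delay chain + downsamplers route x[nM - m] to
# branch m. Branch 0 gets x[0], x[M], x[2M], ...; branch 1 gets the
# samples one tick earlier, and so on.
n_out = len(x) // M
branches = [[x[n * M - m] if n * M - m >= 0 else 0.0 for n in range(n_out)]
            for m in range(M)]

# Each branch runs the short subfilter h[m::M] at the LOW rate; the right
# part of the graph is just a sample-by-sample sum of the branch outputs.
y_poly = [sum(vals) for vals in
          zip(*(fir(h[m::M], branches[m]) for m in range(M)))]

# Same result as filtering at the high rate, then keeping every M-th sample.
y_ref = fir(h, x)[::M]
assert all(abs(a - b) < 1e-9 for a, b in zip(y_poly, y_ref))
```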

On the right upconversion graph, the left part sends every input sample to all the filters, and the upsamplers combined with the Z^-1 delays are equivalent to multiplexing all the outputs. At each time tick, all the branch outputs are summed up, but they all have a different delay, so a valid output from one filter is added to zeros from the other lines.
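The upconversion side can be sketched the same way in pure Python (again with arbitrary example taps and input): every input sample feeds all L subfilters at the low rate, and since only one branch contributes a non-zero value at each high-rate tick, the sum reduces to interleaving the branch outputs. This matches zero-stuffing followed by filtering at the high rate.

```python
L = 3
h = [1.0, 0.5, 0.25, 0.75, 0.3, 0.1]   # arbitrary taps (example)
x = [2.0, -1.0, 0.5, 3.0, -2.0]

def fir(h, x):
    """Direct-form FIR with zero initial conditions."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

# Left part of the graph: every input sample goes to ALL L subfilters,
# each running at the LOW input rate.
branch_out = [fir(h[m::L], x) for m in range(L)]

# Right part: the upsamplers + delays multiplex the branch outputs; the
# "sum" just commutates between branches since the others contribute zero.
y_poly = [branch_out[n % L][n // L] for n in range(len(x) * L)]

# Reference: zero-stuff by L, then filter at the high rate.
xu = [x[n // L] if n % L == 0 else 0.0 for n in range(len(x) * L)]
y_ref = fir(h, xu)
assert all(abs(a - b) < 1e-9 for a, b in zip(y_poly, y_ref))
```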