
Understanding the concept of Polyphase Filters

Started by Luk_11 6 years ago · 10 replies · latest reply 4 years ago · 11065 views

Hey guys, 

I am trying to understand the concept of polyphase filters. I'm starting with the concept, not with the exact means of implementation or the mathematics involved. 

So, let's consider a downsampling scenario: the downsampling operation shifts spectral images around, which can result in aliasing. That's why an anti-aliasing filter should be applied before downsampling. Polyphase filters, however, allow for an implementation that avoids aliasing even though the anti-aliasing filter is applied after downsampling. So why would we want that? I think this is favourable because applying the anti-aliasing filter before the downsampling operation bandlimits the original audio signal and hence results in a loss of information. The polyphase filter would then give a way of downsampling without having to lose information from the original audio signal. Am I correct?
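Here is a minimal numpy/scipy sketch of the aliasing effect I mean (just an illustration; the sample rate, tone frequencies and filter length are arbitrary choices): a tone above the new Nyquist folds into the band when every M-th sample is simply kept, and is suppressed when a lowpass anti-aliasing filter runs first.

import numpy as np
from scipy.signal import firwin, lfilter

fs = 48000                    # original sample rate (arbitrary for the demo)
M = 4                         # decimation factor -> new Nyquist is fs/(2*M) = 6 kHz
t = np.arange(4096) / fs
x = np.sin(2*np.pi*1000*t) + np.sin(2*np.pi*10000*t)  # 1 kHz in-band, 10 kHz above the new Nyquist

# (a) keep every M-th sample with no filtering: the 10 kHz tone folds to 12 kHz - 10 kHz = 2 kHz
y_naive = x[::M]

# (b) lowpass below the new Nyquist first, then keep every M-th sample: the 10 kHz tone is attenuated
h = firwin(129, 5000, fs=fs)
y_filt = lfilter(h, 1.0, x)[::M]

def top_two_peaks(y, rate):
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1.0 / rate)
    return np.sort(freqs[np.argsort(spec)[-2:]])

print(top_two_peaks(y_naive, fs / M))  # roughly 1000 Hz and 2000 Hz: the alias is now in-band
print(top_two_peaks(y_filt, fs / M))   # both peaks near 1000 Hz: no visible alias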

Here is a figure of how the polyphase filter could be implemented: either with the downsampling after the filtering, or before it.

poly_downsample_73148.png


Now, the same can be done for upsampling: upsampling creates unwanted spectral images, which need to be filtered out after the upsampling operation. A polyphase filter, however, makes it possible to apply the upsampling either before or after the filter. I really don't see why we would want to apply the upsampling after the filter, though. Can anybody explain this to me, please?

poly_upsample_43273.png

Reply by Slartibartfast, May 25, 2021

I'm not sure your question really has much to do with polyphase structures.   I'm also not sure your statements about anti-aliasing are strictly true about polyphase architectures that are upsampling or downsampling.

In my experience "polyphase" means the filter coefficients are subsets of a larger set, and the "phase" of the subset is being selected dynamically (indicated by the M or L superscripts in your examples, which should probably really be subscripts, unless whoever drew that actually did mean exponentiation, in which case I have no idea what's going on there). Changing the phase of the filter coefficient subset allows adjustment of the sampling instant of the input or output samples.

Whether the anti-alias filter needs to be, or can be, applied before or after is not really a consequence of it being a polyphase filter; it just depends on the amount of decimation being done and the filter response. This is the same for any decimating filter. If aliasing happens during the polyphase filtering process while it is decimating, you still won't be able to remove it afterwards.

Not sure whether any of that is helpful or not.


Reply by fred_harris, May 25, 2021

This is a small chapter of a book on filter banks.

It is a nice introduction to polyphase filters. There is also a supplement that contains some MATLAB code.

You might also want to chase down the book "Multirate Signal Processing for Communication Systems".


fred harris

FBMC_ch_6_Supplement_3.pdf

FBMC_book_ch_6_text_5.pdf
Reply by Code_Warrior, May 25, 2021

Hi Dr. Harris,


Regarding the book on filter banks - what is it? Is there a place to purchase the complete volume?

Thanks!

Reply by fred_harris, May 25, 2021

Code_Warrior,

The book in which my chapter appears is

"Orthogonal Waveforms and Filter banks for Future Communication Systems"

Editors: Markku Renfors, Xavier Mestre, Eleftherios Kofidis, Faouzi Bader

Elsevier, Academic Press, 2017, ISBN 978-0-12-810384-5 


Might also have a look at my book

"Multirate Signal processing for Communication Systems"

Pearson, 2004, 

fred


Reply by Code_Warrior, May 25, 2021

Excellent, thanks! I'll check it out.


Your Multirate DSP for Comms book is always nearby; I own two copies, one for home and one for the office :)

Reply by Ali23, May 25, 2021
1. What is the difference between a "filter (convolution)", a "polyphase filter", and a "polyphase filter bank"?

Honestly, I don't know of any difference in the implementation.

2. Why do we need an anti-aliasing filter before downsampling?

I believe a decimation filter after downsampling is able to remove the aliasing.

Reply by Slartibartfast, May 25, 2021

To answer your questions:

1. A typical convolutional filter has constant coefficients, while a polyphase filter adjusts coefficients depending on the relative phases of the input and output, so the implementations are very different.   A polyphase filter bank is a particular architecture for implementing one type of polyphase filter.

2. To prevent aliasing.   When reducing the sample rate the energy between the new and old alias regions needs to be removed or it will alias into the new passband.

Energy that has aliased into a desired passband region containing signal energy cannot be removed without also removing the desired signal energy in the same band region.   This is why aliasing must be prevented with anti-alias filters during downsampling.
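To put that last point into a rough numpy sketch (illustrative only, with arbitrary frequencies and filter length): once an out-of-band tone has folded into the new passband, a lowpass applied at the low rate passes it right along with the wanted tone.

import numpy as np
from scipy.signal import firwin, lfilter

fs, M = 48000, 4                      # 12 kHz after decimation, new Nyquist 6 kHz
t = np.arange(4096) / fs
x = np.sin(2*np.pi*1000*t) + np.sin(2*np.pi*10000*t)

y = x[::M]                            # no anti-alias filter: the 10 kHz tone folds to 2 kHz
h_post = firwin(129, 5000, fs=fs/M)   # lowpass applied AFTER the rate reduction
y_post = lfilter(h_post, 1.0, y)

spec = np.abs(np.fft.rfft(y_post * np.hanning(len(y_post))))
freqs = np.fft.rfftfreq(len(y_post), M / fs)
print(np.sort(freqs[np.argsort(spec)[-2:]]))  # roughly 1000 Hz and 2000 Hz: the alias survives the post-filter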

Reply by oliviert, May 25, 2021

The two filtering topologies (before vs. after) give exactly the same results in both the upconversion and downconversion cases. So for your question about downsampling, the reason is not the one you invoke.

Polyphase architecture is intimately linked to processing power.

Downconversion: here you will have to throw away M-1 samples out of M. Why should you bother computing them?

Upconversion: during the upsampling process, we introduce L-1 zeros between each pair of original samples. Why should you multiply them by filter taps when you already know the result?
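If it helps, here is a rough numpy sketch of the downconversion case (arbitrary random data and filter length): both structures produce the same output, but the polyphase version only ever computes the samples that are kept, using roughly 1/M of the multiplications.

import numpy as np

M = 4
x = np.random.randn(1024)
h = np.random.randn(33)        # stand-in for the anti-alias FIR prototype

# Left-hand structure: filter at the high rate, then throw away M-1 of every M outputs.
ref = np.convolve(x, h)[::M]

# Right-hand structure: deal the input samples to M sub-filters (the "phases"),
# run each one at the LOW rate, and sum the branch outputs.
Ny = len(ref)
y = np.zeros(Ny)
for p in range(M):
    hp = h[p::M]                                   # phase p: every M-th coefficient, starting at offset p
    if p == 0:
        xp = x[::M]                                # branch input x[n*M]
    else:
        xp = np.concatenate(([0.0], x[M - p::M]))  # branch input x[n*M - p], zero for n = 0
    yp = np.convolve(xp, hp)                       # convolution at the low rate
    n = min(len(yp), Ny)
    y[:n] += yp[:n]

print(np.allclose(y, ref))                         # True: identical output with ~1/M of the multiplies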


Reply by Luk_11, May 25, 2021

hey, 

thx very much for your reply!

So, are you saying that for downconversion and for upconversion we should use the left one of the given figures?

And is the reason that for downconversion the transfer function H(z^M) indicates that we do not compute filter outputs for the M-1 samples that we throw away, and for upconversion H(z^L) indicates that we do not compute outputs for the L-1 zeros that we have inserted?

Reply by oliviert, May 25, 2021

No, you should use the figures on the right.

The left figure for downconversion encourages you to compute all the samples and then throw away M-1 out of M.

The left figure for the upconversion case makes you introduce the zeros before filtering.


On the right downconversion graph, the combination of the Z^-1 delays and the downsamplers (the left part of the graph) actually distributes the samples to the filters, like dealing cards, and the right part of the graph is just a summation of all the outputs.
On the right upconversion graph, the left part sends all the samples to all the filters, and the upsamplers combined with the Z^-1 delays are equivalent to multiplexing all the outputs. At each time tick, all the filter outputs are summed, but they each have a different delay, so a valid output from one filter is added to zeros on the other lines.
On the right upconversion graph, the left part is sending all the samples to all the filters, and the upsampler combined with the Z^-1 is equivalent to multiplex all the outputs. At each time tick, all the sample output are summed up, but they all have a different delay, so a valid output from one filter is added with zeros on the other lines.