Polyphase Filters and Filterbanks

Kyle - March 20, 2013


Polyphase filtering is a computationally efficient structure for applying resampling and filtering to a signal. Most digital filters can be applied in a polyphase format, and it is also possible to create efficient resampling filterbanks using the same theories.

This post will walk through a reference implementation of both the downsampling polyphase filter and a downsampling polyphase filterbank using scipy, numpy, matplotlib, and python. It should also highlight some of the tricky implementation details that cost time and effort.



When filtering and downsampling, the logical order of operations would be filter -> downsample. This is how most basic signal processing classes teach the order of operations, and for good reason - doing these operations in the reverse order will give incorrect results due to aliasing.

Doing the filter -> downsample operations means convolving a filter response with the full signal, then throwing away some, or most, of the points that were just calculated. It is easy to see why this could be inefficient - but downsampling before filtering will alias the signal... right?

Through the magic of math, it turns out that downsampling, then filtering without aliasing is possible through a clever reordering and application of the desired filter. The simple example below is heavily borrowed from this lecture (PDF, slide 4).


Use polyphase filtering! Any FIR filter (and some IIR filters) can be applied this way, and it is more efficient to do so!



Suppose we design a filter ${\bf h}$ with 6 taps, $\left[ \begin{array}{ccc} h_0 & \dots & h_5 \end{array}\right]$, to be used on a signal ${\bf x}$ of length 12, $\left[ \begin{array}{ccc} x_0 & \dots & x_{11} \end{array} \right]$. The output is also decimated by $M=3$. A polyphase implementation of this filter/downsample procedure would apply $M$ filters, structured in the following manner:

$\left[ \begin{array}{cc}h_0 & h_3\end{array}\right] * \left[\begin{array}{cccc} x_0 & x_3 & x_6 & x_9\end{array}\right] = \left[\begin{array}{cccc} s_0 & s_3 & s_6 & s_9 \end{array}\right]$

$\left[ \begin{array}{cc}h_1 & h_4\end{array}\right] * \left[\begin{array}{cccc} x_2 & x_5 & x_8 & x_{11}\end{array}\right] = \left[\begin{array}{cccc} s_2 & s_5 & s_8 & s_{11} \end{array}\right]$

$\left[ \begin{array}{cc}h_2 & h_5\end{array}\right] * \left[\begin{array}{cccc} x_1 & x_4 & x_7 & x_{10}\end{array}\right] = \left[\begin{array}{cccc} s_1 & s_4 & s_7 & s_{10} \end{array}\right]$

The resultant vectors ${\bf s}$ are the convolution of each decimated ${\bf h}$ with its decimated ${\bf x}$ stream, with the delay line initialized to zeros and the convolution tail dropped - that is, only the first len(x) output samples of the full convolution are kept. See this for more details.

After this, the final step is to append a 0 to the end of the first row, $\left[\begin{array}{cccc} s_0 & s_3 & s_6 & s_9 \end{array}\right]$, and prepend a zero to all other rows. That leaves this as the final filtered output:

$\left[\begin{array}{ccccc} s_0 & s_3 & s_6 & s_9 & 0 \end{array}\right]$

$\left[\begin{array}{ccccc} 0 & s_2 & s_5 & s_8 & s_{11} \end{array}\right]$

$\left[\begin{array}{ccccc} 0 & s_1 & s_4 & s_7 & s_{10} \end{array}\right]$

Once these filters are applied and the results are adjusted, the 3 resultant vectors s are summed, giving a result ${\bf y}$ of:

$\left[\begin{array}{ccccc} s_0+0+0 & s_3+s_2+s_1 & s_6+s_5+s_4 & s_9+s_8+s_7 & 0+s_{11}+s_{10} \end{array} \right]$

$= \left[\begin{array}{ccccc} y_0 & y_1 & y_2 & y_3 & y_4\end{array} \right]$

${\bf y}$ should be identical to the filter, then downsample case using the same $M$, ${\bf x}$ and ${\bf h}$.
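To make the bookkeeping concrete, here is the example above carried out numerically. The toy 6-tap ${\bf h}$ and 12-sample ${\bf x}$ values are my own choices for illustration, not from the post:

```python
import numpy as np
from scipy import signal as sg

M = 3
h = np.arange(1.0, 7.0)    # toy taps h_0..h_5 (arbitrary values)
x = np.arange(12.0)        # toy signal x_0..x_11

# Direct method: filter with zero initial conditions, then keep every Mth sample
direct = sg.lfilter(h, 1, x)[::M]

# Polyphase method: M subfilters on M decimated streams, using the row
# reordering described above (x_0 stream, then x_2 stream, then x_1 stream)
streams = [x[0::M], x[2::M], x[1::M]]
subfilters = [h[0::M], h[1::M], h[2::M]]
s = [sg.lfilter(hp, 1, xp) for hp, xp in zip(subfilters, streams)]

# Append 0 to the first row, prepend 0 to the others, then sum down the columns
rows = [np.append(s[0], 0)] + [np.insert(si, 0, 0) for si in s[1:]]
y = np.sum(rows, axis=0)

print(np.allclose(direct, y[:len(direct)]))   # True: the overlapping samples match
```

The extra trailing sample $y_4$ comes from the convolution tail that the direct lfilter-then-decimate call never emits, which is why the comparison is over y[:len(direct)].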

Note that a zero was appended to the first row, and prepended to each row after the first. Also note the inversion of the row ordering of the ${\bf x}$ coefficients other than the first row ($x_0$ above $x_2$ above $x_1$). This is no accident - in fact, the answers without this inversion and zero padding will be incorrect, and may lead to many hours of pain and sadness.

From a high level, these adjustments make sense. The first sample to hit the filter in the typical case would not have any preceding data being filtered, hence $y_0 = s_0 + 0 + 0$. The reordering of the input data is less clear, but is necessary in order to properly align each convolution stream with what would be occurring in the direct filter, then downsample implementation.

Interested parties can check out the references in the Links section to see derivations and diagrams. We will now show that this works by splicing together some sample code in python, with graphs and imagery. For more depth, this lecture (PDF) has a great explanation of the key concepts.



The polyphase filterbank is implemented similarly to the single polyphase filter, except for the last step. A polyphase filterbank uses the DFT (discrete Fourier transform) to modulate (move in frequency) a prototype filter and perform summation, effectively returning multiple bands of decimated and filtered time-domain data from the DFT stage. The DFT operation is typically performed using the FFT (fast Fourier transform). This diagram will aid the visually inclined.

This is confusing on the surface - isn't a DFT supposed to change a representation in time to an equivalent representation in frequency? It is important to remember that a DFT is nothing more than a multiplication by a complex phasor ($e^{-j\theta}$, or $e^{-i\theta}$ for non-engineering types) and a summation: each $x_n$ is multiplied by $\large{e^{\frac{-2 \pi j k n }{N}}}$ for n = 0...N-1 and the products are summed, repeated for k = 0...N-1:

$\large{x_k = \sum^{N-1}_{n=0}x_n e^\frac{-2 \pi j k n }{N}}$ for k = 0...N-1
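Written directly in code, that definition really is just a phasor multiply and a sum, and it agrees with numpy's FFT:

```python
import numpy as np

x = np.exp(2j * np.pi * 0.1 * np.arange(8))   # an arbitrary complex test vector
N = len(x)

# The DFT definition: for each k, multiply x_n by the phasor and sum over n
X = np.array([sum(x[n] * np.exp(-2j * np.pi * k * n / N) for n in range(N))
              for k in range(N)])

print(np.allclose(X, np.fft.fft(x)))   # True
```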

Multiplication by a complex phasor is also used for frequency modulation, so it makes sense that a well chosen DFT or IDFT could be twisted to perform modulation of certain energy to different frequencies - in fact, this is the same concept behind OFDM (Orthogonal Frequency Division Multiplexing), which is heavily used in digital communications.

The graphs posted below should help show what has been discussed in this section. For further detail, this link has a different explanation of the phenomena discussed above. Rick Lyons also discussed it on a blog post here on DSPRelated.com



The first order of business is to create a sample signal, in this case a complex valued chirp. Any signal will work, but a complex chirp is a preferred signal to show the correct response of the filter bank. The gen_complex_chirp function in the code will generate a 1 second long chirp that covers most of the possible frequency range for a sample rate of $f_s$. The .1 (2.1 instead of 2 exactly) in gen_complex_chirp is simply a fudge factor so that the plots look cleaner. DECIMATE_BY and FILT_CONST will be used throughout the implementation, in order to design the filter and calculate other constants.

Don't worry too much about the negative frequency aspect - just know that this chirp should cover the full frequency range available for complex sampling at our sampling rate. Now that there is a test signal, it is time to create a filter.
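The actual gen_complex_chirp lives in the github code linked at the end; here is a minimal sketch of what such a generator might look like. The default sample rate and the exact phase formula are my assumptions, not the post's:

```python
import numpy as np

def gen_complex_chirp(fs=64):
    # Hypothetical sketch - the post's real gen_complex_chirp is in filterbank.py.
    # Sweep from -fs/2.1 to +fs/2.1 over one second; the 2.1 (rather than 2)
    # is the fudge factor mentioned above, keeping the sweep off the band edges.
    duration = 1.0
    t = np.arange(0, duration, 1.0 / fs)
    f0, f1 = -fs / 2.1, fs / 2.1
    # Instantaneous phase of a linear chirp from f0 to f1
    phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * duration) * t ** 2)
    return np.exp(1j * phase)
```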

In the basic_single_filter function, we create a lowpass filter. The "+.1*DECIMATE_BY" term in the cutoff is a fudge factor, to help with the overlap in the filterbank, which will be shown later. An ideal filter would cut off exactly at 1/DECIMATE_BY, or $1/3 = 0.333\ldots$ in this example.

The number of taps determines the "steepness" of the filter response shown below. FILT_CONST is multiplied by DECIMATE_BY so that each polyphase leg will have FILT_CONST taps after downsampling. This is not a requirement, but more taps will create a cleaner demonstration, and it was helpful for me to play with the constant during testing.

After filtering, the resulting signal should retain its strength between about ±0.17 in normalized frequency, with values outside that range being strongly diminished. The next step is to take the data from above and downsample it, since we have reduced the amount of content we care about in the signal.
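The post's basic_single_filter is in the linked github code; the following is a hedged sketch of the filter-then-decimate baseline. The FILT_CONST value is an assumption, and I use the ideal 1/DECIMATE_BY cutoff without the fudge term:

```python
import numpy as np
from scipy import signal as sg

DECIMATE_BY = 3
FILT_CONST = 32   # assumed value: taps per polyphase leg after decimation

def basic_single_filter(input_data):
    # Design a lowpass FIR with FILT_CONST * DECIMATE_BY taps; scipy's firwin
    # normalizes cutoff to Nyquist, so 1/DECIMATE_BY passes |f| < fs/(2*DECIMATE_BY)
    filt = sg.firwin(FILT_CONST * DECIMATE_BY, 1.0 / DECIMATE_BY)
    # Filter first, then throw away all but every DECIMATE_BYth sample
    filtered = sg.lfilter(filt, 1, input_data)
    return filtered[::DECIMATE_BY], filt
```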

Once the basic filter implementation is complete, it is time to implement the polyphase version of this filter and compare the output, which should be identical to the filter -> decimate technique.

def polyphase_single_filter(input_data, decimate_by, filt):
    if len(input_data) % decimate_by != 0:
        input_data = input_data[:len(input_data) - len(input_data) % decimate_by]
    head_data_stream = np.asarray(input_data[::decimate_by])
    tail_data_streams = np.asarray([np.asarray(input_data[0+i::decimate_by])
                                    for i in range(1, decimate_by)])
    data_streams = np.vstack((head_data_stream, tail_data_streams[::-1, :]))
    filter_streams = np.asarray([np.asarray(filt[0+i::decimate_by])
                                 for i in range(decimate_by)])
    filtered_data_streams = np.asarray([sg.lfilter(filter_streams[n, :], 1, data_streams[n, :])
                                        for n in range(decimate_by)])
    filtered_data_streams = np.asarray([np.append(filtered_data_streams[i], 0) if i == 0 else
                                        np.insert(filtered_data_streams[i], 0, 0)
                                        for i in range(decimate_by)])
    filtered_data = np.sum(filtered_data_streams, axis=0)
    return filtered_data

This work produces the following plots:


As we can see, the polyphase filter plot looks identical to the filter, then decimate case - however, the number of operations required for this processing is greatly reduced, as the operations necessary are only applied on data that will be output. This is an important point, and highlights the power of implementing filters in this manner.



The previous code was pretty gnarly, especially for those unfamiliar with python list comprehension, numpy, or MATLAB type mathematics, so the following section will step through line by line and explain the purpose of each logical chunk in the polyphase_single_filter function. A great link on the basics of list comprehension in python can be found here. Some of this code may be cleaner using numpy.vectorize, but it was simpler to stick with list comprehensions for the current incarnation.

if len(input_data) % decimate_by != 0:
    input_data = input_data[:len(input_data)-len(input_data)%decimate_by]

This segment is used to regularize the input, specifically by dropping the last chunk of data that would be left out when turning this single vector into decimate_by separate vectors of data. An alternative approach would be to append zeros to input_data to make an evenly divisible segment, but I chose to drop the data for this example.
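For example, with decimate_by = 3 and a 10-sample input, the lone trailing sample is dropped:

```python
decimate_by = 3
input_data = list(range(10))
# 10 % 3 == 1, so one trailing sample is trimmed to make the length divisible
trimmed = input_data[:len(input_data) - len(input_data) % decimate_by]
print(trimmed)   # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```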

head_data_stream = np.asarray(input_data[::decimate_by])

head_data_stream is the first decimated stream of our data, which holds every value whose index % decimate_by == 0 (i.e. take index 0, skip indices 1 and 2, take index 3, etc. for decimate_by = 3). It is necessary to differentiate head_data_stream from the rest of the data here due to some special append and ordering requirements for the different data streams, described in the example section above.

    tail_data_streams = np.asarray([np.asarray(input_data[0+i::decimate_by])
                                    for i in range(1, decimate_by)])

tail_data_streams is very similar to head_data_stream, except I have used some numpy-isms in order to avoid looping. This chunk of data simply completes the block for our data, taking every decimate_by-th value starting from each index from 1 to decimate_by-1 (remember Python indexing is 0-based, so the stream starting from index 0 is stored in head_data_stream).

That is: skip index 0, take 1, skip 2 and 3, take 4, ...; then skip indices 0 and 1, take 2, skip 3 and 4, take 5, ... for decimate_by = 3.

data_streams = np.vstack((head_data_stream, tail_data_streams[::-1,:]))

Now that each stream has been organized, we want to create a 2-dimensional array of size (decimate_by, len(input_data)/decimate_by), with decimate_by rows and len(input_data)/decimate_by columns. The [::-1,:] applied to tail_data_streams is simply saying to invert the row ordering, in accordance with the example data adjustments. For example, in the decimate_by = 3 case, we now have

$\left[\begin{array}{c} head \\ tail_1 \\ tail_0 \end{array} \right]$

where $head$, $tail_1$, $tail_0$ represent head_data_stream, tail_data_stream row 1, and tail_data_stream row 0 respectively. 
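The stacking and row reversal can be seen on a small example (decimate_by = 3, a 12-sample ramp):

```python
import numpy as np

x = np.arange(12)
decimate_by = 3
head = x[::decimate_by]                                   # stream starting at index 0
tails = np.asarray([x[i::decimate_by]
                    for i in range(1, decimate_by)])      # streams starting at 1, 2
data_streams = np.vstack((head, tails[::-1, :]))          # reverse the tail row order
print(data_streams)
# [[ 0  3  6  9]
#  [ 2  5  8 11]
#  [ 1  4  7 10]]
```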

filter_streams = np.asarray([np.asarray(filt[0+i::decimate_by]) for i in range(decimate_by)]) 

Here we create the decimated versions of the filter, in a similar manner as we created the rows of data_streams. In this case, we don't do any row reversal, simply a take, skip, take operation to build each row vector.   

    filtered_data_streams = np.asarray([sg.lfilter(filter_streams[n, :], 1, data_streams[n, :])
                                        for n in range(decimate_by)])

Now we actually apply the filtering, applying each row of filter_streams (a decimated version of the original) to the associated row of data_streams.   

    filtered_data_streams = np.asarray([np.append(filtered_data_streams[i], 0) if i == 0 else
                                        np.insert(filtered_data_streams[i], 0, 0)
                                        for i in range(decimate_by)])

After filtering, we also need to adjust each stream for alignment, by appending a zero to the first row (formerly head_data_stream) and prepending a zero to every other row. In more performant code this could be accomplished by writing into a larger array of zeros with proper indexing, but it is simpler here to append and prepend to each row as necessary.

filtered_data = np.sum(filtered_data_streams, axis=0)

Now that the filtered data is complete, we simply sum each column in order to get the final output, which should be the filtered and decimated version of the original signal.



After that explanation, the final extension is to implement the analysis side of a polyphase filterbank. Technically, a full filterbank is composed of an analysis section, which breaks the signal into subbands, and a synthesis section, which returns the signal to proper form.  For this tutorial, only the analysis side of the filterbank will be covered.

def polyphase_analysis(input_data, decimate_by, filt):
    if len(input_data) % decimate_by != 0:
        input_data = input_data[:len(input_data) - len(input_data) % decimate_by]
    head_data_stream = np.asarray(input_data[::decimate_by])
    tail_data_streams = np.asarray([np.asarray(input_data[0+i::decimate_by])
                                    for i in range(1, decimate_by)])
    data_streams = np.vstack((head_data_stream, tail_data_streams[::-1, :]))
    # decimate prototype filter
    filter_streams = np.asarray([np.asarray(filt[0+i::decimate_by])
                                 for i in range(decimate_by)])
    filtered_data_streams = np.asarray([sg.lfilter(filter_streams[n, :], 1, data_streams[n, :])
                                        for n in range(decimate_by)])
    filtered_data_streams = np.asarray([np.append(filtered_data_streams[i], 0) if i == 0 else
                                        np.insert(filtered_data_streams[i], 0, 0)
                                        for i in range(decimate_by)])
    out = np.fft.ifft(filtered_data_streams, n=decimate_by, axis=0)
    return out

Basically, the code is identical to what we saw above for polyphase_single_filter, except an IFFT was used in place of a sum. Pretty simple switch, and a powerful result. Let's look at the results.


A serious question arises - "Wait, why is the 0th output the MIDDLE of the chirp?". Two important points here. First, the input data is a complex chirp. This means the baseband filter actually passes about -.17 to +.17, as was discussed during implementation of the single filter. This chirp also starts at $-\frac{f_s}{2}$ and goes to $\frac{f_s}{2}$, so the 0th band should actually appear in the center of the spectrogram. For k = 0, the DFT (or IDFT) becomes

$\large{x_k = \sum^{N-1}_{n=0}x_n e^\frac{-2 \pi j k n }{N}}$ evaluated at k = 0:

$\large{x_0 = \sum^{N-1}_{n=0}x_n e^\frac{-2 \pi j \cdot 0 \cdot n}{N} = \sum^{N-1}_{n=0} x_n}$

In general, the 0th output of the filterbank should be identical to the single polyphase filter, which should be identical to the filter, then decimate case.
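One caveat worth noting (my observation, not from the post): numpy's ifft carries the 1/N normalization, so the k = 0 band is the column sum divided by decimate_by - identical to the single-filter sum up to that scale factor:

```python
import numpy as np

# Three toy "filtered streams" of two samples each, standing in for filtered_data_streams
filtered_data_streams = np.array([[1.0, 2.0],
                                  [3.0, 4.0],
                                  [5.0, 6.0]])
out = np.fft.ifft(filtered_data_streams, n=3, axis=0)

# Column sums are [9, 12]; numpy's ifft divides by N=3, giving [3, 4] in band 0
print(out[0])
```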

The second important point is that the ordering of the outputs depends on the choice of DFT versus IDFT. The only technical note I have seen on the subject is found in Multirate Digital Signal Processing by N.J. Fliege, pg 237. It says that in some derivations "the input demultiplexer is realized by a clockwise input commutator [...] In this structure, therefore, a DFT is carried out, and not an IDFT". The logic of the demultiplexer (the creation of data_streams) appears to be acting as a counterclockwise commutator, however, the choice between DFT and IDFT simply seems to reorder the output streams.
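That reordering can be checked directly: numpy's ifft bins are the fft bins with the index reversed modulo N (and scaled by 1/N), so swapping the DFT for an IDFT only permutes and rescales the output streams:

```python
import numpy as np

v = np.array([1.0 + 2.0j, 3.0 - 1.0j, 0.5 + 0.5j])   # arbitrary complex samples
N = len(v)
F = np.fft.fft(v)
I = np.fft.ifft(v)

# IFFT bin k equals FFT bin (N - k) mod N, divided by N
print(np.allclose(I, F[(-np.arange(N)) % N] / N))   # True
```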

For even decimate_by values, the band for row decimate_by/2 will be split between the very top and very bottom of the spectrogram, since the first band is centered about 0.


Hopefully this has been a good description of polyphase filtering, how to implement it, and why it is an important tool for anyone implementing multirate signal processing solutions. For any mistakes or revisions, please don't hesitate to leave a comment. I will try to address any questions or concerns as they arrive.



My python code will run a complete program demonstrating the implementation and process of filtering, polyphase filtering, and polyphase filterbanks in python. For size reasons, I have left this on github. The latest updates to this code can always be found at https://github.com/kastnerkyle/Sandbox/tree/master/dsp/  under filterbank.py



Windows: Python XY. Seriously.

Mac: ... Homebrew?

Linux: Get numpy, scipy, and matplotlib from your favorite neighborhood package manager. In Ubuntu 12.10 64-bit, this line will do the trick:

sudo apt-get install python-numpy python-scipy python-matplotlib







Multirate Digital Signal Processing - N.J. Fliege, pg 229

Spectral Audio Signal Processing - Julius O. Smith


Comment by dszabo, March 28, 2013
Couldn't you just perform the convolution for every M index to perform the decimation? Is that equivalent to what the polyphase filter does?
Comment by kastnerkyle, March 29, 2013
That technique would be equivalent to downsampling by M (every Mth sample would be convolved), then filtering. This means your frequency content could be heavily aliased, which is not good! You are on the right track though.

A polyphase implementation is basically making downsampled "subsets" of the original signal, every M+0th element, every M+1th, ... through every M+(M-1)th value. These downsampled "datastreams" are each filtered by an equivalent filter stream (take every K+0th, K+1th, etc.) then summed (each stream, 0 indexes summed, 1 indexes summed, etc.) to get a result mathematically equivalent to filtering then downsampling. For a graphical representation of this (and better teaching), check out the following link, or any of the references mentioned in my post.

Comment by dszabo, March 29, 2013
I did read that particular link. The technique I was trying to describe was shown in slide three. What I didn't get from the slideshow was the advantage of using the polyphase form over the direct form. If the direct form and polyphase form are mathematically equivalent, is there any efficiency to be gained from the polyphase form over the method outlined in slide three? Perhaps it's easier to describe the polyphase form in the Z domain?
Comment by kastnerkyle, March 30, 2013
I think the "skip and convolve" method is exactly equivalent - just another representation of the same calculations.
Comment by mite_learner, April 7, 2013
Great topic and post. I think 'y4' is missing in the 'THE SAMPLE LIFE' para.. No?

Comment by kastnerkyle, April 8, 2013
I have been looking over that section - I don't see anything amiss. Where exactly are you expecting to see y4?
Comment by mite_learner, April 13, 2013
After following statement "Once these filters are applied and the results are adjusted, the 3 resultant vectors s are summed, giving a result y of"
I am expecting [y0 y1 y2 y3 y4] instead of [y0 y1 y2 y3]
Comment by kastnerkyle, April 13, 2013
Ah! I see it now. That is a typo - I will go ahead and update that. Thanks!
