```In article <10s5p467bnklbfa@corp.supernews.com>,
Jim Thomas  <jthomas@bittware.com> wrote:
>Each value of y depends on every filter tap, so I still don't see why it's
>polyphase.  If this is polyphase, aren't ALL FIRs polyphase?

Yes.

All this polyphase stuff is just a distraction, *except* for reasons of
implementation efficiency.  Standard resampling filters are all (usually
windowed) variants of sinc (or sinc-like) reconstruction FIRs, with the
sinc width appropriate to the output sample rate, and with some phase.
The phase seems to disappear from standard decimators because the
default input-to-output phase is taken as zero (but you don't have to
decimate that way!).  Using polyphase coefficient tables is one method
of optimization if the input/output ratio is appropriate.  But the
input/output ratio can be irrational and you can still reconstruct with
sinc coefficients; they just have to be the appropriate coefficients for
every output sample (calculated individually per sample, or
interpolated, for instance).

I've even found uses for resampling filters where the input/output
ratio has been 1:1 (tuneable precision delay lines for instance).

IMHO. YMMV.
--
Ron Nicholson   rhn AT nicholson DOT com   http://www.nicholson.com/rhn/
#include <canonical.disclaimer>        // only my own opinions, etc.
```
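Ron's point about reconstructing at an arbitrary (even irrational) input/output ratio by computing sinc coefficients per output sample can be sketched as follows. This is a toy illustration, not code from the thread: the function name, the Hann window, and the 16-tap width are my own choices, and it assumes `ratio >= 1` (interpolation), so no extra anti-alias bandlimiting is needed.

```python
import math

def resample(x, ratio, half_width=8):
    """Resample x by an arbitrary (even irrational) ratio, computing
    Hann-windowed-sinc coefficients fresh for every output sample.
    ratio = output rate / input rate; assumes ratio >= 1."""
    out = []
    for m in range(int(len(x) * ratio)):
        t = m / ratio                      # output time, in input-sample units
        center = int(math.floor(t))
        acc = 0.0
        for k in range(center - half_width + 1, center + half_width + 1):
            if 0 <= k < len(x):
                u = t - k                  # distance from tap k to the output time
                w = 0.5 + 0.5 * math.cos(math.pi * u / half_width)  # Hann window
                sinc = math.sin(math.pi * u) / (math.pi * u) if u else 1.0
                acc += x[k] * w * sinc
        out.append(acc)
    return out
```

When the ratio is rational, the fractional part of `t` cycles through a finite set of values, and precomputing the coefficient sets for those values is exactly the polyphase-table optimization Ron mentions.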
```<jshima@timing.com> wrote in message
> The most intuitive way I found to look at polyphase decimation (as I
> learned it) was through the "commutator" structure shown in the
> Crochiere book.
>
> Start by writing out the convolution sum for every n:
>
> y[n] = h[0]x[n] + h[1]x[n-1] + ... + h[N-1]x[n-N+1]  , n=0,1,...
>
> For a rate-M decimator, keep only every M-th output sample; if you then
> look at the corresponding y[Mn] samples you wrote out above for a bunch
> of different n values, you will see the structure.
>
> Basically a direct-form decimator is similar to a standard FIR
> implementation, but instead of computing one y output for each x input,
> here you skip M samples in your x[n] history buffer for each y output
> sample.  Hence your computations are reduced by M.
>
> For the polyphase approach, the commutator model basically does this:
> once you have injected M new samples into the filter, you compute one
> output point.  It is just another way to look at the efficient
> direct-form approach (but there are some subtle differences).

I really like the commutator model as well, since it maps well in my brain
with implementation in mind. However, I've always had trouble with most
references that use this in their diagrams.
For example, in the P. P. Vaidyanathan book,
Figure 4.3-9 (page 131) still doesn't make that much sense. The delay line
in (a) makes sense. Then going from that to the exact order of how the
commutator rotates doesn't seem intuitive.
I would've said, when n=0, the commutator is on the top branch. For n=1, I
would've picked the 2nd branch (since it is one sample delayed from the top
branch). But the book shows the 3rd branch.
In the past, I've tried working through it with brute force and the book's
answer does work out but it just wasn't sitting well with my intuition. I
wonder if one of you guys can make me feel at ease.
If some of you have been to fred harris' multi-rate class, his pictures look
a little different as well (he starts his commutators at the bottom branch).

Hmm...now that I look at it again, I think I know what I'm having difficulty
with (and why), but I'd still like to hear some of your thoughts.

Cheers

>
> I had to put together a paper for some non-DSP engineers, it does show
> the commutator model if you are interested:
> http://www.hyperdynelabs.com/dsp/DSP%20code%20optimizations.pdf
>
> Don't go grading me on it, the paper was a quick write-up!
>
> Jim
>
>
>
>
>
> Jon Harris wrote:
> > "Randy Yates" <randy.yates@sonyericsson.com> wrote in message
> > news:xxpr7lp3pdz.fsf@usrts005.corpusers.net...
> > >
> > > Once again I come to the point where I must state, supporting Jim's
> > > impression, that many things in DSP that "sound" very complex and
> > > intimidating are, when you cut through to the bottom line, not all
> > > that fancy. To me this "polyphase" stuff is in that category, but I
> > > admit that I haven't studied the higher concepts like those in the
> > > Vaidyanathan book on filter banks. Perhaps when you look at it from
> > > a linear algebra point-of-view there is more to see.
> >
> > I agree with this.  When I first learned about polyphase, it seemed
> > like a rather obvious optimization.  Most of the inputs to your
> > filter are zero, so you don't bother including them in the
> > calculation.  Most of the outputs aren't used anyway, so you don't
> > bother computing them.  Simple!
> >
> > One other point: I first learned about polyphase in the context of
> > sample rate conversion by a ratio of n/m, where the 'upsample by n'
> > and 'downsample by m' are combined into a single filtering operation.
> > In that case, you end up with what looks like a standard FIR low-pass
> > filter (sinc-like shape) whose impulse response has been interpolated
> > to a higher sample rate.  But then you end up only using a part (one
> > phase) of that filter for every output sample.  To me, this rational
> > conversion case is when you can really see the polyphase technique in
> > action and can get a grasp on why it is called polyphase (literally
> > 'many phases').
>

```
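The direct-form decimator described in the quoted post (advance the x[n] history by M per output, never computing the outputs that would be thrown away) can be sketched like this. The function names are mine, and boundary handling is the simplest possible (samples before x[0] treated as zero):

```python
def fir_then_downsample(x, h, M):
    """Reference: run the full FIR at the input rate, then keep every M-th output."""
    N = len(h)
    y = [sum(h[k] * x[n - k] for k in range(N) if 0 <= n - k < len(x))
         for n in range(len(x))]
    return y[::M]

def direct_form_decimate(x, h, M):
    """Efficient form: step the input index by M per output, computing
    only the outputs that survive the decimation."""
    N = len(h)
    return [sum(h[k] * x[n - k] for k in range(N) if n - k >= 0)
            for n in range(0, len(x), M)]
```

Both return identical values; the second simply does 1/M of the multiply-accumulates, which is the point of the post.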
```The most intuitive way I found to look at polyphase decimation (as I
learned it) was through the "commutator" structure shown in the
Crochiere book.

Start by writing out the convolution sum for every n:

y[n] = h[0]x[n] + h[1]x[n-1] + ... + h[N-1]x[n-N+1]  , n=0,1,...

For a rate-M decimator, keep only every M-th output sample; if you then
look at the corresponding y[Mn] samples you wrote out above for a bunch
of different n values, you will see the structure.

Basically a direct-form decimator is similar to a standard FIR
implementation, but instead of computing one y output for each x input,
here you skip M samples in your x[n] history buffer for each y output
sample.  Hence your computations are reduced by M.

For the polyphase approach, the commutator model basically does this:
once you have injected M new samples into the filter, you compute one
output point.  It is just another way to look at the efficient
direct-form approach (but there are some subtle differences).

I had to put together a paper for some non-DSP engineers, it does show
the commutator model if you are interested:
http://www.hyperdynelabs.com/dsp/DSP%20code%20optimizations.pdf

Don't go grading me on it, the paper was a quick write-up!

Jim

Jon Harris wrote:
> "Randy Yates" <randy.yates@sonyericsson.com> wrote in message
> news:xxpr7lp3pdz.fsf@usrts005.corpusers.net...
> >
> > Once again I come to the point where I must state, supporting Jim's
> > impression, that many things in DSP that "sound" very complex and
> > intimidating are, when you cut through to the bottom line, not all
> > that fancy. To me this "polyphase" stuff is in that category, but I
> > admit that I haven't studied the higher concepts like those in the
> > Vaidyanathan book on filter banks. Perhaps when you look at it from
> > a linear algebra point-of-view there is more to see.
>
> I agree with this.  When I first learned about polyphase, it seemed
> like a rather obvious optimization.  Most of the inputs to your filter
> are zero, so you don't bother including them in the calculation.  Most
> of the outputs aren't used anyway, so you don't bother computing them.
> Simple!
>
> One other point: I first learned about polyphase in the context of
> sample rate conversion by a ratio of n/m, where the 'upsample by n' and
> 'downsample by m' are combined into a single filtering operation.  In
> that case, you end up with what looks like a standard FIR low-pass
> filter (sinc-like shape) whose impulse response has been interpolated
> to a higher sample rate.  But then you end up only using a part (one
> phase) of that filter for every output sample.  To me, this rational
> conversion case is when you can really see the polyphase technique in
> action and can get a grasp on why it is called polyphase (literally
> 'many phases').

```
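The commutator structure Jim describes can be made concrete by actually splitting h into M subfilters. This is my own sketch of the textbook decomposition, not code from the linked paper: branch p keeps taps h[p], h[p+M], h[p+2M], ..., and each branch contributes a partial sum to every output.

```python
def polyphase_decimate(x, h, M):
    """Decimate-by-M through M polyphase branches, commutator-style.
    Branch p keeps the taps h[p], h[p+M], h[p+2M], ... and sees only
    the input samples offset by p within each block of M."""
    h = list(h) + [0.0] * (-len(h) % M)     # pad h to a multiple of M
    branches = [h[p::M] for p in range(M)]  # branch p: E_p[q] = h[q*M + p]
    n_out = len(x) // M
    y = [0.0] * n_out
    for p, e in enumerate(branches):
        for n in range(n_out):
            # branch p is fed x[n*M - p], x[(n-1)*M - p], ...
            y[n] += sum(e[q] * x[n * M - q * M - p]
                        for q in range(len(e))
                        if n * M - q * M - p >= 0)
    return y
```

This computes exactly the decimated convolution y[n] = sum_k h[k] x[nM - k], just regrouped by tap index modulo M.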
```"Randy Yates" <randy.yates@sonyericsson.com> wrote in message
news:xxpr7lp3pdz.fsf@usrts005.corpusers.net...
>
> Once again I come to the point where I must state, supporting Jim's
> impression, that many things in DSP that "sound" very complex and
> intimidating are, when you cut through to the bottom line, not all
> that fancy. To me this "polyphase" stuff is in that category, but I
> admit that I haven't studied the higher concepts like those in the
> Vaidyanathan book on filter banks. Perhaps when you look at it from
> a linear algebra point-of-view there is more to see.

I agree with this.  When I first learned about polyphase, it seemed like a rather
obvious optimization.  Most of the inputs to your filter are zero, so you don't
bother including them in the calculation.  Most of the outputs aren't used
anyway, so you don't bother computing them.  Simple!

One other point: I first learned about polyphase in the context of sample rate
conversion by a ratio of n/m, where 'the upsample by n' and 'downsample by m'
are combined into a single filtering operation.  In that case, you end up with
what looks like a standard FIR low-pass filter (sinc-like shape) whose impulse
response has been interpolated to a higher sample rate.  But then you end up
only using a part (one phase) of that filter for every output sample.  To me,
this rational conversion case is when you can really see the polyphase technique
in action and can get a grasp on why it is called polyphase (literally 'many
phases').

```
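Jon's rational n/m case can be sketched in both forms: the literal zero-stuff / filter / downsample chain, and the polyphase shortcut that touches only one phase of the filter per output. The function names and the toy filter are mine, not from the post; `L` is the upsampling factor and `M` the downsampling factor.

```python
def rational_resample_naive(x, h, L, M):
    """Reference: zero-stuff by L, filter at the high rate, keep every M-th."""
    up = []
    for s in x:
        up.append(L * s)              # gain of L makes up for the stuffed zeros
        up.extend([0.0] * (L - 1))
    return [sum(h[k] * up[n - k] for k in range(len(h)) if 0 <= n - k < len(up))
            for n in range(0, len(up), M)]

def rational_resample_polyphase(x, h, L, M):
    """Same result touching only the nonzero taps: output m sits at
    high-rate index m*M, i.e. phase (m*M) % L of h, fed from input
    sample (m*M) // L backwards."""
    y = []
    n_out = (len(x) * L + M - 1) // M
    for m in range(n_out):
        hi = m * M
        k, i = hi % L, hi // L        # starting tap (the phase) and input index
        acc = 0.0
        while k < len(h):
            if 0 <= i < len(x):
                acc += L * h[k] * x[i]
            k += L
            i -= 1
        y.append(acc)
    return y
```

The second function is the "one phase of the interpolated filter per output sample" picture from the post: the phase index (m*M) % L walks through the L phases as m advances.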
```Jim Thomas <jthomas@bittware.com> writes:

> Randy Yates wrote:
> > Jim Thomas <jthomas@bittware.com> writes:
> [snip]
> >>Again, I'm allowing ample room for the possibility that I'm wrong, so
> >>don't be shy!  Is your equation correct?
> > Nope! You're right. I had a sign problem in the x's. This corrects
> > that problem (hopefully there are no others!):
> >
> >   y[n] = x[n*M -     0*M - 0] * h[    0*M + 0] + x[n*M -     0*M - 1] * h[    0*M + 1] + ... + x[n*M -     0*M - (M-1)] * h[    0*M + (M-1)]
> >        + x[n*M -     1*M - 0] * h[    1*M + 0] + x[n*M -     1*M - 1] * h[    1*M + 1] + ... + x[n*M -     1*M - (M-1)] * h[    1*M + (M-1)]
> >        + ...
> >        + x[n*M - (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M - (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M - (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]
> >
> > Thanks for the correction, Jim.
>
>
> That's better, and now I see what you mean.  But... it seems to me
> that arranging the filter equation into rows and columns like this is
> somewhat artificial.

I agree. But it sure sounds impressive.

> At this point I'm just going to confuse people who already understand
> polyphase filters but aren't confident in their understanding, and for
> that, I apologize in advance.  I guess I'm firmly in that same camp.

I think you've got more on the ball than you think.

> Each value of y depends on every filter tap, so I still don't see why
> it's polyphase.  If this is polyphase, aren't ALL FIRs polyphase?

In a sense, yes. The difference between a "standard" FIR (operating in a
system where the input and output sample rates are the same) and a
decimator is that the standard FIR doesn't skip any output computations,
while the decimator computes only every M-th output.

Once again I come to the point where I must state, supporting Jim's
impression, that many things in DSP that "sound" very complex and
intimidating are, when you cut through to the bottom line, not all
that fancy. To me this "polyphase" stuff is in that category, but I
admit that I haven't studied the higher concepts like those in the
Vaidyanathan book on filter banks. Perhaps when you look at it from
a linear algebra point-of-view there is more to see.
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
```
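Randy's "read it column-wise" remark can be checked numerically: lay the products of his corrected sum out as a K-by-M grid and sum it both ways. This helper is my own sketch (names mine); column j turns out to collect the taps h[j], h[M+j], h[2M+j], ... — the j-th polyphase branch.

```python
def decimator_terms(x, h, M, K, n):
    """The K-by-M grid of products x[n*M - k*M - j] * h[k*M + j] from the
    corrected equation: row k is one line of the written-out sum, and
    column j is the j-th polyphase branch's contribution."""
    def xs(i):                          # x treated as zero outside its support
        return x[i] if 0 <= i < len(x) else 0.0
    return [[xs(n * M - k * M - j) * h[k * M + j] for j in range(M)]
            for k in range(K)]
```

Summing the grid row-wise or column-wise gives the same y[n]; the column (branch) partial sums are what the commutator structure computes one branch at a time.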
```Randy Yates wrote:
> Jim Thomas <jthomas@bittware.com> writes:
[snip]
>>Again, I'm allowing ample room for the possibility that I'm wrong, so
>>don't be shy!  Is your equation correct?
>
>
> Nope! You're right. I had a sign problem in the x's. This corrects that problem (hopefully
> there are no others!):
>
>   y[n] = x[n*M -     0*M - 0] * h[    0*M + 0] + x[n*M -     0*M - 1] * h[    0*M + 1] + ... + x[n*M -     0*M - (M-1)] * h[    0*M + (M-1)]
>        + x[n*M -     1*M - 0] * h[    1*M + 0] + x[n*M -     1*M - 1] * h[    1*M + 1] + ... + x[n*M -     1*M - (M-1)] * h[    1*M + (M-1)]
>        + ...
>        + x[n*M - (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M - (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M - (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]
>
> Thanks for the correction, Jim.

That's better, and now I see what you mean.  But... it seems to me that
arranging the filter equation into rows and columns like this is somewhat
artificial.

At this point I'm just going to confuse people who already understand polyphase
filters but aren't confident in their understanding, and for that, I apologize
in advance.  I guess I'm firmly in that same camp.

Each value of y depends on every filter tap, so I still don't see why it's
polyphase.  If this is polyphase, aren't ALL FIRs polyphase?

--
Jim Thomas            Principal Applications Engineer  Bittware, Inc
jthomas@bittware.com  http://www.bittware.com    (603) 226-0404 x536
Getting an inch of snow is like winning ten cents in the lottery - Calvin
```
```Jim Thomas <jthomas@bittware.com> writes:

> Randy Yates wrote:
> [snip]
>> It's almost a matter of how you write it. The linear convolution is
>>
>>   y[n] = x[n*M +     0*M - 0] * h[    0*M + 0] + x[n*M +     0*M - 1] * h[    0*M + 1] + ... + x[n*M +     0*M - (M-1)] * h[    0*M + (M-1)]
>>        + x[n*M +     1*M - 0] * h[    1*M + 0] + x[n*M +     1*M - 1] * h[    1*M + 1] + ... + x[n*M +     1*M - (M-1)] * h[    1*M + (M-1)]
>>        + ...
>>        + x[n*M + (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M + (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M + (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]
>> To get the "polyphase" version just look at this column-wise instead
>> of row-wise.
>
> I think I see where you're going with this, but I still have
> questions.  In order to get my brain wrapped around this equation I
> had to plug in some real numbers.  I chose M=3 and K=3.  Unless I made
> a tragic error, the above reduces to:
>
>    y[n] = x[n*3 + 0] * h[0] + x[n*3 - 1] * h[1] + x[n*3 - 2] * h[2]
>         + x[n*3 + 3] * h[3] + x[n*3 + 2] * h[4] + x[n*3 + 1] * h[5]
>         + x[n*3 + 6] * h[6] + x[n*3 + 5] * h[7] + x[n*3 + 4] * h[8]
>
> Still having trouble, so I plug in n=0 and try again:
>    y[0] = x[ 0] * h[0] + x[-1] * h[1] + x[-2] * h[2]
>         + x[ 3] * h[3] + x[ 2] * h[4] + x[ 1] * h[5]
>         + x[ 6] * h[6] + x[ 5] * h[7] + x[ 4] * h[8]
>
> and again, with n=1:
>    y[1] = x[ 3] * h[0] + x[ 2] * h[1] + x[ 1] * h[2]
>         + x[ 6] * h[3] + x[ 5] * h[4] + x[ 4] * h[5]
>         + x[ 9] * h[6] + x[ 8] * h[7] + x[ 7] * h[8]
>
> It looks to me like x needs both its rows and columns swapped.  I'd be
> a lot happier if I ended up with something like this:
>
>    y[1] = x[ 3] * h[0] + x[ 2] * h[1] + x[ 1] * h[2]
>         + x[ 0] * h[3] + x[-1] * h[4] + x[-2] * h[5]
>         + x[-3] * h[6] + x[-4] * h[7] + x[-5] * h[8]
>
> Again, I'm allowing ample room for the possibility that I'm wrong, so
> don't be shy!  Is your equation correct?

Nope! You're right. I had a sign problem in the x's. This corrects that
problem (hopefully there are no others!):

  y[n] = x[n*M -     0*M - 0] * h[    0*M + 0] + x[n*M -     0*M - 1] * h[    0*M + 1] + ... + x[n*M -     0*M - (M-1)] * h[    0*M + (M-1)]
       + x[n*M -     1*M - 0] * h[    1*M + 0] + x[n*M -     1*M - 1] * h[    1*M + 1] + ... + x[n*M -     1*M - (M-1)] * h[    1*M + (M-1)]
       + ...
       + x[n*M - (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M - (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M - (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]

Thanks for the correction, Jim.
--
%  Randy Yates                  % "Maybe one day I'll feel her cold embrace,
%% Fuquay-Varina, NC            %                    and kiss her interface,
%%% 919-577-9882                %            til then, I'll leave her alone."
%%%% <yates@ieee.org>           %        'Yours Truly, 2095', *Time*, ELO
```
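Randy's corrected indexing is easy to sanity-check: the double sum over k and j must agree with the plain decimated convolution, since k*M + j simply enumerates the tap index i = 0 .. K*M-1. A small check (function names mine):

```python
def y_direct(x, h, M, n):
    """Plain convolution evaluated only at the kept instants:
    y[n] = sum_i h[i] * x[n*M - i]."""
    return sum(h[i] * x[n * M - i]
               for i in range(len(h)) if 0 <= n * M - i < len(x))

def y_rows(x, h, M, K, n):
    """The same sum, indexed as in the corrected row/column layout
    (requires len(h) == K*M): tap i is split into i = k*M + j."""
    return sum(h[k * M + j] * x[n * M - k * M - j]
               for k in range(K) for j in range(M)
               if 0 <= n * M - k * M - j < len(x))
```

Running both over a range of n values confirms the corrected signs: the rows-and-columns arrangement is just a regrouping of the ordinary decimated FIR sum.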
```Randy Yates wrote:
[snip]
>
>
> It's almost a matter of how you write it. The linear convolution is
>
>   y[n] = x[n*M +     0*M - 0] * h[    0*M + 0] + x[n*M +     0*M - 1] * h[    0*M + 1] + ... + x[n*M +     0*M - (M-1)] * h[    0*M + (M-1)]
>        + x[n*M +     1*M - 0] * h[    1*M + 0] + x[n*M +     1*M - 1] * h[    1*M + 1] + ... + x[n*M +     1*M - (M-1)] * h[    1*M + (M-1)]
>        + ...
>        + x[n*M + (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M + (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M + (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]
>
> To get the "polyphase" version just look at this column-wise instead of row-wise.

I think I see where you're going with this, but I still have questions.  In
order to get my brain wrapped around this equation I had to plug in some real
numbers.  I chose M=3 and K=3.  Unless I made a tragic error, the above reduces to:

y[n] = x[n*3 + 0] * h[0] + x[n*3 - 1] * h[1] + x[n*3 - 2] * h[2]
     + x[n*3 + 3] * h[3] + x[n*3 + 2] * h[4] + x[n*3 + 1] * h[5]
     + x[n*3 + 6] * h[6] + x[n*3 + 5] * h[7] + x[n*3 + 4] * h[8]

Still having trouble, so I plug in n=0 and try again:
y[0] = x[ 0] * h[0] + x[-1] * h[1] + x[-2] * h[2]
     + x[ 3] * h[3] + x[ 2] * h[4] + x[ 1] * h[5]
     + x[ 6] * h[6] + x[ 5] * h[7] + x[ 4] * h[8]

and again, with n=1:
y[1] = x[ 3] * h[0] + x[ 2] * h[1] + x[ 1] * h[2]
     + x[ 6] * h[3] + x[ 5] * h[4] + x[ 4] * h[5]
     + x[ 9] * h[6] + x[ 8] * h[7] + x[ 7] * h[8]

It looks to me like x needs both its rows and columns swapped.  I'd be a lot
happier if I ended up with something like this:

y[1] = x[ 3] * h[0] + x[ 2] * h[1] + x[ 1] * h[2]
     + x[ 0] * h[3] + x[-1] * h[4] + x[-2] * h[5]
     + x[-3] * h[6] + x[-4] * h[7] + x[-5] * h[8]

Again, I'm allowing ample room for the possibility that I'm wrong, so don't be
shy!  Is your equation correct?

--
Jim Thomas            Principal Applications Engineer  Bittware, Inc
jthomas@bittware.com  http://www.bittware.com    (603) 226-0404 x536
Hope springs occasionally.
```
```On Thu, 16 Dec 2004 09:21:11 -0500, Jim Thomas <jthomas@bittware.com>
wrote:

>Jaime Andrés Aranguren Cardona wrote:
>> "Jim Thomas" <jthomas@bittware.com> wrote in message
>> news:10s0pegaj0r88e9@corp.supernews.com...
>>
>>>Jaime Andrés Aranguren Cardona wrote:
>>>
>>>>What else could I try, guys? I'd really appreciate it if you can
>>>>take a look at my code, and help me find out where the mistakes can
>>>>be, or provide me with some reference C code.
>>>>
>>>>Thank you very much in advance,
>>>
>>>Did you see Grant's multirate code on dspguru.com?
>>>
>>
>>
>> Hi Jim,
>>
>> Yes I did. It is not polyphase, right? I wish it were... Any example
>> in C with a polyphase implementation?
>
>The interpolation code on dspguru is most definitely polyphase.
>
>My understanding is that polyphase only comes into play during interpolation.
>It's only during interpolation that you get the opportunity to skip through the
>coefs - essentially choosing a phase of the impulse response.  In decimation you
>work on contiguous samples, but only calculate the outputs you're going to keep.
>
>Am I wrong?

I'm coming in in the middle of this, and there are already a lot of good
responses here, but:

Polyphase filters are commonly used in decimating applications when
the decimated samples need to be interpolated somewhere in between
where the original samples were.   Many modern communications systems
do this for symbol recovery, where an oversampled input is
simultaneously phase-locked and downsampled to the signal symbols.
Polyphase filters are great for this since the symbol rate can be
changed without changing the ADC sample rate and the related
anti-alias filters, etc.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
```
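Eric's timing-recovery use (and Ron's "tuneable precision delay line" earlier in the thread) both come down to selecting a phase of an oversampled filter to realize a fractional sample delay. A minimal sketch, with my own names and a Hann-windowed-sinc prototype standing in for a properly designed filter:

```python
import math

def delay_bank(L, half_width=6):
    """Bank of L fractional-delay filters: phase p shifts the effective
    sampling instant by p/L of a sample.  Each filter is a 13-tap
    Hann-windowed sinc evaluated at offset p/L (a toy stand-in for a
    designed prototype)."""
    bank = []
    for p in range(L):
        frac = p / L
        taps = []
        for k in range(-half_width, half_width + 1):
            u = k - frac
            w = 0.5 + 0.5 * math.cos(math.pi * u / (half_width + 1))  # Hann window
            taps.append(w * (math.sin(math.pi * u) / (math.pi * u) if u else 1.0))
        bank.append(taps)
    return bank

def sample_at(x, n, p, bank):
    """Estimate x at time n + p/L by applying phase p of the bank around x[n]."""
    taps = bank[p]
    hw = (len(taps) - 1) // 2
    return sum(tap * x[n + idx - hw]
               for idx, tap in enumerate(taps)
               if 0 <= n + idx - hw < len(x))
```

Changing which phase p is selected per output symbol retimes the stream without touching the ADC sample rate or the anti-alias filtering, which is Eric's point about symbol recovery.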
```"Bhaskar Thiagarajan" <bhaskart@my-deja.com> writes: