# Sample Rate Conversion (Downsampling)

Thread started December 14, 2004
```
Jim Thomas <jthomas@bittware.com> writes:

> Randy Yates wrote:
> > Jim Thomas <jthomas@bittware.com> writes:
> [snip]
> >>Again, I'm allowing ample room for the possibility that I'm wrong, so
> >>don't be shy!  Is your equation correct?
> > Nope! You're right. I had a sign problem in the x's. This corrects
> > that problem (hopefully there are no others!):
> >
> >   y[n] = x[n*M -     0*M - 0] * h[    0*M + 0] + x[n*M -     0*M - 1] * h[    0*M + 1] + ... + x[n*M -     0*M - (M-1)] * h[    0*M + (M-1)]
> >        + x[n*M -     1*M - 0] * h[    1*M + 0] + x[n*M -     1*M - 1] * h[    1*M + 1] + ... + x[n*M -     1*M - (M-1)] * h[    1*M + (M-1)]
> >        + ...
> >        + x[n*M - (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M - (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M - (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]
> > Thanks for the correction, Jim.
>
>
> That's better, and now I see what you mean.  But... it seems to me
> that arranging the filter equation into rows and columns like this is
> somewhat artificial.

I agree. But it sure sounds impressive.

> At this point I'm just going to confuse people who already understand
> polyphase filters but aren't confident in their understanding, and for
> that, I apologize in advance.  I guess I'm firmly in that same camp.

I think you've got more on the ball than you think.

> Each value of y depends on every filter tap, so I still don't see why
> it's polyphase.  If this is polyphase, aren't ALL FIRs polyphase?

In a sense, yes. The difference between a "standard" FIR operating in a
system in which the input and output sample rates are the same and a
decimator is that the decimator skips the output computations that would
only be thrown away.

Once again I come to the point where I must state, supporting Jim's
impression, that many things in DSP that "sound" very complex and
intimidating aren't, when you cut through to the bottom line, all
that fancy. To me this "polyphase" stuff is in that category, but I
admit that I haven't studied the higher concepts like those in the
Vaidyanathan book on filter banks. Perhaps when you look at it from
a linear algebra point of view there is more to see.
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
```
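The row/column sum Randy wrote out is easy to check numerically. The sketch below is my own illustration (not from the thread, with made-up signal and taps): it evaluates the K-row-by-M-column double sum directly and compares it against "run the FIR at the full rate, then keep every Mth output".

```python
# Sketch: check that the row/column sum
#   y[n] = sum_{k=0}^{K-1} sum_{m=0}^{M-1} x[n*M - k*M - m] * h[k*M + m]
# matches "filter at the full rate, keep every Mth output".

def fir(x, h):
    # Direct-form FIR; x is treated as zero outside its defined range.
    return [sum(h[j] * (x[n - j] if 0 <= n - j < len(x) else 0)
                for j in range(len(h)))
            for n in range(len(x))]

def decimate_rowcol(x, h, M):
    # The K-row-by-M-column arrangement (h zero-padded to K*M taps).
    K = -(-len(h) // M)                    # ceil(len(h) / M)
    hp = list(h) + [0] * (K * M - len(h))
    y = []
    for n in range(len(x) // M):
        acc = 0
        for k in range(K):
            for m in range(M):
                i = n * M - k * M - m
                if 0 <= i < len(x):
                    acc += x[i] * hp[k * M + m]
        y.append(acc)
    return y

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]   # placeholder input
h = [0.25, 0.5, 0.25, 0.1]                    # placeholder taps
M = 3
direct = fir(x, h)[::M]        # full-rate filter, keep every Mth output
rowcol = decimate_rowcol(x, h, M)
print(rowcol)
print(direct[:len(rowcol)])
```

Note that the row/column version never computes the outputs that decimation would discard, which is the whole point of the optimization.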
```
"Randy Yates" <randy.yates@sonyericsson.com> wrote in message
news:xxpr7lp3pdz.fsf@usrts005.corpusers.net...
>
> Once again I come to the point where I must state, supporting Jim's
> impression, that many things in DSP that "sound" very complex and
> intimidating aren't, when you cut through to the bottom line, all
> that fancy. To me this "polyphase" stuff is in that category, but I
> admit that I haven't studied the higher concepts like those in the
> Vaidyanathan book on filter banks. Perhaps when you look at it from
> a linear algebra point of view there is more to see.

I agree with this.  When I first learned about polyphase, it seemed like a
rather obvious optimization.  Most of the inputs to your filter are zero, so
you don't bother including them in the calculation.  Most of the outputs
aren't used anyway, so you don't bother computing them.  Simple!

One other point: I first learned about polyphase in the context of sample rate
conversion by a ratio of n/m, where the 'upsample by n' and 'downsample by m'
steps are combined into a single filtering operation.  In that case, you end up with
what looks like a standard FIR low-pass filter (sinc-like shape) whose impulse
response has been interpolated to a higher sample rate.  But then you end up
only using a part (one phase) of that filter for every output sample.  To me,
this rational conversion case is when you can really see the polyphase technique
in action and can get a grasp on why it is called polyphase (literally 'many
phases').

```
```
The most intuitive way I found to look at polyphase decimation (as I
learned it) was through the "commutator" structure shown in the
Crochiere book.

Start by writing out the convolution sum for every n:

y[n] = h[0]x[n] + h[1]x[n-1] + ... + h[N-1]x[n-N+1]  , n=0,1,...

For a rate-M decimator, throw away all but every Mth output sample.  If you
then look at the corresponding y[Mn] samples for a bunch of different n
values, you will see the structure.

Basically a direct-form decimator is similar to a standard FIR
implementation, but instead of computing one y output for each x input,
here you skip M samples in your x[n] history buffer for each y output
sample.  Hence your computations are reduced by a factor of M.

For the polyphase approach, the commutator model basically does this:
once you have injected M new samples into the filter, you compute one
output point.  It is just another way to look at the efficient
direct-form approach (but there are some subtle differences).

I had to put together a paper for some non-DSP engineers; it shows the
commutator model if you are interested:
http://www.hyperdynelabs.com/dsp/DSP%20code%20optimizations.pdf

Don't go grading me on it; the paper was a quick write-up!

Jim

```
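Jim's commutator description can be sketched as a small streaming class. This is my own sketch, not the structure from his paper, and the input/taps are placeholders. One convention wrinkle worth noting: with samples fed downward (first of each group of M to branch M-1, last to branch 0), the outputs come out as the full-rate y[M-1], y[2M-1], ..., i.e. a correctly decimated output at a fixed phase offset.

```python
class CommutatorDecimator:
    """Streaming rate-M polyphase decimator (commutator-model sketch)."""
    def __init__(self, h, M):
        self.M = M
        K = -(-len(h) // M)                           # taps per branch
        hp = list(h) + [0.0] * (K * M - len(h))
        self.branches = [hp[m::M] for m in range(M)]  # branch m: h[m], h[m+M], ...
        self.delays = [[0.0] * K for _ in range(M)]
        self.count = 0

    def push(self, sample):
        # Commutator rotates downward: the first sample of each group of M
        # feeds branch M-1, the last feeds branch 0.
        m = (self.M - 1) - self.count
        self.delays[m] = [sample] + self.delays[m][:-1]
        self.count += 1
        if self.count < self.M:
            return None                                # no output yet
        self.count = 0                                 # M samples in: one output
        return sum(c * v
                   for br, d in zip(self.branches, self.delays)
                   for c, v in zip(br, d))

def fir(x, h):
    return [sum(h[j] * x[n - j] for j in range(len(h)) if 0 <= n - j < len(x))
            for n in range(len(x))]

x = [float(i) for i in range(1, 13)]   # placeholder input
h = [1.0, 2.0, 3.0, 2.0, 1.0]          # placeholder taps
M = 3
dec = CommutatorDecimator(h, M)
outs = [y for y in (dec.push(s) for s in x) if y is not None]
print(outs)
print(fir(x, h)[M - 1::M])             # same numbers: full-rate FIR, phase M-1
```

The branch loop only runs once per M input samples, which is where the factor-of-M savings shows up in a streaming implementation.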
```
<jshima@timing.com> wrote in message
> The most intuitive way I found to look at polyphase decimation (as I
> learned it) was through the "commutator" structure shown in the
> Crochiere book.
>
> Start by writing out the convolution sum for every n:
>
> y[n] = h[0]x[n] + h[1]x[n-1] + ... + h[N-1]x[n-N+1]  , n=0,1,...
>
> For a rate-M decimator, throw away all but every Mth output sample.  If you
> then look at the corresponding y[Mn] samples for a bunch of different n
> values, you will see the structure.
>
> Basically a direct-form decimator is similar to a standard FIR
> implementation, but instead of computing one y output for each x input,
> here you skip M samples in your x[n] history buffer for each y output
> sample.  Hence your computations are reduced by M.
>
> For the polyphase approach, the commutator model basically does this:
> once you have injected M new samples into the filter, you compute one
> output point.  It is just another way to look at the efficient
> direct-form approach (but there are some subtle differences).

I really like the commutator model as well, since it maps well in my brain
with implementation in mind. However, I've always had trouble with most
references that use this in their diagrams.
For example, in the P. P. Vaidyanathan book,
Figure 4.3-9 (page 131) still doesn't make that much sense.  The delay line
in (a) makes sense, but going from that to the exact order in which the
commutator rotates doesn't seem intuitive.
I would've said that when n=0, the commutator is on the top branch.  For n=1, I
would've picked the 2nd branch (since it is one sample delayed from the top
branch).  But the book shows the 3rd branch.
In the past, I've tried working through it with brute force and the book's
answer does work out but it just wasn't sitting well with my intuition. I
wonder if one of you guys can make me feel at ease.
If some of you have been to fred harris' multi-rate class, his pictures look
a little different as well (he starts his commutators at the bottom branch).

Hmm...now that I look at it again, I think I know what I'm having difficulty
with (and why), but I'd still like to hear some of your thoughts.

Cheers

```
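I can't vouch for the exact figure in the Vaidyanathan book, but part of the confusion may be that *any* starting branch yields a correct decimator; the conventions differ only in which phase of the full-rate output gets kept (presumably why fred harris can start his commutators at the bottom branch). A quick brute-force check of that claim, with made-up numbers of my own:

```python
def fir(x, h):
    # Direct-form FIR run at the full input rate.
    return [sum(h[j] * x[n - j] for j in range(len(h)) if 0 <= n - j < len(x))
            for n in range(len(x))]

x = list(range(1, 16))       # placeholder input
h = [1, 2, 3, 2, 1]          # placeholder taps
M = 3
full = fir(x, h)
for p in range(M):
    # Every phase p is a legitimate rate-M decimator output; the rows
    # differ only in which full-rate output samples they keep.
    print(p, full[p::M])
```

Each starting-branch convention for the commutator simply lands on one of these rows, so Vaidyanathan's and harris' diagrams can both be right at the same time.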
```
In article <10s5p467bnklbfa@corp.supernews.com>,
Jim Thomas  <jthomas@bittware.com> wrote:
>Each value of y depends on every filter tap, so I still don't see why it's
>polyphase.  If this is polyphase, aren't ALL FIRs polyphase?

Yes.

All this polyphase stuff is just a distraction, *except* for reasons of
implementation efficiency.  Standard resampling filters are all (usually
windowed) variants of sinc (or sinc-like) reconstruction FIRs, with the
sinc width appropriate to the output sample rate, and with some phase.
The phase seems to disappear from standard decimators because the
default input-to-output phase is taken as zero (but you don't have to
decimate that way!).  Using polyphase coefficient tables is one method
of optimization if the input/output ratio is appropriate.  But the
input/output ratio can be irrational and you can still reconstruct with
sinc coefficients; they just have to be appropriate coefficients for
every output sample (calculated individually per sample or interpolated,
for instance).

I've even found uses for resampling filters where the input/output
ratio has been 1:1 (tuneable precision delay lines for instance).

IMHO. YMMV.
--
Ron Nicholson   rhn AT nicholson DOT com   http://www.nicholson.com/rhn/
#include <canonical.disclaimer>        // only my own opinions, etc.
```
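Ron's "compute the appropriate sinc coefficients for every output sample" approach can be sketched directly. This is my own illustration, not his code: the Hann window, the half-width of 8, and the 1/sqrt(2) ratio are all arbitrary choices. For each output, it locates the output instant on the input time axis and evaluates a windowed sinc there, so the conversion ratio never has to be rational.

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def resample_sinc(x, ratio, n_out, half_width=8):
    """ratio = output_rate / input_rate; works for irrational ratios too."""
    cutoff = min(1.0, ratio)   # widen the sinc when decimating
    y = []
    for n in range(n_out):
        t = n / ratio                       # output instant on the input axis
        i0 = math.floor(t)
        acc = 0.0
        for i in range(i0 - half_width, i0 + half_width + 1):
            if 0 <= i < len(x):
                d = t - i
                w = 0.5 + 0.5 * math.cos(math.pi * d / (half_width + 1))  # Hann
                acc += x[i] * cutoff * sinc(cutoff * d) * w
        y.append(acc)
    return y

# Decimate a constant signal by the irrational ratio 1/sqrt(2); away from
# the edges the output should sit near 1.0 for any reasonable kernel.
x = [1.0] * 64
y = resample_sinc(x, 1.0 / math.sqrt(2.0), 39)
print(y[18:22])
```

Since the coefficients are recomputed per output sample, this is the general case Ron describes; a polyphase table is just a cache of these coefficients for the output phases that actually occur when the ratio is rational.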