
Sample Rate Conversion (Downsampling)

Started by Unknown December 14, 2004
> Yes - from what I remember it is a polyphase implementation.
>
> Cheers
> Bhaskar
Hi,

Thanks Jim, thanks Bhaskar, thank you everybody. Yes, it definitely is a
polyphase implementation. To be honest, I was expecting to see some code
similar to mine, with a "viewable" input delay line and additional delay
lines for the decimation of the samples prior to filtering, one filter
per branch. After a thorough study of the code and actually running (and
debugging) it in MSVC++ (which I couldn't do when I first replied to
Jim), I could really understand it... What a clean and beautiful
implementation of decimation filters, the polyphase way!

Although I am just about to integrate and test it with my SHARC code, it
looks like it won't be a source of confusion anymore. I'll let you know
how it goes.

Kindest regards,

--
Jaime Andrés Aranguren Cardona
jaac@nospam.sanjaac.com
SanJaaC Electronics
Soluciones en DSP
www.sanjaac.com
(Remove "nospam" from e-mail address)
> > My understanding is that polyphase only comes into play during
> > interpolation. It's only during interpolation that you get the
> > opportunity to skip through the coefs - essentially choosing a phase
> > of the impulse response. In decimation you work on contiguous
> > samples, but only calculate the outputs you're going to keep.
> >
> > Am I wrong?
>
> In a polyphase M-fold decimator, you have M polyphase filter
> coefficient sets E_0 through E_{M-1} that each operate at the outgoing
> sample rate. Each section is fed from an M-fold downsampler. M phases
> of the input signal are obtained at the M downsampler outputs by
> delaying the input of each by a sample. Then each polyphase section is
> convolved and the results summed. Again, your computations require
> M*K*Fs multiplications/second instead of M^2*K*Fs, where Fs is the
> output sample rate.

This is how I understand it, too... I don't see the reason for Jim to
say that polyphase only comes into play during interpolation. Could you
please explain your point of view, Jim?

As I see on most diagrams, you have the filter split into branches, each
one having a different phase response, hence the term "polyphase", in
both interpolation and decimation filters.

I ask too, "Am I wrong?"

Regards,

--
Jaime Andrés Aranguren Cardona
jaac@nospam.sanjaac.com
SanJaaC Electronics
Soluciones en DSP
www.sanjaac.com
(Remove "nospam" from e-mail address)
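
For concreteness, here is a minimal C sketch of the M-branch structure
described in the quote above: the prototype lowpass of length K*M is
split so that branch p holds taps h[p], h[p+M], h[p+2*M], ..., an input
commutator drops each incoming sample into one branch's delay line, and
the branch outputs are summed once per M inputs. The names (PolyDecim,
poly_decim_push) and the fixed example values of M and K are
illustrative only; this is not Grant's dspguru code or any
implementation posted in the thread.

#include <string.h>

#define M 4            /* decimation factor (example value)           */
#define K 8            /* taps per branch; prototype length is K*M    */

typedef struct {
    double z[M][K];    /* one delay line per branch                   */
    double h[M][K];    /* branch p holds proto[p], proto[p+M], ...    */
    int    phase;      /* branch that receives the next input sample  */
} PolyDecim;

static void poly_decim_init(PolyDecim *d, const double *proto /* K*M taps */)
{
    memset(d, 0, sizeof *d);
    for (int p = 0; p < M; p++)
        for (int k = 0; k < K; k++)
            d->h[p][k] = proto[k * M + p];
    /* The commutator steps backward through the branches so that the
     * newest sample of each block of M inputs lands in branch 0. */
    d->phase = M - 1;
}

/* Push one input sample; returns 1 and writes *out when an output
 * sample is ready (once every M inputs), otherwise returns 0. */
static int poly_decim_push(PolyDecim *d, double in, double *out)
{
    int p = d->phase;

    /* shift this branch's delay line and insert the new sample */
    memmove(&d->z[p][1], &d->z[p][0], (K - 1) * sizeof(double));
    d->z[p][0] = in;

    if (p == 0) {                     /* a block of M inputs is complete */
        double y = 0.0;
        for (int q = 0; q < M; q++)   /* sum the M branch convolutions   */
            for (int k = 0; k < K; k++)
                y += d->z[q][k] * d->h[q][k];
        *out = y;
        d->phase = M - 1;
        return 1;
    }
    d->phase = p - 1;
    return 0;
}

Each output costs K*M multiplies at the output rate, i.e. the M*K*Fs
multiplications/second quoted above.
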
In article <10s36eofav28raa@corp.supernews.com>,
Jim Thomas  <jthomas@bittware.com> wrote:
>The interpolation code on dspguru is most definitely polyphase.
>
>My understanding is that polyphase only comes into play during
>interpolation. It's only during interpolation that you get the
>opportunity to skip through the coefs - essentially choosing a phase of
>the impulse response.
Why choose a phase when one can calculate the coefficients of the
precise phase?

I prefer to think of upsampling, interpolation, and decimation as
identical processes, consisting of a filter plus an interpolator. If the
decimation results line up with the input, then the interpolator becomes
trivial (constant phase 0). The lowpass filter usually changes to cut
near or under half the minimum of the input and output sample
frequencies.

The filter+interpolator coefficients (usually a windowed sinc) can
sometimes be calculated directly, per sample, even in real time on a
fast PC; no precalculated phase table lookup is required except for
efficiency reasons (and the sin/cos routines can be more cache-friendly
than large lookup tables on embedded systems with tiny caches). This, of
course, depends on the choice of window function (which can be much more
expensive than calculating a cosine or three). The upsample-then-decimate
and polyphase-table methods are merely opaque optimizations, needed for
systems with various performance or power limitations. How often does
one write their own sin/cos routines for any other generic program?
IMHO. YMMV.

--
Ron Nicholson    rhn AT nicholson DOT com    http://www.nicholson.com/rhn/
#include <canonical.disclaimer>    // only my own opinions, etc.
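
One way to read Ron's per-sample suggestion, as a hedged C sketch:
compute a windowed-sinc coefficient for each tap on the fly, centered on
the (generally fractional) input-time position of the output sample. The
Hann window, the HALF_TAPS width, and the crude gain normalization below
are arbitrary choices made for illustration, not anything Ron specified.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define HALF_TAPS 16   /* taps on each side of the center (arbitrary) */

/* Compute one resampled output at input-time position t (in input
 * samples), given nx input samples in x[].  ratio = f_out / f_in;
 * for downsampling the cutoff is scaled below the output Nyquist. */
static double resample_at(const double *x, long nx, double t, double ratio)
{
    double fc = 0.5 * ((ratio < 1.0) ? ratio : 1.0); /* cutoff, cycles/sample */
    long   c  = (long)floor(t);
    double acc = 0.0, wsum = 0.0;

    for (long i = c - HALF_TAPS + 1; i <= c + HALF_TAPS; i++) {
        if (i < 0 || i >= nx)
            continue;                                /* treat edges as zero  */
        double d = t - (double)i;                    /* distance to this tap */
        double s = (d == 0.0) ? 2.0 * fc
                              : sin(2.0 * M_PI * fc * d) / (M_PI * d);
        double w = 0.5 + 0.5 * cos(M_PI * d / HALF_TAPS);  /* Hann window    */
        acc  += x[i] * s * w;
        wsum += s * w;
    }
    return (wsum != 0.0) ? acc / wsum : 0.0;         /* crude DC-gain fix-up */
}

For a fixed conversion ratio, t simply advances by f_in/f_out per output
sample; the precomputed polyphase table Ron mentions is this same
calculation cached for a fixed grid of fractional positions.
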
"Jaime Andr&#4294967295;s Aranguren Cardona" <jaac@nospam.sanjaac.com> writes:

> > > My understanding is that polyphase only comes into play during
> > > interpolation. It's only during interpolation that you get the
> > > opportunity to skip through the coefs - essentially choosing a
> > > phase of the impulse response. In decimation you work on contiguous
> > > samples, but only calculate the outputs you're going to keep.
> > >
> > > Am I wrong?
> >
> > In a polyphase M-fold decimator, you have M polyphase filter
> > coefficient sets E_0 through E_{M-1} that each operate at the
> > outgoing sample rate. Each section is fed from an M-fold downsampler.
> > M phases of the input signal are obtained at the M downsampler
> > outputs by delaying the input of each by a sample. Then each
> > polyphase section is convolved and the results summed. Again, your
> > computations require M*K*Fs multiplications/second instead of
> > M^2*K*Fs, where Fs is the output sample rate.
>
> This is how I understand it, too... I don't see the reason for Jim to
> say that polyphase only comes into play during interpolation. Could you
> please explain your point of view, Jim?
I know I'm not Jim, but I see what he was saying. Think of it like this.
The meat-headed way to do decimation is to compute the convolution of
the full (non-polyphase) filter and the input data at every input
sample, and then toss M-1 out of every M of those computed outputs.

The non-polyphase but still computationally efficient way is to perform
a full filter convolution at every Mth input sample, using all K*M input
samples to do it, i.e.,

  y[n] = sum_{j=0}^{K*M-1} x[n*M - j] * h[j].

(Remember here that the filter length is K*M.) This method still takes
only M*K*Fs multiplies/second, but it doesn't require the filter to be
partitioned.
> As I see on most diagrams, you have the filter split into branches,
> each one having a different phase response, hence the term "polyphase",
> in both interpolation and decimation filters.
>
> I ask too, "Am I wrong?"
It's almost a matter of how you write it. The linear convolution is

y[n] = x[n*M + 0*M - 0] * h[ 0*M + 0] + x[n*M + 0*M - 1] * h[ 0*M + 1] + ... + x[n*M + 0*M - (M-1)] * h[ 0*M + (M-1)]
     + x[n*M + 1*M - 0] * h[ 1*M + 0] + x[n*M + 1*M - 1] * h[ 1*M + 1] + ... + x[n*M + 1*M - (M-1)] * h[ 1*M + (M-1)]
     + ...
     + x[n*M + (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M + (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M + (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]

To get the "polyphase" version just look at this column-wise instead of
row-wise.

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
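
Both forms can be sketched briefly in C: the first function evaluates
the direct sum y[n] = sum_{j=0}^{K*M-1} x[n*M - j] * h[j] at every Mth
input sample; the second regroups the very same products column-wise, so
that each value of p is one polyphase branch E_p run at the output rate.
The x indices below follow the direct sum (the sign of the x index in
the layout above gets corrected later in the thread); the function names
and the zero-padding of negative indices are my additions.

/* Direct form: the full K*M-tap convolution, computed only once per M
 * input samples.  nM is n*M. */
static double decimate_direct(const double *x, long nM,
                              const double *h, int K, int M)
{
    double y = 0.0;
    for (int j = 0; j < K * M; j++) {
        long idx = nM - j;
        if (idx >= 0)                    /* zero-pad before the first sample */
            y += x[idx] * h[j];
    }
    return y;
}

/* Column-wise regrouping of the same sum: the outer loop walks the M
 * phases (columns), the inner loop walks the K taps of that phase.
 * Numerically identical to decimate_direct(). */
static double decimate_polyphase(const double *x, long nM,
                                 const double *h, int K, int M)
{
    double y = 0.0;
    for (int p = 0; p < M; p++)          /* column = phase             */
        for (int k = 0; k < K; k++) {    /* row = tap within the phase */
            long idx = nM - ((long)k * M + p);
            if (idx >= 0)
                y += x[idx] * h[k * M + p];
        }
    return y;
}

Each column's inner loop is one E_p branch fed by every Mth input
sample, which is where the per-branch delay lines of a streaming
implementation come from.
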
"Randy Yates" <randy.yates@sonyericsson.com> wrote in message
news:xxpfz265etg.fsf@usrts005.corpusers.net...
> "Jaime Andr&#4294967295;s Aranguren Cardona" <jaac@nospam.sanjaac.com> writes: > > > > > My understanding is that polyphase only comes into play during > > interpolation. > > > > It's only during interpolation that you get the opportunity to skip > > > > through the coefs - essentially choosing a phase of the impulse > > > > response. In decimation you work on contiguous samples, but only > > > > calculate the outputs you're going to keep. > > > > > > > > > > > > Am I wrong? > > > > > > > > > In a polyphase M-fold decimator, you have M polyphase filter
coefficient
> > sets > > > E_0 through E_{M-1} that each operate at the outgoing sample rate.
Each
> > section > > > is fed from an M-fold downsampler. M phases of the input signal are > > obtained > > > at the M downsampler outputs by delaying the input of each by a
sample.
> > Then > > > each polyphase section is convolved and the results summed. Again > > > your computations require M*K*Fs multiplications/second instead of > > M^2*K*Fs, > > > where Fs is the output sample rate. > > > > This is how I understand it is, too... Don't see the reason for Jim to
say
> > that polyphase only comes into play during interpolation. Could you Jim > > please explain me your point of view? > > I know I'm not Jim, but I see what he was saying. Think of it like > this. The meat-headed way to do decimation is to compute the > convolution of the full (non-polyphase) filter and input data at every > input sample. Then you toss M-1 out of M of those computed > outputs. The non-polyphase but still-computationally-efficient way is > to perform a full filter convolution at every Mth input sample but > using all K*M input samples to do it with, i.e., > > y[n] = sum_{j=0}^{K*M-1} x[n*M - j] * h[j]. > > (Remember here that the filter length is K*M.) > > This method still only takes M*K*Fs multiplies/second but it doesn't
require
> the filter to be partitioned. > > > As I see on most diagrams, you have the filter splitted in branches,
each
> > one having a different phase response, thus the term "polyphase", both
in
> > interpolation and decimation filters. > > > > I ask too, "Am I wrong?". > > It's almost a matter of how you write it. The linear convolution is > > y[n] = x[n*M + 0*M - 0] * h[ 0*M + 0] + x[n*M + 0*M - 1] *
h[ 0*M + 1] + ... + x[n*M + 0*M - (M-1)] * h[ 0*M + (M-1)]
> + x[n*M + 1*M - 0] * h[ 1*M + 0] + x[n*M + 1*M - 1] *
h[ 1*M + 1] + ... + x[n*M + 1*M - (M-1)] * h[ 1*M + (M-1)]
> + ... > + x[n*M + (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M + (K-1)*M - 1] *
h[(K-1)*M + 1] + ... + x[n*M + (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]
> > To get the "polyphase" version just look at this column-wise instead of
row-wise. Nicely done Randy - good explanation! Cheers Bhaskar
> -- > Randy Yates > Sony Ericsson Mobile Communications > Research Triangle Park, NC, USA > randy.yates@sonyericsson.com, 919-472-1124
"Bhaskar Thiagarajan" <bhaskart@my-deja.com> writes:
> [...]
> Nicely done Randy - good explanation!
Thanks Bhaskar! Since I had just studied these structures in my DSP
class at NC State, I was particularly up on the topic.

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
On Thu, 16 Dec 2004 09:21:11 -0500, Jim Thomas <jthomas@bittware.com>
wrote:

>Jaime Andrés Aranguren Cardona wrote:
>> "Jim Thomas" <jthomas@bittware.com> wrote in message
>> news:10s0pegaj0r88e9@corp.supernews.com...
>>
>>>Jaime Andrés Aranguren Cardona wrote:
>>>
>>>>What else could I try, guys? I'd really appreciate it if you can take
>>>>a look at my code, and help me find out where the mistakes can be, or
>>>>provide me with some reference C code.
>>>>
>>>>Thank you very much in advance,
>>>
>>>Did you see Grant's multirate code on dspguru.com?
>>
>> Hi Jim,
>>
>> Yes I did. It is not polyphase, right? I'd love it if it were... Any
>> example in C with a polyphase implementation?
>
>The interpolation code on dspguru is most definitely polyphase.
>
>My understanding is that polyphase only comes into play during
>interpolation. It's only during interpolation that you get the
>opportunity to skip through the coefs - essentially choosing a phase of
>the impulse response. In decimation you work on contiguous samples, but
>only calculate the outputs you're going to keep.
>
>Am I wrong?
I'm coming in in the middle of this and there's already a lot of good
responses here, but: polyphase filters are commonly used in decimating
applications when the decimated samples need to be interpolated
somewhere in between where the original samples were.

Many modern communications systems do this for symbol recovery, where an
oversampled input is simultaneously phase-locked and downsampled to the
signal symbols. Polyphase filters are great for this since the symbol
rate can be changed without changing the ADC sample rate and the related
anti-alias filters, etc.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
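
A hedged C sketch of that idea, reduced to its simplest form: treat a
polyphase bank (prototype lowpass of K*P taps and P phases, designed at
P times the input rate) as a menu of fractional delays and, for each
output, run only the branch nearest the timing offset the loop asks for.
Real symbol-timing recovery normally interpolates between phases, tracks
the offset in a loop, and accounts for the prototype's group delay; the
function name and the nearest-phase shortcut are my simplifications.

/* Estimate the signal value mu samples (0 <= mu < 1) after input sample
 * n0 by selecting one branch of a P-phase polyphase bank.  proto[] is
 * the prototype lowpass with K*P taps; branch p uses proto[k*P + p].
 * The prototype's group delay is ignored here for brevity. */
static double sample_at_offset(const double *x, long n0, double mu,
                               const double *proto, int K, int P)
{
    int p = (int)(mu * P + 0.5);     /* nearest available phase         */
    if (p >= P) {                    /* rounded past the last phase:    */
        p = 0;                       /* use phase 0 of the next sample  */
        n0 += 1;
    }

    double y = 0.0;
    for (int k = 0; k < K; k++) {    /* run branch p only               */
        long idx = n0 - k;
        if (idx >= 0)
            y += x[idx] * proto[(long)k * P + p];
    }
    return y;
}

Changing the symbol rate then only changes how mu and n0 advance per
output, not the ADC rate or the analog anti-alias filtering, which is
the flexibility described above.
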
Randy Yates wrote:
[snip]
> It's almost a matter of how you write it. The linear convolution is
>
> y[n] = x[n*M + 0*M - 0] * h[ 0*M + 0] + x[n*M + 0*M - 1] * h[ 0*M + 1] + ... + x[n*M + 0*M - (M-1)] * h[ 0*M + (M-1)]
>      + x[n*M + 1*M - 0] * h[ 1*M + 0] + x[n*M + 1*M - 1] * h[ 1*M + 1] + ... + x[n*M + 1*M - (M-1)] * h[ 1*M + (M-1)]
>      + ...
>      + x[n*M + (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M + (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M + (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]
>
> To get the "polyphase" version just look at this column-wise instead of
> row-wise.
I think I see where you're going with this, but I still have questions.
In order to get my brain wrapped around this equation I had to plug in
some real numbers. I chose M=3 and K=3. Unless I made a tragic error,
the above reduces to:

y[n] = x[n*3 + 0] * h[0] + x[n*3 - 1] * h[1] + x[n*3 - 2] * h[2]
     + x[n*3 + 3] * h[3] + x[n*3 + 2] * h[4] + x[n*3 + 1] * h[5]
     + x[n*3 + 6] * h[6] + x[n*3 + 5] * h[7] + x[n*3 + 4] * h[8]

Still having trouble, so I plug in n=0 and try again:

y[0] = x[ 0] * h[0] + x[-1] * h[1] + x[-2] * h[2]
     + x[ 3] * h[3] + x[ 2] * h[4] + x[ 1] * h[5]
     + x[ 6] * h[6] + x[ 5] * h[7] + x[ 4] * h[8]

and again, with n=1:

y[1] = x[3] * h[0] + x[2] * h[1] + x[1] * h[2]
     + x[6] * h[3] + x[5] * h[4] + x[4] * h[5]
     + x[9] * h[6] + x[8] * h[7] + x[7] * h[8]

It looks to me like x needs both its rows and columns swapped. I'd be a
lot happier if I ended up with something like this:

y[1] = x[9] * h[0] + x[8] * h[1] + x[7] * h[2]
     + x[6] * h[3] + x[5] * h[4] + x[4] * h[5]
     + x[3] * h[6] + x[2] * h[7] + x[1] * h[8]

Again, I'm allowing ample room for the possibility that I'm wrong, so
don't be shy! Is your equation correct?

--
Jim Thomas            Principal Applications Engineer      Bittware, Inc
jthomas@bittware.com  http://www.bittware.com     (603) 226-0404 x536
Hope springs occasionally.
Jim Thomas <jthomas@bittware.com> writes:

> Randy Yates wrote:
> [snip]
>> It's almost a matter of how you write it. [...] To get the "polyphase"
>> version just look at this column-wise instead of row-wise.
>
> I think I see where you're going with this, but I still have questions.
> In order to get my brain wrapped around this equation I had to plug in
> some real numbers. I chose M=3 and K=3. [...]
>
> It looks to me like x needs both its rows and columns swapped.
> [...]
>
> Again, I'm allowing ample room for the possibility that I'm wrong, so
> don't be shy! Is your equation correct?
Nope! You're right. I had a sign problem in the x's. This corrects that
problem (hopefully there are no others!):

y[n] = x[n*M - 0*M - 0] * h[ 0*M + 0] + x[n*M - 0*M - 1] * h[ 0*M + 1] + ... + x[n*M - 0*M - (M-1)] * h[ 0*M + (M-1)]
     + x[n*M - 1*M - 0] * h[ 1*M + 0] + x[n*M - 1*M - 1] * h[ 1*M + 1] + ... + x[n*M - 1*M - (M-1)] * h[ 1*M + (M-1)]
     + ...
     + x[n*M - (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M - (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M - (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]

Thanks for the correction, Jim.

--
%  Randy Yates                  % "Maybe one day I'll feel her cold embrace,
%% Fuquay-Varina, NC            %    and kiss her interface,
%%% 919-577-9882                %    til then, I'll leave her alone."
%%%% <yates@ieee.org>           % 'Yours Truly, 2095', *Time*, ELO
http://home.earthlink.net/~yatescr
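
A quick numeric check of the corrected layout, in the spirit of Jim's
M=3, K=3 exercise: compare it against the plain decimated convolution
y[n] = sum_j x[n*M - j] * h[j] on an arbitrary ramp input. The test
signal and coefficient values below are made up purely for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    enum { M = 3, K = 3, N = 64 };
    double x[N], h[K * M];

    for (int i = 0; i < N; i++)      x[i] = (double)i + 1.0;  /* ramp input  */
    for (int j = 0; j < K * M; j++)  h[j] = 1.0 / (j + 2.0);  /* arbitrary h */

    for (int n = K; n < N / M; n++) {            /* skip the start-up region */
        double direct = 0.0, poly = 0.0;

        for (int j = 0; j < K * M; j++)          /* y[n] = sum x[nM-j]*h[j]  */
            direct += x[n * M - j] * h[j];

        for (int k = 0; k < K; k++)              /* corrected row k          */
            for (int m = 0; m < M; m++)          /* column m within the row  */
                poly += x[n * M - k * M - m] * h[k * M + m];

        if (fabs(direct - poly) > 1e-12) {
            printf("mismatch at n = %d\n", n);
            return EXIT_FAILURE;
        }
    }
    puts("corrected layout matches the plain decimated convolution");
    return EXIT_SUCCESS;
}
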
Randy Yates wrote:
> Jim Thomas <jthomas@bittware.com> writes:
[snip]
>>Again, I'm allowing ample room for the possibility that I'm wrong, so
>>don't be shy! Is your equation correct?
>
> Nope! You're right. I had a sign problem in the x's. This corrects that
> problem (hopefully there are no others!):
>
> y[n] = x[n*M - 0*M - 0] * h[ 0*M + 0] + x[n*M - 0*M - 1] * h[ 0*M + 1] + ... + x[n*M - 0*M - (M-1)] * h[ 0*M + (M-1)]
>      + x[n*M - 1*M - 0] * h[ 1*M + 0] + x[n*M - 1*M - 1] * h[ 1*M + 1] + ... + x[n*M - 1*M - (M-1)] * h[ 1*M + (M-1)]
>      + ...
>      + x[n*M - (K-1)*M - 0] * h[(K-1)*M + 0] + x[n*M - (K-1)*M - 1] * h[(K-1)*M + 1] + ... + x[n*M - (K-1)*M - (M-1)] * h[(K-1)*M + (M-1)]
>
> Thanks for the correction, Jim.
That's better, and now I see what you mean. But... it seems to me that
arranging the filter equation into rows and columns like this is
somewhat artificial. At this point I'm just going to confuse people who
already understand polyphase filters but aren't confident in their
understanding, and for that, I apologize in advance. I guess I'm firmly
in that same camp.

Each value of y depends on every filter tap, so I still don't see why
it's polyphase. If this is polyphase, aren't ALL FIRs polyphase?

--
Jim Thomas            Principal Applications Engineer      Bittware, Inc
jthomas@bittware.com  http://www.bittware.com     (603) 226-0404 x536
Getting an inch of snow is like winning ten cents in the lottery - Calvin