
Dumb Decimation Technique?

Started by Unknown January 7, 2008
DSP gurus:

Concerning decimation by a rational scale factor, L/M, where

 L = upsampling factor,
 M = downsampling factor,
 M > L (hence, decimation occurs)

Define Decimator A as
 Input -> FIR bandlimit(L pi / M) -> Two-point interpolation -> Output

Define Decimator B as
 Input -> Upsample by L -> FIR bandlimit(pi / M) -> Downsample by M ->
Output

In DecA, the two-point interpolation is used to pick the desired in-
between-sample points (n M / L), n = 0, 1, 2, ....

In DecB, upsampling by L is achieved by inserting (L-1) zeros between
each pair of samples; downsampling by M is achieved by taking every
Mth sample.

Intuitively, it seems that DecA is inferior to DecB.
But why is that, exactly (ignoring implementation details, such as use
of polyphase decomposition)?

Does DecA introduce artifacts that do not occur in DecB?

I've seen both methods used in commercial image scaling software,
FPGAs, etc.
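For concreteness, here is a minimal NumPy sketch of the two pipelines
as defined above (the filter lengths and the test tone are my own
arbitrary choices, not anything prescribed in this post):

import numpy as np
from scipy.signal import firwin, lfilter, upfirdn

L, M = 4, 5                                     # resample by L/M = 4/5
x = np.cos(2 * np.pi * 0.15 * np.arange(1000))  # tone below L*pi/M

# Decimator A: FIR bandlimit to L*pi/M, then two-point (linear)
# interpolation at the output instants n*M/L on the input grid.
h_a = firwin(63, L / M)                   # cutoff L*pi/M rad/sample
xf = lfilter(h_a, 1.0, x)
t = np.arange(len(x) * L // M) * (M / L)  # fractional sample times
i = np.minimum(t.astype(int), len(xf) - 2)
frac = t - t.astype(int)
y_a = (1 - frac) * xf[i] + frac * xf[i + 1]

# Decimator B: insert L-1 zeros, FIR bandlimit to pi/M, keep every Mth.
h_b = L * firwin(16 * M, 1.0 / M)         # gain L restores amplitude
y_b = upfirdn(h_b, x, up=L, down=M)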

On Mon, 07 Jan 2008 13:27:22 -0800, prediction wrote:

> [...]
> Intuitively, it seems that DecA is inferior to DecB.
> But why is that, exactly (ignoring implementation details, such as use
> of polyphase decomposition)?
If you choose a triangular filter that's exactly 2L-1 points long for
DecB, it becomes DecA. So DecB isn't _inherently_ better, it just
allows for a wider choice of filters.

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
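That equivalence is easy to check numerically. A small sketch (NumPy;
the sizes and the random test signal are my own choices) comparing
DecB with a 2L-1 point triangular filter against direct two-point
interpolation:

import numpy as np

L, M = 4, 5
rng = np.random.default_rng(0)
x = rng.standard_normal(50)

# DecB with a triangular kernel: zero-stuff by L, convolve, keep every Mth.
u = np.zeros(len(x) * L)
u[::L] = x
tri = 1.0 - np.abs(np.arange(-(L - 1), L)) / L  # 2L-1 taps, unity at center
w = np.convolve(u, tri)[L - 1:]                 # undo the kernel's delay
n_out = ((len(x) - 1) * L) // M + 1             # outputs that land inside x
y_b = w[::M][:n_out]

# DecA: direct two-point interpolation at the output instants n*M/L.
t = np.arange(n_out) * M / L
i = t.astype(int)                               # floor for non-negative t
frac = t - i
y_a = (1 - frac) * x[i] + frac * x[np.minimum(i + 1, len(x) - 1)]

print(np.max(np.abs(y_a - y_b)))                # ~1e-16: the two coincide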
Bandlimiting is just one aspect of decimation. It just says that if
there were any components beyond (L*Fs_in)/(2*M) in the input
spectrum, they should be removed so that they do not, in turn, alias
back and corrupt a piece of the output spectrum.

So this is just one part of the solution.

Now comes the second part: generating samples at the positions defined
by the output and input sample rates. As we all understand, this
cannot be done just by linear interpolation, since linear
interpolation does not suppress the images created, which leads to new
frequency components appearing in the output spectrum. Interestingly,
since the output spectrum is even smaller than the input spectrum (the
decimation case), the insufficiently suppressed images will be aliased
back.

Now let's see what happens in approach B. In approach B, we first
generate the various in-between samples by means of phase
interpolation (zero insertion followed by filtering). The bandlimiting
pi/M filter helps in two ways: first, it suppresses the images created
by the zero insertion to a large extent (possibly to below the dynamic
range of the datapath); second, it removes the part of the input
spectrum that would otherwise alias back and disturb a piece of the
output spectrum. Hence this is a much more appropriate approach than
case A.

In approach A, what you have essentially done is change the order in
which the multirate processing is done. You have done the decimation
first, and then you are trying to interpolate using a very coarse
mechanism called linear interpolation.

The artifacts this causes in image processing can easily be caught if
one uses a colour zone plate (or just a grey-scale version of it),
passes it through both approaches, and compares: the insufficient
image suppression of linear interpolation causes new circles to appear
in places where you will not see them in approach B. Just be slightly
cautious while designing your filters. In the image processing domain
we like to strike a balance in filter length: it should not be so
small that it fails to serve its purpose, and it should not be so
large that its transients start to hurt more than the filter helps.
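A quick way to try this (my own sketch, not anything from this post;
the image size and filter lengths are arbitrary choices):

import numpy as np
from scipy.signal import firwin, lfilter, upfirdn

def zone_plate(n=512):
    # Grey-scale zone plate: spatial frequency grows with radius.
    v = np.arange(n) - n / 2
    xx, yy = np.meshgrid(v, v)
    return 0.5 + 0.5 * np.cos(np.pi * (xx**2 + yy**2) / n)

def dec_a_rows(img, L, M, ntaps=31):
    # Decimator A along rows: bandlimit, then linear interpolation.
    h = firwin(ntaps, L / M)
    f = lfilter(h, 1.0, img, axis=1)
    t = np.arange(img.shape[1] * L // M) * (M / L)
    i = np.minimum(t.astype(int), img.shape[1] - 2)
    frac = t - t.astype(int)
    return (1 - frac) * f[:, i] + frac * f[:, i + 1]

def dec_b_rows(img, L, M, taps_per_phase=16):
    # Decimator B along rows: zero-stuff, bandlimit, subsample.
    h = L * firwin(taps_per_phase * L, 1.0 / M)
    return upfirdn(h, img, up=L, down=M, axis=1)

img = zone_plate()
a = dec_a_rows(dec_a_rows(img, 4, 5).T, 4, 5).T   # rows, then columns
b = dec_b_rows(dec_b_rows(img, 4, 5).T, 4, 5).T
# Display both: approach A shows spurious rings (aliased images)
# in places where approach B shows none.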

Bharat Pathak
Arithos Designs
www.arithos.com


On Mon, 7 Jan 2008 13:27:22 -0800 (PST), prediction@gmail.com wrote:

>DSP gurus:
Hi prediction
> Concerning decimation by a rational scale factor, L/M, where
>  L = upsampling factor,
>  M = downsampling factor,
>  M > L (hence, decimation occurs)
OK, ... so the output sample rate will be less than the input sample rate (i.e., decimation).
> Define Decimator A as
>  Input -> FIR bandlimit(L pi / M) -> Two-point interpolation -> Output
>
> Define Decimator B as
>  Input -> Upsample by L -> FIR bandlimit(pi / M) -> Downsample by M ->
> Output
> [...]
If I understand your notation, it doesn't look to me like your "DecA"
method could work. That method is: (1) lowpass filter a signal and (2)
perform some sort of two-point (linear??) interpolation. How do those
two processes yield decimation? Not knowing the values of M and L, I'm
not sure how to interpret the words "two-point interpolation".

In any case, your "DecB" method is the classic (popular) method for
resampling a signal by a rational factor. Of course, as Bharat said,
the FIR filter in this M > L case must attenuate all input signal
spectral components whose frequencies are greater than
(L*Fs_in)/(2*M), where Fs_in is the input sample rate in Hz.

The neat part of the "DecB" method is that DSP gurus have developed a
technique that (1) uses only one tapped-delay line for the FIR filter,
as opposed to a delay line for each polyphase subfilter, (2) requires
no zero stuffing, and (3) computes no filter output samples that are
then discarded. fred harris discusses interpolation using that clever
method in Chapter 7 of his "Multirate Signal Processing" book.
However, fred does not (as far as I can tell) give an example of
decimation using that clever method.

Good Luck,
[-Rick-]
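For reference, here is a plain polyphase sketch of that idea in NumPy.
This is not fred harris's exact structure from Chapter 7 (the function
name, filter length, and prototype design are my own assumptions); it
only shows that, per output sample, one subfilter fires against the
input history, so no zeros are stuffed and no computed outputs are
discarded:

import numpy as np
from scipy.signal import firwin

def resample_polyphase(x, L, M, taps_per_phase=12):
    # Rational L/M resampler: conceptually upsample-by-L, lowpass,
    # downsample-by-M, but evaluated polyphase.
    h = L * firwin(L * taps_per_phase, 1.0 / max(L, M))  # prototype LPF
    p = h.reshape(taps_per_phase, L).T   # p[k] = k-th polyphase subfilter
    n_out = (len(x) * L) // M
    y = np.empty(n_out)
    for n in range(n_out):
        t = n * M             # position on the (virtual) upsampled grid
        phase = t % L         # which subfilter fires for this output
        m = t // L            # newest input sample involved
        seg = x[max(0, m - taps_per_phase + 1): m + 1]
        y[n] = np.dot(seg[::-1], p[phase][:len(seg)])
    return y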
Thanks for the comments, everyone.

Rick, here's an illustration of DecA for scaling by 4/5, showing what
I meant by "two-point interpolation":

 estimate bandlimited x[n] at times 0.0, 1.25, 2.5, 3.75, 5.0, etc.,
to produce

  y[0] = x[0],
  y[1] = 0.75 x[1] + 0.25 x[2],
  y[2] = 0.5 x[2] + 0.5 x[3],
  y[3] = 0.25 x[3] + 0.75 x[4],
  y[4] = x[5],
  etc.

(i.e., "resampling" x[n] such that for every 5 samples of x you get 4
samples of y)

If x[n] is bandlimited, say to under (4 pi / 5) rad/sample, do you
still get aliasing, and why?
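One way to poke at that question numerically (a sketch of the 4/5
mapping above; the tone frequency and FFT length are my own choices,
and the DecA prefilter is omitted since the tone is already
bandlimited):

import numpy as np

N = 4096
x = np.cos(2 * np.pi * 0.15 * np.arange(N))  # 0.3*pi rad/sample < 4*pi/5
t = np.arange(N * 4 // 5) * 1.25             # output instants n*M/L
i = np.minimum(t.astype(int), N - 2)
frac = t - t.astype(int)
y = (1 - frac) * x[i] + frac * x[i + 1]

Y = np.abs(np.fft.rfft(y * np.hanning(len(y))))
Y_db = 20 * np.log10(Y / Y.max() + 1e-12)
# The tone lands at 0.15*5/4 = 0.1875 cycles/sample in the output; any
# other peaks are images of the triangular (sinc^2) interpolation
# kernel folding back into the band.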
prediction@gmail.com wrote:
> [...]
> If x[n] is bandlimited, say to under (4 pi / 5) rad/sample, do you
> still get aliasing, and why?
Of course you get aliasing. A better interpolation scheme (using more
points, say) would give a different result. I think you can prove to
yourself that the difference is distortion made manifest. Distortion
makes harmonics (some of which will exceed Fs/2) and intermodulation.
Some of the intermod may alias, and all of it is detrimental.

Jerry
--
Engineering is the art of making what you want from things you can get.
> [...]
> Intuitively, it seems that DecA is inferior to DecB.
> But why is that, exactly (ignoring implementation details, such as use
> of polyphase decomposition)?
Not necessarily. r b-j once computed that if L/M < 1/512, linear
interpolation yields 120dB image attenuation, which might suit you
just fine. If L/M is larger, you can still get good results using
polynomial interpolation. Read all about it at

http://yehar.com/dsp/deip.pdf

DecB is a generalization of DecA, as Tim mentions.
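For instance, a classic 3rd-order, 4-tap polynomial interpolator is
Catmull-Rom; a sketch (this is a standard kernel, not necessarily one
of the optimized filters from Olli's paper):

import numpy as np

def catmull_rom(x, m, frac):
    # Interpolate between x[m] and x[m+1] at fractional position
    # frac in [0, 1); requires 1 <= m <= len(x) - 3.
    xm1, x0, x1, x2 = x[m - 1], x[m], x[m + 1], x[m + 2]
    c1 = 0.5 * (x1 - xm1)
    c2 = xm1 - 2.5 * x0 + 2.0 * x1 - 0.5 * x2
    c3 = 0.5 * (x2 - xm1) + 1.5 * (x0 - x1)
    return ((c3 * frac + c2) * frac + c1) * frac + x0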
> Does DecA introduce artifacts that do not occur in DecB?
>
> I've seen both methods used in commercial image scaling software,
> FPGAs, etc.
Yes, spline interpolation is widely used in image processing, because
of the positivity of the FIR coefficients.

Regards,
Andor
On Jan 9, 10:54 am, Andor <andor.bari...@gmail.com> wrote:
I would like to ask a question in the context of re-sampling for audio
applications. Is the issue with low-order polynomial interpolation
techniques (for high-quality audio applications):

a) Inadequate linear filtering, i.e. they do not provide adequate
filtering to prevent aliasing of signals that are in the input? (If
the input signal were somehow perfectly bandlimited for the output
sample rate, this would not be an issue.)

or

b) Non-linear distortion, i.e. interpolation is a non-linear process,
so non-linear distortion (harmonics and intermodulation) is created,
this alone being a problem? (This would still be an issue even if the
input signal were somehow perfectly bandlimited for the output sample
rate.)

thank you
Mark
Mark wrote:
> a) Inadequate linear filtering, i.e. they do not provide adequate
> filtering to prevent aliasing of signals that are in the input?
The above is the case. Low-order polynomial FIR interpolators have low
attenuation in the stopband. In the paper I linked above, Olli
compares a number of polynomial interpolators in different
oversampling scenarios (2-, 4-, 8-, 16- and 32-fold oversampling).
Using an SNR measure of his own devising (a psycho-acoustically
weighted maximal sidelobe), he gives the "best" interpolation filters
with the following modified SNR:

1st-order interpolation:

 Oversampling N | SNR of best 1st-order interpolator
 ---------------+-----------------------------------
        2       |  19.1 dB
        4       |  33.8 dB
        8       |  47.0 dB
       16       |  59.7 dB
       32       |  72.0 dB

The data seem to follow a logarithmic curve. When I fit the data
( dB(N) = 6.81 + 19 ln(N) ), I get N=80 for 90dB SNR and N=386 for
120dB SNR. This is about what r b-j arrived at as well (he says you
need N=512 for 120dB SNR with a linear interpolator).

For 3rd-order interpolation, you get:

 Oversampling N | SNR of best 3rd-order interpolator
 ---------------+-----------------------------------
        2       |  65.9 dB
        4       |  89.0 dB
        8       | 112.9 dB
       16       | 136.9 dB
       32       | 161.0 dB

So 16-fold oversampling is enough to get decent audio-quality
resampling with just a 3rd-order polynomial interpolator (an FIR
filter with four taps). Details on the filter coefficients and the
definition of the SNR measure can be had from his paper.
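Inverting that fit for a target SNR makes the numbers explicit (the
constants come from the fit above, not directly from Olli's tables):

import math

def oversampling_needed(snr_db):
    # Solve snr_db = 6.81 + 19*ln(N) for the oversampling factor N.
    return math.exp((snr_db - 6.81) / 19.0)

print(oversampling_needed(90.0))    # ~80
print(oversampling_needed(120.0))   # ~386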
> or
>
> b) Non-linear distortion, i.e. interpolation is a non-linear process,
> so non-linear distortion (harmonics and intermodulation) is created,
> this alone being a problem?
Resampling with FIR filters (polynomial or sinc) is a linear but
time-variant process (periodically time-variant for rational sampling
rate changes). The additional distortion products are generated in the
subsampling stage of the resampler. All resamplers (FIR or IIR) have
distortion products. The stopband attenuation (and the transition
bandwidth) of the filter determines the amount of aliasing.

Here is a page that compares different audio resamplers:

http://src.infinitewave.ca/

The graphs nicely show the harmonic distortion products during a slow
sine-wave sweep through the resampler. There are some pretty wild
pictures in there!

Regards,
Andor
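In the same spirit as that page, a small sweep test (the rates, sweep
range, and filter length here are my own choices): resample a 96 kHz
chirp to 44.1 kHz (L/M = 147/320) and look for mirror sweeps in the
spectrogram:

import numpy as np
from scipy.signal import chirp, firwin, upfirdn, spectrogram

fs_in = 96000
L, M = 147, 320                     # 96 kHz -> 44.1 kHz
t = np.arange(4 * fs_in) / fs_in
x = chirp(t, f0=20.0, t1=t[-1], f1=40000.0)  # sweeps past output Nyquist

h = L * firwin(24 * M, 1.0 / M)     # lowpass at pi/M on the upsampled grid
y = upfirdn(h, x, up=L, down=M)

f, tt, S = spectrogram(y, fs=fs_in * L / M)
# A good resampler shows one clean sweep line; insufficient stopband
# attenuation shows up as extra lines folding down from the Nyquist edge.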
I wrote:

> So 16-fold oversampling is enough to get decent audio-quality
> resampling with just a 3rd-order polynomial interpolator (an FIR
> filter with four taps).
Perhaps other interesting numbers for audio resampling are the special
cases N=2 (96kHz) and N=4 (192kHz). For N=2, Olli found a 5th-order
polynomial (an FIR filter with 6 taps) having 111dB SNR. For N=4,
there is a 5th-order polynomial with 149dB SNR!