
Dumb Decimation Technique?

Started by Unknown January 7, 2008

Andor wrote:
> > DSP gurus:
> >
> > Concerning decimation by a rational scale factor, L/M, where
> >
> > L = upsampling factor,
> > M = downsampling factor,
> > M > L (hence, decimation occurs)
> >
> > Define Decimator A as
> > Input -> FIR bandlimit(L pi / M) -> Two-point interpolation -> Output
> >
> > Define Decimator B as
> > Input -> Upsample by L -> FIR bandlimit(pi / M) -> Downsample by M -> Output
> >
> > In DecA, the two-point interpolation is used to pick the desired in-between-sample points (n M / L), n = 0, 1, 2, ....
> >
> > In DecB, upsampling by L is achieved by inserting (L-1) zeros between each pair of samples; downsampling by M is achieved by taking every Mth sample.
> >
> > Intuitively, it seems that DecA is inferior to DecB. But why is that, exactly (ignoring implementation details, such as use of polyphase decomposition)?
>
> Not necessarily. r b-j once computed that if L/M < 1/512, linear interpolation yields 120 dB image attenuation, which might suit you just fine. If L/M is larger, you can still get good results using polynomial interpolation. Read all about it at
>
> http://yehar.com/dsp/deip.pdf
>
> DecB is a generalization of DecA, as Tim mentions.
>
> > Does DecA introduce artifacts that do not occur in DecB?
> >
> > I've seen both methods used in commercial image scaling software, FPGAs, etc.
>
> Yes, spline interpolation is widely used in image processing,
How exactly does spline interpolation fit in here? The OP asked about making images smaller. In image processing, splines are usually only used for scaling an image up, because they are a fast, easy way to go directly to any larger size. For decimation, splines don't work so well. When they are used, it is usually to handle the upsampling part, with the downsampling part then handled by some kind of efficient half-band filtering where M is made a power of two. But neither DecA nor DecB, as the OP described them, involves anything that resembles upsampling with splines.
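A minimal sketch of that power-of-two half-band scheme, for one line of pixels (assuming NumPy; the particular 7-tap half-band kernel and the three stages are illustrative choices, not anything specified in this thread):

    import numpy as np

    def halfband_decimate(row, stages):
        # Repeatedly lowpass with a short half-band kernel and keep every
        # other sample; each stage halves the length (edge handling is naive).
        h = np.array([-1/32, 0.0, 9/32, 1/2, 9/32, 0.0, -1/32])  # DC gain 1
        for _ in range(stages):
            row = np.convolve(row, h, mode="same")[::2]
        return row

    row = np.random.rand(1024)          # one image row
    small = halfband_decimate(row, 3)   # 1024 -> 512 -> 256 -> 128 pixels
    print(small.shape)                  # (128,)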
> because of the positivity of the FIR coefficients.
I don't understand how this part of the sentence makes any sense either. -jim
> Regards,
> Andor
Re. my original post, I think I have convinced myself that DecA is
equivalent to

 FIR-bandlimit -> upsample by L -> 2L-tap FIR -> downsample by M,

where the 2L-tap FIR is a tent function / triangular filter / Bartlett
function, as stated by Tim (but I think the length must be 2L, not
(N*M)-2); e.g., for L=2, that filter would be something like {0.5,
1.0, 0.5, 0.0}.

This would indicate that one way in which DecA is inferior to DecB is
that the relatively wimpy 2L-tap FIR in DecA will have a harder job of
suppressing the upsampling-created images than will the N*L-tap FIR in
DecB.

Now I want to determine what benefit is created by the initial FIR-
bandlimiting step in DecA (the results look better with it than
without it); we know it can't suppress the upsampling-created images,
since those happen after the bandlimiting; however, the bandlimiting
might help avoid aliasing in the downsampler. Thinking about this...
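A small experiment along these lines (a sketch assuming NumPy/SciPy; the 63-tap prefilter, the 0.35 cycles/sample test tone, and L = 3, M = 7 are arbitrary illustrative choices): a tone above pi L / M passes the two-point interpolator largely untouched and aliases into the output, unless the prefilter removes it first.

    import numpy as np
    from scipy.signal import firwin, lfilter

    L, M = 3, 7
    n = np.arange(4000)
    x = np.sin(2 * np.pi * 0.35 * n)   # tone above the output Nyquist, L/(2M) ~ 0.214 cycles/sample

    def dec_a(x, prefilter):
        if prefilter:
            x = lfilter(firwin(63, L / M), 1.0, x)     # bandlimit to pi*L/M before interpolating
        t = np.arange(int((len(x) - 1) * L / M)) * M / L
        return np.interp(t, np.arange(len(x)), x)      # two-point interpolation at n*M/L

    for pre in (False, True):
        y = dec_a(x, pre)[100:-100]                    # drop filter/edge transients
        win = np.hanning(len(y))
        amp = 2 * np.abs(np.fft.rfft(y * win)).max() / win.sum()
        print("prefilter =", pre, "-> residual aliased tone ~ %.1f dB" % (20 * np.log10(amp + 1e-12)))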

On Jan 7, 6:48 pm, Tim Wescott <t...@seemywebsite.com> wrote:
> On Mon, 07 Jan 2008 13:27:22 -0800, prediction wrote:
> > DSP gurus:
> >
> > Concerning decimation by a rational scale factor, L/M, where
> >
> > L = upsampling factor,
> > M = downsampling factor,
> > M > L (hence, decimation occurs)
> >
> > Define Decimator A as
> > Input -> FIR bandlimit(L pi / M) -> Two-point interpolation -> Output
> >
> > Define Decimator B as
> > Input -> Upsample by L -> FIR bandlimit(pi / M) -> Downsample by M -> Output
> >
> > [...]
>
> If you choose a triangular filter that's exactly (N*M)-2 points long for DecB, it becomes DecA. So DecB isn't _inherently_ better, it just allows for a wider choice of filters.
>
> --
> Tim Wescott
> Control systems and communications consulting
> http://www.wescottdesign.com

prediction@gmail.com wrote:
> Re. my original post, I think I have convinced myself that DecA is equivalent to
>
>  FIR-bandlimit -> upsample by L -> 2L-tap FIR -> downsample by M,
>
> where the 2L-tap FIR is a tent function / triangular filter / Bartlett function, as stated by Tim (but I think the length must be 2L, not (N*M)-2); e.g., for L=2, that filter would be something like {0.5, 1.0, 0.5, 0.0}.
That's 2L-1, isn't it? Always odd length, with the center coefficient equal to 1.
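A quick numerical check of the claimed equivalence, using that odd-length (2L-1)-tap kernel (a sketch assuming NumPy; L = 3, M = 7 and the 200-sample input are arbitrary): linear interpolation at the points n M / L matches zero-stuffing by L, convolving with the triangular kernel, and keeping every Mth sample of the result.

    import numpy as np

    L, M = 3, 7                      # example rational factor L/M
    x = np.random.rand(200)          # arbitrary input
    N = len(x)

    # DecA-style two-point (linear) interpolation at t = n*M/L
    n = np.arange(int((N - 1) * L / M))          # keep t inside the input
    t = n * M / L
    y_interp = np.interp(t, np.arange(N), x)

    # Zero-stuff by L, apply the (2L-1)-tap triangular kernel, take every Mth sample
    x_up = np.zeros(N * L)
    x_up[::L] = x                                 # insert L-1 zeros between samples
    h = 1.0 - np.abs(np.arange(-(L - 1), L)) / L  # e.g. {1/3, 2/3, 1, 2/3, 1/3} for L=3
    y_filt = np.convolve(x_up, h, mode="same")[n * M]

    print(np.max(np.abs(y_interp - y_filt)))      # ~1e-16: the two paths agree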
> This would indicate that one way in which DecA is inferior to DecB is that the relatively wimpy 2L-tap FIR in DecA will have a harder job of suppressing the upsampling-created images than will the N*L-tap FIR in DecB.
Yes, but in general image-processing filters need to be wimpy.
> Now I want to determine what benefit is created by the initial FIR-bandlimiting step in DecA (the results look better with it than without it); we know it can't suppress the upsampling-created images, since those happen after the bandlimiting; however, the bandlimiting might help avoid aliasing in the downsampler. Thinking about this...
Think about the product of the two frequency responses (at the upsampled rate). How does that compare to the single frequency response of the DecB filter? It could be not much different. -jim
On Jan 10, 11:28 am, jim <"sjedgingN0sp"@m...@mwt.net> wrote:
> Think about the product of the two frequency responses (at the upsampled rate). How does that compare to the single frequency response of the DecB filter? It could be not much different.
>
> -jim
I believe that all that prefiltering does is narrow the bandwidth of each of the aliased images created by the upsampler.
To summarize:

DecA is generally inferior to DecB for two main reasons:

 1. DecB can be implemented far more efficiently, using a polyphase
decomposition.

 2. The filter in DecB can be designed to better suppress the images
created by upsampling and better suppress the aliasing created by
downsampling.

DecA is equivalent to

 Input -> FIR bandlimit (pi L / M) -> Upsample by L -> (2L - 1)-tap FIR ->
Downsample by M,

where the impulse response of the second FIR is the Bartlett window:

 h[k] = 1 - abs (k / L),      -L < k < L

Upsampling creates images at multiples of 2 pi / L. The first FIR in
DecA bandlimits the input signal, which reduces the bandwidth of each
of those images but does not otherwise suppress them. We have no
control over the second FIR in DecA, since its impulse response is
constrained to be a Bartlett window; thus, this filter cannot be
optimized for a given scaling factor.
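As a small sketch of that constraint (assuming NumPy/SciPy; L = 4 is an arbitrary choice): the (2L-1)-tap triangular kernel has exact nulls at the image centers 2 pi m / L, but between the nulls its sidelobes sit only about 26 dB down, and the passband is not flat.

    import numpy as np
    from scipy.signal import freqz

    L = 4
    h = 1.0 - np.abs(np.arange(-(L - 1), L)) / L   # (2L-1)-tap triangular kernel, DC gain L

    w, H = freqz(h, worN=4096)                     # radians/sample at the upsampled rate
    H_dB = 20 * np.log10(np.maximum(np.abs(H), 1e-12) / L)

    # Exact nulls at w = 2*pi*m/L (the image centers); the first sidelobe peak
    # between the nulls is only about -26 dB, and the passband droops.
    for m in range(1, L // 2 + 1):
        k = np.argmin(np.abs(w - 2 * np.pi * m / L))
        print(f"near w = 2*pi*{m}/{L}: {H_dB[k]:.1f} dB")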

DecA appears to be a dumb decimator. So why is it used in products
such as TI's TMS320DM646x DMSoC Video Data Conversion Engine (VDCE)?

(see http://focus.ti.com/lit/ug/sprueq9/sprueq9.pdf)


(Actually, the TI chip uses a 4-tap interpolator, not a 2-tap, but you
still run up against the same issues.)


prediction@gmail.com wrote:
> To summarize:
>
> DecA is generally inferior to DecB for two main reasons:
>
>  1. DecB can be implemented far more efficiently, using a polyphase decomposition.
Yes, that is more efficient than the DecB that you initially described, but is it more efficient than DecA?
> 2. The filter in DecB can be designed to better suppress the images created by upsampling and better suppress the aliasing created by downsampling.
>
> DecA is equivalent to
>
>  Input -> FIR bandlimit (pi L / M) -> Upsample by L -> (2L - 1)-tap FIR -> Downsample by M,
>
> where the impulse response of the second FIR is the Bartlett window:
>
>  h[k] = 1 - abs (k / L),      -L < k < L
>
> Upsampling creates images at multiples of 2 pi / L. The first FIR in DecA bandlimits the input signal, which reduces the bandwidth of each of those images but does not otherwise suppress them. We have no control over the second FIR in DecA, since its impulse response is constrained to be a Bartlett window; thus, this filter cannot be optimized for a given scaling factor.
The second FIR doesn't need a steep transition band. All it has to do is attenuate the images. One could say it is designed to automatically adjust for that. You need to look at the product of the two frequency responses.
On Jan 11, 4:03 pm, jim <"sjedgingN0sp"@m...@mwt.net> wrote:

> Yes, that is more efficient than the DecB that you initially described, but is it more efficient than DecA?
Of course this is more efficient than DecA. Compare the cost of applying DecA to a line of pixels with the cost of applying DecB (where DecB is implemented via polyphase). My point is that DecA is not inherently decomposable into polyphase form, because of the initial "FIR bandlimit" filter, which has to be applied to every single pixel across the line. That filter is essentially a waste of resources: it would be much more efficient to spend those resources on a longer post-upsampling filter, implemented in DecB using a polyphase decomposition.
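A minimal sketch of the polyphase DecB being described (assuming NumPy/SciPy; the L = 3, M = 7 factor and the 8-taps-per-phase filter length are illustrative choices, not from this discussion). Conceptually it zero-stuffs by L, lowpasses at pi/max(L, M), and keeps every Mth sample, but it never multiplies by the stuffed zeros and never computes the discarded outputs:

    import numpy as np
    from scipy.signal import firwin

    def polyphase_resample(x, L, M, taps_per_phase=8):
        ntaps = taps_per_phase * L
        h = L * firwin(ntaps, 1.0 / max(L, M))   # prototype lowpass at the upsampled rate
        n_out = (len(x) * L) // M
        y = np.empty(n_out)
        for n in range(n_out):
            m = n * M                             # index at the upsampled rate
            acc = 0.0
            # only the taps aligned with non-zero (un-stuffed) samples contribute
            for k in range(m % L, ntaps, L):
                i = (m - k) // L
                if 0 <= i < len(x):
                    acc += h[k] * x[i]
            y[n] = acc
        return y

    x = np.random.rand(720)                       # one line of pixels
    y = polyphase_resample(x, 3, 7)
    print(len(x), len(y))                         # 720 -> 308, ~taps_per_phase multiplies per output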
> The second FIR doesn't need a steep transition band. All it has to do is attenuate the images. One could say it is designed to automatically adjust for that.
The second FIR in DecA is defined to be a Bartlett function. The Bartlett filter does not necessarily attenuate the images very well. The images are centered at multiples of 2 pi / L; you might, e.g., want to have nulls at those frequencies. You might also want a flat passband. In DecB you have the flexibility of using your degrees of freedom to satisfy whatever filter specs are important in a given application.
> You need to look at the product of the two frequency responses.
I don't think that works. The first FIR in DecA limits the bandwidth of the input signal. The upsampler then compresses the frequencies of the input spectrum by 1/L, while adding images. This means the effect of the filter response that has been applied to the input signal also gets compressed by 1/L and imaged; thus, you can't simply multiply the two frequency responses on a common frequency axis.

prediction@gmail.com wrote:
> > Yes, that is more efficient than the DecB that you initially described, but is it more efficient than DecA?
>
> Of course this is more efficient than DecA. Compare the cost of applying DecA to a line of pixels with the cost of applying DecB (where DecB is implemented via polyphase). My point is that DecA is not inherently decomposable into polyphase form, because of the initial "FIR bandlimit" filter, which has to be applied to every single pixel across the line. That filter is essentially a waste of resources: it would be much more efficient to spend those resources on a longer post-upsampling filter, implemented in DecB using a polyphase decomposition.
Well, ALL of this depends on the implementation and the exact scale factor involved. In the absence of those specific details, what you say may be right or may be wrong. That said, yes, the DecB method will tend to have a longer filter, or in other words will tend to need more additions and multiplications for each output pixel. So how is that going to be more efficient?
> > You need to look at the product of the two frequency responses.
>
> I don't think that works. The first FIR in DecA limits the bandwidth of the input signal.
It does what it does; let's call the result y[n].
> The upsampler then compresses the frequencies of the input spectrum by 1/L, while adding images.
Not of the original input spectrum; it does that to the spectrum of y[n].
> This means the effect of the filter response that has been applied to the input signal also gets compressed by 1/L and imaged; thus, you can't simply multiply the two frequency responses on a common frequency axis.
Why not? You don't seem to have any problem conceptualizing doing that with DecB after scaling up and stuffing with zeros. Obviously, you are not going to actually upsample and apply a tent filter, because that would be incredibly inefficient. But you could pretend you did, just for the sake of looking at the total frequency response of the entire operation. When I get time I will look at the PDF you posted. -jim
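A sketch of the comparison being suggested here (assuming NumPy/SciPy; the 31-tap prefilter, the 169-tap DecB filter, and L = 3, M = 7 are arbitrary illustrative choices). Evaluating the prefilter's response at L*w automatically gives its compressed-and-imaged version at the upsampled rate, so the total DecA response is just that times the triangular kernel's response:

    import numpy as np
    from scipy.signal import firwin, freqz

    L, M = 3, 7
    w = np.linspace(1e-4, np.pi, 4096)            # radian frequency at the upsampled rate

    # DecA pieces: prefilter designed at the input rate (cutoff pi*L/M) and the
    # (2L-1)-tap triangular kernel applied after zero-stuffing by L.
    g = firwin(31, L / M)                                   # prefilter (input rate)
    tri = 1.0 - np.abs(np.arange(-(L - 1), L)) / L          # triangular kernel

    # Zero-stuffing by L maps G(w) to G(L*w), images included, since exp(-j*L*w*k) is 2*pi periodic.
    _, G_at_Lw = freqz(g, worN=L * w)
    _, T = freqz(tri, worN=w)
    decA = np.abs(G_at_Lw) * np.abs(T) / L                  # total cascade, unity at DC

    # DecB: one longer lowpass designed directly at the upsampled rate, cutoff pi/M.
    _, H = freqz(firwin(8 * L * M + 1, 1.0 / M), worN=w)
    decB = np.abs(H)

    # Look at the first image band (around 2*pi/L) and beyond: whatever survives
    # there aliases back into the passband after the final take-every-Mth step.
    band = w > 2 * np.pi / L - np.pi / M
    print("worst image leakage:  DecA %.1f dB   DecB %.1f dB"
          % (20 * np.log10(decA[band].max()), 20 * np.log10(decB[band].max())))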
On Jan 11, 4:03 pm, jim <"sjedgingN0sp"@m...@mwt.net> wrote:
> ...
> The second FIR doesn't need a steep transition band. All it has to do is attenuate the images. One could say it is designed to automatically adjust for that.
> ...
I think I see what you mean by "automatically adjust" -- the fact that the Bartlett filter has nulls at multiples of 2 pi / L? But the stopband attenuation isn't otherwise so great, and you might want less rolloff in the passband, among other things.

So instead of implementing the initial bandlimiting filter in DecA, which doesn't prevent the images (it just makes their bandwidth narrower), it would be more efficient to put those resources toward optimizing the filter in a DecB implementation.

I guess if your input signal doesn't have appreciable content near or above pi L / M, the images won't be as much of a problem, since they fall deeper into the nulls of the Bartlett filter.