
Interpolation and decimation

Started by seb January 13, 2004
"Jon Harris" <goldentully@hotmail.com> wrote in message
news:bu9esf$f1nrp$1@ID-210375.news.uni-berlin.de...
> For example, consider the Lagrange polynomial interpolators (linear,
> parabolic, cubic, etc.). You can easily implement these using a
> poly-phase coefficient table and FIR routine. Often, one computes the
> coefficients "on the fly" because they are fairly simple (especially for
> linear), but this need not be the case. Take linear interpolation, for
> example: the coefficients look like an upside-down V. As you move to
> higher-order Lagrange interpolation, the shapes start to resemble a
> windowed sinc function!

Hello Jon and others,

This is what Whittaker noticed. As the number of evenly spaced points in
Lagrangian interpolation increases toward infinity, the interpolation
kernel becomes the sinc function, and this function is bandlimited! This
was the first discovery needed in WKS (Whittaker-Kotelnikov-Shannon)
interpolation theory.

See
http://imagescience.bigr.nl/meijering/publications/download/pieee2002.pdf
for a great chronology of interpolation.

Clay
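Clay's limit is easy to check numerically: for 2M+1 evenly spaced nodes,
the center Lagrange basis polynomial is the Euler product of the terms
(1 - x^2/m^2), which converges to sin(pi*x)/(pi*x). A minimal sketch
(Python/NumPy; the orders and evaluation grid are arbitrary choices):

import numpy as np

def lagrange_kernel(x, num_points):
    """Lagrange basis polynomial for the center node of `num_points`
    evenly spaced integer nodes, evaluated at offset x from that node."""
    half = num_points // 2
    val = np.ones_like(x, dtype=float)
    for m in range(-half, half + 1):
        if m != 0:
            val *= (x - m) / (0.0 - m)   # product over the other nodes
    return val

x = np.linspace(-2.0, 2.0, 9)
for num_points in (3, 7, 15, 31):
    err = np.max(np.abs(lagrange_kernel(x, num_points) - np.sinc(x)))
    print(f"{num_points:2d} points: max |kernel - sinc| = {err:.4f}")

The printed error shrinks as the number of points grows, which is exactly
the convergence toward the bandlimited sinc kernel described above.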
"Jon Harris" <goldentully@hotmail.com> wrote in message
news:bua0fr$fq50p$1@ID-210375.news.uni-berlin.de...
> By the way, it just occurred to me that I've been making the (unstated)
> assumption that both the input and output sample rates are
> constant/uniform. Some of what I've written may not apply to the
> non-uniform data that you may encounter in some interpolation problems.
> (I tend to think in terms of audio most of the time since that's what
> I've worked on.)

Jon,

That's OK for this discussion.

Yes, I understand all you said about FIRs and time invariance. I just
haven't figured out how to get there from polynomial interpolation yet.

So, let's see: y = ax + b is the equation for a straight line. So, how are
we going to twist that into a set of coefficients for linear
interpolation? (I know what the result should be.)

Ah!
1) Choose the number of interim sample points to be generated.
2) Compute the sample values at those points and find that the data points
are weighted by constants for each point - with different coefficients for
each interim sample point -> which is where the polyphase idea could come
from.
3) If we align the sets of coefficients so their pairs overlap the
original samples, then we get a FIR filter definition - and, of course,
it's the familiar triangle (sketched below).

So the weights aren't data dependent. OK. I just hadn't done the
derivation before, so I hadn't made a hard connection - as if the two
things were in different universes! It's funny how AHA!s creep in now and
then, isn't it?

Presumably this is also applicable to higher orders of interpolation,
which really only means that there are more sample data points being used,
doesn't it?

I guess this means that the Parks-McClellan program, or any other based on
the Remez exchange algorithm, could be run using a *huge* lookup table for
the barycentric Lagrange interpolation step? In concept, that is. Working
backwards, does it mean that a partial table might be generated for the
final points of the extrema - because they are usually nearly equispaced
and there are guarantees about extrema at the end points in many
situations? Of course, I can't imagine that it would ever be practical
because the number of combinations of point sets is just too huge. But
it's an interesting thought.

Thanks,
Fred
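Fred's three steps can be run in a few lines. A minimal sketch
(Python/NumPy; the choice of L is arbitrary) that builds the per-phase
coefficient pairs and shows they interleave into the familiar triangle:

import numpy as np

L = 8                                    # interim points per input interval

# Step 2: interim point p/L weights its two neighbors by the constants
# (1 - p/L, p/L): one fixed, data-independent coefficient pair per phase.
p = np.arange(L)
phases = np.column_stack([1.0 - p / L, p / L])

# Step 3: lining the pairs up at steps of 1/L gives the sampled triangle
# h(t) = 1 - |t| for |t| < 1, i.e. the upsample-by-L interpolation FIR.
t = np.arange(-(L - 1), L) / L           # tap instants in input-sample units
fir = 1.0 - np.abs(t)

# The standard polyphase split of that FIR recovers the coefficient pairs
# (same values; the tap order differs by the usual convolution reversal).
padded = np.concatenate([[0.0], fir])    # pad to length 2L for an even split
for k in range(L):
    print(k, padded[k::L], phases[k])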
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:Y7ydnUt0vKrNpZrdRVn-sQ@centurytel.net...
> Otherwise, methods of polynomial interpolation require that the
> function's coefficients be generated from the data samples first. So,
> you'd have to weigh the compute load to do that sort of thing. While the
> "filter" might be shorter, figuring out the coefficients for each output
> point might be prohibitive. And, then you have what amounts to a
> time-varying filter - which will likely introduce new frequencies.

Thanks to Jon Harris for correcting both of these assertions. It seems
that such interpolation methods aren't data dependent and, therefore, not
time-varying. So, the compute load seems to be independent of the type of
interpolation method chosen and only dependent on the length of the
interpolating filter.

Fred
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:B6mdnSBLdcVo8JTdRVn-uw@centurytel.net...
Thanks to Jon Harris for correcting me. It seems that such interpolation
methods aren't data dependent and, therefore, not time-varying. So, the
compute load seems to be independent of the type of interpolation method
chosen and only dependent on the length of the interpolating filter.

This makes one want to consider using SPT (sums of powers of two)
coefficients. That's something I've done in FPGAs in the past. No
multiplies. I know that multiplies or MACs are now so fast on some DSPs
(single cycle) that there's no multiply penalty. How about recent FPGAs?

So, since I'm onto something new (for me), I have a couple more questions:

1) What is the relationship between the order of the interpolation - and
by this I mean the number of incoming data points that are involved in the
calculation:

- and the interpolation method? i.e. polynomial, Lagrange, etc. Do
different coefficients result? I would think so.

- and the basis set for the interpolation? Cosines, sincs, powers of the
ordinate (polynomial), etc.

I guess these questions overlap.

Fred
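For a flavor of the SPT idea: each tap is approximated by a few signed
powers of two, so a multiply becomes a handful of shifts and adds. A
minimal sketch (Python; this greedy 3-term quantizer is only an
illustration, not the canonic-signed-digit recoding an FPGA flow would
actually use):

import numpy as np

def to_spt(c, terms=3):
    """Greedily approximate c as a sum of `terms` signed powers of two."""
    approx, parts = 0.0, []
    for _ in range(terms):
        r = c - approx
        if r == 0.0:
            break
        e = int(np.round(np.log2(abs(r))))   # nearest power of two
        term = float(np.sign(r)) * 2.0 ** e
        parts.append(term)
        approx += term
    return approx, parts

for c in (0.6366, 0.2122, -0.0424):          # arbitrary tap values
    q, parts = to_spt(c)
    print(f"{c:+.4f} ~ {q:+.6f} via {parts}")

The price is a small coefficient-quantization error, which shows up as a
deviation from the ideal frequency response; whether it matters depends on
how much stop-band rejection you need.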
In article CoOdnfMsAfAw75TdRVn-ig@centurytel.net, Fred Marshall at
fmarshallx@remove_the_x.acm.org wrote on 01/17/2004 12:36:
...
> So, since I'm onto something new (for me), I have a couple more
> questions:
>
> 1) What is the relationship between the order of the interpolation - and
> by this I mean the number of incoming data points that are involved in
> the calculation:
>
> - and the interpolation method? i.e. polynomial, Lagrange, etc. Do
> different coefficients result? I would think so.
and you would be correct. (BTW, "polynomial" can mean more than just Lagrange, e.g. Hermite, B-spline.)
> - and the basis set for the interpolation? Cosines, sincs, powers of the
> ordinate (polynomial), etc.
hmmm. i dunno exactly what you mean. a sinc is a sinc, a fourier series has cosines (and sines) as a basis, polynomials are power series.
> I guess these questions overlap.
i'm joining in on this late, but i thought i might expand a little on Jon
Harris's responses.

In article bu7agd$ed2s0$1@ID-210375.news.uni-berlin.de, Jon Harris at
goldentully@hotmail.com wrote on 01/15/2004 19:22:
> As to the question about a cubic vs. short FIR, it seems to me that you
> should be able to design a better interpolating filter than the
> Lagrange* filters by taking advantage of a priori knowledge about your
> signal. For example, if you have or can create some "sampling headroom",
> e.g. the signal's frequency content does not extend all the way to
> Nyquist, you can bring in the cut-off point of your low-pass filter and
> hence obtain more stop-band rejection for the same order.
moreover, in this case, there are better polynomials than Lagrange, such as B-spline which puts some killer notches at the multiples of Fs, so if your signal is mostly low-pass, the images get killed even better.
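the notch depth is easy to see from the kernels' frequency responses: the
linear-interpolation (triangle) kernel goes as sinc^2(f/Fs) while the
cubic B-spline kernel goes as sinc^4(f/Fs), so near the multiples of Fs,
where the images of a low-pass signal sit, the B-spline is twice as deep
in dB. a quick numerical look (Python/NumPy; the sample frequencies are
arbitrary):

import numpy as np

f = np.array([0.85, 0.90, 0.95, 0.99])       # frequency in units of Fs
lin_db = 20 * np.log10(np.sinc(f) ** 2)      # triangle kernel: sinc^2
bsp_db = 20 * np.log10(np.sinc(f) ** 4)      # cubic B-spline: sinc^4
for fi, a, b in zip(f, lin_db, bsp_db):
    print(f"f = {fi:4.2f} Fs: linear {a:6.1f} dB, cubic B-spline {b:6.1f} dB")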
> Rolling your own FIR also lets you trade off high-frequency response vs.
> stop-band rejection, or pass-band ripple vs. stop-band rejection.
>
> The filter can be designed with either the windowed-sinc or an optimal
> design method such as Remez, aka Parks-McClellan. In my own experience,
> I found the windowed-sinc method to be adequate and simple to implement.
a windowed-sinc has the advantage that when the interpolated sample
instant lands exactly on an original sample instant, the interpolated
sample is precisely the original. but, using MATLAB, you can usually see
that filters designed with Parks-McC (remez) or Least Squares (firls)
turn out better than windowed sincs for the same length.
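the same experiment is easy to run outside MATLAB too. a sketch
(Python/SciPy; the length and band edges are arbitrary) comparing the
worst-case stop-band rejection of a windowed sinc against a
Parks-McClellan design of the same length:

import numpy as np
from scipy.signal import firwin, freqz, remez

numtaps = 63
h_win = firwin(numtaps, 0.25, window="hamming", fs=1.0)     # windowed sinc
h_rem = remez(numtaps, [0.0, 0.2, 0.3, 0.5], [1.0, 0.0], fs=1.0)

for name, h in (("windowed sinc", h_win), ("Parks-McC", h_rem)):
    w, H = freqz(h, worN=8192)
    f = w / (2 * np.pi)                     # frequency in units of Fs
    worst = np.abs(H[f >= 0.3]).max()       # stop band starts at 0.3*Fs
    print(f"{name:13s}: worst stop-band level {20*np.log10(worst):6.1f} dB")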
> As far as how many "phases" you need in your table, if your
> interpolation resolution is limited/quantized (e.g. at most 7 points
> between any 2 real samples) then you can simply limit the phases to that
> number. If not, use the biggest table you can afford and pick the
> nearest phase neighbor. Linear interpolation between phases can help
> too, but at considerable extra effort.
well, it doubles the FIR computation (you need to compute it for two different phases) and then you have to do the linear interpolation. it's extra effort that i would normally recommend highly.
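in code, the per-output-sample work looks something like this (a sketch in
Python/NumPy; the extra guard row in the table is one common way of
avoiding the wrap to the next input sample when p + 1 runs off the end):

import numpy as np

def interp_sample(x, k, mu, table):
    """One output at position k + mu, 0 <= mu < 1.  `table` holds
    num_phases + 1 rows of FIR taps; the last row repeats phase 0
    advanced by one input sample so p + 1 never wraps."""
    num_phases = table.shape[0] - 1
    pos = mu * num_phases
    p = int(pos)
    frac = pos - p                       # distance between adjacent phases
    seg = x[k : k + table.shape[1]]
    y0 = np.dot(table[p], seg)           # the FIR, evaluated twice...
    y1 = np.dot(table[p + 1], seg)
    return y0 + frac * (y1 - y0)         # ...plus one linear interpolation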
> BTW, you can also implement Lagrange interpolation with a multi-phase
> look-up table. For cubic, that might make sense, e.g. if processing
> power is tight and memory isn't.
but, IMO, the purpose of any polynomial interpolation (of which linear,
Lagrange, Hermite, and B-spline are examples) is so you can implement
*any* phase without having to have a table with infinite entries to look
up.

there is an AES paper that Duane Wise and i did a while back (and that
Olli N picked up on) which attempts to define and generalize how to
compare different polynomial interpolations (of a given order) to each
other and also to the traditional P-McC or LS FIR methods. i can send
anyone a PDF copy of it, if you want. you can find Olli's paper at:

http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf

the idea is that for all interpolation methods, you can model them as the
sampled data (a string of weighted impulses, with full-sized images at
every multiple of Fs) being convolved with an LPF impulse response that
can be determined from the method of interpolation. e.g. linear
interpolation is modeled by passing your sampled data through a
hypothetical filter with a triangular pulse as the impulse response
(resulting in a sinc^2 as the frequency response and a sinc^4 as the power
spectrum). then, given some simple assumptions about the nature of the
input signal (like Jon mentions), such as a mostly empty spectrum in the
top octave (as if it were oversampled by a factor of two), you can compute
how big the images are after the filtering and, in the worst case, assume
that all of those images will get folded back into the baseband when the
hypothetical continuous-time output is resampled at the new sampling
instances. that's what Duane and i did in our paper.

anyway, what i figgered out was that, to get a 120 dB S/N using linear
interpolation between the polyphases, you needed to upsample by a factor
of about 512, and if using *no* interpolation (sometimes called
"drop-sample interpolation"), to get the same S/N, you need to upsample by
a factor of 512K, requiring a huge lookup table.

another look at the resampling (or interpolation) issue can be found in an
earlier post of mine:

http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com

r b-j
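The 512 and 512K figures can be reproduced from the model above: at an
oversampling factor R, drop-sample interpolation behaves like a rectangle
kernel (response sinc(f/(R*Fs))) and linear interpolation between phases
like a triangle (response sinc^2), and the worst image of content at
nu*Fs lands at (R - nu)*Fs. A quick check (Python/NumPy; nu = 0.25
assumes the mostly empty top octave mentioned above):

import numpy as np

nu = 0.25                                    # highest signal content, in Fs
for R in (512, 512 * 1024):
    drop_db = 20 * np.log10(np.abs(np.sinc(R - nu)))   # rectangle: sinc
    lin_db = 20 * np.log10(np.sinc(R - nu) ** 2)       # triangle: sinc^2
    print(f"R = {R:7d}: drop-sample {drop_db:7.1f} dB, "
          f"linear-between-phases {lin_db:7.1f} dB")

With these numbers, linear interpolation between phases clears 120 dB at
R = 512, while drop-sample needs R on the order of 512K, as stated above.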
Clay S. Turner <CSTurner@WSE.Biz> wrote in message
news:AX2Ob.1108$D%.1014@bignews1.bellsouth.net...
> > "Jon Harris" <goldentully@hotmail.com> wrote in message > news:bu9esf$f1nrp$1@ID-210375.news.uni-berlin.de... > > > > For example, consider the Lagrange polynomial interpolators (linear, > > parabolic, cubic, etc.). You can easily implement these using a
poly-phase
> > coefficient table and FIR routine. Often, one computes the coefficients
"on
> > the fly" because they are fairly simple (especially for linear), but
this
> > need not be the case. Take linear interpolation, for example, the > > coefficients look like an upside V. As you move to higher order
Lagrange
> > interpolation, the shapes start to resemble a windowed sinc function! > > Hello Jon and others, > This is what Whittaker noticed. As the number of evenly spaced points in > LaGrangian interpolation increases towards infinity, the interpolation > kernel becomes the sinc function and this function is bandlimited! This
was
> the 1st discovery needed in WKS (Whittaker-Kotelnikov-Shannon)
interpolation
> theory. > > See > http://imagescience.bigr.nl/meijering/publications/download/pieee2002.pdf > for a great chronology of interpolation.
Wow, to think I discovered the same thing, just 90 years too late to be famous! :-)
Fred Marshall <fmarshallx@remove_the_x.acm.org> wrote in message
news:B6mdnSBLdcVo8JTdRVn-uw@centurytel.net...
> > "Jon Harris" <goldentully@hotmail.com> wrote in message > news:bua0fr$fq50p$1@ID-210375.news.uni-berlin.de... > > Yes, I understand all you said about FIRs and time invariance. I just > haven't figured out how to get there from polynomial interpolation yet. > > So, let's see: y=ax+b is the equation for a straight line. So, how are we > going to twist that into a set of coefficients for linear interpolation?
(I
> know what the result should be). > Ah! > 1) Choose the number of interim sample points to be generated > 2) Compute the sample values at those points and find that data points are > weighted by constants for each point - with different coefficients for
each
> interim sample point -> which is where the polyphase idea could come from. > 3) If we align the sets of coefficients up so their pairs overlap the > original samples, then we get a FIR filter definition - and, of course,
it's
> the familiar triangle. > So the weights aren't data dependent. OK. I just hadn't done the > derivation before .. so hadn't made a hard connection as if the two things > were in different universes! It's funny how AHA!s creep in now and then > isn't it?
I had that same AHA! a few years back when I was studying it. The two
methods do seem very different at first because of the way they are
derived and presented. Glad you got it now too!

You've spelled out in more detail what I was trying to communicate when I
wrote about finding the impulse response of an interpolator: "Put in a
unit impulse and calculate the outputs at whatever fractional precision
you want".
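That recipe can be taken literally. A sketch (Python/NumPy; the linear
interpolator and quarter-sample grid are arbitrary choices) that probes an
interpolator with a unit impulse and reads off its impulse response:

import numpy as np

def linear_interp(x, t):
    """Linearly interpolated value of x at a (fractional) time t."""
    k = int(np.floor(t))
    mu = t - k
    return (1.0 - mu) * x[k] + mu * x[k + 1]

x = np.zeros(9)
x[4] = 1.0                                  # unit impulse at n = 4
ts = np.arange(2.0, 6.0, 0.25)              # quarter-sample precision
h = [linear_interp(x, t) for t in ts]
print(np.round(h, 3))                       # the familiar triangle at t = 4

Swap in any other LTI interpolation routine for linear_interp and the same
probe prints that interpolator's (polyphase) impulse response.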
> Presumably this is also applicable to higher orders of interpolation,
> which really only means that there are more sample data points being
> used, doesn't it?
Works for any order and any type of interpolation scheme that is linear
time-invariant. It's just a bit harder to do the math for the more complex
ones (but that's what computers are for, right? :-)

-Jon
robert bristow-johnson <rbj@surfglobal.net> wrote in message
news:BC2F1ABA.7CB2%rbj@surfglobal.net...
> In article CoOdnfMsAfAw75TdRVn-ig@centurytel.net, Fred Marshall at
> fmarshallx@remove_the_x.acm.org wrote on 01/17/2004 12:36:
> ...
> i'm joining in on this late, but i thought i might expand a little on
> Jon Harris's responses.
>
> In article bu7agd$ed2s0$1@ID-210375.news.uni-berlin.de, Jon Harris at
> goldentully@hotmail.com wrote on 01/15/2004 19:22:
>
> > As to the question about a cubic vs. short FIR, it seems to me that
> > you should be able to design a better interpolating filter than the
> > Lagrange* filters by taking advantage of a priori knowledge about your
> > signal. For example, if you have or can create some "sampling
> > headroom", e.g. the signal's frequency content does not extend all the
> > way to Nyquist, you can bring in the cut-off point of your low-pass
> > filter and hence obtain more stop-band rejection for the same order.
>
> moreover, in this case, there are better polynomials than Lagrange, such
> as B-spline, which puts some killer notches at the multiples of Fs, so
> if your signal is mostly low-pass, the images get killed even better.
That's good to know. When I first looked at the frequency response of a linear interpolator I was surprised by how bad it looked. The sidelobe behavior is pretty poor for audio stuff.
> > Rolling your own FIR also lets you trade off high-frequency response
> > vs. stop-band rejection, or pass-band ripple vs. stop-band rejection.
> >
> > The filter can be designed with either the windowed-sinc or an optimal
> > design method such as Remez, aka Parks-McClellan. In my own
> > experience, I found the windowed-sinc method to be adequate and simple
> > to implement.
>
> a windowed-sinc has the advantage that when the interpolated sample
> instant lands exactly on an original sample instant, the interpolated
> sample is precisely the original.
Well, at least it can have that advantage. We had this discussion a while
back:
http://groups.google.com/groups?selm=B95B6860.5E87%25robert%40wavemechanics.com&oe=UTF-8&output=gplain
> but, using MATLAB, you can usually see that filters designed with
> Parks-McC (remez) or Least Squares (firls) turn out better than windowed
> sincs for the same length.
True, though after playing with Kaiser windows, you can usually get something pretty close. And it's a lot simpler than trying to run remez on a _huge_ filter that's sometimes required when you need to generate lots of phases.
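For reference, the Kaiser route is only a couple of lines. A sketch
(Python/SciPy; the 80 dB ripple and 0.1*Fs transition width are arbitrary
targets): kaiserord picks the length and window shape directly from the
spec, with no Remez iteration over a huge grid.

import numpy as np
from scipy.signal import firwin, kaiserord

ripple_db = 80.0                       # desired stop-band attenuation
width = 0.1                            # transition width in units of Fs
numtaps, beta = kaiserord(ripple_db, width / 0.5)   # width rel. to Nyquist
h = firwin(numtaps, 0.25, window=("kaiser", beta), fs=1.0)
print(numtaps, beta)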
> > As far as how many "phases" you need in your table, if your
> > interpolation resolution is limited/quantized (e.g. at most 7 points
> > between any 2 real samples) then you can simply limit the phases to
> > that number. If not, use the biggest table you can afford and pick the
> > nearest phase neighbor. Linear interpolation between phases can help
> > too, but at considerable extra effort.
>
> well, it doubles the FIR computation (you need to compute it for two
> different phases) and then you have to do the linear interpolation.
> it's extra effort that i would normally recommend highly.
Sure. In one application, I had to do the same interpolation on lots of
channels. In that case, the overhead was small because I could do the
coefficient linear interpolation just once and then use the results for
the rest of the channels.
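That reordering is worth spelling out. A sketch (Python/NumPy; the
function name and shapes are illustrative): interpolate the coefficient
set once per output position, then apply it across every channel.

import numpy as np

def resample_block(chans, k, mu, table):
    """chans: (num_channels, length) inputs; table: (num_phases + 1, taps)
    polyphase rows, with a guard row so p + 1 never wraps."""
    num_phases = table.shape[0] - 1
    pos = mu * num_phases
    p = int(pos)
    frac = pos - p
    h = table[p] + frac * (table[p + 1] - table[p])  # once per output sample
    return chans[:, k : k + table.shape[1]] @ h      # reused on all channels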
> > BTW, you can also implement Lagrange interpolation with a multi-phase
> > look-up table. For cubic, that might make sense, e.g. if processing
> > power is tight and memory isn't.
>
> but, IMO, the purpose of any polynomial interpolation (of which linear,
> Lagrange, Hermite, and B-spline are examples) is so you can implement
> *any* phase without having to have a table with infinite entries to look
> up.
That's true. I was just correcting the assertion that polynomial interpolation _can't_ be done with a look-up table.
> there is an AES paper that Duane Wise and i did a while back (and that
> Olli N picked up on) which attempts to define and generalize how to
> compare different polynomial interpolations (of a given order) to each
> other and also to the traditional P-McC or LS FIR methods. i can send
> anyone a PDF copy of it, if you want. you can find Olli's paper at:
>
> http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf
(requested a copy in a direct e-mail to r b-j)
> the idea is that for all interpolation methods, you can model them as
> the sampled data (a string of weighted impulses, with full-sized images
> at every multiple of Fs) being convolved with an LPF impulse response
> that can be determined from the method of interpolation. e.g. linear
> interpolation is modeled by passing your sampled data through a
> hypothetical filter with a triangular pulse as the impulse response
> (resulting in a sinc^2 as the frequency response and a sinc^4 as the
> power spectrum). then, given some simple assumptions about the nature
> of the input signal (like Jon mentions), such as a mostly empty spectrum
> in the top octave (as if it were oversampled by a factor of two), you
> can compute how big the images are after the filtering and, in the worst
> case, assume that all of those images will get folded back into the
> baseband when the hypothetical continuous-time output is resampled at
> the new sampling instances. that's what Duane and i did in our paper.
Cool. That goes along with my idea that you can learn pretty much everything you need to know about the quality of an interpolator by looking at its frequency response.
> anyway, what i figgered out was that, to get a 120 dB S/N using linear
> interpolation between the polyphases, you needed to upsample by a factor
> of about 512, and if using *no* interpolation (sometimes called
> "drop-sample interpolation"), to get the same S/N, you need to upsample
> by a factor of 512K, requiring a huge lookup table.
Sometimes you get lucky. For example, in one application I worked on, the
interpolation was for varispeed playback. The varispeed rate was quantized
such that it limited the number of possible sub-sample intervals to an
amount that allowed fitting all the phases I could possibly need in the
available memory. I got perfect accuracy with no coefficient interpolation
required. Life was good!

-- Jon Harris
"robert bristow-johnson" <rbj@surfglobal.net> wrote in message
news:BC2F1ABA.7CB2%rbj@surfglobal.net...
> In article CoOdnfMsAfAw75TdRVn-ig@centurytel.net, Fred Marshall at
> fmarshallx@remove_the_x.acm.org wrote on 01/17/2004 12:36:
> ...
>
> there is an AES paper that Duane Wise and i did a while back (and that
> Olli N picked up on) which attempts to define and generalize how to
> compare different polynomial interpolations (of a given order) to each
> other and also to the traditional P-McC or LS FIR methods. i can send
> anyone a PDF copy of it, if you want.
r b-j,

Yes, would you please send the PDF to me?

Fred

> http://groups.google.com/groups?selm=B8D78660.3B54%25robert%40wavemechanics.com
>
> r b-j

The link didn't work..... Fred