Interpolation and decimation

Started by seb January 13, 2004
Good questions!  Yes it is possible to do something in between linear and a
much higher-order FIR filter.  Myself, I've experimented with cubic
interpolation in audio applications and it sounded better than linear
without too much extra computational effort.
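
To make that concrete, here's a minimal Python sketch of one common 4-point
cubic (a Catmull-Rom form; the code and names are mine, for illustration,
not necessarily what any particular product uses):

import numpy as np

def cubic_interp(x, t):
    """4-point Catmull-Rom cubic: value of x at fractional position t
    (in samples).  Needs 1 <= floor(t) <= len(x) - 3."""
    n = int(np.floor(t))          # sample at or before t
    f = t - n                     # fractional part, 0 <= f < 1
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    # Polynomial coefficients; the curve passes through x0 and x1 exactly.
    a = -0.5 * xm1 + 1.5 * x0 - 1.5 * x1 + 0.5 * x2
    b = xm1 - 2.5 * x0 + 2.0 * x1 - 0.5 * x2
    c = 0.5 * (x1 - xm1)
    d = x0
    return ((a * f + b) * f + c) * f + d

# Example: estimate the value midway between samples 10 and 11.
x = np.sin(2 * np.pi * 0.05 * np.arange(32))
print(cubic_interp(x, 10.5))      # close to sin(2*pi*0.05*10.5)

That's only a few more multiply-adds than linear, and far fewer than a long
FIR.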

As to the question about a cubic vs. short FIR, it seems to me that you
should be able to design a better interpolating filter than the Lagrange*
filters by taking advantage of a priori knowledge about your signal.  For
example, if you have or can create some "sampling headroom", e.g. the
signal's frequency content does not extend all the way to Nyquist, you can
bring in the cut-off point of your low pass filter and hence obtain more
stop-band rejection for the same order.  Rolling your own FIR also lets you
trade off high frequency response vs. stop-band rejection or pass-band
ripple vs. stop-band rejection.

The filter can be designed with either the windowed-sinc or an optimal
design method such as Remez, aka Parks-McClellan.  In my own experience, I
found the windowed-sinc method to be adequate and simple to implement.
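
For example, a bare-bones windowed-sinc design in Python (a Hamming window
is assumed here purely for illustration; a Kaiser window would give an
adjustable trade-off):

import numpy as np

def windowed_sinc(num_taps, cutoff):
    """Low-pass FIR prototype via the windowed-sinc method.
    'cutoff' is normalized to Nyquist (1.0 = fs/2)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = cutoff * np.sinc(cutoff * n)   # ideal low-pass impulse response
    h *= np.hamming(num_taps)          # taper the truncation
    return h / h.sum()                 # unity gain at DC

# With "sampling headroom": for a 4x interpolator the first image starts at
# 0.25 of the upsampled Nyquist.  If the signal only occupies 80% of its
# original band, the cutoff can come in to 0.8 * 0.25, widening the
# transition band, which shows up as extra rejection for the same length.
h = windowed_sinc(63, 0.8 * 0.25)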

As far as how many "phases" you need in your table, if your interpolation
resolution is limited/quantized (e.g. at most 7 points between any 2 real
samples) then you can simply limit the phases to that number.  If not, use
the biggest table you can afford and pick the nearest phase neighbor.
Linear interpolation between phases can help too, but at considerable extra
effort.  (In my application, it worked well because I had to apply the same
interpolation to multiple channels.  Hence I could do the coefficient
interpolation just once and re-use the coefs for all channels.)
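
In sketch form (Python; the table layout - one column of taps per phase,
plus a duplicate end column so p + 1 is always valid - is just this
example's assumption):

import numpy as np

TAPS, NPHASES = 8, 64   # example sizes

def interp_sample(x, n, frac, table):
    """One output between x[n] and x[n+1] at offset 0 <= frac < 1.
    'table' is TAPS x (NPHASES + 1); column p holds the taps for
    fractional offset p / NPHASES."""
    p = int(round(frac * NPHASES))     # nearest phase neighbor
    coefs = table[:, p]
    seg = x[n - TAPS // 2 + 1 : n + TAPS // 2 + 1]
    return float(np.dot(coefs, seg))

With multiple channels, fetch (or interpolate) coefs once per output
instant and repeat only the dot product for each channel.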

BTW, you can also implement Lagrange interpolation with a multi-phase
look-up table.  For cubic, that might make sense e.g. if processing power
is tight and memory isn't.

Best wishes!
-Jon

* general name for linear, parabolic, cubic, etc.

"Ronald H. Nicholson Jr." <rhn@mauve.rahul.net> wrote in message
news:bu72pn$svk$1@blue.rahul.net...
> The fastest method of interpolation is to just use the nearest
> neighbor, but this usually introduces lots of sampling jitter noise.
> Slightly better, but slower by one or two multiply-adds, is 2-point
> linear interpolation.
>
> The multirate literature seems to describe lots of variations on high
> quality, but much slower, N-tap windowed-sinc FIR filters, with one
> or two multiply-adds per tap, depending on whether one uses a large
> multi-phase table, or interpolates inside a smaller table of coefficients.
>
> Are there methods of interpolation in between these two in performance?
> e.g. if one has enough performance overhead to do more than linear
> interpolation, but less than enough for a high quality 11-tap FIR filter
> with a large cache-busting multi-phase coefficient table, what other
> methods should one try?
>
> Would a 3 or 4 point parabolic or cubic interpolation work?  Or would a
> 3 or 4 tap FIR filter with, say, a cubic approximation to the windowed
> sinc be better?  Or would using 4 or 5 taps and the nearest phase
> neighbor inside a small multi-phase coefficient table be sufficient?
>
> Other options?
>
> Thanks.
>
> --
> Ron Nicholson    rhn AT nicholson DOT com    http://www.nicholson.com/rhn/
> #include <canonical.disclaimer>      // only my own opinions, etc.

"Ronald H. Nicholson Jr." <rhn@mauve.rahul.net> wrote in message
news:bu72pn$svk$1@blue.rahul.net...
> The fastest method of interpolation is to just use the nearest
> neighbor, but this usually introduces lots of sampling jitter noise.
> Slightly better, but slower by one or two multiply-adds, is 2-point
> linear interpolation.
>
> The multirate literature seems to describe lots of variations on high
> quality, but much slower, N-tap windowed-sinc FIR filters, with one
> or two multiply-adds per tap, depending on whether one uses a large
> multi-phase table, or interpolates inside a smaller table of coefficients.
>
> Are there methods of interpolation in between these two in performance?
> e.g. if one has enough performance overhead to do more than linear
> interpolation, but less than enough for a high quality 11-tap FIR filter
> with a large cache-busting multi-phase coefficient table, what other
> methods should one try?
>
> Would a 3 or 4 point parabolic or cubic interpolation work?  Or would a
> 3 or 4 tap FIR filter with, say, a cubic approximation to the windowed
> sinc be better?  Or would using 4 or 5 taps and the nearest phase
> neighbor inside a small multi-phase coefficient table be sufficient?
>
> Other options?
Ron,

The first thing you need to decide is whether you want to increase the
sample rate by a rational factor or do arbitrary-point interpolation.  If
you're talking about FIR filters, etc., then it seems like you're talking
about regularly sampled data?  Also, are you wanting to generate a fully
interpolated sequence, which may even go so far as to be almost
"continuous", or, more simply, to generate an occasional interim data point
from a set of samples?

[1/2 1 1/2] is a typical filter to interpolate between samples and is the
same as straight-line averaging at a midpoint.  The filter sample rate is
2x the input series.  It reproduces the input samples exactly.  If you like
to think of polyphase implementation, then it's a [1/2 1/2] filter on the
data for every other output sample and a [1] filter on the data for the
remaining output samples.  But, polyphase is simply a way of looking at
things and a way of suggesting how to handle the data and multiplies, etc.

The generalization of interpolating by a factor of 2 or 3 or 4 ... in all
this is to conceptually insert *lots* of zeros between the input samples so
as to increase the output sample rate by a bunch.  Generating individual
output samples then requires a set of weights that appear to come from a
long FIR filter, but which only have to be selected as in a polyphase
output.  For example, the [1/2 1 1/2] filter for midpoint straight-line
interpolation can be generalized to something like
[0 .1 .2 .3 .4 .5 .6 .7 .8 .9 1.0 .9 .8 .7 .6 .5 .4 .3 .2 .1 0] and used
for a 10x interpolation ratio by applying [0 1 0] or [1] on top of a data
point, [.9 .1] at 1/10th of a sample interval, etc.  Again, it's just
straight-line interpolation with a different discrete interpolation factor.
The alternative for arbitrary points is to just use a straight-line
formula, but that requires that you compute the coefficients for each
arbitrary point: like [0.777 0.223].

If you use something like a truncated-sinc interpolating filter, then the
results will be much better than straight-line interpolation, and the
filter coefficients will at least be fixed.  Otherwise, methods of
polynomial interpolation require that the function's coefficients be
generated from the data samples first.  So you'd have to weigh the compute
load to do that sort of thing.  While the "filter" might be shorter,
figuring out the coefficients for each output point might be prohibitive.
And then you have what amounts to a time-varying filter - which will likely
introduce new frequencies.

I don't believe that a "goodness" measure for interpolation has been dealt
with all that much - but maybe so.  Somewhere I have a paper that shows
signal-to-noise ratio as a function of frequency for different methods.
Here are a couple of thoughts:

1) Does the interpolation method reproduce the original sample values?
Many do but some don't.  I should think that keeping them unchanged would
be a good thing.

2) Does the interpolation method result in introducing new frequencies?
That would amount to a type of harmonic distortion and is generally
undesirable in an engineering context.  It seems a good measure.  This is
the signal-to-noise measure mentioned above.

3) Perhaps related to (2), does the interpolation method result in
introducing content that is temporally far removed if a single unit sample
is interpolated?  This is equivalent to measuring the unit sample / impulse
response of the interpolator.

So, one needs to apply some kind of measures like this if "sufficiency" is
to be assessed.

Fred
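
P.S.  Here's the 10x example worked numerically, as a small Python sketch
(the data values are made up; it's only meant to illustrate the
zero-stuff-and-filter view):

import numpy as np

x = np.array([1.0, 3.0, 2.0, 4.0])                # made-up input samples
up = 10
stuffed = np.zeros(len(x) * up)
stuffed[::up] = x                                 # insert 9 zeros per sample
# The 21-tap straight-line kernel [0 .1 ... 1.0 ... .1 0] from above:
tri = np.concatenate([np.arange(11), np.arange(9, -1, -1)]) / 10.0
y = np.convolve(stuffed, tri)                     # output at 10x the rate
print(y[10:50:10])                                # [1. 3. 2. 4.] -- inputs kept
print(y[25])                                      # 2.5, midway between 3 and 2

The peak of the kernel lands on each input sample, so the original samples
are reproduced exactly - the property in thought (1).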
Jerry:

I'm not sure how to read your response, but I'm sure you didn't mean to
imply that filtration is not needed. After upsampling, you have created many
replicas of the spectrum of interest, and these must be low-pass or band-pass
filtered to isolate one replica prior to decimation, otherwise they will all
be folded on top of each other. However, reading your response verbatim, "by upsampling first,
frequencies that the final sample rate will support don't have to be
removed", I guess I would agree--you need to filter out everything but the
band of interest prior to decimation.... Which further emphasizes my point
that filtration must be applied when resampling.

Perhaps you read my post as my assuming decimation would be applied first? I
only said that filtration was needed prior to decimation. Interpolation
filters could be applied, but their usefulness is very application
dependent, and I don't know the OP's application.

Jim

"Jerry Avins" <jya@ieee.org> wrote in message
news:4005460b$0$6748$61fed72c@news.rcn.com...
> Jim Gort wrote:
>
> > seb:
> >
> > I don't know of any "new" ways to do it, but make sure that if you do
> > it the old way, and a<b, you LPF (or BPF if your frequency region of
> > interest is other than baseband) the original data prior to decimation
> > so that folded content is not present in the downsampled data.
> >
> > Jim
> >
> > "seb" <germain1_fr@yahoo.fr> wrote in message
> > news:23925133.0401131910.7f22e0a2@posting.google.com...
> >
> >> Hello,
> >>
> >> i am looking for decimation and interpolation techniques in order to,
> >> given a sampling rate fs, obtain a new sampling rate like (a/b)*fs.
> >>
> >> A way to do this is to decimate and then use linear interpolation...
> >>
> >> Are there some other ways (documents) to do this?
> >> If so, have you got some book or url?
> >>
> >> Thanks
>
> By upsampling first, frequencies that the final sample rate will support
> don't have to be removed.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
Jim Gort wrote:

> Jerry:
>
> I'm not sure how to read your response, but I'm sure you didn't mean to
> imply that filtration is not needed. After upsampling, you have created
> many replicas of the spectrum of interest, and these must be low-pass or
> band-pass filtered to isolate one replica prior to decimation, otherwise
> they will all be folded on top of each other. However, reading your
> response verbatim, "by upsampling first, frequencies that the final
> sample rate will support don't have to be removed", I guess I would
> agree--you need to filter out everything but the band of interest prior
> to decimation.... Which further emphasizes my point that filtration must
> be applied when resampling.
>
> Perhaps you read my post as my assuming decimation would be applied
> first? I only said that filtration was needed prior to decimation.
> Interpolation filters could be applied, but their usefulness is very
> application dependent, and I don't know the OP's application.
>
> Jim
Consider upsampling by 3:2. That requires decimating by two and
interpolating by three. Half the bandwidth has to be discarded if the
decimation comes first. By tripling first, all the original information can
be retained. Of course filtering is needed, but the frequency cutoff
needn't reduce the original bandwidth. When changing to a lower sample
rate, the bandwidth needs to be reduced to what that will support, but not
lower.

Jerry
--
Engineering is the art of making what you want from things you can get.
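
P.S. In code, the 3:2 case can be sketched like this (Python, using scipy's
polyphase resampler; the test tone and rates are made-up examples):

import numpy as np
from scipy import signal

fs = 8000.0                                  # input rate (example)
t = np.arange(0, 0.05, 1.0 / fs)
x = np.sin(2 * np.pi * 1000.0 * t)           # 1 kHz test tone

# Interpolate by 3 first, then decimate by 2.  At the tripled rate the
# combined anti-imaging/anti-aliasing cutoff works out to the *original*
# Nyquist, so no original bandwidth needs to be discarded.
y = signal.resample_poly(x, up=3, down=2)    # output rate = 1.5 * fs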
Jerry:

Now I see your misunderstanding--I said be careful about decimation
filtration if a<b. Your example of 3:2 is a=3 and b=2.

Jim

"Jerry Avins" <jya@ieee.org> wrote in message
news:40075a4c$0$6088$61fed72c@news.rcn.com...
> Jim Gort wrote:
>
> > Jerry:
> >
> > I'm not sure how to read your response, but I'm sure you didn't mean to
> > imply that filtration is not needed. After upsampling, you have created
> > many replicas of the spectrum of interest, and these must be low-pass
> > or band-pass filtered to isolate one replica prior to decimation,
> > otherwise they will all be folded on top of each other. However,
> > reading your response verbatim, "by upsampling first, frequencies that
> > the final sample rate will support don't have to be removed", I guess I
> > would agree--you need to filter out everything but the band of interest
> > prior to decimation.... Which further emphasizes my point that
> > filtration must be applied when resampling.
> >
> > Perhaps you read my post as my assuming decimation would be applied
> > first? I only said that filtration was needed prior to decimation.
> > Interpolation filters could be applied, but their usefulness is very
> > application dependent, and I don't know the OP's application.
> >
> > Jim
>
> Consider upsampling by 3:2. That requires decimating by two and
> interpolating by three. Half the bandwidth has to be discarded if the
> decimation comes first. By tripling first, all the original information
> can be retained. Of course filtering is needed, but the frequency cutoff
> needn't reduce the original bandwidth. When changing to a lower sample
> rate, the bandwidth needs to be reduced to what that will support, but
> not lower.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
Jim Gort wrote:

> Jerry:
>
> Now I see your misunderstanding--I said be careful about decimation
> filtration if a<b. Your example of 3:2 is a=3 and b=2.
>
> Jim
... I thought the OP needed to bring two data streams to a common rate so
they could be processed together. The way to do that without losing
bandwidth matches the lower to the higher.

Jerry
--
Engineering is the art of making what you want from things you can get.
Fred (and others),

There are a few things I didn't quite agree with in your post.  I'll comment
on some specifics below.  I think you are already aware of this, but just
for the record, let me state the following "big idea":

Interpolating with polynomials and poly-phase FIR filters are not separate,
disjointed methods.  Both calculate new samples with weighted sums of
existing samples.  Both can be treated by digital filtering theory.  Both
are linear operations.  Granted, they are typically implemented differently
and for most people they seem conceptually different, but they really
are accomplishing the same thing and can (and should) be analyzed using the
same methods.

For example, consider the Lagrange polynomial interpolators (linear,
parabolic, cubic, etc.).  You can easily implement these using a poly-phase
coefficient table and FIR routine.  Often, one computes the coefficients "on
the fly" because they are fairly simple (especially for linear), but this
need not be the case.  Take linear interpolation, for example: the
coefficients look like an upside-down V.  As you move to higher-order Lagrange
interpolation, the shapes start to resemble a windowed sinc function!
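
You can see both points with a few lines of Python (my sketch; the formula
is just the standard Lagrange basis product):

import numpy as np

def lagrange_coefs(order, d):
    """FIR weights of an order-N Lagrange interpolator for a point at
    delay d (in samples) past the first tap.  Note: no signal data in
    sight - the weights depend only on d."""
    return np.array([np.prod([(d - j) / (i - j)
                              for j in range(order + 1) if j != i])
                     for i in range(order + 1)])

print(lagrange_coefs(1, 0.3))   # [0.7 0.3]: linear's upside-down V
# A 9-phase cubic table; plot the flattened table to see the short
# windowed-sinc-like shape.
table = np.array([lagrange_coefs(3, 1.0 + f) for f in np.linspace(0, 1, 9)])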

See more comments in-line.

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:Y7ydnUt0vKrNpZrdRVn-sQ@centurytel.net...
> The first thing you need to decide is whether you want to increase the
> sample rate by a rational factor or do arbitrary-point interpolation.  If
> you're talking about FIR filters, etc., then it seems like you're talking
> about regularly sampled data?
I'm assuming regularly sampled data as well.  With DSPs, is there really
anything other than rational interpolation?  I mean, if you have a certain
number of input samples and generate a certain number of output samples,
you have a ratio.  It may be really nasty like 1343873/1343895, but it's
still rational.  Maybe there are some real-time operations that use
irrational ratios.
> Also, are you wanting to generate a fully interpolated sequence, which
> may even go so far as to be almost "continuous", or, more simply, to
> generate an occasional interim data point from a set of samples?
Good question.  This could affect the number of required "phases" in your
poly-phase filter.  And if this number were larger than is practical, it
might dictate calculating coefficients on the fly, which could in turn
dictate the interpolation method.  However, it is still possible to do
nearly continuous interpolation with poly-phase FIRs.  You can store as
many coefficient phases as practical in a table and compute the rest
through interpolation (usually linear is good enough).  Analog Devices uses
this "double interpolation" method (interpolate to get the coefficients,
then use them to interpolate the data) in their audio sample rate converter
products.
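
A sketch of that double interpolation (the TAPS x (NPHASES + 1) table
layout, with phase p holding the taps for offset p / NPHASES, is this
example's assumption; only the coefficient arithmetic matters here):

import numpy as np

def double_interp(x, n, frac, table):
    """Interpolate the coefficients from the table, then use them to
    interpolate the data."""
    taps, nphases = table.shape[0], table.shape[1] - 1
    pos = frac * nphases
    p = int(pos)
    f = pos - p
    coefs = (1.0 - f) * table[:, p] + f * table[:, p + 1]  # coef interp
    seg = x[n - taps // 2 + 1 : n + taps // 2 + 1]
    return float(np.dot(coefs, seg))                       # data interp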
> [1/2 1 1/2] is a typical filter to interpolate between samples and is the
> same as straight-line averaging at a midpoint.  The filter sample rate is
> 2x the input series.  It reproduces the input samples exactly.  If you
> like to think of polyphase implementation, then it's a [1/2 1/2] filter
> on the data for every other output sample and a [1] filter on the data
> for the remaining output samples.  But, polyphase is simply a way of
> looking at things and a way of suggesting how to handle the data and
> multiplies, etc.
Right.  Poly-phase is an implementation method that can be applied to
either FIR or Lagrange interpolation.
> The generalization of interpolating by a factor of 2 or 3 or 4 ... in all
> this is to conceptually insert *lots* of zeros between the input samples
> so as to increase the output sample rate by a bunch.  Generating
> individual output samples then requires a set of weights that appear to
> come from a long FIR filter, but which only have to be selected as in a
> polyphase output.  For example, the [1/2 1 1/2] filter for midpoint
> straight-line interpolation can be generalized to something like
> [0 .1 .2 .3 .4 .5 .6 .7 .8 .9 1.0 .9 .8 .7 .6 .5 .4 .3 .2 .1 0] and used
> for a 10x interpolation ratio by applying [0 1 0] or [1] on top of a data
> point, [.9 .1] at 1/10th of a sample interval, etc.  Again, it's just
> straight-line interpolation with a different discrete interpolation
> factor.  The alternative for arbitrary points is to just use a
> straight-line formula, but that requires that you compute the
> coefficients for each arbitrary point: like [0.777 0.223].
Right on man!
> If you use something like a truncated-sinc interpolating filter, then the
> results will be much better than straight-line interpolation, and the
> filter coefficients will at least be fixed.  Otherwise, methods of
> polynomial interpolation require that the function's coefficients be
> generated from the data samples first.  So you'd have to weigh the
> compute load to do that sort of thing.  While the "filter" might be
> shorter, figuring out the coefficients for each output point might be
> prohibitive.  And then you have what amounts to a time-varying filter -
> which will likely introduce new frequencies.
A few points of disagreement here.  As mentioned above, you can pre-compute
the filter for e.g. cubic interpolation and put it in a table if you want,
in the same manner as you described for linear interpolation.  Then there
is no additional computational load at run time.  Also, this doesn't end up
generating a time-varying filter (except in the sense that every poly-phase
filter is a time-varying filter).  No new frequencies are generated except
those that result from the aliasing of signals not perfectly suppressed by
the interpolating filter.
> I don't believe that a "goodness" measure for interpolation has been
> dealt with all that much - but maybe so.  Somewhere I have a paper that
> shows signal-to-noise ratio as a function of frequency for different
> methods.  Here are a couple of thoughts:
The best "goodness" measure I've seen is the frequency response of the interpolation filter. If you treat the Lagrange polynomials as filters as I've been advocating, you can find their frequency responses and evaluate their pass-band ripple, stop-band attenuation, side lobes, etc. just as you can with FIR filters.
> 1) Does the interpolation method reproduce the original sample values?
> Many do but some don't.  I should think that keeping them unchanged would
> be a good thing.
Usually keeping the original samples is only relevant when interpolating by
small rational amounts.  Keep in mind that if you do want to keep the
original samples, that significantly limits your choice of interpolation
filters, which may prevent you from optimizing some other figure of merit
such as frequency response.  In the audio world, the effort is almost
always made to optimize the frequency response rather than keep the
original samples.
> 2) Does the interpolation method result in introducing new frequencies?
> That would amount to a type of harmonic distortion and is generally
> undesirable in an engineering context.  It seems a good measure.  This is
> the signal-to-noise measure mentioned above.
Considering the frequency domain again, the filtering operation in sample
rate conversion needs to suppress the higher-frequency images of the
original signal.  Then, when you change the sample rate, anything not
perfectly suppressed aliases to a frequency within the new Nyquist range.
This generates new frequencies.  Hence, the frequency response of the
interpolation filter gives you all the information about the amount of
aliasing (new frequencies generated).  The sample rate conversion ratio
tells you _where_ the new frequencies land.  The filter's stop-band
rejection tells you _how much_ of the new frequencies there will be.  The
SNR of the whole process depends on the input signal and how well or poorly
it aligns with the frequency response of the interpolation filter.
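
A quick numeric illustration (the rates and tone are assumed example
numbers, not from any particular design):

fs_in, f0 = 48000.0, 1000.0     # 1 kHz tone sampled at 48 kHz
up, down = 3, 2                 # convert to 72 kHz
fs_up, fs_out = fs_in * up, fs_in * up / down

def fold(f, fs):
    """Alias frequency f into the baseband [0, fs/2] of rate fs."""
    f = f % fs
    return fs - f if f > fs / 2 else f

# Zero-stuffing creates images at k*fs_in +/- f0; whatever the filter
# fails to suppress lands here after the rate change:
images = [k * fs_in + s * f0 for k in range(1, up) for s in (1, -1)]
images = [f for f in images if f < fs_up / 2]
print([fold(f, fs_out) for f in images])   # [23000.0, 25000.0]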
> 3) Perhaps related to (2), does the interpolation method result in
> introducing content that is temporally far removed if a single unit
> sample is interpolated?  This is equivalent to measuring the unit
> sample / impulse response of the interpolator.
I'm not sure I follow this. I guess you are talking about the "length" of the filter's impulse response? Actually, the theoretically ideal interpolating filter has an infinite length impulse response. But controlling this length is sometimes important, usually because you may need to minimize the group delay of the filter for a particular application.
> So, one needs to apply some kind of measures like this if "sufficiency"
> is to be assessed.
>
> Fred
Agreed.

-Jon
"Jon Harris" <goldentully@hotmail.com> wrote in message
news:bu9esf$f1nrp$1@ID-210375.news.uni-berlin.de...
> Fred (and others),
Jon,

Great post!  I guess I haven't done enough of it to figure this one thing
out: if you're doing Lagrange interpolation, then aren't the coefficients
dependent on the data?  It sure seems so from the expressions I've been
reading.  If not, then there must be all sorts of tables (FIR filters)
already generated, no?  I've not seen them.

Ditto for polynomial interpolation....  So, I must be missing something.
Can you illuminate please?

Polyphase is just an implementation detail for a known filter - so I choose
to leave that out as much as possible.

Fred
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:UfudnYDZHMwl5pXdRVn-hA@centurytel.net...
> > "Jon Harris" <goldentully@hotmail.com> wrote in message > news:bu9esf$f1nrp$1@ID-210375.news.uni-berlin.de... > > Fred (and others), > > Jon, > > Great post! I guess I haven't done enough of it to figure this one thing > out: > > If you're doing Lagrange interpolation then aren't the coefficients > dependent on the data? it sure seems so from the expressions I've been > reading. If not, then there must be all sorts of tables (FIR filters) > already generated, no? I've not seen them.
Of course the output is dependent on the data, but the coefficients aren't.
I know it seems that way because of the way the formulas are written.
(Another thing that may confuse this is distinguishing between _polynomial_
coefficients, which would be dependent on the input data, and _filter_
coefficients, which would not be.)

Take your simple linear interpolator, which is the simplest of the Lagrange
family.  Its filter coefficients vary linearly from 0 to 1 depending on
where you are between the samples, independent of the input data.  Your
polynomial coefficients in the form y = ax + b would be dependent on the
input data, of course.  The same holds true for higher-order polynomials as
well.

The tables that you are looking for are just the impulse responses of the
interpolators.  Put in a unit impulse and calculate the outputs at whatever
fractional precision you want!  Assuming the interpolation meets the
criteria of being time-invariant and linear, the superposition principle
tells you that you can calculate the output for any input sequence based on
the impulse response (convolve input with impulse response = FIR!).
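
Here's that unit-impulse recipe sketched for the cubic (order-3) Lagrange
case (illustrative Python; the helper just evaluates the standard Lagrange
basis weights):

import numpy as np

def lagrange_coefs(order, d):
    # Data-independent Lagrange basis weights, as discussed above.
    return np.array([np.prod([(d - j) / (i - j)
                              for j in range(order + 1) if j != i])
                     for i in range(order + 1)])

def cubic_at(x, t):
    # Cubic Lagrange: 4 taps around position t; delay measured from x[n-1].
    n = int(np.floor(t))
    return float(np.dot(lagrange_coefs(3, t - n + 1), x[n - 1 : n + 3]))

# Run a unit impulse through the interpolator on a fine grid:
impulse = np.zeros(8)
impulse[4] = 1.0
grid = np.arange(2.0, 6.0, 1.0 / 16)        # 16 phases across 4 samples
resp = np.array([cubic_at(impulse, t) for t in grid])
# 'resp' is the interpolator's impulse response - exactly the poly-phase
# FIR table, ready to convolve with any input.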
> Ditto for polynomial interpolation....  So, I must be missing something.
> Can you illuminate please?
To be honest, I've only studied the Lagrange. There may well indeed be some other interpolation scheme that doesn't lend itself to an FIR-style implementation. But it would seem that any *time-invariant linear* interpolation could be pre-computed, "table-ized", and implemented with a poly-phase FIR. The reasons it is not commonly implemented this way are probably practical ones rather than due to any limitation in the theory.
> Polyphase is just an implementation detail for a known filter - so I
> choose to leave that out as much as possible.
Right.
> Fred
Jon
By the way, it just occurred to me that I've been making the (unstated)
assumption that both the input and output sample rates are constant/uniform.
Some of what I've written may not apply to the non-uniform data that you may
encounter in some interpolation problems.  (I tend to think in terms of
audio most of the time since that's what I've worked on.)

"Jon Harris" <goldentully@hotmail.com> wrote in message
news:bu9vtl$fmn03$1@ID-210375.news.uni-berlin.de...
> "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message > news:UfudnYDZHMwl5pXdRVn-hA@centurytel.net... > > > > "Jon Harris" <goldentully@hotmail.com> wrote in message > > news:bu9esf$f1nrp$1@ID-210375.news.uni-berlin.de... > > > Fred (and others), > > > > Jon, > > > > Great post! I guess I haven't done enough of it to figure this one
thing
> > out: > > > > If you're doing Lagrange interpolation then aren't the coefficients > > dependent on the data? it sure seems so from the expressions I've been > > reading. If not, then there must be all sorts of tables (FIR filters) > > already generated, no? I've not seen them. > > Of course the output is dependent on the data, but the coefficients
aren't.
> I know it seems that way because of the way the formulas are written. > (Another thing that may confuse this is distinguishing between
_polynomial_
> coefficients, which would be dependent on the input data and _filter_ > coeffieicents which would not be.) > > Take your simple linear interpolator, which is the simplest of the
Lagrange
> family. It's filter coefficeints vary linearly from 0 to 1 depending on > where you are in the between the samples, indpendent of the input data. > Your polynomial coffieneints in the form y = ax + b would be dependent on > the input data of course. The same holds true for higher order
polynomials
> as well. > > The tables that you are looking for are just the impulse responses of the > interpolators. Put in a unit impulse and calculate the outputs at
whatever
> fractional-precision you want! Assuming the interpolation meets the > criteria of being time-invariant and linear, the superposition principle > tells you that you can calculate the output for any input sequence based
on
> the impulse response (convolve input with impulse response = FIR!). > > > Ditto for polynomial interpolation.... So, I must be missing something. > > Can you illuminate please? > > To be honest, I've only studied the Lagrange. There may well indeed be
some
> other interpolation scheme that doesn't lend itself to an FIR-style > implementation. But it would seem that any *time-invariant linear* > interpolation could be pre-computed, "table-ized", and implemented with a > poly-phase FIR. The reasons it is not commonly implemented this way are > probably practical ones rather than due to any limitation in the theory. > > > Polyphase is just an implementation detail for a known filter - so I > choose > > to leave that out as much as possible. > > Right. > > > Fred > > Jon > >