DSPRelated.com
Forums

interpolation accuracy, oversampling and fractional interpolation

Started by renaudin October 9, 2006
Hi all,

Interpolation of a sampled signal x(n) to generate an up-sampled signal  
y(n) can be represented mathematically as:

y(n) = x(n/L), where L is the interpolation factor.

Regardless of the type of interpolation, if we increase the value of 'L',
will it increase the accuracy of the interpolation process?

What about fractional interpolation/decimation factors? How does one deal
with them?

Thanks in advance for the discussion and comments.

Renaudin  

renaudin wrote:
> Regardless of the type of interpolation, if we increase the value of 'L',
> will it increase the accuracy of the interpolation process?
Increasing the value of L will not increase the accuracy of the interpolation process; it simply increases the number of samples to compute. The accuracy of the interpolation is usually improved by the method (spline, cubic, parabolic, linear, Lagrange polynomial) and by the number of input points used to compute each interpolated value.

The interpolation theorem for discrete signals indicates that the analog signal can be perfectly recovered (and hence any sample thereof at a time t) if one uses all sample points from -inf to +inf (assuming the sample rate was high enough to avoid aliasing, etc.). In practice, one uses a window around the point of interpolation out to some range. Using more points improves the interpolation but also adds to the computational load. This is the engineering trade-off you will need to make.

I'm not sure what you mean by how to deal with the fractional factors. By interpolating a discrete signal at an arbitrary time t, you will, by definition, have to deal with a fractional position between two sample points.

-V
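-V's point -- that accuracy comes from the interpolation kernel and the number of neighboring points, not from L -- can be illustrated with a toy comparison (a sketch in Python; the test signal, tap counts, and function names are invented for illustration, not code from the thread):

```python
import math

def windowed_sinc_interp(x, t, half_width=8):
    """Interpolate uniformly sampled x at fractional time t using a
    Hann-windowed sinc kernel over 2*half_width neighboring samples."""
    n0 = int(math.floor(t))
    acc = 0.0
    for k in range(n0 - half_width + 1, n0 + half_width + 1):
        if 0 <= k < len(x):
            u = t - k
            sinc = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
            win = 0.5 + 0.5 * math.cos(math.pi * u / half_width)  # Hann taper
            acc += x[k] * sinc * win
    return acc

f = 0.05  # tone frequency in cycles/sample, well below Nyquist
x = [math.sin(2 * math.pi * f * n) for n in range(200)]

lin_err = sinc_err = 0.0
for n in range(50, 150):
    t = n + 0.5
    true_v = math.sin(2 * math.pi * f * t)
    lin = 0.5 * (x[n] + x[n + 1])      # 2-point linear interpolation
    ws = windowed_sinc_interp(x, t)    # 16-point windowed-sinc interpolation
    lin_err = max(lin_err, abs(lin - true_v))
    sinc_err = max(sinc_err, abs(ws - true_v))
```

Both methods produce the same number of output samples, but the 16-point windowed sinc lands far closer to the true midpoint values than 2-point linear interpolation does; the win comes from the kernel and the neighbor count, not from any upsampling factor.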
> The accuracy of the interpolation is usually improved with the method
> (spline, cubic, parabolic, linear, lagrange polynomial) and with the
> number of input points used to compute an interpolated value. The
> interpolation theorem for discrete signals indicates that the analog
> signal can be perfectly recovered ... if one uses all sample points
> from -inf to inf.
As long as we are discussing interpolation...

I have always been confused about the relationship between this kind of interpolation (linear, cubic, spline, etc.) and the kind used in DSP audio work for upsampling. In audio upsampling we just insert zero-valued samples and then pass the result through a low-pass filter to remove the image frequencies.

What are the comparative merits of linear or cubic-spline interpolation vs. the insert-zero-samples-and-filter type of interpolation used for audio?

It seems to me that in the case of the audio method, if the low-pass filter were "perfect" in rejecting the image frequencies, the insert-and-filter method provides "perfect" results. By comparison, the linear, cubic, and spline methods are always imperfect.

Why not always use the audio method? Is it computationally expensive?

thanks

Mark
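Mark's zero-stuff-and-filter recipe can be sketched directly (illustrative Python; the filter length and window choice are arbitrary assumptions, not a recommended design):

```python
import math

def upsample_zero_stuff(x, L, taps=81):
    """Upsample by integer factor L: insert L-1 zeros after each sample,
    then low-pass filter at the original Nyquist rate to remove images."""
    z = [0.0] * (len(x) * L)          # zero-stuffed signal
    for n, v in enumerate(x):
        z[n * L] = v
    # Hann-windowed sinc LPF with cutoff pi/L; note h[0] = 1 and
    # h[jL] = 0 for j != 0, so the original samples pass through exactly
    m = taps // 2
    h = [0.0] * taps
    for k in range(-m, m + 1):
        u = k / L
        sinc = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
        win = 0.5 + 0.5 * math.cos(math.pi * k / (m + 1))
        h[k + m] = sinc * win
    # zero-phase direct convolution
    y = [0.0] * len(z)
    for n in range(len(z)):
        acc = 0.0
        for k in range(taps):
            i = n + m - k
            if 0 <= i < len(z):
                acc += h[k] * z[i]
        y[n] = acc
    return y

L, f = 4, 0.03
x = [math.sin(2 * math.pi * f * n) for n in range(64)]
y = upsample_zero_stuff(x, L)
# away from the edges, y should track the underlying sine at the new rate
err = max(abs(y[n] - math.sin(2 * math.pi * f * n / L))
          for n in range(60, 200))
```

Because the kernel has zeros at the nonzero multiples of L, every original sample survives unchanged in the output (y[nL] = x[n]); only the new in-between samples are synthesized by the filter.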
The technique you describe for upsampling is ideal for resampling a time
series at a fixed spacing. It's fine. Here the accuracy depends not on
the upsampling factor L, but on the length and quality of your low-pass
filter. As in my last message, the interpolation theorem says I can
achieve perfect reconstruction of the underlying analog signal given an
infinite-length (ideal) filter. Somewhere between 0 and infinity one
decides how large a filter to really use.

The techniques that use cubic, parabolic, or polynomial types of
interpolation are typically used when the spacing is variable. This
comes up often in communications systems when a received time series
needs to be resampled to match a transmitter's clock rate. With each
sample, the value of the signal at a time t needs to be computed using
discrete points in the vicinity of t. The value of t may not increase
linearly, as is the case when the receiver and transmitter clocks have a
frequency offset. So t is adjusted to the next sample time and the
sample at that time is calculated.
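The variable-spacing case described above can be sketched with a 4-point cubic Lagrange interpolator evaluated on a drifting time base (illustrative Python; the clock-offset value and signal are invented):

```python
import math

def cubic_lagrange(x, t):
    """4-point cubic Lagrange interpolation of uniformly sampled x at
    fractional time t, using x[n-1], x[n], x[n+1], x[n+2]."""
    n = int(math.floor(t))
    mu = t - n
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    return (-mu * (mu - 1) * (mu - 2) / 6 * xm1
            + (mu + 1) * (mu - 1) * (mu - 2) / 2 * x0
            - (mu + 1) * mu * (mu - 2) / 2 * x1
            + (mu + 1) * mu * (mu - 1) / 6 * x2)

# resampling with a small clock-frequency offset: each output sample
# advances the time base by 1.0001 input samples, so the fractional
# part drifts and every output needs a different interpolation point
f = 0.02
x = [math.sin(2 * math.pi * f * n) for n in range(400)]
step, t = 1.0001, 10.0
out, err = [], 0.0
while t < 300:
    v = cubic_lagrange(x, t)
    err = max(err, abs(v - math.sin(2 * math.pi * f * t)))
    out.append(v)
    t += step
```

For a slow, well-oversampled tone the cubic tracks the underlying signal closely; as the later posts argue, the error grows quickly as the signal bandwidth approaches Nyquist.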

-V

On 9 Oct 2006 07:57:24 -0700, "Mark" <makolber@yahoo.com> wrote:

> What are the comparative merits of linear, cubic spline type
> interpolation vs the insert zero sample and filter type of
> interpolation used for audio?
In my experience, interpolators like linear or cubic spline create large errors between the interpolated and the ideal points. I've not yet seen a case where linear or any kind of spline-based interpolator is a good choice in signal processing. I'm sure they may be useful if the large errors can be tolerated, but there may be simpler methods that are better.
> It seems to me that in the case of the audio method, if the low pass
> filter were "perfect" in rejection of the image frequencies, the insert
> sample/filter method provides "perfect" results. By comparison, the
> linear, cubic, spline methods are always imperfect.
>
> Why not always use the audio method, is it computationally expensive?
Not always, and, as usual, it depends. Something that hasn't been mentioned yet is polyphase filtering, which can provide the equivalent of your zero-stuffing method but with efficient computation. The time resolution of interpolated samples is only limited by the number of phases (in the polyphase filter) needed to provide that sample. Any time offset can be interpolated, but the coefficient set for that phase will be unique.

However, as soon as one limits the precision of the filter coefficients (which is practical), there is a phase resolution beyond which the coefficient changes will be below the quantization levels of the coefficients. So, practically, within reason, once you have a certain level of computational precision you can interpolate to any time that you want.

And interpolation done this way will have minimum error for the bandwidth supported by the filter, so you can do the bandlimiting (to eliminate the images) and the interpolation in one step.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
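A minimal polyphase-style sketch of what Eric describes (illustrative Python; a Hann-windowed sinc stands in for a properly designed prototype filter, and all names are made up): each fractional offset p/L gets its own small coefficient set, and applying that set to the nearby input samples yields the interpolated value directly, with no multiplications by stuffed zeros.

```python
import math

def kernel(u, half):
    """Hann-windowed sinc, support |u| < half."""
    if abs(u) >= half:
        return 0.0
    s = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return s * (0.5 + 0.5 * math.cos(math.pi * u / half))

def make_phases(L, half=8):
    """One coefficient set per fractional offset p/L (a 'phase').
    phases[p][j] weights input sample x[n - half + 1 + j] when
    interpolating at time t = n + p/L."""
    return [[kernel(p / L + half - 1 - j, half) for j in range(2 * half)]
            for p in range(L)]

def interp(x, n, p, phases, half=8):
    """Interpolated sample at t = n + p/L using phase p only."""
    h = phases[p]
    return sum(h[j] * x[n - half + 1 + j] for j in range(2 * half))

L, half = 32, 8
phases = make_phases(L, half)
f = 0.02
x = [math.sin(2 * math.pi * f * n) for n in range(100)]
# every phase between samples 50 and 51, compared against the true sine
err = max(abs(interp(x, 50, p, phases, half)
              - math.sin(2 * math.pi * f * (50 + p / L)))
          for p in range(L))
```

Phase 0 reproduces the input sample exactly; the other phases each cost only 2*half multiplies, which is the efficiency win over literally zero-stuffing and filtering.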
On 10 Oct 2006 05:03:38 -0700, "Viognier" <viognier@cox.net> wrote:

> The techniques that use cubic, parabolic, polynomial types of
> interpolation are typically used when the spacing is variable. This
> comes up often in communications systems when a received time series
> needs to be resampled to match a transmitters clock rate. ... So t is
> adjusted to the next sample time and the sample at that time is
> calculated.
Cubic, parabolic, polynomial, or pretty much any spline-based interpolator is a really bad choice for a comm system. Most comm systems use FIR interpolators, e.g., polyphase filters, in order to avoid the substantial performance loss that one would incur with the interpolators that you've mentioned.

I looked at spline-based interpolation once when a reviewer of a paper I'd written said that my method was bad and I should be using splines. The reviewer was completely off his rocker (IMHO), which I demonstrated later. The errors are just too large for a communications system, where it isn't hard to get enough samples on either side of the interpolation point to do the job. Splines have their applications, but comm doesn't seem to be one of them.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
Mark wrote:
> as long as we are discussing interpolation...
>
> I have always been confused about the relationship between this kind of
> interpolation (linear, cubic, spline etc) and the kind used in DSP
> audio work for up sampling.
it's not just audio. it's any bandlimited interpolation of uniformly sampled data.
> In audio up-sampling we just insert zero value samples and then pass
> the result through a low pass filter to remove the image frequencies.
that's *conceptually* what to do. the polyphase interpolation method takes into account the inserted zeros (the non-zero terms in the convolution summation are equally spaced) and does not bother multiplying by those zero terms.
> What are the comparative merits of linear, cubic spline type
> interpolation vs the insert zero sample and filter type of
> interpolation used for audio?
you can compare the merits by recognizing that, even for spline/polynomial interpolation, all methods can be represented as convolving the string of ideal impulses (weighted by the sample values) with the corresponding interpolation function, which is an impulse response of a LPF. then you can judge which LPF is better for the application. Duane Wise and i did an AES paper doing some of this comparison, perhaps 7 or 9 years ago. i can send a pdf copy to you or whoever else emails me.
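r b-j's point -- judge each method by the LPF its kernel implies -- can be made concrete for linear interpolation, whose kernel is the two-sample triangle with frequency response sinc^2(f). A small sketch (Python, illustrative; the chosen test frequency is arbitrary):

```python
import math

def tri_freq(f):
    """Frequency response of the triangular (linear-interpolation)
    kernel: H(f) = sinc^2(f), with f in cycles per input sample."""
    if f == 0:
        return 1.0
    return (math.sin(math.pi * f) / (math.pi * f)) ** 2

# a tone at 0.45 cycles/sample has its first image at 0.55 cycles/sample
# after upsampling; linear interpolation leaves that image only a few dB
# below the (already drooped) signal
image_rejection_db = 20 * math.log10(tri_freq(0.55) / tri_freq(0.45))
```

The sinc^2 response has nulls exactly at the integer frequencies, so images of near-DC content are strongly rejected, but image rejection collapses near Nyquist; better kernels (longer windowed sincs, optimized LPFs) trade computation for a flatter passband and deeper stopband.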
> It seems to me that in the case of the audio method, if the low pass
> filter were "perfect" in rejection of the image frequencies, the insert
> sample/filter method provides "perfect" results. By comparison, the
> linear, cubic, spline methods are always imperfect.
so is any FIR or causal IIR.
> Why not always use the audio method, is it computationally expensive?
it usually requires a table lookup, which limits the number of fractional delays to a finite number. if your interpolated "time" can land *anywhere* between the input samples, then you have to interpolate between the two neighboring fractional delays (for which you have a finite set of coefficients). for audio, i usually think that upsampling with some kind of optimized LPF (so the coefs are in table form) with an upsample ratio of 512, and then linearly interpolating in between those fractional delays, will result in error from aliasing upon resampling of less than -120 dB.

r b-j
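r b-j's recipe (a dense fractional-delay table plus linear interpolation between adjacent table phases) might be sketched like this (illustrative Python; a table ratio of 64 and a short Hann-windowed sinc stand in for his 512 and an optimized LPF, so the error here is nowhere near -120 dB):

```python
import math

R, HALF = 64, 8   # table ratio (r b-j suggests 512) and kernel half-width

def kernel(u):
    """Hann-windowed sinc with support |u| < HALF."""
    if abs(u) >= HALF:
        return 0.0
    s = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
    return s * (0.5 + 0.5 * math.cos(math.pi * u / HALF))

# table[p][j]: windowed-sinc coefficients for fractional delay p/R
table = [[kernel(p / R + HALF - 1 - j) for j in range(2 * HALF)]
         for p in range(R)]

def interp(x, t):
    """Sample x at arbitrary t by linearly blending the outputs of the
    two table phases that bracket the fractional delay."""
    n = int(math.floor(t))
    scaled = (t - n) * R
    p = int(math.floor(scaled))
    a = scaled - p            # blend weight between phase p and p+1

    def phase_out(ph):
        base = n + 1 if ph == R else n   # phase R wraps to phase 0 of next sample
        h = table[ph % R]
        return sum(h[j] * x[base - HALF + 1 + j] for j in range(2 * HALF))

    return (1 - a) * phase_out(p) + a * phase_out(p + 1)

f = 0.02
x = [math.sin(2 * math.pi * f * n) for n in range(100)]
# sweep 997 incommensurate offsets across one sample interval
err = max(abs(interp(x, 40 + q / 997) - math.sin(2 * math.pi * f * (40 + q / 997)))
          for q in range(997))
```

The finite table never limits where t can land: any offset falls between two stored phases, and the linear blend supplies the in-between coefficients.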
Eric Jacobsen wrote:

> Cubic, parabolic, polynomial, or pretty much any spline-based
> interpolator is a really bad choice for a comm system. Most comm
> systems use FIR interpolators, e.g., polyphase filters, in order to
> avoid the substantial performance loss that one would incur with the
> interpolators that you've mentioned. ...
When resampling a sequence to an output sample rate equal to a rational number times the input sample rate (FSout = (P/Q) * FSin), polyphase filtering is the method of choice. But when the output sample rate is an irrational multiple of the input rate (FSout = r * FSin), one cannot use polyphase filtering directly. That's when spline methods may be helpful.

This happens often in communications when constructing an all-digital receiver.

-V
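The rational/irrational distinction shows up directly in the index arithmetic: for FSout = (P/Q) * FSin the fractional offsets repeat with period P, so a P-entry coefficient table covers every output sample, whereas an irrational ratio never repeats and the offset must be computed (or quantized against a dense table) on the fly. A tiny sketch (Python, illustrative values):

```python
P, Q = 3, 2   # FSout = (P/Q) * FSin = 1.5 * FSin
# output sample m lands at input time t_m = m*Q/P: base input index
# (m*Q)//P plus fractional offset ((m*Q) % P)/P.  The offsets cycle
# with period P, so P coefficient sets cover every output sample.
t = [(m * Q) / P for m in range(6)]
phase = [(m * Q) % P for m in range(6)]
```

Here the phase sequence is 0, 2, 1, 0, 2, 1, ... -- three coefficient sets suffice forever; replace Q/P with an irrational r and no finite phase table repeats.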
On 11 Oct 2006 05:01:48 -0700, "Viognier" <viognier@cox.net> wrote:

> When resampling a sequence to an output sample rate equal to a rational
> number times the input sample rate (FSout = (P/Q) * FSin), polyphase
> filtering is the method of choice. But when the output sample rate is
> an irrational multiple of the input rate (FSout = r * FSin), one cannot
> use the polyphase filtering. That's when spline methods may be helpful.
I'd still disagree, having used polyphase filters for many years for locking symbol clocks in digital receivers, with no measurable loss.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
robert bristow-johnson wrote:
> Mark wrote:
> you can compare the merits by recognizing that, even for
> spline/polynomial interpolation, all methods can be represented as
> convolving the string of ideal impulses (weighted by the sample values)
> with the corresponding interpolation function which is an impulse
> response of a LPF. then you can judge which LPF is better for the
> application.
>> It seems to me that in the case of the audio method, if the low pass
>> filter were "perfect" in rejection of the image frequencies, the
>> insert sample/filter method provides "perfect" results. By comparison,
>> the linear, cubic, spline methods are always imperfect.
>
> so is any FIR or causal IIR.
What's more, don't the IRs of interpolating splines converge to the sinc function at infinite order? Martin -- He who can be himself shall belong to no one else. --Paracelsus