
Interpolation

Started by cpshah99 March 25, 2008
On Sun, 30 Mar 2008 06:51:50 -0700, Rick Lyons
<R.Lyons@_BOGUS_ieee.org> wrote:


>By the way, my current thinking (beyond what I wrote
>ten years ago) is that the term "interpolation" should
>be used for *ANY* sample rate change that is NOT decimation
>by an integer factor. Stated in different words, I now
>believe the phrase "interpolation" should be used, for
>example, to describe a sample rate decrease by a factor
>of 3/4.
My personal view, for whatever it's worth, is that "interpolation" already has a pretty well understood meaning, which is estimation of the value of a new point in between two (or more) existing data points. In our case we can generally substitute "sample" for "point", but the idea is the same.

It's not hard to see how "interpolation" became somewhat equivalent to "upsampling", since upsampling always involves interpolation to generate the new samples.

So I think your suggested use is still a little problematic, because even for integer decimation ratios the output sample instants could be points interpolated between the previous sample points. This will always be the case with a symmetric FIR with an even number of taps, for example, regardless of the decimation (or downsampling) rate, even if the downsample ratio is an integer (an integer ratio...we have a lot of terminology problems in this area ;) ).

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
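(Eric's even-tap point is easy to check numerically. A minimal sketch, assuming numpy and scipy as the tools, which the thread itself doesn't prescribe: a linear-phase FIR with an even number of taps N has a group delay of (N - 1)/2 samples, a half-integer, so its outputs sit halfway between input instants no matter what the downsampling ratio is.)

    import numpy as np
    from scipy import signal

    N = 8                                   # even number of taps
    h = signal.firwin(N, 0.25)              # symmetric (linear-phase) lowpass FIR
    w, gd = signal.group_delay((h, [1.0]))  # group delay in samples vs. frequency
    print(gd[0])                            # 3.5: a half-sample offset, so every
                                            # output lies between input instants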
On Mar 30, 6:51 am, Rick Lyons <R.Lyons@_BOGUS_ieee.org> wrote:
> ...
> Hi Dale,
> I'm not rightly sure that I understand the exact
> meaning of your post. But if you have any suggestions
> for me to help me minimize what you think is "sloppiness"
> in my writing, I'd sure be willing to hear your opinions.
>
You could have titled the first two sections in chapter 10 as:

  10.1 Decimation
  10.2 Interpolation, a process that in one of the two usages in this
       section is complementary to decimation

That you chose instead to use:

  10.1 Decimation
  10.2 Interpolation

is concise technical editing and perfectly appropriate. However, as evidenced in this thread, it has been sloppy with regard to the side effects that the mere juxtaposition of decimation and interpolation can produce. Casual readers have cited such juxtaposition as validation that your one and only usage of interpolation is as the complement of decimation. A simple reading of the section titled "Interpolation" shows otherwise. It isn't easy to suggest this to people here.
> This thread's examination of the language of 'sample
> rate change' (SRC) is fascinating to me because in recent
> years I've also run into trouble reading the literature
> because there seems to be no standard definitions for words
> like "decimation", "interpolation", "expander", "upsampling",
> "downsampling", "compressor", etc. (And, I confess, my writing
> has not helped this situation, but I hope to correct that.)
Even worse, the history of the use of the word interpolation is rife with authors who have used two definitions. Crochiere and Rabiner, in their tutorial in the IEEE Proceedings in 1981, can't get past the first page without using multiple meanings in consecutive sentences.
> ...
In his work I have cited in this thread, harris covers this area using only the terms "resampling", "up-sampling" and "down-sampling", where the up- and down-sampling aren't used to span the range of resampling.
> By the way, my current thinking (beyond what I wrote
> ten years ago) is that the term "interpolation" should
> be used for *ANY* sample rate change that is NOT decimation
> by an integer factor. Stated in different words, I now
> believe the phrase "interpolation" should be used, for
> example, to describe a sample rate decrease by a factor
> of 3/4.
Whatever terminology we end up with, it should not include dual usage of any terms. harris did that by dropping interpolation.

Particular cases that often fall through the cracks of simplified usages are 1) fractional sample delay with no rate change and 2) resampling to an integer submultiple frequency, where new values must be "calculated on the curve" because anti-aliasing is required even for samples that still fall on the old sample times. It would be nice to have a terminology that differentiates clearly between 2) and simply throwing away some of the samples.
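(As a concrete instance of the fractional-rate change Rick describes above, a minimal sketch, again assuming numpy/scipy as my choice of tools rather than the thread's: a 3/4 rate decrease done the polyphase way, upsample by 3, anti-alias filter, downsample by 4.)

    import numpy as np
    from scipy import signal

    x = np.cos(0.1 * np.arange(400))           # 400 samples at the old rate
    y = signal.resample_poly(x, up=3, down=4)  # polyphase 3/4-rate resampler
    print(len(x), len(y))                      # 400 -> 300 samples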
>
> Ya' know what's interesting? In the early 1970's, when
> DSP was becoming more and more popular, a group of
> DSP experts/pioneers/gurus joined together and published
> the paper:
>
> L. Rabiner et al., "Terminology in Digital Signal
> Processing," IEEE Trans. on Audio and Electroacoustics,
> Vol. AU-20, No. 5, December 1972, pp. 322-337.
>
Indeed a fine reference. It has its quirks, too. One of the common problems in discussions on comp.dsp is the varied usages of "resolution". Rabiner et al. use resolve to mean 'to separate two tones', and the process thus becomes resolution. Unfortunately, some 'separate the tones' processes are performed by looking at oscilloscopes (or harris' response plots for close tones in the 1978 Proceedings) and others by defining detection and estimation processes and calculating variances. There are, of course, many possible (usually unstated) choices of detection and estimation and their combinations.

I hope we can eventually fix "resolution", too.
> ...
> See Ya',
> [-Rick-]
Dale B. Dalrymple
On 2008-03-30 10:51:50 -0300, Rick Lyons <R.Lyons@_BOGUS_ieee.org> said:

> On Thu, 27 Mar 2008 13:45:25 -0700 (PDT), dbd <dbd@ieee.org> wrote:
>
>> On Mar 27, 1:02 pm, Randy Yates <ya...@ieee.org> wrote:
>
> (Snipped by Lyons)
>
>> Randy
>>
>> I only have your first reference at hand right now.
>>
>> A scan of the titles in the contents page supports your argument.
>> However, on page 387 of the 2004 version, Lyons says that
>> "Conceptually interpolation comprises the generation" of a
>> "continuous ... curve passing through our ... sampled values ...
>> followed by sampling that curve at a new sample rate". The title of
>> the section is "Interpolation" but Lyons uses "Sample rate increase by
>> interpolation" within the first paragraph. It may be common for book
>> editing to induce technical sloppiness in the course of a search for
>> conciseness and eye-pleasing page layout. That is not justification to
>> suggest the enforcement of such sloppiness here.
>>
>> Dale B. Dalrymple
>
> Hi Dale,
> I'm not rightly sure that I understand the exact
> meaning of your post. But if you have any suggestions
> for me to help me minimize what you think is "sloppiness"
> in my writing, I'd sure be willing to hear your opinions.
>
> This thread's examination of the language of 'sample
> rate change' (SRC) is fascinating to me because in recent
> years I've also run into trouble reading the literature
> because there seems to be no standard definitions for words
> like "decimation", "interpolation", "expander", "upsampling",
> "downsampling", "compressor", etc. (And, I confess, my writing
> has not helped this situation, but I hope to correct that.)
>
> I'm gonna try to convince my Publisher to publish a 3rd
> edition of my book in the next year or two and, as such,
> I have totally re-written my current Chapter 10, "Sample
> Rate Conversion".
>
> In fact, I've written a short section in my new Chapter 10
> material describing what I think is all the various, and
> sometimes ambiguous, terminology in the literature of sample
> rate change (SRC) processing. That new material of mine is
> meant to warn my readers to be both flexible and careful
> when they read the *words* authors use to describe SRC
> techniques.
>
> By the way, my current thinking (beyond what I wrote
> ten years ago) is that the term "interpolation" should
> be used for *ANY* sample rate change that is NOT decimation
> by an integer factor. Stated in different words, I now
> believe the phrase "interpolation" should be used, for
> example, to describe a sample rate decrease by a factor
> of 3/4.
Interpolation would be any time the new samples are at other than the old places. The new places could be more closely spaced than the old ones or less closely spaced. This is just an alternate phrasing of what you have said; maybe some would find it easy to remember. After all, an interpolation formula just matches the old values. Sometimes it uses only a few old values and sometimes it uses a lot.

In the rather simple case of linear interpolation it seems strange (to some) to have the new values be more widely spaced, particularly if there are several old values between the new values. Some old values would then have no effect. But for more complex cases this apparent strangeness goes away. One could use this to say that decimation is when some of the intermediate values have no effect. This line of argument says that some simple interpolation rules can be decimation even if the new values are not at the old places.

When applied to step-function interpolation one does get the sloppy terminology, so maybe in that case it is just the interpolation that is sloppy, or maybe it is an assumption that all interpolation is by step functions.
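(A toy numeric illustration of the "some old values have no effect" point, assuming numpy; the grid and signal are made up for the example. With linear interpolation onto a coarser, offset grid, each output depends only on its two bracketing old samples, so the old samples in between are never consulted.)

    import numpy as np

    t_old = np.arange(10)                   # old sample times 0..9
    x_old = np.sin(0.3 * t_old)
    t_new = np.arange(0.5, 9.0, 3.0)        # wider spacing, off the old grid
    x_new = np.interp(t_new, t_old, x_old)  # uses neighbors (0,1), (3,4), (6,7)
    print(x_new)                            # samples 2, 5, 8, 9 had no effect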
> Ya' know what's interesting? In the early 1970's, when
> DSP was becoming more and more popular, a group of
> DSP experts/pioneers/gurus joined together and published
> the paper:
>
> L. Rabiner et al., "Terminology in Digital Signal
> Processing," IEEE Trans. on Audio and Electroacoustics,
> Vol. AU-20, No. 5, December 1972, pp. 322-337.
>
> In that paper, which makes for *VERY* useful reading for
> anyone in the field of DSP, the authors literally
> compiled a list of "recommended DSP terminology" for all
> future DSP-related papers, articles, and textbooks. They
> hoped to establish a self-consistent, unambiguous, language
> for DSP. For example, they clearly defined the
> difference between the terms "recursive filter" and
> "IIR filter".
>
> It might be interesting to write a paper titled:
> "Terminology in Sample Rate Change Systems" in the
> hope of reducing the confusion that now exists in
> the language of SRC as described in the literature.
> Any takers?
>
> See Ya',
> [-Rick-]
On Sun, 30 Mar 2008 11:03:44 -0700, Eric Jacobsen
<eric.jacobsen@ieee.org> wrote:

>On Sun, 30 Mar 2008 06:51:50 -0700, Rick Lyons
><R.Lyons@_BOGUS_ieee.org> wrote:
>
>>By the way, my current thinking (beyond what I wrote
>>ten years ago) is that the term "interpolation" should
>>be used for *ANY* sample rate change that is NOT decimation
>>by an integer factor. Stated in different words, I now
>>believe the phrase "interpolation" should be used, for
>>example, to describe a sample rate decrease by a factor
>>of 3/4.
>
>My personal view, for whatever it's worth, is that "interpolation"
>already has a pretty well understood meaning, which is estimation of
>the value of a new point in between two (or more) existing data
>points. In our case we can generally substitute "sample" for
>"point", but the idea is the same.
>
>It's not hard to see how "interpolation" became somewhat equivalent to
>"upsampling", since upsampling always involves interpolation to
>generate the new samples.
>
>So I think your suggested use is still a little problematic because
>even for integer decimation ratios the output sample instants could
>be points interpolated between the previous sample points. This will
>always be the case with a symmetric FIR with an even number of taps,
>for example, regardless of the decimation (or downsampling) rate, even
>if the downsample ratio is an integer (an integer ratio...we have a lot
>of terminology problems in this area ;) ).
>
>Eric Jacobsen
Hi Eric,

I understand your point (about the FIR filters). As it turns out, Randy Yates pointed out that "notion" to me just recently. Yep, there's a fair amount of confusion.

See Ya',
[-Rick-]
On Sun, 30 Mar 2008 11:05:04 -0700 (PDT), dbd <dbd@ieee.org> wrote:

  (snipped by Lyons)
>
>>
>> Ya' know what's interesting? In the early 1970's, when
>> DSP was becoming more and more popular, a group of
>> DSP experts/pioneers/gurus joined together and published
>> the paper:
>>
>> L. Rabiner et al., "Terminology in Digital Signal
>> Processing," IEEE Trans. on Audio and Electroacoustics,
>> Vol. AU-20, No. 5, December 1972, pp. 322-337.
>>
>
>Indeed a fine reference. It has its quirks, too. One of the common
>problems in discussions on comp.dsp is the varied usages of
>"resolution". Rabiner et al. use resolve to mean 'to separate two tones',
>and the process thus becomes resolution. Unfortunately, some 'separate
>the tones' processes are performed by looking at oscilloscopes (or
>harris' response plots for close tones in the 1978 Proceedings) and
>others by defining detection and estimation processes and calculating
>variances. There are, of course, many possible (usually unstated)
>choices of detection and estimation and their combinations.
>
>I hope we can eventually fix "resolution", too.
>
>> ...
>> See Ya',
>> [-Rick-]
>
>Dale B. Dalrymple
Hi,

Thanks for your thoughts/suggestions.

I agree that the word "resolution" is problematic. I sometimes cringe when I read that word, because it's not always clear what an author means by "resolution".

[-Rick-]
On Mar 30, 3:05 pm, Rick Lyons <R.Lyons@_BOGUS_ieee.org> wrote:
> On Sun, 30 Mar 2008 11:05:04 -0700 (PDT), dbd <d...@ieee.org> wrote:
...
> >> Ya' know what's interesting? In the early 1970's, when
> >> DSP was becoming more and more popular, a group of
> >> DSP experts/pioneers/gurus joined together and published
> >> the paper:
> >
> >> L. Rabiner et al., "Terminology in Digital Signal
> >> Processing," IEEE Trans. on Audio and Electroacoustics,
> >> Vol. AU-20, No. 5, December 1972, pp. 322-337.
> >
> >Indeed a fine reference. It has its quirks, too. One of the common
> >problems in discussions on comp.dsp is the varied usages of
> >"resolution". Rabiner et al. use resolve to mean 'to separate two tones',
> >and the process thus becomes resolution. Unfortunately, some 'separate
> >the tones' processes are performed by looking at oscilloscopes (or
> >harris' response plots for close tones in the 1978 Proceedings) and
> >others by defining detection and estimation processes and calculating
> >variances. There are, of course, many possible (usually unstated)
> >choices of detection and estimation and their combinations.
> >
> >I hope we can eventually fix "resolution", too.
> >
> >> ...
> >> See Ya',
> >> [-Rick-]
> >
> >Dale B. Dalrymple
>
> Hi,
> Thanks for your thoughts/suggestions.
>
> I agree that the word "resolution" is
> problematic. I sometimes cringe when I read
> that word, because it's not always clear
> what an author means by "resolution".
I see no need to "cringe", only to ask for clarification, or to clarify which interpretation one is assuming in one's reply, answer or commentary. It's oft the case in narrow technical specialties for certain terms to have different meanings from those in common usage, or even in use in other specialties. Clarification of which meaning is being used often leads to more widely understood communication than does fighting over one's own specialist preferred definition. IMHO. YMMV.

--
rhn A.T nicholson d.0.t C-o-M
On Mar 30, 12:23 pm, Gordon Sande <g.sa...@worldnet.att.net> wrote:

> ...
> Interpolation would be any time the new samples are at other than
> the old places. The new places could be more closely spaced than the
> old ones or less closely spaced. This is just an alternate phrasing of
> what you have said. Maybe some would find it easy to remember. After
> all, an interpolation formula just matches the old values.
> ...
Thank you for demonstrating the misconception common in one of the problem areas I pointed out to Rick Lyons. Resampling formulas -do not- match the old samples when they include anti-alias filtering. A new sample -has- been calculated even though it is at the same time as one of the original samples. fred harris' explicit separation of these processes into a resampling operation and a filtering operation might help people understand this.

Can we define things to make it harder to make this mistake?

Dale B. Dalrymple
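(A minimal numeric sketch of Dale's point, with numpy/scipy assumed as the tools and a made-up broadband noise input. Downsampling broadband content by 2 requires an anti-alias lowpass, so the output value at a retained time instant is a freshly computed one, not a copy of the old sample.)

    import numpy as np
    from scipy import signal

    x = np.random.default_rng(0).standard_normal(200)  # broadband input
    y = signal.decimate(x, 2, ftype='fir')  # anti-alias filter, keep every 2nd
    print(x[20], y[10])   # same time instant, different values: y[10] was
                          # "calculated on the curve", not copied from x[20]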
On Mar 30, 8:42 pm, dbd <d...@ieee.org> wrote:
> On Mar 30, 12:23 pm, Gordon Sande <g.sa...@worldnet.att.net> wrote:
>
> > ...
> > Interpolation would be any time the new samples are at other than
> > the old places. The new places could be more closely spaced than the
> > old ones or less closely spaced. This is just an alternate phrasing of
> > what you have said. Maybe some would find it easy to remember. After
> > all, an interpolation formula just matches the old values.
> > ...
>
> Thank you for demonstrating the misconception common in one of the
> problem areas I pointed out to Rick Lyons. Resampling formulas
> -do not- match the old samples when they include anti-alias filtering.
that's not always true. the requirement for resampled samples to match the old samples is that the impulse response of the anti-aliasing filter, h(t), have

  h(0) = 1
  h(nT) = 0  for all integer n <> 0

that is the necessary and sufficient requirement.

now, it is certainly the case that some optimizing of that impulse response will make an interpolation kernel that does not satisfy the above requirement, but there is a whole class of impulse responses, namely windowed sinc() functions, that do. and if N (the polyphase FIR filter length) is large enough, the aliases are very well antied.

but, for a fixed and limited N, i'm not saying that windowed sinc() functions are the best. *but* what would be interesting is if such constraints are applied to an optimization alg: you can always divide by a sinc(). you never get division by zero except at places where it is 0/0, and then you can use L'Hopital's rule. it would be interesting to see the effective "window function" that you get from such a division of one function by another.

i know that when using Lagrange and Hermite interpolation, the output *does* go through the input points. i thought at one time (about the time that Duane Wise and i did this little interpolation paper) i took a look at the resulting window function that i got when i divided the Lagrange or Hermite interpolation kernel by a sinc() function. of course, it didn't match any window that i saw before, and it wasn't usually continuous (at least some derivative showed discontinuities in the part of the window that was non-zero).
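(A quick numeric check of that zero-crossing condition, with numpy assumed and a Hann-style taper chosen arbitrarily for the sketch: a windowed sinc sampled on the original grid, T = 1 here, hits h(0) = 1 and h(n) = 0 at nonzero integers, because the window only scales the sinc's existing zeros.)

    import numpy as np

    n = np.arange(-8, 9)                        # integer sample times, T = 1
    window = 0.5 + 0.5 * np.cos(np.pi * n / 9)  # Hann-like taper over support
    h = np.sinc(n) * window                     # windowed sinc kernel
    print(h[n == 0])                            # [1.]   -> h(0) = 1
    print(np.max(np.abs(h[n != 0])))            # ~1e-16 -> h(n) = 0, n != 0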
> A new sample -has- been calculated even though it is at the same time
> as one of the original samples. fred harris' explicit separation of
> these processes into a resampling operation and a filtering operation
> might help people understand this.
for me, all that is needed to understand the interpolation, resampling, and/or fractional delay issue is the Nyquist/Shannon sampling and reconstruction theorem.
> Can we define things to make it harder to make this mistake?
can you define the mistake explicitly for me?

if it is the semantic that for "interpolation", the interpolated samples *must* agree with the original samples when their times coincide, then i think that such is a good semantic. but "sample rate conversion", "resampling", and "fractional-sample delay" are not necessarily the same thing as "interpolation". *sometimes* procedures we call "sample rate conversion", "resampling", and "fractional-sample delay" do use interpolation as a method to decide what value to use for those samples at in-between times, but sometimes they use something else.

r b-j
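(To make the fractional-sample-delay case concrete, a hedged sketch assuming numpy, built directly on the reconstruction theorem r b-j cites: evaluate the bandlimited reconstruction x(t) = sum_n x[n] sinc(t - n) at the shifted times t = k - d. Truncating the sum to the signal's own support is an assumption of the sketch and causes error near the edges.)

    import numpy as np

    def frac_delay(x, d):
        # y[k] = sum_n x[n] * sinc((k - d) - n): bandlimited interpolation,
        # truncated to the signal's support (expect error near the edges)
        k = np.arange(len(x))
        return np.sinc(k[:, None] - d - k[None, :]) @ x

    x = np.sin(0.2 * np.arange(64))
    y = frac_delay(x, 0.5)   # values halfway between the original samples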