DSPRelated.com
Forums

FIR Interpolation question

Started by Roger Waters February 2, 2004
Jerry:

Whoa! Stop right there! The OP wants to interpolate to a sample frequency
greater than that of the sampled data (my reading of this thread). Thus, he
needs to create data points that did not originally exist; whoever did the
original sampling is out of the picture. He just does not want to create
artifacts in the extended (upsampled) frequency domain by doing something
silly like linearly interpolating the data, and so he is searching for a
proper interpolation filter (to filter out the spectral images that appear,
in the higher-sample-rate data, above the frequencies of his original data)
or a frequency-domain way of accomplishing this, which is what the thread
has been devoted to so far. What are you thinking, Jerry? Did I completely
misread this thread?

And also, by definition, when the samples are what they are (obtained from
god-knows-where), they can't possibly contain any frequencies higher than
half the sample rate. All samples are "valid" (the data you have is what it
is)--they just may not have the spectral content and/or sample rate you
want. This is what the OP is trying to do something about.

Jim

"Jerry Avins" <jya@ieee.org> wrote in message
news:4022be43$0$8367$61fed72c@news.rcn.com...
> Roger Waters wrote:
>
> > The reason I'm not doing a pure mathematical interpolation is that
> > I've read that this leads to signal distortion by adding undesired
> > spectral images to the signal at multiples of the original sampling
> > rate.
>
> Whoa! Stop right there! The samples are valid only if the sampled signal
> contains no frequencies as high as half the sample rate. (It's up to
> whoever does the sampling to ensure that that condition is met.)
> Spectral images at multiples of the sample rate can be filtered out;
> they are irrelevant.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
"Roger Waters" <rogerwaters@fastmail.fm> wrote in message
news:6975d4b5.0402051320.5b66c1ae@posting.google.com...
> Hello All,
>
> First off, let me thank you all for your replies!
>
> I think I need to explain the problem at hand a little better. I'm
> reading in the data of a recorded .WAV file. This file is a recording
> of music played on a single string of a guitar.
>
> Once I've read the data in, I segment the data into 256-sample arrays,
> on each of which I perform autocorrelation.
>
> Therefore I have, let's say, "N" segments, each 256 samples long, which
> I now want to interpolate.
>
> The reason I'm not doing a pure mathematical interpolation is that
> I've read that this leads to signal distortion by adding undesired
> spectral images to the signal at multiples of the original sampling
> rate.
>
> Fred, thank you for the link to your application. However, as you've
> mentioned in the README, filter lengths greater than 300 are not
> suitable as inputs.
>
> I basically need to code the interpolation in C#, and need a set of
> filter coefficients. More importantly, I want to understand what
> exactly is going on.
>
> I've read Paul Embree's approach (C Language Algorithms for DSP). He
> says that the passband should be 0 to (fs*L)/(2*L) = fs/2 (L =
> interpolation factor) and the stopband from fs/2 to (fs*L)/2.
>
> My .WAV file is sampled at 48 kHz. Therefore, for a 4:1 interpolation,
> the cutoff frequency (ideally) should be 24 kHz. The stopband should
> extend from 24 kHz to 96 kHz.
>
> The smallest power of 2 greater than 96000 is 2^17 = 131072 (for the
> IFFT length). Should I scale 0 - 131072 to 0 - 1024, take an IFFT, and
> use the coefficients?
Roger,

OK, let me paraphrase back: you compute the autocorrelation over 256
samples. You repeat this process for the next 256 samples. You continue
until there are N 256-point autocorrelations computed (each autocorrelation
will be 511 samples long). It's unclear to me whether you want to
interpolate between the autocorrelations or within each autocorrelation.
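For concreteness, here's a rough sketch in plain C of that
segmentation/autocorrelation step as I understand it. The function names,
the SEG_LEN constant, and the biased (unwindowed, unnormalized)
autocorrelation are just illustrative assumptions on my part, not anything
you've posted:

#include <stddef.h>

#define SEG_LEN 256   /* samples per segment, as described above */

/* Biased autocorrelation of one 256-sample segment.
   The output has 2*SEG_LEN - 1 = 511 lags, from -(SEG_LEN-1) to +(SEG_LEN-1). */
static void autocorr_segment(const double x[SEG_LEN], double r[2 * SEG_LEN - 1])
{
    for (int lag = -(SEG_LEN - 1); lag <= SEG_LEN - 1; ++lag) {
        double acc = 0.0;
        for (int n = 0; n < SEG_LEN; ++n) {
            int m = n + lag;
            if (m >= 0 && m < SEG_LEN)
                acc += x[n] * x[m];
        }
        r[lag + (SEG_LEN - 1)] = acc;
    }
}

/* Process a buffer of num_samples .WAV samples in consecutive 256-sample
   segments; r must hold (num_samples / SEG_LEN) * 511 doubles. */
static void autocorr_all(const double *wav, size_t num_samples, double *r)
{
    size_t nseg = num_samples / SEG_LEN;
    for (size_t k = 0; k < nseg; ++k)
        autocorr_segment(wav + k * SEG_LEN, r + k * (2 * SEG_LEN - 1));
}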
> The reason I'm not doing a pure mathematical interpolation is that
> I've read that this leads to signal distortion by adding undesired
> spectral images to the signal at multiples of the original sampling
> rate.
Well ... sampling does that anyway, so you'd have to tell me more about this
"distortion". Could it be that the interpolation filters aren't good enough
for the application? An interpolation filter with more than 300 coefficients
is probably overkill.

Embree's expressions are OK as long as you remember that fs is the
*original* fs and not the fs of the interpolated sequences. If the .wav file
is sampled at 48kHz, then the sampled data should have a bandwidth <24kHz
and preferably something around 20kHz. When you interpolate by a factor of 4
the sample rate will be 192kHz and the desired signal bandwidth will remain
at 20kHz. I will assume a transition band of 4kHz - so you want the stopband
from 24kHz to fs/2=96kHz, as you've said.

Now, you can do the normal FIR filter computations in the time domain with a
filter of reasonable length. However, you introduced the notion of ffts and
their lengths, and it's clear there's some confusion here. The filter
coefficients aren't directly useful in the frequency domain; they are
time-domain filter coefficients. So I don't know why you did this. However,
let's talk about it, and this will also bring us back to the beginning,
where you have segments that are but 256 samples long:

When the data was sampled at 48kHz, the spectrum was replicated every 48kHz.
That's just the normal thing. When you took 256 samples, the corresponding
spectrum will be 256 frequency samples from 0 to just under 48kHz, with a
resolution of 187.5Hz. The fewer the samples, the coarser the resolution.

When you want to look at an interpolated version of the signal you can think
of it in a number of ways (here let's stick with 4:1):

Conceptually, zero-stuff 3 zeros for every sample so the sample rate goes up
by 4. Filter at the original bandwidth to fill in those added zeros.

Or, conceptually, compute the fft of the signal. Repeat the spectrum thus
generated (from 0 to just under 48kHz) 4 times. Filter out everything from
24kHz to 172kHz (by multiplying by the fft of the filter) - using the same
filter as above. Inverse fft to get the interpolated signal.

Both processes are exactly the same except for small numerical roundoff
noise.

If you use the fft/multiply/ifft method, the length of the incoming arrays
depends on the length of the filter. You can simply add zeros at the ends to
make the two time arrays the same length. Either way, if the data is length
N and the filter length M, then the resulting array must be at least length
M+N-1, and (assuming a symmetrical filter) the first M-1 and last M-1 data
points in time are transients to be thrown out. If that's the case, then the
number of useful data points is M+N-1 - 2*(M-1) = N+1-M, which tells you
that M had better be quite a bit smaller than N. Otherwise you're going to
have to deal with the ends of the data in a different way.

So, you're starting with epochs of 256 samples, which when transformed
spread over 48kHz. When you interpolate 4:1 you quadruple the number of
samples (adding 3 new ones for each original one) and then filter 3/4 of the
resulting spectrum away. The spectral resolution doesn't improve - but the
temporal resolution appears to improve as a result of the added samples. It
doesn't really improve, because the spectral width remains the same after
filtering. But the point of the interpolation is to get good interim values
rather than to suggest better resolution.

Anyway, since you computed the autocorrelation, that suggests you're
interested in spectral character, and 128 data points (256 over fs, or 128
over fs/2) in the spectrum isn't all that much.
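For concreteness, here's a rough time-domain sketch of the
zero-stuff-and-filter version in C. The windowed-sinc design, the tap count,
and the names are my own illustrative choices (this is not Embree's code,
and a real design would be checked against your actual stopband
requirement):

#include <math.h>

#define L_FACTOR 4     /* 4:1 interpolation: 48 kHz -> 192 kHz              */
#define NTAPS    129   /* odd-length, symmetric FIR; keep it << segment len */

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Hamming-windowed sinc lowpass at the *output* rate, cutoff at the old
   Nyquist (24 kHz out of 192 kHz = 0.125 cycles/sample).  The gain of
   L_FACTOR restores the amplitude lost to zero-stuffing. */
static void design_lowpass(double h[NTAPS])
{
    const double fc  = 0.5 / L_FACTOR;   /* normalized cutoff, cycles/sample */
    const int    mid = (NTAPS - 1) / 2;
    for (int n = 0; n < NTAPS; ++n) {
        int    k    = n - mid;
        double sinc = (k == 0) ? 2.0 * fc
                               : sin(2.0 * M_PI * fc * k) / (M_PI * k);
        double wnd  = 0.54 - 0.46 * cos(2.0 * M_PI * n / (NTAPS - 1));
        h[n] = L_FACTOR * sinc * wnd;
    }
}

/* Zero-stuff x (length N) by L_FACTOR and convolve with h.  y must hold
   N*L_FACTOR + NTAPS - 1 samples; the first and last NTAPS-1 outputs are
   the end transients discussed above. */
static void interpolate(const double *x, int N, const double h[NTAPS], double *y)
{
    int M = N * L_FACTOR;                       /* zero-stuffed length */
    for (int i = 0; i < M + NTAPS - 1; ++i) {
        double acc = 0.0;
        for (int t = 0; t < NTAPS; ++t) {
            int j = i - t;                      /* index into zero-stuffed input */
            if (j >= 0 && j < M && j % L_FACTOR == 0)
                acc += h[t] * x[j / L_FACTOR];
        }
        y[i] = acc;
    }
}

With 256-sample segments and NTAPS = 129, each segment gives 1024 + 129 - 1
= 1152 outputs, of which 1024 + 1 - 129 = 896 are "clean" once the two
transients are discarded - the same N+1-M arithmetic as above, which is why
the filter should be much shorter than the segment. The fft/multiply/ifft
route gives the same numbers to within roundoff.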
Fred
On Fri, 06 Feb 2004 03:38:05 GMT, "Jim Gort" <jgort@comcast.net>
wrote:

> [Jim's post, quoted in full above, snipped]
Hi,

I think Jerry's merely saying that the Nyquist criterion should be satisfied
during the original sampling process, in order to be sure the digitized
samples accurately represent the original continuous (analog) signal.

[-Rick-]
"Rick Lyons" <r.lyons@_BOGUS_ieee.org> wrote in message
news:40237071.217709078@news.sf.sbcglobal.net...
> Hi,
> I think Jerry's merely saying that the Nyquist criterion should be
> satisfied during the original sampling process in order to be sure the
> digitized samples accurately represent the original continuous (analog)
> signal.
Yeah. I used to like to say (and still do, with a caveat): "No matter what
the samples are, they must *by definition* represent the samples of a (that
is, "some") bandlimited function."

That doesn't mean that the signal that was sampled originally was
necessarily bandlimited. What it means is that there's now a bandlimited
signal that can be reconstructed - and no other (except in practice, of
course, because of real filters, etc.). You can't go back to the original
unless, at a minimum, it was bandlimited. But you can construct a
bandlimited signal from the samples nonetheless.

This means that a finite-length sequence would always be reconstructed into
a continuous, infinite-length signal (due to the tails of the reconstructing
sincs). Usually in DSP we get around that by treating a finite-length
sequence as one period of a periodic/infinite sequence - notwithstanding the
arguments of the better mathematicians amongst us who like to think of
finite-length sequences and transforms and, being in a somewhat different
context, say there's "no need" for things to be viewed as periodic.

There is only one caveat that I know of (which I learned here about a year
ago): if one constructs a pathological case where there's a combination of
sinusoids at exactly fs/2, then the reconstruction can blow up. Of course,
this doesn't satisfy the Nyquist criterion any more than a signal whose
bandwidth is greater than fs/2 does, because the sampling theorem says
B<fs/2 and *not* B=fs/2, where B is the bandwidth of the signal being
sampled. The pathological cases have sequences that might look something
like:

@-inf .....+1 -1 +1 -1 +1 -1 +1 -1 -1 +1 -1 +1 -1 +1 -1 +1.....@+inf

(note the phase shift at the center)

So, it may be that sampling caused frequency aliasing, but there's no way to
tell. Well, unless the Fourier transform has energy at exactly fs/2 - that's
a clue, or maybe clear evidence! Fortunately, in practice there's generally
too little energy at fs/2 to make the real-world reconstruction blow up.

The bottom line is that "usually" or "almost always" the samples represent
the samples of some bandlimited function - even if that function doesn't
exactly match the original signal that was sampled. Thus a corollary to what
Jerry said about being able to accurately reconstruct the original.

Fred
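(A small numerical aside on the B < fs/2 point: this isn't Fred's exact
phase-flip sequence, but the C snippet below shows the related ambiguity -
two different sinusoids at exactly fs/2 produce identical samples, so
amplitude and phase there can't be recovered from the samples. The
amplitudes and phases chosen are arbitrary.)

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    /* A*cos(pi*n + phi) sampled at integer n is A*cos(phi)*(-1)^n, so
       (A = 1.0, phi = 60 deg) and (A = 0.5, phi = 0) give identical samples. */
    double A1 = 1.0, phi1 = M_PI / 3.0;
    double A2 = 0.5, phi2 = 0.0;
    for (int n = 0; n < 8; ++n) {
        double s1 = A1 * cos(M_PI * n + phi1);
        double s2 = A2 * cos(M_PI * n + phi2);
        printf("n=%d  s1=% .6f  s2=% .6f\n", n, s1, s2);
    }
    return 0;
}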
I think that, probably because of the attitude of my post, it was completely
misunderstood.

My point was that the original poster's concern was not about how the .WAV
file was created; it was about how to take what he had and convert it to a
higher sample rate. Replies about how the original data was
sampled/filtered would only serve to confuse the question, especially
replies that mistook his concern about interpolation filtering for a
concern about sampling filtering.

Jim

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:36udnZ8pyNdETL7dRVn-hg@centurytel.net...
> > "Rick Lyons" <r.lyons@_BOGUS_ieee.org> wrote in message > news:40237071.217709078@news.sf.sbcglobal.net... > > > > Hi, > > I think Jerry's merely saying that > > the Nyquist criterion should be satisfied > > during the original sampling process > > in order to be sure the digitized samples > > accurately represent the original > > continuous (analog) signal. > > Yeah. I used to like to say (and still do with a caveat): > "No matter what the samples are, they must *by definition* represent the > samples of a (that is "some")bandlimited function." > > That doesn't mean that the signal that was sampled originally was > necessarily bandlimited. > What it means is that there's now a bandlimited signal that can be > reconstructed - and no other (except in practice of course because of real > filters, etc.). > You can't go back to the original unless, at a minimum, it was
bandlimited.
> But, you can construct a bandlimited signal from the samples nonetheless. > This means that a finite-length sequence would always be reconstructed
into
> a continuous infinite-length signal (due to the tails of the
reconstructing
> sincs). > Usually in DSP we get around that by treating a finite-length sequence as > one period of a periodic/infinite sequence - (notwithstanding the
arguments
> of the better mathematicians amongst us who like to think of finite length > sequences and transforms and, because of being in a somewhat different > context, say there's "no need" for things to be viewed as periodic). > > There is only one caveat that I know of (that I learned here about a year > ago): > If one constructs a pathological case where there's a combination of > sinusoids at exactly fs/2 - then the reconstruction can blow up. Of
course,
> this doesn't satisfy the Nyquist criterion any more than a signal whose > bandwidth is greater than fs/2 because the sampling theorem says B<fs/2
and
> *not* B=fs/2 where B is the bandwidth of the signal being sampled. > The pathological cases have sequences that might look something like: > @-inf .....+1 -1 +1 -1 +1 -1 +1 -1 -1 +1 -1 +1 -1 +1 -1 +1.....@+inf > (note the phase shift at the center) > > So, it may be that sampling caused frequency aliasing but there's no way
to
> tell. Well, unless the Fourier Transform has energy at exactly fs/2 - > that's a clue or maybe clear evidence! > and > Fortunately in practice there's generally small enough energy at fs/2 that > would cause the real-world reconstruction to blow up. > The bottom line is that "usually" or "almost always" the samples represent > the samples of some bandlimited function - even if it doesn't exactly > represent the original signal that was sampled. Thus a corollary to what > Jerry said about being able to accurately reconstruct the original. > > Fred > >
"Jim Gort" <jgort@comcast.net> wrote in message
news:d9TUb.241192$na.396510@attbi_s04...
> [Jim's post, quoted in full above, snipped]
Jim,

I had the same reaction that Jerry did re:

"The reason I'm not doing a pure mathematical interpolation is that
I've read that this leads to signal distortion by adding undesired
spectral images to the signal at multiples of the original sampling
rate."

This appears to be an OP misunderstanding, so Jerry and I both addressed it.

I said:

Well ... sampling does that anyway so you'd have to tell me more about this
"distortion". Could it be that the interpolation filters aren't good enough
for the application?

Jerry said:

"Spectral images at multiples of the sample rate can be filtered out;
they are irrelevant."

Fred
Fred:

I wish Roger Waters could stop being bitter about the Floyd problem and jump
in, but again my understanding is different. I will try to resolve my take
on it with yours (and others) below:

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:NI6dnTmEIq1Qh7ndRVn-hw@centurytel.net...
> > "Jim Gort" <jgort@comcast.net> wrote in message > news:d9TUb.241192$na.396510@attbi_s04... > > I think that, probably because of the attitude of my post, it was > completely > > misunderstood. > > > > My point was that the original poster's concern was not about how the
.WAV
> > file was created, it was about how to take what he had and convert to a > > higher sample rate. Replies regarding how the original data was > > sampled/filtered would only serve to confuse the question, and
especially
> > replies that mistook his concern about interpoltion filtration for a > concern > > about sampling filtration. > > > > Jim, > > I had the same reaction that Jerry did re: > > "The reason I'm not doing a pure mathematical interpolation is that > I've read that this leads to signal distortion by adding undesired > spectral images to the signal at multiples of the original sampling > rate." > > This appears to be an OP misunderstanding so Jerry and I both addressed
it. My take on OP is that he does not have a misunderstanding, but he is concerned about the effects of blind upsampling or linear interpolation to create a higher sample rate but keep the content of his original data. At this point, it may be your take also....
> I said:
>
> Well ... sampling does that anyway so you'd have to tell me more about
> this "distortion". Could it be that the interpolation filters aren't good
> enough for the application?
>
> Jerry said:
>
> "Spectral images at multiples of the sample rate can be filtered out;
> they are irrelevant."
OK, here is my big problem with Jerry's reply vs. the OP's intent: "they can
be filtered out" was exactly what the OP knew; he was seeking a better
understanding of how this should be done - the interpolation filter. To say
it is "irrelevant" implies that there was a big misunderstanding, either in
my take on this thread or in Jerry's reply. My take was that Jerry was
considering the original data (which, as I have said, is what it is), and
not the upsampled data, which the OP wants to know the best way to create.

Jim
> Fred
Jim,

I think we're nitpicking. By "The samples are valid only if ...", I
meant, and expected you and others to understand, that the samples
validly represent the original signal only if ...; and whatever the OP
wants, the array containing his interpolated values will not represent
all of the array handed to him unless he invents some values, not
directly given, to extend the original array.

Take any series of integers, then write their first differences beneath
it. If you would have us believe that the number of elements in the
lower line equals that of the line above, try it.

Jerry

Jim Gort wrote:

> [Jim's post, quoted in full above, snipped]
Jim Gort wrote:

> [Jim's reply to Fred, quoted in full above, snipped]
Jim,

The OP was doing handstands to avoid creating what every sampling process
does anyway: create images at multiples of the sampling frequency. If that
were a problem, sampling wouldn't work. By "irrelevant", I meant "not a
special concern in this application."

Whatever reconstruction filter is used at the higher sample rate, however
the samples might have been obtained, will take care of images at multiples
of the new sample rate. Those images are not artifacts of concern. Images at
multiples of the old sample rate can still be filtered out, because we know
that if the original sampling was valid, there would be no signal in that
part of the passband.

Jerry
--
Engineering is the art of making what you want from things you can get.



"Jim Gort" <jgort@comcast.net> wrote in message
news:KcVUb.108474$U%5.556842@attbi_s03...
> Fred:
>
> I wish Roger Waters could stop being bitter about the Floyd problem
Cute! Very cute! I was wondering if Jerry would make this connection.

Clay
> and jump in, but again my understanding is different. I will try to
> resolve my take on it with yours (and others) below: