DSPRelated.com

Logarithmic Interpolation

Started by Michel Rouzic April 14, 2006
I've gone through lots of posts on this group about obtaining a
logarithmic-scaled FFT, but that didn't help a lot. What I am trying to
obtain is logarithmic interpolation. I could talk about what I need it
for, but since I can think of many things I might use it for, and since
it makes no difference to how it has to be implemented, I'd like this
discussion to focus on logarithmic interpolation in general.

I could already think of a few solutions for this. The first was to
zero-stuff my signal with a number of zeroes that depends on the
position with respect to the chosen logarithmic function, and then
convolve with a windowed sinc function that varies with time. The main
problem I can see with this is that if you have, for example, 50 zeroes
on the left of the sample to convolve and 20 on the right, you have a
problem using a symmetrical windowed-sinc function. I think the only
way to achieve it perfectly would be to interpolate the windowed sinc
logarithmically as well, so it doesn't really solve the problem.

The next thing I thought of was to linearly interpolate the signal to
the scale of the lowest frequencies (I mean interpolate it to be as
large as would be needed in any part of the signal) and then decimate at
logarithmically spaced samples to get the final result. I'm not totally
sure about this one: I know that keeping one sample out of two is a
viable decimation method, but can you keep, for example, 1 sample every
50 samples (provided that no frequency gets over 0.49) and still be able
to reconstruct the original signal without trouble?

Other than that, if there is another interpolation method I haven't
thought of, I'd be curious to know about it.

"Michel Rouzic" <Michel0528@yahoo.fr> wrote in message 
news:1145015299.483859.282240@i39g2000cwa.googlegroups.com...
> I've gone through lots of posts on this group about obtaining a
> logarithmic-scaled FFT, but that didn't help a lot. What I am trying to
> obtain is logarithmic interpolation.
> [snip]
Michael,

Here are some thoughts:

It appears you want to somehow create a "correct" interpolation. Another viewpoint would be to generate a "nice" interpolation. In the latter case one would simply forget all about logs and just interpolate a data sequence.

A standard of reference for a "correct" interpolation might be to take the linearly scaled data, interpolate it, and then take the log of the interpolated data. At least then you'd have something to compare against various interpolation algorithms.

Of course, all interpolations are approximations.

It might be useful to consider simple interpolators:

Linear 2-point interpolation:
known: x1,y1  x3,y3    generate y2 at x1+(x2-x1)/2 = (x1+x2)/2
y2 = (y1+y2)/2
log(y2) = log[(y1+y2)/2] = log(y1 + y2) - log(2)

Linear 4-point interpolation:
known: x1,y1  x2,y2  x3,y3  x4,y4    generate y2.5 at (x3+x2)/2
y2.5 = (y1+y2+y3+y4)/4
log(y2.5) = log[(y1+y2+y3+y4)/4] = log(y1+y2+y3+y4) - log(4)

and so forth....

Note that these interpolators only require that you take a sum, a log and subtract a constant.

Another:
Linear 4-point interpolation:
known: x1,y1  x2,y2  x3,y3  x4,y4    generate y2.5 at (x3+x2)/2
y2.5 = 0.125*y1 + 0.375*y2 + 0.375*y3 + 0.125*y4
log(y2.5) = log(y1 + 3*y2 + 3*y3 + y4) - log(8)

But goodness, see Section 4 at:
http://www.dattalo.com/technical/theory/logs.html

Fred
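A minimal Python sketch of the weighted 4-point interpolator given above (the function names and test values are illustrative, not from the post); it checks that the log-domain shortcut agrees with interpolating the linear data first and taking the log afterwards, which is the "standard of reference" suggested here:

    import math

    def midpoint4_linear(y1, y2, y3, y4):
        """Weighted 4-point interpolation at the midpoint between x2 and x3."""
        return 0.125 * y1 + 0.375 * y2 + 0.375 * y3 + 0.125 * y4

    def midpoint4_log(y1, y2, y3, y4):
        """The same value in the log domain: log(y1 + 3*y2 + 3*y3 + y4) - log(8)."""
        return math.log(y1 + 3 * y2 + 3 * y3 + y4) - math.log(8.0)

    # Interpolating the linear data and then taking the log (the reference
    # "correct" interpolation) matches the log-domain shortcut exactly.
    vals = (2.0, 4.0, 5.0, 3.0)
    assert math.isclose(math.log(midpoint4_linear(*vals)), midpoint4_log(*vals))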
Michel Rouzic wrote:
> I've gone through lots of posts on this group about obtaining a
> logarithmic-scaled FFT, but that didn't help a lot.
The FFT is an operator which produces linearly scaled results. I don't think there is a way to directly produce a log-scaled FFT. One can do something like interpolate a linearly scaled DFT/FFT result to produce a log-scaled spectrum of some sort. One can either zero-stuff before the FFT or interpolate after it, since zero-stuffing is equivalent to sinc interpolation.
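A small numpy check of the equivalence mentioned here, taking "zero stuffing" to mean appending zeros to the time signal before a longer FFT (the helper name and the sizes are illustrative): zero-padding before the FFT and interpolating the plain length-N DFT with the periodic-sinc (Dirichlet) kernel give the same dense spectrum.

    import numpy as np

    rng = np.random.default_rng(0)
    N, pad = 32, 8
    x = rng.standard_normal(N)

    # Route 1: zero-pad the time signal, then take a longer FFT.
    X_dense = np.fft.fft(x, N * pad)

    # Route 2: interpolate the plain length-N DFT with the periodic sinc kernel.
    X = np.fft.fft(x)

    def dirichlet(u, N):
        """sum_{n=0}^{N-1} exp(-2j*pi*u*n/N), the periodic-sinc kernel."""
        u = np.asarray(u, dtype=float)
        den = np.sin(np.pi * u / N)
        with np.errstate(invalid="ignore", divide="ignore"):
            val = np.exp(-1j * np.pi * u * (N - 1) / N) * np.sin(np.pi * u) / den
        return np.where(np.isclose(den, 0.0), N, val)

    f = np.arange(N * pad) / pad          # fractional bin positions
    X_interp = np.array([np.sum(X * dirichlet(fi - np.arange(N), N)) / N for fi in f])

    print(np.max(np.abs(X_dense - X_interp)))   # ~1e-12, numerical noise only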
> What I am trying to obtain is logarithmic interpolation.
> [snip]
> The first was to zero-stuff my signal with a number of zeroes that
> depends on the position with respect to the chosen logarithmic
> function, and then convolve with a windowed sinc function that varies
> with time. The main problem I can see with this is that if you have,
> for example, 50 zeroes on the left of the sample to convolve and 20 on
> the right, you have a problem using a symmetrical windowed-sinc
> function.
What problem? If you want a "spectrum" centered around the window center, then the side with fewer zeros is farther from the center, and thus should be attenuated more and affect the result less.
> I think the only way to achieve it perfectly would be to interpolate
> the windowed sinc logarithmically as well, so it doesn't really solve
> the problem.
What do you mean by this? Zero padding is identical to sinc interpolation. Any logarithmic or even random point can be interpolated from a DFT spectrum by sinc interpolation.
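A sketch of what "interpolating any point from a DFT spectrum by sinc interpolation" can look like in practice, assuming numpy (the helper name dft_at, the toy signal, and the bin choices are illustrative): the DTFT underlying an N-point DFT is evaluated at arbitrary, here logarithmically spaced, fractional bin positions.

    import numpy as np

    def dft_at(X, f):
        """Evaluate the DTFT underlying the length-N DFT X at fractional bin f,
        using the periodic-sinc (Dirichlet) kernel."""
        N = len(X)
        u = f - np.arange(N)
        den = np.sin(np.pi * u / N)
        with np.errstate(invalid="ignore", divide="ignore"):
            d = np.exp(-1j * np.pi * u * (N - 1) / N) * np.sin(np.pi * u) / den
        d = np.where(np.isclose(den, 0.0), N, d)
        return np.sum(X * d) / N

    # Sample a toy spectrum at logarithmically spaced bin positions.
    x = np.random.default_rng(1).standard_normal(64)
    X = np.fft.fft(x)
    log_bins = np.geomspace(1.0, len(x) / 2, num=20)     # bin 1 up to Nyquist
    log_spectrum = np.array([dft_at(X, f) for f in log_bins])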
> [snip]
> I know that keeping one sample out of two is a viable decimation
> method, but can you keep, for example, 1 sample every 50 samples
> (provided that no frequency gets over 0.49) and still be able to
> reconstruct the original signal without trouble?
Reconstruction from a sampled log-scale interpolated DFT result is not straightforward and not always possible. If you want to reconstruct, you should keep the linear-scaled complex FFT results and use those for reconstruction. With infinite precision, you theoretically might be able to reconstruct from non-linearly spaced frequency samples as long as you keep the same number of complex samples relative to the Nyquist sample rate. In reality, with quantization errors, etc., it will be far easier to reconstruct a signal if your interpolated spectrum sample spacing after decimation is never wider than that of the non-zero-padded DFT/FFT. Then you might be able to interpolate back to the original DFT results without too much loss of information, using some polynomial of sufficient degree.
> Other than that, if there is another interpolation method I haven't
> thought of, I'd be curious to know about it.
IMHO. YMMV. -- rhn A.T nicholson d.0.t C-o-M
Fred Marshall wrote:
> "Michel Rouzic" <Michel0528@yahoo.fr> wrote in message > news:1145015299.483859.282240@i39g2000cwa.googlegroups.com... > > I've went through lots of posts on this group about obtaining a > > logarithmic-scaled FFT but that didn't help alot. What I am trying to > > obtain is logarithmic interpolation. I could talk about what I'd need > > it for but since I can think about many things I might use it for, and > > since it doesn't make any difference to how it has to be implemented, > > I'd like this discussion to focus on logarithmic interpolation in > > general. > > > > I already could think about a few solutions for this. First one was to > > zero-stuff my signal with a number of zeroes depending on the position > > wrt the logarithmic function chosen, and then convolve with a windowed > > sinc function that will vary with time, but the main problem I can > > think of with this that for example if you got 50 zeroes on the left of > > your sample to convolve and 20 on the right side, you got a problem > > with using a symetrical windowed-sinc function. Indeed I think the only > > way to achieve it perfectly with it is to interpolate the windowed-sinc > > logarithmically, so it doesn't really solve the problem > > > > Next thing I could think about, it was to linearly interpolate the > > signal to the scale of the lowest frequencies (i mean interpolate it to > > as large as it would be needed in a part of the signal) and then > > decimate logarithmically spaced samples to get to the final result, but > > i'm not totally sure about this one, i know that decimating like one > > sample out of two is a viable decimation method, but can you decimate > > like for example 1 sample every 50 samples (provided that no frequency > > gets over 0.49) and be able to reconstruct the original signal without > > trouble? > > > > Other than that, if there is another interpolation method I didn't > > think about i'd be curious to know > > Michael, > > Here are some thoughts: > > It appears you want to some how create a "correct" interpolation. Another > viewpoint would be to generate a "nice" interpolation. In the latter case > one would simply forget all about logs and just interpolate a data sequence. > > A standard of reference for a "correct" interpolation might be to take the > linearly scaled data, interpolate it and then take the log of the > interpolated data. At least then you'd have something to compare against > various interpolation algorithms. > > Of course, all interpolations are approximations. > > It might be useful to consider simple interpolators: > > Linear 2-point interpolation: > > known: x1,y1 x3,y3 generate y2 at x1+(x2-x1)/2 = (x1+x2)/2 > y2 = (y1+y2)/2 > > log(y2)= log[(y1+y2)/2] =
Wait... when is it that you use y3? It sounds like in the linear interpolation it's about taking the average of two points, and in the log one doing the same but on a log scale? If so, that's a nice idea, but that kind of interpolation isn't exactly what I'm looking for (I'm looking for something more accurate than that). As you pointed out, I'm aiming for a very "correct" interpolation, ideally the ideal log interpolation actually =)
Ron N. wrote:
> Michel Rouzic wrote:
> > I've gone through lots of posts on this group about obtaining a
> > logarithmic-scaled FFT, but that didn't help a lot.
>
> The FFT is an operator which produces linearly scaled results.
> I don't think there is a way to directly produce a log-scaled FFT.
I know that, that's why this topic is about logarithmic interpolation, and not log-scaled FFT =).
> One can do something like interpolate a linearly scaled DFT/FFT
> result to produce a log-scaled spectrum of some sort.
Exactly what I'm talking about :)
> One can either zero-stuff before the FFT or interpolate after it,
> since zero-stuffing is equivalent to sinc interpolation.
Yup, I'm not really sure which I will choose, but it's a detail, isn't it? That's still only getting me to a linear interpolation, though.
> > [snip]
> > I think the only way to achieve it perfectly would be to interpolate
> > the windowed sinc logarithmically as well, so it doesn't really solve
> > the problem.
>
> What do you mean by this? Zero padding is identical to sinc
> interpolation.
Why are you talking about zero padding? I talked about zero-stuffing, in case you got the two confused. What I meant by this is, well, picture a sinc represented linearly (as it always is). Now picture it represented logarithmically: it's not symmetrical anymore, for obvious reasons, right? What I meant is that, in order to obtain log interpolation, if you have your points zero-stuffed in a logarithmic manner, ideally you'll have to convolve that with a logarithmically interpolated sinc function, because a linear sinc won't do it, well, it will do it but not ideally.
> Any logarithmic or even random point can be interpolated from a DFT
> spectrum by sinc interpolation.
Now I'm not really sure what you mean. If you mean that we can interpolate a signal whose sample rate varies by using differently sized sincs, OK, I agree, but that's not ideal. In my example of 50 zero-stuffing samples on one side and 20 on the other, I could convolve the sample in between these two parts with a sinc that you would normally use to interpolate a sample that has 35 zero-stuffing samples on both sides, but that's not ideal.
> > [snip]
>
> Reconstruction from a sampled log-scale interpolated DFT result is not
> straightforward and not always possible. If you want to reconstruct,
> you should keep the linear-scaled complex FFT results and use those
> for reconstruction.
I don't understand why you couldn't interpolate your signal logarithmically in a way that can be turned back into the original signal. Well, OK, you will have to lose some of the lowest frequencies, but that's fine.
> With infinite precision, you theoretically might be able to
> reconstruct from non-linearly spaced frequency samples as long as you
> keep the same number of complex samples relative to the Nyquist sample
> rate.
Infinite precision? Why? We don't need infinite precision. Think about it: you only need enough precision so that the higher frequencies of your FFT are interpolated to about the same scale, to keep all the information, right?
> In reality, with quantization errors, etc., it will be far easier to
> reconstruct a signal if your interpolated spectrum sample spacing
> after decimation is never wider than that of the non-zero-padded
> DFT/FFT. Then you might be able to interpolate back to the original
> DFT results without too much loss of information, using some
> polynomial of sufficient degree.
Yeah, sounds like the plan. I don't know about the polynomial thing, though; I've never managed to understand polynomials (needless to say, I use a library for performing the FFT).
Fred Marshall wrote:

> [SNIP]
> But goodness, see Section 4 at:
> http://www.dattalo.com/technical/theory/logs.html
Neat page, and an interesting site in general. Thanks for the reference.
Michel Rouzic wrote:
> [snip]
> I could talk about what I need it for, but since I can think of many
> things I might use it for, and since it makes no difference to how it
> has to be implemented, I'd like this discussion to focus on
> logarithmic interpolation in general.
Hmmmm... I don't think it is that easy to separate purpose and implementation...

There was a thread recently about using the exponential representation of complex numbers directly in the FFT. My argument against such use was that the exponential form is useful to the human operator but obfuscates numerical computations.

I can't, off the top of my head, see an application other than graphical representations to humans that would benefit from a direct computation of the DFT with a logarithmic frequency scale.

Rune
Rune Allnor wrote:
> > [snip]
>
> I can't, off the top of my head, see an application other than
> graphical representations to humans that would benefit from a direct
> computation of the DFT with a logarithmic frequency scale.
I didn't wanna talk about this too much, because I'd rather realize by myself that it's a dumb idea than be told, but my plan is to *try* using it to perform pitch shifting/time stretching. My basic idea is: take the FFT of the signal you want to shift, interpolate the FFT logarithmically, shift the contents towards the left or the right depending on what you want to obtain, interpolate back to linear form, and IFFT. Of course, I'm definitely not sure and quite sceptical about the odds of obtaining the wanted result, but I think it's worth trying.
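A rough Python sketch of the pipeline described here, under heavy assumptions: the function name is made up, np.interp stands in for a proper (sinc) interpolator, the real and imaginary parts are simply interpolated along with the rest of the spectrum, and whether the result sounds like a usable pitch shift is exactly the open question. The one thing the log axis buys is that a constant shift along it multiplies every frequency by the same factor.

    import numpy as np

    def crude_pitch_shift(x, factor, n_log=None):
        # Rough sketch only: np.interp is a stand-in for a better interpolator.
        N = len(x)
        X = np.fft.rfft(x)
        k = np.arange(1, len(X))              # positive-frequency bins, DC excluded
        logk = np.log(k)
        if n_log is None:
            n_log = 4 * len(k)
        grid = np.linspace(logk[0], logk[-1], n_log)

        # 1) resample the spectrum onto the log-spaced frequency grid
        re = np.interp(grid, logk, X[1:].real)
        im = np.interp(grid, logk, X[1:].imag)

        # 2) a shift by log(factor) on this axis multiplies every frequency by 'factor'
        shift = np.log(factor)
        re = np.interp(grid - shift, grid, re, left=0.0, right=0.0)
        im = np.interp(grid - shift, grid, im, left=0.0, right=0.0)

        # 3) resample back onto the linear bins and invert
        Y = np.zeros_like(X)
        Y[1:] = np.interp(logk, grid, re) + 1j * np.interp(logk, grid, im)
        Y[0] = X[0]
        return np.fft.irfft(Y, N)

    # Toy usage: try to shift a 440 Hz tone up an octave (quality not guaranteed).
    fs = 8000
    t = np.arange(4096) / fs
    y = crude_pitch_shift(np.sin(2 * np.pi * 440 * t), 2.0)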
"Michel Rouzic" <Michel0528@yahoo.fr> wrote in message 
news:1145099575.431079.157480@e56g2000cwe.googlegroups.com...
> [snip]
Michael, an error:
>> It might be useful to consider simple interpolators:
>>
>> Linear 2-point interpolation:
>>
>> known: x1,y1  x3,y3    generate y2 at x1+(x2-x1)/2 = (x1+x2)/2
>> y2 = (y1+y2)/2
>>
>> log(y2) = log[(y1+y2)/2] = log(y1 + y2) - log(2)
>
> Wait... when is it that you use y3?
log(y2) = log[(y1+y3)/2]

There, that's better.

-----
> It sounds like in the linear interpolation it's about taking the
> average of two points, and in the log one doing the same but on a log
> scale? If so, that's a nice idea, but that kind of interpolation isn't
> exactly what I'm looking for (I'm looking for something more accurate
> than that).
If you look carefully, here the interpolation is done in the linear space and the logs are taken thereafter. So this is a "perfect" interpolation as far as it goes. It remains a 2-point linear interpolation, which may or may not be what you want.

I repeat: all interpolations are approximations, and you have to have your approximation criteria well in hand before you can decide which interpolation is best. Not perfect, but best for your situation / criteria. This challenge faces us all when doing interpolations.

(Before somebody gets silly and wants to challenge the assertion that "all interpolations are approximations": I am speaking about the general case. If somebody wants to start with a polynomial (i.e. not real-world data), then sample it, and then choose an interpolation method that reproduces interim or extrapolated samples of the polynomial perfectly, that's not the sort of contrived situation I'm talking about here or am interested in. That's not to say that there aren't such things that are useful.)

Fred
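A tiny Python illustration of the corrected 2-point case (names and values are illustrative): computing log(y1 + y3) - log(2) is exactly "interpolate in the linear space, take the log thereafter".

    import math

    def midpoint_log(y1, y3):
        """Corrected 2-point midpoint, expressed in the log domain."""
        return math.log(y1 + y3) - math.log(2.0)

    # Identical to interpolating linearly first and taking the log afterwards.
    y1, y3 = 10.0, 14.0
    assert math.isclose(midpoint_log(y1, y3), math.log((y1 + y3) / 2.0))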
Michel Rouzic wrote:
> > One can either zero-stuff before the FFT or interpolate after it,
> > since zero-stuffing is equivalent to sinc interpolation.
>
> Yup, I'm not really sure which I will choose, but it's a detail, isn't
> it? That's still only getting me to a linear interpolation, though.
No. Sinc interpolation is closer to a high-degree polynomial interpolation. You can use this interpolation method to rescale the time or frequency axis of samples from any sufficiently bandlimited signal or function.

IMHO. YMMV.
-- rhn A.T nicholson d.0.t C-o-M
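A minimal numpy sketch of that last point (the function name, the tone frequency, and the log-spaced grid are illustrative): a sufficiently bandlimited sequence can be re-evaluated at arbitrary, here logarithmically spaced, instants with the classic sinc kernel, which rescales its time axis.

    import numpy as np

    def sinc_resample(x, t):
        """Bandlimited estimate of x at arbitrary instants t, in units of the
        original sample period.  Exact only for an infinitely long sequence;
        a finite one shows edge effects near its ends."""
        n = np.arange(len(x))
        # np.sinc(u) = sin(pi*u) / (pi*u), the ideal interpolation kernel
        return np.sinc(t[:, None] - n[None, :]) @ x

    # Rescale the time axis of a bandlimited tone onto a logarithmic grid.
    fs, N = 1000.0, 512
    n = np.arange(N)
    x = np.cos(2 * np.pi * 50.0 * n / fs)           # 50 Hz, well below Nyquist
    t_log = np.geomspace(1.0, N - 1.0, num=200)     # log-spaced sample instants
    y = sinc_resample(x, t_log)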