
Signal Interpolation

Started by DavidSaunders9 November 2, 2004
On 2004-11-03 17:08:44 +0100, Andor Bariska <an2or@nospam.net> said:

> Stephan M. Bernsee wrote:
>> To do this right you need to do the same backwards in time from the end
>> of the missing segment and blend the sinusoidal parameters between the
>> two sets. Then you'll have to sort out ambiguity (what oscillator at
>> the beginning of the gap corresponds to which one at the end)... this
>> can be fairly complicated if you want to do it right.
>
> There is no need to do that - one can just crossfade the (forwards and
> backwards) oscillator outputs.
Not sure if that's enough... imagine a singer singing a note at slowly increasing pitch. If you simply crossfade between the two segments you'll hear a glitch. If you track the pitch across the gap and match the oscillators you get the correct result...
--
Stephan M. Bernsee
http://www.dspdimension.com
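To make the difference concrete, here is a minimal sketch (Python/NumPy; the names and parameter values such as f1, f2 and gap_len are illustrative assumptions, not anyone's actual code). It compares a plain crossfade of two fixed-frequency oscillators against a single matched oscillator whose frequency is interpolated, phase-continuously, across the gap:

    import numpy as np

    fs = 44100             # sample rate in Hz
    gap_len = 2048         # length of the missing segment in samples
    f1, f2 = 440.0, 452.0  # partial frequency estimated just before / after the gap
    a1, a2 = 1.0, 0.9      # partial amplitude estimated just before / after the gap
    phi1 = 0.3             # phase of the partial at the start of the gap

    t = np.arange(gap_len) / fs
    fade = np.linspace(0.0, 1.0, gap_len)   # linear crossfade ramp

    # (a) Plain crossfade of the forward and backward oscillator outputs:
    # each runs at its own fixed frequency, so they beat against each other
    # inside the gap (the same start phase is used for both, for simplicity).
    fwd = a1 * np.cos(2 * np.pi * f1 * t + phi1)
    bwd = a2 * np.cos(2 * np.pi * f2 * t + phi1)
    crossfaded = (1.0 - fade) * fwd + fade * bwd

    # (b) Matched oscillator: one partial whose frequency and amplitude are
    # interpolated across the gap, with the phase accumulated sample by
    # sample so there is no discontinuity.
    f_inst = (1.0 - fade) * f1 + fade * f2
    a_inst = (1.0 - fade) * a1 + fade * a2
    phase = phi1 + 2 * np.pi * np.cumsum(f_inst) / fs
    matched = a_inst * np.cos(phase)

With f1 != f2, version (a) lets the two oscillators beat against each other inside the gap, while version (b) simply chirps from f1 to f2, which is the behaviour one would want for the rising-pitch example.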
Andor Bariska wrote:

> Bernhard Holzmayer wrote:
> ...
>> build a new FFT signal where at every frequency bin newFFT=min(FFT1,FFT2)
>
> What's the min of two complex numbers?
Right, I should have been more precise. Since it's a real audio stream, I guess that an optimal phase response isn't so important as long as the loudest portions are suppressed. I'd probably solve it for the real parts. With the imag parts? Either do the same, or just average the two values. I don't know which is more appropriate. I guess it's more important for the OP to find a quick and dirty solution which provides an agreeable result than to identify an optimal algorithm.

Bernhard
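For what it's worth, here is a rough sketch (Python/NumPy, purely illustrative) of two ways to pin down "min" for complex bins: keep whichever bin has the smaller magnitude (its phase then comes along for free), or follow the suggestion above literally and take the min of the real parts while averaging the imaginary parts. FFT1 and FFT2 are assumed to be same-length complex spectra of two frames:

    import numpy as np

    def min_fft_by_magnitude(FFT1, FFT2):
        """Per bin, keep the complex value with the smaller magnitude."""
        return np.where(np.abs(FFT1) <= np.abs(FFT2), FFT1, FFT2)

    def min_fft_by_parts(FFT1, FFT2):
        """Per bin, min of the real parts, average of the imaginary parts."""
        return np.minimum(FFT1.real, FFT2.real) + 0.5j * (FFT1.imag + FFT2.imag)

    # Toy usage with two random 1024-sample frames:
    rng = np.random.default_rng(0)
    FFT1 = np.fft.rfft(rng.standard_normal(1024))
    FFT2 = np.fft.rfft(rng.standard_normal(1024))
    newFFT = min_fft_by_magnitude(FFT1, FFT2)
    frame = np.fft.irfft(newFFT)    # back to a real time-domain frame

The magnitude version directly implements "suppress the loudest portions" per bin; the real/imaginary version is the literal reading of the post above.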
On 2004-11-04 08:21:11 +0100, Bernhard Holzmayer 
<Holzmayer.Bernhard@deadspam.com> said:

> I'd probably solve it for the real parts. With the imag parts? - Either
> do the same, or just average the two values.
> I don't know which is more appropriate.
I still don't get it - what exactly are you trying to do?
--
Stephan M. Bernsee
http://www.dspdimension.com
Stephan M. Bernsee wrote:
> On 2004-11-03 17:08:44 +0100, Andor Bariska <an2or@nospam.net> said:
>
>> Stephan M. Bernsee wrote:
>>
>>> To do this right you need to do the same backwards in time from the
>>> end of the missing segment and blend the sinusoidal parameters
>>> between the two sets. Then you'll have to sort out ambiguity (what
>>> oscillator at the beginning of the gap corresponds to which one at
>>> the end)... this can be fairly complicated if you want to do it right.
>>
>> There is no need to do that - one can just crossfade the (forwards and
>> backwards) oscillator outputs.
>
> Not sure if that's enough... imagine a singer singing a note at slowly
> increasing pitch. If you simply crossfade between the two segments
> you'll hear a glitch. If you track the pitch across the gap and match
> the oscillators you get the correct result...
That's true - as I wrote originally, one needs the assumption of (local) stationarity of the signal for the algorithm to work. For short drop-outs (< 20 samples), this is generally ok. For longer stretches (as in a slowly increasing pitch section), this becomes a problem.

The next step would be to try to find a non-stationary model, such as ARIMA or fractional ARIMA or something else. Highly non-trivial.

Regards,
Andor
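As an illustration of the crossfaded forward/backward extrapolation under a (local) stationarity assumption, here is a minimal sketch using an AR / linear-prediction model (the AR model, model order, context length and the autocorrelation fit are assumptions and arbitrary choices on my part, nowhere near production-quality drop-out concealment):

    import numpy as np

    def ar_coefficients(x, order):
        """Fit AR coefficients by the autocorrelation (Yule-Walker) method."""
        r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1:order + 1])
        return a    # x[n] ~ a[0]*x[n-1] + a[1]*x[n-2] + ...

    def ar_extrapolate(context, coeffs, n):
        """Continue `context` forward by n samples using the AR model."""
        order = len(coeffs)
        buf = list(context[-order:])    # last `order` samples, oldest first
        out = []
        for _ in range(n):
            nxt = np.dot(coeffs, buf[::-1])
            out.append(nxt)
            buf = buf[1:] + [nxt]
        return np.array(out)

    def fill_gap(before, after, gap_len, order=32):
        """Crossfade forward and backward AR extrapolations across the gap."""
        fwd = ar_extrapolate(before, ar_coefficients(before, order), gap_len)
        # Extrapolate "backwards in time" by reversing the post-gap samples.
        rev = after[::-1]
        bwd = ar_extrapolate(rev, ar_coefficients(rev, order), gap_len)[::-1]
        fade = np.linspace(0.0, 1.0, gap_len)
        return (1.0 - fade) * fwd + fade * bwd

The crossfade weights the forward prediction most heavily near the start of the gap and the backward prediction most heavily near the end, which is where each model is most trustworthy.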
On 2004-11-04 10:07:35 +0100, Andor Bariska <an2or@nospam.net> said:

> That's true - as I wrote originally, one needs the assumption of
> (local) stationarity of the signal for the algorithm to work. For short
> drop-outs (< 20 samples), this is generally ok. For longer stretches
> (as in a slowly increasing pitch section), this becomes a problem.
>
> The next step would be to try to find a non-stationary model, such as
> ARIMA or fractional ARIMA or something else. Highly non-trivial.
>
> Regards,
> Andor
Yes, I agree. Clearly, most of the problems of DSP techniques (at least in the area I'm working in) come from the assumption of short-time stationarity, which is a helpful concept to begin with, but whose limitations can't be ignored if you really need very high-end stuff...
--
Stephan M. Bernsee
http://www.dspdimension.com