
LPC Question

Started by HardySpicer September 30, 2007
In LPC we estimate a 10th order all pole model of quasi-stationary
speech. The predicted speech is then

y(k) = -a1 y(k-1) - a2 y(k-2) - ... - a10 y(k-10)

Is the 'gain' estimated at all, i.e. the variance of the driving noise of
the AR model?

Also, is this really a predictor? It is not a Wiener predictor in the
sense that there is no additive white noise, and it is only predicting one
step. It looks more like a basic filter. A least-squares predictor would
have something like

y(k|k-n) for an n-step predictor, and this would need a Wiener or
Kalman approach.

I once tried this (ordinary LPC) and although it worked, the output
sounded like a robot! Very machine-like. How does one overcome this?

Hardy

"HardySpicer" <gyansorova@gmail.com> wrote in message
news:1191190301.157557.209990@r29g2000hsg.googlegroups.com...
> In LPC we estimate a 10th order all pole model of quasi-stationary
> speech. The predicted speech is then
>
> y(k) = -a1 y(k-1) - a2 y(k-2) - ... - a10 y(k-10)
>
> Is the 'gain' estimated at all, i.e. the variance of the driving noise of
> the AR model?
The LPC analysis gives the filter coefficients only. The gain should be found separately.
> Also is this really a predictor?
Yes. It exploits the correlation between subsequent samples.
> It is not a Wiener predictor in the sense that there is no additive white
> noise and it is only predicting one step. It looks more like a basic
> filter.
Exactly. The LPC is a basic linear filter designed to the MSE criterion.
> A least-squares predictor would have something like
>
> y(k|k-n) for an n-step predictor, and this would need a Wiener or
> Kalman approach.
The LPC is a Wiener approach.
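
In code terms, the autocorrelation method amounts to solving the normal
(Yule-Walker / Wiener-Hopf) equations R a = -r. A minimal sketch, assuming
Python with NumPy/SciPy (the function and variable names are just for
illustration, not any particular toolbox's API):

import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_autocorr(frame, order=10):
    # Autocorrelation r(0)..r(order) of one windowed speech frame
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    # Normal equations: R a = -[r(1)..r(order)], R Toeplitz built from r(0)..r(order-1)
    a = solve_toeplitz(r[:order], -r[1:])
    # Minimum prediction-error (residual) energy
    err = r[0] + np.dot(a, r[1:])
    return a, err    # predictor: y^(k) = -a1*y(k-1) - ... - a10*y(k-10)

The solution is the MSE-optimal one-step predictor; err is the minimum
prediction-error energy.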
> I once tried this (ordinary LPC) and although it worked, the output
> sounded like a robot! Very machine-like. How does one overcome this?
Overcome what? What exactly are you trying to do?

Vladimir Vassilevsky
DSP and Mixed Signal Consultant
www.abvolt.com
> Is the 'gain' estimated at all, i.e. the variance of the driving noise of
> the AR model?
LPC analysis returns LP-coefficients and variance (sum of squared errors)
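
In the Levinson-Durbin (autocorrelation) form that looks roughly like this
(a hand-rolled Python/NumPy sketch of my own, not any particular library's
routine), with the residual energy falling out of the recursion alongside
the coefficients:

import numpy as np

def levinson_durbin(r, order=10):
    # r: autocorrelation sequence r(0)..r(order) of one windowed frame
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]                              # zeroth-order error energy
    for m in range(1, order + 1):
        # Reflection coefficient for order m
        k = -np.dot(a[:m], r[m:0:-1]) / err
        # Coefficient update: a_j <- a_j + k * a_(m-j), j = 1..m
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]
        # Error energy shrinks with each order
        err *= 1.0 - k * k
    return a[1:], err    # a1..a10 and the residual (sum-of-squares) energy

With r computed from raw sample products, err is the residual energy of the
frame; normalize by the frame length and take the square root to get the
gain for a unit-variance excitation.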
> I once tried this (ordinary LPC) and although it worked, the output
> sounded like a robot! Very machine-like. How does one overcome this?
It depends on what you use as excitation for your vocal tract filter. You can use white noise, white noise+pulses, pulses or something else...
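
For example (a quick Python/SciPy sketch of my own; the voiced flag, pitch
period and gain are assumed to come from the analysis and pitch-detection
stages), the two simplest choices look like:

import numpy as np
from scipy.signal import lfilter

def synthesize_frame(a, gain, n_samples, voiced, pitch_period=80):
    # a: predictor coefficients a1..a10, so A(z) = 1 + a1 z^-1 + ... + a10 z^-10
    if voiced:
        # Impulse train at the detected pitch period
        excitation = np.zeros(n_samples)
        excitation[::pitch_period] = 1.0
    else:
        # White noise for unvoiced (fricative-like) frames
        excitation = np.random.default_rng().standard_normal(n_samples)
    # All-pole synthesis filter 1/A(z), scaled by the frame gain
    return lfilter([gain], np.concatenate(([1.0], a)), excitation)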
"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> writes:

> "HardySpicer" <gyansorova@gmail.com> wrote in message > news:1191190301.157557.209990@r29g2000hsg.googlegroups.com... >> In LPC we estimate a 10th order all pole model of quasi-stationary >> speech. The predicted speech is then >> >> y(k)=-a1y(k-1)-a2y(k-2)....-a10y(k-n) > >> Is the 'gain' estimated at all ie the variance of the driving noise of >> the AR model? > > The LPC analysis gives the filter coefficients only. The gain should be > found separately.
As Hans Christiansen points out:

# LPC analysis returns LP-coefficients and variance (sum of squared errors)

Levinson-Durbin (the autocorrelation method) returns the sum of squares of
the residual/excitation, so you can get the gain from this and the original
energy in your waveform.

Tony
On Sep 30, 6:11 pm, HardySpicer <gyansor...@gmail.com> wrote:
> I once tried this (ordinary LPC) and although it worked, the output
> sounded like a robot! Very machine-like. How does one overcome this?
Typically, LPC analysis for speech applications will involve some sort of
estimate of the type of sound (voiced, unvoiced) that is to be reproduced.
Depending upon the type of sound, the input to the filter during
reconstruction will vary. It could contain just white noise, or some
periodic signal at a frequency corresponding to the detected pitch of the
sound. If you're just using noise input all the time, you won't get very
good reproduction of voiced sounds.

Jason
On Oct 1, 11:49 am, "Vladimir Vassilevsky"
<antispam_bo...@hotmail.com> wrote:
> "HardySpicer" <gyansor...@gmail.com> wrote in message > > news:1191190301.157557.209990@r29g2000hsg.googlegroups.com... > > > In LPC we estimate a 10th order all pole model of quasi-stationary > > speech. The predicted speech is then > > > y(k)=-a1y(k-1)-a2y(k-2)....-a10y(k-n) > > Is the 'gain' estimated at all ie the variance of the driving noise of > > the AR model? > > The LPC analysis gives the filter coefficients only. The gain should be > found separately. > > > Also is this really a predictor? > > Yes. It exploits the correlation between the subsequent samples. > > > It is not a Wiener predictor in the > > sense there is no additive white noise and it is only predicting one > > step. Looks more like a basic filter. > > Exactly. The LPC is a basic linear filter designed to MSE criterion. > > > A least-squares predictor would > > have something like > > > Y(k/k-n) for an n step predictor and this would need a Wiener or > > Kalman approach. > > The LPC is a Wiener approach. >
This predates Wiener; it's the Yule-Walker equations, and there is no
additive noise (though Yule and Walker were trying to predict sunspot
activity). 'Real' Wiener prediction would be if y were also corrupted by
noise of some sort. In fact this would follow from the same idea (with some
modification) as smoothing and filtering.

Hardy
"Tony Robinson" <tonyr@shrek.cantabResearch.com> wrote in message
news:87myv3629u.fsf@shrek.cantabResearch.com...
> "Vladimir Vassilevsky" <antispam_bogus@hotmail.com> writes: > > > "HardySpicer" <gyansorova@gmail.com> wrote in message > > news:1191190301.157557.209990@r29g2000hsg.googlegroups.com... > >> In LPC we estimate a 10th order all pole model of quasi-stationary > >> speech. The predicted speech is then > >> > >> y(k)=-a1y(k-1)-a2y(k-2)....-a10y(k-n) > > > >> Is the 'gain' estimated at all ie the variance of the driving noise of > >> the AR model? > > > > The LPC analysis gives the filter coefficients only. The gain should be > > found separately. > > As Hans Christiansen points out: > > # LPC analysis returns LP-coefficients and variance (sum of squared
errors) Not quite. It returns the normalized variance. VLV
"Vladimir Vassilevsky" <antispam_bogus@hotmail.com> writes:

> "Tony Robinson" <tonyr@shrek.cantabResearch.com> wrote in message > >> As Hans Christiansen points out: >> >> # LPC analysis returns LP-coefficients and variance (sum of squared > errors) > > Not quite. It returns the normalized variance.
Yours may very well do so. Mine doesn't because I think that's the more natural and useful implementation. Tony
HardySpicer wrote:
> In LPC we estimate a 10th order all pole model of quasi-stationary
> speech. The predicted speech is then
>
> y(k) = -a1 y(k-1) - a2 y(k-2) - ... - a10 y(k-10)
Just to clarify some notation ambiguity:

y(k) is the speech,
y^(k) is the predicted speech,
r(k) is the prediction error, often called "the innovation" or "the residual",

and the correct equations are:

y^(k) = -a1 y(k-1) - a2 y(k-2) - ... - a10 y(k-10)
r(k) = y(k) - y^(k)

Also, r'(k) is the quantized prediction error, often called "the excitation".
> Is the 'gain' estimated at all, i.e. the variance of the driving noise of
> the AR model?
The signal's gain is indeed related to the residual variance (or the speech
variance). Another interesting parameter is "the prediction gain", which is
the ratio between the speech variance and the residual variance.
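
For instance (a short Python/SciPy sketch of my own, assuming `speech` is
one windowed frame and `a` holds a1..a10 from the analysis):

import numpy as np
from scipy.signal import lfilter

def prediction_gain_db(speech, a):
    # Inverse filter A(z) = 1 + a1 z^-1 + ... + a10 z^-10 yields the residual
    residual = lfilter(np.concatenate(([1.0], a)), [1.0], speech)  # r(k) = y(k) - y^(k)
    # Prediction gain: speech variance over residual variance, in dB
    return 10.0 * np.log10(np.var(speech) / np.var(residual))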
> I once tried this (ordinary LPC) and although it worked, the output
> sounded like a robot! Very machine-like. How does one overcome this?
Old LPC vocoders, which primarily modeled the vocal tract, did indeed sound
like a robot. That is what happens when you fit the LPC model but do not
model the excitation accurately. Over the years, however, excitation
modeling has been researched and improved, and today's models produce
natural-sounding speech.