LPC question

Started by Jack May 15, 2006
Hi,

I have read some literature about linear prediction.

However, I haven't been able to find any literature
that explains in mathematical terms exactly why
minimizing the variance of the residual leads
to an estimate of the all-pole filter coefficients.

The observed output of an unknown 10th-order all-pole filter is:

x[k]=u[k]-sum(a[q]x[k-q],q=1,q=10)

where the unknown u[k] is defined to be white noise with
variance 1 and the unknown constants a[q] are the coefficients
of the all-pole filter.

Now...if I send x[k] through a FIR filter I get:

e[k]=x[k]+sum(b[q]x[k-q],q=1,q=10)

where the constants b[q] are the coefficients
of the FIR filter.
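
For concreteness, here is a small numerical sketch of the two filters
(assuming numpy and scipy; the a[q] values are made up for
illustration). It also confirms the observation just below that
choosing b[q]=a[q] gives e[k]=u[k]:

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 100_000
u = rng.standard_normal(N)            # white noise, variance 1

# All-pole synthesis: x[k] = u[k] - sum(a[q] x[k-q], q=1..10),
# i.e. X(z) = U(z)/A(z) with A(z) = 1 + a[1] z^-1 + ... + a[10] z^-10.
a = np.array([1.0, -0.5, 0.2, -0.1, 0.05, 0.02,
              -0.03, 0.01, 0.02, -0.01, 0.005])
x = lfilter([1.0], a, u)

# FIR analysis filter: e[k] = x[k] + sum(b[q] x[k-q], q=1..10).
b = a.copy()                          # choose b[q] = a[q]
e = lfilter(b, [1.0], x)

print(np.allclose(e, u))              # True: e[k] = u[k] when b = a
print(np.var(e))                      # approximately 1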

I can see that choosing b[q]=a[q] leads to

e[k]=u[k]

but I don't understand why minimizing the
variance of e[k] guarantees that
the resulting estimates of b[q] are
close to a[q]

How do I prove that mathematically?

I haven't been able to find such a proof
with google.

Maybe some of you guys could help me?

Thanks :o)

The proof is simple:

Since u[k] is white noise, it does not correlate with a shifted version
of itself.

Therefore, the variance of any signal

u[k] + c1 u[k-1] + c2 u[k-2] + ...

is precisely

1 + c1^2 + c2^2 + ...

This is minimal if and only if c1=c2=...=0, which happens if and
only if there are no shifted copies of the noise around, which is the
case if and only if b=a.
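
Spelled out (a LaTeX rendering of the whiteness step above, with
c_0 = 1 and the sum taken over all delays):

\operatorname{Var}\{e[k]\}
  = E\Big\{\Big(\sum_{i\ge 0} c_i\, u[k-i]\Big)^2\Big\}
  = \sum_{i\ge 0} \sum_{j\ge 0} c_i c_j\, E\{u[k-i]\, u[k-j]\}
  = \sum_{i\ge 0} c_i^2
  = 1 + c_1^2 + c_2^2 + \cdots

since E\{u[m]\,u[n]\} = \delta_{mn} for unit-variance white noise, so
all cross terms vanish. (The signal is zero-mean, so its variance
equals its mean square.)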

Did I express this clearly enough?

Slainte!
Hanspi
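
To see numerically why the minimisation recovers the coefficients,
here is a sketch (assuming numpy and scipy; the a[q] values are made
up): minimising the sample variance of e[k] over the b[q] is a linear
least-squares problem, and its solution lands close to a[q].

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
N, p = 100_000, 10
a = np.array([1.0, -0.5, 0.2, -0.1, 0.05, 0.02,
              -0.03, 0.01, 0.02, -0.01, 0.005])
x = lfilter([1.0], a, rng.standard_normal(N))

# e[k] = x[k] + sum(b[q] x[k-q], q=1..10); minimising sum(e[k]^2)
# over the b[q] is ordinary least squares on the past samples of x.
rows = np.array([x[k - p:k][::-1] for k in range(p, N)])
b_hat, *_ = np.linalg.lstsq(rows, -x[p:], rcond=None)

print(np.round(b_hat, 3))   # close to a[1..10] = [-0.5, 0.2, ...]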

> Since u[k] is white noise, it does not correlate with a shifted version
> of itself.
Yes, that's true
> Therefore, the variance of any signal
>
> u[k] + c1 u[k-1] + c2 u[k-2] + ...
I assume this expression is another way of writing x[k]? Like:

x[k] = u[k] - sum(a[q]x[k-q],q=1,q=10)
     = u[k] - a[1]x[k-1] - sum(a[q]x[k-q],q=2,q=10)
     = u[k] - a[1](u[k-1] - sum(a[q]x[k-q-1],q=1,q=10)) - sum(a[q]x[k-q],q=2,q=10)
     = u[k] - a[1]u[k-1] + sum(a[1]a[q]x[k-q-1],q=1,q=10) - sum(a[q]x[k-q],q=2,q=10)
     = etc....
> is precisely
>
> 1 + c1^2 + c2^2 + ...
>
> This is minimal if and only if c1=c2=...=0,
Yes, that's obvious
> which happens if and
> only if there are no shifted copies of the noise around,
So c1=c2=...=0 iff there are no shifted copies of the noise
> which is the case if and only if b=a.
So the terms c1, c2, c3, etc. contribute to the variance only when a
weighted sum of past samples of x[k] is left over in e[k], and the only
way to get rid of that sum is to cancel it with another sum (the FIR
filter)?
> Did I express this clearly enough?
I think so...Thanks and slainte to you too :o)
> Slainte!
> Hanspi

Hi Hanspi,

Your reply to Jack's question on LPC has really interested me.
I still don't get exactly what you are trying to say.
I think my interpretation of u[k] being equal to e[k] is wrong and I
want to correct myself.
Can you please elaborate on this concept of LPC?
Please explain the concept of variance you have brought into the picture
and what significance it has here.
With Regards,
Abhishek S


Hi Abhishek,

I have not brought variance into the picture; Jack did, with his words:

> I don't understand why minimizing the
> variance of e[k] guarantees that
> the resulting estimates of b[q] are
> close to a[q]
Normally you minimise the mean square error (MSE), which is the
expected value of the square of the signal e:

E{e^2}

The variance is

E{ (e - E{e})^2 }

If the signal e has E{e}=0, then the MSE and the variance are the same.
For an adaptive linear filter, E{e}=0 is normally the case, so you can
just as well use the variance instead of the MSE.

Did this explain the variance thing?
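
In symbols (a one-line rendering of that identity, in LaTeX):

E\{e^2\}
  = E\{(e - E\{e\})^2\} + \big(E\{e\}\big)^2
  = \operatorname{Var}\{e\} + \big(E\{e\}\big)^2 ,

so E{e}=0 makes the MSE and the variance coincide.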
> > Therefore, the variance of any signal
> > u[k] + c1 u[k-1] + c2 u[k-2] + ...
> I assume this expression is another way of writing x[k] ?
No, this is what you will generally see at the output of your FIR filter
if b is not equal to a.
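
A small sketch of that last point (assuming numpy and scipy; the low
order and coefficient values are made up for illustration): the
cascade of the all-pole filter 1/A(z) and the FIR filter B(z) is the
IIR filter B(z)/A(z), and its impulse response 1, c1, c2, ... is
exactly what multiplies the shifted copies of u[k] in the signal
above.

import numpy as np
from scipy.signal import lfilter

a = np.array([1.0, -0.5, 0.2])   # toy 2nd-order all-pole filter
b = np.array([1.0, -0.4, 0.2])   # FIR coefficients with b != a

# Impulse response of B(z)/A(z): c0 = 1, then c1, c2, ...
impulse = np.zeros(50)
impulse[0] = 1.0
c = lfilter(b, a, impulse)

rng = np.random.default_rng(2)
u = rng.standard_normal(200_000)
e = lfilter(b, a, u)             # e[k] = u[k] + c1 u[k-1] + c2 u[k-2] + ...

# Both print roughly the same number: 1 + c1^2 + c2^2 + ...
print(np.var(e))
print(np.sum(c**2))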