Hi,
The goal of LP in the AR context is to come up with a linear estimate
of x[n] based on previous samples of x[n], such that the error is
minimal (and white):
err[n] = x[n] - xhat[n] = x[n] - (a1*x[n-1] + a2*x[n-2] + ... + aP*x[n-P])
where:
xhat[n] = a1*x[n-1] + a2*x[n-2] + ... + aP*x[n-P]
This can be written compactly as:
err[n] = [1 -a1 -a2 ... -aP] * [x[n] x[n-1] x[n-2] ... x[n-P]]' = a' * x
where a = [1 -a1 ... -aP]' and x = [x[n] x[n-1] ... x[n-P]]' (the minus
signs come from subtracting xhat[n]).
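As a quick sanity check of that vector form, here is a tiny Python/numpy
sketch (my own illustration; the AR(2) coefficients and the signal values
are made-up example numbers):

import numpy as np

# Assumed illustration values: AR(2) coefficients and a short signal segment
a1, a2 = 0.75, -0.5
x = np.array([0.3, -0.1, 0.4, 0.2, -0.3])        # x[0] .. x[4]

n = 4                                            # form the error at the last sample
a_vec = np.array([1.0, -a1, -a2])                # a = [1 -a1 -a2]'
x_vec = np.array([x[n], x[n - 1], x[n - 2]])     # x = [x[n] x[n-1] x[n-2]]'

err_vector_form = a_vec @ x_vec                  # a' * x
err_direct = x[n] - (a1 * x[n - 1] + a2 * x[n - 2])
assert np.isclose(err_vector_form, err_direct)   # the two expressions agree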
The prediction error is minimized by applying the orthogonality principle:
the error err[n] is forced to be orthogonal to every linear combination of
the previous samples x[n-1] through x[n-P], so it is correlated only with
x[n]:
E{ x * err[n] } = [sig^2 0 0 ... 0]'
Using the previous definition of the error we have:
E{ x * (a'x)' } = E{ x*x'*a } = E{ x*x' }*a = R*a = [sig^2 0 0 ... 0]'
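To see R*a = [sig^2 0 ... 0]' numerically, here is a short Python/numpy
sketch (again my own illustration, not taken from the thread; the AR(2)
model, its coefficients, and the unit noise variance are assumed example
values). It simulates the process, solves the standard Yule-Walker system
R*a = [r(1) ... r(P)]' for the coefficients, and then checks the augmented
form derived above:

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)

# Assumed example model: x[n] = 0.75*x[n-1] - 0.5*x[n-2] + w[n], var(w) = 1
a_true = np.array([0.75, -0.5])
P = len(a_true)
N = 100_000

x = np.zeros(N)
w = rng.normal(scale=1.0, size=N)
for n in range(P, N):
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + w[n]

# Sample autocorrelation r[k] ~ E{ x[n] * x[n-k] } for k = 0..P
r = np.array([x[k:] @ x[:N - k] / N for k in range(P + 1)])

# Standard Yule-Walker form: R*a = b with b = [r(1) ... r(P)]'
R = toeplitz(r[:P])                     # P x P autocorrelation matrix
a_hat = np.linalg.solve(R, r[1:])       # estimated [a1 a2]
sig2_hat = r[0] - a_hat @ r[1:]         # prediction-error (noise) variance

# Augmented form: R_aug * [1 -a1 ... -aP]' = [sig^2 0 ... 0]'
R_aug = toeplitz(r)                     # (P+1) x (P+1) autocorrelation matrix
lhs = R_aug @ np.concatenate(([1.0], -a_hat))

print(a_hat)                            # ~ [ 0.75 -0.5 ]
print(sig2_hat)                         # ~ 1.0
print(lhs)                              # ~ [ sig2_hat  0  0 ]

(np.linalg.solve is only standing in here for what the Levinson-Durbin
recursion would compute more efficiently from the same r.)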
I hope this helps....
-Ikaro
Reply by Naebad ● January 9, 2006
<ravi.srikantiah@gmail.com> wrote in message
news:1136818965.022602.110520@g14g2000cwa.googlegroups.com...
I understood the Yule-Walker equations for an AR process to be given by
Ra = b
i.e.
http://www.mathworks.com/access/helpdesk/help/toolbox/dspblks/levinsondurbin.html
where R is the autocorrelation matrix, a = [a1, a2, ..., an] is the parameter
vector, and b is a vector of correlations, i.e. b = [R(1), R(2), ..., R(n+1)]. However,
I have seen the following where b is different and is
b = [sigma^2, 0, 0, ..., 0]
where sigma^2 is the driving noise variance I think. (see below)
http://www.cbi.dongnocchi.it/glossary/YuleWalker.html
Can anybody explain?
Thanks