Reply by Ikaro January 10, 2006
Hi,

The goal of linear prediction (LP) in the AR context is to form a linear
estimate of x[n] from its previous samples, such that the prediction error
is minimal (and white):

err[n] = x[n] - xhat[n] = x[n] - (a1*x[n-1] + a2*x[n-2] + ... + aP*x[n-P])

where:

xhat[n] = a1*x[n-1] + a2*x[n-2] + ... + aP*x[n-P]

This can be written compactly as:

err[n] = [1 -a1 -a2 ... -aP] * [x[n] x[n-1] x[n-2] ... x[n-P]]' = a' * x

(note the minus signs: subtracting xhat[n] negates the coefficients)

The prediction error is minimized by invoking the orthogonality principle:
the optimal error must be orthogonal to every linear combination of the
previous samples x[n-1] through x[n-P], so it correlates only with x[n]
itself:

E{ x * err[n]' } = [sig^2 0 0 ... 0]'

Using the previous definition of the error we have:

E{ x * (a'x)' } = E{ x*x'*a } = E{ x*x' }*a = R*a = [sig^2 0 0 ... 0]'
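
This (P+1)x(P+1) system R*a = [sig^2 0 ... 0]' is exactly the augmented
form with b = [sigma^2, 0, ..., 0] that Naebad asked about. As a quick
numerical sanity check of the orthogonality relation, here is a minimal
numpy sketch; the AR(2) coefficients, noise variance, and length are
arbitrary illustrative choices, not from the thread:

import numpy as np

rng = np.random.default_rng(0)
a1, a2 = 0.75, -0.5          # illustrative AR(2) coefficients
sig2 = 1.0                   # driving-noise variance
N = 200_000

# Simulate x[n] = a1*x[n-1] + a2*x[n-2] + e[n]
e = rng.normal(0.0, np.sqrt(sig2), N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = a1*x[n-1] + a2*x[n-2] + e[n]

# Prediction error using the true coefficients
err = x[2:] - (a1*x[1:-1] + a2*x[:-2])

# E{ x*err } with x = [x[n] x[n-1] x[n-2]]' should be ~ [sig^2 0 0]'
print(np.mean(x[2:]   * err))   # ~ sig2 = 1.0
print(np.mean(x[1:-1] * err))   # ~ 0
print(np.mean(x[:-2]  * err))   # ~ 0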


I hope this helps....

-Ikaro

Reply by Naebad January 9, 2006
<ravi.srikantiah@gmail.com> wrote in message
news:1136818965.022602.110520@g14g2000cwa.googlegroups.com...
> <snip>
Thanks - I think the equation with the variance in it is known as the
augmented Yule-Walker equations, is it not?

Naebad
Reply by ravi...@gmail.com January 9, 2006
Naebad,

x(n) = [sum i=1^p][a(i)x(n-i)] + e(n)
x(n)x(n-m) = x(n-m)[sum i=1^p][a(i)x(n-i)] + x(n-m)e(n)

Taking expectations on both sides,
E{x(n)x(n-m)} = [sum i=1^p][a(i)E{x(n-i)x(n-m)}] + E{x(n-m)e(n)}
E{x(n)x(n-m)} = [sum i=1^p][a(i)r(m-i)] + E{x(n-m)e(n)}

For m = 0:
E{x(n)^2} = [sum i=1^p][a(i)r(i)] + E{x(n)e(n)}    (using r(-i) = r(i))

Now, E{x(n)e(n)} = [sum i=1^p][a(i)E{x(n-i)e(n)}] + E{e(n)^2}
Since x(n-i) depends only on the noise up to time n-i, it is uncorrelated
with e(n):
E{x(n-i)e(n)} = 0
E{e(n)^2} = sigma^2
so E{x(n)e(n)} = sigma^2

So,
E{x(n)^2} = [sum i=1^p][a(i)r(i)] + sigma^2
r(0) = [r(1) r(2) ... r(p)] [a(1) a(2) ... a(p)]^T + sigma^2

For m > 0:
E{x(n)x(n-m)} = [sum i=1^p][a(i)E{x(n-i)x(n-m)}] + E{x(n-m)e(n)}
E{x(n-m)e(n)} = [sum i=1^p][a(i)E{x(n-m-i)e(n)}] + E{e(n-m)e(n)} = 0
since x(n-m) involves only noise up to time n-m < n

So,
r(m) = [r(m-1) r(m-2) ... r(m-p)][a(1) a(2) ... a(p)]^T
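
As a worked example, for an AR(1) process (p = 1) these two relations
reduce to

r(1) = a(1)*r(0)             (m = 1 row)
r(0) = a(1)*r(1) + sigma^2   (m = 0 row)

so a(1) = r(1)/r(0), and substituting back gives the familiar
r(0) = sigma^2 / (1 - a(1)^2).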

In matrix form,

|r(0)|   | r(1)   r(2)   r(3)   ...   r(p)   |    | a(1) |   |sigma^2|
|r(1)|   | r(0)   r(1)   r(2)   ...   r(p-1) |    | a(2) |   |   0   |
|r(2)| = | r(1)   r(0)   r(1)   ...   r(p-2) | *  | a(3) | + |   0   |
|:   |   |                                   |    |  :   |   |   0   |
|r(p)|   | r(p-1) r(p-2) r(p-3) ...   r(0)   |    | a(p) |   |   0   |

or

|r(0)-sigma^2|   | r(1)   r(2)   r(3)   ...   r(p)   |    | a(1) |
|r(1)        |   | r(0)   r(1)   r(2)   ...   r(p-1) |    | a(2) |
|r(2)        | = | r(1)   r(0)   r(1)   ...   r(p-2) | *  | a(3) |
|:           |   |                                   |    |  :   |
|r(p)        |   | r(p-1) r(p-2) r(p-3) ...   r(0)   |    | a(p) |

Ignore the first row, and you have
|r(1)        |   | r(0)   r(1)   r(2)   ...   r(p-1) |    | a(1) |
|r(2)        | = | r(1)   r(0)   r(1)   ...   r(p-2) | *  | a(2) |
|:           |   |                                   |    |  :   |
|r(p)        |   | r(p-1) r(p-2) r(p-3) ...   r(0)   |    | a(p) |
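
To make the two formulations concrete, here is a small numpy/scipy sketch
(the AR(2) parameters are arbitrary illustrative values): it solves the
p x p system above for the coefficients, then uses the ignored m = 0 row
to recover sigma^2.

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
a_true = [0.75, -0.5]             # illustrative AR(2) coefficients
sig2_true, p, N = 1.0, 2, 500_000

# Simulate the AR(2) process
e = rng.normal(0.0, np.sqrt(sig2_true), N)
x = np.zeros(N)
for n in range(p, N):
    x[n] = a_true[0]*x[n-1] + a_true[1]*x[n-2] + e[n]

# Biased sample autocorrelations r(0)..r(p)
r = np.array([x[:N-m] @ x[m:] / N for m in range(p + 1)])

# Standard Yule-Walker system (rows m = 1..p): R*a = [r(1)...r(p)]'
R = toeplitz(r[:p])               # p x p Toeplitz, r(0) on the diagonal
a_hat = np.linalg.solve(R, r[1:])

# The m = 0 row then gives the driving-noise variance
sig2_hat = r[0] - r[1:] @ a_hat

print(a_hat)     # ~ [0.75, -0.5]
print(sig2_hat)  # ~ 1.0

MATLAB's levinson/aryule (the first link below) solve the same Toeplitz
system via the Levinson-Durbin recursion and return the variance estimate
as the final prediction-error power.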

Regards,
Ravi Srikantiah

Reply by Naebad January 8, 2006
I understood the Yule-Walker equations for an AR process were given by

Ra=b

ie

http://www.mathworks.com/access/helpdesk/help/toolbox/dspblks/levinsondurbin.html

where R is the autocorrelation matrix, a = [a1, a2, ..., an] is the vector
of parameters, and b is a vector of correlations, i.e. b = [R(1), R(2), ..., R(n+1)].
However, I have seen the following form, where b is instead

b=[sigma^2,0,0...0]

where sigma^2 is the driving noise variance, I think (see below).

http://www.cbi.dongnocchi.it/glossary/YuleWalker.html

Can anybody explain?

Thanks