# Matrix equation

Started November 8, 2005

Hi!

I'm hoping there's someone here who can help me with a "hairy" matrix
equation I stumbled upon while developing my lossless audio codec. I have
this:

e = r^T * (R^-1 - A^T * (A*R*A^T)^-1 * A) * r, where

r is a kx1 vector (lags 1 to k of the autocorrelation),
R is a kxk matrix (the autocorrelation matrix), usually positive definite,
A = [I 0] of size (k-1)xk, i.e. the (k-1)x(k-1) identity matrix padded
with a zero column.

Now, the task is to calculate or estimate e without expensive operations
such as matrix inversion. Perhaps I'm asking too much, but even a rough
estimate would be useful.

I have pretty much nothing so far, but one interesting property seems to
hold: the matrix (R^-1 - A^T * (A*R*A^T)^-1 * A) has only one non-zero
eigenvalue (note: this is based solely on experimentation).
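For what it's worth, the observation is easy to reproduce numerically; here is a small sketch (numpy, with an illustrative random positive definite R standing in for the autocorrelation matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 6

# An illustrative symmetric positive definite R (stand-in for the
# autocorrelation matrix).
X = rng.standard_normal((k, 100))
R = X @ X.T / 100.0

# A = [I 0]: the (k-1)x(k-1) identity padded with a zero column.
A = np.hstack([np.eye(k - 1), np.zeros((k - 1, 1))])

M = np.linalg.inv(R) - A.T @ np.linalg.inv(A @ R @ A.T) @ A
w = np.linalg.eigvalsh(M)

# Count eigenvalues that are not numerically zero (relative threshold).
nonzero = int(np.sum(np.abs(w) > 1e-8 * np.abs(w).max()))
print(nonzero)  # 1, matching the observation above
```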

If someone is familiar with equations of this form or is otherwise
interested, please help.

----
Jani Huhtanen
Tampere University of Technology, Pori

Hi Jani,

First of all, your matrix looks like a Schur complement, and it is in a
form that may be simplified using the matrix inversion lemma. Google will
help you with both of these. Also look for papers on MMSE equalisation by
John Cioffi and Naofal Al-Dhahir.
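For reference, the block-inverse identity behind the Schur-complement remark can be sketched as follows (my notation, with R_{k-1} the leading (k-1)x(k-1) block of R; check the signs before relying on it):

```latex
R = \begin{pmatrix} R_{k-1} & \rho \\ \rho^T & r_0 \end{pmatrix}
\quad\Longrightarrow\quad
R^{-1} - A^T R_{k-1}^{-1} A
  = \frac{1}{s}
    \begin{pmatrix} u \\ -1 \end{pmatrix}
    \begin{pmatrix} u^T & -1 \end{pmatrix},
\qquad u = R_{k-1}^{-1}\rho,\quad s = r_0 - \rho^T u .
```

Since A*R*A^T = R_{k-1}, the right-hand side is rank one, which would also account for a single non-zero eigenvalue, and it gives e = (u^T (A r) - r_k)^2 / s, with r_k the last entry of r, at the cost of one (k-1)-dimensional solve.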

Secondly, it is quite possible that this quadratic form (the term inside
r^T (*) r) is positive definite and of full rank, in which case the value
e is minimised by choosing r as the eigenvector corresponding to the
minimum eigenvalue of the quadratic form. This might not help you, since
it appears that your r is fixed (up to a delay), but if you give the
context of the problem, I might be able to give you some better insight.
I've worked with equations of that form for years, as I expect have many
of the comp.dsp crew. You could do worse than get yourself a book on
linear algebra. Kailath covers everything, but I find his text quite
impenetrable. A softer introduction should suffice (suggestions
anyone???)

Porterboy

Jani Huhtanen wrote:

> e = r^T * (R^-1 - A^T * (A*R*A^T)^-1 * A) * r, where
> [snip]
If I'm not mistaken, that equation, and a way to use it to reduce the
problem to inverting a 1x1 matrix, can be found in "Adaptive Control" by
Astrom. I'd say for sure, but it's up there and I'm down here and it's
cold outside and late.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
porterboy76@yahoo.com wrote:

> first of all your matrix looks like a Schur complement, and is in a
> form that may be simplified by using the matrix inversion lemma. [snip]
OK, I'll look for these.
> the value e is minimised by choosing r as the eigenvector corresponding
> to the minimum eigenvalue of the quadratic form. This might not help you
> since it appears that your r is fixed (up to a delay) but if you give
> the context of the problem, I might be able to give you some better
> insight. [snip]
OK, perhaps I should explain a little bit more. r is fixed, as are R and
A. This equation gives the difference in MSE of linear prediction between
order k and order k-1. The MSE of my linear predictor is given by

mse = sum(x[n]^2) - r^T * R^-1 * r,

where r is again kx1 and R is kxk. This removes the last lag of the
autocorrelation:

r_(k-1) = A*r_k, where _n refers to the order,

and this gives the leading principal submatrix of R of order k-1:

R_(k-1) = A*R_k*A^T, where _n again refers to the order of the prediction.

Now the MSE of order k-1 is given by

mse_(k-1) = sum(x[n]^2) - r_(k-1)^T * R_(k-1)^-1 * r_(k-1).

Calculating the difference of these two MSEs, we get the equation I gave
in my first post: an equation which is "easy" to calculate, but which
requires far too many operations to be practical for order estimation
(which is what I'm trying to do here).
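The relationship above can be checked numerically; a sketch (numpy, with an arbitrary illustrative signal x, and the sums taken over n = k..N-1 so every index is valid):

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 400, 5
x = rng.standard_normal(N)

# X[n] = [x[n-1], ..., x[n-k]]^T; R = sum of outer products, r = lags 1..k.
Xp = np.stack([x[n - k:n][::-1] for n in range(k, N)])
r = Xp.T @ x[k:N]
R = Xp.T @ Xp

A = np.hstack([np.eye(k - 1), np.zeros((k - 1, 1))])
r1, R1 = A @ r, A @ R @ A.T          # order k-1 quantities

E = x[k:N] @ x[k:N]                  # sum(x[n]^2) over the same range
mse_k = E - r @ np.linalg.solve(R, r)
mse_k1 = E - r1 @ np.linalg.solve(R1, r1)

# e from the first post equals the MSE difference between the two orders.
e = r @ (np.linalg.inv(R) - A.T @ np.linalg.inv(A @ R @ A.T) @ A) @ r
match = bool(np.isclose(e, mse_k1 - mse_k))
print(match)  # True
```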
> You could do worse than get yourself a book on linear algebra. [snip]
I have read my linear algebra at uni, although there's always room for
improvement. Yesterday I got Goldberg's "Matrix Theory with Applications"
and it seems promising (at least the part on quadratic forms).
> Porterboy
Thanks for your help!

--
Jani Huhtanen
Tampere University of Technology, Pori
"Jani Huhtanen" <jani.huhtanen@kolumbus.fi> wrote in message
news:dkqn46$5hq$1@phys-news1.kolumbus.fi...
> e = r^T * (R^-1 - A^T * (A*R*A^T)^-1 * A) * r, where
> [snip]
Hi Jani, maybe ppLinear could calculate this. Just write everything out
using its syntax and stick values in for A, R, and r. See
www.pecos-place.com

regards.... Bill
Bill Shortall wrote:

> [snip]
>
> Hi Jani, maybe ppLinear could calculate this. Just write everything out
> using its syntax and stick values in for A, R, and r. See
> www.pecos-place.com
I know how to calculate e, because both R and A*R*A^T are invertible and
there are no unknowns (except e). That is not the problem. The problem is
that calculating e is far too expensive an operation, and I need another
way to do it faster, even at the expense of accuracy. Thus some
approximation of e would be great.

--
Jani Huhtanen
Tampere University of Technology, Pori
Hi again,

Can anyone explain the one non-zero eigenvalue I'm getting for the matrix
(R^-1 - A^T * (A*R*A^T)^-1 * A)?

It clearly has something to do with the way A modifies R and
(A*R*A^T)^-1. If it wasn't clear before what A does, let me explain a bit
more:

As I said earlier, A = [I 0] of size (k-1)xk, i.e. the (k-1)x(k-1)
identity matrix padded with a zero column. So multiplying R from the left
with A "removes" the last row of R (thus the size is (k-1)xk), and
multiplying from the right with A^T removes the last column (size
(k-1)x(k-1)). This matrix corresponds to the leading principal submatrix
of R of order k-1.

Now, multiplying the inverse from the left with A^T appends a row of
zeros, and multiplying from the right with A appends a zero column (final
size kxk).

This means that the last row and last column of R^-1 are left intact in
the subtraction.

Also, if I use a different A, such that it leaves more zero columns and
rows (and more columns and rows of R^-1 stay intact), I get more non-zero
eigenvalues. To be specific, the number of non-zero eigenvalues seems to
correspond to the number of columns or rows left intact in R^-1. This is
still based only on experimentation, but I believe it can be proven with
basic linear algebra. Can anyone help me with this?
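The pattern is easy to reproduce numerically; a sketch (numpy, with an illustrative random positive definite R, and A dropping the last m rows/columns for m = 1, 2, 3):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 7
X = rng.standard_normal((k, 200))
R = X @ X.T / 200.0            # illustrative positive definite R
Rinv = np.linalg.inv(R)

counts = []
for m in range(1, 4):
    # A = [I 0]: (k-m)x(k-m) identity padded with m zero columns, so the
    # last m rows/columns of R^-1 stay intact in the subtraction.
    A = np.hstack([np.eye(k - m), np.zeros((k - m, m))])
    M = Rinv - A.T @ np.linalg.inv(A @ R @ A.T) @ A
    w = np.linalg.eigvalsh(M)
    counts.append(int(np.sum(np.abs(w) > 1e-8 * np.abs(w).max())))

print(counts)  # [1, 2, 3]: as many non-zero eigenvalues as intact rows
```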

--
Jani Huhtanen
Tampere University of Technology, Pori
Jani Huhtanen wrote:
> Can anyone explain the one non-zero eigenvalue I'm getting for the
> matrix (R^-1 - A^T * (A*R*A^T)^-1 * A)?
Here is an outline of how I would attempt to prove your conjecture (it
might be wrong, I haven't worked out the details):

1. R_k and R_k^-1 are circulant kxk matrices (because R_k is one, and R_k
is positive definite => invertible) => they are both diagonalized by the
discrete k-point Fourier transform, their eigenvectors are the Fourier
basis vectors (all circulant kxk matrices share a common eigenbasis), and
their eigenvalues are the coordinates of the Fourier transform of the
(one-sided) autocorrelation vector r_k (the generating vector for R_k).

2. R_(k-1) = (A*R*A^T) and R_(k-1)^-1 = (A*R*A^T)^-1 are circulant
(k-1)x(k-1) matrices, so the above applies as well, with the (k-1)-point
Fourier transform.

3. Show that for S_k := A^T * (A*R_k*A^T)^-1 * A, the kernel of
R_k^-1 - S_k is a (k-1)-dimensional subspace. Use the eigendecompositions
from above. You will most probably need the fact that the sum of the n-th
roots of unity equals zero.

4. You're done by linearity.

Presumably, point 3 needs some work.

Regards,
Andor
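Point 1 uses the standard fact that a circulant matrix is diagonalized by the DFT; a quick numerical illustration (numpy; the generating vector c here is an arbitrary example, not your autocorrelation):

```python
import numpy as np

c = np.array([4.0, 1.0, 0.5, 0.25])      # arbitrary generating vector
k = len(c)

# Circulant matrix: C[i, j] = c[(i - j) mod k].
C = np.array([[c[(i - j) % k] for j in range(k)] for i in range(k)])

# Unitary DFT matrix.
F = np.fft.fft(np.eye(k)) / np.sqrt(k)

# F C F^H is diagonal, with the DFT of c on the diagonal.
D = F @ C @ F.conj().T
diagonalized = bool(np.allclose(D, np.diag(np.fft.fft(c))))
print(diagonalized)  # True
```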
I wrote:

> 4. You're done by linearity.
Sorry, that was from another recipe. It should rather say:

4. Show that the space spanned by the only non-kernel dimension is also
an eigenspace of R_k^-1 - S_k. I think this should follow from your
calculations under point 3.

Regards,
Andor
abariska@student.ethz.ch wrote:

> [snip]
> 1. R_k and R_k^-1 are circulant kxk matrices (because R_k is one, and
> R_k is positive definite => invertible)
Unfortunately, they are not. Although I'm willing to make that
approximation (because R is "almost circulant") if it considerably helps
in this situation. I calculate the matrix R as follows:

Let X[n] = [x[n-1], x[n-2], ..., x[n-k]]^T be a vector of size kx1
composed of k past samples of the signal x. Then

R = sum(X[n]*X[n]^T),

which is a sum of outer products over n. So R is symmetric, and often,
with large enough N, also positive definite. Particularly with low order
(i.e. small k) and large N, R is very close to a circulant matrix.
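Spelled out as code (numpy; x is an arbitrary illustrative signal, and the sum runs over n = k..N-1 so every index into x is valid), the construction is:

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 1000, 4
x = rng.standard_normal(N)

# R = sum over n of X[n] X[n]^T with X[n] = [x[n-1], ..., x[n-k]]^T.
R = np.zeros((k, k))
for n in range(k, N):
    Xn = x[n - k:n][::-1]            # [x[n-1], ..., x[n-k]]
    R += np.outer(Xn, Xn)

symmetric = bool(np.allclose(R, R.T))
posdef = bool(np.all(np.linalg.eigvalsh(R) > 0))
print(symmetric, posdef)  # True True for this large-N example
```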
--
Jani Huhtanen
Tampere University of Technology, Pori