
Matrix equation

Started by Jani Huhtanen November 8, 2005
Jani wrote:
>> ...
>> 1. R_k and R_k^-1 are circulant kxk matrices (because R_k is one, and
>> R_k is positive definite => invertible)
>
> Unfortunately they are not. Although I'm willing to do that approximation
> (because R is "almost circulant"), if it considerably helps in this
> situation.
Hmm, that makes it more difficult (perhaps). I'm not sure how one can
generally find eigendecompositions of Toeplitz matrices (R is normal, so
there exists a basis of eigenvectors of R). Once you have the general
eigendecomposition of R_k, you can perhaps also find it for R_k^-1 - S_k
(at least finding it for R_k^-1 is easy). If this operator has only one
non-zero eigenvalue, all you have to do is project r onto that eigenspace
and multiply the projection by the corresponding eigenvalue to get e.

For example, if R is a diagonal matrix, i.e. R_k = c 1_k (a multiple of the
kxk identity), then R_k^-1 - S_k has eigenvalue 1/c with eigenspace spanned
by e_k, the k-th standard basis vector.

Perhaps you can find a proof by induction: show that it (the fact that
R_k^-1 - S_k has only one non-zero eigenvalue) holds for k=2, where it
shouldn't be too difficult to compute everything explicitly, and then
proceed with k -> k+1.

FWIW,
Andor
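[Editor's note: a minimal NumPy sketch of that projection shortcut, under
the purely hypothetical assumption that M = R_k^-1 - S_k is rank one; the
operator here is built by hand rather than from any real R, and all names
and sizes are made up for illustration.]

import numpy as np

rng = np.random.default_rng(0)
k = 8
v = rng.standard_normal(k)
v /= np.linalg.norm(v)          # unit eigenvector spanning the eigenspace
lam = 2.5                       # the single non-zero eigenvalue
M = lam * np.outer(v, v)        # stand-in for R_k^-1 - S_k

r = rng.standard_normal(k)
e_direct = M @ r                # direct matrix-vector product, O(k^2)
e_proj = lam * (v @ r) * v      # project r onto span{v} and scale, O(k)
assert np.allclose(e_direct, e_proj)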
Jani Huhtanen wrote:
> abariska@student.ethz.ch wrote:
>
>> ...
>> 1. R_k and R_k^-1 are circulant kxk matrices (because R_k is one, and
>> R_k is positive definite => invertible)
>
> Unfortunately they are not. Although I'm willing to do that approximation
> (because R is "almost circulant"), if it considerably helps in this
> situation.
I made an error here. R is _not_ circulant and it isn't even close.
Instead it _is_ Toeplitz. Sorry about that.

--
Jani Huhtanen
Tampere University of Technology, Pori
Jani,

> And to be more "precise": R is approximately Toeplitz.
This is getting worse by the minute! I think I'm bailing out ... :-)

Best regards,
Andor
Jani Huhtanen wrote:

> Jani Huhtanen wrote:
>> abariska@student.ethz.ch wrote:
>>
>>> ...
>>> 1. R_k and R_k^-1 are circulant kxk matrices (because R_k is one, and
>>> R_k is positive definite => invertible)
>>
>> Unfortunately they are not. Although I'm willing to do that approximation
>> (because R is "almost circulant"), if it considerably helps in this
>> situation.
>
> I made an error here. R is _not_ circulant and it isn't even close.
> Instead it _is_ Toeplitz. Sorry about that.
And to be more "precise": R is approximately Toeplitz.

--
Jani Huhtanen
Tampere University of Technology, Pori
> I know how to calculate the e because both R and A*R*A^T are invertible
> and there are no unknowns (except e). This is not the problem.
>
> The problem is that the calculation of e is way too expensive an
> operation, and I need another way to do it faster, even at the expense of
> accuracy. Thus some approximation of e would be great.
Sorry, I can't help you, Jani. Although I have worked with these equations
before, r was always unknown in my case, and you could use the power method
to compute both r and e:

http://college.hmco.com/mathematics/larson/elementary_linear/4e/shared/downloads/c10s3.pdf

I imagine in your case that if you work through a simple 3x3 example by
hand, you will quickly see if there is any structure to be exploited.

Porterboy
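[Editor's note: for reference, a rough power-iteration sketch in NumPy, in
the spirit of the linked handout; it finds only the dominant eigenpair,
assumes M is a plain square array, and is not tuned for hard cases.]

import numpy as np

def power_method(M, n_iter=200, tol=1e-12):
    """Dominant eigenpair of a square matrix M by power iteration."""
    x = np.ones(M.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        y = M @ x
        lam_new = np.linalg.norm(y)
        if lam_new == 0.0:              # x happened to lie in the null space
            break
        x = y / lam_new
        if abs(lam_new - lam) < tol:    # magnitude has converged
            break
        lam = lam_new
    # the Rayleigh quotient recovers the eigenvalue's sign
    return (x @ M @ x) / (x @ x), x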
Hi Jani,

It can be shown that the expression has rank one. I have slightly reduced
the problem to a simpler one, though you would still need one matrix
inverse.

http://www.geocities.com/weech1at/repository/051111.pdf

Alan T

Alan Tan wrote:

> Hi Jani,
>
> It can be shown that the expression has rank one. I have slightly reduced
> the problem to a simpler one, though you would still need one matrix
> inverse.
>
> http://www.geocities.com/weech1at/repository/051111.pdf
>
> Alan T
This is great! Thanks.

A note to self: I really have to develop a good intuition for dealing with
submatrices.

--
Jani Huhtanen
Tampere University of Technology, Pori
Hi all!

Alan Tan showed how my equation can be simplified. Big thanks to him!

Particularly interesting is the equation for R^-1. It gives R_{k}^-1 when
R_{k-1}^-1 is known. If this is applied recursively, starting from, say, a
1x1 matrix (a scalar), one can get the inverse of R with O(k^3) complexity
(with 2*k^3 + 3*k^2 + k mults), where k is the order (or size of the
matrix R).
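[Editor's note: to make the recursion concrete, a short NumPy sketch of the
bordering step, assuming (this is not stated in Alan's note) that R is
symmetric with invertible leading principal submatrices, e.g. positive
definite; the function name and test matrix are invented for illustration.]

import numpy as np

def inverse_by_bordering(R):
    """Grow R^-1 from the inverse of the leading principal submatrix,
    one row/column at a time, via the block-inverse identity.
    Written for clarity, not speed."""
    K = R.shape[0]
    Rinv = np.array([[1.0 / R[0, 0]]])      # 1x1 base case
    for k in range(1, K):
        b = R[:k, k]                        # new border column
        c = R[k, k]                         # new diagonal element
        Sb = Rinv @ b
        s = c - b @ Sb                      # scalar Schur complement;
        # for an autocorrelation (Toeplitz) R this is the prediction-error
        # variance at order k, which is what makes the order check
        # mentioned below come for free
        Rinv = np.block([
            [Rinv + np.outer(Sb, Sb) / s, -Sb[:, None] / s],
            [-Sb[None, :] / s,            np.array([[1.0 / s]])],
        ])
    return Rinv

# quick check against NumPy on a random symmetric positive definite matrix
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 6))
R = X @ X.T + 6 * np.eye(6)
assert np.allclose(inverse_by_bordering(R), np.linalg.inv(R))

Each pass of the loop costs O(k^2) multiplications, which is where the
cubic total comes from.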

Isn't this of the same complexity as inverting a symmetric positive
definite matrix through Cholesky decomposition?

If so, then it seems that I'm able to determine the "correct" order for the
linear predictor for "free" because I can check the minimum error at every
step of the algorithm.

Probably I have overlooked something, or there is some other method for
order determination which is faster than this. Please tell me if you know
of such a method.

-- 
Jani Huhtanen
Tampere University of Technology, Pori
Jani Huhtanen wrote:
> ...
> If this is applied recursively, starting from, say, a 1x1 matrix (a
> scalar), one can get the inverse of R with O(k^3) complexity (with 2*k^3
> + 3*k^2 + k mults), where k is the order (or size of the matrix R).
This is wrong. If I calculated correctly, the number of mults is given by
the sum of k^2 from k=2 to K, where K is the final order (the size of R).
So the correct number of multiplications is K^3/3 + K^2/2 + K/6 - 1.
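[Editor's note: the closed form is just the standard sum-of-squares
identity, with the k=1 term dropped:]

\sum_{k=2}^{K} k^2 \;=\; \frac{K(K+1)(2K+1)}{6} - 1
\;=\; \frac{K^3}{3} + \frac{K^2}{2} + \frac{K}{6} - 1 .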
> Isn't this of the same complexity as inverting a symmetric positive
> definite matrix through Cholesky decomposition?
According to Golub and Van Loan's "Matrix Computations" (2nd ed.), Cholesky
decomposition is O(n^3) and requires n^3/3 flops. This n^3/3 is an
approximation, and it also includes additions.

As a conclusion, I think the performance of the recursive calculation of
R^-1 is comparable to the Cholesky decomposition. Also, if Alan Tan didn't
make any mistakes in his proof, this new (?) algorithm works for all
symmetric matrices (am I right, Alan?).

Now the question is: is this a _new_ algorithm? And are there even better
ways to accomplish this (i.e., finding the mse associated with each
predictor order and finding the R^-1 of the "optimal" order)?

--
Jani Huhtanen
Tampere University of Technology, Pori
Hi Jani,

The formulation should be quite correct, although personally I do not
think that this is a particularly new algorithm. For more information
about the inverses of block matrices, you can refer to the wonderful
compilation of matrix operations by Roweis at
http://www.cs.toronto.edu/~roweis/notes/matrixid.pdf

Alan T
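[Editor's note: for reference, the bordered special case of that
block-inverse identity, the one the recursion above exploits, reads:]

\begin{bmatrix} A & b \\ b^T & c \end{bmatrix}^{-1}
=
\begin{bmatrix}
A^{-1} + \frac{1}{s}\, A^{-1} b \, b^T A^{-1} & -\frac{1}{s}\, A^{-1} b \\
-\frac{1}{s}\, b^T A^{-1} & \frac{1}{s}
\end{bmatrix},
\qquad s = c - b^T A^{-1} b .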


Jani Huhtanen wrote:
> Jani Huhtanen wrote:
>> ...
>> If this is applied recursively, starting from, say, a 1x1 matrix (a
>> scalar), one can get the inverse of R with O(k^3) complexity (with
>> 2*k^3 + 3*k^2 + k mults), where k is the order (or size of the matrix R).
>
> This is wrong. If I calculated correctly, the number of mults is given by
> the sum of k^2 from k=2 to K, where K is the final order (the size of R).
> So the correct number of multiplications is K^3/3 + K^2/2 + K/6 - 1.
>
>> Isn't this of the same complexity as inverting a symmetric positive
>> definite matrix through Cholesky decomposition?
>
> According to Golub and Van Loan's "Matrix Computations" (2nd ed.),
> Cholesky decomposition is O(n^3) and requires n^3/3 flops. This n^3/3 is
> an approximation, and it also includes additions.
>
> As a conclusion, I think the performance of the recursive calculation of
> R^-1 is comparable to the Cholesky decomposition. Also, if Alan Tan
> didn't make any mistakes in his proof, this new (?) algorithm works for
> all symmetric matrices (am I right, Alan?).
>
> Now the question is: is this a _new_ algorithm? And are there even better
> ways to accomplish this (i.e., finding the mse associated with each
> predictor order and finding the R^-1 of the "optimal" order)?
>
> --
> Jani Huhtanen
> Tampere University of Technology, Pori