DSPRelated.com
Forums

inversion of a matrix

Started by Henrietta Denoue August 23, 2005

Hi

I need to invert a rather large symmetric matrix. Using 'inv'
takes much time and I need to speed this up. What I actually have
is the following:

A'*inv(R)*A

Where A is convmtx(a,n), where a is a vector of length
less than 'n', and R is A*A' (or actually pre-whitened, with
a small epsilon added on the diagonal). Is that a Toeplitz matrix?
Using left division, A'*(R\A), in MATLAB helps to some degree, and
if A is populated with many zeros, making it sparse helps even
more, but it still takes a long time. Is there anything
I can do to make the above faster? Someone told me
to use 'levinson' but I don't know how.

Thanks

H.
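The thread's code is MATLAB, but the inv-versus-left-division point can be sketched in NumPy (the sizes and epsilon below are illustrative, not Henrietta's actual ones): forming inv(R) explicitly costs a full inversion and an extra multiply, while solving R\A directly is faster and better conditioned.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 60, 80                      # A is n x m with n < m, like a convolution matrix
A = rng.standard_normal((n, m))
eps = 1e-6
R = A @ A.T + eps * np.eye(n)      # "pre-whitened": small epsilon on the diagonal

# Slow route: explicit inverse, as with MATLAB's inv(R)
H_inv = A.T @ np.linalg.inv(R) @ A

# Preferred route: solve R*X = A (MATLAB's R\A), then multiply by A'
H_solve = A.T @ np.linalg.solve(R, A)

print(np.allclose(H_inv, H_solve))
```

Both routes give the same matrix; the difference is cost and numerical robustness, which is exactly why the left-division form helped.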

Henrietta Denoue wrote:
> I need to invert a rather large symmetric matrix. Using 'inv'
> takes much time and I need to speed this up. What I actually have
> is the following:
>
> A'*inv(R)*A
>
> Where A is convmtx(a,n) where a is a vector of length
> less than 'n' and R is A*A' (or actually pre-whitened, with
> a small epsilon added on the diagonal).
So what you have to do, then, is to invert the matrix

H = A'inv(AA')A

right?

This looks very similar to the starting point of the Moore-Penrose pseudoinverse,

Q = A'inv(A'A)A

(the difference is which factor of R is transposed).

Where did this matrix problem come from? If you are solving some LMS problem, could it be that you are actually looking for the pseudoinverse? The matrix H above is so similar to the pseudoinverse Q that I would start out making sure there are no typos or blunders in the derivation of H.

To justify that suspicion, consider this: if the matrix A is of dimension NxM, N =/= M, then AA' is of dimension NxN, and so is inv(AA'). But then the product of an NxM matrix times an NxN matrix is illegal.

So if your matrix A is *not* square, your expression H is wrong.

If it should happen that there is a typo and the expression should be Q, check out the matlab function PINV.

Rune
Rune Allnor wrote:
> [snip]
>
> So what you have to do, then, is to invert the matrix
>
> H = A'inv(AA')A
>
> right?
>
> This looks very similar to the starting point of the Moore-
> Penrose pseudoinverse,
...which is exactly what it is, but with A transposed compared to the form I am used to seeing it in.

Try this in matlab:

H = (pinv(A'))';

and I think you are through.

Rune
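Rune's pinv suggestion rests on the closed form of the Moore-Penrose pseudoinverse: for a matrix A with full row rank, pinv(A) = A'*inv(A*A'). A small NumPy check (a sketch with arbitrary sizes, not the thread's matrices) confirms the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 8
A = rng.standard_normal((n, m))        # n < m: full row rank almost surely

# Closed form of the pseudoinverse for full-row-rank A
P_closed = A.T @ np.linalg.inv(A @ A.T)
P_svd = np.linalg.pinv(A)              # SVD-based, what MATLAB's pinv also does

print(np.allclose(P_closed, P_svd))
```

The SVD-based route is the robust general-purpose one, which is also why pinv becomes a bottleneck: the SVD does not exploit sparsity.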
Rune Allnor wrote:
> [snip]
>
> So what you have to do, then, is to invert the matrix
>
> H = A'inv(AA')A
>
> right?
Right.
> This looks very similar to the starting point of the Moore-
> Penrose pseudoinverse,
>
> Q = A'inv(A'A)A
>
> (the difference is which factor of R is transposed.)
>
> Where did this matrix problem come from? If you are solving
> some LMS problem,
yes
> could it be that you are actually looking
> for the pseudoinverse? The matrix H above is so similar to
> the pseudoinverse Q that I would start out making sure
> there are no typos or blunders in the derivation of H.
right, it should read A'*(inv(R)*A)
then if A is 90x100, A' is 100x90, and R is 90x90, the
output is 100x100.
> To justify that suspicion, consider this: [snip]
>
> So if your matrix A is *not* square, your expression H is wrong.
>
> If it should happen that there is a typo and the expression
> should be Q, check out the matlab function PINV.
The above is an attempt to implement Soubaras' algorithm for f-x projection. If A is the convolution matrix for the PEF (prediction filter), then the projection filter reads as:

A^2/(A^2+mu^2)

regards
H.
Rune Allnor wrote:
> [snip]
>
> Try this in matlab:
>
> H = (pinv(A'))';
>
> and I think you are through.
Thanks again, Rune. You are correct. The problem is that pinv() does not work with sparse matrices (because of the SVD), and since speed is the real objective here, I cannot dispense with keeping A sparse.

regards
H.
Henrietta Denoue wrote:
> [snip]
>
> right, it should read A'*(inv(R)*A)
> then if A is 90x100, A' is 100x90, and R is 90x90, the
> output is 100x100.
You are right. It is the pseudoinverse, except that A is transposed compared to what I am used to.

If you are looking for Hi = inv(H) where

H = A'*inv(A*A')*A

then

Hi = (pinv(A'))';
> The above is an attempt to implement Soubaras' algorithm
> for f-x projection. If A is the convolution matrix for the PEF
> (prediction filter) then the projection filter reads as:
>
> A^2/(A^2+mu^2)
I don't know Soubaras' algorithm. How do you compute A^2 if A is a non-square matrix?

Rune
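The matrix H = A'*inv(A*A')*A under discussion has a structure worth checking numerically before attempting inv(H): it is symmetric and idempotent, i.e. an orthogonal projector of rank equal to the row count of A, and hence singular for non-square A. A quick NumPy sketch (small stand-in sizes, not the 90x100 case):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((9, 10))           # 9x10 stand-in for a 90x100 A
H = A.T @ np.linalg.solve(A @ A.T, A)      # H = A'*inv(A*A')*A, a 10x10 matrix

# H projects onto the row space of A: symmetric, idempotent, rank 9
print(np.allclose(H, H.T))
print(np.allclose(H @ H, H))
print(np.linalg.matrix_rank(H))
```

Since the rank (9) is below the dimension (10), an ordinary inverse of H does not exist, which is consistent with the thread's turn toward pseudoinverses.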
In article <deeqi0$74m$1@dolly.uninett.no>,
 Henrietta Denoue <henrietta@netcalcul.fr> wrote:

> [snip]
>
> The problem is pinv() does not work with sparse
> matrices (because of the svd) and, speed being the real objective
> here, I cannot dispense with keeping A sparse.
You reallllllly do want to keep your sparse matrices sparse, so pinv would not be an option. Pinv is probably slower than other solvers anyway, since it uses an svd.

An iterative solver may help here, if you can find a good preconditioner for R. You would use it to solve the problems inv(R)*A. In effect, there are just many right-hand sides to solve for. A good preconditioner can speed up an iterative solution dramatically. The question is whether the cost of the preconditioner plus n iterative solves will be less than one solve using \. If R is positive definite, then I'd start with cholinc for the preconditioner. If not, then luinc. The sparsity of the preconditioner is also important, making it take less memory and speeding up multiplies too, so you would want to look into one of the permutation tools like symamd.

If you find that the iterative solve is not better, so you are back to \, then try out symamd or colamd anyway. By making the internal Cholesky or LU factors sparser, the \ operation will run faster. It's often surprising what increase in speed you might get, especially if you are going into virtual memory. If that is happening and you can stop it from happening, the speed-up will be tremendous.

Next, you would want to consider linsolve. It can show some gains because it does not need to do any checks on the matrix. Even though you know that R is symmetric, \ still needs to decide that it is, and check to see if it is also positive definite.

HTH,
John D'Errico

--
The best material model of a cat is another, or preferably the same, cat.
A. Rosenblueth, Philosophy of Science, 1945
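A rough Python/SciPy analogue of John's recipe (the tridiagonal R below is just a stand-in for the real problem, and SciPy's spilu plays the role of MATLAB's luinc): factor an incomplete preconditioner once, then run an iterative solver over the many right-hand sides that make up inv(R)*A.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg, spilu, LinearOperator

rng = np.random.default_rng(3)
n = 200
# Sparse, symmetric positive definite stand-in for R
R = sparse.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
B = rng.standard_normal((n, 5))    # several right-hand sides, like the columns of A

# Incomplete LU factorization, reused as the preconditioner for every solve
# (for a positive definite R an incomplete Cholesky, cf. cholinc, is the classic choice)
ilu = spilu(R)
M = LinearOperator((n, n), matvec=ilu.solve)

# One preconditioned conjugate-gradient solve per column of B
X = np.column_stack([cg(R, B[:, j], M=M)[0] for j in range(B.shape[1])])

print(np.allclose(R @ X, B, atol=1e-3))
```

The trade-off is exactly the one John names: the factorization is paid for once, and each subsequent iterative solve is cheap if the preconditioner is good.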
John D'Errico wrote:
> [snip]
>
> An iterative solver may help here, if you can find a
> good preconditioner for R. [snip] If R is positive
> definite, then I'd start with cholinc for the
> preconditioner. If not, then luinc. [snip]
Thanks for the answer, John. I don't actually know what these functions do, but I will certainly have a look at them. So far 'cholinc ...' has complained about complex values (my matrix is complex) and stopped.

H.
Rune Allnor wrote:
> [snip]
>
> I don't know Soubaras' algorithm. How do you compute A^2 if
> A is a non-square matrix?
As above, i.e.:

A'*(inv(R)*A)

H.
In article <def21m$j1j$1@dolly.uninett.no>,
 Henrietta Denoue <henrietta@netcalcul.fr> wrote:

> Thanks for the answer, John.
> I don't actually know what these functions do but I
> certainly will have a look at them. So far 'cholinc ...'
> has complained about complex values (my matrix is
> complex) and stopped.
Then it's not positive definite.

It always helps if you tell us details like the matrix being complex. I'd guess that's at least as important to know about as it being symmetric.

John