DSPRelated.com
Forums

what matrices are easiest to invert?

Started by Unknown October 24, 2016
On 2016-10-24 22:59:50 +0000, Christian Gollwitzer said:

> On 25.10.16 at 00:18, Marcel Mueller wrote:
>> On 24.10.16 11.00, pedro1492@lycos.com wrote:
>>> Simultaneous equations with random coefficients go through Gaussian
>>> elimination - awfully slow if you have 100+ rows.
>>
>> Sure? Numerical Recipes recommends LU decomposition.
>
> LU decomposition is the same as Gaussian elimination. It is of the order
> O(N^3) flops for square matrices. The question is what exactly "slow"
> means. A 100x100 system is solved in a millisecond using LU
> decomposition on a desktop computer. "Slow" always depends on your
> expectations; for a 1000x1000 system you'll need a second. For a dense
> system you cannot do much better than that - however, if you have a
> sparse system, then from a certain point on an iterative method like
> conjugate gradients will be faster.
>
> Christian
The O(N^3) notation hides the important detail of the multiplying constant. LU has a lower coefficient than what is usually called Gaussian elimination.
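Christian's closing remark in the quote above, that iterative methods win for sparse systems, is easy to illustrate: conjugate gradients never forms or factors the matrix, it only needs matrix-vector products. Here is a minimal pure-Python sketch (illustrative only; the tridiagonal test matrix and all names are my own, not from the thread):

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, given only a
    function computing A @ v -- which is what makes the method
    attractive for sparse systems: A itself is never stored densely."""
    x = [0.0] * len(b)
    r = b[:]                      # residual b - A x (x starts at zero)
    p = r[:]                      # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol * tol:    # residual small enough: done
            break
        # New direction, conjugate to the previous ones.
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# A sparse example: tridiag(-1, 2, -1), applied in O(n) per product
# without ever materializing the n x n matrix.
n = 5
def matvec(v):
    return [(-v[i - 1] if i > 0 else 0.0) + 2.0 * v[i]
            - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]

x = conjugate_gradient(matvec, [1.0] * n)
```

In exact arithmetic CG converges in at most n iterations, so for this 5x5 system the loop terminates almost immediately; the payoff over LU appears only once the system is large and sparse.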
On 25.10.16 at 01:45, Gordon Sande wrote:
> On 2016-10-24 22:59:50 +0000, Christian Gollwitzer said:
>
>> [...snip...]
>
> The O(N^3) notation hides the important detail of the multiplying constant.
> LU has a lower coefficient than what is usually called Gaussian
> elimination.
Are you sure? The leading coefficient is 2/3 n^3 according to Wikipedia. What is the difference between Gaussian elimination and LU decomposition? I thought LU is just Gaussian elimination where you store the factors (which then form the L matrix), in case you want to solve for another right-hand side.

The OP also wrote about solving a system of equations, so I hope he is not computing the explicit inverse of the matrix and then multiplying it by the right-hand side.

A big factor is also the use of the right library. There can be a factor of 1000 in speed between a naive C implementation of a linear algebra algorithm and a highly tuned library like OpenBLAS, Intel MKL, or the like.

Christian
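Christian's description can be made concrete: LU factorization is exactly Gaussian elimination with the multipliers saved, so each additional right-hand side costs only O(n^2) instead of a fresh O(n^3) elimination. A minimal pure-Python sketch with partial pivoting (for clarity, not speed; real code should call a tuned LAPACK/BLAS library as noted above):

```python
def lu_factor(A):
    """LU factorization with partial pivoting: Gaussian elimination
    where the multipliers are kept. Returns (LU, piv): LU holds L below
    the diagonal (unit diagonal implied) and U on/above it; piv records
    the row permutation."""
    n = len(A)
    LU = [row[:] for row in A]        # work on a copy
    piv = list(range(n))
    for k in range(n):
        # Partial pivoting: bring the largest pivot candidate to row k.
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))
        if p != k:
            LU[k], LU[p] = LU[p], LU[k]
            piv[k], piv[p] = piv[p], piv[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]      # the stored multiplier: L[i][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU, piv

def lu_solve(LU, piv, b):
    """Solve A x = b from the stored factors: O(n^2) per right-hand side."""
    n = len(LU)
    y = [b[piv[i]] for i in range(n)]     # apply permutation P
    for i in range(n):                    # forward-substitute L y = P b
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in reversed(range(n)):          # back-substitute U x = y
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
LU, piv = lu_factor(A)
x = lu_solve(LU, piv, [1.0, 2.0, 5.0])    # factor once...
y = lu_solve(LU, piv, [0.0, 1.0, 0.0])    # ...solve many
```

Note that nothing here forms an explicit inverse: solving via the factors is both cheaper and numerically preferable to computing A^-1 and multiplying.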
[...snip...]
>>> A small correction, I think you meant "orthonormal" instead of
>>> "orthogonal".
>>>
>>> My answer was going to be a diagonal matrix. They are really easy to
>>> invert.
>>>
>>> Ced
>>> ---------------------------------------
>>> Posted through http://www.DSPRelated.com
>>
>> Sure thing, Ced. Could you, please, invert for me the following diagonal
>> matrix?
>>
>> 1 0 0
>> 0 1 0
>> 0 0 0
>>
>> I would really appreciate your effort!
>>
>> Kindly thanks,
>> Gene
Alright Gene, you got me. Though, the topic question implies you are dealing with an invertible matrix. I would say diagonal matrices are the easiest to tell whether they are invertible, followed by matrices with identical rows or columns.
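For completeness, the diagonal case really is trivial both ways: the matrix is singular iff any diagonal entry is zero (exactly Gene's 3x3 counterexample), and otherwise the inverse is just the reciprocal of each entry, in O(n). A one-line sketch (the helper name is my own, for illustration):

```python
def invert_diagonal(d):
    """Given the diagonal d of a diagonal matrix D, return the diagonal
    of D^-1, or None if D is singular (some entry is zero)."""
    if any(x == 0 for x in d):
        return None               # e.g. diag(1, 1, 0) has no inverse
    return [1.0 / x for x in d]

invert_diagonal([2.0, 4.0, 0.5])  # -> [0.5, 0.25, 2.0]
invert_diagonal([1.0, 1.0, 0.0])  # -> None: Gene's singular example
```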
> lulz
>
> "Orthonormal" applies to vector sets, although an orthonormal set can
> be used to make an orthogonal matrix.
>
> Has nobody mentioned the identity matrix yet as an example of "easiest
> to invert"?
It's been more than thirty years since I took Linear Algebra, so I had to look it up. You are absolutely correct, thanks for the correction to my inaccurate correction. It does seem to be poor nomenclature though: an orthogonal basis is not represented by an orthogonal matrix unless it is orthonormal.

None of our "witty" answers fit the OP's specification of random coefficients.

Ced
---------------------------------------
Posted through http://www.DSPRelated.com
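As a footnote to the orthogonal/orthonormal exchange: a matrix whose columns form an orthonormal set satisfies Q^T Q = I, so its inverse is just its transpose, no elimination needed at all. A small sketch with a 2x2 rotation matrix (illustrative, not from the thread):

```python
import math

# A rotation matrix: its columns are an orthonormal set, so it is
# orthogonal and Q^-1 = Q^T.
theta = math.pi / 6
Q = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# "Inversion" by transposition: Qinv[i][j] = Q[j][i].
Qinv = [[Q[j][i] for j in range(2)] for i in range(2)]

# Q @ Qinv should be the identity (up to rounding).
prod = [[sum(Q[i][k] * Qinv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
```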