Robust Inversion of a Matrix
Started by ●August 28, 2008

Hello,

I want to compute the inverse of a positive semi-definite matrix A. To ensure robustness of the inversion I want to use a simple regularization of the form:

A' = A + delta*I

where delta is the regularization parameter and A' is the regularized matrix. Is anyone familiar with any empirical rules to determine delta based on the values of the elements of the matrix A?

Thank you,
N.

Reply by ●August 29, 2008
On Thu, 28 Aug 2008 05:53:06 -0500, namlak wrote:

> Hello,
>
> I want to compute the inverse of a positive semi-definite matrix A. To
> ensure robustness of the inversion I want to use a simple regularization
> of the form:
>
> A' = A + delta*I
>
> where delta is the regularization parameter and A' is the regularized
> matrix. Is anyone familiar with any empirical rules to determine delta
> based on the values of the elements of the matrix A?
>
> Thank you,
> N.

I think you're more likely to ensure that the inverse of A' bears no
relation to the inverse of A.

The applied math newsgroup is really good; there are a lot more folks on
that group who really know their stuff compared to this one.

---
Tim Wescott
www.wescottdesign.com
Control and communications system consulting
Reply by ●August 29, 2008
> On Thu, 28 Aug 2008 05:53:06 -0500, namlak wrote:
>
>> [...]
>
> I think you're more likely to ensure that the inverse of A' bears no
> relation to the inverse of A.
>
> The applied math newsgroup is really good; there are a lot more folks on
> that group who really know their stuff compared to this one.
>
> ---
> Tim Wescott
> www.wescottdesign.com
> Control and communications system consulting

Hi Tim,

what's the address of this applied math newsgroup?

N.
Reply by ●August 29, 2008
> On Thu, 28 Aug 2008 05:53:06 -0500, namlak wrote:
>
>> [...]
>
> I think you're more likely to ensure that the inverse of A' bears no
> relation to the inverse of A.
>
> The applied math newsgroup is really good; there are a lot more folks on
> that group who really know their stuff compared to this one.
>
> ---
> Tim Wescott
> www.wescottdesign.com
> Control and communications system consulting

I forgot to ask you: as far as I know you are an experienced DSP and
control systems engineer, a field where matrix inversion happens all the
time. What would you choose for delta? You know, there are various
Tikhonov regularization techniques, but engineers do not seem to use
them; instead they select delta on an ad-hoc basis. So, suppose that you
run an algorithm which at each iteration inverts a positive semi-definite
matrix. Since the matrix can have a zero eigenvalue, and hence can be
singular, you have no choice but to use regularization as I described.
So, what delta would you select?

N.
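[For later readers: one common ad-hoc rule — an illustrative assumption, not something endorsed in this thread — is to scale delta to the magnitude of A itself, e.g. delta = eps * trace(A)/n, a small multiple of the mean eigenvalue. A minimal NumPy sketch:]

```python
import numpy as np

def regularized_inverse(A, eps=1e-8):
    """Invert the PSD matrix A after adding delta*I.

    delta = eps * trace(A) / n scales the ridge to the mean eigenvalue
    of A (trace/n), so the perturbation stays small relative to the
    typical magnitude of A.  Both the rule and the default eps are
    illustrative choices, not a universal standard.
    """
    n = A.shape[0]
    delta = eps * np.trace(A) / n
    return np.linalg.inv(A + delta * np.eye(n))

# A rank-1 (singular) PSD matrix: plain inversion would blow up,
# the regularized inversion does not.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
A_reg_inv = regularized_inverse(A, eps=1e-6)
```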
Reply by ●August 29, 2008
namlak schrieb:

> Hello,
>
> I want to compute the inverse of a positive semi-definite matrix A. To
> ensure robustness of the inversion I want to use a simple regularization
> of the form:
>
> A' = A + delta*I
>
> where delta is the regularization parameter and A' is the regularized
> matrix. Is anyone familiar with any empirical rules to determine delta
> based on the values of the elements of the matrix A?

A' is (up to a sign) the resolvent of the matrix A at (-delta). The above
regularization only makes sense if 0 is in the spectrum of A, and hence A
is not invertible (or close to not invertible). If so, delta should be
far enough from the spectrum of A, but small enough to keep A' (the
resolvent at -\delta) close to the inverse (which is the resolvent at 0).
You can estimate the error introduced by adding delta*I to A using the
first resolvent equation.

So long,
Thomas
Reply by ●August 29, 2008
> A' is (up to a sign) the resolvent of the matrix A at (-delta). The above
> regularization only makes sense if 0 is in the spectrum of A, and hence A
> is not invertible (or close to not invertible). If so, delta should be
> far enough from the spectrum of A, but small enough to keep A' (the
> resolvent at -\delta) close to the inverse (which is the resolvent at 0).
> You can estimate the error introduced by adding delta*I to A using the
> first resolvent equation.

Hi Thomas,

thank you for your answer. However, I am not sure I followed you; this is
the first time I hear about "the resolvent of a matrix". Yes, A can be
singular, with 0 belonging to its spectrum. I did not understand how I can
estimate the error.

N.
Reply by ●August 29, 2008
> I forgot to ask you: as far as I know you are an experienced DSP and
> control systems engineer, a field where matrix inversion happens all the
> time. What would you choose for delta? You know, there are various
> Tikhonov regularization techniques, but engineers do not seem to use
> them; instead they select delta on an ad-hoc basis. So, suppose that you
> run an algorithm which at each iteration inverts a positive semi-definite
> matrix. Since the matrix can have a zero eigenvalue, and hence can be
> singular, you have no choice but to use regularization as I described.
> So, what delta would you select?

Dear Namlak,

Nobody inverts a matrix these days. What you more likely need to do is to
solve A*x = b. So you factor A in some form (QR, Cholesky, etc.) and use
that factorization to find the solution x. These methods are called
"factor and solve". I think that's what you meant by inverting A via
regularization.

If your matrix A is singular (as it seems to be from your question), you
might want to find its SVD. Then you throw away the relatively small
singular values and the associated singular vectors, and use the
pseudo-inverse. For instance, if you keep only the n largest singular
values, where n <= rank(A), then you get the truncated-SVD inverse as
SUM_i(1^n 1/sigma_i * v_i * u_i'), where sigma_i are the singular values
(ordered from largest to smallest), and u_i and v_i are the left and
right singular vectors. In other words, A = U S V' (prime denotes
conjugate transpose) is the SVD of A, u_i and v_i are columns of U and V,
respectively, and sigma_i = S_ii.

How you choose n depends on what you want the condition number of your
truncated-SVD approximation of A to be. If you have significant noise in
your system, you'd like it as small as possible.

Two cents of an engineer who chooses parameters on an ad-hoc basis... :-)

Hope this helps.

Emre
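[The "factor and solve" approach can be sketched with a Cholesky factorization and two triangular solves. The matrix, right-hand side, and ridge size below are illustrative choices, not from the thread:]

```python
import numpy as np

# Solve A x = b by "factor and solve" instead of forming inv(A).
A = np.array([[4.0, 2.0],
              [2.0, 1.0]])        # PSD but rank 1, hence singular
b = np.array([2.0, 1.0])

# A tiny ridge keeps the Cholesky factorization from failing on the
# zero eigenvalue (size chosen for illustration only).
delta = 1e-10 * np.trace(A) / A.shape[0]
A_reg = A + delta * np.eye(A.shape[0])

L = np.linalg.cholesky(A_reg)     # A_reg = L @ L.T, L lower triangular
y = np.linalg.solve(L, b)         # forward substitution
x = np.linalg.solve(L.T, y)       # back substitution
```

As delta shrinks, x approaches the minimum-norm solution of the singular system, here approximately (0.4, 0.2).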
Reply by ●August 29, 2008
>> SUM_i(1^n 1/sigma_i * v_i * u_i')

Correction:

A# = SUM_{i=1}^{n} (1/sigma_i * v_i * u_i')

where A# is the truncated-SVD inverse of A.

Emre
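[The corrected formula can be written out directly in NumPy; the test matrix and the choice n = 2 below are illustrative:]

```python
import numpy as np

def tsvd_inverse(A, n_keep):
    """Truncated-SVD pseudo-inverse:

        A# = SUM_{i=1}^{n} (1/sigma_i * v_i * u_i'),

    keeping only the n_keep largest singular values."""
    U, s, Vh = np.linalg.svd(A)
    A_sharp = np.zeros((A.shape[1], A.shape[0]))
    for i in range(n_keep):
        # v_i is row i of Vh, u_i is column i of U; prime denotes the
        # (conjugate) transpose, which for real matrices is the transpose.
        A_sharp += (1.0 / s[i]) * np.outer(Vh[i], U[:, i])
    return A_sharp

# A matrix with one tiny singular value: keeping n_keep = 2 discards
# the 1e-12 direction instead of amplifying it by a factor of 1e12.
A = np.diag([3.0, 2.0, 1e-12])
A_sharp = tsvd_inverse(A, n_keep=2)
```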
Reply by ●August 29, 2008
namlak wrote:

> thank you for your answer. However, I am not sure I followed you; this is
> the first time I hear about "the resolvent of a matrix". Yes, A can be
> singular, with 0 belonging to its spectrum. I did not understand how I
> can estimate the error.

If zero belongs to the spectrum, there is no inverse - full stop. The
closest you can then get is a B such that (B A) is the projection onto the
complement of the kernel of A. If you need this, then this is a different
question (but not a regularization) - I thought you were wondering about
making the computation of the inverse numerically more stable.

Note that I'm not an expert in numerics - but here's a short intro to
resolvents:

The resolvent of a matrix A at \lambda is the inverse of (\lambda * Id - A).
Obviously, the resolvent exists if and only if \lambda is not an element of
the spectrum of A. Resolvents at different parameters are related by the
(first) resolvent equation:

R(\lambda) - R(\mu) = (\mu - \lambda) R(\lambda) R(\mu)

(This is nothing but the matrix version of the identity
1/x - 1/y = 1/(xy) * (y - x).)

The above estimates the error in case you have good estimates on the
resolvents, i.e. it tells you that *if* A is invertible and \lambda is
small enough, then the error you make by adding the term \lambda * Id is,
to first order, linear in \lambda (which is not a big surprise, though).

IOW, unlike what others said, the resolvent *does* have a lot to do with
the inverse, provided \delta is small enough.

So long,
Thomas
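[This error estimate can be checked numerically. Since A + delta*Id is, up to a sign, the resolvent at -delta, the first resolvent equation specializes to the exact identity inv(A + delta*I) - inv(A) = -delta * inv(A + delta*I) * inv(A), so the error is bounded by delta * ||inv(A + delta*I)|| * ||inv(A)||. A NumPy sketch, with an arbitrary illustrative test matrix and delta:]

```python
import numpy as np

# Verify the first resolvent equation specialized to A' = A + delta*I:
#   inv(A + delta*I) - inv(A) = -delta * inv(A + delta*I) @ inv(A)
# hence the error of using inv(A') in place of inv(A) is first order
# in delta.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)          # symmetric positive definite, invertible
delta = 1e-3

inv_A_reg = np.linalg.inv(A + delta * np.eye(4))   # inv(A'), i.e. -R(-delta)
inv_A = np.linalg.inv(A)                           # the true inverse

error = inv_A_reg - inv_A
identity_rhs = -delta * inv_A_reg @ inv_A
# Error bound from submultiplicativity of the spectral norm:
bound = delta * np.linalg.norm(inv_A_reg, 2) * np.linalg.norm(inv_A, 2)
```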