DSPRelated.com
Forums

Eigenvalue Accuracy in Matlab

Started by Randy Yates December 10, 2005
Adding very little to the excellent advice given here, a practical method 
for assessing "goodness" of the matrix is to compute its condition number. 
Well-conditioned matrices will return a reciprocal condition (rcond) result 
close to unity; ill-conditioned matrices have rcond close to machine eps. 
The accuracy of eig, backsolvers, and the like is sensitive to the condition number.
Your matrix reveals:

>> rcond(A)
ans =

  8.6357e-018

It's very poorly conditioned (eig returns digital dandruff ;-) ). Use
rcond to quickly identify ill-conditioned systems.

--Don

"Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message
news:ellieandrogerxyzzy-1012052135450001@pool1430.cvx4-bradley.dialup.earthlink.net...
> In article <irtwxu9y.fsf@ieee.org>, Randy Yates <yates@ieee.org> wrote:
>
>> Hi Folks,
>>
>> Consider the following integer matrix:
>>
>> A =
>>
>>      1     0    -1
>>     -1     1     0
>>      3    -1    -2
>>
>> A is a nilpotent matrix of index 3, as can be seen by evaluating A^3.
>> So then the only eigenvalues of A are zero.
>>
>> However, Matlab's "eig" function returns:
>>
>> ans =
>>
>>   -7.178125070607518e-006
>>    3.589062535291640e-006 +6.216541114106220e-006i
>>    3.589062535291640e-006 -6.216541114106220e-006i
>>
>> Is there no way to get exact results for such simple matrices? Or
>> is there a way to establish some sort of rounding for these types
>> of functions?
>> --
>> % Randy Yates
> ----------------------------
> As to the numerical computation of A's eigenvalues, in this case they
> are the roots of the cubic equation
>
> 0 = det(A - lambda*eye(3)) = -lambda^3
>
> Suppose that matlab, in the process of carrying out a complicated
> algorithm for eigenvalue/eigenvector solutions, makes a very small
> roundoff error out in the 16th decimal place and arrives at this
> equation instead:
>
> -lambda^3 = 3.698563363842292e-16
>
> In this case the solutions for lambda would be:
>
>   -7.178125070607523e-06
>    3.589062535303763e-06 + 6.216438662688082e-06i
>    3.589062535303763e-06 - 6.216438662688082e-06i
>
> which are quite close to those you obtained!
>
> The point here is that, for this particular ill-conditioned matrix, a
> very small rounding error in computing the eigenvalue equation, well
> within the limits that could be expected for 64-bit floating-point
> accuracy, will result in greatly expanded errors for the eigenvalues
> themselves. Such instability is inherent in a matrix in which a
> multiplicity of its eigenvalues are at, or very close to, zero.
>
> (Remove "xyzzy" and ".invalid" to send me email.)
> Roger Stafford
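Both posts above are easy to reproduce numerically. The sketch below uses
NumPy, since the thread's MATLAB listings can't be executed here; note that
`1/cond(A)` merely stands in for MATLAB's `rcond`, which is a cheap 1-norm
estimate of the same quantity.

```python
import numpy as np

# Randy's matrix: nilpotent of index 3, so its only exact eigenvalue is 0.
A = np.array([[ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0],
              [ 3.0, -1.0, -2.0]])
assert np.all(np.linalg.matrix_power(A, 3) == 0)   # A^3 == 0, exactly

# Don's point: the reciprocal condition number is essentially zero,
# i.e. A is numerically singular.
print(1.0 / np.linalg.cond(A))

# Roger's point: a backward error of size eps in -lambda^3 = 0 yields
# roots of magnitude eps**(1/3), and that is just what eig delivers.
eps = np.finfo(float).eps            # about 2.22e-16
lam = np.linalg.eigvals(A)
print(np.abs(lam))                   # small but nonzero, roughly 1e-5
print(eps ** (1.0 / 3.0))            # about 6.1e-6
```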
"Don Orofino" <don@mathworks.com> wrote in message
news:dnin9k$4u0$1@fred.mathworks.com...
> [snip]
Good point. Also, a dconig for the A in question reveals that Matlab is
not to be trusted in fp arithmetic: indeed that Tr(A) == 0 is sufficient
to ..., anyways, do your own homework!

--
Bye, Hrundi V.B.
"Mr Hrundi V Bakshi" <mrhrundivbakshi@hotmail.com> writes:

> Good point. Also, a dconig for the A in question reveals that Matlab is not
> to be trusted in fp arithmetic: indeed that Tr(A) == 0 is sufficient to
> ..., anyways, do your own homework!
You're just upset because you gave the wrong advice, and don't realise
that "0.0000" in matlab is not really zero. Go do your own homework!

Ciao,
Peter K.
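Peter's point, that a value *displayed* as "0.0000" need not be zero, takes
two lines to demonstrate. Sketched in Python, whose fixed-point formatting
mimics MATLAB's format short display:

```python
# One of the "zero" eigenvalues eig returned in Randy's post.
x = 3.589062535291640e-06

# At four decimal places it displays as 0.0000, yet it is not zero.
print(f"{x:.4f}")     # 0.0000
print(x == 0.0)       # False
```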
I am afraid that Don has fallen victim to a widely believed misconception.
The condition number of a matrix has nothing to do with the sensitivity or
computed accuracy of its eigenvalues.  The condition number of a matrix
measures nearness to singularity.  A matrix is singular if and only if it
has an eigenvalue equal to zero.  The condition number of the matrix of 
eigenvectors is the crucial quantity.  If a matrix fails to have a full
set of linearly independent eigenvectors, then its eigenvalues are sensitive
to perturbations, including roundoff error.  Such matrices are called
"defective".

This particular example happens to be both singular and defective.  If
any nonzero scalar value is added to the diagonal elements, then the matrix
is no longer singular, but that scalar value becomes the sensitive eigenvalue.
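The shifting experiment can be tried directly; a NumPy sketch, with the
shift s = 5 chosen arbitrarily for illustration:

```python
import numpy as np

A = np.array([[ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0],
              [ 3.0, -1.0, -2.0]])

# Shift the diagonal by s: A + s*I is nonsingular (its exact eigenvalue
# is the triple root s), but that eigenvalue inherits all the
# sensitivity of the original triple zero.
s = 5.0
lam_s = np.linalg.eigvals(A + s * np.eye(3))
print(np.abs(lam_s - s))   # typically around 1e-5: far above machine eps
```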

  -- Cleve
  moler@mathworks.com


In article <dnin9k$4u0$1@fred.mathworks.com>,
Don Orofino <don@mathworks.com> wrote:
> [snip]
"Peter K." <p.kootsookos@remove.ieee.org> wrote in message
news:uhd9ejyc8.fsf@remove.ieee.org...
> "Mr Hrundi V Bakshi" <mrhrundivbakshi@hotmail.com> writes: > > > Good point. Also, a dconig for the A in question reveals that Matlab is
not
> > to be trusted in fp arithmetic: indeed that Tr(A) == 0 is sufficient to > > ..., anyways, do your own homework! > > You're just upset because you gave the wrong advice, and don't realise > that "0.0000" in matlab is not really zero. Go do your own homework! >
I couldn't give a rat's ass what Matlab produces: it's an overpriced tool
within the budget of the naive. For the A in question, its eigenvalues are
threefold degenerately naught 0, not "0.0000" or any other such boloney.
Sorry that this escaped you.

--
Tsk, Hrundi V.B.
"Cleve Moler" <moler@mathworks.com> wrote in message
news:dnl992$c7v$1@fred.mathworks.com...
> I am afraid that Don has fallen victim to a widely believed misconception.
> The condition number of a matrix has nothing to do with the sensitivity
> or computed accuracy of its eigenvalues.
There are numerous matrix condition numbers; which one do you mean?
Obviously the one with respect to inversion, which is not necessarily the
object of interest.
> [snip]
>
> This particular example happens to be both singular and defective.
The particular A could represent the interaction of three species which,
while dynamically fluid, are ultimately metastable, as occurs naturally in
many chemically reactive systems. In these cases the condition number of
interest is not with respect to inversion and such irrelevant suspects as
singularity, defectiveness, et cetera, but rather with respect to the
exponentiation of A (scaled as warranted), for which Matlab does next to
squat in providing either an optimum algorithm for its computation or its
condition.

--
Bye, Hrundi V.B.
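Whatever one makes of the rant, the pivot to the matrix exponential has a
concrete angle for this particular A: because A^3 = 0, the exponential
series terminates after three terms, so exp(A) can be written down exactly.
A NumPy sketch (NumPy again standing in for MATLAB):

```python
import numpy as np

A = np.array([[ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0],
              [ 3.0, -1.0, -2.0]])

# A^3 = 0, so exp(A) = I + A + A^2/2! + A^3/3! + ... terminates:
# exp(A) = I + A + A^2/2, exactly.
A2 = A @ A
E_plus  = np.eye(3) + A + A2 / 2.0    # exp(A)
E_minus = np.eye(3) - A + A2 / 2.0    # exp(-A), same argument applied to -A

# Sanity check: exp(A) @ exp(-A) must be the identity (A commutes with
# -A), and with these small dyadic entries it holds to rounding error.
P = E_plus @ E_minus
print(np.max(np.abs(P - np.eye(3))))  # essentially zero
```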
"Mr Hrundi V Bakshi" <mrhrundivbakshi@hotmail.com> writes:

> "Peter K." <p.kootsookos@remove.ieee.org> wrote in message > news:uhd9ejyc8.fsf@remove.ieee.org... > > "Mr Hrundi V Bakshi" <mrhrundivbakshi@hotmail.com> writes: > > > > > Good point. Also, a dconig for the A in question reveals that Matlab is > not > > > to be trusted in fp arithmetic: indeed that Tr(A) == 0 is sufficient to > > > ..., anyways, do your own homework! > > > > You're just upset because you gave the wrong advice, and don't realise > > that "0.0000" in matlab is not really zero. Go do your own homework! > > > > I couldn't give a rat's ass what Matlab produces: it's an overpriced tool > within the budget of the naive. > For the A in question, its eigenvalues are threefold degenerately naught 0, > not "0.0000" or any othersuch boloney. Sorry that this escaped you.
It escaped you, that's all I was pointing out. :-)

Ciao,
Peter K.
"Cleve Moler" <moler@mathworks.com> wrote in message
news:dnl992$c7v$1@fred.mathworks.com...
> [snip]
Wow! Cleve Moler writing on comp.dsp - I know we do a fair amount of
Mathworks bashing here, but I'm impressed. For those who don't know, he
was one of the founders.

Cheers
Bhaskar
On Tue, 13 Dec 2005 09:34:36 -0800, "Bhaskar Thiagarajan"
<bhaskart@deja.com> wrote:

>"Cleve Moler" <moler@mathworks.com> wrote in message
...
>> [snip]
>
>Wow! Cleve Moler writing on comp.dsp - I know we do a fair amount of
>Mathworks bashing here, but I'm impressed.
>For those who don't know, he was one of the founders.
>
>Cheers
>Bhaskar
He's been here plenty of times in the past, including during our Matlab
bashing regarding ones-based indexing. His technical input is always good
and welcomed (as cited above), but he's seemed completely unsympathetic
regarding the indexing issues.

There are a number of well-known folks, techno-deities, so to speak, that
show up here from time to time. It's always nice to hear from them, IMHO.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
Eric Jacobsen wrote:
> On Tue, 13 Dec 2005 09:34:36 -0800, "Bhaskar Thiagarajan"
> <bhaskart@deja.com> wrote:
...
> >Wow! Cleve Moler writing on comp.dsp -
Randy's original post and the whole thread was crossposted to comp.soft-sys.matlab which, i would think, would be monitored by TMW folks, including Cleve.
> >I know we do a fair amount of Mathworks bashing here, but I'm impressed.
> >For those who don't know, he was one of the founders.
we know, we know...
> He's been here plenty of times in the past, including during our
> Matlab bashing regarding ones-based indexing.
that brings me back. maybe 1999 or 2000, i think. it's still inexcusable,
but since TMW has been so deaf about this, it's just more people for
Mathematica or Octave. and it's not that it's one-based, but that it can
*only* be one-based, without any real option to set it to a different base.
> His technical input is
> always good and welcomed (as cited above), but he's seemed completely
> unsympathetic regarding the indexing issues.
especially since we have fully dispelled his stated reason for not adopting
an adjustable base (that it would not be backward compatible).
> There are a number of well-known folks, techno-deities, so to speak,
> that show up here from time to time. It's always nice to hear from
> them, IMHO.
Bob Adams has posted here recently regarding some weird Fourier Transform
problem. i think it is accurate that he has been the principal designer of
the first monolithic asynchronous sample rate converter (and was one of the
pioneers of sigma-delta), which puts him at least in the
techno-high-priesthood.

Hello Cleve: just to let you know that your unresponsiveness regarding this
need for an adjustable index base has lost TMW at least two customers that
i know of. you should have heeded Tom Krause's advice 15 years ago (not for
the sake of 2 potential customers, but because MATLAB could have lived up
to its advertised potential). such a shame.

Melly Clistmas.

r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."