DSPRelated.com
Forums

[OT] "Real world" linear algebra

Started by Richard Owlett August 28, 2009
I haven't had a linear algebra course in over 40 years.
AND, all our problem sets had exact solutions.

That is, Ax + b = 0 (A a matrix, x & b vectors)
But I have Ax + b ~= 0 (fortunately # of equations > # of unknowns)

I have not *YET* Googled. Don't know appropriate keywords. 
Suggestions please. TIA


[As there are some who claim I don't give enough info:
   Current project is investigation of GPS "errors".
   Quotes because one man's error is another's *DATA*.
   As I'm cheap, with limited observation time,
     I'll use RINEX formatted CORS data ;]

Besides which I'm trying to educate myself.
On 28 Aug, 18:59, Richard Owlett <rowl...@pcnetinc.com> wrote:
> I haven't had a linear algebra course in over 40 years.
> AND, all our problem sets had exact solutions.
>
> That is, Ax + b = 0 (A a matrix, x & b vectors)
> But I have Ax + b ~= 0 (fortunately # of equations > # of unknowns)
>
> I have not *YET* Googled. Don't know appropriate keywords.
> Suggestions please. TIA
The main keyword is 'Moore-Penrose pseudoinverse.'

Given data column vector b, basis column vectors in matrix A, and
unknown coefficients column vector x, the model becomes

    b = Ax + e = b_A + e                              [1]

where e is error and b_A is the attainable solution, in the sense
that it contains the part of b that is representable by A.

It follows that e _|_ A, so

    <b_A,e> = 0                                       [2.a]

or,

    <a_i,e> = 0 for all i,                            [2.b]

where a_i is the i'th column of A.

We exploit [2.b] to get rid of the error term in [1] by
pre-multiplying by A':

    A'Ax = A'(b_A + e) = A'b_A + A'e = A'b_A + 0      [3]

or more clearly,

    A'Ax = A'b_A.                                     [4]

This is easily solved for x as

    x = (A'A)^{-1}A'b_A.                              [5]

Conveniently, one never needs to compute b_A from b;

    x = (A'A)^{-1}A'b                                 [6]

works just as well.

The matrix (A'A)^{-1}A' is the Moore-Penrose pseudoinverse of A,
and is computed using the Singular Value Decomposition, command
svd in matlab (and presumably also in octave).

Rune
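In Matlab/Octave terms, eq. [6] is nearly a one-liner. A minimal
sketch, with made-up numbers for A and b (4 equations, 2 unknowns,
full column rank assumed):

    A = [1 1; 1 2; 1 3; 1 4];   % toy basis matrix
    b = [2.1; 2.9; 4.2; 4.8];   % toy data vector
    x = (A'*A) \ (A'*b);        % normal equations, eq. [4]/[6]
    e = b - A*x;                % residual; A'*e is ~0, cf. eq. [2.b]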
Rune Allnor wrote:
> On 28 Aug, 18:59, Richard Owlett <rowl...@pcnetinc.com> wrote:
> [... original question and derivation snipped ...]
>
> The matrix (A'A)^{-1}A' is the Moore-Penrose pseudoinverse of A,
> and is computed using the Singular Value Decomposition, command
> svd in matlab (and presumably also in octave).
>
> Rune
Matlab (and Octave) also has a pinv command, which will conveniently
compute Moore-Penrose pseudo-inverses for you. Of course, it must be
using an SVD "under the hood".

Robert E. Beaudoin
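A quick sketch of the two routes side by side (same toy numbers as
above; pinv being SVD-based is per Rune's and Robert's description):

    A = [1 1; 1 2; 1 3; 1 4];  b = [2.1; 2.9; 4.2; 4.8];  % toy system
    x1 = pinv(A)*b;        % pseudoinverse route, built on the SVD
    x2 = (A'*A)\(A'*b);    % explicit normal equations
    % x1 and x2 agree here; for ill-conditioned A, pinv is the safer
    % choice, since forming A'*A squares the condition number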
What is on the right side of the eqn. Ax + b ~= 0?

Some references:

0) search for Moore-Penrose pseudoinverse (it's even in Wikipedia);

1) search for least squares problem, minimum norm solution;

2) Pick up a copy of Matrix Computations by G. Golub and C. Van Loan;

3) If you have MATLAB, type 'doc mldivide' for a good treatment of how 
they solve these problems.
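For item 3, a one-line sketch of what mldivide does with a
rectangular system (toy numbers, not from the thread):

    A = [1 1; 1 2; 1 3; 1 4];  b = [2.1; 2.9; 4.2; 4.8];  % toy system
    x = A\b;   % for overdetermined A, backslash returns the
               % least-squares solution (QR-based for rectangular A)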

Richard Owlett wrote:
> I haven't had a linear algebra course in over 40 years.
> AND, all our problem sets had exact solutions.
> [... rest of original post snipped ...]
On Aug 28, 1:17 pm, Rune Allnor <all...@tele.ntnu.no> wrote:
> On 28 Aug, 18:59, Richard Owlett <rowl...@pcnetinc.com> wrote:
> [... original question and derivation snipped ...]
>
> The matrix (A'A)^{-1}A' is the Moore-Penrose pseudoinverse of A,
> and is computed using the Singular Value Decomposition, command
> svd in matlab (and presumably also in octave).
>
> Rune
You may also efficiently compute it via QR factorization. Clay
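A minimal sketch of the QR route Clay mentions (toy numbers again;
qr(A,0) is the economy-size factorization in Matlab/Octave):

    A = [1 1; 1 2; 1 3; 1 4];  b = [2.1; 2.9; 4.2; 4.8];  % toy system
    [Q,R] = qr(A,0);   % economy-size QR: A = Q*R, R square upper triangular
    x = R\(Q'*b);      % triangular back-substitution; same least-squares x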
Karl Molnar wrote:
> What is on the right side of the eqn. Ax + b ~= 0?
I wasn't trying to be formally correct, just trying to contrast "real" with "ideal" (i.e., Ax + b = 0).
> Some references:
>
> 0) search for Moore-Penrose pseudoinverse (it's even in Wikipedia);
First Google hit I got after reading Rune's response.
> 1) search for least squares problem, minimum norm solution;
I should have thought of searching for "least squares". I don't recognize "minimum norm solution" at all - will have to look it up.
> 2) Pick up a copy of Matrix Computations by G. Golub and C. Van Loan;
Not available from my local library system. In fact, using "matrix" in a keyword search yielded only one math-related hit - CliffsNotes for Algebra II ;/
> 3) If you have MATLAB, type 'doc mldivide' for a good treatment of how
> they solve these problems.
I only have Scilab; will go searching its help files. Thanks to all who responded.
> Richard Owlett wrote:
>> I haven't had a linear algebra course in over 40 years.
>> [... rest of original post snipped ...]
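For what it's worth, the one-liners above carry over to Scilab with
the syntax essentially unchanged (a sketch; only the comment marker
differs):

    A = [1 1; 1 2; 1 3; 1 4];  b = [2.1; 2.9; 4.2; 4.8];  // toy system
    x1 = A \ b;          // least-squares solution, as with Matlab's backslash
    x2 = pinv(A) * b;    // Scilab also provides pinv and svd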
On 29 Aug, 13:44, Richard Owlett <rowl...@pcnetinc.com> wrote:

>> 1) search for least squares problem, minimum norm solution;
>
> I should have thought of searching for "least squares". I don't
> recognize "minimum norm solution" at all - will have to look it up.
Given the model

    Ax = b_A + e

both the terms 'minimum norm' and 'least mean squares' refer to the
magnitude of the error. In 'traditional' lingo, the LMS refers to the
sum of the squared error terms, which should be minimized.

In linear algebra lingo, 'minimum norm' refers to the norm (length)
of the error vector, ||e||, which should be minimum. Of course, when
you write out the formula for the norm, you end up with the same
formula as in the LMS case. Give or take a square root.

Rune
On 2009-08-29 09:09:26 -0300, Rune Allnor <allnor@tele.ntnu.no> said:

> On 29 Aug, 13:44, Richard Owlett <rowl...@pcnetinc.com> wrote:
>>> 1) search for least squares problem, minimum norm solution;
>>
>> I should have thought of searching for "least squares". I don't
>> recognize "minimum norm solution" at all - will have to look it up.
>
> Given the model
>
> Ax = b_A + e
>
> both the terms 'minimum norm' and 'least mean squares'
> refer to the magnitude of the error.
Not really!
> In 'traditional'
> lingo, the LMS refers to the sum of the squared error
> terms, which should be minimized.
>
> In linear algebra lingo, 'minimum norm' refers to the
> norm (length) of the
solution vector when the solution is not well determined. This happens when the columns of A are not of full rank. The rows are not of full rank when there are more rows than columns, as in the setup of the usual least squares problem.
> error vector, ||e||, which should
> be minimum. Of course, when you write out the formula
> for the norm, you end up with the same formula as in
> the LMS case. Give or take a square root.
>
> Rune
You have gotten confused by not paying close attention to the subtle distinction. The "least" is applied to the residuals and the "minimum" to the solution. Totally arbitrary usage that is intended to avoid having long strings of adjectives to separate a subtle technical issue. Lots of folks will make the mistake of not noticing that there are two minimization problems present, as they miss the low column-rank case. The symptoms are zero singular values in the SVD, which leads to various direct solutions that do not need the specialized terminology.
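A small sketch of the distinction Gordon draws, using a deliberately
rank-deficient matrix (made-up numbers):

    A = [1 2 3; 2 4 6; 1 1 2; 3 5 8];  % column 3 = column 1 + column 2
    b = [1; 2; 1; 3];                  % so rank(A) = 2, not 3
    x = pinv(A)*b;   % among all least-squares x, the one of minimum norm
    % A\b warns about rank deficiency and returns a different (basic)
    % solution; both have the same residual norm ||b - A*x||, but only
    % pinv(A)*b also minimizes ||x||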
On 29 Aug, 16:53, Gordon Sande <g.sa...@worldnet.att.net> wrote:
> On 2009-08-29 09:09:26 -0300, Rune Allnor <all...@tele.ntnu.no> said:
> You have gotten confused by not paying close attention to
> the subtle distinction. The "least" is applied to the residuals
> and the "minimum" to the solution.
I can't remember ever having encountered an underdetermined system in a useful application. Once the system is underdetermined, you are basically down to guesswork. Constrained guesswork, but guesswork nonetheless.

Rune
> On 29 Aug, 16:53, Gordon Sande <g.sa...@worldnet.att.net> wrote:
>> On 2009-08-29 09:09:26 -0300, Rune Allnor <all...@tele.ntnu.no> said:
>
>> You have gotten confused by not paying close attention to
>> the subtle distinction. The "least" is applied to the residuals
>> and the "minimum" to the solution.
>
> Once the system is
> underdetermined, you are basically down to guesswork.
> Constrained guesswork, but guesswork nonetheless.
True, although the covariance output of a good routine will tell you which parameters are in error so you can improve the model and/or take better data. I get the impression that the OP is trying to do something with the residuals besides just minimizing them and forgetting. Richard: am I mistaken?
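A sketch of the covariance check mentioned above, under the standard
assumption of independent, equal-variance measurement errors (toy
numbers again):

    A = [1 1; 1 2; 1 3; 1 4];  b = [2.1; 2.9; 4.2; 4.8];  % toy system
    x   = A\b;                           % least-squares fit
    r   = b - A*x;                       % residuals
    s2  = (r'*r)/(size(A,1)-size(A,2));  % noise variance estimate
    C   = s2*inv(A'*A);                  % parameter covariance (full-rank A)
    sig = sqrt(diag(C));                 % standard error of each parameter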
> I can't remember ever having encountered an underdetermined
> system in a useful application.
Perhaps you've been fortunate to have good enough data?

The way the use of SVD is motivated in the section "General Linear Least Squares" of "Numerical Recipes in C, 2nd ed." includes the following:

"The reason is that, more often than experimenters would like to admit, data do not clearly distinguish between two or more of the basis functions provided."

"...irony in the fact that least-squares problems are both overdetermined (number of data points greater than number of parameters) and underdetermined (ambiguous combinations of parameters exist)..."

"...In the case of an underdetermined system, SVD produces a solution whose values ... are smallest in the least-squares sense..."

Also, for conditioning reasons, in the use of SVD you are supposed to doctor singular values that are very close to zero, with "close" being some fraction (specific to the application) of the largest singular value. I do not know what MATLAB chooses for the threshold in its internal routines. In light of the act of forcing a nearly-singular matrix to be *actually* singular, it should become clear where the "underdetermined" aspect can come from.

But it is totally conceivable that the folks (you?) designing your experiments had enough intuition to choose experiments that uniquely determine your coefficients, in which case you've been able to avoid this. Even if the OP doesn't have enough satellites, the results may have physical meaning to him.
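The "doctoring" of small singular values described above looks
roughly like this; the threshold tol is application-specific, and the
choice below merely mimics a pinv-style default (an assumption, not a
quote from Numerical Recipes):

    A = [1 2 3; 2 4 6; 1 1 2; 3 5 8];  b = [1; 2; 1; 3];  % toy rank-deficient system
    [U,S,V] = svd(A,0);              % economy-size SVD
    s   = diag(S);
    tol = max(size(A))*eps(s(1));    % "close to zero" relative to largest s
    w   = zeros(size(s));
    w(s > tol) = 1./s(s > tol);      % invert only the safe singular values
    x   = V*(w.*(U'*b));             % truncated-SVD least-squares solution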