What does it physically mean to say that the functions
exp(i*w*t) are eigenvectors of linear systems?
Bob
--
"Things should be described as simply as possible, but no
simpler."
A. Einstein
A _really_ basic question
Started by ●June 17, 2004
Reply by ●June 17, 2004
Hi Bob,

Bob Cain wrote:
> What does it physically mean to say that the functions exp(i*w*t) are
> eigenvectors of linear systems?

Same thing it always means for something to be an eigenvector, I imagine. I.e., A e = \lambda e, for e an eigenvector of A, and \lambda a (complex) scalar.

When you start using phrases like "physically mean" and expressions involving complex numbers in the same sentence, then I think that it behoves you to supply a little more information about the scope of the "physical meaning" that you're hoping for.

My two cents: it tends to mean that the system is LTI (linear, time-invariant).

Wrong direction of answer?
--
Andrew
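Andrew's defining relation A e = \lambda e is easy to check numerically. A minimal sketch (the matrix A below is an arbitrary example, not anything from the thread):

```python
import numpy as np

# Andrew's definition in matrix form: A e = lambda e.
# A is an arbitrary symmetric 2x2 matrix chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
for lam, e in zip(vals, vecs.T):        # columns of vecs are the eigenvectors
    print(np.allclose(A @ e, lam * e))  # True for each eigenpair
```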
Reply by ●June 17, 2004
Andrew Reilly wrote:
> My two cents: it tends to mean that the system is LTI (linear,
> time-invariant).
>
> Wrong direction of answer?

Yes. To be more specific: what interpretation does it have in physical mechanics, in the case of mechanical LTI systems? Sorry.

Bob
--
"Things should be described as simply as possible, but no simpler."
A. Einstein
Reply by ●June 17, 2004
"Bob Cain" <arcane@arcanemethods.com> wrote in message news:car1vv02pf2@enews3.newsguy.com...
> What does it physically mean to say that the functions
> exp(i*w*t) are eigenvectors of linear systems?

You should say "eigenfunctions" and "linear time invariant systems".

Then, it means exactly that: if x(t) is of the form exp(i*w*t), and y(t) is the response of an LTI system to x(t), then y(t) = a*x(t) for some complex scalar factor a.

In English, it means that the response of the system to the signal is just a scaled version of the signal. The scaling factor is allowed to be complex.

The scaling factor (a above) is the eigenvalue for the signal. Every eigenfunction has an associated eigenvalue.

For LTI systems, it actually works for any x(t) = (a+jb)*exp(t*(x+jy)), i.e., any single value at any single point on the S or Z plane. Some LTI systems have additional eigenfunctions, but exponentials like this are eigenfunctions of *every* LTI system.

The term "eigenvector" is used with vectors and matrices instead of signals and systems. A vector v is an eigenvector of a matrix M iff there is some scalar a such that Mv = av. Again, the output is just a scaled version of the input, and the complex scaling factor, a, is the eigenvalue for the given eigenvector.
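Matt's scaling property can be checked numerically for a discrete LTI system. A sketch, where the three-tap impulse response h is made up for illustration (any LTI filter works): feed in x[n] = exp(j*w*n) and compare the output against H(e^{jw}) times the input.

```python
import numpy as np

# A made-up FIR impulse response; any LTI filter would do.
h = np.array([0.25, 0.5, 0.25])

w = 0.3 * np.pi                       # an arbitrary frequency
n = np.arange(200)
x = np.exp(1j * w * n)                # complex exponential input

# Convolution sum y[m] = sum_k h[k] * x[m-k], written out directly so the
# two-sided exponential input is not truncated at the edges:
y = np.array([sum(h[k] * np.exp(1j * w * (m - k)) for k in range(len(h)))
              for m in n])

# The eigenvalue is the frequency response at w:
H_w = sum(h[k] * np.exp(-1j * w * k) for k in range(len(h)))

# The output is the input times one complex scalar -- nothing else changed.
print(np.allclose(y, H_w * x))        # True
```

The identity holds exactly, not just approximately: y[m] = sum_k h[k] e^{jw(m-k)} = e^{jwm} sum_k h[k] e^{-jwk} = H(e^{jw}) x[m].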
Reply by ●June 17, 2004
"Matt Timmermans" <mt0000@sympatico.nospam-remove.ca> wrote in message news:Xf9Ac.24496$nY.949518@news20.bellglobal.com...
> [...]
> In English, it means that the response of the system to the signal is
> just a scaled version of the signal. The scaling factor is allowed to
> be complex.
> [...]
> Other LTI systems can have additional eigenfunctions, but exponentials
> like this are eigenfunctions of *every* LTI system.

Good response Matt.

So, Bob, you will notice that a square wave is an infinite sum (a Fourier series) of (prescaled) eigenfunctions, and not an eigenfunction itself, because the scaling property doesn't apply except at a single frequency; or perhaps I should say, at each frequency separately.

Fred
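Fred's point can be illustrated numerically: each harmonic is an eigenfunction of any LTI system, and the square wave only emerges as the sum of infinitely many of them. A sketch (the 1 Hz fundamental and sample grid are arbitrary choices):

```python
import numpy as np

# Partial Fourier series of a unit square wave: sum over odd harmonics m of
# (4/pi) * sin(m*w*t) / m.  The square wave itself is not an eigenfunction
# of a general LTI system -- but each sine term in the sum is.
t = np.linspace(0, 1, 1000, endpoint=False)
w = 2 * np.pi                          # fundamental frequency: 1 Hz

def square_partial(n_terms):
    s = np.zeros_like(t)
    for k in range(n_terms):
        m = 2 * k + 1
        s += (4 / np.pi) * np.sin(m * w * t) / m
    return s

target = np.sign(np.sin(w * t))

# The partial sums approach the square wave in the mean-square sense.
rms = {n: np.sqrt(np.mean((square_partial(n) - target) ** 2))
       for n in (1, 5, 50)}
print(rms)   # RMS error shrinks as more harmonics are added
```

(The *maximum* error does not go to zero near the jumps -- that is the Gibbs phenomenon -- but the mean-square error does.)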
Reply by ●June 17, 2004
"Matt Timmermans" <mt0000@sympatico.nospam-remove.ca> wrote in message news:<Xf9Ac.24496$nY.949518@news20.bellglobal.com>...
> For LTI systems, it actually works for any x(t) = (a+jb)*exp(t*(x+jy)),
> i.e., any single value at any single point on the S or Z plane. Other
> LTI systems can have additional eigenfunctions, but exponentials like
> this are eigenfunctions of *every* LTI system.

Eh... I don't think the exponentials are caused by the system being LTI; rather, I think exponentials are eigenvectors of particular differential equations. If you express the physical problem in a cylindrical or spherical coordinate system, the systems are LTI, but some of the eigenvectors might be Bessel functions or Legendre polynomials instead of exponentials.

> The term "eigenvector" is used with vectors and matrices instead of
> signals and systems. A vector v is an eigenvector of a matrix M iff
> there is some scalar a such that Mv=av. Again, the output is just a
> scaled version of the input, and the complex scaling factor, a, is the
> eigenvalue for the given eigenvector.

Right. This applies to matrices and it applies to differential equations.
If you define a "differential operator" L (which isn't as bad as it sounds; it merely means "apply some sort of unspecified differential expression to f", much the same way that f(t) means "apply an unspecified function to the argument t"), then there are certain "special" functions f that have the convenient property

  Lf = mu f

where mu is the real or complex "eigenvalue" and f is the "eigenvector" or "eigenfunction". Note that different operators (i.e. differential equations) L have different eigenvalues and eigenvectors, just the same way that different matrices have different eigenvalues/eigenvectors.

Whenever we can find these eigenfunctions f, we are at a great advantage, since all we need to do is express the general signal x(t) in terms of the eigenfunctions f_n(t):

  x(t) = sum_{n=0}^{inf} c_n f_n(t)

where c_n is the weighting coefficient of the n'th eigenfunction. Since the same rules apply in an infinite-dimensional vector space as in finite dimensions, we define an "inner product"

  <x,f_n> = integral_a^b x(tau) conj(f_n(tau)) dtau

where a and b are the (possibly infinite) lower and upper integration limits, i.e. the boundary conditions of the differential equation. If we now apply the operator L to a general signal, we merely need to know how L affects the different eigenfunctions:

  Lx(t) = L( sum_{n=0}^{inf} <x,f_n> f_n(t) )
        = sum_{n=0}^{inf} <x,f_n> L f_n(t)
        = sum_{n=0}^{inf} mu_n c_n f_n(t)

I am sure you see that this is exactly what Fourier analysis is about, except that the exponential function that is so essential to DSP'ers has not been mentioned explicitly.

I just found a book that summarizes all those essential results from Hilbert space analysis (which this really is) in a very useful, "engineering-like" manner:

  Deutsch: "Best Approximation in Inner Product Spaces", Springer, 2001.

If you have already had an intro course in linear systems theory, Deutsch's chapters 1 and 2 summarize the key results from linear space theory so clearly, and with so little mathematical fuzz, that those chapters alone almost justify buying the book.

Rune
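Rune's recipe can be sketched numerically with the most familiar special case: the Fourier basis f_n(t) = exp(j*n*t) on [0, 2*pi), whose members are eigenfunctions of the operator L = d/dt with eigenvalues mu_n = j*n. The test signal and mode range below are arbitrary choices for illustration:

```python
import numpy as np

# Eigenfunctions f_n(t) = exp(j*n*t) of L = d/dt, eigenvalues mu_n = j*n.
N = 512
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
ns = np.arange(-8, 9)                          # a finite set of modes

x = 2 * np.cos(3 * t) + 0.5 * np.sin(5 * t)    # a bandlimited test signal

# Inner products c_n = <x, f_n> = (1/2pi) integral x(t) conj(f_n(t)) dt,
# approximated by a Riemann sum (exact here, since x is bandlimited).
c = np.array([(x * np.exp(-1j * n * t)).mean() for n in ns])

# Apply L through the eigenvalues alone: Lx = sum mu_n c_n f_n(t).
mu = 1j * ns
Lx = sum(m * cn * np.exp(1j * n * t) for m, cn, n in zip(mu, c, ns))

# Compare with the derivative of x computed by hand.
dx = -6 * np.sin(3 * t) + 2.5 * np.cos(5 * t)
print(np.allclose(Lx.real, dx))                # True: L acted via eigenvalues
```

The point of the exercise is Rune's: once the signal is expanded in eigenfunctions, applying the operator reduces to multiplying each coefficient by its eigenvalue.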
Reply by ●June 17, 2004
Matt Timmermans wrote:
> if x(t) is of the form exp(i*w*t), and y(t) is the response of an LTI
> system to x(t), then y(t)=ax(t) for some complex scalar factor a.
> [...]
> For LTI systems, it actually works for any x(t) = (a+jb)*exp(t*(x+jy)),
> i.e., any single value at any single point on the S or Z plane.

Thanks Matt. That's just what I'm looking for. Eigenstuff stands outside my education, and I'm always a bit mystified when people start speaking it.

> Other LTI systems can have additional eigenfunctions, but exponentials
> like this are eigenfunctions of *every* LTI system.

Got an example of such a system and its eigenfunction?

> The term "eigenvector" is used with vectors and matrices instead of
> signals and systems. A vector v is an eigenvector of a matrix M iff
> there is some scalar a such that Mv=av.

I see. My question came from starting to read Mallat's "A Wavelet Tour Of Signal Processing"; he thinks in terms of vectors and Hilbert spaces. I doubt I'll get far with this, since he presumes too formal a math education, but I thought I'd give it a go.

Just for context: I recently got into a mailing-list argument about the ubiquity of the Fourier decomposition when considering models of hearing. I don't think it particularly applies, and I want to deepen my understanding of functional decomposition.

Bob
--
"Things should be described as simply as possible, but no simpler."
A. Einstein
Reply by ●June 17, 2004
Rune Allnor wrote:
> Right. This applies to matrixes and it applies to differential
> equations. If you define a "differential operator" L [...] then there
> are certain "special" functions f that have the convenient property
>
>   Lf = mu f
>
> where mu is the real or complex "eigenvalue" and f is the
> "eigenvector" or "eigenfunction".

This is exactly the context of Mallat's wavelet text from which my question derived, and it nicely explains why he uses the term.

Thanks,

Bob
--
"Things should be described as simply as possible, but no simpler."
A. Einstein
Reply by ●June 17, 2004
"Rune Allnor" <allnor@tele.ntnu.no> wrote in message news:f56893ae.0406170228.2fe9b8d9@posting.google.com...
> Eh... I don't think the exponentials are caused by the system being
> LTI; rather, I think exponentials are eigenvectors of particular
> differential equations.

I'm not sure what you mean by "caused", but many different kinds of operators have eigenfunctions or eigen-something-elses. In all cases, the meaning of the term is simple: the result of the operator applied to an eigen-whatever-the-domain-is is that eigen-whatever multiplied by a scalar eigenvalue. All linear time-invariant systems have all of the exponentials of the above form as eigenfunctions.

> If you express the physical problem in a cylindrical or spherical
> coordinate system, the systems are LTI but some of the eigenvectors
> might be Bessel functions or Legendre polynomials instead of
> exponentials.

Those systems aren't linear and time-invariant.

--
Matt
Reply by ●June 17, 2004
"Bob Cain" <arcane@arcanemethods.com> wrote in message news:caslgn02uq1@enews2.newsguy.com...
> Thanks Matt. That's just what I'm looking for. Eigenstuff stands
> outside my education, and I'm always a bit mystified when people start
> speaking it.

Just remember that the term is as simple as I'm saying it is: the result of an operator applied to one of its eigen-whatevers is just that eigen-whatever multiplied by a scalar eigenvalue.

> > Other LTI systems can have additional eigenfunctions, but
> > exponentials like this are eigenfunctions of *every* LTI system.
>
> Got an example of such a system and its eigenfunction?

Every continuous filter is an LTI system. H(s) gives the eigenvalue for the eigenfunction e^(st). When H(s_i) is the same for multiple s_i abscissas, any linear combination of the corresponding e^(s_i t) eigenfunctions will be another eigenfunction. Zero-phase filters have sines and cosines as eigenfunctions, for example, because they have H(jw) = H(-jw).

A discrete filter is linear and time-invariant too, with the understanding that time shifts are integral multiples of the sample period. Similarly to the continuous case, H(z) gives the eigenvalue for the eigenfunction z^n.

> My question came from starting to read Mallat's "A Wavelet Tour Of
> Signal Processing"; he thinks in terms of vectors and Hilbert spaces.
> I doubt I'll get far with this, he presumes too formal a math
> education, but I thought I'd give it a go.

Ah, then he will be using a formal definition of the word "vector" that includes continuous functions, discrete functions, finite vectors, and anything else you can make a Hilbert space out of.

Note also that wavelet analysis is an explicit departure from the time-invariance of Fourier analysis, so all this LTI stuff may not apply. Eigenthings are used all over the place, but I don't remember the concept being fundamental to wavelet analysis, so I can't guess at what you would actually be seeing in Mallat's book.

--
Matt
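Matt's zero-phase example can be checked numerically. A sketch assuming a made-up symmetric (hence zero-phase) three-tap impulse response; the symmetry makes the frequency response real, so a cosine input comes out as a scaled cosine rather than a phase-shifted one:

```python
import numpy as np

# Zero-phase filter: symmetric taps h[-1], h[0], h[1] = 0.25, 0.5, 0.25
# (a made-up example).  Its frequency response H(e^{jw}) = 0.5 + 0.5*cos(w)
# is real, so H(jw) = H(-jw) and cosines are eigenfunctions.
w = 0.4 * np.pi
n = np.arange(100)
x = np.cos(w * n)

# Centered (noncausal) filtering: y[n] = 0.25*x[n+1] + 0.5*x[n] + 0.25*x[n-1]
y = (0.25 * np.cos(w * (n + 1))
     + 0.5 * np.cos(w * n)
     + 0.25 * np.cos(w * (n - 1)))

H = 0.5 + 0.5 * np.cos(w)       # the (real) eigenvalue at frequency w
print(np.allclose(y, H * x))    # True: the cosine is scaled, not shifted
```

The identity follows from cos(w(n+1)) + cos(w(n-1)) = 2 cos(wn) cos(w); a filter without the H(jw) = H(-jw) symmetry would instead shift the cosine's phase, and the cosine would not be an eigenfunction.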






