
Karhunen-Loeve Expansion of a Wiener Process and Eigenvalues/Eigenfunctions of a Function

Started by Randy Yates December 15, 2003
We covered KL expansion in my Random Processes course and in reviewing
for the final I found that I'm really lost.

Part of my problem is that in performing the expansion you must find the
eigenfunctions of a *function*. I don't understand that. The only place
I've ever seen eigen-stuff is in linear algebra, and there we found the
eigenvalues and eigen*vectors* of a square matrix. I've never seen the
eigenfunction and eigenvalue of a *function*.

It seems to be defined as the solution to the equation

   \int_{0}^{T} Cxx(t, s) \phi(s) ds = \lambda \phi(t)

which I guess you solve by differentiating and then solving as a
differential equation.
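
If I push on that guess for the Wiener process in the title (assuming
unit intensity, so Cxx(t, s) = min(t, s)), I seem to get somewhere.
Splitting the integral,

   \int_{0}^{t} s \phi(s) ds + t \int_{t}^{T} \phi(s) ds = \lambda \phi(t)

Differentiating once with respect to t,

   \int_{t}^{T} \phi(s) ds = \lambda \phi'(t)

and differentiating again,

   -\phi(t) = \lambda \phi''(t)

with \phi(0) = 0 and \phi'(T) = 0 falling out of the first two equations.
That would give

   \phi_n(t) = \sqrt{2/T} \sin((n - 1/2) \pi t / T),
   \lambda_n = (T / ((n - 1/2) \pi))^2

if I haven't botched the algebra.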

Anyway, any pointers, hints, jokes, etc. would be appreciated.
-- 
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %       'cause no one knows which side
%%% 919-577-9882                %                   the coin will fall."
%%%% <yates@ieee.org>           %  'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr
Randy Yates wrote:
> We covered KL expansion in my Random Processes course and in reviewing
> for the final I found that I'm really lost.
>
> Part of my problem is that in performing the expansion you must find the
> eigenfunctions of a *function*. I don't understand that. The only place
> I've ever seen eigen-stuff is in linear algebra, and there we found the
> eigenvalues and eigen*vectors* of a square matrix. I've never seen the
> eigenfunction and eigenvalue of a *function*.
>
> It seems to be defined as the solution to the equation
>
>    \int_{0}^{T} Cxx(t, s) \phi(s) ds = \lambda \phi(t)
Hi Randy,

it looks like you are looking for eigenfunctions not of a function but of
a linear operator (like the integral operator you defined above). In this
sense, it is the same as looking for eigenvectors in linear algebra. The
linear operator is defined as a linear map over a function space, i.e.
the argument vectors are functions. For example, the Fourier transform is
an injective linear operator on Schwartz (vector) space. It has the
Gaussian pulse as an eigenfunction with eigenvalue 1 - you can plug a j
somewhere in the pulse, and then you also get an eigenfunction with
eigenvalue j (or -j, can't remember).
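
If you want to see that numerically, here is a quick sketch (Python/numpy;
the grid and pulse width are arbitrary choices) that approximates the
unitary Fourier transform by a Riemann sum and checks that the Gaussian
comes back unchanged:

   import numpy as np

   N, dt = 1024, 0.05
   t = (np.arange(N) - N // 2) * dt        # symmetric time grid
   x = np.exp(-t**2 / 2)                   # Gaussian pulse

   # Unitary FT, X(w) = (2 pi)^(-1/2) \int x(t) exp(-j w t) dt,
   # approximated by a Riemann sum on a modest frequency grid
   w = np.linspace(-10, 10, 201)
   X = np.array([np.sum(x * np.exp(-1j * wk * t)) * dt for wk in w])
   X /= np.sqrt(2 * np.pi)

   print(np.max(np.abs(X - np.exp(-w**2 / 2))))   # ~0: eigenvalue 1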
> which I guess you solve by differentiating and then solving as a
> differential equation.
This strongly depends on the problem, and in the general case there is no closed form solution. Some results of functional analysis can guarantee the existence of solutions depending on the operator definition and image range, but finding them isn't easy.
> Anyway, any pointers, hints, jokes, etc. would be appreciated.
Sorry, can't think of a joke right now :).

Regards,
Andor
They have to be eigenfunctions so that the coefficients are 
uncorrelated.  That's the point of the expansion - you can pick
coefficients independently and produce an instance of the process.

Conversely, if you throw out the coefficients with the smallest energy, 
that energy loss is all the damage you do.

Stationary processes have sines and cosines as eigenfunctions, and
the |coefficients|^2 are the power spectrum.  To generate a realization,
you pick coefficients with variance equal to the power spectrum
and random phase, and add them up.  If it's Gaussian, there's nothing
more to say about the process.
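
A concrete sketch of that recipe (Python/numpy; the spectrum shape and
the scaling convention are arbitrary picks here, nothing canonical):

   import numpy as np

   rng = np.random.default_rng(0)
   N = 4096
   f = np.fft.rfftfreq(N)                  # normalized frequencies, 0..0.5
   S = 1.0 / (1.0 + (f / 0.05)**2)         # an assumed example power spectrum

   # |coefficient|^2 proportional to the power spectrum, phase uniform
   phase = rng.uniform(0.0, 2.0 * np.pi, f.size)
   X = np.sqrt(S) * np.exp(1j * phase)
   X[0] = 0.0                              # force zero mean
   X[-1] = X[-1].real                      # Nyquist bin must be real

   x = np.fft.irfft(X)                     # one realization: a sum of sinusoids

Strictly, a Gaussian process wants Rayleigh-distributed magnitudes too,
not fixed ones, but with many bins the fixed-magnitude version comes out
close to Gaussian anyway.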
-- 
Ron Hardin
rhhardin@mindspring.com

On the internet, nobody knows you're a jerk.
Randy Yates <yates@ieee.org> wrote in message news:<zecDb.395$SX1.54@newsread2.news.atl.earthlink.net>...
> We covered KL expansion in my Random Processes course and in reviewing
> for the final I found that I'm really lost. [..]
>
> It seems to be defined as the solution to the equation
>
>    \int_{0}^{T} Cxx(t, s) \phi(s) ds = \lambda \phi(t)
>
> which I guess you solve by differentiating and then solving as a
> differential equation.
Nope, it's not an eigen expansion of a function; it could be an eigen
expansion of a functionAL, although I suspect the term "operator" to be
better (at least according to Young: "An Introduction to Hilbert Space",
Cambridge, 1988). It's basically the same thing as the eigen
decomposition of a matrix:

   (L - \lambda)\phi(t) = 0,   \phi(t) \not= 0,

where L is the integral operator

   (L\phi)(t) = \int_{0}^{T} Cxx(t, s) \phi(s) ds.

The course on real analysis I took ten years ago stopped just short of
actually evaluating such eigen expansions. The key is that the integral
equation in some (to me) mystical manner is related to a differential
equation that acts as an inverse operator to the integral operator L,
and the function \phi(t) is then an eigenfunction of that differential
operator. So if you end up with the inverse of L, L^{-1}, being the
Laplace operator in Cartesian coordinates, the eigenfunctions \phi(t)
become the solutions to that differential equation, i.e.

   \phi(t) = A\exp(-j\omega t) + B\exp(j\omega t).

Or something like that. It's really interesting stuff, but I have never
seen anyone take this further from an engineering point of view. The
maths book we used (Young, cited above) basically said that linear
operators of these types are exhausted from a maths research point of
view. Which probably means that the mathematicians find it very boring
and prefer to do other things.
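
To make the matrix analogy concrete, you can discretize the operator and
hand it to an ordinary eigensolver. A sketch (Python/numpy, my own) for
the Wiener-process case in the subject line, Cxx(t, s) = min(t, s), whose
eigenvalues are known to be (T / ((n - 1/2) \pi))^2:

   import numpy as np

   T, N = 1.0, 500
   dt = T / N
   t = (np.arange(N) + 0.5) * dt           # midpoint grid on (0, T)
   C = np.minimum.outer(t, t)              # Cxx(t, s) = min(t, s)

   # \int Cxx(t,s) phi(s) ds = lambda phi(t) discretizes to (C dt) v = lambda v
   lam = np.linalg.eigvalsh(C * dt)[::-1]  # largest eigenvalues first

   n = np.arange(1, 6)
   analytic = (T / ((n - 0.5) * np.pi))**2
   print(np.c_[lam[:5], analytic])         # the two columns agree closely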
> Anyway, any pointers, hints, jokes, etc. would be appreciated.
I'm sorry I can't be of more specific help. My response could, if
nothing else, perhaps be classified as a "joke"...

Rune
Randy Yates <yates@ieee.org> wrote in message news:<zecDb.395$SX1.54@newsread2.news.atl.earthlink.net>...
> We covered KL expansion in my Random Processes course [..]
>
> Part of my problem is that in performing the expansion you must find the
> eigenfunctions of a *function*. I don't understand that. The only place
> I've ever seen eigen-stuff is in linear algebra, and there we found the
> eigenvalues and eigen*vectors* of a square matrix. I've never seen the
> eigenfunction and eigenvalue of a *function*.
Hello Randy,

To better understand this, just go back to the definition of an
eigensystem, namely

   L x = \lambda x,

where the operator L operates on a function (vector) x and results in
the same function (vector) simply scaled by a constant, \lambda. You
already said you are familiar with the linear algebra case, but you
actually know another: when you input a sinusoid into a linear filter,
you get the same sinusoid out, simply scaled. Since the scaling can be
complex, this incorporates the possible phase shift. So sinusoids are
eigenfunctions, and the gain/phase is the corresponding eigenvalue for
the operator L representing the filter. Note how in this case there are
an infinite number of eigenfunctions (since frequency is a continuous
variable).

I hope this makes some sense now. When/if you get into the theory of
Green's functions, you will find out how to expand a delta function into
a complete set of eigenfunctions. It is a neat way of solving some types
of differential equations.

Clay
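
P.S. A small numerical illustration (the filter and the test frequency
are arbitrary picks):

   import numpy as np
   from scipy import signal

   h = signal.firwin(31, 0.2)              # an arbitrary FIR lowpass as L
   f0 = 0.05                               # test frequency, cycles/sample
   n = np.arange(2000)
   x = np.cos(2 * np.pi * f0 * n)          # sinusoidal "eigenfunction" in
   y = signal.lfilter(h, 1.0, x)

   # Eigenvalue = the complex gain H(f0): same sinusoid out, just scaled
   # in amplitude and shifted in phase
   _, H = signal.freqz(h, worN=[2 * np.pi * f0])
   expected = np.abs(H[0]) * np.cos(2 * np.pi * f0 * n + np.angle(H[0]))
   print(np.max(np.abs(y[100:] - expected[100:])))   # ~0 past the transient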
physics@bellsouth.net (Clay S. Turner) wrote in message news:<1ae4d220.0312160612.4dd3fbe8@posting.google.com>...

> When/if you get into the theory of Green's functions, you will find out
> how to expand a delta function into a complete set of eigenfunctions.
> It is a neat way of solving some types of differential equations.
>
> Clay
Hi Clay.

Speaking of the theory of Green's functions, that's where I got lost ten
years ago when I took that intro course on real analysis. After having
gone through some basic properties of Banach spaces, linear spaces, the
Cauchy-Schwarz inequality, and linear functionals and operators, the
last item before jumping onto the Green's function was the "compact
operator". Now, I have some very vague impression of what such an animal
might look like, but I can't really come to grips with it.

The point that really confused me was some property of the series
(eigenfunction?) expansion of the compact operator: for a compact
operator to exist, it was sufficient that a *subseries* of this series
expansion converged. I have (well, had) no problems envisioning an
operator where the *total* series expansion converged, but when it was
sufficient that merely a *sub*series converged, I was lost.

Unfortunately, these weird "compact operators" appear both in the theory
of differential equations and in optimization problems, so it would
certainly be useful to understand something more about them. Do you know
of any exposition of "compact operators" that an electrical engineer
like me can have a hope of understanding? I have a somewhat masochistic
attitude to maths: I can go through great pain in trying to understand
something if I think it to be useful. But not at any cost.

Rune
allnor@tele.ntnu.no (Rune Allnor) wrote in message news:<f56893ae.0312161046.12722@posting.google.com>...
> physics@bellsouth.net (Clay S. Turner) wrote in message news:<1ae4d220.0312160612.4dd3fbe8@posting.google.com>...
[..]
> Do you know of any exposition of "compact operators" that an electrical
> engineer like me can have a hope of understanding? I have a somewhat
> masochistic attitude to maths: I can go through great pain in trying to
> understand something if I think it to be useful. But not at any cost.
An amazingly clearly written book:

   Vulikh, B. Z., Introduction to Functional Analysis for Scientists,
   ISBN 1114540714

(or see most any book on functional analysis).

Vulikh uses the term "completely continuous" instead of "compact". Among
the real numbers, a bounded sequence always has a convergent subsequence
(see Bolzano-Weierstrass); not so among functions. This property of the
real numbers is at the heart of any numerical approximation method. A
finite dimensional vector space has the same property, where distance
(boundedness, etc.) is understood in the Euclidean sense.

A compact operator transforms any bounded set of elements, whose
sequences do not necessarily have convergent subsequences, into a set
that is bounded and does have convergent subsequences. All square
matrices are of course compact, because in finite dimensions a bounded
set is transformed into a bounded set. What is interesting is that under
certain symmetry conditions, because of this "compactness" property, a
compact operator over an infinite dimensional space can be approximated
by a sequence of finite dimensional operators that are just symmetric
matrices!

The most famous example is what you are referring to: if K(x,y) is a
continuous function of two variables over the finite closed rectangle
a <= x <= b, c <= y <= d, then the integral operator between the sets of
continuous functions F over [a,b] and G over [c,d],

   g = K[f] = \int_a^b K(x,y) f(y) dy = g(x),

is compact. Here distance is measured in the mean square sense, that is,
the (squared) distance between u(x) and v(x) is

   \int_a^b |u(x) - v(x)|^2 dx.

If in addition to this we also have K(x,y) = K(y,x), that is, a
symmetric kernel (or K(x,y) = K(y,x)* if it is complex), then the
integral operator can be expanded as a convergent series in terms of its
eigenfunctions and eigenvalues, etc., in a form reminiscent of a matrix
decomposition (the Mercer-Hilbert theorem).

hope this helps

Robert
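
P.S. If it helps to see the Mercer-Hilbert statement numerically, here is
a rough sketch (mine, not from Vulikh) that discretizes an assumed
symmetric kernel and uses the matrix as the finite dimensional stand-in:

   import numpy as np

   N = 300
   x = (np.arange(N) + 0.5) / N            # midpoint grid on [0, 1]
   K = np.exp(-np.abs(np.subtract.outer(x, x)))  # assumed K(x,y) = K(y,x)

   dx = 1.0 / N
   lam, v = np.linalg.eigh(K * dx)         # matrix stand-in for the operator
   phi = v / np.sqrt(dx)                   # eigenvectors -> unit-norm functions

   # Mercer: K(x,y) = sum_n lam_n phi_n(x) phi_n(y); truncate and compare
   idx = np.argsort(lam)[::-1][:50]
   K_hat = (phi[:, idx] * lam[idx]) @ phi[:, idx].T
   print(np.max(np.abs(K - K_hat)))        # shrinks as more terms are kept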
rge11x@netscape.net (robert egri) wrote in message news:<b4e32a30.0312170634.73db8bf6@posting.google.com>...
> A compact operator transforms any bounded set of elements, whose
> sequences do not necessarily have convergent subsequences, into a set
> that is bounded and does have convergent subsequences. [..] What is
> interesting is that under certain symmetry conditions, because of this
> "compactness" property, a compact operator over an infinite dimensional
> space can be approximated by a sequence of finite dimensional operators
> that are just symmetric matrices!
Right. This is what struck me as useful (admittedly after the course teacher had "struck" the class with all kinds of motivations and excuses for doing these things).
> The most famous example is what you are referring to: if K(x,y) is a
> continuous function of two variables over the finite closed rectangle
> a <= x <= b, c <= y <= d, then the integral operator between the sets
> of continuous functions F over [a,b] and G over [c,d],
>
>    g = K[f] = \int_a^b K(x,y) f(y) dy = g(x),
>
> is compact.
This looks just like a matrix product, which reinforces the impression
of a very strong link between matrices and operators.
> Here distance is measured in the mean square sense, that is, the
> (squared) distance between u(x) and v(x) is
>
>    \int_a^b |u(x) - v(x)|^2 dx.
Which means that the operator is bounded in the L_2 norm, right?
> If in addition to this we also have K(x,y) = K(y,x), that is, a
> symmetric kernel (or K(x,y) = K(y,x)* if it is complex), then the
> integral operator can be expanded as a convergent series in terms of
> its eigenfunctions and eigenvalues, etc., in a form reminiscent of a
> matrix decomposition (the Mercer-Hilbert theorem).
>
> hope this helps
It sure does. I only wish there was a "Rick Lyons of mathematics" who
would take the time to write an understandable and accurate introduction
to these things. I don't think it's all that difficult, once all that
maths lingo is sorted out. We are, after all, discussing "mere" geometry.

Rune
allnor@tele.ntnu.no (Rune Allnor) wrote in message news:<f56893ae.0312171203.d55ec45@posting.google.com>...
> > The most famous example is what you are referring to: if K(x,y) is a
> > continuous function of two variables over the finite closed rectangle
> > a <= x <= b, c <= y <= d, then the integral operator [..]
> >
> >    g = K[f] = \int_a^b K(x,y) f(y) dy = g(x),
> >
> > is compact.
>
> This looks just like a matrix product, which reinforces the impression
> of a very strong link between matrices and operators.
This is exactly the point. An integral operator with a continuous kernel
is just like a matrix in the sense that it is representable as a
convergent sequence of _finite_ dimensional operators, i.e. square
matrices. If the kernel is not continuous, then there is no such
guarantee and the resemblance is only cosmetic.
> > Here distance is measured in the mean square sense, that is, the
> > (squared) distance between u(x) and v(x) is
> >
> >    \int_a^b |u(x) - v(x)|^2 dx.
>
> Which means that the operator is bounded in the L_2 norm, right?
Boundedness of an operator means that a bounded set is mapped into a
bounded set. Compactness is stronger in the sense that the mapped set is
bounded and also has a convergent subsequence. The existence of a
convergent subsequence does not follow from boundedness; it depends,
among other things, on the definition of distance.

A trivial example of a bounded but non-compact operator is the identity
over l^2, the space of square summable sequences (x1, x2, x3, ...). The
unit vectors (1,0,0,0,...), (0,1,0,0,...), etc., obviously form a
bounded set, but there is no convergent subsequence, and the identity
operator will not produce one either. Of course, any other orthonormal
sequence of functions will do just as well as a counter-example over
L^2. Notice that to represent the identity operator as an integral
operator you need the kernel K(x,y) = delta(x-y), which function is not
continuous!
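
Numerically you can at least see the flavor of the counter-example (a
throwaway numpy sketch over a finite slice of l^2, so only suggestive):

   import numpy as np

   k = 6
   E = np.eye(k)                           # unit vectors e_1 .. e_k, norm 1

   # Every pair sits sqrt(2) apart, so no subsequence of the e_n can be
   # Cauchy, and the identity map obviously preserves that
   D = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=2)
   print(D)                                # off-diagonal entries: sqrt(2)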
> > If in addition to this we also have K(x,y) = K(y,x), that is, a
> > symmetric kernel (or K(x,y) = K(y,x)* if it is complex), then the
> > integral operator can be expanded as a convergent series in terms of
> > its eigenfunctions and eigenvalues, etc., in a form reminiscent of a
> > matrix decomposition (the Mercer-Hilbert theorem).
>
> It sure does. I only wish there was a "Rick Lyons of mathematics" who
> would take the time to write an understandable and accurate
> introduction to these things. [..]
>
> Rune
I share your frustration... Rick is indeed a rare creature in science education!
Randy Yates wrote:

> We covered KL expansion in my Random Processes course and in reviewing
> for the final I found that I'm really lost. [..]
>
> Anyway, any pointers, hints, jokes, etc. would be appreciated.
Dear Andor, Ron, Rune, Clay, and Robert,

I thank each and every one of you for your responses. The final is over
and I am awaiting grades, but he didn't ask a thing about KL except
briefly in a multiple choice. Obviously he was giving us a "survey" of
the technique.

It will apparently take some more study to understand it like you
fellows do - I hope some day I will have the chance to do just that.
Until then you will have to be satisfied in having planted a seed.

-- 
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %       'cause no one knows which side
%%% 919-577-9882                %                   the coin will fall."
%%%% <yates@ieee.org>           %  'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr