May I know how the eigenvalue spread affects the steady state error of LMS-type algorithms?
Eigenvalue spread and steady state error
Started by ●November 9, 2006
Reply by ●November 10, 2006
>May I know how the eigenvalue spread affects the steady state error of
>LMS-type algorithms?

An increase in the eigenvalue spread of the input correlation matrix will cause an increase in steady state error. The LMS algorithm will also converge more slowly as the eigenvalue spread increases.
Reply by ●November 10, 2006
Many thanks for your help. From my literature review, I found that most of the books (Haykin, Widrow, ...) prove that a higher eigenvalue spread results in slower convergence. But how can one mathematically prove that an increase in eigenvalue spread causes an increase in steady state error as well? Please help. Thanks.

>An increase in the eigenvalue spread of the input correlation matrix will
>cause an increase in steady state error. The LMS algorithm will also
>converge more slowly as the eigenvalue spread increases.
Reply by ●November 10, 2006
"mike450exc" <mgraziano@ieee.org> writes:
>> May I know how the eigenvalue spread affects the steady state error of
>> LMS-type algorithms?
>
> An increase in the eigenvalue spread of the input correlation matrix will
> cause an increase in steady state error. The LMS algorithm will also
> converge more slowly as the eigenvalue spread increases.

Hi Mike,

I disagree. At least in one case the MSE is not related to the eigenvalue spread (e.s.).

Once the LMS has converged, the MSE is given by MSEmin + MSEdelta, where it can be shown that MSEdelta is not related to the e.s. (I derived this from [proakiscomm]).

So the question is, does the e.s. affect MSEmin? From [widrow],

  MSEmin = E[d^2_k] - P^T R^{-1} P,

where P = E[d_k x_k]. In the case where x_k = d_k, P = 0 and it doesn't matter what R is.

I suspect the same result can be shown in general, but I lack the time to derive it.

--
% Randy Yates            % "My Shangri-la has gone away, fading like
%% Fuquay-Varina, NC     % the Beatles on 'Hey Jude'"
%%% 919-577-9882         %
%%%% <yates@ieee.org>    % 'Shangri-La', *A New World Record*, ELO
http://home.earthlink.net/~yatescr
Reply by ●November 10, 2006
Randy Yates <yates@ieee.org> writes:

@book{proakiscomm,
  title     = "{Digital Communications}",
  author    = "John~G.~Proakis",
  publisher = "McGraw-Hill",
  edition   = "fourth",
  year      = "2001"}

@book{widrow,
  title     = "Adaptive Signal Processing",
  author    = "Bernard Widrow and Samuel D. Stearns",
  publisher = "Prentice-Hall",
  edition   = "third",
  year      = "1985"}

--
% Randy Yates            % "Though you ride on the wheels of tomorrow,
%% Fuquay-Varina, NC     % you still wander the fields of your
%%% 919-577-9882         % sorrow."
%%%% <yates@ieee.org>    % '21st Century Man', *Time*, ELO
http://home.earthlink.net/~yatescr
Reply by ●November 10, 2006
Randy Yates <yates@ieee.org> writes:
> @book{widrow,
>   title     = "Adaptive Signal Processing",
>   author    = "Bernard Widrow and Samuel D. Stearns",
>   publisher = "Prentice-Hall",
>   edition   = "third",
>   year      = "1985"}

The edition is first, not third. (The dangers of copying bibliography entries in LaTeX's BibTeX...)

--
% Randy Yates            % "Ticket to the moon, flight leaves here today
%% Fuquay-Varina, NC     % from Satellite 2"
%%% 919-577-9882         % 'Ticket To The Moon'
%%%% <yates@ieee.org>    % *Time*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
Reply by ●November 10, 2006
Randy Yates <yates@ieee.org> writes:
> So the question is, does the e.s. affect MSEmin? From [widrow],
>
>   MSEmin = E[d^2_k] - P^T R^{-1} P,
>
> where P = E[d_k x_k]. In the case where x_k = d_k, P = 0 and it
> doesn't matter what R is.

Whoa. That's wrong reasoning. P is not zero: P = E[d_k^2]. But R^{-1} is diag(1/E[d_k^2]) if the d_k = x_k are uncorrelated, so the result is that P^T R^{-1} P = E[d^2_k] and MSEmin = 0.

However, this puts no constraint on the eigenvalue spread of R. In this case, the eigenvalues are the diagonals of R and the spread can be anything, depending on the statistics of x_k = d_k.

The conclusion is therefore still valid.

--
% Randy Yates            % "The dreamer, the unwoken fool -
%% Fuquay-Varina, NC     % in dreams, no pain will kiss the brow..."
%%% 919-577-9882         %
%%%% <yates@ieee.org>    % 'Eldorado Overture', *Eldorado*, ELO
http://home.earthlink.net/~yatescr
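[Editor's note: Randy's point that MSEmin need not depend on the eigenvalue spread can be illustrated numerically. The sketch below is my own construction, not from the thread: d_k is generated from the input through a fixed optimal filter plus independent noise, so the Wiener bound MSEmin = E[d^2_k] - P^T R^{-1} P equals the noise variance no matter how the input correlation matrix is scaled.]

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 100_000, 4
w_opt = np.array([0.5, -1.0, 0.25, 0.8])   # hypothetical optimal filter
sigma_v = 0.1                              # MSEmin should equal sigma_v**2 = 0.01

mse_mins, conds = [], []
for spread in [1.0, 10.0, 100.0]:
    # independent input components scaled so R ~ diag(1, ..., spread)
    scales = np.sqrt(np.linspace(1.0, spread, n))
    x = rng.standard_normal((N, n)) * scales
    d = x @ w_opt + sigma_v * rng.standard_normal(N)

    R = x.T @ x / N                        # input correlation matrix
    P = x.T @ d / N                        # cross-correlation vector
    mse_mins.append(np.mean(d**2) - P @ np.linalg.solve(R, P))
    conds.append(np.linalg.cond(R))

for c, m in zip(conds, mse_mins):
    print(f"cond(R) ~ {c:7.1f}   MSEmin ~ {m:.4f}")
```

MSEmin comes out at the noise floor for every spread, consistent with the argument that the eigenvalue spread constrains convergence speed (and the excess MSE for a given step size) rather than the Wiener minimum itself.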
Reply by ●November 10, 2006
But from the simulation results in Haykin's book, it has been shown that the steady state error increases as the eigenvalue spread increases. Hence, can we draw the conclusion that the eigenvalue spread can affect the steady state error?

>However, this puts no constraint on the eigenvalue spread of R. In
>this case, the eigenvalues are the diagonals of R and the spread can
>be anything, depending on the statistics of x_k = d_k.
>
>The conclusion is therefore still valid.
Reply by ●November 10, 2006
>But from the simulation results in Haykin's book, it has been shown that
>steady state error increases as eigenvalue spread increases.
>
>Hence, can we draw the conclusion that eigenvalue spread can affect the
>steady state error?

You beat me to it! P.417, 3rd Edition, "Experiment 1".

However, I've been checking my references and have so far been unable to locate a derivation that proves it.
I have come across this same issue in a real-world system that I developed. I was attempting to train a Kalman-filter-based DFE, which required the forward model of the channel. I had the input symbols to the channel and the received signal, but was unable to get a satisfactory SNR using LMS to learn the channel model. After a lot of head-scratching, I decided to try both a block regularized-least-squares and an RLS algorithm to learn the channel, and instantly saw an increase of 6 dB in performance.

After checking the correlation matrix of the input hard-symbol stream, I saw that it had a large condition number (large eigenvalue spread) because the hard symbols had been precoded at some point (they had spectral shaping).

Now my interest is piqued, so I'll see what I can find in terms of a mathematical proof/evidence.
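[Editor's note: the block regularized-least-squares workaround described above can be sketched as follows. Everything here is hypothetical: the channel taps, the noise level, and the "precoder" (a simple differencing filter standing in for whatever spectral shaping the real system used, which puts a null at DC and makes the symbol correlation matrix ill-conditioned).]

```python
import numpy as np

rng = np.random.default_rng(7)
N, L = 4000, 6
h = np.array([0.9, -0.4, 0.2, 0.1, -0.05, 0.02])  # hypothetical channel

sym = rng.choice([-1.0, 1.0], size=N)
sym = np.convolve(sym, [1.0, -1.0])[:N]   # toy "precoder": spectral shaping
r = np.convolve(sym, h)[:N] + 0.02 * rng.standard_normal(N)

# data matrix A with A[n, k] = sym[n - k]; zero out np.roll's wrap-around rows
A = np.column_stack([np.roll(sym, k) for k in range(L)])
A[:L, :] = 0.0

# block regularized least squares: h_hat = argmin ||r - A h||^2 + N*delta*||h||^2
R = A.T @ A / N                           # correlation matrix of the symbol stream
p = A.T @ r / N
delta = 1e-3
h_hat = np.linalg.solve(R + delta * np.eye(L), p)

print("cond(R) ~", round(np.linalg.cond(R), 1))   # large for shaped symbols
print("||h_hat - h|| =", np.linalg.norm(h_hat - h))
```

Because the block solver inverts R directly (with a small diagonal load for numerical safety), its accuracy is not throttled by the small eigenvalues the way a gradient-descent LMS is, which matches the performance jump reported above.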
Reply by ●November 10, 2006
Hi Mike,

I don't have Haykin, 3rd edition (I have the 4th edition). But may I know what "Experiment 1" on P.417 of the 3rd Edition is about? Thanks.

>You beat me to it! P.417, 3rd Edition, "Experiment 1".
>
>However, I've been checking my references and have so far been unable to
>locate a derivation that proves it.