Reply by Richard_K November 13, 2006
Hi Mike,
Many thanks.


>Hi Richard,
>In the 3rd edition, it's section 9.7: "Computer Experiment on Adaptive
>Equalization" (same title as 1st edition). Hope this helps,
>Mike
Reply by mike450exc November 12, 2006
Hi Richard,
In the 3rd edition, it's section 9.7: "Computer Experiment on Adaptive
Equalization" (same title as the 1st edition).  Hope this helps,
Mike



Reply by Richard_K November 12, 2006
Hi Randy,

Thanks for your help.

>"Richard_K" <ngyh80@hotmail.com> writes:
>
>> Hi Mike,
>>
>> I don't have Haykin, 3rd edition (I have the 4th edition). But may I know
>> what is "Experiment 1" in P.417 3rd Edition about?
>
>Richard,
>
>My edition is the first, so I'm not sure it would be applicable to
>your edition, but I bet the experiment Mike is referring to is
>Experiment 1 in section 5.16, "Computer Experiment on Adaptive
>Equalization." He uses a single value of mu (step-size) that is
>guaranteed to converge for four e.s.'s. The key result is a plot that
>shows the convergence (MSE, ensemble-averaged, versus iterations) for
>each e.s., where it is clear that the steady-state MSE is larger for
>larger e.s.
>
>I can't really comment further without seeing the actual code and
>thinking about the problem much harder. It seems that this could be
>related to numerical errors rather than inherent in the statistics of
>the signals. But I'm simply conjecturing.
>
>I believe the special case I provided does prove, however, that one
>cannot state without qualification that larger e.s.'s result in larger
>MSE.
>
>--Randy
Reply by Randy Yates November 11, 2006
"Richard_K" <ngyh80@hotmail.com> writes:

> Hi Mike,
>
> I don't have Haykin, 3rd edition (I have the 4th edition). But may I know
> what is "Experiment 1" in P.417 3rd Edition about?
Richard,

My edition is the first, so I'm not sure it would be applicable to
your edition, but I bet the experiment Mike is referring to is
Experiment 1 in section 5.16, "Computer Experiment on Adaptive
Equalization." He uses a single value of mu (step-size) that is
guaranteed to converge for four e.s.'s. The key result is a plot that
shows the convergence (MSE, ensemble-averaged, versus iterations) for
each e.s., where it is clear that the steady-state MSE is larger for
larger e.s.

I can't really comment further without seeing the actual code and
thinking about the problem much harder. It seems that this could be
related to numerical errors rather than inherent in the statistics of
the signals. But I'm simply conjecturing.

I believe the special case I provided does prove, however, that one
cannot state without qualification that larger e.s.'s result in larger
MSE.

--Randy
--
% Randy Yates                  % "Though you ride on the wheels of tomorrow,
%% Fuquay-Varina, NC           %  you still wander the fields of your
%%% 919-577-9882               %  sorrow."
%%%% <yates@ieee.org>          % '21st Century Man', *Time*, ELO
http://home.earthlink.net/~yatescr
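[Editor's note: for readers without the book, the experiment Randy describes can be sketched roughly as follows. This is an illustrative reconstruction, not Haykin's actual code: the raised-cosine channel h_n = 0.5*(1 + cos(2*pi/W*(n - 2))), n = 1, 2, 3, and the 11-tap LMS equalizer follow the textbook setup as best I recall it, while the step size, noise level, training delay, and ensemble size below are guessed parameters.]

```python
import numpy as np

rng = np.random.default_rng(1)

def lms_learning_curve(W, mu=0.025, n_taps=11, n_iter=1500, n_runs=50, delay=7):
    # Raised-cosine channel, n = 1, 2, 3; larger W => larger eigenvalue
    # spread (e.s.) of the equalizer input correlation matrix.
    h = 0.5 * (1.0 + np.cos(2.0 * np.pi / W * (np.arange(1, 4) - 2)))
    mse = np.zeros(n_iter)
    for _ in range(n_runs):
        a = rng.choice([-1.0, 1.0], size=n_iter + n_taps)        # BPSK data
        u = np.convolve(a, h)[: len(a)] + 0.01 * rng.standard_normal(len(a))
        w = np.zeros(n_taps)
        for k in range(n_iter):
            x = u[k : k + n_taps][::-1]          # equalizer tap-input vector
            d = a[k + n_taps - 1 - delay]        # delayed training symbol
            e = d - w @ x                        # a-priori error
            w += mu * e * x                      # LMS update
            mse[k] += e * e
    return mse / n_runs                          # ensemble-averaged MSE curve

# Two eigenvalue spreads: W = 2.9 (small e.s.) vs. W = 3.5 (large e.s.);
# the steady-state MSE is estimated from the tail of each learning curve.
results = {W: lms_learning_curve(W)[-200:].mean() for W in (2.9, 3.5)}
for W, s in results.items():
    print(f"W = {W}: ensemble-averaged steady-state MSE ~ {s:.4f}")
```

Plotting the full curves (MSE versus iteration for each W) reproduces the figure Randy describes: same step size, different convergence rates and steady-state levels.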
Reply by Richard_K November 11, 2006
Hi Mike,

Thanks a lot.

I am so happy to have found someone to discuss this problem with.

>>Hi Mike,
>>
>>I don't have Haykin, 3rd edition (I have the 4th edition). But may I know
>>what is "Experiment 1" in P.417 3rd Edition about?
>>
>>Thanks.
>>
>Hi Richard,
>My books are at work so I'll check when I get back there and then I'll
>post a summary of the experiment details so you can cross-reference to
>your edition.
>Mike
Reply by mike450exc November 11, 2006
>Hi Mike,
>
>I don't have Haykin, 3rd edition (I have the 4th edition). But may I know
>what is "Experiment 1" in P.417 3rd Edition about?
>
>Thanks.
Hi Richard,

My books are at work so I'll check when I get back there and then I'll
post a summary of the experiment details so you can cross-reference to
your edition.

Mike
Reply by Richard_K November 10, 2006
Hi Mike,

I don't have Haykin, 3rd edition (I have the 4th edition).  But may I know
what is "Experiment 1" in P.417 3rd Edition about?

Thanks.

>You beat me to it! P.417 3rd Edition, "Experiment 1".
>
>However, I've been checking my references and have so far been unable to
>locate a derivation that proves it.
>
>I have come across this same issue in a real-world system that I
>developed. I was attempting to train a Kalman-filter-based DFE which
>required the forward model of the channel. I had the input symbols to the
>channel and the received signal, but was unable to get a satisfactory SNR
>using an LMS to learn the channel model. After a lot of head-scratching,
>I decided to try both block regularized-least-squares and RLS algorithms
>to learn the channel, and instantly saw an increase of 6 dB in
>performance.
>
>After checking the correlation matrix of the input hard-symbol stream, I
>saw that it had a large condition number (large eigenvalue spread) due to
>the hard symbols having been precoded at some point (the hard symbols had
>spectral shaping).
>
>Now my interest is piqued so I'll see what I can find in terms of a
>mathematical proof/evidence.
Reply by mike450exc November 10, 2006
>But from the simulation results in Haykin's book, it has been shown that
>steady state error increases as eigenvalue spread increases.
>
>Hence, can we draw the conclusion that eigenvalue spread can affect the
>steady state error?
You beat me to it! P.417 3rd Edition, "Experiment 1".

However, I've been checking my references and have so far been unable to
locate a derivation that proves it.

I have come across this same issue in a real-world system that I
developed. I was attempting to train a Kalman-filter-based DFE which
required the forward model of the channel. I had the input symbols to the
channel and the received signal, but was unable to get a satisfactory SNR
using an LMS to learn the channel model. After a lot of head-scratching,
I decided to try both block regularized-least-squares and RLS algorithms
to learn the channel, and instantly saw an increase of 6 dB in
performance.

After checking the correlation matrix of the input hard-symbol stream, I
saw that it had a large condition number (large eigenvalue spread) due to
the hard symbols having been precoded at some point (the hard symbols had
spectral shaping).

Now my interest is piqued so I'll see what I can find in terms of a
mathematical proof/evidence.
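[Editor's note: the diagnosis mike describes is easy to reproduce in a few lines. This is an illustrative sketch, not his actual system: the duobinary-style shaping below is a stand-in for whatever precoder the real symbols went through, and the correlation-matrix order m is arbitrary.]

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 100_000, 8    # symbol count and correlation-matrix order (illustrative)
sym = rng.choice([-1.0, 1.0], size=n)          # white (uncoded) hard symbols

def eigenvalue_spread(x, m):
    # Toeplitz estimate of the m x m input correlation matrix from the
    # sample autocorrelation, and its condition number (eigenvalue spread).
    nx = len(x)
    r = np.array([np.dot(x[: nx - k], x[k:]) / (nx - k) for k in range(m)])
    R = np.array([[r[abs(i - j)] for j in range(m)] for i in range(m)])
    eig = np.linalg.eigvalsh(R)
    return eig.max() / eig.min()

# Duobinary-like shaping as a stand-in for the unknown precoder: adjacent
# symbols become strongly correlated, so the symbol spectrum is shaped.
shaped = (sym + np.roll(sym, 1)) / np.sqrt(2.0)

cw = eigenvalue_spread(sym, m)
cs = eigenvalue_spread(shaped, m)
print(f"white symbols:  eigenvalue spread ~ {cw:.2f}")
print(f"shaped symbols: eigenvalue spread ~ {cs:.2f}")
```

The white stream gives a condition number near 1; the shaped stream's is an order of magnitude larger, which is exactly the situation in which LMS struggles and RLS-type algorithms shine.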
Reply by Richard_K November 10, 2006
But from the simulation results in Haykin's book, it has been shown that
steady state error increases as eigenvalue spread increases.

Hence, can we draw the conclusion that eigenvalue spread can affect the
steady state error?



>Randy Yates <yates@ieee.org> writes:
>
>> MSEmin = E[d^2_k] - P^T R^{-1} P,
>>
>> where P = E[d_k x_k]. In the case where x_k = d_k, P = 0 and it doesn't
>> matter what R is.
>
>Whoa. That's wrong reasoning. P is not zero. P is E[d_k^2], but R^{-1}
>is diag(1/d_k^2) if d_k = x_k are uncorrelated, so the result is that
>P^T R^{-1} P = E[d^2_k] and MSEmin = 0.
>
>However, this puts no constraint on the eigenvalue spread of R. In
>this case, the eigenvalues are the diagonals of R and the spread can
>be anything depending on the statistics of x_k = d_k.
>
>The conclusion is still valid, therefore.
Reply by Randy Yates November 10, 2006
Randy Yates <yates@ieee.org> writes:

> "mike450exc" <mgraziano@ieee.org> writes:
>
>>> May I know how the eigenvalue spread affects the steady state error of
>>> LMS-type algorithms?
>>>
>>
>> An increase in the eigenvalue spread of the input correlation matrix will
>> cause an increase in steady state error. The LMS algorithm will also
>> converge more slowly as the eigenvalue spread increases.
>
> Hi Mike,
>
> I disagree. At least in one case MSE is not related to e.s.
>
> Once LMS has converged, the MSE is given by MSEmin + MSEdelta, where
> it can be shown that MSEdelta is not related to the e.s. (I derived it
> from [proakiscomm]).
>
> So the question is, does e.s. affect MSEmin? From [widrow],
>
> MSEmin = E[d^2_k] - P^T R^{-1} P,
>
> where P = E[d_k x_k]. In the case where x_k = d_k, P = 0 and it doesn't
> matter what R is.
Whoa. That's wrong reasoning. P is not zero. P is E[d_k^2], but R^{-1}
is diag(1/d_k^2) if d_k = x_k are uncorrelated, so the result is that
P^T R^{-1} P = E[d^2_k] and MSEmin = 0.

However, this puts no constraint on the eigenvalue spread of R. In
this case, the eigenvalues are the diagonals of R and the spread can
be anything depending on the statistics of x_k = d_k.

The conclusion is still valid, therefore.
--
% Randy Yates                  % "The dreamer, the unwoken fool -
%% Fuquay-Varina, NC           %  in dreams, no pain will kiss the brow..."
%%% 919-577-9882               %
%%%% <yates@ieee.org>          % 'Eldorado Overture', *Eldorado*, ELO
http://home.earthlink.net/~yatescr
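[Editor's note: the MSEmin point above can be checked numerically. This is a sketch with made-up numbers: the tap powers, the filter w_opt, and the sample count are arbitrary choices; only Widrow's formula MSEmin = E[d^2_k] - P^T R^{-1} P comes from the discussion. Here d_k is exactly representable by the filter, so MSEmin is zero no matter how large the eigenvalue spread of R is.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncorrelated input taps with very unequal powers, so R is (nearly)
# diagonal with a large eigenvalue spread.
N = 200_000
powers = np.array([1.0, 4.0, 25.0])                 # per-tap variances
X = rng.standard_normal((N, 3)) * np.sqrt(powers)   # input vectors x_k
w_opt = np.array([0.5, -1.0, 2.0])                  # arbitrary true filter
d = X @ w_opt                                       # desired signal d_k

R = (X.T @ X) / N      # input correlation matrix, ~diag(powers)
P = (X.T @ d) / N      # cross-correlation vector E[d_k x_k]
mse_min = np.mean(d**2) - P @ np.linalg.solve(R, P)  # Widrow's MSEmin

eig = np.linalg.eigvalsh(R)
spread = eig.max() / eig.min()
print(f"eigenvalue spread of R: {spread:.1f}")   # ~25, by construction
print(f"MSEmin: {mse_min:.3e}")                  # ~0, regardless of spread
```

This supports Randy's conclusion: the eigenvalue spread of R by itself puts no lower bound on MSEmin.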