
Weird behaviour in the nLMS code.

Started by Jack L. December 18, 2004
Randy Yates wrote:
> "Jack L." <jack_nospam@nospam.dk> writes:
>

Randy, thanks a lot, that cleared all confusions! :-)

--
Mvh / Best regards,
Jack, Copenhagen
The email address is for real. :)
"Jack L."  wrote

> Hi Peter, so far, I don't seem to be able to exceed 0.2 for nLMS and about > 0.08 for LMS (otherwise, the system diverges/goes unstable). As the former > group managed to get up to 2.0 I'm wondering if my code is incorrect. o_O
I've seen cases where the LMS blows up for mu greater than 0.02, so that's not an issue. What was the value of "b" you used? Ciao, Peter K.
"Jack L." <jack_nospam@nospam.dk> wrote in message
news:%tWwd.76207$Vf.3615633@news000.worldonline.dk...
> [SNIP]
> Code:
>
> Global initialisations:
>
> #define TAPS              // filter length
> float ref_0[TAPS];        // delay line
> float w_0[TAPS];          // weight vector
> float euclidean = eps;
>
> Process_Sample()
> {
>     int n;
>     double y, error, norm_my, d;
>
>     /* Version 1 of the Euclidean norm BEGIN */
>     euclidean -= ref_0[TAPS-1]*ref_0[TAPS-1];
>     euclidean += ref_0[0]*ref_0[0];
>     /* Version 1 of the Euclidean norm END */
>
>     /* Version 2 of the Euclidean norm BEGIN */
>     euclidean = 0;
>     for (n = 0; n < TAPS; n++) {
>         euclidean += ref_0[n] * ref_0[n];
>     }
>     euclidean = sqrt(euclidean);
>     /* Version 2 of the Euclidean norm END */
>
>     // Delay line
>     for (n = TAPS-1; n >= 1; n--) {
>         ref_0[n] = ref_0[n-1];
>     }
>
>     // Inserting new sample into the delay line
>     ref_0[0] = u0_[0];
>
>     // FIR filter
>     y = w_0[0]*ref_0[0];
>     for (n = 1; n < TAPS; n++) {
>         y += w_0[n]*ref_0[n];
>     }
>
>     error = d - y;
>
>     norm_my = stepsize_0 * error / (b + euclidean);
>     for (n = 0; n < TAPS; n++) {
>         w_0[n] += ref_0[n] * norm_my;
>     }
> }
Jack,

A couple of comments:

By definition, normalized LMS divides the update term by the total power in the regressor, i.e. the sum of ref_0[i]*ref_0[i], not the sqrt of the sum of squares.

Version 1 is using a trick to do this sum of squares in one step, by subtracting the oldest squared term first and then adding the contribution of the newest. So you must initialize it to zero (or to epsilon). I have seen problems with version 1 on floating-point machines: when the input gets small or goes away, the subtract-then-add approach runs into precision problems and goes negative! (which will cause horrendous problems).

Also, you will note that the first time through your loop the power is zero, a situation which should be avoided. This means you should calculate the regressor power after doing the delay-line update.

Version 2 is doing the sum of squares directly, but then taking the sqrt, so the normalization term is not right (it may often be too small, which means dividing by too small a number, thus boosting your update term too much, thus requiring smaller step sizes).

You have to be careful about the order of doing the estimate, the LMS update and the buffer update. Here is an excerpt from a MATLAB m-file that I have used countless times over the years:

for icnt = 1:nump
    x(1) = xin(icnt);                % add newest input to buffer
    yh   = x'*b;                     % calculate estimate
    e    = ydes(icnt) - yh;          % calculate error
    b    = b + mu*e*x/(eps + x'*x);  % update forward weights
    x(2:nb) = x(1:nb-1);             % update buffer for estimate
end

Hope this helps,

Shawn
www.appliedsignalprocessing.com
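Putting those corrections back into the shape of Jack's C routine gives something like the sketch below. It is only an illustration of the ordering Shawn describes (delay-line update first, regressor power without the sqrt), not code from any of the posts; the value TAPS = 32, the name process_sample, and passing u0, d, stepsize_0 and b as parameters are choices made purely for the example.

#define TAPS 32                 /* example length only; the original post leaves this unspecified */

static float ref_0[TAPS];       /* tap-input (regressor) delay line */
static float w_0[TAPS];         /* weight vector */

/* One NLMS iteration: shift the delay line first, then compute the
   regressor power (sum of squares, no sqrt), then estimate and update. */
float process_sample(float u0, float d, float stepsize_0, float b)
{
    int n;
    double y = 0.0, power = 0.0, error, norm_my;

    /* Delay line: shift and insert the newest sample before anything else */
    for (n = TAPS - 1; n >= 1; n--)
        ref_0[n] = ref_0[n - 1];
    ref_0[0] = u0;

    /* Regressor power = squared Euclidean norm of the tap-input vector.
       Recomputing it in a double each call sidesteps the subtract-then-add
       precision problem, at the cost of TAPS extra multiplies per sample. */
    for (n = 0; n < TAPS; n++)
        power += (double)ref_0[n] * ref_0[n];

    /* FIR estimate */
    for (n = 0; n < TAPS; n++)
        y += (double)w_0[n] * ref_0[n];

    error = d - y;

    /* Normalized update; b > 0 keeps the division bounded when the input is quiet */
    norm_my = stepsize_0 * error / (b + power);
    for (n = 0; n < TAPS; n++)
        w_0[n] += (float)(ref_0[n] * norm_my);

    return (float)error;
}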
Peter J. Kootsookos wrote:
> "Jack L." wrote > > > What was the value of "b" you used? > > Ciao, > > Peter K.
Thanks for your response, Peter. Interesting, mu = 0.02 for the LMS? Could that have been the result of a very long filter? (given that the 0 < mu < 2 / M*Smax formula was used, where M = filter length, Smax = power spectral density of the input vector) I had set "b" to 0.001 or 0.0001. I've realised this number can be a bit too small because a small 'euclidean' (the variable) + a small 'b' can give a huge norm_my. I may have put a bit too much emphasis on Haykin's words: "delta is a small value". I'm going to try 0.5 and 1. The DSP board's voltage level is between +/- 1.4V, and generally, the signal exhibiting from the human ears we are interested in, is about 100 mV. -- Mvh / Best regards, Jack, Copenhagen The email address is for real :)
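(As a rough illustration of why a very small "b" bites, with numbers made up for the example rather than taken from the measurements above: if the delay line holds a quiet stretch so that 'euclidean' comes out around 0.0001, then with b = 0.0001 the term stepsize_0*error gets divided by 0.0002, i.e. multiplied by 5000; with b = 0.5 the same quiet stretch divides by roughly 0.5, and the update stays on the order of a plain LMS step.)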
Shawn Steenhagen wrote:
> "Jack L." <jack_nospam@nospam.dk> wrote in message > news:%tWwd.76207$Vf.3615633@news000.worldonline.dk... > > By definition, Normalize LMS divides the update term by the total > power in the regressor i.e. sum of w_0[i]*w_0[i]. not the the sqrt of > the sum of squares.
Shawn, thanks for your response. It's sort of confirming what I've seen, that is, the algorithm seems to perform much better by not taking the sqrt of the sum of sqares. But I get confused because in Simon Haykin's book "Adaptive Filter Theory", 4th edition, he says that mu is normalised with respect to the Euclidean norm. If I'm not wrong, the Euclidean norm is given by: sqrt(sum(w_0[i]*w_0[i])) <-- half MatLab, half C-notation. I don't know what to believe, the book or what I observe (+ what you say)?
> Also you will note that the first time through your loop the power is > zero, a situation which should be avoided. This means you should > calculate the regressor power after doing the delay line update. >
Yes, you're right. Thanks again for the correction and for the m-file - it shows how simple the nLMS algorithm can be. :) -- Mvh / Best regards, Jack, Copenhagen The email address is for real :)
Jack L. wrote:
> Shawn Steenhagen wrote:
> > "Jack L." <jack_nospam@nospam.dk> wrote in message
> > news:%tWwd.76207$Vf.3615633@news000.worldonline.dk...
> >
> But I get confused, because in Simon Haykin's book "Adaptive Filter
> Theory", 4th edition, he says that mu is normalised with respect to
> the Euclidean norm. If I'm not wrong, the Euclidean norm is given by:
> sqrt(sum(ref_0[i]*ref_0[i])) <-- half MATLAB, half C notation. I don't
> know what to believe, the book or what I observe (+ what you say)?

Okay, after re-reading the same paragraph in that book many times, looking for something to verify my words, I think I get it. Precisely, he writes that "the product vector u(n)e*(n) is normalized with respect to the SQUARED Euclidean norm of the tap-input vector u(n)". Since the Euclidean norm itself is given as written above, i.e. ||u(n)|| = sqrt(sum(ref_0[i]*ref_0[i])), squaring the whole thing cancels the sqrt, so we actually divide mu*u(n)*e(n) by the sum of ref_0[i]*ref_0[i]. Am I on the right track now? o_O Is it /that/ single word 'squared' I missed out before?

--
Mvh / Best regards,
Jack, Copenhagen
The email address is for real :)
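For reference, the update the thread converges on, written out in the same half-MATLAB/half-C notation (this is just the standard NLMS recursion restated, with Haykin's delta playing the role of "b" in the C routine above):

    ||u(n)||^2  = sum(ref_0[i]*ref_0[i])                          // squared Euclidean norm: the sqrt cancels
    w_0[i]     += (mu / (delta + ||u(n)||^2)) * error * ref_0[i]  // for each i

So "Version 2" should drop its sqrt, and the normalization really is by the regressor power, exactly as concluded above.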