Reply by Kesh June 18, 2007
> Hello. I have a toy situation in which,
>
> given the desired signal d(n),
>
> d(n) = sin(100*pi*n+pi/3);
>
> and a noisy signal, where v(n) is white noise with known mean and variance,
>
> x(n) = d(n) + v(n);
> According to "Statistical digital processing and modeling" by Hayes, the > output error of LMS, for the same filter length, should be always greater > than xi_min when the LMS filter reaches a steady state, because > > xi_LMS(inf) = xi_min + xi_excess(n) > > but in my case the LMS filter with fair values of the mu parameter (e.g. > 10^-5) performs much better than the optimum filter, i.e. xi_excess is > negative!! > Is there an interpretation for that, or it's more likely that I did an > error writing the code? I suppose that if there's a problem, it should be > in the optimum filter computation...
While I'm not entirely ruling out an error in your code :P, yes, such behavior, the so-called non-Wiener or nonlinear phenomenon, does occur with the LMS algorithm when you have strong narrowband input signals. My intuitive interpretation of this effect is that if you have a periodic, deterministic (or almost deterministic) signal as the reference input, the LMS algorithm updates its weights so that their time-varying dynamic behavior (which we usually regard as the bad guy, since it shows up as misadjustment) works to our advantage and further reduces the error. In other words, the optimal filter then becomes a time-varying Wiener filter, as opposed to the conventional time-invariant Wiener filter. I suspect that as you reduce the sinusoid power in x(n) with respect to the power of v(n), you will see the LMS error level approach and then surpass the Wiener error level.

Hope this helps,
Kesh
>
> Thanks in advance.
> Gabriele Paganelli
Reply by Tim Wescott June 18, 2007
gabinet wrote:
> Hello. I have a toy situation in which,
>
> given the desired signal d(n),
>
> d(n) = sin(100*pi*n+pi/3);
>
> and a noisy signal, where v(n) is white noise with known mean and variance,
>
> x(n) = d(n) + v(n);
>
> the goal is to define an optimal Wiener filter with 101 coefficients.
> (I call it a "toy situation" because I have both the noisy signal and the
> original signal available at the same time.)
>
> After some calculation, in this problem the autocorrelation of the signal
> d(n) equals the cross-correlation sequence between x(n) and d(n), r_dx(n),
> since v(n) is uncorrelated with d(n).
>
> I calculated (with Matlab) r_d(n), the 101-point autocorrelation sequence
> of d(n).
>
> I found the optimal coefficients w:
>
> w = (R_x^-1) * r_d
>
> where R_x is the Toeplitz autocorrelation matrix of x and r_d is the vector
> of the 101 autocorrelation values, and the minimum error xi_min,
>
> xi_min = r_d(0) - r_d' * (R_x^-1) * r_d
>
> Then I implemented an LMS filter to compare the optimum coefficients with
> the coefficients estimated by LMS.
>
> According to "Statistical Digital Signal Processing and Modeling" by Hayes,
> the output error of the LMS filter, for the same filter length, should
> always be greater than xi_min when the LMS filter reaches steady state,
> because
>
> xi_LMS(inf) = xi_min + xi_excess
>
> but in my case the LMS filter, with reasonable values of the mu parameter
> (e.g. 10^-5), performs much better than the optimum filter, i.e. xi_excess
> is negative!!
> Is there an interpretation for that, or is it more likely that I made an
> error writing the code? I suppose that if there's a problem, it's in the
> optimum filter computation...
>
> Thanks in advance.
> Gabriele Paganelli
How are you defining your signal for the purposes of synthesizing the Wiener
filter? Your signal as defined is exceedingly narrow band; your LMS algorithm
would pick up on this and make a very narrow filter (with tremendous noise
attenuation), whereas your Wiener filter would be based on whatever bandwidth
you specified.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" gives you just what it says.
See details at http://www.wescottdesign.com/actfes/actfes.html
Reply by gabinet June 18, 2007
Hello. I have a toy situation in which, 

given the desired signal d(n),

d(n) = sin(100*pi*n+pi/3);

and a noisy signal, where v(n) is white noise with known mean and variance,

x(n) = d(n) + v(n);

the goal is to define an optimal Wiener filter with 101 coefficients.
(I call it a "toy situation" because I have both the noisy signal and the
original signal available at the same time.)
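
For concreteness, the setup I have in mind looks roughly like this in Matlab
(the number of samples and the noise variance are arbitrary values I picked
for illustration):

N = 1000;                        % number of samples (arbitrary)
n = 0:N-1;
d = sin(100*pi*n + pi/3);        % desired signal
v = sqrt(0.5) * randn(1, N);     % zero-mean white noise, variance 0.5 (arbitrary)
x = d + v;                       % noisy observation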

After some calculation, in this problem the autocorrelation of the signal
d(n) equals the cross-correlation sequence between x(n) and d(n), r_dx(n),
since v(n) is uncorrelated with d(n).

I calculated (with Matlab) r_d(n), the 101-point autocorrelation sequence of
d(n).
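
Roughly like this (a sketch; using the biased estimator is my own choice):

p = 101;                              % filter length
rd_full = xcorr(d(:), p-1, 'biased'); % autocorrelation at lags -(p-1)..(p-1)
r_d = rd_full(p:end);                 % keep lags 0..p-1 (column vector)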

I found the optimal coefficients w:

w = (R_x^-1) * r_d

where R_x is the Toeplitz autocorrelation matrix of x and r_d is the vector
of the 101 autocorrelation values, and the minimum error xi_min,

xi_min = r_d(0) - r_d' * (R_x^-1) * r_d
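
Continuing the sketch above, the computation could look like this (using
backslash rather than an explicit inverse):

rx_full = xcorr(x(:), p-1, 'biased');
r_x = rx_full(p:end);                 % autocorrelation of x, lags 0..p-1
R_x = toeplitz(r_x);                  % 101x101 Toeplitz autocorrelation matrix
w = R_x \ r_d;                        % optimal coefficients (Wiener-Hopf)
xi_min = r_d(1) - r_d' * w;           % minimum error; r_d(1) is lag 0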

Then I implemented an LMS filter to compare the optimum coefficients with the
coefficients estimated by LMS.
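
The LMS loop I have in mind is the standard one, roughly like this (continuing
from the sketches above; the step size and the way I estimate the steady-state
error are only illustrative):

mu = 1e-5;                            % step size
w_lms = zeros(p, 1);                  % initial weights
e = zeros(1, N);                      % error signal
for k = p:N
    xk = x(k:-1:k-p+1).';             % most recent p input samples, newest first
    y = w_lms.' * xk;                 % filter output
    e(k) = d(k) - y;                  % error w.r.t. the desired signal
    w_lms = w_lms + mu * e(k) * xk;   % LMS weight update
end
xi_lms = mean(e(end-199:end).^2);     % rough steady-state MSE over the last 200 samples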

According to "Statistical digital processing and modeling" by Hayes, the
output error of LMS, for the same filter length, should be always greater
than xi_min when the LMS filter reaches a steady state, because

xi_LMS(inf) = xi_min + xi_excess(n) 

but in my case the LMS filter, with reasonable values of the mu parameter
(e.g. 10^-5), performs much better than the optimum filter, i.e. xi_excess is
negative!!
Is there an interpretation for that, or is it more likely that I made an error
writing the code? I suppose that if there's a problem, it's in the optimum
filter computation...

Thanks in advance.
Gabriele Paganelli