LMS vs Wiener

Started June 18, 2007
Hello. I have a toy situation in which,

given the desired signal d(n),

d(n) = sin(100*pi*n+pi/3);

and a noisy signal, with white noise v(n) with known variance and mean

x(n) = d(n) + v(n);

the goal is to design an optimal Wiener filter with 101 coefficients.
(I call it a "toy situation" because I have both the noisy signal and
the original signal available at the same time.)
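For concreteness, the setup can be sketched in Python/NumPy as follows (the sample count N and the noise variance sigma2 are assumed values I picked for illustration, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10000         # number of samples (assumed)
sigma2 = 0.25     # known noise variance (assumed value)
n = np.arange(N)  # integer sample index

d = np.sin(100 * np.pi * n + np.pi / 3)       # desired signal d(n)
v = np.sqrt(sigma2) * rng.standard_normal(N)  # zero-mean white noise v(n)
x = d + v                                     # observed noisy signal x(n)
```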

Since v(n) is zero-mean white noise uncorrelated with d(n), the
autocorrelation of d(n) equals the cross-correlation sequence between
d(n) and x(n): r_dx(k) = r_d(k).

I calculated (with Matlab) r_d(k), the 101-point autocorrelation sequence
of d(n).

I found the vector of optimal coefficients w:

w = R_x^(-1) * r_dx

where R_x is the Toeplitz autocorrelation matrix of x (and r_dx = r_d, as
above), and the minimum error xi_min,

xi_min = r_d(0) - r_dx' * R_x^(-1) * r_dx
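A minimal sketch of that computation in Python/NumPy (the biased autocorrelation estimator and the signal parameters N and sigma2 are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, sigma2 = 10000, 101, 0.25
n = np.arange(N)
d = np.sin(100 * np.pi * n + np.pi / 3)
x = d + np.sqrt(sigma2) * rng.standard_normal(N)

def autocorr(s, lags):
    """Biased autocorrelation estimate for lags 0..lags-1."""
    return np.array([np.dot(s[: len(s) - k], s[k:]) / len(s)
                     for k in range(lags)])

r_d = autocorr(d, p)  # equals r_dx here, since v is uncorrelated with d
r_x = autocorr(x, p)

# Toeplitz autocorrelation matrix of x built from the lag estimates
idx = np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])
R_x = r_x[idx]

w = np.linalg.solve(R_x, r_d)  # optimal Wiener coefficients
xi_min = r_d[0] - r_d @ w      # minimum mean-square error
```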

Then I implemented an LMS filter to compare the optimum coefficients with
the coefficients estimated by LMS.
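An LMS sketch for that comparison might look like this in Python/NumPy (the step size mu, the sample count, and the update convention w <- w + mu*e(n)*x(n) are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, mu, sigma2 = 20000, 101, 1e-3, 0.25
n = np.arange(N)
d = np.sin(100 * np.pi * n + np.pi / 3)
x = d + np.sqrt(sigma2) * rng.standard_normal(N)

w = np.zeros(p)    # adaptive filter coefficients
err = np.zeros(N)  # a-priori error e(n) = d(n) - w' x(n)
for k in range(p - 1, N):
    xk = x[k - p + 1 : k + 1][::-1]  # p most recent samples, newest first
    e = d[k] - w @ xk                # a-priori estimation error
    w = w + mu * e * xk              # LMS coefficient update
    err[k] = e

mse_ss = np.mean(err[-5000:] ** 2)   # steady-state MSE estimate
```

The comparison in question is then between mse_ss and the xi_min obtained from the Wiener solution.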

According to "Statistical Digital Signal Processing and Modeling" by
Hayes, the output error of LMS, for the same filter length, should always
be greater than xi_min once the LMS filter reaches steady state, because

xi_LMS(inf) = xi_min + xi_excess(inf)

but in my case the LMS filter with reasonable values of the step size mu
(e.g. 10^-5) performs much better than the optimum filter, i.e. xi_excess
is negative!!
Is there an interpretation for that, or is it more likely that I made an
error in the code? I suspect that if there's a problem, it is in the
computation of the optimum filter...