Reply by October 10, 2005
McC wrote:

>I have been looking at Widrow's paper on LM4 (like LMS but minimises
>E(error^4) )
>
>It looks good in that the min error is smaller and the cost in computation
>is not much more - so why isn't it used more?
The theoretical reason for using an estimator that minimizes the mean squared error (MSE) is its statistical property: assuming the linear model is correct and the "measurement error" is iid Gaussian (and has zero mean, which is implied by the assumption that the linear model is correct), the MSE estimator is the "best" unbiased estimator (it attains the Cramer-Rao bound). If you drop the Gaussian condition on the error (but still assume iid), you can prove the weaker statement (Gauss-Markov theorem) that the MSE estimator is the "best" (as above) _linear_ unbiased estimator.

Gauss justified using the MSE estimator for that reason, and is therefore usually given precedence for "inventing" the least squares method over Legendre, who published it first but didn't comment on (or know of) its statistical properties.

The real reason why everybody uses the linear MSE estimator is that it is numerically simple. The theoretical justification is just the icing. Nobody is interested in the pitfall that opens up when you drop the iid assumption on the error series. In that case, MSE performs poorly ("minimum fourth error" will be even worse). If this interests you, look up "robust (linear) regression".
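To make the last point concrete, here is a small toy sketch (mine, not from any reference; the data, cost constants and step sizes are arbitrary choices). It fits a straight line by gradient descent under a squared, a fourth-power and a Huber cost, with one gross outlier in the data; the squared fit should get pulled by the outlier, the fourth-power fit considerably more, and the Huber fit (a standard robust-regression choice) much less:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)
y[25] += 5.0                     # one gross outlier: the errors are no longer iid Gaussian

def fit(dloss, steps=20000, lr=0.1):
    # plain gradient descent on (a, b) for the line y = a*x + b
    a, b = 0.0, 0.0
    for _ in range(steps):
        e = (a * x + b) - y      # residuals
        g = dloss(e)             # d(cost)/d(residual), per point
        a -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return a, b

# squared error:   cost e^2 -> gradient 2*e
# fourth power:    cost e^4 -> gradient 4*e^3 (weights the outlier even harder)
# Huber (delta=1): quadratic near zero, linear in the tails (a robust choice)
print("squared :", fit(lambda e: 2.0 * e))
print("fourth  :", fit(lambda e: 4.0 * e ** 3, lr=0.005))
print("Huber   :", fit(lambda e: np.where(np.abs(e) <= 1.0, 2.0 * e, 2.0 * np.sign(e))))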
Reply by Real_McCoy October 8, 2005
I have been looking at Widrow's paper on LM4 (like LMS but minimises
E(error^4) )

It looks good in that the min error is smaller and the cost in computation
is not much more - so why isn't it used more?
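
For concreteness, the two per-sample updates I have in mind look roughly like
this (a toy sketch of my own with the gradient constants folded into the step
size - not Widrow's code; the system, signals and step sizes are made up):

# Toy system identification: the same loop runs LMS and LM4; the only
# difference is e vs e**3 in the weight update.  The LM4 run gets a much
# smaller step size, since its stability depends on the error magnitude.
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2, 0.1])            # unknown FIR system
x = rng.standard_normal(20000)                 # input signal
d = np.convolve(x, h)[:len(x)]                 # desired (noiseless) output

def adapt(err_fn, mu):
    w = np.zeros(len(h))
    for n in range(len(h), len(x)):
        u = x[n - len(h) + 1:n + 1][::-1]      # current tap-input vector
        e = d[n] - w @ u                       # a-priori error
        w += mu * err_fn(e) * u                # the only line that differs
    return w

print("true :", h)
print("LMS  :", np.round(adapt(lambda e: e,      mu=0.05), 3))
print("LM4  :", np.round(adapt(lambda e: e ** 3, mu=0.005), 3))

The extra cost over LMS is basically the two extra multiplies per sample to
form e^3, which is what I mean by "not much more".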

I saw some papers on stability issues and I have a paper on Normalised LM4.

McC