Linear equalizers and similarities

Started by Peter Mairhofer July 18, 2017

The usual setup: Suppose that I can model a channel as FIR filter h such
that the received signal is y=h*x+w (*: convolution; x: transmitted
signal; w: measurement noise). The goal is to find an FIR equalizer g
such that xhat=g*y is close to x.

Writing x/y as vectors, the relation can be written as y=Hx+w. The
equalizer xhat=Gzf y with

   Gzf = (H^T H)^-1 H^T

is called the ZF equalizer. It simply inverts the channel via Least
Squares.

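To make the setup concrete, here is a minimal numerical sketch (all values are my own toy choices, not from any particular reference):

```python
import numpy as np

# Toy example (my own numbers): build the convolution matrix H of a
# short FIR channel and form the ZF equalizer as its pseudoinverse.
np.random.seed(0)
h = np.array([1.0, 0.5, 0.2])            # FIR channel taps
N = 8                                    # block length

H = np.zeros((N + len(h) - 1, N))        # tall convolution matrix
for i in range(N):
    H[i:i + len(h), i] = h               # so that y = H @ x + w

G_zf = np.linalg.inv(H.T @ H) @ H.T      # least-squares channel inverse

# Sanity check: in the noiseless case the ZF equalizer is exact.
x = np.random.randn(N)
y = H @ x
xhat = G_zf @ y
print(np.allclose(xhat, x))              # True
```
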
Similarly, I know that xhat=Gmmse y with

   Gmmse = Cxx H^T (H Cxx H^T + Cww)^-1

is the MMSE solution. (Cxx: autocorrelation matrix of the input; Cww:
that of the measurement noise.)

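Again as a toy sketch (my own numbers), assuming white input and white noise:

```python
import numpy as np

# Toy example: MMSE equalizer for y = H x + w with white input
# (Cxx = I) and white noise of variance sigma2 (my own numbers).
h = np.array([1.0, 0.5, 0.2])            # FIR channel taps
N = 8
H = np.zeros((N + len(h) - 1, N))
for i in range(N):
    H[i:i + len(h), i] = h               # tall convolution matrix

sigma2 = 1e-4
Cxx = np.eye(N)
Cww = sigma2 * np.eye(H.shape[0])

G_mmse = Cxx @ H.T @ np.linalg.inv(H @ Cxx @ H.T + Cww)

# As sigma2 -> 0, G_mmse approaches the ZF solution.
G_zf = np.linalg.inv(H.T @ H) @ H.T
print(np.max(np.abs(G_mmse - G_zf)))     # small for small sigma2
```

For small noise variance the MMSE matrix is numerically close to the ZF matrix, which matches the usual intuition.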
Question 1: According to Wikipedia, the alternative form is

   Gmmse = (H^T Cww^-1 H + Cxx^-1)^-1 H^T Cww^-1

from which it can be seen that it amounts to weighted least squares
combined with Tikhonov regularization. With Cww=I and Cxx^-1 -> 0, it
reduces to ordinary Least Squares. But in my case, Cxx is not
invertible, hence the two forms cannot be equivalent. What is the
difference between them, and how can the first form reduce to ordinary
Least Squares? I would like to understand the edge case in which the
first form becomes ordinary Least Squares (and hence the ZF solution).

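For what it's worth, a quick numerical check with random toy matrices (my own) shows the two forms agreeing whenever Cxx and Cww are invertible; the second form is simply undefined when Cxx is singular:

```python
import numpy as np

# Numerical check of the matrix-inversion-lemma equivalence between
# the two MMSE forms, using random invertible toy covariances.
np.random.seed(1)
m, n = 6, 4
H = np.random.randn(m, n)
Cxx = np.diag(np.random.rand(n) + 0.5)   # invertible input covariance
Cww = 0.1 * np.eye(m)                    # invertible noise covariance

form1 = Cxx @ H.T @ np.linalg.inv(H @ Cxx @ H.T + Cww)
form2 = np.linalg.inv(H.T @ np.linalg.inv(Cww) @ H + np.linalg.inv(Cxx)) \
        @ H.T @ np.linalg.inv(Cww)

print(np.allclose(form1, form2))         # True
```
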
Question 2: I can also write the relation xhat=g*y as xhat=Yg, where g
is the coefficient vector of the equalizer and Y is a convolution
matrix built from y; g can then be estimated directly via LS from
training data, without knowledge of the channel. What kind of
equalizer is this? ZF? MMSE? Something else/in between? (In my
simulations, it gives different results... ).

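To be precise about what I am doing in my simulations, this is the direct LS fit I have in mind (toy noiseless setup, my own numbers):

```python
import numpy as np

# Direct estimation: solve x ~ Y g in the LS sense, where Y is a
# convolution matrix built from the received samples y. The channel
# h is only used to generate data; the fit never sees it.
np.random.seed(2)
h = np.array([1.0, 0.5, 0.2])            # channel (unknown to the fit)
x = np.random.randn(200)                 # known training sequence
y = np.convolve(h, x)                    # received samples (noiseless)
L = 12                                   # equalizer length

Y = np.zeros((len(x), L))                # (Y @ g)[n] = sum_k g[k] y[n-k]
for k in range(L):
    Y[k:, k] = y[:len(x) - k]

g, *_ = np.linalg.lstsq(Y, x, rcond=None)
xhat = Y @ g
print(np.linalg.norm(xhat - x) / np.linalg.norm(x))  # small residual
```
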
Question 3: Can I obtain the MMSE equalizer only from y and x? I.e.
using the form xhat=Yg (rather than xhat=Gy)?

Question 4: Can the LMS algorithm be interpreted as a sample-based
approximation to any of the above? Which equalizer would it correspond
to? (I would think the MMSE equalizer. But this seems inconsistent
because (a) the matrix version of MMSE requires the channel estimate h,
and (b) I think RLS can be interpreted as a Weighted Least Squares
solution.)

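For reference, the LMS recursion I am referring to is the standard one, sketched here on a toy noiseless setup of my own:

```python
import numpy as np

# Standard LMS equalizer recursion on a known training sequence x
# (toy noiseless channel; all values are my own choices).
np.random.seed(3)
h = np.array([1.0, 0.5, 0.2])
x = np.random.randn(5000)                # training symbols
y = np.convolve(h, x)[:len(x)]           # received samples
L = 12                                   # equalizer length
mu = 0.01                                # step size

g = np.zeros(L)
errs = []
for n in range(L, len(x)):
    u = y[n:n - L:-1]                    # last L received samples
    e = x[n] - g @ u                     # error vs. desired symbol
    g += mu * e * u                      # stochastic-gradient update
    errs.append(e)

print(np.mean(np.square(errs[-500:])))   # small after convergence
```
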
Question 5: What is the relation between LMS, RLS and the Kalman
filter? Can they be interpreted as sample-based approximations of LS,
WLS, and Tikhonov-regularized LS, respectively?

If I am completely wrong I would be happy if someone could explain the
relationship between those.