ML equalizer

Started by sair...@gmail.com November 6, 2006
Hi,
       I am working on equalization schemes without using Viterbi-type
algorithms. Assume that complexity is not an issue. I am trying to
detect the received samples using the ML criterion. Mathematically, if a1,
a2, ..., aN are the input symbols and h1, h2, h3 are the channel tap
gains, then the received signal is represented in vector form as
R = A*h + n, where R is the received vector of N samples, n is the AWGN
vector, and A is the data matrix such that A*h implements the
convolution between the data vector and the channel.
According to the ML criterion, I am computing the norm square of (R - A*h)
for every candidate data sequence and taking the sequence with the
minimum norm as the ML decision.
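For concreteness, the detection step I have in mind is roughly like the
following (a Python-style sketch, not my exact code; it assumes BPSK
symbols, real-valued taps, and an exhaustive search over all sequences):

import itertools
import numpy as np

def ml_detect(R, h, N):
    """Brute-force ML detection: try every BPSK sequence a of length N,
    form the convolution with the channel taps h (the A*h product above),
    and keep the sequence with the smallest ||R - A*h||^2."""
    best_a, best_metric = None, np.inf
    for bits in itertools.product([+1, -1], repeat=N):
        a = np.array(bits, dtype=float)
        # A*h is just the convolution of the candidate data with the taps,
        # truncated here to the length of the received vector R
        s = np.convolve(a, h)[:len(R)]
        metric = np.sum(np.abs(R - s) ** 2)
        if metric < best_metric:
            best_metric, best_a = metric, a
    return best_a, best_metric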
My problem starts when I use a channel code with this scheme. Since my
channel code needs soft information, I am finding the LLR at the
equalizer output by searching for the minimum-norm sequence whose bit has
the opposite sign to the ML data bit found above. To my surprise, every
time the sequence I find is the same as the ML one except for a change in
the bit position for which I am deciding the LLR.
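In other words, for the soft output I am effectively doing a max-log LLR
per bit, roughly like the sketch below (again Python-style and not my
exact code; the 1/(2*sigma^2) scaling and the sign convention are just
one possible choice):

def max_log_llr(R, h, N, noise_var):
    """Max-log LLR for each bit position k:
    LLR(k) = (min metric over sequences with a_k = -1
              - min metric over sequences with a_k = +1) / (2*noise_var),
    so a positive LLR favours a_k = +1."""
    metrics = []
    for bits in itertools.product([+1, -1], repeat=N):
        a = np.array(bits, dtype=float)
        s = np.convolve(a, h)[:len(R)]
        metrics.append((bits, np.sum(np.abs(R - s) ** 2)))
    llr = np.zeros(N)
    for k in range(N):
        m_plus = min(m for b, m in metrics if b[k] == +1)
        m_minus = min(m for b, m in metrics if b[k] == -1)
        llr[k] = (m_minus - m_plus) / (2.0 * noise_var)
    return llr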
I am unable to find the bug. Does it happen like that? I feel this
will happen only when the channel is AWGN only (no ISI).
Any idea where I am going wrong?
Thanks in advance,
-SaiRamesh.