Reply by pankajb August 12, 2007
Hello,
    I simulated a soft-output MIMO detection scheme in which a MIMO
detector produces LLRs that are fed to a soft-input Viterbi decoder (using
the vitdec MATLAB function). The LLRs (using the max-log approximation) are
computed from the Euclidean distances of the received point with respect to
the lattice points. I noticed that the LLRs have large magnitudes (40-50 or
even more) at higher SNR, and that the BER performance is more or less the
same as that of a hard-decision decoder.
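For reference, the max-log per-bit LLR computation described above can be sketched as follows. This is a minimal Python sketch, not the actual MATLAB code; the function name `maxlog_bit_llr`, the candidate-list representation, and the sign convention (positive LLR favors bit = 0) are my own assumptions:

```python
def maxlog_bit_llr(cand_dists, cand_bits, k, n0):
    """Max-log LLR of bit k from squared Euclidean distances.

    cand_dists : squared distances ||y - H*s||^2, one per candidate s
    cand_bits  : bit label of each candidate (tuple of 0/1 values)
    k          : index of the bit whose LLR is wanted
    n0         : noise variance; dividing by it keeps the LLRs on a
                 consistent scale across SNR points

    Sign convention (an assumption): positive LLR favors bit k = 0.
    """
    d0 = min(d for d, b in zip(cand_dists, cand_bits) if b[k] == 0)
    d1 = min(d for d, b in zip(cand_dists, cand_bits) if b[k] == 1)
    return (d1 - d0) / n0
```

Note that without the division by the noise variance, the LLR magnitudes grow without bound as the SNR increases, which matches the 40-50 values observed.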
Questions:
1) Is it because the magnitudes of the LLRs are so large that the soft-input
Viterbi decoder essentially treats them as hard inputs?
2) Do I have to normalize the Euclidean distances (or LLRs) to account for
the "blowing up" of the constellation (or lattice) due to fading?
If so, by what quantity?
   Let me elaborate. Consider a BPSK scheme over a fading channel: to
compute the LLR I need the distances of the received point from +1*h and
-1*h, where "h" is an instantaneous fading sample. These distances
obviously depend on the value of h (even though the fading process I
generate has unit average energy), so if h is very large the LLR will be
large too. How is this problem overcome? And how does all this play out in
two- or higher-dimensional lattices (a 4x4 MIMO system with 16-QAM
modulation has 8 real dimensions)?
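To make the BPSK case concrete, here is a minimal sketch (Python rather than MATLAB; the function name and the real-valued signal model y = h*x + n are assumptions). The key point is that the LLR divides the distance difference by a noise parameter n0, so its scale tracks the instantaneous channel quality y*h/n0 rather than depending only on how large h happens to be:

```python
def bpsk_llr(y, h, n0):
    """BPSK LLR from distances to the two faded constellation points.

    With only two candidate points, the max-log LLR is exact:
    distances from the received sample y to +1*h and -1*h, with the
    difference divided by the noise parameter n0 (N0 in the usual
    convention). Algebraically this simplifies to 4*y*h/n0.
    """
    d_plus = (y - h) ** 2   # squared distance to +1*h
    d_minus = (y + h) ** 2  # squared distance to -1*h
    return (d_minus - d_plus) / n0
```

With this normalization a deep fade (small h) automatically produces a small-magnitude, i.e. unreliable, LLR, and a strong channel produces a confident one, which is exactly the behavior a soft-input decoder needs.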
****************************************************************************
 System model for 4x4 16-QAM with a rate-1/2 convolutional code:

random bits---> conv. encoder ---> QAM symbol mapping (Gray coding) --->
channel + noise ---> soft MIMO detection and QAM demapping ---> soft-input
Viterbi (vitdec function in MATLAB).
For the sake of brevity I have not shown the interleaver, but the actual
simulation does include one.

Thanks for your time