On 18 May 2005 01:21:55 -0700, porterboy76@yahoo.com wrote:
>> Don't forget to compare TCM and BICM.
>Hi Eric,
>
>That's Bit-Interleaved Coded Modulation, right? You mentioned in
>another thread (August 2004) that "Bit Interleaved Coded Modulation
>further decouples the system from the modulation and signal processing,
>and is known to always be as good or better than TCM". Surely that
>depends on the code rate and the constraint lengths of the two codes,
>no? Is BICM always better for the same complexity? Can you recommend a
>good starter text for BICM?
>
>Porterboy
Dang, yer gonna make me work again, aren't you? ;)
One of the commonly referenced papers regarding this is:
Bit-Interleaved Coded Modulation
Giuseppe Caire, Giorgio Taricco, Ezio Biglieri
IEEE Trans on Information Theory
Vol 44 #3, May 1998 p927
This paper shows reasonably well (although it is also a little bit
controversial) that BICM can be expected to be as good or better than
TCM in most conditions, especially in fading.
To me this makes intuitive sense because TCM requires that coherence
be maintained enough to sustain tracing the trellis. In fading,
especially dynamic fading, this becomes difficult and once the trellis
coherence is lost you're in for some significant trouble for a
while...e.g., lots and lots of error multiplication. With BICM even
a strong, deep, momentary fade only affects the bits associated with
the fade, which then get spread out by the depth of the interleaver.
BICM is much more robust to this type of event than TCM, which works
very well in reasonably static channels.
And since BICM is pretty easy to implement (easier, IMHO, than TCM),
to me it's a slam dunk. I've not messed with TCM much since, and have
found it difficult to defend in the presence of simpler options like
this.
FWIW, yes, I think it's only fair to compare the two with the same
underlying code and a similar rate.
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
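Eric's point — that a deep fade only touches the bits near it, which the interleaver then spreads out — can be sketched with a simple row/column block interleaver. The dimensions and helper names below are illustrative assumptions, not from any particular system:

```python
# Minimal block-interleaver sketch: write row-wise, read column-wise.
# A burst of consecutive channel hits then lands one bit per row, i.e.,
# gets spread a full row apart after deinterleaving at the decoder.
# (ROWS/COLS are illustrative, not from any particular standard.)

ROWS, COLS = 8, 16          # depth 8: a burst of <= 8 hits one bit per row

def interleave(bits):
    assert len(bits) == ROWS * COLS
    # write row-wise into a ROWS x COLS matrix, read it out column-wise
    return [bits[r * COLS + c] for c in range(COLS) for r in range(ROWS)]

def deinterleave(bits):
    assert len(bits) == ROWS * COLS
    # exact inverse: write column-wise, read row-wise
    return [bits[c * ROWS + r] for r in range(ROWS) for c in range(COLS)]

bits = list(range(ROWS * COLS))   # label each bit by its original index
tx = interleave(bits)
faded = set(tx[40:48])            # a deep fade wipes out 8 consecutive symbols
rx = deinterleave(tx)
positions = sorted(i for i, b in enumerate(rx) if b in faded)
print(positions)                  # [5, 21, 37, 53, 69, 85, 101, 117]
```

The burst of 8 consecutive transmitted bits comes out spaced COLS = 16 positions apart in the decoder input, so the decoder sees isolated errors rather than a run of them.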
Reply by Steve Underwood ● May 18, 2005
porterboy76@yahoo.com wrote:
>>Don't forget to compare TCM and BICM.
>>
>>
>Hi Eric,
>
>That's Bit-Interleaved Coded Modulation, right? You mentioned in
>another thread (August 2004) that "Bit Interleaved Coded Modulation
>further decouples the system from the modulation and signal processing,
>and is known to always be as good or better than TCM". Surely that
>depends on the code rate and the constraint lengths of the two codes,
>no? Is BICM always better for the same complexity? Can you recommend a
>good starter text for BICM?
>
>
Always as good or better? Surely any form of interleaving is only a win
when the channel is non-stationary. I thought that in the stationary
case TCM was actually superior. Isn't that always true?
Regards,
Steve
Reply by ● May 18, 2005
> Don't forget to compare TCM and BICM.
Hi Eric,
That's Bit-Interleaved Coded Modulation, right? You mentioned in
another thread (August 2004) that "Bit Interleaved Coded Modulation
further decouples the system from the modulation and signal processing,
and is known to always be as good or better than TCM". Surely that
depends on the code rate and the constraint lengths of the two codes,
no? Is BICM always better for the same complexity? Can you recommend a
good starter text for BICM?
Porterboy
Reply by Eric Jacobsen ● May 17, 2005
On 17 May 2005 00:25:41 -0700, porterboy76@yahoo.com wrote:
>Oh yeah, now I get it...
>By using energy per information bit you can compare codes with
>different rates at a glance...
You got it....
>Thanks guys, I've nailed Viterbi now I think.
>
>Now for TCM ;-)....
Don't forget to compare TCM and BICM.
;)
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
Reply by ● May 17, 2005
Oh yeah, now I get it...
By using energy per information bit you can compare codes with
different rates at a glance...
Thanks guys, I've nailed Viterbi now I think.
Now for TCM ;-)....
Reply by Eric Jacobsen ● May 13, 2005
On 12 May 2005 02:53:17 -0700, porterboy76@yahoo.com wrote:
>Speaking of BER simulation, when I plot the BER vs SNR_per_bit like in
>Proakis, is that SNR per code bit or per information bit? As far as I
>can tell it is the SNR per information bit, but this doesn't seem like
>an intuitively logical parameter for comparison with uncoded BPSK. In
>fact I would have thought that SNR and not SNR_per_bit would be a more
>appropriate measure, since it is immediately obvious then how different
>codes do over the same channel.
Eb/No is a common metric: the energy per _information_ bit over the
noise power normalized to unit Hz bandwidth. Using the
per-bit metric is useful because it compares how much transmit
resource (power, essentially) it takes to get the _information_
across, which is what you really care about. So for power efficiency
comparisons it makes perfect sense. Using the normalized noise power
makes it independent of bandwidth, i.e., you no longer need to know
the rate or bandwidth at which the data was transmitted. This allows
systems to be compared in a very generic and consistent manner.
There are times when it makes more sense to compare SNRs instead, or
some other metric, but Eb/No or like metrics are quite useful,
especially when comparing FEC systems or things like that.
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
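Eric's rate normalization can be sketched numerically. The conversion below goes from per-symbol SNR (Es/N0) to Eb/N0 per information bit; the function name and the specific numbers are illustrative assumptions, not a standard routine:

```python
import math

def ebno_from_snr_db(snr_db, code_rate, bits_per_symbol):
    """Convert per-symbol SNR (Es/N0, dB) to Eb/N0 (dB) per information
    bit, using Es = Eb * code_rate * bits_per_symbol."""
    return snr_db - 10 * math.log10(code_rate * bits_per_symbol)

# Uncoded BPSK (rate 1, 1 bit/symbol): Eb/N0 equals Es/N0.
print(ebno_from_snr_db(6.0, 1.0, 1))             # 6.0
# Rate-1/2 coded BPSK: each info bit costs two channel symbols, so the
# same per-symbol SNR corresponds to ~3 dB more Eb/N0.
print(round(ebno_from_snr_db(6.0, 0.5, 1), 2))   # 9.01
```

This is why coding gain quoted at the same Eb/N0 already accounts for the extra bandwidth or power the redundancy consumes.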
Reply by ● May 12, 2005
Speaking of BER simulation, when I plot the BER vs SNR_per_bit like in
Proakis, is that SNR per code bit or per information bit? As far as I
can tell it is the SNR per information bit, but this doesn't seem like
an intuitively logical parameter for comparison with uncoded BPSK. In
fact I would have thought that SNR and not SNR_per_bit would be a more
appropriate measure, since it is immediately obvious then how different
codes do over the same channel.
Reply by ● May 12, 2005
Thanks again Clay
OK, I think I have it now, owing to some excellent guidance ;-). I'm
going to run a big sim and see if I can get the BER curves to match the
literature... While they are running I can go out and enjoy the lovely
Summer which has just arrived here in Switzerland...
Porterboy
Reply by Clay ● May 11, 2005
Hello Porterboy,
You are on the right path (sorry, I couldn't resist) about the
traceback with continuous data. Typically, once you are 4 to 5 times
the constraint length of the code past a given symbol, the traceback
to that symbol is going to give you about all of the reconstruction
help that is available for that symbol.
Sometimes you can go about six times the constraint length into the
data, then trace back to a symbol and output a block of data whose
length is a constraint length of the code. There are many variations
on this theme. This helps reduce the processing somewhat; you don't
want to have to do a complete traceback for each symbol.
In the case where your metrics indicate something is amiss, you have
an uncorrectable error. In most applications I come across with
trellis coding, the data is blocked and the state machine has a known
starting and ending state, i.e., flush bits are used at the end. But
sometimes a code is punctured or flush bits are not used, and then you
just have to go with the least error metric. The continuous data case
is like the no-flush-bits case. Hopefully your data has a CRC or
something similar inside of the trellis code for detecting
uncorrectable blocks.
IHTH,
Clay
Reply by ● May 11, 2005
Another quick question about the VA...
How do I do traceback for continuous mode data?
Suppose, for a 4-state trellis, the total survivor history is, e.g.,
t = 0 1 2 3 4 5 6 7 8 9 10
----------------------------------------------------------------------
0 0 1 0 1 1 0 1 0 0 1
2 2 2 3 3 2 3 3 2 2 3
0 0 0 1 1 1 0 1 0 0 1
2 2 2 3 2 3 2 3 2 2 3
The above numbers indicate the predecessor to the current state,
numbered 0-3, corresponding to the lowest accumulated metric (a trellis
diagram without the trellis ;-). t is the time variable.
An example metric history for the above survivor history might be...
t = 0 1 2 3 4 5 6 7 8 9 10
----------------------------------------------------------------------
2 0 0 2 4 6 6 14 12 12 16
-999 -2 4 4 6 6 12 8 10 18 14
-2 4 2 6 4 6 6 10 16 12 20
-999 -2 4 4 8 10 8 8 10 14 14
(bigger metric = more likely the correct path)
Suppose I want to decode with a traceback length of 5...
this is what I THINK you do, but I'm not sure...
At t = 5 pick the biggest metric (metric = 10, predecessor = 3). Using
the survivor history, step back 5 points in the trellis (3 2 1 2 0 0).
Decode the transition at that point. Move to t= 6. repeat procedure.
etc.
What happens if there is no path from the biggest metric at t = 5 to
that at t = 6? Am I completely wrong here?
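The procedure described above can be sketched directly against the survivor and metric tables from the post (the `traceback` helper is a made-up name, and the depth-5 window is the poster's own example). On the final question: every state always has a survivor, so there is always a path to trace from the best metric at t = 6; it need not pass through the t = 5 winner, but with sufficient depth the older portions of successive tracebacks merge onto the same path:

```python
# Sliding-window traceback over the survivor table from the post.
# pred[s][t] = predecessor of state s at time t; metric[s][t] as given.
# (The helper name and structure are illustrative, not a real decoder.)

pred = [
    [0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1],
    [2, 2, 2, 3, 3, 2, 3, 3, 2, 2, 3],
    [0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1],
    [2, 2, 2, 3, 2, 3, 2, 3, 2, 2, 3],
]
metric = [
    [2, 0, 0, 2, 4, 6, 6, 14, 12, 12, 16],
    [-999, -2, 4, 4, 6, 6, 12, 8, 10, 18, 14],
    [-2, 4, 2, 6, 4, 6, 6, 10, 16, 12, 20],
    [-999, -2, 4, 4, 8, 10, 8, 8, 10, 14, 14],
]

def traceback(t_end, depth):
    """From the best-metric state at t_end, follow predecessors back
    `depth` steps; return the state path, oldest first."""
    state = max(range(4), key=lambda s: metric[s][t_end])
    path = [state]
    for t in range(t_end, t_end - depth, -1):
        state = pred[state][t]
        path.append(state)
    return path[::-1]      # oldest -> newest

print(traceback(5, 5))     # [0, 2, 1, 2, 3, 3]
print(traceback(6, 5))     # [2, 1, 2, 3, 3, 1]
```

The two windows start from different best states (state 3 at t = 5, state 1 at t = 6), yet their overlap over t = 1..5 is identical — which is why decoding the oldest transition of each window works even when successive winners aren't directly connected.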