Is it a generally accepted fact that the performance of the best
punctured convolutional code approaches the performance of an
equivalent non-punctured code of the same rate?
Is there any figure of merit for what the traceback depth should be
given a certain puncturing scheme?
For example, I notice that:
with a rate-1/2, constraint length K=7 unpunctured code and traceback
depth D=5*K=35, I get close to a 7-8 dB improvement in (unquantized)
Viterbi decoding performance relative to no coding.
Now, when I puncture the rate-1/2, K=7 code to get a rate-7/8 code, I
have to use a traceback depth of 20*K=140 to get a performance
improvement. However, this improvement occurs only at Eb/N0
values > 8 dB, where I observe a 2-3 dB gain in performance...
Any comments regarding traceback depth vs. puncturing?
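For what it's worth, the two depths quoted above are consistent with a rule of thumb sometimes quoted in decoder documentation, D ~ 2.5*K/(1-r) for a rate-r code of constraint length K. Treat it as a heuristic, not a derived bound; a minimal sketch:

```python
# Heuristic traceback-depth estimate. Assumption: the commonly quoted
# guideline D ~ 2.5 * K / (1 - r), where K is the constraint length
# and r the code rate after puncturing.

def traceback_depth(K: int, rate: float) -> int:
    """Estimated Viterbi traceback depth for constraint length K and code rate."""
    return round(2.5 * K / (1.0 - rate))

print(traceback_depth(7, 1 / 2))  # 35  (= 5*K, the usual rate-1/2 figure)
print(traceback_depth(7, 7 / 8))  # 140 (= 20*K, matching the punctured case)
```

Both of the depths in the example (35 and 140) fall out of this formula exactly.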
Reply by tara...@gmail.com ● May 27, 2005
This was what I noted while working on punctured convolutional codes.
As the traceback length of the Viterbi decoder increases, errors in
the decoded bit stream decrease and the decoder approaches optimum
decoding. It also gives more uniform and predictable performance, as
can be seen by comparing against results obtained with shorter
traceback lengths.
The same is the case with punctured convolutional codes. Since
puncturing the coded bit stream has already weakened the data, the
Viterbi decoder is more prone to errors if we decode the bits
prematurely (with a shorter traceback). But as we increase the
traceback length, the chance of an error propagating to later states
is smaller, and hence we get better error performance.
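To make the "weakened data" point concrete, here is a minimal sketch of puncturing and depuncturing. The 14-bit pattern (8 ones) is purely illustrative, not a standardized rate-7/8 pattern; the depuncturer reinserts erasures (None) at the punctured positions, which is where the decoder loses information and needs a longer traceback to recover.

```python
# Illustrative puncturing of a rate-1/2 coded stream down to rate 7/8:
# 7 info bits -> 14 coded bits, of which only 8 are transmitted.
# PATTERN is hypothetical; any 14-bit mask with 8 ones would do here.
PATTERN = [1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1]

def puncture(coded, pattern=PATTERN):
    """Drop the coded bits where the (cyclically repeated) pattern is 0."""
    return [b for i, b in enumerate(coded) if pattern[i % len(pattern)]]

def depuncture(received, n, pattern=PATTERN):
    """Reinsert erasures (None) at the punctured positions of an n-bit frame.
    A Viterbi branch metric would then assign zero cost to erased positions."""
    it = iter(received)
    return [next(it) if pattern[i % len(pattern)] else None for i in range(n)]

coded = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1]  # one 14-bit period
tx = puncture(coded)             # only 8 bits actually transmitted
rx = depuncture(tx, len(coded))  # 14 entries, 6 of them erasures
print(len(tx), rx.count(None))   # 8 6
```

The 6 erasures per period are exactly the "weakening" described above: the decoder must bridge them from context, which is why the survivor paths take longer to merge.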
Hope this helps