Reply by xsong May 5, 2009
Theoretically, the error-correction capability of a convolutional code
does not change with the block length, because
it is roughly determined by the so-called "free distance" of the code,
which is fixed once the encoder structure is chosen. For a
turbo code, on the other hand, the error-correction capability does change with the block
length, as it is roughly determined by the minimum distance of the
code, which is usually about 3% of the code length.
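As a concrete illustration of the point above, here is a minimal Python sketch (not from the original post), assuming the standard rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal. Because the code is linear, the free distance equals the minimum Hamming weight over nonzero terminated codewords, and for this code it comes out to 5 regardless of how long the encoded block is.

from itertools import product

G = [0b111, 0b101]   # generator polynomials (7, 5) octal
K = 3                # constraint length

def encode(bits):
    # Encode 'bits', terminating the trellis with K-1 zero tail bits.
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):
        reg = (b << (K - 1)) | state          # newest bit in the high position
        out.extend((bin(reg & g).count("1") & 1) for g in G)
        state = reg >> 1
    return out

# The code is linear, so the free distance is the minimum Hamming weight over
# all nonzero codewords; short inputs suffice to find it for a code this small.
dfree = min(sum(encode(u))
            for n in range(1, 9)
            for u in product([0, 1], repeat=n)
            if any(u))
print("free distance =", dfree)               # prints 5 for the (7,5) code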


On May 5, 9:31 pm, Eric Jacobsen <eric.jacob...@ieee.org> wrote:
Reply by Eric Jacobsen May 5, 2009
On Tue, 05 May 2009 15:25:29 -0500, "Melinda" <melinda.mel3@gmail.com>
wrote:

>Hi,
>
>Does anyone know whether the block length of the info data that goes through a
>convolutional encoder has an impact on the performance of convolutional coding
>and Viterbi decoding (an FEC system)?
>For example, if the info message length is 256 bits, 1024 bits, or 8912 bits,
>does the performance of the FEC system (convolutional encoding and Viterbi
>decoding) change with the different input block lengths? Does this FEC system
>have better performance on shorter or longer input block lengths?
>I heard that performance is better on shorter input data blocks (and also
>that turbo coding has better performance on longer input blocks), but can
>someone confirm (or deny) that, and explain why it is true (or false)?
>
>Thanks and best regards
There are multiple ways of looking at this problem. I'm not sure which way you're most interested in, but I'll offer the following:

Generally, longer blocks provide better error-correction capability because there's more mutual information to spread around. However, with a Viterbi decoder there's a useful rule of thumb that after around five or six constraint lengths (depending on the traceback depth of the decoder) it starts getting into diminishing returns pretty quickly.

An iterative code, on the other hand, has a fixed block length, but always uses all of the bits in the block to decode the rest of the bits in the block. So a longer block length provides more "bit diversity" with which to recover the information for any given bit. If one has the patience or the ability to spend the hardware complexity, the longer the block length the better. Again, at a certain point there are diminishing returns.

If one looks at the packet error rate for various possible codes at a fixed block length, an interesting thing happens. Clearly for longer block lengths the Viterbi decoder gets less advantage (since after several constraint lengths there's not much to gain), while an iterative decoder (e.g., LDPC, turbo code) keeps getting better. This means that as the block size decreases one might expect that at some point the Viterbi might be a better choice than the iterative code, and in my experience that's exactly what happens. Depending on the codes and the system involved, one may be better off with just a Viterbi decoder compared to an iterative decoder. At what block size that happens depends on a lot of things, but it can be as large as many tens of bytes in some systems.

Hope that helps a bit.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org

Blog: http://www.dsprelated.com/blogs-1/hf/Eric_Jacobsen.php
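For anyone who wants to experiment with the Viterbi side of this, below is a minimal hard-decision block Viterbi decoder sketch for the same terminated rate-1/2 (7, 5) code used in the earlier sketch (an illustration under that assumption, not code from the thread). It decodes a whole terminated block at once; a streaming decoder would instead trace back every five or six constraint lengths, which is where the diminishing-returns rule of thumb comes from. A single flipped channel bit stays well inside the free distance of 5, so the decoder recovers the message exactly.

G, K = [0b111, 0b101], 3      # generator polynomials (7, 5) octal, constraint length 3
N_STATES = 1 << (K - 1)

def conv_encode(bits):
    # Encode with the (7,5) code, terminating the trellis with K-1 zero tail bits.
    state, out = 0, []
    for b in list(bits) + [0] * (K - 1):
        reg = (b << (K - 1)) | state          # newest bit in the high position
        out.extend((bin(reg & g).count("1") & 1) for g in G)
        state = reg >> 1
    return out

def viterbi_decode(rx, n_info):
    # Hard-decision Viterbi decode of one terminated (7,5) codeword.
    metric = [0] + [float("inf")] * (N_STATES - 1)   # only state 0 is reachable at t = 0
    history = []                                     # survivor (prev_state, input_bit) per step
    for t in range(n_info + K - 1):
        r = rx[2 * t: 2 * t + 2]
        new_metric = [float("inf")] * N_STATES
        step = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == float("inf"):
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                branch = [(bin(reg & g).count("1") & 1) for g in G]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(branch, r))
                if m < new_metric[ns]:
                    new_metric[ns], step[ns] = m, (s, b)
        metric = new_metric
        history.append(step)
    s, bits = 0, []                                  # termination forces the final state to 0
    for step in reversed(history):
        prev, b = step[s]
        bits.append(b)
        s = prev
    return bits[::-1][:n_info]                       # drop the K-1 tail bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = conv_encode(msg)
rx[3] ^= 1                                           # flip one channel bit
print(viterbi_decode(rx, len(msg)) == msg)           # True: the single error is corrected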
Reply by Vladimir Vassilevsky May 5, 2009

Melinda wrote:

> Hi,
>
> Does anyone know whether the block length of the info data that goes through
> a convolutional encoder has an impact on the performance of convolutional
> coding and Viterbi decoding (an FEC system)?
First consideration: the beginning and the termination of the trellis incur some overhead. How much depends on how it is done, but the overhead is always there.
> For example, if the info message length is 256 bits, 1024 bits, or 8912 bits,
> does the performance of the FEC system (convolutional encoding and Viterbi
> decoding) change with the different input block lengths?
The Viterbi rule of thumb applies here: the block size should be no less than about 5 x constraint length. It also depends on the code rate; a lower rate allows for smaller blocks.
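To put rough numbers on both points, here is a small back-of-the-envelope sketch, assuming the common K = 7, rate-1/2 code (my choice of parameters, not Vladimir's). It shows how the K - 1 tail bits from trellis termination eat into the effective code rate for short blocks, and which block sizes clear the roughly 5 x constraint-length rule of thumb.

K, rate_inverse = 7, 2          # assume the common K = 7, rate-1/2 code
tail = K - 1                    # zero tail bits needed to terminate the trellis

for k in (16, 64, 256, 1024, 8192):             # info bits per block
    coded = (k + tail) * rate_inverse           # channel bits after termination
    eff_rate = k / coded                        # rate actually delivered on the channel
    verdict = "meets" if k >= 5 * K else "below"
    print(f"k={k:5d}  effective rate={eff_rate:.4f}  ({verdict} the ~5*K rule of thumb)")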
> Does this FEC system have better performance on shorter or longer input
> block lengths? I heard that performance is better on shorter input data blocks,
What do you think?
> (and also that turbo coding has better performance on longer input blocks),
> but can someone confirm (or deny) that, and explain why it is true (or false)?
Why didn't you check with Google, Wikipedia, or your local university library before asking a really clueless question here?

Christian Schlegel, "Trellis Coding"
Johannesson and Zigangirov, "Fundamentals of Convolutional Coding"

VLV
Reply by glen herrmannsfeldt May 5, 2009
Melinda <melinda.mel3@gmail.com> wrote:
 
> Does anyone know whether the block length of the info data that goes through
> a convolutional encoder has an impact on the performance of convolutional
> coding and Viterbi decoding (an FEC system)?
I believe you need longer ones to correct for longer errors.

There is a test that they do on CD players where they take a wedge of black
tape, maybe up to 2 or 3 mm at the wide end, and put it on a CD. Then they
play it and see at what point reading the disk fails due to the inability to
correct for the lost bits. That could be thousands of bits, so the code block
had better be long enough.

Without correction, even tiny scratches would cause it to fail.

-- glen
Reply by Melinda May 5, 2009
Hi,

Does anyone know whether the block length of the info data that goes through a
convolutional encoder has an impact on the performance of convolutional coding
and Viterbi decoding (an FEC system)?
For example, if the info message length is 256 bits, 1024 bits, or 8912 bits,
does the performance of the FEC system (convolutional encoding and Viterbi
decoding) change with the different input block lengths? Does this FEC system
have better performance on shorter or longer input block lengths?
I heard that performance is better on shorter input data blocks (and also
that turbo coding has better performance on longer input blocks), but can
someone confirm (or deny) that, and explain why it is true (or false)?

Thanks and best regards