DSPRelated.com
Forums

Reed-Solomon FEC necessary to have the same characteristic for symbols and words?

Started by lrq3000 June 29, 2015
On Thu, 16 Jul 2015 18:52:41 +0000 (UTC), spope33@speedymail.org
(Steve Pope) wrote:

>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>
>>On Thu, 16 Jul 2015 06:53:48 -0500, "lrq3000" <106516@DSPRelated>
>>
>>>Has anyone heard of any ECC without a limit on the codeword length?
>>
>>Use a FEC like a convolutional code? You can stream it as long as you
>>like.
>
>Yes, but the simplest such approaches have multiple codewords
>in the stream. Which is fine. One approach I like is Berlekamp's
>helical interleaver, in which the codeword length N and the interleaving
>depth D are selected to be relatively prime.
>
>Steve
Not sure what you're talking about here. A convolutional code by itself doesn't require an interleaver, and the simplest form doesn't have multiple codewords.

I mention it because it can have any desired length of information stream, up to infinite.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
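To illustrate the "stream it as long as you like" point, here is a minimal sketch (not from the thread; it assumes the classic rate-1/2, constraint-length-3 encoder with octal generators 7 and 5) showing that a convolutional encoder has no codeword-length limit: it emits one output pair per input bit, for as many bits as you feed it.

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder: shift each input bit into a
    K-bit register and output the parity of the register masked by
    each generator polynomial. Works for any input length."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") & 1)  # parity under g1
        out.append(bin(state & g2).count("1") & 1)  # parity under g2
    return out

# The encoder just streams: 4 input bits give 8 output bits,
# 100 input bits give 200, and so on, with no block boundary.
print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

The same function encodes a 10-bit burst or a million-bit file; the only state carried along is the K-bit shift register, which is what distinguishes a convolutional code from a fixed-length block code.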
Eric Jacobsen <eric.jacobsen@ieee.org> wrote:

>On Thu, 16 Jul 2015 18:52:41 +0000 (UTC), spope33@speedymail.org
>>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>>
>>>On Thu, 16 Jul 2015 06:53:48 -0500, "lrq3000" <106516@DSPRelated>
>>>
>>>>Has anyone heard of any ECC without a limit on the codeword length?
>>>
>>>Use a FEC like a convolutional code? You can stream it as long as you
>>>like.
>>
>>Yes, but the simplest such approaches have multiple codewords
>>in the stream. Which is fine. One approach I like is Berlekamp's
>>helical interleaver, in which the codeword length N and the interleaving
>>depth D are selected to be relatively prime.
>Not sure what you're talking about here.
I completely mis-read you, and thought the goal was to stream a string of Reed-Solomon codewords. Sorry about that.
>A convolutional code by
>itself doesn't require an interleaver, and the simplest form doesn't
>have multiple codewords.
>
>I mention it because it can have any desired length of information
>stream, up to infinite.
Yep.

Steve
On Thursday, July 16, 2015 at 11:20:47 AM UTC-5, Eric Jacobsen wondered:

> Use a FEC like a convolutional code? You can stream it as long as you
> like.
Convolutional codes over extension fields such as GF(2^8) are the _same_ as convolutional codes over the base field (GF(2)), just with a longer constraint length. Each codeword symbol c_m in the code over GF(2^8) is a linear combination

c_m = sum_{i=0}^{t} g_i u_{m-i}

of the current data symbol and the previous t data symbols with "weights" g_i, the coefficients of the appropriate generator polynomial. But the 8 bits in the product g_i u_{m-i} are 8 different XOR sums of the 8 bits in u_{m-i}, and so an (n,k) convolutional code over GF(2^8) is just an (8n,8k) convolutional code over GF(2) with a constraint length (measured in bits) that is 8 times larger than the constraint length (measured in bytes) of the GF(2^8) code.

Thus, unless one has devised a fantastic new method of using the properties of GF(2^8) in the design of a convolutional code over GF(2^8), there is little to be gained by working in the larger field instead of over GF(2), where there is already a huge literature about good codes and how to decode them. In any case, even if one has the fantastic new method, the _complexity of the decoder_ will likely make the scheme impractical in terms of current (and foreseeable future) technologies.

Dilip Sarwate
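The key step in the argument above — that multiplication by a fixed g_i in GF(2^8) is just 8 XOR sums of the 8 input bits — can be checked directly. A small sketch (assuming the AES reduction polynomial x^8 + x^4 + x^3 + x + 1, i.e. 0x11B; any irreducible degree-8 polynomial works the same way):

```python
def gf256_mul(a, b, poly=0x11B):
    """Carry-less 'peasant' multiplication in GF(2^8), reducing
    by the degree-8 polynomial `poly` whenever bit 8 is set."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

g = 0x1D  # an arbitrary fixed "tap weight" g_i

# Multiplication by a fixed g is GF(2)-linear on the bits of u:
# g*(u XOR v) == (g*u) XOR (g*v). This is exactly why a GF(2^8)
# convolutional code is also a (longer) binary convolutional code.
for u in range(256):
    for v in range(256):
        assert gf256_mul(g, u ^ v) == gf256_mul(g, u) ^ gf256_mul(g, v)

# Equivalently, multiplication by g is an 8x8 binary matrix whose
# column i is g * 2^i; each output bit is an XOR of input bits.
M = [gf256_mul(g, 1 << i) for i in range(8)]

def mul_via_matrix(u):
    out = 0
    for i in range(8):
        if (u >> i) & 1:
            out ^= M[i]
    return out

assert all(mul_via_matrix(u) == gf256_mul(g, u) for u in range(256))
```

So each byte-wide tap of the GF(2^8) encoder unfolds into 8 XOR networks over the input bits, giving the (8n,8k) binary code described above.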
On Fri, 17 Jul 2015 04:05:37 -0700 (PDT), dvsarwate
<dvsarwate@yahoo.com> wrote:

>On Thursday, July 16, 2015 at 11:20:47 AM UTC-5, Eric Jacobsen wondered:
>
>>> Use a FEC like a convolutional code? You can stream it as long as you
>>> like.
>
>Convolutional codes over extension fields such
>as GF(2^8) are the _same_ as convolutional codes
>over the base field (GF(2)), just with a longer
>constraint length. Each codeword symbol c_m in the
>code over GF(2^8) is a linear combination
>
>c_m = sum_{i=0}^{t} g_i u_{m-i}
>
>of the current data symbol and the previous t data
>symbols with "weights" g_i, the coefficients of
>the appropriate generator polynomial. But the
>8 bits in the product g_i u_{m-i} are 8
>different XOR sums of the 8 bits in u_{m-i}, and
>so an (n,k) convolutional code over GF(2^8) is
>just an (8n,8k) convolutional code over GF(2) with
>a constraint length (measured in bits) that is 8
>times larger than the constraint length (measured
>in bytes) of the GF(2^8) code.
>
>Thus, unless one has devised a fantastic new method
>of using the properties of GF(2^8) in the design of
>a convolutional code over GF(2^8), there is little
>to be gained by working in the larger field instead of
>over GF(2), where there is already a huge literature
>about good codes and how to decode them. In any case,
>even if one has the fantastic new method, the
>_complexity of the decoder_ will likely make the scheme
>impractical in terms of current (and foreseeable
>future) technologies.
>
>Dilip Sarwate
Since the question was more open-ended, i.e., "Has anyone heard of any ECC without a limit on the codeword length?", I figured I'd suggest convolutional codes, and I had binary in mind. Even with the larger field, though, it does address the problem of extremely large codewords. ;)

Larger fields have been used in convolutional codes for Turbo Coding to squeeze a little more performance out of short blocks, e.g., the DVB-RCS standard, which uses duo-binary GF(2^2). I think there are other ways to do it, but that's what was used in that case.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
dvsarwate  <dvsarwate@yahoo.com> wrote:

>Convolutional codes over extension fields such
>as GF(2^8) are the _same_ as convolutional codes
>over the base field (GF(2)), just with a longer
>constraint length. Each codeword symbol c_m in the
>code over GF(2^8) is a linear combination
>
>c_m = sum_{i=0}^{t} g_i u_{m-i}
>
>of the current data symbol and the previous t data
>symbols with "weights" g_i, the coefficients of
>the appropriate generator polynomial. But the
>8 bits in the product g_i u_{m-i} are 8
>different XOR sums of the 8 bits in u_{m-i}, and
>so an (n,k) convolutional code over GF(2^8) is
>just an (8n,8k) convolutional code over GF(2) with
>a constraint length (measured in bits) that is 8
>times larger than the constraint length (measured
>in bytes) of the GF(2^8) code.
>
>Thus, unless one has devised a fantastic new method
>of using the properties of GF(2^8) in the design of
>a convolutional code over GF(2^8), there is little
>to be gained by working in the larger field instead of
>over GF(2), where there is already a huge literature
>about good codes and how to decode them. In any case,
>even if one has the fantastic new method, the
>_complexity of the decoder_ will likely make the scheme
>impractical in terms of current (and foreseeable
>future) technologies.
Yes, I've wrangled this problem in my head and on paper, and there doesn't seem to be much way of getting around the vast complexity of trellis/Viterbi/BCJR decoding of a nonbinary algebraic code.

But things like BCJR-decoding a (63, 57) Hamming code are feasible. One can show in simulation that BCJR-decoded Hamming codes give a slightly lower BER than Viterbi decoding of the same codes, although the main reason for doing it is for when you need a better soft output than SOVA, etc.

Tangentially, I think you mis-read Eric's post similarly to how I mis-read it.

Steve
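As a toy stand-in for the soft decoding of short Hamming codes mentioned above (a sketch only — this is brute-force soft-decision ML on the tiny (7,4) Hamming code, not BCJR on the (63, 57) code from the thread): with only 2^4 codewords one can simply correlate the received soft values against the whole codebook, which is the quantity a trellis decoder computes more cleverly.

```python
import itertools

# Systematic generator rows for the [7,4,3] Hamming code:
# 4 data bits followed by 3 parity bits.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(u):
    """Encode 4 data bits into a 7-bit codeword (mod-2 row combination)."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

# All 16 codewords, enumerated once.
CODEBOOK = [encode(list(u)) for u in itertools.product([0, 1], repeat=4)]

def soft_ml_decode(llr):
    """Soft-decision ML decoding: pick the codeword maximizing the
    correlation with the LLRs (positive LLR = bit 0 more likely,
    BPSK mapping 0 -> +1, 1 -> -1)."""
    return max(CODEBOOK,
               key=lambda c: sum(l * (1 - 2 * b) for l, b in zip(llr, c)))

# A codeword received with one low-confidence, wrong-sign LLR in
# position 0 is still recovered, since the code has minimum distance 3.
c = encode([1, 0, 1, 0])            # [1, 0, 1, 0, 1, 0, 1]
llr = [0.5, 2, -2, 2, -2, 2, -2]    # ideal would be [-2, 2, -2, ...]
assert soft_ml_decode(llr) == c
```

The exhaustive correlation here scales as 2^k per decoded word, which is why it only works for toy codes; the trellis/BCJR machinery exists precisely to get the same (or per-bit optimal) answer without enumerating the codebook.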