>Convolutional codes over extension fields such
>as GF(2^8) are the _same_ as convolutional codes
>over the base field (GF(2)), but with a longer
>constraint length. Each codeword symbol c_m in the
>code over GF(2^8) is a linear combination
>
>c_m = sum_{i=0}^t g_i u_{m-i}
>
>of the current data symbol and previous t data
>symbols with "weights" g_i, the coefficients of
>the appropriate generator polynomial. But, the
>8 bits in the product g_i u_{m-i} are 8
>different XOR sums of the 8 bits in u_{m-i}, and
>so an (n,k) convolutional code over GF(2^8) is
>just an (8n,8k) convolutional code over GF(2) with
>a constraint length (measured in bits) that is 8
>times larger than the constraint length (measured
>in bytes) of the GF(2^8) code.
>
>Thus, unless one has devised a fantastic new method
>of using the properties of GF(2^8) in the design of
>a convolutional code over GF(2^8), there is little
>to be gained by working in the larger field instead of
>over GF(2) where there is a huge literature already
>about good codes and how to decode them. In any case,
>even if one has the fantastic new method, the
>_complexity of the decoder_ will likely make the scheme
>impractical in terms of current (and foreseeable
>future) technologies.
Yes, I've wrangled this problem in my head and on paper
and there doesn't seem to be much way of getting around
the vast complexity of trellis/Viterbi/BCJR decoding
of a nonbinary algebraic code.
But things like BCJR-decoding a (63, 57) Hamming code
are feasible. One can show in simulation that BCJR-decoded
Hamming codes give a slightly lower BER than Viterbi decoding of
the same codes, although the main reason for doing it
is when you need better soft output than SOVA, etc.
Tangentially, I think you mis-read Eric's post similarly
to how I mis-read it.
Steve
>On Thursday, July 16, 2015 at 11:20:47 AM UTC-5, Eric Jacobsen wondered:
>
>>
>> Use a FEC like a convolutional code? You can stream it as long as you
>> like.
>
>Convolutional codes over extension fields such
>as GF(2^8) are the _same_ as convolutional codes
>over the base field (GF(2)), but with a longer
>constraint length. Each codeword symbol c_m in the
>code over GF(2^8) is a linear combination
>
>c_m = sum_{i=0}^t g_i u_{m-i}
>
>of the current data symbol and previous t data
>symbols with "weights" g_i, the coefficients of
>the appropriate generator polynomial. But, the
>8 bits in the product g_i u_{m-i} are 8
>different XOR sums of the 8 bits in u_{m-i}, and
>so an (n,k) convolutional code over GF(2^8) is
>just an (8n,8k) convolutional code over GF(2) with
>a constraint length (measured in bits) that is 8
>times larger than the constraint length (measured
>in bytes) of the GF(2^8) code.
>
>Thus, unless one has devised a fantastic new method
>of using the properties of GF(2^8) in the design of
>a convolutional code over GF(2^8), there is little
>to be gained by working in the larger field instead of
>over GF(2) where there is a huge literature already
>about good codes and how to decode them. In any case,
>even if one has the fantastic new method, the
>_complexity of the decoder_ will likely make the scheme
>impractical in terms of current (and foreseeable
>future) technologies.
>
>Dilip Sarwate
Since the question was more open-ended, i.e., "Has anyone heard of
any ECC without a limit on the codeword length?", I figured I'd
suggest convolutional codes, and I had binary in mind. Even with the
larger field, though, it does address the problem of extremely large
codewords. ;)
Larger fields have been used in convolutional codes for Turbo Coding
to squeeze a little more performance out of short blocks, e.g., the
DVB-RCS standard, which uses duo-binary GF(2^2). I think there are
other ways to do it, but that's what was used in that case.
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
Reply by dvsarwate●July 17, 2015
On Thursday, July 16, 2015 at 11:20:47 AM UTC-5, Eric Jacobsen wondered:
>
> Use a FEC like a convolutional code? You can stream it as long as you
> like.
Convolutional codes over extension fields such
as GF(2^8) are the _same_ as convolutional codes
over the base field (GF(2)), but with a longer
constraint length. Each codeword symbol c_m in the
code over GF(2^8) is a linear combination
c_m = sum_{i=0}^t g_i u_{m-i}
of the current data symbol and previous t data
symbols with "weights" g_i, the coefficients of
the appropriate generator polynomial. But, the
8 bits in the product g_i u_{m-i} are 8
different XOR sums of the 8 bits in u_{m-i}, and
so an (n,k) convolutional code over GF(2^8) is
just an (8n,8k) convolutional code over GF(2) with
a constraint length (measured in bits) that is 8
times larger than the constraint length (measured
in bytes) of the GF(2^8) code.
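[Editor's note: the bit-level claim above can be checked directly. Multiplication by a fixed g in GF(2^8) is GF(2)-linear, so it is exactly an 8x8 binary matrix, and each output bit is an XOR of input bits. A minimal sketch, not part of the original post; the 0x11d reduction polynomial is an assumption, any GF(2^8) polynomial behaves the same way.]

```python
def gf256_mul(a, b, poly=0x11d):
    """Carry-less multiply in GF(2^8), reduced by the given polynomial
    (0x11d = x^8+x^4+x^3+x^2+1, commonly used for Reed-Solomon)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def bit_matrix(g):
    """8x8 GF(2) matrix M for multiplication by g:
    output bit j of g*u is the XOR of input bits i with M[j][i] == 1."""
    basis = [gf256_mul(g, 1 << i) for i in range(8)]  # image of each input bit
    return [[(basis[i] >> j) & 1 for i in range(8)] for j in range(8)]

def apply_matrix(M, u):
    """Compute g*u using only XORs of the bits of u, via the matrix M."""
    out = 0
    for j in range(8):
        bit = 0
        for i in range(8):
            bit ^= M[j][i] & ((u >> i) & 1)
        out |= bit << j
    return out
```

So a GF(2^8) multiply is nothing more than 8 parallel XOR sums over the 8 input bits, which is the substitution that turns the (n,k) byte-level encoder into an (8n,8k) bit-level encoder.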
Thus, unless one has devised a fantastic new method
of using the properties of GF(2^8) in the design of
a convolutional code over GF(2^8), there is little
to be gained by working in the larger field instead of
over GF(2) where there is a huge literature already
about good codes and how to decode them. In any case,
even if one has the fantastic new method, the
_complexity of the decoder_ will likely make the scheme
impractical in terms of current (and foreseeable
future) technologies.
Dilip Sarwate
>>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>>
>>>On Thu, 16 Jul 2015 06:53:48 -0500, "lrq3000" <106516@DSPRelated>
>>
>>>Has anyone heard of any ECC without a limit on the codeword length?
>>
>>>Use a FEC like a convolutional code? You can stream it as long as you
>>>like.
>>
>>Yes, but the simplest such approaches have multiple codewords
>>in the stream. Which is fine. One approach I like is Berlekamp's
>>helical interleaver, in which the codeword length N and the interleaving
>>depth D are selected to be relatively prime.
>Not sure what you're talking about here.
I completely mis-read you, and thought the goal was to stream a
string of Reed-Solomon codewords.
Sorry about that.
> A convolutional code by
>itself doesn't require an interleaver, and the simplest form doesn't
>have multiple codewords.
>
>I mention it because it can have any desired length of information
>stream, up to infinite.
>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>
>>On Thu, 16 Jul 2015 06:53:48 -0500, "lrq3000" <106516@DSPRelated>
>
>>>Has anyone heard of any ECC without a limit on the codeword length?
>
>>Use a FEC like a convolutional code? You can stream it as long as you
>>like.
>
>Yes, but the simplest such approaches have multiple codewords
>in the stream. Which is fine. One approach I like is Berlekamp's
>helical interleaver, in which the codeword length N and the interleaving
>depth D are selected to be relatively prime.
>
>Steve
Not sure what you're talking about here. A convolutional code by
itself doesn't require an interleaver, and the simplest form doesn't
have multiple codewords.
I mention it because it can have any desired length of information
stream, up to infinite.
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
>>Has anyone heard of any ECC without a limit on the codeword length?
>Use a FEC like a convolutional code? You can stream it as long as you
>like.
Yes, but the simplest such approaches have multiple codewords
in the stream. Which is fine. One approach I like is Berlekamp's
helical interleaver, in which the codeword length N and the interleaving
depth D are selected to be relatively prime.
Steve
Reply by Eric Jacobsen●July 16, 2015
On Thu, 16 Jul 2015 06:53:48 -0500, "lrq3000" <106516@DSPRelated>
wrote:
>>Thank you all very much, all the answers were very helpful.
>>
>>I still tried to use longer codewords than what is normally permitted by
>>the field's size (e.g., a codeword of length 510 in the field GF(2^8)), and I adapted
>>the different algorithms to support that, but there's a theoretical wall
>>that I can't beat: values above 2^8 will just be modulo reduced, so that
>>any error will have two locations: one in range 0-255 and one in range
>>256-510. So now I better understand the intuition behind the fact that the
>>field's size must be the same both at the codeword-length level and at
>>the symbol-value level.
>>
>>Also thank you Dilip for your explanation, this is an alternative I have
>>thought of and that I would probably have spent time trying without your
>>clear explanation that it's not worthwhile at all.
>>
>>Steve Pope, I am interested in learning more about singly and doubly
>>extended RS codes, what good resource would you advise (if possible
>>with algorithms, not only mathematical equations, as I intend to implement
>>them), and what are the advantages of extended RS codes compared to
>>standard RS?
>>---------------------------------------
>>Posted through http://www.DSPRelated.com
>
>PS: to extend my explanation about why the field's size needs to be the
>same: the values of the generated ECC symbols (and of any symbol, in
>fact) are ultimately used to generate the positions of the errors. If a
>symbol's value is limited to 2^8, you cannot generate a position above
>that, and thus a codeword cannot be any longer. So in reality, there is
>no field size at the "codeword level" as I imagined: there is only the
>field size at the symbol level, and the limit on codeword length is just
>a natural consequence.
>
>Anyway I wonder if some genius mathematician could devise an error
>correcting code that would not have any limit on the codeword length. I've
>heard about Cauchy-Reed-Solomon, but in fact it's a trick because it just
>moves the limit by aggregating several symbols into blocks, and the
>codeword limit is then on the number of blocks instead of the number of
>symbols.
>
>Has anyone heard of any ECC without a limit on the codeword length?
Use a FEC like a convolutional code? You can stream it as long as you
like.
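[Editor's note: a minimal sketch of the streaming point, not from the post. This is the classic rate-1/2, constraint-length-3 feed-forward convolutional encoder with (7,5)-octal generators; the input can be any length, which is exactly why there is no codeword-size limit.]

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3, generators
    (7,5) octal. Streams: the input may be arbitrarily long."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | (b & 1)) & 0b111  # register: current + 2 past bits
        for g in gens:
            # each output bit is the parity (XOR) of the tapped register bits
            out.append(bin(state & g).count("1") & 1)
    return out
```

For example, `conv_encode([1, 0, 1, 1])` yields the familiar (7,5) output `11 10 00 01`, and a million-bit input works just the same.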
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
Reply by dvsarwate●July 16, 2015
On Thursday, July 16, 2015 at 6:53:51 AM UTC-5, lrq3000 asked:
> Anyway I wonder if some genius mathematician could devise an error
> correcting code that would not have any limit on the codeword length.
About 55 years ago, "genius mathematicians" named Bose and
Chaudhuri, and independently Hocquenghem, devised a method
for constructing t-error-correcting codes that can be used
to make binary codes (and more generally, codes over a
specified finite field such as GF(256)) that are as long as
one wants. Note that t is fixed but the block length n can
be made as large as one wants.
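[Editor's note: a rough sketch of that scaling, not from the post. It uses the standard BCH redundancy bound: a binary t-error-correcting BCH code of length n = 2^m - 1 needs at most m*t parity bits, so for fixed t the rate approaches 1 as n grows.]

```python
def bch_params_lower_bound(m, t):
    """Length n = 2^m - 1 and a guaranteed dimension k >= n - m*t for a
    binary t-error-correcting BCH code (at most m*t parity bits)."""
    n = 2**m - 1
    k_min = n - m * t
    return n, k_min

for m in (4, 6, 8, 10, 14):
    n, k = bch_params_lower_bound(m, t=3)
    print(f"t=3, n={n:6d}, k>={k:6d}, rate >= {k/n:.4f}")
```

For t = 3 this reproduces well-known codes like the (63,45) and (255,231) BCH codes, and by m = 14 the guaranteed rate already exceeds 0.99.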
What exactly do you need?
Dilip Sarwate
Reply by lrq3000●July 16, 2015
>Thank you all very much, all the answers were very helpful.
>
>I still tried to use longer codewords than what is normally permitted by
>the field's size (e.g., a codeword of length 510 in the field GF(2^8)), and I adapted
>the different algorithms to support that, but there's a theoretical wall
>that I can't beat: values above 2^8 will just be modulo reduced, so that
>any error will have two locations: one in range 0-255 and one in range
>256-510. So now I better understand the intuition behind the fact that the
>field's size must be the same both at the codeword-length level and at
>the symbol-value level.
>
>Also thank you Dilip for your explanation, this is an alternative I have
>thought of and that I would probably have spent time trying without your
>clear explanation that it's not worthwhile at all.
>
>Steve Pope, I am interested in learning more about singly and doubly
>extended RS codes, what good resource would you advise (if possible
>with algorithms, not only mathematical equations, as I intend to implement
>them), and what are the advantages of extended RS codes compared to
>standard RS?
PS: to extend my explanation about why the field's size needs to be the
same: the values of the generated ECC symbols (and of any symbol, in
fact) are ultimately used to generate the positions of the errors. If a
symbol's value is limited to 2^8, you cannot generate a position above
that, and thus a codeword cannot be any longer. So in reality, there is
no field size at the "codeword level" as I imagined: there is only the
field size at the symbol level, and the limit on codeword length is just
a natural consequence.
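[Editor's note: the aliasing described above can be checked directly. A sketch, not from the post; the 0x11d reduction polynomial and the generator alpha = 2 are assumptions. In a GF(2^8) Reed-Solomon code the locator for position p is alpha^p, and alpha has multiplicative order 255, so positions p and p+255 produce the same locator and the decoder cannot distinguish them.]

```python
def gf256_mul(a, b, poly=0x11d):
    """Carry-less multiply in GF(2^8), reduced by the given polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf256_pow(base, e):
    """base**e in GF(2^8) by repeated multiplication."""
    r = 1
    for _ in range(e):
        r = gf256_mul(r, base)
    return r

alpha = 0x02  # primitive element for the 0x11d field

# The locator for position 300 collides with the one for position 45:
assert gf256_pow(alpha, 300) == gf256_pow(alpha, 300 - 255)
```

The first 255 powers of alpha are all distinct, so 255 is exactly the number of positions the field can address, which is the "natural consequence" limiting the codeword length.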
Anyway I wonder if some genius mathematician could devise an error
correcting code that would not have any limit on the codeword length. I've
heard about Cauchy-Reed-Solomon, but in fact it's a trick because it just
moves the limit by aggregating several symbols into blocks, and the
codeword limit is then on the number of blocks instead of the number of
symbols.
Has anyone heard of any ECC without a limit on the codeword length?
---------------------------------------
Posted through http://www.DSPRelated.com
Reply by lrq3000●July 16, 2015
Thank you all very much, all the answers were very helpful.
I still tried to use longer codewords than what is normally permitted by
the field's size (e.g., a codeword of length 510 in the field GF(2^8)), and I
adapted the different algorithms to support that, but there's a theoretical
wall that I can't beat: values above 2^8 will just be modulo reduced, so that
any error will have two locations: one in range 0-255 and one in range
256-510. So now I better understand the intuition behind the fact that the
field's size must be the same both at the codeword-length level and at
the symbol-value level.
Also thank you Dilip for your explanation, this is an alternative I have
thought of and that I would probably have spent time trying without your
clear explanation that it's not worthwhile at all.
Steve Pope, I am interested in learning more about singly and doubly
extended RS codes. What good resource would you advise (if possible
with algorithms, not only mathematical equations, as I intend to implement
them), and what are the advantages of extended RS codes compared to
standard RS?
---------------------------------------
Posted through http://www.DSPRelated.com