Block codes: optimal vs. ease of decoding
Started by ●June 10, 2016

I may have asked this question some years ago, but cannot remember.

All of my books on error correcting codes are, basically, about codes
that can be decoded systematically.

I have a situation where when you divide the processor clock rate by the
data rate, you find that you have all the time in the world for
decoding. And, the customer is talking about upgrades.

Are there any known codes that work well with simple brute-force
maximum-likelihood decoding, but that might not be in the literature much
because they aren't mathematically "pretty"?

Inquiring minds want to know. Thanks.

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
Reply by ●June 10, 2016
Tim Wescott <tim@seemywebsite.com> wrote:

>I may have asked this question some years ago, but cannot remember.
>
>All of my books on error correcting codes are, basically, about codes
>that can be decoded systematically.

(Note: you may mean "algorithmically"; in coding, "systematic" has a
specific meaning.)

>I have a situation where when you divide the processor clock rate by the
>data rate, you find that you have all the time in the world for
>decoding. And, the customer is talking about upgrades.
>
>Are there any known codes that work well with simple brute-force maximum-
>likelihood decoding, but that might not be in the literature much because
>they aren't mathematically "pretty"?
>
>Inquiring minds want to know. Thanks.

Brute-force maximum-likelihood decoding blows up in complexity very
rapidly with the size parameters of the codeword. But: there are some
situations where this approach has been taken (a sketch follows below).

The BVD (Big Viterbi Decoder) designed by McEliece et al. handled a
binary convolutional code with 14 bits of state (2^14 states).

Other codes that are sometimes used with brute-force methods are Golay,
Reed-Muller, Quadratic Residue, and Nordstrom-Robinson. You may want to
find Sloane's survey of the more efficient known binary codes (Googling
will lead you to a version of this).

Any BCH code is amenable to BCJR decoding for a maximum-likelihood
result; the number of bits of state is equal to the number of check bits.

Steve
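A minimal sketch of the brute-force decoding approach Steve describes, in
Python. A (7,4) Hamming code stands in for the short codes he lists
(Golay, Reed-Muller, etc.); the code choice and all parameters here are
illustrative, not from the thread. The decoder simply compares the
received word against every codeword and picks the nearest in Hamming
distance:

# Brute-force maximum-likelihood (nearest-codeword) decoding of a small
# block code, here the (7,4) Hamming code.
import itertools

import numpy as np

# Generator matrix G = [I | P] for the (7,4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

# Enumerate all 2^k codewords once, up front.
messages = np.array(list(itertools.product([0, 1], repeat=4)), dtype=np.uint8)
codebook = messages @ G % 2          # shape (16, 7)

def ml_decode(received):
    """Return the message whose codeword is nearest in Hamming distance."""
    dists = np.count_nonzero(codebook != received, axis=1)
    return messages[np.argmin(dists)]

# Example: flip one bit of a codeword and recover the message.
msg = np.array([1, 0, 1, 1], dtype=np.uint8)
word = msg @ G % 2
word[2] ^= 1                          # single bit error
assert np.array_equal(ml_decode(word), msg)

For the (23,12) Golay code the same search runs over 4096 codewords; the
approach is identical, only the table grows.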
Reply by ●June 10, 2016
On Fri, 10 Jun 2016 19:38:57 -0500, Tim Wescott <tim@seemywebsite.com>
wrote:

>I may have asked this question some years ago, but cannot remember.
>
>All of my books on error correcting codes are, basically, about codes
>that can be decoded systematically.
>
>I have a situation where when you divide the processor clock rate by the
>data rate, you find that you have all the time in the world for
>decoding. And, the customer is talking about upgrades.
>
>Are there any known codes that work well with simple brute-force maximum-
>likelihood decoding, but that might not be in the literature much because
>they aren't mathematically "pretty"?
>
>Inquiring minds want to know. Thanks.

Trading processing complexity for coding gain is a very common thing to
do, and goes along with a lot of different kinds of codes, pretty or not,
"systematic" or not (and, yeah, there is ambiguity around what you may
mean by "systematic").

Even randomly generated codes (i.e., codes for which the encoder matrix
is just a random matrix) often work very well, but they're very
inefficient to decode...so that's an easy case where just throwing
complexity at the problem can potentially get you better coding gain (see
the sketch after this post).

Otherwise you may want to look into Turbo Codes, LDPC codes, or Turbo
Product Codes, which are more complex to decode but often give
capacity-approaching performance if they're done reasonably well.
Depending on what code is in the current system, switching to a
capacity-approaching code can be a big performance shift. Even if you
don't need all the additional gain, you can use the increased gain to
raise the code rate and get more throughput in the same channel.

So I think your option space is large in this case, because more decoder
complexity is the general recipe for improved coding gain, but you may
need to change the code, if that's possible.
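A hedged sketch of the random-encoder-matrix idea Eric mentions: a
systematic random linear code, with the parity part of the generator
drawn at random. The (n, k) = (16, 8) size and the fixed seed are
assumptions for illustration only:

# A random linear block code: the parity part of the generator matrix is
# just random bits. Rate k/n is a free choice.
import numpy as np

rng = np.random.default_rng(seed=1)   # fixed seed so the run is reproducible
n, k = 16, 8

# Systematic form G = [I_k | P] with random P, so the message bits
# appear verbatim in the codeword.
P = rng.integers(0, 2, size=(k, n - k), dtype=np.uint8)
G = np.hstack([np.eye(k, dtype=np.uint8), P])

def encode(msg):
    return msg @ G % 2

msg = rng.integers(0, 2, size=k, dtype=np.uint8)
print("message :", msg)
print("codeword:", encode(msg))

Decoding such a code is the same exhaustive nearest-codeword search as in
the earlier sketch, over all 2^k = 256 codewords, which is cheap when
processor cycles are abundant.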
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 02:25:47 +0000, Eric Jacobsen wrote:

> On Fri, 10 Jun 2016 19:38:57 -0500, Tim Wescott <tim@seemywebsite.com>
> wrote:
>
>>I may have asked this question some years ago, but cannot remember.
>>
>>All of my books on error correcting codes are, basically, about codes
>>that can be decoded systematically.
>>
>>I have a situation where when you divide the processor clock rate by the
>>data rate, you find that you have all the time in the world for
>>decoding. And, the customer is talking about upgrades.
>>
>>Are there any known codes that work well with simple brute-force
>>maximum-
>>likelihood decoding, but that might not be in the literature much
>>because they aren't mathematically "pretty"?
>>
>>Inquiring minds want to know. Thanks.
>
> Trading processing complexity for coding gain is a very common thing to
> do, and goes along with a lot of different kinds of codes, pretty or not
> or "systematic" or not (and, yeah, there is ambiguity around what you
> may mean by "systematic").
>
> Even randomly generated codes (i.e., codes for which the encoder matrix
> is just a random matrix) often work very well, but they're very
> inefficient to decode...so there's an easy case where just throwing
> complexity at the problem can get you potentially better coding.
>
> Otherwise you may want to look into Turbo Codes or LDPC codes or Turbo
> Product Codes, which are more complex to decode but often give
> capacity-approaching performance if they're done reasonably well.
> Depending on what code is in the current system, switching from whatever
> it is to a capacity-approaching code can be a big performance shift.
> Even if you don't need all the additional gain, you can also use the
> increased gain to increase the code rate and get more throughput in the
> same channel.
>
> So I think your option space is large in this case, because more decoder
> complexity is the general recipe for improved coding gain, but you may
> need to change the code if that's possible.

A significant constraint -- and why I was asking about block codes
specifically -- is the total delay. It's for a closed-loop system with a
fairly severe time vs. cost criterion.

So anything that approaches the channel capacity is almost automatically
out, because you approach the channel capacity by letting the delay get
long. Come to think of it -- stand by for a new thread.

(I'd love to find a book or paper that investigates error correcting
codes from this perspective. I suspect that I'm just going to have to
bumble along on my own, though.)

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
Reply by ●June 11, 2016
Tim Wescott <tim@seemywebsite.com> wrote:

>A significant constraint -- and why I was asking about block codes
>specifically -- is the total delay. It's for a closed-loop system with a
>fairly severe time vs. cost criterion.
>
>So anything that approaches the channel capacity is almost automatically
>out, because you approach the channel capacity by letting the delay get
>long. Come to think of it -- stand by for a new thread.
>
>(I'd love to find a book or paper that investigates error correcting
>codes from this perspective.

What you want is the Pollara et al. derivation of capacity vs. block
size:

http://ipnpr.jpl.nasa.gov/progress_report/42-133/133K.pdf

That said, the codes I previously mentioned are some of the best for
short block sizes, if ML decoded.

Figure 12 in the above paper shows which codes work close to capacity for
different block sizes: Hamming, Golay, Quadratic Residue, terminated
convolutional, turbo -- depending on the block size and code rate regime.

Steve
Reply by ●June 11, 2016
On 11.06.2016 3:38, Tim Wescott wrote:

> I may have asked this question some years ago, but cannot remember.
>
> All of my books on error correcting codes are, basically, about codes
> that can be decoded systematically.
>
> I have a situation where when you divide the processor clock rate by the
> data rate, you find that you have all the time in the world for
> decoding. And, the customer is talking about upgrades.
>
> Are there any known codes that work well with simple brute-force maximum-
> likelihood decoding, but that might not be in the literature much because
> they aren't mathematically "pretty"?
>
> Inquiring minds want to know. Thanks.

Just a suggestion. Random codes are presumed to work well, but they
indeed aren't pretty. And many codes used in the literature are presumed
to work as well as the random codes, but with the added requirement that
decoding is easy.

If you have the resources to do brute-force ML decoding, perhaps you
don't need a "code" at all. Just use some random 2^k -> 2^n mapping and
see if that's good for you (see the sketch after this post).

But, specifically for very short codes, there are some known goodies like
the perfect binary Golay code, which are worth looking at.

Gene
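Gene's suggestion taken literally, as a sketch: draw 2^k distinct n-bit
words at random as the codebook (no linear structure at all) and check
the minimum pairwise Hamming distance you got. The (n, k) = (15, 6) size
and the seed are illustrative assumptions:

# An arbitrary (not necessarily linear) random mapping from k-bit
# messages to distinct n-bit words, decoded by exhaustive search.
import numpy as np

rng = np.random.default_rng(seed=2)
n, k = 15, 6

# Draw 2^k distinct codewords out of the 2^n possible n-bit words.
indices = rng.choice(2**n, size=2**k, replace=False)
codebook = (indices[:, None] >> np.arange(n)) & 1   # unpack to rows of bits

def min_distance(book):
    """Minimum pairwise Hamming distance -- the figure of merit to check."""
    d = book.shape[1]
    for i in range(len(book) - 1):
        diffs = np.count_nonzero(book[i + 1:] != book[i], axis=1)
        d = min(d, int(diffs.min()))
    return d

# ML decoding of a received hard-decision word is the same nearest-
# neighbor search as in the earlier sketches.
print("minimum distance of this random codebook:", min_distance(codebook))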
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 11:17:35 +0300, Evgeny Filatov wrote:

> On 11.06.2016 3:38, Tim Wescott wrote:
>> I may have asked this question some years ago, but cannot remember.
>>
>> All of my books on error correcting codes are, basically, about codes
>> that can be decoded systematically.
>>
>> I have a situation where when you divide the processor clock rate by
>> the data rate, you find that you have all the time in the world for
>> decoding. And, the customer is talking about upgrades.
>>
>> Are there any known codes that work well with simple brute-force
>> maximum-
>> likelihood decoding, but that might not be in the literature much
>> because they aren't mathematically "pretty"?
>>
>> Inquiring minds want to know. Thanks.
>
> Just a suggestion. Random codes are presumed to work well, but they
> indeed aren't pretty. And many codes used in the literature are presumed
> to work as well as the random codes, but with the requirement that
> decoding is easy.
>
> If you have resources to do brute-force ML decoding, perhaps you don't
> need a "code" at all. Just use some random 2^k -> 2^n mapping and see if
> that's good for you.
>
> But, specifically for very short codes there are some known goodies like
> a perfect binary Golay code, which are worth looking at.
>
> Gene

Or do a brute-force search of 2^k -> 2^n mappings for the ones with the
best minimum Hamming distance. This is nice because I can also look for
things that aren't in the literature, like the Hamming distance between
two words when one of them has cycle-slipped by one bit. (A sketch of
such a search follows below.)

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
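A sketch along the lines of Tim's search, with one hedge: the space of
all 2^k -> 2^n mappings is far too large to enumerate for interesting
sizes, so random sampling of candidate codebooks stands in for exhaustive
search here. The score includes the distance between codewords when one
is cyclically slipped by one bit, as Tim describes; sizes, seed, and
trial count are illustrative:

# Randomized search for a codebook with the best worst-case Hamming
# distance, counting both aligned and one-bit-cycle-slipped comparisons.
import numpy as np

rng = np.random.default_rng(seed=3)
n, k, trials = 12, 4, 2000

def score(book):
    """Worst-case Hamming distance over all codeword pairs, both aligned
    and with one word cyclically slipped by one bit."""
    worst = book.shape[1]
    slipped = np.roll(book, 1, axis=1)      # every codeword slipped one bit
    for i in range(len(book)):
        others = np.delete(book, i, axis=0)
        d_align = np.count_nonzero(others != book[i], axis=1)
        # Distance to *every* slipped codeword, including this word's own
        # slipped image, so a slip stays detectable.
        d_slip = np.count_nonzero(slipped != book[i], axis=1)
        worst = min(worst, int(d_align.min()), int(d_slip.min()))
    return worst

best_book, best_score = None, -1
for _ in range(trials):
    idx = rng.choice(2**n, size=2**k, replace=False)
    book = (idx[:, None] >> np.arange(n)) & 1
    s = score(book)
    if s > best_score:
        best_book, best_score = book, s

print("best worst-case distance found over", trials, "codebooks:", best_score)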
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 12:07:35 -0500, Tim Wescott <tim@seemywebsite.com>
wrote:

>On Sat, 11 Jun 2016 11:17:35 +0300, Evgeny Filatov wrote:
>
>> On 11.06.2016 3:38, Tim Wescott wrote:
>>> I may have asked this question some years ago, but cannot remember.
>>>
>>> All of my books on error correcting codes are, basically, about codes
>>> that can be decoded systematically.
>>>
>>> I have a situation where when you divide the processor clock rate by
>>> the data rate, you find that you have all the time in the world for
>>> decoding. And, the customer is talking about upgrades.
>>>
>>> Are there any known codes that work well with simple brute-force
>>> maximum-
>>> likelihood decoding, but that might not be in the literature much
>>> because they aren't mathematically "pretty"?
>>>
>>> Inquiring minds want to know. Thanks.
>>
>> Just a suggestion. Random codes are presumed to work well, but they
>> indeed aren't pretty. And many codes used in the literature are presumed
>> to work as well as the random codes, but with the requirement that
>> decoding is easy.
>>
>> If you have resources to do brute-force ML decoding, perhaps you don't
>> need a "code" at all. Just use some random 2^k -> 2^n mapping and see if
>> that's good for you.
>>
>> But, specifically for very short codes there are some known goodies like
>> a perfect binary Golay code, which are worth looking at.
>>
>> Gene
>
>Or do a brute-force search of all 2^k -> 2^n mappings for the ones with
>the best Hamming distance. This is nice because I can also look for
>things that aren't in the literature, like Hamming distance between two
>words when one of them has cycle-slipped by one bit.

Yes, exhaustive search is one way to decode any block code, and there are
lots of easy tricks to simplify it quite a bit for a simple random binary
code. It's also not too hard to do soft-decision processing on mismatched
bits (with candidate codewords) to help break ties or improve performance
(see the soft-decision sketch after this post).
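A minimal sketch of the soft-decision processing Eric mentions, under the
usual BPSK-over-AWGN assumption: the ML codeword is the one whose +/-1
image correlates best with the received soft samples (for equal-energy
codewords, maximizing correlation is equivalent to minimizing Euclidean
distance). The (7,4) Hamming codebook from the earlier sketch is reused
purely for illustration:

# Soft-decision ML decoding by correlation against the BPSK images of
# all codewords.
import itertools

import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
messages = np.array(list(itertools.product([0, 1], repeat=4)), dtype=np.uint8)
codebook = messages @ G % 2
bpsk = 1.0 - 2.0 * codebook            # bit 0 -> +1, bit 1 -> -1

def soft_ml_decode(soft):
    """soft: received samples, one float per bit (sign = hard decision)."""
    return messages[np.argmax(bpsk @ soft)]

# Example: transmit a codeword through mild AWGN and decode.
rng = np.random.default_rng(seed=4)
msg = np.array([0, 1, 1, 0], dtype=np.uint8)
tx = 1.0 - 2.0 * (msg @ G % 2)
rx = tx + 0.6 * rng.standard_normal(7)
print("decoded:", soft_ml_decode(rx), "sent:", msg)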
Reply by ●June 11, 2016
Tim Wescott wrote:

> I may have asked this question some years ago, but cannot remember.
>
> All of my books on error correcting codes are, basically, about codes
> that can be decoded systematically.
>
> I have a situation where when you divide the processor clock rate by the
> data rate, you find that you have all the time in the world for
> decoding. And, the customer is talking about upgrades.
>
> Are there any known codes that work well with simple brute-force maximum-
> likelihood decoding, but that might not be in the literature much because
> they aren't mathematically "pretty"?
>
> Inquiring minds want to know. Thanks.

Why not just use turbo or LDPC codes? Too much redundant data?

--
Les Cargill
Reply by ●June 15, 2016
On Saturday, June 11, 2016 at 12:27:05 AM UTC-4, Steve Pope wrote:

> Tim Wescott <tim@.com> wrote:
>
> >A significant constraint -- and why I was asking about block codes
> >specifically -- is the total delay. It's for a closed-loop system with a
> >fairly severe time vs. cost criterion.
> >
> >So anything that approaches the channel capacity is almost automatically
> >out, because you approach the channel capacity by letting the delay get
> >long. Come to think of it -- stand by for a new thread.
> >
> >(I'd love to find a book or paper that investigates error correcting
> >codes from this perspective.
>
> What you want is the Pollara et al. derivation of capacity vs.
> block size.
>
> http://ipnpr.jpl.nasa.gov/progress_report/42-133/133K.pdf
>
> That said, the codes I previously mentioned are some of the best
> for short block sizes, if ML decoded.
>
> You will see in Figure 12 in the above paper which codes work close
> to capacity for different block sizes: Hamming, Golay, Quadratic
> Residue, terminated convolutional, turbo, depending on the block
> size and code rate regime.
>
> Steve

I love that paper.