DSPRelated.com
Forums

Error correcting codes for extremely high error rates

Started by Johannes Bauer January 14, 2014
Hi group,

when designing an error correcting function it's usually assumed that
the error rate is relatively small (i.e. the ratio of "good" bits to
"bad" bits is usually well above 1). However, let's assume we have a
transmission that is extremely faulty and produces only 1 "good" bit
every 10 bits on average. What kind of error correction mechanisms would
be applicable to such a distribution? Let's say we want to be able to
recover the original bitstream with a 99% probability and still want to
keep the overhead to a minimum.

Best regards,
Johannes

-- 
>> Where exactly did you predict the quake again?
> At least not publicly!
Ah, the newest and to this day most ingenious stroke of our great
cosmologists: the secret prediction.
 - Karl Kaos about Rüdiger Thomas in dsa <hidbv3$om2$1@speranza.aioe.org>
On Tuesday, January 14, 2014 10:40:12 AM UTC-5, Johannes Bauer wrote:
> Hi group,
>
> when designing an error correcting function it's usually assumed that
> the error rate is relatively small (i.e. the ratio of "good" bits to
> "bad" bits is usually well above 1). However, let's assume we have a
> transmission that is extremely faulty and produces only 1 "good" bit
> every 10 bits on average. What kind of error correction mechanisms would
> be applicable to such a distribution? Let's say we want to be able to
> recover the original bitstream with a 99% probability and still want to
> keep the overhead to a minimum.
>
> Best regards,
> Johannes
You have to describe your channel more precisely. Are you talking about
the binary symmetric channel (BSC) with a very high error probability?
Or do you deterministically get 9 bit flips in every 10? That channel is
much easier to deal with than you think (think about it again for a
moment).

http://en.wikipedia.org/wiki/Binary_symmetric_channel

To give you an idea: if you are talking about the BSC, you can at least
get a guideline on coding rates from the simple Singleton bound.

http://en.wikipedia.org/wiki/Singleton_bound
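For the BSC reading of the question, both pointers reduce to a couple of
lines of arithmetic; a minimal sketch in Python (the (n, d) block-code
parameters below are made up purely to illustrate the bound):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A BSC with crossover probability 0.9 has the same capacity as one with
# 0.1: the receiver can simply invert every received bit.
capacity = 1 - h2(0.9)        # about 0.531 bits per channel use

# Singleton bound: any (n, k, d) block code satisfies k <= n - d + 1,
# so the rate is at most (n - d + 1) / n.
n, d = 255, 201               # made-up example parameters
k_max = n - d + 1             # 55, i.e. rate at most ~0.216
print(capacity, k_max / n)
```

So even at a 90% crossover rate the channel still carries over half a bit
per use in principle, once the trivial inversion is applied.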
On 1/14/14 10:40 AM, Johannes Bauer wrote:
> Hi group,
>
> when designing an error correcting function it's usually assumed that
> the error rate is relatively small (i.e. the ratio of "good" bits to
> "bad" bits is usually well above 1). However, let's assume we have a
> transmission that is extremely faulty and produces only 1 "good" bit
> every 10 bits on average. What kind of error correction mechanisms would
> be applicable to such a distribution?
seems to me that reed solomon coding is what the Voyager used. and they had something like a 25 watt transmitter and they're billions of kilometers away.
> Let's say we want to be able to
> recover the original bitstream with a 99% probability
see those nice pictures of Neptune? or the small blue dot? it came out better than 99%.
> and still want to keep the overhead to a minimum.
gotta pay for it somewhere. no free lunch.

-- 
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
On Tue, 14 Jan 2014 16:40:12 +0100, Johannes Bauer
<dfnsonfsduifb@gmx.de> wrote:

>Hi group,
>
>when designing an error correcting function it's usually assumed that
>the error rate is relatively small (i.e. the ratio of "good" bits to
>"bad" bits is usually well above 1). However, let's assume we have a
>transmission that is extremely faulty and produces only 1 "good" bit
>every 10 bits on average. What kind of error correction mechanisms would
>be applicable to such a distribution? Let's say we want to be able to
>recover the original bitstream with a 99% probability and still want to
>keep the overhead to a minimum.
You're going to be looking for very, very, very low rate codes, but
they must still be very carefully selected.

Julius pointed you in a good direction, in that finding very low rate
block codes with large distances will be required (assuming random
error distribution).

That's a very unusual application and I don't know if any such codes
have already been developed.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
robert bristow-johnson wrote:
> On 1/14/14 10:40 AM, Johannes Bauer wrote:
>> Hi group,
>>
>> when designing an error correcting function it's usually assumed that
>> the error rate is relatively small (i.e. the ratio of "good" bits to
>> "bad" bits is usually well above 1). However, let's assume we have a
>> transmission that is extremely faulty and produces only 1 "good" bit
>> every 10 bits on average. What kind of error correction mechanisms would
>> be applicable to such a distribution?
>
> seems to me that reed solomon coding is what the Voyager used. and they
> had something like a 25 watt transmitter and they're billions of
> kilometers away.
Yes, but little or no Rayleigh scattering...
>> Let's say we want to be able to
>> recover the original bitstream with a 99% probability
>
> see those nice pictures of Neptune? or the small blue dot? it came out
> better than 99%.
Ayup.
>> and still want to keep the overhead to a minimum.
>
> gotta pay for it somewhere. no free lunch.
-- 
Les Cargill
On Tue, 14 Jan 2014 16:49:44 +0000, Eric Jacobsen wrote:

> On Tue, 14 Jan 2014 16:40:12 +0100, Johannes Bauer
> <dfnsonfsduifb@gmx.de> wrote:
>
>>Hi group,
>>
>>when designing an error correcting function it's usually assumed that
>>the error rate is relatively small (i.e. the ratio of "good" bits to
>>"bad" bits is usually well above 1). However, let's assume we have a
>>transmission that is extremely faulty and produces only 1 "good" bit
>>every 10 bits on average. What kind of error correction mechanisms would
>>be applicable to such a distribution? Let's say we want to be able to
>>recover the original bitstream with a 99% probability and still want to
>>keep the overhead to a minimum.
>
> You're going to be looking for very, very, very low rate codes, but they
> must still be very carefully selected.
>
> Julius pointed you in a good direction, in that finding very low rate
> block codes with large distances will be required (assuming random error
> distribution).
>
> That's a very unusual application and I don't know if any such codes
> have already been developed.
If the errors are truly random, then a good first stage may just be to
average and decimate over a whole bunch of bits (i.e., use a repetitive
code). Eventually you'll end up with something much slower that will be
tractable for conventional FEC techniques.

In a sense, this is what GPS and other "one-bit ADC" spread-spectrum
receivers do: if you look at it from the right angle you can take the
"one-bit ADC" as being a very error-prone bit decision, which is then
added up coherently to a whole bunch of other error-prone bit decisions.
Then the result is compared to some threshold, and voila! you have
implemented a simple repetitive code over a whole bunch of bits.

I suspect that the coding gain won't be a heck of a lot less than some
more comprehensive code, and I know the system complexity _will_ be a
heck of a lot less!

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
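Tim's average-and-threshold idea is easy to simulate. A hedged sketch,
assuming independent random errors and assuming the 90%-error channel
has already been inverted so the effective crossover probability is 0.1
(the repetition factor of 9 is an arbitrary choice):

```python
import random

def repeat_encode(bits, r):
    """Repeat each data bit r times (a rate-1/r repetition code)."""
    return [b for b in bits for _ in range(r)]

def bsc(bits, p, rng):
    """Flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def majority_decode(bits, r):
    """Average and threshold: majority vote over each group of r bits."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

rng = random.Random(0)
data = [rng.randint(0, 1) for _ in range(1000)]
r = 9        # arbitrary odd repetition factor
p = 0.1      # effective crossover after inverting the 90%-error channel
received = bsc(repeat_encode(data, r), p, rng)
decoded = majority_decode(received, r)
errors = sum(a != b for a, b in zip(data, decoded))
print(errors, "residual bit errors out of", len(data))
```

With these numbers the per-bit failure probability after voting is under
0.1%, i.e. already past the 99% recovery target, at a cost of 9x overhead.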
On Tuesday, January 14, 2014 10:40:12 AM UTC-5, Johannes Bauer wrote:
> Hi group,
>
> when designing an error correcting function it's usually assumed that
> the error rate is relatively small (i.e. the ratio of "good" bits to
> "bad" bits is usually well above 1). However, let's assume we have a
> transmission that is extremely faulty and produces only 1 "good" bit
> every 10 bits on average. What kind of error correction mechanisms would
> be applicable to such a distribution? Let's say we want to be able to
> recover the original bitstream with a 99% probability and still want to
> keep the overhead to a minimum.
>
> Best regards,
> Johannes
Are your errors randomly distributed, or do they tend to be bursty?

But if your error rate is really high, for example 90%, can't you just
invert all of the bits so that only 10% are incorrect, and then correct
from there? Often the problem is hardest when half of the bits are
corrupted.

Clay
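Clay's inversion trick in code form; a trivial sketch, where
`normalize_channel` is a hypothetical helper name:

```python
def normalize_channel(bits, error_rate):
    """If more than half the bits are expected to be flipped, invert the
    whole stream; the effective error rate becomes 1 - error_rate."""
    if error_rate > 0.5:
        return [1 - b for b in bits], 1 - error_rate
    return list(bits), error_rate

received = [0, 1, 1, 0, 1]
flipped, p_eff = normalize_channel(received, 0.9)
print(flipped)    # inverted stream, effective error rate ~0.1
```

This only works if the flips are (close to) deterministic or the rate is
known; for a truly symmetric random channel the inversion is just a
relabeling and doesn't change the difficulty.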
Tim Wescott <tim@seemywebsite.really> wrote:
>> On Tue, 14 Jan 2014 16:40:12 +0100, Johannes Bauer >> <dfnsonfsduifb@gmx.de> wrote:
(snip)
>>>when designing an error correcting function it's usually assumed that
>>>the error rate is relatively small (i.e. the ratio of "good" bits to
>>>"bad" bits is usually well above 1). However, let's assume we have a
>>>transmission that is extremely faulty and produces only 1 "good" bit
>>>every 10 bits on average.
(snip)
> If the errors are truly random, then a good first stage may just be to
> average and decimate over a whole bunch of bits (i.e., use a repetitive
> code). Eventually you'll end up with something much slower that will be
> tractable for conventional FEC techniques.
> In a sense, this is what GPS and other "one-bit ADC" spread-spectrum
> receivers do: if you look at it from the right angle you can take the
> "one-bit ADC" as being a very error-prone bit decision, which is then
> added up coherently to a whole bunch of other error-prone bit decisions.
I think coherently is important here...
> Then the result is compared to some threshold, and voila! you have
> implemented a simple repetitive code over a whole bunch of bits.
For a random error distribution, it seems to me that there isn't much
reason to do anything else. The fancier codes are for less random error
models: for CDs, one expects a big piece of dust to cause many
consecutive errors, with large error-free blocks in between.
> I suspect that the coding gain won't be a heck of a lot less than some
> more comprehensive code, and I know the system complexity _will_ be a
> heck of a lot less!
-- 
glen
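The dust-burst error model mentioned above is what interleaving
addresses: spread each codeword across a block so that a burst of
consecutive channel errors becomes isolated errors in many codewords.
A minimal block-interleaver sketch (the function names are my own):

```python
def interleave(bits, rows, cols):
    """Write a rows-by-cols block row-by-row, read it column-by-column,
    so consecutive channel errors land in different rows (codewords)."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Invert interleave(): read the column-ordered stream back row-wise."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))       # stand-ins for bits, to make ordering visible
tx = interleave(data, 3, 4)
# a burst hitting the first 3 transmitted symbols touches 3 different rows
print(tx[:3])                # symbols from rows 0, 1, and 2
```

CD players use exactly this kind of structure (cross-interleaving) in
front of their Reed-Solomon decoders, though the real scheme is more
elaborate than this sketch.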
On Tue, 14 Jan 2014 11:31:51 -0800, clay wrote:

> On Tuesday, January 14, 2014 10:40:12 AM UTC-5, Johannes Bauer wrote:
>> Hi group,
>>
>> when designing an error correcting function it's usually assumed that
>> the error rate is relatively small (i.e. the ratio of "good" bits to
>> "bad" bits is usually well above 1). However, let's assume we have a
>> transmission that is extremely faulty and produces only 1 "good" bit
>> every 10 bits on average. What kind of error correction mechanisms
>> would be applicable to such a distribution? Let's say we want to be
>> able to recover the original bitstream with a 99% probability and
>> still want to keep the overhead to a minimum.
>
> Are your errors randomly distributed or do they tend to be bursty? But
> if your error rate is really high for example 90%, can't you invert all
> of the bits and then only 10% are incorrect? Then just correct from
> there. Often the problem is hardest when half of the bits are corrupted.
Why do I think the OP meant "90% of the bits are random"...

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
On Tuesday, January 14, 2014 4:53:04 PM UTC-6, Tim Wescott wrote:

> Why do I think the OP meant "90% of the bits are random"...
If "90% of the bits are random", are the other 10% guaranteed to be
correct, as well as (on average) half of the "random" bits (45% of the
90%), for a BER of only 45% instead of the 90% claimed? If not, how does
one tell the guaranteed-correct bits from the "random" bits?

Dilip Sarwate
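The two readings give measurably different raw error rates; a quick
simulation (sample size and seed arbitrary) of "90% of bits flipped"
versus "90% of bits replaced by coin flips":

```python
import random

rng = random.Random(1)
n = 100_000
sent = [rng.randint(0, 1) for _ in range(n)]

# Channel A: 90% of bits are flipped (BER 0.9) -- trivially invertible.
chan_a = [b ^ (rng.random() < 0.9) for b in sent]

# Channel B: 90% of bits are replaced by coin flips, 10% pass through
# untouched.  Half of the randomized bits happen to be right anyway,
# so the measured BER is only about 0.9 * 0.5 = 0.45.
chan_b = [rng.randint(0, 1) if rng.random() < 0.9 else b for b in sent]

ber_a = sum(a != b for a, b in zip(sent, chan_a)) / n
ber_b = sum(a != b for a, b in zip(sent, chan_b)) / n
print(ber_a, ber_b)   # roughly 0.9 and 0.45
```

Channel B is the genuinely hard case: a BER near 0.5 with no marker on
which bits survived intact, which is exactly Dilip's objection.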