Reply by Ken Ryan April 9, 2009
makolber@yahoo.com wrote:
> you spoke about "clumps" of errors..
>
> do you know about "interleaving" ?
Yes, I do.  In my case, 4-bit symbols are a natural fit for how my data is being stored and transported.  My error model (how I expect errors will be introduced into the system) will affect any number of bits within the symbol, but is extremely unlikely to cross symbols.  (Sorry I can't be explicit as to what I'm doing.)  As Steve surmised, in my case a uniform distribution of bit errors is entirely the wrong model.

Thanks,

	Ken
Reply by Steve Pope April 1, 2009
<makolber@yahoo.com> wrote:

>you spoke about "clumps" of errors..
>
>do you know about "interleaving" ?
I believe Ken's statement, that the nonbinary code would outperform a binary code if the errors are in short bursts, is true independent of interleaving.

This is actually a fairly important area of code design.  If the errors are bursty, and the first thing you do is interleave, you've actually removed useful information and your coding will now be suboptimal (in the general case).

This also ties back to the notion mentioned a couple threads ago that an AWGN channel is actually the worst possible channel for a given noise energy.

Steve
Reply by Mark April 1, 2009
On Apr 1, 8:20 pm, Ken Ryan <newsr...@leesburg-geeks.org> wrote:
> OK, thanks for the explanation.  I need to think on it a bit - this is
> a learning curve for me!
>
>                 Ken
you spoke about "clumps" of errors..

do you know about "interleaving" ?

Mark
Reply by Ken Ryan April 1, 2009
Steve Pope wrote:
> Usually you can assume that any uncorrectable error pattern
> gives you a random syndrome (out of the 2^16 syndrome
> patterns for your code).  You can calculate how many of
> these syndromes correspond to 0, 1, or 2 errors.  (Making
> sure you only consider valid error locations for the
> shortened code.)  Those would be the misdecodes, while the rest
> of them would be the detectable uncorrectables.
>
> There is a respect in which this assumption is not
> quite exact, but the difference is probably not too
> significant.
>
> Steve
OK, thanks for the explanation.  I need to think on it a bit - this is a learning curve for me!

	Ken
Reply by Steve Pope March 31, 2009
Ken Ryan  <newsryan@leesburg-geeks.org> wrote:

>Steve Pope wrote:
>> Ken Ryan <newsryan@leesburg-geeks.org> wrote:
>>> Is there some way to characterize X% chance of detecting 3-symbol
>>> errors, Y% chance of detecting 4-symbol errors, etc.?
>>
>> Certainly, these percentages (once you define exactly what
>> you're talking about) can be stated exactly based on pure
>> combinatorics.
>Do you happen to know of someplace you can point me where I can learn
>how to do this?
Usually you can assume that any uncorrectable error pattern gives you a random syndrome (out of the 2^16 syndrome patterns for your code).  You can calculate how many of these syndromes correspond to 0, 1, or 2 errors.  (Making sure you only consider valid error locations for the shortened code.)  Those would be the misdecodes, while the rest of them would be the detectable uncorrectables.

There is a respect in which this assumption is not quite exact, but the difference is probably not too significant.

Steve
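A minimal Python sketch of the counting Steve describes, assuming the shortened (12,8) RS code over GF(16) from this thread and a decoder that corrects up to two symbol errors (the parameter names are mine):

from math import comb

n = 12        # symbol positions in the shortened block (valid error locations)
q = 16        # field size, GF(2^4)
checks = 4    # number of check symbols
t = 2         # symbol-error correcting capability

syndromes = q ** checks   # 16^4 = 2^16 possible syndromes

# Syndromes the decoder would interpret as 0, 1, or 2 symbol errors:
# choose the error positions among the n valid locations and a nonzero
# error value (q - 1 choices) for each.
correctable_looking = sum(comb(n, e) * (q - 1) ** e for e in range(t + 1))

# Under the assumption that an uncorrectable pattern produces an
# effectively random syndrome, roughly this fraction of such patterns is
# miscorrected; the rest are flagged as detected-but-uncorrectable.
print(f"correctable-looking syndromes: {correctable_looking} of {syndromes}")
print(f"approximate misdecode fraction: {correctable_looking / syndromes:.3%}")

For these parameters the count works out to 15031 of 65536 syndromes, so a little under a quarter of uncorrectable patterns would be miscorrected under this approximation.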
Reply by Ken Ryan March 31, 2009
Steve Pope wrote:
> Ken Ryan <newsryan@leesburg-geeks.org> wrote:
>
>> Is there some way to characterize X% chance of detecting 3-symbol
>> errors, Y% chance of detecting 4-symbol errors, etc.?
>
> Certainly, these percentages (once you define exactly what
> you're talking about) can be stated exactly based on pure
> combinatorics.
Do you happen to know of someplace you can point me where I can learn how to do this?

Thanks!

	Ken
Reply by Steve Pope March 30, 2009
Ken Ryan  <newsryan@leesburg-geeks.org> wrote:

>Is there some way to characterize X% chance of detecting 3-symbol
>errors, Y% chance of detecting 4-symbol errors, etc.?
Certainly, these percentages (once you define exactly what you're talking about) can be stated exactly based on pure combinatorics.

Steve
Reply by Ken Ryan March 30, 2009
Steve Pope wrote:
> Yes, according to Sloane, there is a nonlinear binary code
> with distance 7 and the same parameters (it would be a shortened
> (63,47) code).  Such a code would correct any three random bit errors.
> The RS would work better only if errors were non-random.
Thanks.  I'll get some scattered single-bit errors, but I need to be able to handle four-bit clumps of errors as well.  It sounds like RS is still what I want.

	Ken
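A quick Python simulation sketch of that trade-off, under assumed parameters: twelve 4-bit symbols per block (the shortened (12,8) code), an RS decoder that handles up to two corrupted symbols, and a "clump" meaning one to four bit flips confined to a single symbol:

import random

N_SYMBOLS = 12          # symbols per (12,8) block, 48 bits total
RS_T = 2                # correctable symbol errors

def symbols_hit(bit_positions):
    """Number of distinct 4-bit symbols touched by the given bit errors."""
    return len({p // 4 for p in bit_positions})

def clumped_errors(n_clumps):
    """Flip 1-4 bits, all inside each of n_clumps randomly chosen symbols."""
    bits = []
    for sym in random.sample(range(N_SYMBOLS), n_clumps):
        k = random.randint(1, 4)
        bits += [sym * 4 + b for b in random.sample(range(4), k)]
    return bits

def scattered_errors(n_bits):
    """Flip n_bits at uniformly random positions in the 48-bit block."""
    return random.sample(range(N_SYMBOLS * 4), n_bits)

trials = 100_000
clump_ok = sum(symbols_hit(clumped_errors(2)) <= RS_T for _ in range(trials))
scatter_ok = sum(symbols_hit(scattered_errors(4)) <= RS_T for _ in range(trials))

print(f"2 symbol-aligned clumps (up to 8 bit errors): "
      f"{clump_ok / trials:.1%} within the 2-symbol budget")
print(f"4 scattered bit errors: "
      f"{scatter_ok / trials:.1%} within the 2-symbol budget")

Two symbol-aligned clumps always stay within the two-symbol correction budget (up to eight bit errors corrected), whereas four bit errors scattered uniformly over the block land in two or fewer symbols only a few percent of the time - which is the sense in which RS plays to clumped errors.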
Reply by Ken Ryan March 30, 2009
dvsarwate@yahoo.com wrote:
 > Yes, you are misunderstanding the  claim that with
 > four check symbols one can detect three errors but
 > not correct them.  (In fact one can even detect four
 > errors but not correct them.)  When one tries to do
 > *both* error correction *and* error detection on the
 > same received word, there is a trade-off: the more that
 > you want to do of one, the less you can do of the other.
 > For example, if you change your decoder so that it
 > corrects one error but does not try and correct two
 > errors (even when it does find a valid error-locator
 > polynomial etc.), you will see that all triple errors are
 > correctly detected.

Vladimir Vassilevsky wrote:
 > This is what I expected.  BTW, for the same block size and rate,
 > there probably could be a better code than shortened RS.



Thank you, Dilip and Vladimir!  I can at least stop spinning my wheels.

Is there some way to characterize X% chance of detecting 3-symbol 
errors, Y% chance of detecting 4-symbol errors, etc.?  Or is running 
test cases and collecting statistics the only feasible way?  (I know
I'm going to be asked this question).


	Ken
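To make the trade-off Dilip describes concrete: four check symbols give the (12,8) shortened RS code minimum distance d = 5, and a bounded-distance decoder that corrects up to t symbol errors is guaranteed to also detect up to t + s errors, provided 2t + s <= d - 1.  A small Python sketch enumerating the options under that standard bound:

d = 5                       # n - k + 1 for the (12,8) shortened RS code
for t in range(d // 2 + 1): # t = 0, 1, 2 correctable symbol errors
    s = (d - 1) - 2 * t     # largest extra detection margin allowed
    print(f"correct up to {t} symbol error(s) -> "
          f"guaranteed to detect up to {t + s} symbol error(s)")

# Prints:
#   correct up to 0 symbol error(s) -> guaranteed to detect up to 4 symbol error(s)
#   correct up to 1 symbol error(s) -> guaranteed to detect up to 3 symbol error(s)
#   correct up to 2 symbol error(s) -> guaranteed to detect up to 2 symbol error(s)

Beyond those guarantees, how often heavier error patterns are still caught is the probabilistic question Steve's syndrome count (earlier in the thread) addresses.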
Reply by Steve Pope March 30, 2009
Vladimir Vassilevsky  <antispam_bogus@hotmail.com> wrote:

>Ken Ryan wrote:
>> I have a Reed-Solomon design which is a shortened (15,11) code
>> (to (12,8)).
>This is what I expected.  BTW, for the same block size and rate, there
>probably could be a better code than shortened RS.
Yes, according to Sloane, there is a nonlinear binary code with distance 7 and the same parameters (it would be a shortened (63,47) code).  Such a code would correct any three random bit errors.  The RS would work better only if errors were non-random.

Steve