Reply by Steve Pope August 9, 2016
<makolber@yahoo.com> replies to my post,

>> Impulse noise is much easier to deal with, or more precisely,
>> for the same raw bit-error rate RS codes (whether soft-decoded
>> or not) perform better when the noise occurs in bursts than
>> if it is random.
>>
>> This is a bit tangential to the OP's question, which is whether
>> methods to deal with bit-level information exist for RS codes.
> is this true?
>
> I thought FEC was NOT effective for burst errors but rather for
> randomly distributed errors, and that interleaving is effective for
> burst errors.
>
> Is that not the case?
It's pretty much not the case, on a couple of levels.

Firstly, the worst-case channel is an AWGN channel. If one knows the errors occur in bursts, then one can use this information when designing an FEC code optimized for that channel, and it will perform better than a code optimized for an AWGN channel.

Secondly, symbol-error-correcting codes such as RS codes specifically work better when the errors occur in bursts. The normalized coding gain of a typical hard-decision-decoded RS code improves by a couple of dB if the errors occur in 2-bit bursts as opposed to isolated bit errors.

All interleaving does from a channel-coding perspective is remove information from the channel that might be useful to a decoder, but interleaving does help in situations where the particular decoder type cannot make use of such information in the first place and is particularly sensitive to bursts. So, situationally, if one has pre-selected an FEC code and decoder method and is then unexpectedly faced with a burst channel, it can sometimes be true that the burst channel now looks worse than the AWGN channel. In this sense, your scenario above can occur.

Interleaving also has other uses, such as dealing with multipath, that are not central to the channel-coding issue.

Steve
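To make the burst-vs-random point concrete, here is a minimal Python sketch; the codeword size, error count, and burst length are arbitrary choices of mine, not anything from Steve's post. With the raw bit-error count held fixed, 2-bit bursts land in far fewer 8-bit symbols than scattered errors, and an RS decoder only pays per corrupted symbol.

```python
import random

N_BITS = 204 * 8       # one RS(204,188) codeword viewed as a bit stream
N_BIT_ERRORS = 24      # same raw bit-error count in both cases

def corrupted_symbols(error_positions):
    """Count the distinct 8-bit symbols touched by the given bit errors."""
    return len({pos // 8 for pos in error_positions})

random.seed(0)

# Case 1: isolated bit errors, scattered uniformly over the codeword.
scattered = random.sample(range(N_BITS), N_BIT_ERRORS)

# Case 2: the same number of bit errors, but arriving in 2-bit bursts.
starts = random.sample(range(N_BITS - 1), N_BIT_ERRORS // 2)
bursty = [b for s in starts for b in (s, s + 1)]

print("symbols hit (scattered):   ", corrupted_symbols(scattered))
print("symbols hit (2-bit bursts):", corrupted_symbols(bursty))
# Typically the burst case hits roughly half as many symbols, so the
# same (n-k)/2 symbol-correcting power goes about twice as far.
```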
Reply by Eric Jacobsen August 9, 2016
On Tue, 9 Aug 2016 11:37:18 -0700 (PDT), makolber@yahoo.com wrote:

> >> Impulse noise is much easier to deal with, or more precisely,
> >> for the same raw bit-error rate RS codes (whether soft-decoded
> >> or not) perform better when the noise occurs in bursts than
> >> if it is random.
> >>
> >> This is a bit tangential to the OP's question, which is whether
> >> methods to deal with bit-level information exist for RS codes.
> >>
> >> Steve
>
> is this true?
>
> I thought FEC was NOT effective for burst errors but rather for
> randomly distributed errors, and that interleaving is effective for
> burst errors.
>
> Is that not the case?
Algebraic codes are generally characterized by (n,k), where n is the length of the codeword and k is the length of the data block within it. So there are n-k symbols added as "parity" to the data block to make a codeword that facilitates the error correction. In the case of Reed-Solomon, the "symbols" are bytes, and RS codes can generally correct (n-k)/2 symbol errors.

For example, a popular RS code configuration is (n,k) = (204,188), so 16 parity bytes are added to a 188-byte data block to make a 204-byte codeword, which can correct 16/2 = 8 symbol errors. In other words, any eight bytes in the codeword can contain errors and the decoder can correct them. It doesn't matter how many bits are in error in a bad byte; it can be one or all eight.

So, from this perspective, if the bytes are transmitted sequentially (i.e., not interleaved), a string of 8x8 = 64 bit errors can be corrected by a (204,188) RS code, provided the burst is aligned to byte boundaries (an unaligned 64-bit burst can span nine bytes). However, randomly spread bit errors can also be corrected, and as long as no more than 8 bytes per codeword contain errors the code will correct them. So it's not incorrect to say that the RS code can correct strings of errors, but it's really just the nature of the code.

A common legacy concatenated channel code uses a convolutional inner code with a Viterbi decoder and an RS outer code. The idea is that the RS code can clean up both the dribbly random output errors that Viterbi can leave behind as well as the long burst errors that Viterbi decoders create when a string of input errors makes the decoder choose the wrong path through the trellis. Since the RS code's ability to correct burst errors is always limited to (n-k)/2 symbols, it is common (well, pretty much universal) to put an interleaver in between the two codes so that clumped errors get spread out into more randomly distributed errors spread across multiple RS codewords.

So it both is and isn't true. An algebraic code can correct a limited number of burst errors, but generally interleaving is used to spread the long bursts out over multiple codewords.
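As a quick sanity check on the arithmetic above, here is a small sketch; the helper names are my own invention, not tied to any RS library.

```python
# RS(204,188): codeword bytes, data bytes, correctable byte errors.
n, k = 204, 188
t = (n - k) // 2
print(f"RS({n},{k}) corrects up to {t} byte errors per codeword")

def bytes_touched(first_bit, burst_len):
    """How many 8-bit symbols does a contiguous bit burst corrupt?"""
    last_bit = first_bit + burst_len - 1
    return last_bit // 8 - first_bit // 8 + 1

print(bytes_touched(0, 64))  # byte-aligned 64-bit burst -> 8 symbols: correctable
print(bytes_touched(3, 64))  # unaligned 64-bit burst -> 9 symbols: too many
```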
Reply by makolber@yahoo.com August 9, 2016
> Impulse noise is much easier to deal with, or more precisely,
> for the same raw bit-error rate RS codes (whether soft-decoded
> or not) perform better when the noise occurs in bursts than
> if it is random.
>
> This is a bit tangential to the OP's question, which is whether
> methods to deal with bit-level information exist for RS codes.
>
> Steve
is this true?

I thought FEC was NOT effective for burst errors but rather for randomly distributed errors, and that interleaving is effective for burst errors.

Is that not the case?

Mark
Reply by Steve Pope August 9, 2016
<herrmannsfeldt@gmail.com> wrote:

> On Tuesday, August 9, 2016 at 12:34:04 AM UTC-7, Steve Pope wrote:
>> For any such encoding, one can use a MAP method to compute a
>> probability for each of the possible values of the symbol, and this
>> probability set is the information required by the soft-decision RS
>> decoder.
> OK, but "close" depends on the noise model that you have.
>
> If you have Gaussian noise, then "close" might make sense, whereas
> for impulse noise it might not. If the receiver can figure that out,
> I suppose it can adjust accordingly.
Impulse noise is much easier to deal with, or more precisely, for the same raw bit-error rate RS codes (whether soft-decoded or not) perform better when the noise occurs in bursts than if it is random.

This is a bit tangential to the OP's question, which is whether methods to deal with bit-level information exist for RS codes.

Steve
Reply by herrmannsfeldt@gmail.com August 9, 2016
On Tuesday, August 9, 2016 at 12:34:04 AM UTC-7, Steve Pope wrote:
> Tim Wescott <seemywebsite@myfooter.really> wrote:
> > It is only like m-PSK in that some things are closer than others
> > (I'm not trying to be coy -- it's that I don't think I can explain
> > it well). But it may be amenable to representing it as some sort of
> > a Gray code.
(snip)
> For any such encoding, one can use a MAP method to compute a
> probability for each of the possible values of the symbol, and this
> probability set is the information required by the soft-decision RS
> decoder.
(snip)

OK, but "close" depends on the noise model that you have.

If you have Gaussian noise, then "close" might make sense, whereas for impulse noise it might not. If the receiver can figure that out, I suppose it can adjust accordingly.

Or for CDs, where a large number of consecutive bits might be lost due to a dust particle on the disk. "Close" doesn't help much there.
Reply by Steve Pope August 9, 2016
Tim Wescott  <seemywebsite@myfooter.really> wrote:

> It is only like m-PSK in that some things are closer than others (I'm
> not trying to be coy -- it's that I don't think I can explain it
> well). But it may be amenable to representing it as some sort of a
> Gray code.
>
> It's 5b/6b, or maybe 8b/10b, encoding to keep the DC content of a
> baseband channel at zero. So there's no systematic mapping back to
> binary. But there may be some sort of mapping (like Gray code) that
> makes it work naturally.
For any such encoding, one can use a MAP method to compute a probability for each of the possible values of the symbol, and this probability set is the information required by the soft-decision RS decoder.

In the above, a "symbol" is an 8-bit symbol for an 8-bit RS code, but as you mention it may be further encoded into 10 bits (or something) for transmission on the channel. Set up correctly, this line encoding can augment the coding gain.

Steve
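A minimal sketch of what such a MAP front end could look like, under assumptions Steve doesn't state here: each RS symbol is carried by m independent binary-antipodal channel bits in AWGN, so the symbol posterior (with equiprobable symbols) is a normalized product of per-bit likelihoods. Three-bit symbols are used below just to keep the printout small.

```python
import math

def symbol_probabilities(rx, sigma, m):
    """rx: m received soft values for one symbol (+1/-1 signaling).
    Returns P(symbol = s | rx) for all 2**m symbol values."""
    probs = []
    for s in range(2 ** m):
        # Likelihood of the observation given each hypothesized bit
        # pattern, MSB first, assuming independent Gaussian noise.
        like = 1.0
        for i in range(m):
            bit = (s >> (m - 1 - i)) & 1
            tx = 1.0 if bit else -1.0
            like *= math.exp(-(rx[i] - tx) ** 2 / (2 * sigma ** 2))
        probs.append(like)
    total = sum(probs)
    return [p / total for p in probs]  # normalize to a posterior

# Example: 3-bit symbols; the middle bit is received near the threshold,
# so two symbol hypotheses end up with comparable probability.
post = symbol_probabilities([0.9, 0.1, -1.1], sigma=0.5, m=3)
best = max(range(8), key=post.__getitem__)
print(best, [round(p, 3) for p in post])
```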
Reply by Tim Wescott August 8, 2016
On Mon, 08 Aug 2016 18:02:25 +0000, Eric Jacobsen wrote:

> On Mon, 08 Aug 2016 11:42:48 -0500, Tim Wescott
> <seemywebsite@myfooter.really> wrote:
>
>> On Mon, 08 Aug 2016 16:23:57 +0000, Eric Jacobsen wrote:
>>
>>> On Sun, 07 Aug 2016 18:03:09 -0500, Tim Wescott <tim@seemywebsite.com>
>>> wrote:
>>>
>>>> On Sat, 06 Aug 2016 14:40:12 +0000, Eric Jacobsen wrote:
>>>>
>>>>> On Sat, 6 Aug 2016 04:50:20 +0000 (UTC), spope33@speedymail.org
>>>>> (Steve Pope) wrote:
>>>>>
>>>>>> Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>>>>>>
>>>>>>> On Fri, 05 Aug 2016 16:43:21 -0500, Tim Wescott
>>>>>>
>>>>>>>> Now -- if I'm decoding this with a Reed-Solomon decoder with hard
>>>>>>>> decisions, does it make use of this "closer is better" property,
>>>>>>>> or does it only "understand" that a word is in error?
>>>>>>
>>>>>>> If the 2, 3, 4, 5, 6 are symbols from the alphabet, then it doesn't
>>>>>>> matter, i.e., a symbol error is a symbol error. e.g., the
>>>>>>> typical RS codes use 8-bit symbols. An errored symbol can have
>>>>>>> one-, two-, three- or any number of bit errors and not affect the
>>>>>>> correction performance. In other words, a symbol error is a symbol
>>>>>>> error. It doesn't matter how many bits are wrong in the symbol.
>>>>>>
>>>>>> Yep
>>>>>>
>>>>>>> If you want the gnarliest of details, summon Prof. Sarwate out of
>>>>>>> retirement. I don't know if he reads stuff here any more or not.
>>>>>>
>>>>>> That's interesting, most people spend more time on the newsgroup
>>>>>> after retirement, not less. :-)
>>>>>>
>>>>>> Back to the question, a number of researchers have figured out
>>>>>> soft-decision Reed-Solomon decoding methods that should, as a
>>>>>> boundary case, address Tim's scenario.
>>>>>>
>>>>>> Steve
>>>>>
>>>>> IIRC the soft-decision RS decoders still don't make a distinction
>>>>> between alphabet symbols that may be adjacent in order, just their
>>>>> likelihood of being correct. In other words, an MSB error in the
>>>>> symbol doesn't necessarily affect the likelihood any more than an
>>>>> LSB error.
>>>>
>>>> Well, yes, but the symbol decoder can be aware of what's adjacent in
>>>> the transmission space and what's not. So if A is adjacent to B but
>>>> not D, then if you receive a questionable A the symbol decoder can
>>>> report (essentially) "probably A, maybe B, definitely not D".
>>>
>>> Adjacent in what way, the previous symbol? The adjacent symbol in
>>> the codeword? Or are you assuming that the symbols are not
>>> equiprobable? Or that there are conditional probabilities? Any such
>>> assumptions can certainly be used in a decoder, but also mean that
>>> the code and payload are not generalized.
>>
>> Adjacent in the transmission space -- the easiest example I can think
>> of is m-PSK, where m is greater than three.
>
> Do you mean like a Gray-coded constellation?
>
>>> Typical RS decoders are not based on those assumptions. Such
>>> assumptions might be better applied to a different type of decoder
>>> with a more amenable decoding algorithm.
>>
>> Suggestions?
>
> It sounds like you're suggesting Gray-coding a constellation and using
> the slicer distance metrics for the soft decision, but I'm not quite
> sure yet. That's a very practical thing to do and is used in many
> effective decoding techniques. There are a number of different kinds
> of soft-input RS decoders, and I suspect that there are soft-RS
> algorithms out there that are compatible with using soft metrics that
> are generated that way.
>
> I've never used a soft-input RS in a practical system, so I don't have
> any personal experience other than the usual research and evaluations.
> Quickly perusing a few papers seems to indicate that there are methods
> that use soft-bit reliabilities as well as computed soft-symbol
> reliabilities. A hallmark of many of these techniques is very high
> computational complexity, especially as the length of the codeword
> increases.
>
> Not knowing what your constraints or goals are or what the system looks
> like, I can throw out that Gray coding is exploited in most modulation
> techniques, and things like Trellis-Coded Modulation (TCM) exploit it
> in the decoder. Bit-Interleaved Coded Modulation (BICM) kind of throws
> that information away in the decoder and generally outperforms TCM.
> Naturally the breadth of the complexity tradeoffs over the possible
> solutions is large.
It is only like m-PSK in that some things are closer than others (I'm not trying to be coy -- it's that I don't think I can explain it well). But it may be amenable to representing it as some sort of a Gray code.

It's 5b/6b, or maybe 8b/10b, encoding to keep the DC content of a baseband channel at zero. So there's no systematic mapping back to binary. But there may be some sort of mapping (like Gray code) that makes it work naturally.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
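As a side note on why such line codes have no systematic mapping back to binary (my own illustration, not from the thread): only 20 of the 64 six-bit words are DC-balanced, which is fewer than the 32 five-bit inputs, so codes in the 8b/10b family assign some inputs a complementary pair of words and pick between them using the running disparity.

```python
# Count the candidate 6-bit codewords for a 5b/6b-style sub-block.
balanced = [w for w in range(64) if bin(w).count("1") == 3]   # disparity 0
heavy    = [w for w in range(64) if bin(w).count("1") == 4]   # disparity +2

print(len(balanced), "balanced words for 32 five-bit inputs")      # 20
print(len(heavy), "disparity +2 words (each has a -2 complement)")  # 15
```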
Reply by August 8, 2016
On Monday, August 8, 2016 at 9:23:55 AM UTC-7, Eric Jacobsen wrote:
> On Sun, 07 Aug 2016 18:03:09 -0500, Tim Wescott <tim@seemywebsite.com>
> wrote:
(snip)
> >>>>> Now -- if I'm decoding this with a Reed-Solomon decoder with hard
> >>>>> decisions, does it make use of this "closer is better" property,
> >>>>> or does it only "understand" that a word is in error?
(snip)
> > Well, yes, but the symbol decoder can be aware of what's adjacent in
> > the transmission space and what's not. So if A is adjacent to B but
> > not D, then if you receive a questionable A the symbol decoder can
> > report (essentially) "probably A, maybe B, definitely not D".
> Adjacent in what way, the previous symbol? The adjacent symbol in
> the codeword? Or are you assuming that the symbols are not
> equiprobable? Or that there are conditional probabilities? Any such
> assumptions can certainly be used in a decoder, but also mean that
> the code and payload are not generalized.
Yes. For a given amount of modulation/demodulation hardware and signal processing ability, what is the best code you can use?
> Typical RS decoders are not based on those assumptions. Such
> assumptions might be better applied to a different type of decoder
> with a more amenable decoding algorithm.
Seems to me that if you know the S/N ratio, you might be in better shape to decide how far off a symbol could be. I know cable modems keep track of the S/N for the incoming signal.

Though you would also want to know the spectral and time characteristics of the noise. You might have impulse noise (such as switching inductive loads) that every once in a while gives a symbol nowhere close. You could also have uniform white noise that sometimes misses by one symbol. Both could have the same RMS S/N ratio.
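A small sketch of that last point (all parameters here are arbitrary choices of mine): two noise processes with identical RMS power, one Gaussian and one impulsive, behave very differently at a unit-spaced decision boundary.

```python
import math
import random

random.seed(1)
N = 100_000
rms = 0.3                        # equal RMS power for both noise models

gaussian = [random.gauss(0.0, rms) for _ in range(N)]

# Impulse model: silent most of the time, occasionally a huge hit,
# scaled so the overall RMS matches the Gaussian case.
p_hit = 0.001
amp = rms / math.sqrt(p_hit)
impulse = [amp * random.choice((-1, 1)) if random.random() < p_hit else 0.0
           for _ in range(N)]

def miss_rate(noise, threshold=0.5):
    """Fraction of symbols pushed past a unit-spaced decision boundary."""
    return sum(abs(x) > threshold for x in noise) / len(noise)

print("RMS   gaussian: %.3f  impulse: %.3f" %
      (math.sqrt(sum(x * x for x in gaussian) / N),
       math.sqrt(sum(x * x for x in impulse) / N)))
print("miss  gaussian: %.5f  impulse: %.5f" %
      (miss_rate(gaussian), miss_rate(impulse)))
# Gaussian noise causes frequent near-misses into an adjacent decision
# region; impulse noise is clean almost always, but when it hits, the
# received symbol lands "nowhere close".
```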
Reply by Steve Pope August 8, 2016
I believe soft-input RS decoders usually utilize a per-symbol
probability input, that can be constructed however one wishes
based on constellation and other factors.

Steve
Reply by Eric Jacobsen August 8, 2016
On Mon, 08 Aug 2016 11:42:48 -0500, Tim Wescott
<seemywebsite@myfooter.really> wrote:

> On Mon, 08 Aug 2016 16:23:57 +0000, Eric Jacobsen wrote:
>
>> On Sun, 07 Aug 2016 18:03:09 -0500, Tim Wescott <tim@seemywebsite.com>
>> wrote:
>>
>>> On Sat, 06 Aug 2016 14:40:12 +0000, Eric Jacobsen wrote:
>>>
>>>> On Sat, 6 Aug 2016 04:50:20 +0000 (UTC), spope33@speedymail.org
>>>> (Steve Pope) wrote:
>>>>
>>>>> Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>>>>>
>>>>>> On Fri, 05 Aug 2016 16:43:21 -0500, Tim Wescott
>>>>>
>>>>>>> Now -- if I'm decoding this with a Reed-Solomon decoder with hard
>>>>>>> decisions, does it make use of this "closer is better" property,
>>>>>>> or does it only "understand" that a word is in error?
>>>>>
>>>>>> If the 2, 3, 4, 5, 6 are symbols from the alphabet, then it doesn't
>>>>>> matter, i.e., a symbol error is a symbol error. e.g., the typical
>>>>>> RS codes use 8-bit symbols. An errored symbol can have one-, two-,
>>>>>> three- or any number of bit errors and not affect the correction
>>>>>> performance. In other words, a symbol error is a symbol error. It
>>>>>> doesn't matter how many bits are wrong in the symbol.
>>>>>
>>>>> Yep
>>>>>
>>>>>> If you want the gnarliest of details, summon Prof. Sarwate out of
>>>>>> retirement. I don't know if he reads stuff here any more or not.
>>>>>
>>>>> That's interesting, most people spend more time on the newsgroup
>>>>> after retirement, not less. :-)
>>>>>
>>>>> Back to the question, a number of researchers have figured out
>>>>> soft-decision Reed-Solomon decoding methods that should, as a
>>>>> boundary case, address Tim's scenario.
>>>>>
>>>>> Steve
>>>>
>>>> IIRC the soft-decision RS decoders still don't make a distinction
>>>> between alphabet symbols that may be adjacent in order, just their
>>>> likelihood of being correct. In other words, an MSB error in the
>>>> symbol doesn't necessarily affect the likelihood any more than an
>>>> LSB error.
>>>
>>> Well, yes, but the symbol decoder can be aware of what's adjacent in
>>> the transmission space and what's not. So if A is adjacent to B but
>>> not D, then if you receive a questionable A the symbol decoder can
>>> report (essentially) "probably A, maybe B, definitely not D".
>>
>> Adjacent in what way, the previous symbol? The adjacent symbol in
>> the codeword? Or are you assuming that the symbols are not
>> equiprobable? Or that there are conditional probabilities? Any such
>> assumptions can certainly be used in a decoder, but also mean that
>> the code and payload are not generalized.
>
> Adjacent in the transmission space -- the easiest example I can think
> of is m-PSK, where m is greater than three.
Do you mean like a Gray-coded constellation?
>> Typical RS decoders are not based on those assumptions. Such
>> assumptions might be better applied to a different type of decoder
>> with a more amenable decoding algorithm.
>
> Suggestions?
It sounds like you're suggesting Gray-coding a constellation and using the slicer distance metrics for the soft decision, but I'm not quite sure yet. That's a very practical thing to do and is used in many effective decoding techniques. There are a number of different kinds of soft-input RS decoders, and I suspect that there are soft-RS algorithms out there that are compatible with using soft metrics that are generated that way.

I've never used a soft-input RS in a practical system, so I don't have any personal experience other than the usual research and evaluations. Quickly perusing a few papers seems to indicate that there are methods that use soft-bit reliabilities as well as computed soft-symbol reliabilities. A hallmark of many of these techniques is very high computational complexity, especially as the length of the codeword increases.

Not knowing what your constraints or goals are or what the system looks like, I can throw out that Gray coding is exploited in most modulation techniques, and things like Trellis-Coded Modulation (TCM) exploit it in the decoder. Bit-Interleaved Coded Modulation (BICM) kind of throws that information away in the decoder and generally outperforms TCM. Naturally the breadth of the complexity tradeoffs over the possible solutions is large.
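For concreteness, here is a sketch of the "Gray-coded constellation plus slicer distance metrics" idea, using Gray-labeled 8-PSK and the common max-log LLR approximation. This is a generic illustration under my own assumptions (the particular Gray labeling, the noise model), not a decoder from any of the papers Eric mentions.

```python
import cmath
import math

M = 3                                # bits per 8-PSK symbol
GRAY = [0, 1, 3, 2, 6, 7, 5, 4]      # Gray labels around the circle
POINTS = {GRAY[i]: cmath.exp(2j * math.pi * i / 8) for i in range(8)}

def soft_bits(rx, sigma2):
    """Max-log LLR for each of the 3 bits of one received 8-PSK symbol.
    Positive LLR favors bit value 0, negative favors bit value 1."""
    llrs = []
    for b in range(M):
        # Nearest constellation point with this bit = 0, and with bit = 1.
        d0 = min(abs(rx - pt) ** 2 for lab, pt in POINTS.items()
                 if not (lab >> (M - 1 - b)) & 1)
        d1 = min(abs(rx - pt) ** 2 for lab, pt in POINTS.items()
                 if (lab >> (M - 1 - b)) & 1)
        llrs.append((d1 - d0) / sigma2)   # scaling assumes Gaussian noise
    return llrs

# A received point close to the label-0 constellation point: all three
# soft bits should come out strongly positive (i.e., confidently 0).
print([round(l, 2) for l in soft_bits(1.02 + 0.05j, sigma2=0.1)])
```

Soft bit metrics like these can either be fed to a bit-level soft decoder directly or combined into per-symbol reliabilities for a soft-input RS decoder, as discussed earlier in the thread.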