Reed-Solomon vs. convolutional coding

Started by JAlbertoDJ September 30, 2009
I have several doubts about these coding schemes.

It is well known that Reed-Solomon is a good code against burst errors.

It is also known that Reed-Solomon concatenated with convolutional coding
plus interleaving performs very well.

But I want to compare just a plain Reed-Solomon code against a
convolutional code with a good interleaver.

I think that if you add an interleaver/de-interleaver, then you should
not have problems with burst errors even with convolutional coding. In
that sense, I would like to know which is better.

We can take RS(255,223) (or another) as the Reed-Solomon code, and the
popular rate-1/2, K=7 NASA standard as the convolutional code.

Does anybody have plots of Pb versus Eb/N0?
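[Editorial sketch, not from the thread: before looking at Eb/N0 curves, it helps to pin down the basic parameters of the two candidates. The snippet below only computes rates and error-correcting capability; it is not a performance simulation.]

```python
# Basic parameters of the two candidate codes (illustrative sketch only,
# not a simulation of their Eb/N0 vs Pb performance).

def rs_params(n, k):
    """Rate and symbol-error-correcting capability t of an RS(n, k) code."""
    return k / n, (n - k) // 2

rate_rs, t_rs = rs_params(255, 223)
print(f"RS(255,223): rate {rate_rs:.3f}, corrects up to {t_rs} symbol errors")
# Symbols are 8 bits (GF(2^8)), so those 16 symbols can absorb a long
# burst -- up to 128 bit errors if they cluster inside the symbols.

# The "NASA standard" convolutional code: rate 1/2, constraint length K = 7
# (generator polynomials 171/133 octal), Viterbi-decoded.
print("Convolutional: rate 0.5, K = 7")
```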
JAlbertoDJ wrote:
> Does anybody have plots of Pb versus Eb/N0?
Any good textbook on channel coding, e.g., Lin & Costello, Error Control Coding.

Laurent
I had asked the same question a few months back, I guess; search this forum.

However, I wanted to compare like with like. What you are doing is not fair: your RS code is not rate 1/2, whereas your convolutional code is.

As for your question: a rate-1/2 convolutional code with K=7 will perform better than a rate-1/2 RS code such as RS(255,127) on an AWGN channel. That is what I think. Under other channel conditions and SNRs, it depends on other factors.

Hope this helps.

Chintan
There is one interesting theoretical point embedded in this
question, which is that if you deal with burst errors by
interleaving them so that they appear random, you are
discarding information and, therefore, approaching the
coding problem suboptimally.

This may not tilt things solidly towards Reed-Solomon codes,
depending upon all the other usual factors; but it's
something to consider.

Steve
Could you explain more about "discarding information"? What I am saying is: in transmission, interleave the order of the symbols after the convolutional encoder. Then at reception, first de-interleave the symbols and afterwards run the Viterbi decoder. Please explain why this method is suboptimal.
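[To make the scheme described above concrete, here is a minimal Python sketch of a simple rows-by-columns block interleaver -- a toy stand-in for whatever interleaver a real system would use. Writing row by row and reading column by column means a burst of consecutive channel errors comes out of the de-interleaver as isolated errors spaced a full row apart.]

```python
def interleave(symbols, rows, cols):
    """Write row by row into a rows x cols array, read out column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse: write column by column, read out row by row."""
    assert len(symbols) == rows * cols
    out = [None] * (rows * cols)
    for i, s in enumerate(symbols):
        c, r = divmod(i, rows)
        out[r * cols + c] = s
    return out

rows, cols = 4, 6
data = list(range(rows * cols))      # stand-in for convolutional encoder output
tx = interleave(data, rows, cols)

# A burst corrupts 4 consecutive transmitted symbols:
rx = tx[:]
for i in range(4):
    rx[i] = 'X'

# After de-interleaving, the burst is dispersed: the errors land `cols`
# positions apart, which is what lets a Viterbi decoder see them as
# isolated random errors instead of one long burst.
received = deinterleave(rx, rows, cols)
print([i for i, s in enumerate(received) if s == 'X'])  # -> [0, 6, 12, 18]
```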

Steve Pope wrote:

> There is one interesting theoretical point embedded in this
> question, which is that if you deal with burst errors by
> interleaving them so that they appear random, you are
> discarding information and, therefore, approaching the
> coding problem suboptimally.
If we know the distribution of errors, we can design a code which makes use of that distribution; interleaving is a crude way to do that. Going the other way, i.e. designing a decoding algorithm for a given code so that it would be optimal for the particular error distribution, seems to be a more difficult problem.
> This may not tilt things solidly towards Reed-Solomon codes,
> depending upon all the other usual factors; but it's
> something to consider.
GF(2^n) codes are suboptimal for the purpose of burst correction too: if errors appear in bursts, that doesn't mean all of the bits in the burst are corrupt.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
> Rate 1/2 conv code with K=7 will perform better than
> rate 1/2 RS code of (255,127) on AWGN channel.
That is true; then we should compare, for example, RS(31,15) against the K=7, rate-1/2 code. Also, for this comparison, occupied bandwidth would not be a problem.

The thing is that yesterday I read something about the Voyager missions. It turns out that at Pe = 0.005 there is almost no difference (only 0.2 dB) between Viterbi-decoded K=7, rate 1/2 alone and RS(255,223) concatenated with that same Viterbi K=7, rate 1/2. Only at low bit error probabilities is the concatenated system better than the non-concatenated one. For example, for uncompressed images Voyager did not use the concatenated code. For compressed images the Pb requirement was different, Pb = 1e-5, and in that case Voyager transmitted with the concatenated system.

In my case, a poor Pb = 0.01 is enough for my system, so I am looking for a channel code that is optimum at that Pb. Now I know that RS concatenated with Viterbi does not do well in that situation, and I suppose a simple RS code doesn't either. I am also reading about turbo codes, but they are only better than convolutional codes for long blocks of bits; with short lengths a turbo code is worse than a simple convolutional code. Is that right?
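[A sketch of the fairer comparison suggested above, under the idealized assumptions of hard-decision bounded-distance RS decoding and independent symbol errors -- real channels and soft-decision decoders will differ.]

```python
from math import comb

def rs_rate_and_t(n, k):
    """Rate and symbol-error-correcting capability t of an RS(n, k) code."""
    return k / n, (n - k) // 2

def rs_decoder_failure(n, k, ps):
    """Probability that more than t = (n-k)//2 of the n symbols are in error,
    i.e. that a bounded-distance RS decoder cannot correct the block,
    assuming independent symbol errors with probability ps."""
    t = (n - k) // 2
    return sum(comb(n, i) * ps**i * (1 - ps)**(n - i) for i in range(t + 1, n + 1))

rate, t = rs_rate_and_t(31, 15)
print(f"RS(31,15): rate {rate:.3f} (close to 1/2), corrects t = {t} symbol errors")
for ps in (0.05, 0.1, 0.2):
    print(f"  symbol error rate {ps}: block failure probability "
          f"{rs_decoder_failure(31, 15, ps):.2e}")
```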

JAlbertoDJ wrote:


> In my case, a poor Pb=0.01 is enough for my system. So, i am looking for a
> channel code optimum for that Pb.
If the target bit error rate is as high as 0.01, you are not going to gain much by using any reasonable coding. You'd be better off using direct uncoded modulation, especially once you account for the modem losses.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
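[To put a rough number on that point, a sketch under the usual BPSK-over-AWGN assumption: uncoded BPSK already reaches Pb = 1e-2 at about 4.3 dB Eb/N0, so there is little room for a code's gain to exceed its rate loss and modem losses at such a relaxed target.]

```python
from math import erfc, sqrt

def bpsk_ber(ebno_db):
    """Uncoded BPSK on AWGN: Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(10 ** (ebno_db / 10)))

def ebno_needed(target_ber, lo=-5.0, hi=15.0):
    """Bisect for the Eb/N0 (in dB) where the uncoded BER equals target_ber."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if bpsk_ber(mid) > target_ber:
            lo = mid          # BER still too high: need more Eb/N0
        else:
            hi = mid
    return (lo + hi) / 2

print(f"Eb/N0 for uncoded Pb = 1e-2: {ebno_needed(1e-2):.2f} dB")  # about 4.3 dB
print(f"Eb/N0 for uncoded Pb = 1e-5: {ebno_needed(1e-5):.2f} dB")  # about 9.6 dB
```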
Vladimir Vassilevsky   wrote:

> If we know the distribution of errors, we can design a code which makes
> use of that distribution.
Correct
> Interleaving is a crude way to do that.
I'm not sure it accomplishes this at all; if after interleaving the errors are in random locations, then we have lost information.

However, it could be argued that after interleaving the errors are separated by a more-than-random amount, and the target convolutional code takes advantage of this. I've never been quite convinced it works out that way.
> Going the other way, i.e. designing a decoding algorithm for a given
> code so it would be optimal for the particular error distribution, seems
> to be a more difficult problem.
It's often easy to show that on a random-error channel the convolutional code is closer to capacity, whereas on a channel exhibiting (for example) mostly 2-bit burst errors, the RS code outperforms the convolutional. What's difficult is computing capacity (and, as you state, optimal coding) for these non-random channels.

Steve
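[One way to make the capacity claim concrete with a toy model (my construction, not a claim about any particular real channel): take a channel whose bits come in fixed pairs, each pair flipped whole with probability p. Its average BER equals p, but the noise carries only H(p) bits of entropy per two channel uses, so its capacity, 1 - H(p)/2, exceeds the BSC's 1 - H(p).]

```python
from math import log2

def h2(p):
    """Binary entropy function H(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.01  # average bit error rate, the same for both channels

# Random (iid) errors: the classic binary symmetric channel.
c_bsc = 1 - h2(p)

# Toy burst channel: errors always arrive as aligned 2-bit bursts, so the
# noise process has half the entropy per bit and the capacity is higher.
c_pair = 1 - h2(p) / 2

print(f"BSC capacity at BER {p}:        {c_bsc:.4f} bits/use")
print(f"Paired-burst channel, same BER: {c_pair:.4f} bits/use")
```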
Hi. Sorry to ask, but can you please explain why, with interleaving, the information is lost? What I understand is that with iterative decoding and all that stuff we can achieve pretty low bit error rates. So where do we lose the information?

Thanks,
Chintan
dvsarwate@yahoo.com wrote:

   ...

> For about twenty years after its founding, Saugor University had
> on its books, and on the transcripts/mark-sheets issued, a
> graduate course titled Unclear Physics. ...
:-)

Jerry
--
Engineering is the art of making what you want from things you can get.
On Oct 1, 11:46 am, Jerry Avins wrote:

> Many people say "nucular" when they mean "nuclear". It's funny, though.
> Those same people don't say "uncular" when they mean "unclear", even
> though the only difference is the order of the first two letters.
In the mid-1930s, my grandfather, who was a government official in India, was told by the government, ".................... is making a huge donation to establish a new university, and we need a complete set of documents: University Constitution, Bylaws, Degree Requirements, Curricula, Course Syllabi, etc. in two weeks' time."

Faced with an impossible task, he arranged for a bunch of typists to re-type the corresponding documents for Nagpur University and merely substitute "Saugor University" wherever a document said "Nagpur University", and this got the job done in a hurry as was needed by the government.

For about twenty years after its founding, Saugor University had on its books, and on the transcripts/mark-sheets issued, a graduate course titled Unclear Physics.... None of the faculty noticed, and all of the students thought the course title described the course perfectly.

Dilip Sarwate
On 2009-10-01 13:41:45 -0300, Jerry Avins  said:

> [earlier burst-noise discussion snipped]
>
> In a masters-level course I took at Rutgers, the professor -- yes, a
> full professor -- persisted in talking about casual circuits. He
> insisted that causal wasn't a word.
>
> Jerry
He must have been a colleague of another professor I heard of when I was a summer student. It seems the speaker/researcher had used "miserable" functions as part of his analysis. Most other folks used measurable functions, so they had an easier time obtaining their results. ;-)

The chap telling the story had other comments on the heavy accent of the speaker.
JAlbertoDJ wrote:

   ...

> Excuse me, i want say burst-noise. I have some problems with english.
No problem. I just wanted to be clear. Many people say "nucular" when they mean "nuclear". It's funny, though. Those same people don't say "uncular" when they mean "unclear", even though the only difference is the order of the first two letters.

Jerry
--
Engineering is the art of making what you want from things you can get.
Eric Jacobsen wrote:
> [quoted discussion snipped]
>
> Still sounds like he means burst noise to me. I've never heard of
> burns-noise, either.
In a masters-level course I took at Rutgers, the professor -- yes, a full professor -- persisted in talking about casual circuits. He insisted that causal wasn't a word.

Jerry
--
Engineering is the art of making what you want from things you can get.
>Still sounds like he means burst noise to me. I've never heard of
>burns-noise, either.
Excuse me, I meant to say burst noise. I have some problems with English.
Eric Jacobsen   wrote:

>Still sounds like he means burst noise to me. I've never heard of
>burns-noise, either.
Sounds like long-winded Scottish poetry. (GD&R)

Steve
On 9/30/2009 6:46 PM, Jerry Avins wrote:
> [earlier discussion snipped]
>
>>> You keep writing burns noise. Do you mean burst noise?
>>
>> Means High levels of noise during a time interval.
>
> Can you cite a reference that uses it that way?
>
> Jerry
Still sounds like he means burst noise to me. I've never heard of burns-noise, either.

--
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
On Thu, 1 Oct 2009 01:23:07 +0000 (UTC), spope33@speedymail.org (Steve
Pope) wrote:

>Muzaffer Kal wrote:
>
>>On Wed, 30 Sep 2009 23:14:34 +0000 (UTC), spope33@speedymail.org
>
>>>Here's an example. Suppose all errors occur in three-bit bursts.
>
>>The question is how can one suppose such a situation. If you know the
>>channel to that degree then you design a encoder/decoder which takes
>>advantage of that information so you remove that certainty and in the
>>end you're left with an iid source and channel with awgn which is
>>where you have started and which is the main problem to solve.
>
>Except that the three-bit-burst channel has more capacity
>than a random-error channel with the same bit-error rate.
>You can do better than just randomizing it and treating it
>as if it were random.
Which is exactly what I said: "If you know the channel to that degree then you design an encoder/decoder which takes advantage of that information so you remove that certainty."
>A similar thread came up here not long ago, in which it was
>discussed that an AWGN channel is the worst possible channel.
This is certainly true, and I already stated it: "in the end you're left with an iid source and channel with AWGN, which is where you have started and which is the main problem to solve." The bottom line is: whenever you know that your channel is anything other than iid + AWGN, you can design an encoder/decoder to your advantage, but in the end you are left with the basic problem at some higher SNR at the decision point.

--
Muzaffer Kal
DSPIA INC.
ASIC/FPGA Design Services
http://www.dspia.com
JAlbertoDJ wrote:
>> JAlbertoDJ wrote:
>>>> cpshah99 wrote:
>>>>
>>>>> Sorry to say this but can you please explain why with interleaving
>>>>> the information is lost?
>>>>
>>>> Here's an example. Suppose all errors occur in three-bit bursts.
>>>> Suppose you apply a pseudo-random interleaver. In the interleaved
>>>> stream, the ability to predict bit errors based on the previous
>>>> bit is lost.
>>>>
>>>> You could assert of course that if you know the interleaver
>>>> pattern, no information is lost but this may not help you
>>>> out practically.
>>>>
>>>> Steve
>>>>
>>> But, you cannot apply a pseudo-random interleaver, you should apply an
>>> interleaver of the Forney type, for example. So you separate the symbols
>>> in time, transforming a channel with memory into a memoryless one, and
>>> thereby enable random-error-correcting codes to be useful in a
>>> burns-noise channel.
>>
>> You keep writing burns noise. Do you mean burst noise?
>
> Means High levels of noise during a time interval.
Can you cite a reference that uses it that way?

Jerry
--
Engineering is the art of making what you want from things you can get.