DSPRelated.com
Forums

About decoding punctured turbo codes using dual codes

Started by geraldmanfroid July 22, 2009
I have read several articles dealing with inverse puncturing and dual codes
for decoding punctured concatenated convolutional codes.
I have to find an efficient way to decode a punctured serially concatenated
convolutional code (SCCC). I first tried a classic serial turbo-code scheme
(with SISO modules), but I need to improve my results by almost 1 dB.

My question is: do you think we may obtain better BER performance
using dual codes and these methods than with just the classic structure?
Or is there a trick?

Thank you very much for your help.

GM
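For concreteness, here is a minimal sketch of what puncturing and inverse puncturing (depuncturing) look like in a decoder chain. The keep-pattern below is an illustrative rate-2/3 example of my own choosing, not taken from any standard; at the receiver, punctured positions are filled with zero LLRs (erasures) before running the full-rate SISO decoder.

```python
from itertools import cycle

# Keep-mask applied cyclically over the rate-1/2 mother code's output:
# 1 = transmit the coded bit, 0 = puncture it.
# Pattern length 4 covers 2 info bits; keeping 3 of 4 gives rate 2/3.
PATTERN = [1, 1, 1, 0]

def puncture(coded_bits, pattern):
    """Drop the coded bits at positions where the pattern is 0."""
    return [b for b, keep in zip(coded_bits, cycle(pattern)) if keep]

def depuncture(llrs, pattern, n, fill=0.0):
    """Re-expand received LLRs to the mother code's length n,
    inserting zero LLRs (erasures) at punctured positions."""
    it = iter(llrs)
    return [next(it) if keep else fill
            for _, keep in zip(range(n), cycle(pattern))]
```

The zero LLR tells the SISO decoder "no channel information here" for the punctured positions, which is the usual way inverse puncturing is realized.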


geraldmanfroid <inscriptionalacon@gmail.com> wrote:

>I have read several articles dealing with inverse puncturing and dual codes
>for decoding punctured concatenated convolutional codes.
>I have to find an efficient way to decode a punctured serially concatenated
>convolutional code (SCCC). I first tried a classic serial turbo-code scheme
>(with SISO modules), but I need to improve my results by almost 1 dB.
>My question is: do you think we may obtain better BER performance using
>dual codes and these methods than with just the classic structure?
I'm not aware of absolutely all research results, but: no, I don't think you will obtain better performance. You will get the same performance. Also, the SCC will always be a few tenths of a dB worse than the PCC.

Over AWGN, the PCC should be within about 0.8 dB of capacity (Pollara's capacity) for long codewords (4K information symbols or more). If your performance for PCC/SCC is in the range I stated, then your implementation is probably correct and there is nothing I am aware of that will make it much better. (This last sentence is of course tautological... if I were aware of anything better, then I would not consider the implementation to be correct. :-) ) You are not going to be able to squeeze an extra 0.5 dB out of it, let alone 1 dB.

Be very skeptical of claimed improved turbo codes. Sometimes a researcher thinks he has improved on a code when in reality his implementation of the classic turbo codes was not correct, and so his improvement is illusory.

Steve
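For reference, the capacity figure quoted above can be sanity-checked numerically. For a rate-R code over the real AWGN channel with unconstrained input, the Shannon limit solves R = 0.5*log2(1 + 2R*Eb/N0); a minimal back-of-envelope helper (my own sketch, not anyone's library code; the BPSK-constrained limit Pollara-style figures use is slightly higher):

```python
import math

def shannon_limit_db(rate):
    """Minimum Eb/N0 in dB for reliable transmission at the given code
    rate over the real AWGN channel (unconstrained input), from
    solving rate = 0.5*log2(1 + 2*rate*ebno) for ebno."""
    ebno = (2.0**(2.0*rate) - 1.0) / (2.0*rate)
    return 10.0*math.log10(ebno)
```

For rate 1/2 this gives 0 dB Eb/N0, so "within 0.8 dB of capacity" leaves essentially no room for another 1 dB of gain.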
On Jul 22, 7:45 am, "geraldmanfroid" <inscriptionala...@gmail.com>
wrote:
> I have read several articles dealing with inverse puncturing and dual codes
> for decoding punctured concatenated convolutional codes.
> I have to find an efficient way to decode a punctured serially concatenated
> convolutional code (SCCC). I first tried a classic serial turbo-code scheme
> (with SISO modules), but I need to improve my results by almost 1 dB.
>
> My question is: do you think we may obtain better BER performance using
> dual codes and these methods than with just the classic structure?
> Or is there a trick?
>
> Thank you very much for your help.
>
> GM
I am curious: have you found out how the 1 dB is obtained? What is the coding gain improvement?
Verictor  <stehuang@gmail.com> wrote:

>I am curious: have you found out how the 1 dB is obtained? What is the
>coding gain improvement?
Moreover, how does one get a 1 dB improvement from a code that's
already less than 1 dB from capacity?

S.

Steve Pope wrote:

> Verictor <stehuang@gmail.com> wrote:
>
>> I am curious: have you found out how the 1 dB is obtained? What is the
>> coding gain improvement?
>
> Moreover, how does one get a 1 dB improvement from a code
> that's already less than 1 dB from capacity?
OP: "I have to find an efficient way to decode a punctured serially
concatenated convolutional code (SCCC). I first tried a classic serial
turbo-code scheme (with SISO modules), but I need to improve my results
by almost 1 dB."

I take this to mean the OP's decoder performance is 1 dB worse than
the classics.

OP: "My question is: do you think we may obtain better BER performance
using dual codes and these methods than with just the classic structure?
Or is there a trick?"

The BER performance is a property of a code. A suboptimal decoder can
only make it worse.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Vladimir Vassilevsky  <nospam@nowhere.com> wrote:

>"I have to find an efficient way to decode a punctured serially
>concatenated convolutional code (SCCC). I first tried a classic serial
>turbo-code scheme (with SISO modules), but I need to improve my results
>by almost 1 dB."
>
> I take this to mean the OP's decoder performance is 1 dB worse than
> the classics.
In this case the OP should first fix his decoder, *then*
experiment with dual codes, rather than researching both at once.

Steve
On 7/23/2009 10:22 AM, Steve Pope wrote:
> Vladimir Vassilevsky <nospam@nowhere.com> wrote:
>
>> "I have to find an efficient way to decode a punctured serially
>> concatenated convolutional code (SCCC). I first tried a classic serial
>> turbo-code scheme (with SISO modules), but I need to improve my results
>> by almost 1 dB."
>
>> I take this to mean the OP's decoder performance is 1 dB worse than
>> the classics.
>
> In this case the OP should first fix his decoder, *then*
> experiment with dual codes, rather than researching both at once.
>
> Steve
To be fair, though, with many capacity-approaching codes performance depends on the decoding algorithm, so I don't think the OP is necessarily way out in the weeds in looking for improvement via decoding. That would only apply when the code in question isn't already being decoded close to capacity, as you pointed out.

I recall seeing that fairly consistently with the LDPC stuff: depending on the conditions, the performance would change, sometimes significantly, depending on the algorithm, including the scheduling. That's not all that unusual or surprising.

It gets tricky because the benchmark necessarily becomes some accepted, known-best method, and if you don't have that known-best method working properly, the comparisons will be irrelevant.

--
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
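To make the algorithm-dependence concrete, here is a toy flooding-schedule min-sum decoder run on the (7,4) Hamming code's parity-check matrix. This is purely illustrative (my own minimal sketch; real LDPC codes are long and sparse): swapping the schedule (flooding vs. layered) or the check-node rule (min-sum vs. sum-product) changes the decoder's performance without changing the code itself.

```python
# Parity-check matrix of the (7,4) Hamming code (rows = checks).
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def min_sum_decode(llr, H, iters=10):
    """Flooding-schedule min-sum decoding; positive LLR means bit 0."""
    m, n = len(H), len(llr)
    msg_cv = [[0.0]*n for _ in range(m)]   # check -> variable messages
    bits = [1 if x < 0 else 0 for x in llr]
    for _ in range(iters):
        # Variable-to-check: channel LLR plus all other checks' messages.
        msg_vc = [[0.0]*n for _ in range(m)]
        for c in range(m):
            for v in range(n):
                if H[c][v]:
                    msg_vc[c][v] = llr[v] + sum(
                        msg_cv[c2][v] for c2 in range(m)
                        if H[c2][v] and c2 != c)
        # Check-to-variable: min-sum approximation of the tanh rule.
        for c in range(m):
            for v in range(n):
                if H[c][v]:
                    others = [msg_vc[c][v2] for v2 in range(n)
                              if H[c][v2] and v2 != v]
                    sign = 1
                    for x in others:
                        if x < 0:
                            sign = -sign
                    msg_cv[c][v] = sign * min(abs(x) for x in others)
        # Posterior LLRs and tentative hard decisions.
        post = [llr[v] + sum(msg_cv[c][v] for c in range(m) if H[c][v])
                for v in range(n)]
        bits = [1 if x < 0 else 0 for x in post]
        # Stop early once all parity checks are satisfied.
        if all(sum(H[c][v]*bits[v] for v in range(n)) % 2 == 0
               for c in range(m)):
            break
    return bits
```

With one low-confidence wrong bit (negative LLR of small magnitude) in an otherwise-confident all-zeros word, the decoder corrects it in the first iteration.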
Eric Jacobsen  <eric.jacobsen@ieee.org> wrote:

>> Vladimir Vassilevsky<nospam@nowhere.com> wrote:
>>> "I have to find an efficient way to decode a punctured serially
>>> concatenated convolutional code (SCCC). I first tried a classic serial
>>> turbo-code scheme (with SISO modules), but I need to improve my results
>>> by almost 1 dB."
>
>>> I take this to mean the OP's decoder performance is 1 dB worse than
>>> the classics.
>
>> In this case the OP should first fix his decoder, *then*
>> experiment with dual codes, rather than researching both at once.
>To be fair, though, with many capacity-approaching codes performance
>depends on the decoding algorithm, so I don't think the OP is
>necessarily way out in the weeds in looking for improvement via
>decoding. That would only apply when the code in question isn't already
>being decoded close to capacity, as you pointed out.
>
>I recall seeing that fairly consistently with the LDPC stuff: depending
>on the conditions, the performance would change, sometimes
>significantly, depending on the algorithm, including the scheduling.
>That's not all that unusual or surprising.
Good point; I agree that both code design and decoder design for any of the near-channel-capacity codes is tricky, sometimes even tweakish. I have found SCC and PCC turbo codes to be pretty sensitive to details of the interleaving and puncturing design, and that the SCC is more sensitive to constituent code design than one might expect.

Despite these issues, when I first looked into turbo codes I was able to use decoders for PCC and SCC codes as described in Heegard and Wicker without modification, and obtain the expected performance. I would be a little less confident if I were starting instead from a collection of journal articles, which often contain errors that nobody has spotted.

Steve
Hi,

If I remember correctly, SCCs are better than PCCs in the error-floor region (high SNRs). The opposite is true at low SNRs. Indeed, for a code only 1 dB away from Shannon capacity it is very difficult to improve performance further; of course, that also depends on where the error floor starts. You can do many things to improve the performance, like finding the best generator polynomials and combining them with the best interleaving structure... :)

The turbo decoding algorithm based on BCJR is near-optimum with the so-called extrinsic information. If you can find something better, then you can patent it... :) Of course, the puncturing pattern is very important too.

About dual codes, you can refer to Hagenauer; I think he was the first to use them for decoding turbo-like codes.

That's all for now.

/Kostas
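For readers who want to see what the BCJR-style forward/backward recursions inside a SISO module look like, here is a max-log sketch on a deliberately tiny, hypothetical code (memory-1, rate-1/2, outputs u XOR s and u, terminated with one zero tail bit). This is my own toy example, not the OP's constituent code, and it uses the max-log simplification rather than full log-MAP for brevity.

```python
NEG = float("-inf")

def trellis(s, u):
    """Memory-1 toy code: next state is u; outputs are (u XOR s, u)."""
    return u, (u ^ s, u)

def bpsk(b):
    """Map bit 0 -> +1, bit 1 -> -1."""
    return 1 - 2*b

def encode(info):
    """Encode info bits plus one zero tail bit (terminates in state 0)."""
    out, s = [], 0
    for u in info + [0]:
        s, (c1, c2) = trellis(s, u)
        out += [c1, c2]
    return out

def max_log_bcjr(llr):
    """Max-log BCJR on coded-bit channel LLRs; returns one LLR per
    trellis step (positive means bit 0)."""
    T = len(llr) // 2

    def gamma(t, s, u):                      # branch metric
        _, (c1, c2) = trellis(s, u)
        return 0.5*(llr[2*t]*bpsk(c1) + llr[2*t + 1]*bpsk(c2))

    alpha = [[NEG, NEG] for _ in range(T + 1)]
    alpha[0][0] = 0.0                        # encoder starts in state 0
    for t in range(T):                       # forward recursion
        for s in (0, 1):
            for u in (0, 1):
                ns, _ = trellis(s, u)
                alpha[t + 1][ns] = max(alpha[t + 1][ns],
                                       alpha[t][s] + gamma(t, s, u))
    beta = [[NEG, NEG] for _ in range(T + 1)]
    beta[T][0] = 0.0                         # trellis terminated in state 0
    for t in range(T - 1, -1, -1):           # backward recursion
        for s in (0, 1):
            beta[t][s] = max(gamma(t, s, u) + beta[t + 1][trellis(s, u)[0]]
                             for u in (0, 1))
    out = []
    for t in range(T):                       # per-bit max-log LLRs
        best = {0: NEG, 1: NEG}
        for s in (0, 1):
            for u in (0, 1):
                ns, _ = trellis(s, u)
                best[u] = max(best[u],
                              alpha[t][s] + gamma(t, s, u) + beta[t + 1][ns])
        out.append(best[0] - best[1])
    return out
```

In a turbo decoder, the a priori LLRs from the other SISO would be added into the branch metric, and the extrinsic part (posterior minus channel minus a priori) is what gets exchanged between the modules; that exchange is omitted here.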
koutote <koutote@yahoo.gr> wrote:

> If I remember correctly, SCCs are better than PCCs in the error-floor
> region (high SNRs). The opposite is true at low SNRs.
I believe the PCC should not have an error-floor effect in most cases, and should perform as well as the SCC even at high SNR.

Steve