DSPRelated.com
Forums

FEC for Burst FSK

Started by mite_learner May 20, 2015
On Thu, 21 May 2015 18:40:19 +0000 (UTC), spope33@speedymail.org
(Steve Pope) wrote:

>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>
>>(Steve Pope) wrote:
>
>>>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>
>>>Pollara et al., 1998 formulated an expression for channel
>>>capacity vs. block size using a sphere-packing bound. There
>>>is a JPL technical report on the topic, readily googlable.
>
>>>Yes, by the time you have a block length as short as a few
>>>hundred bits, capacity is noticeably less than longer blocks
>>>(say a few thousand bits). But you're still in the range
>>>where either turbo or LDPC codes perform close to capacity.
>
>>It's tough to achieve. When we did the LDPC codes for 802.16 and
>>802.11n, we only defined codes down to a certain block size because
>>the existing, plain Convolutional Code with a single-pass Viterbi
>>decoder would do just as well at that size or smaller blocks. It was
>>counter-productive to use the LDPC for smaller blocks.
>
>Hmm. Did you evaluate turbo codes? According to the Pollara result they
>should outperform convolutional codes at a bit length of 440
>by a couple dB.
>
>See Figure 11 in "TMO Progress Report 42-133", which is on the
>JPL website.
>
>Usually LDPC codes and turbo codes work approximately equally
>well, but it's possible there are block size, code rate,
>and demodulator distance metric combinations where one
>underperforms the other. In parallel-tone modems (as I prefer
>calling them) the tone interleaver figures in as well.
>
>Steve
There are design tradeoffs with Turbo Codes that can be made to improve
their performance for short blocks. Duo-binary codes work well for that,
and there are some that are standardized for that purpose, e.g., DVB-RCS.
Like many things, the optimizations for short blocks aren't necessarily
great for long blocks, so, like the point I was trying to make with the
OP, everything depends when it comes to FEC. But nothing works really,
really well for short blocks because of the reduction in bit diversity.

Similar tricks can be done with LDPCs, but the tradeoffs may hurt you
elsewhere. We did look at TCs for 802.16 and 802.11, and at the time
opted for LDPCs. 802.16 eventually wound up with both LDPC and TC, and
the TCs were included in WiMax and the LDPC wasn't. Maybe that's what
impaired the adoption of WiMax (badum tsshhh). 802.11 did a little
better job of controlling feature creep, IMHO, and has continued with
just the LDPC as the only advanced FEC.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
On Thu, 21 May 2015 23:12:17 +0200, Sebastian Doht
<seb_doht@lycos.com> wrote:

>On 20.05.2015 at 20:58, mite_learner wrote:
>> Hi all,
>>
>> I started working on FEC for burst FSK system over multipath channel. I
>> want to ask the community here about what type of FEC would be best for
>> such system. Which type of FEC is being used in existing FSK systems? Any
>> pointers to literature and implementation work would be welcomed.
>>
>> --
>> Sam
>> ---------------------------------------
>> Posted through http://www.DSPRelated.com
>
>Interesting question. I am not an expert here, but I wonder if the
>modulation scheme matters at all for the choice of the FEC. Things that
>matter are:
>- Channel model / conditions
>- Bandwidth
>- Target bitrate
>- Framing / interleaving
>But I cannot think of a reason why one would use a different FEC for
>FSK than for QPSK; maybe one of the experts in the group knows better.
>
>Greetz,
>Sebastian
I think you're basically right.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
On Thu, 21 May 2015 15:03:58 +0000, Eric Jacobsen wrote:

> On Thu, 21 May 2015 05:56:22 -0700 (PDT), makolber@yahoo.com wrote:
>
>>On Thursday, May 21, 2015 at 3:19:23 AM UTC-4, mite_learner wrote:
>>>
>>> >On Wed, 20 May 2015 13:58:47 -0500, "mite_learner" <94814@DSPRelated>
>>> >wrote:
>>> >
>>> >>Hi all,
>>> >>
>>> >>I started working on FEC for burst FSK system over multipath channel. I
>>> >>want to ask the community here about what type of FEC would be best for
>>> >>such system. Which type of FEC is being used in existing FSK systems? Any
>>> >>pointers to literature and implementation work would be welcomed.
>>> >
>>> >How long is the data block? What is the expected raw error rate and
>>> >error distribution? Is there a channel interleaver? What is the
>>> >order of the FSK modulation?
>>> >
>>> >All of these things matter, probably plus other stuff I'm not
>>> >thinking of right now.
>>> >
>>> >Eric Jacobsen
>>> >Anchor Hill Communications
>>> >http://www.anchorhill.com
>>>
>>> Hi, thanks for the response; below are the answers to your questions:
>>>
>>> data block length is 220 symbols with 4-FSK. There is no channel
>>> interleaver yet. I have not calculated the raw error rate yet but it
>>> corresponds to around 7 dB SNR.
>>> ---------------------------------------
>>> Posted through http://www.DSPRelated.com
>>
>>Ok, I'm going to ask a question here...
>>
>>If the effective SNR is 7 dB and it is mostly AWGN, then I understand
>>that you would want an FEC to help.
>>
>>BUT
>>
>>If the effective SNR is 7 dB and that is due mostly to multipath which
>>causes ISI, then is FEC really the correct approach? Would not an
>>equalizer be more appropriate?
>>
>>Mark
>
> Ooh, you're gonna make him define SNR, aren't you? ;)
>
> Eric Jacobsen
> Anchor Hill Communications
> http://www.anchorhill.com
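As a rough sanity check on the numbers quoted above: if the 4-FSK demodulator is noncoherent and the channel were pure AWGN (neither of which the thread actually confirms), the raw symbol error rate follows the standard orthogonal-signaling expression. A minimal sketch; the function name is mine, and reading the loosely defined 7 dB "SNR" as Es/N0 is an assumption:

```python
import math

def noncoherent_mfsk_ser(m, esn0_db):
    """Symbol error rate of orthogonal M-FSK with noncoherent detection
    in AWGN (the standard textbook alternating-sum expression)."""
    esn0 = 10.0 ** (esn0_db / 10.0)
    return sum(((-1) ** (k + 1)) * math.comb(m - 1, k) / (k + 1)
               * math.exp(-k * esn0 / (k + 1))
               for k in range(1, m))

# 4-FSK near the operating point mentioned above (Es/N0 = 7 dB assumed):
ser = noncoherent_mfsk_ser(4, 7.0)   # roughly 9% raw symbol errors
```

At that raw error rate an FEC has plenty of work to do, which is consistent with the equalizer-vs.-FEC question raised in the post.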
Isn't SNR from the Estonian, from their phrase for "flame war?"

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Eric Jacobsen <eric.jacobsen@ieee.org> wrote:

>On Thu, 21 May 2015 18:40:19 +0000 (UTC), spope33@speedymail.org
>(Steve Pope) wrote:
>
>>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>>
>>>[...]
>>
>>Hmm. Did you evaluate turbo codes? According to the Pollara result they
>>should outperform convolutional codes at a bit length of 440
>>by a couple dB.
>>
>>See Figure 11 in "TMO Progress Report 42-133", which is on the
>>JPL website.
>There are design tradeoffs with Turbo Codes that can be made to
>improve their performance for short blocks. Duo-binary codes work
>well for that, and there are some that are standardized for that
>purpose, e.g., DVB-RCS. Like many things, the optimizations for
>short blocks aren't necessarily great for long blocks, so, like the
>point I was trying to make with the OP, everything depends when it
>comes to FEC.
Thanks
>But nothing works really, really well for short blocks because of the
>reduction in bit diversity.
Okay. The way I look at it, a coded modulation design is "working
really well" if it approaches the channel capacity, where "capacity"
is defined appropriately for the situation.

For arbitrary modulation and sufficiently long block sizes, the
Shannon capacity is most relevant.

For arbitrary modulation and short block sizes, the Pollara
sphere-packing capacity is most relevant.

For long block sizes but modulation constrained to a typical mapping
of binary bits to constellation points, the mutual information limit
is most relevant.

For the remaining case, with both short block sizes and modulation
constrained to a mapping of binary bits to constellation points, so
far as I know nobody has yet derived a closed-form formula for the
capacity. (It would definitely be a publishable result if someone went
to the trouble to do this.) However, what is commonly done is the
following: take the loss (in dB) by which the sphere-packing limit
falls short of the Shannon limit; add to this the loss (in dB) by which
the mutual information limit falls short of the Shannon limit; and use
the sum as an estimate of the extent to which the capacity of the
short-block, mutual-information-limited channel falls short of the
Shannon limit.

What piqued my interest here was the use of the phrase bit-diversity
as expressing the capacity impairment associated with short
codeblocks, as neither the Shannon nor the sphere-packing capacities
assume information is encoded in bits.

Steve
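The additive-loss estimate described above is straightforward to mechanize. Below is a minimal numeric sketch for the real AWGN channel; the function names are mine, and the loss figures in the example are illustrative placeholders, not values read from the JPL report:

```python
import math

def shannon_min_ebno_db(rate):
    """Minimum Eb/N0 (dB) for reliable transmission at code rate `rate`
    over the real AWGN channel, from C = 0.5*log2(1 + 2*R*Eb/N0)."""
    ebno = (2.0 ** (2.0 * rate) - 1.0) / (2.0 * rate)
    return 10.0 * math.log10(ebno)

def estimated_short_block_limit_db(rate, sphere_packing_loss_db, mi_loss_db):
    """Estimate the short-block, bit-mapped capacity limit by adding the
    dB shortfalls of the sphere-packing and mutual-information limits
    to the Shannon limit, as described in the post above."""
    return shannon_min_ebno_db(rate) + sphere_packing_loss_db + mi_loss_db

# Rate-1/2 example; the two loss numbers are placeholders.
limit_db = estimated_short_block_limit_db(0.5,
                                          sphere_packing_loss_db=1.2,
                                          mi_loss_db=0.2)
```

The rate-1/2 Shannon limit works out to exactly 0 dB Eb/N0, so the estimated short-block limit in this example is just the sum of the two placeholder losses.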
On Fri, 22 May 2015 02:59:16 +0000 (UTC), spope33@speedymail.org
(Steve Pope) wrote:

>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>
>>On Thu, 21 May 2015 18:40:19 +0000 (UTC), spope33@speedymail.org
>>(Steve Pope) wrote:
>>
>>>[...]
>>
>>There are design tradeoffs with Turbo Codes that can be made to
>>improve their performance for short blocks. Duo-binary codes work
>>well for that, and there are some that are standardized for that
>>purpose, e.g., DVB-RCS. Like many things, the optimizations for
>>short blocks aren't necessarily great for long blocks, so, like the
>>point I was trying to make with the OP, everything depends when it
>>comes to FEC.
>
>Thanks
>
>>But nothing works really, really well for short blocks because of the
>>reduction in bit diversity.
>
>Okay.
>
>The way I look at it, a coded modulation design is "working really
>well" if it approaches the channel capacity, where "capacity" is
>defined appropriately for the situation.
>
>For arbitrary modulation and sufficiently long block sizes, Shannon
>capacity is most relevant.
>
>For arbitrary modulation and short block sizes, the Pollara
>sphere-packing capacity is most relevant.
>
>For long block sizes but modulation constrained to using a typical
>mapping of binary bits to constellation points, the mutual
>information limit is most relevant.
>
>For the remaining case, with both short block sizes and modulation
>constrained to using a mapping of binary bits to constellation
>points, so far as I know nobody has yet derived a closed-form
>formula for the capacity. (It would definitely be a publishable
>result if someone went to the trouble to do this.) However, what is
>commonly done is the following: take the loss (in dB) by which the
>sphere-packing limit falls short of the Shannon limit; add to this the
>loss (in dB) by which the mutual information limit falls short of the
>Shannon limit; and use the sum as an estimate of the extent to which
>the capacity of the short-block, mutual-information-limited channel
>falls short of the Shannon limit.
>
>What piqued my interest here was the use of the phrase bit-diversity as
>expressing the capacity impairment associated with short codeblocks,
>as neither the Shannon nor the sphere-packing capacities assume
>information is encoded in bits.
One still has to be careful about using "capacity" as a benchmark.
The Shannon-Hartley limit is general for a single use of an AWGN
channel and, as you mention, is independent of the modulation scheme.
This is useful for testing FEC schemes in applications where the
channel is AWGN or the demodulator sufficiently whitens the signal so
that FEC schemes can be objectively compared. This does not address a
number of things, like the adequacy of the constellation in the
application, how well the demod whitens the channel if it isn't
already white, or the presence of multipath or multiple uses of the
channel (e.g., MIMO).

For a binary-input continuous-output (BICO) channel (which is
typically applicable for digital modulation schemes) with modulated
transmission using arbitrary constellations, the capacity can be
determined using Ungerboeck's method [1]. This is handy for
determining capacity limits on a particular modulation scheme with a
sliced constellation. Evaluating constellations can be important to
manage things like PAPR or PAs that don't do null excursions well (if
that's even still a problem, I don't know). In these cases one can
evaluate the capacity of an arbitrary constellation and use that as an
appropriate benchmark. These capacities will always be less than or
equal to the Shannon-Hartley limit, which is an overall bound for a
single use of an AWGN channel.

The use of MIMO requires a different evaluation of capacity as well,
and multipath also changes things. And that doesn't even touch on
block size yet.

So, yeah, depending on what you're doing and what exactly it is that
you're trying to evaluate, there are different capacity metrics that
may come into play. This is Yet Another Reason why the OP's question
may not have a straightforward answer.

[1] G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals,"
IEEE Trans. Inform. Theory, vol. IT-28, no. 1, pp. 55-67, Jan. 1982.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
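The constellation-constrained capacity Eric mentions reduces to a numerical integration that is easy to approximate by Monte Carlo. The sketch below estimates the equiprobable-input capacity of an arbitrary complex constellation on an AWGN channel; it is a generic illustration in the spirit of Ungerboeck's method, not code from the paper, and the function name and QPSK example are mine:

```python
import math
import random

def constellation_capacity(points, esn0_db, trials=20000, seed=1):
    """Monte Carlo estimate (bits/symbol) of the equiprobable-input
    capacity of the constellation `points` on the complex AWGN channel:
    C = log2(M) - E[ log2( sum_j exp((|n|^2 - |y - x_j|^2) / N0) ) ]."""
    rng = random.Random(seed)
    m = len(points)
    es = sum(abs(p) ** 2 for p in points) / m    # average symbol energy
    n0 = es / (10.0 ** (esn0_db / 10.0))         # total complex noise variance
    sigma = math.sqrt(n0 / 2.0)                  # std dev per real dimension
    acc = 0.0
    for _ in range(trials):
        x = points[rng.randrange(m)]             # transmitted symbol
        n = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        y = x + n                                # received sample
        s = sum(math.exp((abs(n) ** 2 - abs(y - xj) ** 2) / n0)
                for xj in points)
        acc += math.log2(s)
    return math.log2(m) - acc / trials

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
c = constellation_capacity(qpsk, esn0_db=5.0, trials=5000)
```

At high Es/N0 the estimate approaches log2(M) bits/symbol and never exceeds it, which is the sense in which these constrained capacities sit at or below the Shannon-Hartley bound.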
Eric Jacobsen <eric.jacobsen@ieee.org> wrote:

>One still has to be careful about using "capacity" as a benchmark.
>The Shannon-Hartley limit is general for a single use of an AWGN
>channel and, as you mention, is independent of the modulation scheme.
>This is useful for testing FEC schemes in applications where the
>channel is AWGN or the demodulator sufficiently whitens the signal so
>that FEC schemes can be objectively compared. This does not address
>a number of things, like the adequacy of the constellation in the
>application, how well the demod whitens the channel if it isn't
>already white, or the presence of multipath or multiple uses of the
>channel (e.g., MIMO).
>For a binary-input continuous-output (BICO) channel (which is
>typically applicable for digital modulation schemes) with modulated
>transmission using arbitrary constellations, the capacity can be
>determined using Ungerboeck's method. This is handy for determining
>capacity limits on a particular modulation scheme with a sliced
>constellation.
Thanks for the reference to Ungerboeck's method. I have not used it. Need to check this out. (Tangentially I like to call binary convolutional codes Ungerboeck Codes since he was the first to do a computer search for the best polynomials, and many of the commonly used codes today are in fact his.)
>[...]
>And that doesn't even touch on block size yet.
That's where Pollara's result is useful, IMO.
>So, yeah, depending on what you're doing and what exactly it is that
>you're trying to evaluate, there are different capacity metrics that
>may come into play.
>
>This is Yet Another Reason why the OP's question may not have a
>straightforward answer.
>[1] G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals,"
>IEEE Trans. Inform. Theory, vol. IT-28, no. 1, pp. 55-67, Jan. 1982.
Steve