On Mon, 19 Dec 2016 13:10:32 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:
>On Sun, 18 Dec 2016 21:18:21 +0000, eric.jacobsen wrote:
>
>> On Sun, 18 Dec 2016 12:52:59 -0600, Tim Wescott <tim@seemywebsite.com>
>> wrote:
>>
>>>On Sun, 18 Dec 2016 18:39:26 +0000, eric.jacobsen wrote:
>>>
>>>> On Tue, 13 Dec 2016 19:16:15 -0600, Tim Wescott
>>>> <seemywebsite@myfooter.really> wrote:
>>>>
>>>>>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
>>>>>
>>>>>> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
>>>>>> <seemywebsite@myfooter.really> wrote:
>>>>>>
>>>>>>>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>>>>>>>
>>>>>>>> Tim Wescott <tim@seemywebsite.really> wrote:
>>>>>>>>
>>>>>>>>>I've got a severe constraint on the amount of delay that I can
>>>>>>>>>add,
>>>>>>>>>so I'm not working in the realm of codes that come closest to the
>>>>>>>>>channel capacity -- rather, I'm working in the realm of codes that
>>>>>>>>>come closest to the sphere-packing limit for the amount of delay
>>>>>>>>>that I'm allowing myself.
>>>>>>>>
>>>>>>>>>It doesn't seem to be an area where a lot of work has been done.
>>>>>>>>
>>>>>>>> For the general constraint of code performance vs. block length
>>>>>>>> based on the sphere-packing limit, there is a paper by Fabrizio
>>>>>>>> Pollara that I have linked to here before, one result of which is
>>>>>>>> that for codeword lengths in the range of several dozen bits,
>>>>>>>> conventional BCC's are the best available.
>>>>>>>>
>>>>>>>> I think this latency constraint makes it even less likely that
>>>>>>>> you'll want to use a code over GF(16). Codes over GF(2)
>>>>>>>> outperform codes over GF(N), N > 2, for the same codeword length
>>>>>>>> -- so there'll have to be something very serendipitous about how
>>>>>>>> the transmitted symbols map into code symbols. It would have to
>>>>>>>> outperform feeding the MAP values into the binary decoder and
>>>>>>>> there isn't a lot of room to improve on that.
>>>>>>>
>>>>>>>Here is my confusion:
>>>>>>>
>>>>>>>I am under the impression that the available decoder algorithms take
>>>>>>>MAP values on a per-bit basis, but the structure of my symbols is
>>>>>>>such that I cannot map my set of symbols to a collection of four
>>>>>>>bits such that a "short distance" symbol error reliably results in
>>>>>>>fewer bit errors.
>>>>>>>
>>>>>>>I already gave this example, but for the mapping that I'm currently
>>>>>>>using, symbol #3 and symbol #4 are a short distance apart, so it
>>>>>>>would be entirely natural for a decoder at the symbol level to come
>>>>>>>up with an output that says that these two symbols are equally
>>>>>>>likely to have been transmitted, and any other symbol values are
>>>>>>>highly unlikely to have been transmitted.
>>>>>>>
>>>>>>>If I were to take that and turn it into binary with likelihoods, it
>>>>>>>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and
>>>>>>>that bit 3 is definitely a 0.
>>>>>>>
>>>>>>>So either I'm confused about why going from a 2-in-16 decode to an
>>>>>>>8-in-16 decode is a good thing, or I'm confused about how this
>>>>>>>binary MAP decoding stuff really works, and there is, indeed, a way
>>>>>>>to do conditional probabilities between bits.
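To make that symbol-to-bit conversion concrete, here's a quick Python sketch. It assumes a natural-binary labeling (symbol k carries the 4-bit label of k) just to reproduce the 3-vs-4 example above; the marginalization itself works for any labeling:

```python
# Per-bit LLRs marginalized from a vector of 16 symbol posteriors.
# Natural-binary labeling assumed: symbol 3 = 0011, symbol 4 = 0100,
# which differ in three bits even though the symbols are "adjacent".
import math

def bit_llrs(sym_probs):
    """LLR(b_i) = log P(b_i = 0) - log P(b_i = 1), summed over symbols."""
    llrs = []
    for i in range(4):  # bit 0 = LSB
        p0 = sum(p for k, p in enumerate(sym_probs) if not (k >> i) & 1)
        p1 = sum(p for k, p in enumerate(sym_probs) if (k >> i) & 1)
        llrs.append(math.log(max(p0, 1e-300)) - math.log(max(p1, 1e-300)))
    return llrs

# Symbols 3 and 4 equally likely, every other symbol negligible:
probs = [0.0] * 16
probs[3] = probs[4] = 0.5
print(bit_llrs(probs))
# bits 0..2 come out with LLR = 0 (completely ambiguous) while bit 3
# is strongly decided as 0 -- exactly the situation described above.
```

So the math does behave the way the paragraph above worries it does; whether that's actually bad for the decoder is a separate question (a different labeling changes which bits get smeared).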
>>>>>>
>>>>>> Are you familiar with how soft decision is mapped for bits in a
>>>>>> high-order constellation, e.g., 16-QAM?
>>>>>
>>>>>No, although I have this fuzzy notion that where possible it's mapped
>>>>>out in some sort of gray-code order, so that adjacent dots on the
>>>>>constellation are only one bit off. I don't think I can do that here,
>>>>>although I haven't tried to solve that particular puzzle.
>>>>>
>>>>>> Even though particular symbols may be the same distance apart, the
>>>>>> bit reliabilities for two adjacent symbols depend on the decision
>>>>>> regions for the particular bits,
>>>>>> not the symbols. The bit reliabilities are determined by the
>>>>>> slicing patterns for those bits, not necessarily how far away the
>>>>>> next closest symbol may be.
>>>>>>
>>>>>> It is the same when distances between symbols aren't equal, e.g.,
>>>>>> 16-APSK, or other oddball, crazy-shaped constellations. The
>>>>>> individual bit soft-decisions are determined by the slicing pattern.
>>>>>>
>>>>>> In something like TCM the patterns are dynamic, i.e., depend on a
>>>>>> previously decoded symbol in order to increase the distance between
>>>>>> possible adjacent candidate symbols. If you don't use such a
>>>>>> scheme you can just slice consistently and still do very well
>>>>>> performance-wise (this is basically the idea behind BICM).
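The "slicing pattern" point can be shown on one rail of a Gray-mapped 16-QAM constellation, i.e. a 4-PAM line. This is only a sketch with an assumed labeling (levels -3,-1,+1,+3 carrying bits 00,01,11,10, MSB first), not anyone's particular system:

```python
# Per-bit decision regions for one Gray-labeled 4-PAM rail.
# Assumed labeling: -3 -> 00, -1 -> 01, +1 -> 11, +3 -> 10 (MSB first).
# The MSB is sliced on the sign of x; the LSB on |x| vs 2 -- so two
# equally spaced symbols can still give very different bit reliabilities.
def hard_bits(x):
    msb = 0 if x < 0 else 1       # single boundary at x = 0
    lsb = 1 if abs(x) < 2 else 0  # boundaries at x = -2 and x = +2
    return msb, lsb

for x in (-3, -1, 1, 3):
    print(x, hard_bits(x))
# -3 -> (0, 0), -1 -> (0, 1), +1 -> (1, 1), +3 -> (1, 0):
# a Gray ordering, so adjacent levels differ in exactly one bit.
```

The key observation: a received point near x = +2 makes the LSB unreliable but leaves the MSB rock-solid, because the bits have different slicing boundaries even though the symbols are uniformly spaced.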
>>>>>
>>>>>
>>>>>It's the end of the day and my brain wants me to lie down and watch
>>>>>"Futurama" for a while. Can you point to a paper or tutorial for me
>>>>>to read? I'd ask for words of one syllable, or that it only use the
>>>>>ten-hundred* most common words in English, but I guess I can't hope
>>>>>for that.
>>>>>
>>>>>* "Thousand" is not one of the thousand most common words in English.
>>>>
>>>> Sorry, I must have stepped away from usenet for a few days...that
>>>> usually happens during a topic I'm actually participating in for some
>>>> reason.
>>>>
>>>> Which parts are causing the most difficulty? The mapping or the soft
>>>> decision generation? Both? Neither are very difficult in
>>>> implementation, usually. For the soft-decision generation simple
>>>> approximations often perform only negligibly differently than
>>>> optimal, complex ones.
>>>>
>>>> Several of the decent, early papers on BICM cover both in that
>>>> context, but the general ideas apply to non-BICM systems as well.
>>>>
>>>> In this paper Fig. 2.1 shows simple, practical soft-decision "curves"
>>>> for 16-QAM:
>>>>
>>>> http://shannon.cm.nctu.edu.tw/html/thesis/chia-wei.pdf
>>>>
>>>> Basically, figure out which input bits make a slope between the
>>>> decision regions and you're most of the way there.
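For the same assumed Gray 4-PAM rail as above (levels -3,-1,+1,+3 labeled 00,01,11,10), the simple soft-decision "curves" that kind of figure shows reduce to a couple of lines of code. Sign convention here is an assumption (positive LLR means bit = 1):

```python
# Simple max-log-style per-bit soft decisions for a Gray 4-PAM rail.
# These are the standard piecewise-linear approximations; exact LLRs
# differ only slightly, which is why the simple forms get used.
def soft_bits(x):
    llr_msb = x              # linear slope through the x = 0 boundary
    llr_lsb = 2.0 - abs(x)   # peaks between the |x| = 2 boundaries
    return llr_msb, llr_lsb

print(soft_bits(0.9))  # MSB weakly toward 1, LSB fairly confident
print(soft_bits(3.2))  # MSB very confident, LSB pushed toward 0
```

Each bit's curve is literally "a slope between the decision regions" for that bit: zero at the bit's boundary, growing with distance from it.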
>>>>
>>>> These papers discuss various mapping tricks for high-order modulations
>>>> in the context of BICM and BICM-ID (Iterative Decoding), but the ideas
>>>> apply generally to any high-order mapping system. I've found the
>>>> first reference very useful.
>>>>
>>>> Design, Analysis, and Performance Evaluation for BICM-ID with Square
>>>> QAM Constellations in Rayleigh Fading Channels, Chindapol and
>>>> Ritcey, IEEE Journal on Selected Areas in Communications, Vol 19 No 5,
>>>> May, 2001
>>>>
>>>> Optimized Symbol Mappings for Bit-Interleaved Coded Modulation with
>>>> Iterative Decoding, Schreckenbach, Görtz, Hagenauer, Bauch,
>>>> Proceedings Globecom 2003.
>>>>
>>>> The seminal paper on BICM. See Fig 9 for an example of
>>>> set-partitioning an 8-PSK constellation:
>>>>
>>>> 8-PSK Trellis Codes for a Rayleigh Channel, Ephraim Zehavi, IEEE
>>>> Trans. Comm, Vol 40 No 5, May 1992
>>>>
>>>> A useful follow-up is Caire's paper, which I think I cited earlier
>>>> somewhere:
>>>>
>>>> Bit-Interleaved Coded Modulation, Caire, Taricco, Biglieri, IEEE Trans.
>>>> Info Theory, Vol 44 No 3, May 1998.
>>>>
>>>> If any of those references make your head hurt, cherry pick the
>>>> sections that don't or just move on to the next. The complicated
>>>> parts are really mostly of academic interest. The important parts are
>>>> really pretty simple. If we could meet for beer (I wish!) we could
>>>> hammer the basic ideas out easily before a pint was drained, but since
>>>> that's not practical at the moment the citations above will have to
>>>> do.
>>>>
>>>> High-order mapping and soft-decision generation are the sort of thing
>>>> that once you get it, for many systems it can be easily shown
>>>> completely in a diagram or two. I was looking for examples on the
>>>> intartubes but didn't see anything suitable. I have some that are,
>>>> sadly, proprietary else I'd share. If I can find something sharable
>>>> I will.
>>>
>>>I have no problem with the soft decision making -- it's the question of
>>>whether and how I can map my set of symbols to a set of multi-digit
>>>binary numbers such that a short-distance ambiguity in the original
>>>symbol results in fewer ambiguous (or wrong) bits than would happen if a
>>>symbol were seriously off.
>>
>> I'm still guessing what your signal space looks like, but for things
>> like PSK/QAM, some of the papers I cited talk about the set partitioning
>> or mapping for good performance in the interleaved case. This can be
>> done for arbitrary constellations, assuming you actually have a
>> constellation of some kind.
>>
>>>If a symbol is completely trashed and I'm doing soft-decision making
>>>then the answer's easy: I just have a block of erasures.
>>
>> This is one of the things that bit interleaving helps with; a symbol or
>> a few adjacent symbols that are heavily damaged have a reduced impact on
>> the ability to properly decode the message.
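The burst-spreading effect is easy to demonstrate with a toy row/column block interleaver (sizes here are arbitrary, just for illustration):

```python
# Toy 4x6 block interleaver: write row-wise, read column-wise.
# A burst of consecutive channel erasures comes out spread apart
# after deinterleaving, so no codeword region sees the whole burst.
def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(24))
tx = interleave(data, 4, 6)
for i in range(8, 12):   # burst wipes out 4 consecutive positions
    tx[i] = None
rx = deinterleave(tx, 4, 6)
print([i for i, b in enumerate(rx) if b is None])
# -> [2, 8, 14, 20]: the erasures land 6 positions apart
# instead of adjacent.
```

The cost, of course, is exactly the delay being discussed below: the receiver can't deinterleave until the whole block has arrived.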
>
>I'm being jumped on to keep delay to a minimum, so a scheme that can
>identify and cope with burst errors without adding delay is a plus. In
>the end the customer may well accept a higher block error rate more
>readily than they'll accept the significantly higher delay that it'd
>take to avoid it by interleaving.
Yeah, the interleaving certainly does affect delay, but so will any
decent coding system. If you want gain, you need the code to
traverse a lot of bits (bit diversity), so that's a natural tradeoff,
anyway. A Viterbi decoder with no interleaving will likely be as
good as you can do, and TCM will allow higher-order modulation without
really adding significant delay. TCM does want a relatively tame
channel, though; as long as your channel isn't too crazy it would
probably work just fine.