Really good book on coding

Started by Tim Wescott December 12, 2016
On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote:
>
>> Tim Wescott <tim@seemywebsite.really> wrote:
>>
>>>I've got a severe constraint on the amount of delay that I can add, so
>>>I'm not working in the realm of codes that come closest to the channel
>>>capacity -- rather, I'm working in the realm of codes that come closest
>>>to the sphere-packing limit for the amount of delay that I'm allowing
>>>myself.
>>
>>>It doesn't seem to be an area where a lot of work has been done.
>>
>> For the general constraint of code performance vs. block length based on
>> the sphere-packing limit, there is a paper by Fabrizio Pollara that I
>> have linked to here before, one result of which is that for codeword
>> lengths in the range of several dozen bits, conventional BCCs are the
>> best available.
>>
>> I think this latency constraint makes it even less likely that you'll
>> want to use a code over GF(16).  Codes over GF(2) outperform codes over
>> GF(N), N > 2, for the same codeword length -- so there'll have to be
>> something very serendipitous about how the transmitted symbols map into
>> code symbols.  It would have to outperform feeding the MAP values into
>> the binary decoder, and there isn't a lot of room to improve on that.
>
>Here is my confusion:
>
>I am under the impression that the available decoder algorithms take MAP
>values on a per-bit basis, but the structure of my symbols is such that I
>cannot map my set of symbols to a collection of four bits such that a
>"short distance" symbol error reliably results in fewer bit errors.
>
>I already gave this example, but for the mapping that I'm currently
>using, symbol #3 and symbol #4 are a short distance apart, so it would be
>entirely natural for a decoder at the symbol level to come up with an
>output that says that these two symbols are equally likely to have been
>transmitted, and any other symbol values are highly unlikely to have been
>transmitted.
>
>If I were to take that and turn it into binary with likelihoods, it would
>say that bits 0, 1 and 2 are equally likely to be 0 or 1, and that bit 3
>is definitely a 0.
>
>So either I'm confused about why going from a 2-in-16 decode to an 8-
>in-16 decode is a good thing, or I'm confused about how this binary MAP
>decoding stuff really works, and there is, indeed, a way to do
>conditional probabilities between bits.
Are you familiar with how soft decision is mapped for bits in a
high-order constellation, e.g., 16-QAM?  Even though particular symbols
may be the same distance apart, the bit reliabilities for two adjacent
symbols depend on the decision regions for the particular bits, not the
symbols.  The bit reliabilities are determined by the slicing patterns
for those bits, not necessarily how far away the next closest symbol may
be.

It is the same when distances between symbols aren't equal, e.g.,
16-APSK, or other oddball, crazy-shaped constellations.  The individual
bit soft-decisions are determined by the slicing pattern.

In something like TCM the patterns are dynamic, i.e., depend on a
previously decoded symbol in order to increase the distance between
possible adjacent candidate symbols.  If you don't use such a scheme you
can just slice consistently and still do very well performance-wise
(this is basically the idea behind BICM).
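
To make the slicing point concrete, here is a minimal Python sketch --
mine, not anything posted in the thread -- for one Gray-coded 4-PAM rail
of a 16-QAM constellation; the levels and bit labels are assumptions
chosen for illustration.  A received value sitting on one bit's slicing
boundary leaves that bit a coin flip while the other bit on the same
rail stays firm:

import math

levels = [-3.0, -1.0, 1.0, 3.0]            # 4-PAM amplitudes (one 16-QAM rail)
labels = [(0, 0), (0, 1), (1, 1), (1, 0)]  # assumed Gray labels (b0, b1)

def bit_llr(x, bit, noise_var):
    """Exact log(P(bit=0)/P(bit=1)) for received value x on an AWGN rail."""
    p0 = sum(math.exp(-(x - lv) ** 2 / (2 * noise_var))
             for lv, lb in zip(levels, labels) if lb[bit] == 0)
    p1 = sum(math.exp(-(x - lv) ** 2 / (2 * noise_var))
             for lv, lb in zip(levels, labels) if lb[bit] == 1)
    return math.log(p0 / p1)

# x = 0.0 sits on the slicing boundary of b0 (sliced at 0) but far from
# the b1 boundaries (sliced at +/-2): b0 is a coin flip, b1 is firm.
for b in (0, 1):
    print("bit %d: LLR = %+.2f" % (b, bit_llr(0.0, b, 0.5)))

With x = 0 the sign bit comes out at LLR 0 while the inner/outer bit
stays at about -8 (firmly a 1) -- exactly the decision-region behavior
described above.
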
On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:

> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott
> <seemywebsite@myfooter.really> wrote:
>
> [...]
>
> Are you familiar with how soft decision is mapped for bits in a
> high-order constellation, e.g., 16-QAM?
No, although I have this fuzzy notion that where possible it's mapped out in some sort of gray-code order, so that adjacent dots on the constellation are only one bit off. I don't think I can do that here, although I haven't tried to solve that particular puzzle.
> [...]
It's the end of the day and my brain wants me to lie down and watch
"Futurama" for a while.  Can you point to a paper or tutorial for me to
read?  I'd ask for words of one syllable, or that it only use the
ten-hundred* most common words in English, but I guess I can't hope for
that.

* "Thousand" is not one of the thousand most common words in English.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Tim Wescott  <seemywebsite@myfooter.really> wrote:

>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
>> Are you familiar with how soft decision is mapped for bits in a
>> high-order constellation, e.g., 16-QAM?

>No, although I have this fuzzy notion that where possible it's mapped out
>in some sort of gray-code order, so that adjacent dots on the
>constellation are only one bit off.  I don't think I can do that here,
>although I haven't tried to solve that particular puzzle.

>> [...]

>It's the end of the day and my brain wants me to lie down and watch
>"Futurama" for a while.  Can you point to a paper or tutorial for me to
>read?  I'd ask for words of one syllable, or that it only use the
>ten-hundred* most common words in English, but I guess I can't hope for
>that.
Here is what I posted to the group a number of years ago for computing
the a-priori probability of a given received bit for an arbitrary
constellation.  It has nothing to do with grey-coding (or with what is
usually called "slicing"):

   The APP that a transmitted bit is a one is

   P(1) = poss(1) / (poss(1) + poss(0))   where

   poss(1) = sum over (constellation points p corresponding to
             a transmitted 1) of poss(1,p)

   where for additive white noise of variance v and received signal x

   poss(1,p) = exp(-(x-p)^2/(2*v))

   Use a similar formula for poss(0).  You compute P(1) for each
   transmitted bit.

   [...]

   For BPSK there is only one constellation point for each possible
   value of the transmitted bit and the log of this expression reduces
   to the familiar distance metric used in most Viterbi decoders.  For
   higher-order QAM, either use the full expression, or just use the
   dominant term, in which case you are using the "nearest neighbor
   approximation" or "slicing", which entails an implementation loss.

https://www.dsprelated.com/showthread/comp.dsp/107583-1.php

Steve
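
The expression above drops straight into code.  The following is my own
Python transcription of it (the toy QPSK points, labels, and noise
variance are assumptions for the example, not anything from the post):

import math

def bit_app_one(x, points, labels, bit, v):
    """P(transmitted bit == 1) given received complex sample x.

    points -- list of complex constellation points
    labels -- one bit-tuple per point
    v      -- noise variance (AWGN)
    """
    poss = {0: 0.0, 1: 0.0}
    for p, lb in zip(points, labels):
        poss[lb[bit]] += math.exp(-abs(x - p) ** 2 / (2 * v))
    return poss[1] / (poss[1] + poss[0])

# Toy Gray-labeled QPSK, just to exercise the function:
pts  = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
labs = [(0, 0), (0, 1), (1, 1), (1, 0)]
rx   = 0.9 + 0.2j                  # a noisy received sample
for b in range(2):
    print("P(bit %d = 1) = %.3f" % (b, bit_app_one(rx, pts, labs, b, 0.5)))
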
On Tuesday, December 13, 2016 at 8:16:25 PM UTC-5, Tim Wescott wrote:
> On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote:
>
> [...]
>
> It's the end of the day and my brain wants me to lie down and watch
> "Futurama" for a while.  Can you point to a paper or tutorial for me to
> read?  I'd ask for words of one syllable, or that it only use the
> ten-hundred* most common words in English, but I guess I can't hope for
> that.
Section III has a good development of soft bit demapping for square QAM. Section IV develops simplified approximations. http://www.hpl.hp.com/techreports/2001/HPL-2001-246.pdf
On Wed, 14 Dec 2016 11:34:07 +0000, Steve Pope wrote:

> Tim Wescott <seemywebsite@myfooter.really> wrote:
>
> [...]
>
>    For BPSK there is only one constellation point for each possible
>    value of the transmitted bit and the log of this expression reduces
>    to the familiar distance metric used in most Viterbi decoders.  For
>    higher-order QAM, either use the full expression, or just use the
>    dominant term, in which case you are using the "nearest neighbor
>    approximation" or "slicing", which entails an implementation loss.
>
> https://www.dsprelated.com/showthread/comp.dsp/107583-1.php
>
> Steve
It's that implementation loss that concerns me.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
On 14.12.2016 0:54, Evgeny Filatov wrote:
> On 13.12.2016 22:03, eric.jacobsen@ieee.org wrote:
>> On Tue, 13 Dec 2016 13:56:01 +0300, Evgeny Filatov
>> <filatov.ev@mipt.ru> wrote:
>>
>> That's him!  He's a good dude, too.  I liked him a lot.
>>
>> And thanks for the corrections on the texts.  He had given me a
>> signed copy of a Russian-language text that has a lot of familiar
>> graphs and diagrams and such in it, so I had made the erroneous
>> assumption that the text I cited was a translation.  I think the
>> Russian text I have may be on information theory; at least the last
>> word in the title appears to be "information".  I'm not very competent
>> at Russian other than being able to read words that happen to be
>> fairly cognate to English, order beer, etc.
>
> Actually I don't know anything about the man or his works, so I couldn't
> correct you.  You may be correct and it's a translation, just I couldn't
> find the original book.  Or perhaps he has given you a manuscript.  ;-)
>
> Gene
I didn't know about Prof. Krouk because I'm not that much of an EE guy.
My academic background is mainly in (cond-mat) physics.  However, I've
checked with some acquaintances of mine in the EE community and they
know Prof. Krouk.

A bit off topic: if you'd like to improve your command of Russian, I
would suggest having a look at "The Last Trial" -- some fantasy music
loosely based on the "Dragonlance Legends":

https://www.youtube.com/playlist?list=PLyhtJsrq6Kznc28Ksshkoq_zdsfUO4kcU

It's nice entertainment stuff, and I occasionally put it on my playlist.
Perhaps it would help you to understand Prof. Krouk's text.  ;-)

Gene
On Tue, 13 Dec 2016 19:16:15 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>[...]
>
>It's the end of the day and my brain wants me to lie down and watch
>"Futurama" for a while.  Can you point to a paper or tutorial for me to
>read?  I'd ask for words of one syllable, or that it only use the
>ten-hundred* most common words in English, but I guess I can't hope for
>that.
>
>* "Thousand" is not one of the thousand most common words in English.
Sorry, I must have stepped away from usenet for a few days...that
usually happens during a topic I'm actually participating in for some
reason.

Which parts are causing the most difficulty?  The mapping or the soft
decision generation?  Both?  Neither is very difficult in
implementation, usually.  For the soft-decision generation, simple
approximations often perform only negligibly worse than optimal,
complex ones.

Several of the decent, early papers on BICM cover both in that context,
but the general ideas apply to non-BICM systems as well.

In this paper, Fig. 2.1 shows simple, practical soft-decision "curves"
for 16-QAM:

http://shannon.cm.nctu.edu.tw/html/thesis/chia-wei.pdf

Basically, figure out which input bits make a slope between the decision
regions and you're most of the way there.

These papers discuss various mapping tricks for high-order modulations
in the context of BICM and BICM-ID (Iterative Decoding), but the ideas
apply generally to any high-order mapping system.  I've found the first
reference very useful.

Design, Analysis, and Performance Evaluation for BICM-ID with Square
QAM Constellations in Rayleigh Fading Channels, Chindapol and Ritcey,
IEEE Journal on Selected Areas in Communications, Vol. 19, No. 5,
May 2001.

Optimized Symbol Mappings for Bit-Interleaved Coded Modulation with
Iterative Decoding, Schreckenbach, Görtz, Hagenauer, and Bauch,
Proceedings of IEEE GLOBECOM 2003.

The seminal paper on BICM.  See Fig. 9 for an example of
set-partitioning an 8-PSK constellation:

8-PSK Trellis Codes for a Rayleigh Channel, Ephraim Zehavi, IEEE Trans.
Comm., Vol. 40, No. 5, May 1992.

A useful follow-up is Caire's paper, which I think I cited earlier
somewhere:

Bit-Interleaved Coded Modulation, Caire, Taricco, and Biglieri, IEEE
Trans. Info. Theory, Vol. 44, No. 3, May 1998.

If any of those references make your head hurt, cherry-pick the sections
that don't, or just move on to the next.  The complicated parts are
really mostly of academic interest.  The important parts are really
pretty simple.  If we could meet for beer (I wish!) we could hammer the
basic ideas out easily before a pint was drained, but since that's not
practical at the moment the citations above will have to do.

High-order mapping and soft-decision generation are the sort of thing
that, once you get it, for many systems can be shown completely in a
diagram or two.  I was looking for examples on the intartubes but didn't
see anything suitable.  I have some that are, sadly, proprietary, else
I'd share.  If I can find something sharable I will.
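
As an illustration of those soft-decision "curves" (my own sketch, with
an assumed Gray labeling and noise variance, not taken from the thesis):
under the max-log simplification the per-bit metric for one 4-PAM rail
of 16-QAM collapses to a piecewise-linear function of the received
value, which is why the cheap demappers give up so little versus the
full sum:

import math

LEVELS = [-3, -1, 1, 3]
LABELS = [(0, 0), (0, 1), (1, 1), (1, 0)]  # assumed Gray (sign, inner/outer)

def exact_llr(x, bit, v):
    """log(P(bit=1)/P(bit=0)) from the full sum over the rail."""
    p1 = sum(math.exp(-(x - s) ** 2 / (2 * v))
             for s, lb in zip(LEVELS, LABELS) if lb[bit] == 1)
    p0 = sum(math.exp(-(x - s) ** 2 / (2 * v))
             for s, lb in zip(LEVELS, LABELS) if lb[bit] == 0)
    return math.log(p1 / p0)

def maxlog_llr(x, bit, v):
    """Piecewise-linear 'curve': keep only the nearest point per bit value."""
    if bit == 0:                             # sign bit, boundary at x = 0
        d = 4 * x if abs(x) <= 2 else 8 * x - 8 * math.copysign(1, x)
    else:                                    # inner/outer bit, boundaries at +/-2
        d = 4 * (2 - abs(x))
    return d / (2 * v)

for x in (-3.2, -0.7, 0.4, 1.9, 2.6):
    for b in (0, 1):
        print("x=%+.1f b%d: exact %+7.2f  max-log %+7.2f"
              % (x, b, exact_llr(x, b, 0.5), maxlog_llr(x, b, 0.5)))

The printout shows the two metrics tracking closely; the residual gap is
the "implementation loss" of the nearest-neighbor approximation
mentioned earlier in the thread.
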
On Sun, 18 Dec 2016 18:39:26 +0000, eric.jacobsen wrote:

> [...]
>
> High-order mapping and soft-decision generation are the sort of thing
> that, once you get it, for many systems can be shown completely in a
> diagram or two.  I was looking for examples on the intartubes but
> didn't see anything suitable.  I have some that are, sadly,
> proprietary, else I'd share.  If I can find something sharable I will.
I have no problem with the soft decision making -- it's the question of
whether and how I can map my set of symbols to a set of multi-digit
binary numbers such that a short-distance ambiguity in the original
symbol results in fewer ambiguous (or wrong) bits than would happen if a
symbol were seriously off.

If a symbol is completely trashed and I'm doing soft-decision making
then the answer's easy: I just have a block of erasures.

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work!  See my website if you're interested
http://www.wescottdesign.com
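
Tim's symbol geometry isn't given anywhere in the thread, so the
following Python sketch is purely hypothetical: it scores a candidate
16-symbol-to-4-bit labeling by the total bit errors the "short distance"
symbol confusions would cause, here assuming the close pairs form a
ring.  A search over labelings would try to drive the cost toward one
bit per close pair, which is what a Gray-style labeling achieves when
the geometry admits one:

def hamming(a, b):
    """Number of differing bits between two 4-bit labels."""
    return bin(a ^ b).count("1")

def mapping_cost(labels, close_pairs):
    """Total bit errors if every 'close' confusion happened once.

    labels[s] is the 4-bit integer assigned to symbol s.
    """
    return sum(hamming(labels[i], labels[j]) for i, j in close_pairs)

# Hypothetical geometry: symbols adjacent in index are the "short
# distance" pairs (a ring).  Tim's real constellation may not admit this.
close = [(i, (i + 1) % 16) for i in range(16)]

natural = list(range(16))                    # plain binary order
gray    = [i ^ (i >> 1) for i in range(16)]  # reflected Gray code

print("natural-order cost:", mapping_cost(natural, close))  # 30 bits
print("gray-order cost:   ", mapping_cost(gray, close))     # 16 (1 bit/pair)
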
On Fri, 16 Dec 2016 19:37:09 +0300, Evgeny Filatov
<filatov.ev@mipt.ru> wrote:

>[...]
>
>A bit off topic: if you'd like to improve your command of Russian, I
>would suggest having a look at "The Last Trial" -- some fantasy music
>loosely based on the "Dragonlance Legends":
>
>https://www.youtube.com/playlist?list=PLyhtJsrq6Kznc28Ksshkoq_zdsfUO4kcU
>
>It's nice entertainment stuff, and I occasionally put it on my playlist.
>Perhaps it would help you to understand Prof. Krouk's text.  ;-)
>
>Gene
Your assistance is very kind.  ;)

I had a long-time live-in g/f who was Ukrainian, and between that and
frequent travel to Russia with a former employer, I think I absorbed
about as much of the language as I could hope to.  I did get good enough
at reading Cyrillic to pick out the English-cognate words, which worked
far better than I would have ever expected.

I do miss going over there, as I always found the trips interesting and
adventurous.
Tim Wescott  <tim@seemywebsite.com> wrote:

>I have no problem with the soft decision making -- it's the question of
>whether and how I can map my set of symbols to a set of multi-digit
>binary numbers such that a short-distance ambiguity in the original
>symbol results in fewer ambiguous (or wrong) bits than would happen if a
>symbol were seriously off.
>
>If a symbol is completely trashed and I'm doing soft-decision making
>then the answer's easy: I just have a block of erasures.
Why would a symbol be "completely trashed"?  Is it a burst-error
channel?

On an AWGN channel, the I and Q noise components are independent;
therefore, if the I component is "trashed" there is no reason to predict
that the Q component is trashed.  Also, with AWGN, large-amplitude noise
is exponentially less likely than small-amplitude noise.

In short, with AWGN you do not worry about the "completely trashed" case
because it is so unlikely -- most raw errors occur because the noise is
just enough to push you over to the next constellation point.

But if it's not an AWGN channel, and you can characterize the channel,
then your problem has gotten easier, as AWGN is the worst case.  If it's
not AWGN, and you cannot characterize it, then you are probably "Sync
Outta Lock" as far as designing towards it.

S.
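
A quick Monte Carlo sketch supports the point about AWGN raw errors (the
16-QAM grid and noise level are arbitrary illustration choices, not from
the thread): nearly all detected-symbol errors land on a nearest
neighbor rather than a far-away point:

import random

# Square 16-QAM grid with spacing 2 between adjacent points.
points = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]
sigma = 0.7                      # assumed noise std dev per dimension
random.seed(1)

near = far = 0
for _ in range(200000):
    tx = random.choice(points)
    rx = tx + complex(random.gauss(0, sigma), random.gauss(0, sigma))
    det = min(points, key=lambda p: abs(rx - p))   # minimum-distance decision
    if det != tx:
        if abs(det - tx) <= 2.0:   # landed on a nearest neighbor
            near += 1
        else:                      # diagonal or farther: much rarer
            far += 1

print("nearest-neighbor errors:", near, " farther errors:", far)
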