Reply by Tim Wescott December 24, 2016
On Tue, 13 Dec 2016 01:14:41 +0000, eric.jacobsen wrote:

> On Mon, 12 Dec 2016 06:19:16 +0000 (UTC), spope33@speedymail.org (Steve
> Pope) wrote:
>
>>Tim Wescott <seemywebsite@myfooter.really> wrote:
>>
>>>On Mon, 12 Dec 2016 05:16:06 +0000, Steve Pope wrote:
>>
>>>> Tim Wescott <seemywebsite@myfooter.really> wrote:
>>
>>>>>I'm especially interested in non-binary convolutional coding. At
>>>>>least today.
>
>>>> Non-binary, but linear?
>>
>>>> Or non-linear as well?
>>
>>>> Steve
>>
>>>If there's a comprehensive theory of nonlinear coding theory I'm open.
>>
>>I don't know of any comprehensive theories in this area.
>>
>>A large useful class of non-binary, non-linear convolutional codes are
>>the Ungerboeck codes.
>>
>>A large useful class of non-binary, linear convolutional codes are the
>>binary convolutional codes interleaved across nonbinary symbols.
>
> Aka Bit-Interleaved Coded Modulation (BICM), which turns out to be, in
> many cases, much better than Trellis-Coded Modulation and also simpler
> to implement.
>
> A good reference paper is Bit-Interleaved Coded Modulation, Caire,
> Taricco, Biglieri, "IEEE Trans. on Information Theory", Vol 44, No 3,
> May 1998, p 927.
>
> This is more or less my default go-to technique for basic systems.
>
>>I have created native, non-binary convolutional codes as toy examples,
>>but that I could tell they were never any better than more conventional
>>convolutional codes.
>
> That's been my experience as well. Binary codes are prevalent because
> they're difficult to beat in many applications, even if the overlying
> symbols are not binary.
>
> There are some non-binary radix decoders which mostly fold hardware to
> accelerate computation, if that's what you're interested in.
> Probably searching on Radix-4 or Radix-16 ACS sections for Viterbi
> decoders may get you some good references there.
>
It turns out that this approach seems to be a winner, and pretty easy to implement. It's the path I'm going to follow, at least for the time being: just "ordinary" convolutional encoding of the output of the inner little block code.

(I'm not sure if that made sense. I just took a break from working to play some solitaire, and my brain kept generating trellis diagrams between games. I think it's time to stop working for the night.)

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
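For concreteness, a minimal sketch of that kind of chain is below: a small inner block code feeding an ordinary binary convolutional code. The 4-bit-to-6-bit table and the rate-1/2, K=3 code with generators (7, 5) octal are illustrative stand-ins, not the design actually being discussed.

    # Sketch only: a serial chain of a small inner block code followed by an
    # "ordinary" binary convolutional code.  The 4b/6b table and the rate-1/2,
    # K=3 code with generators (7, 5) octal are illustrative choices, not the
    # actual design under discussion.

    def conv_encode(bits, g=(0b111, 0b101), k=3):
        """Rate-1/2 feedforward convolutional encoder, flushed with k-1 zeros."""
        state = 0
        out = []
        for b in list(bits) + [0] * (k - 1):
            state = ((state << 1) | b) & ((1 << k) - 1)
            for poly in g:
                out.append(bin(state & poly).count("1") & 1)   # parity of the taps
        return out

    def block_encode(nibbles, table):
        """Map each 4-bit value to a 6-bit codeword, MSB first."""
        out = []
        for n in nibbles:
            cw = table[n]
            out.extend((cw >> i) & 1 for i in reversed(range(6)))
        return out

    # Hypothetical inner table: the first 16 of the 20 weight-3 words of length 6.
    TABLE = [w for w in range(64) if bin(w).count("1") == 3][:16]

    data = [0x3, 0x4, 0xA, 0x1]                 # example 4-bit source symbols
    coded = conv_encode(block_encode(data, TABLE))
    print(len(coded), "channel bits")           # 2 * (4*6 + 2) = 52

On the receive side you would run a (preferably soft-decision) Viterbi decoder for the convolutional code and then invert the inner table; a toy Viterbi sketch appears further down the thread.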
Reply by Tim Wescott December 21, 2016
On Wed, 21 Dec 2016 07:38:46 +0200, Vassilios Spiliopoulos wrote:

> Tim Wescott <seemywebsite@myfooter.really> Wrote in message: >> Suggestions? I'm looking for something more specific and new than >> "Error- >> Correction Coding for Digital Communications" by Clark and Cain. >> >> I was going to ask for a really good book specifically on >> _Convolutional_ coding, and that's kind of the direction that I'm >> leaning at the moment, but one book to bring them all and in the >> darkness bind them would be acceptable. >> >> I'm especially interested in non-binary convolutional coding. At least >> today. >> >> -- >> >> Tim Wescott Wescott Design Services http://www.wescottdesign.com >> >> I'm looking for work -- see my website! >> >> > My favourite book on a programming language is the "manual of the > c++ language by the White team"(translation from Greek, I can't > remember the title in english)
Error-correcting coding. Not error-avoidance coding. :)

--
Tim Wescott
Control systems, embedded software and circuit design

I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
Reply by Mark DeArman December 21, 2016
On Tuesday, December 20, 2016 at 9:38:50 PM UTC-8, Vassilios Spiliopoulos wrote:
> Tim Wescott <seemywebsite@myfooter.really> Wrote in message: > > Suggestions? I'm looking for something more specific and new than "Error- > > Correction Coding for Digital Communications" by Clark and Cain. > > > > I was going to ask for a really good book specifically on _Convolutional_ > > coding, and that's kind of the direction that I'm leaning at the moment, > > but one book to bring them all and in the darkness bind them would be > > acceptable. > > > > I'm especially interested in non-binary convolutional coding. At least > > today. > > > > -- > > > > Tim Wescott > > Wescott Design Services > > http://www.wescottdesign.com > > > > I'm looking for work -- see my website! > > > > My favourite book on a programming language is the "manual of the > c++ language by the White team"(translation from Greek, I can't > remember the title in english) > -- > > > ----Android NewsGroup Reader---- > http://usenet.sinaapp.com/
Lol, I knew this was coming :-P I wanted to make a meme for comp.dsp vs r/DSP/ discussions.
Reply by Vassilios Spiliopoulos December 21, 2016
Tim Wescott <seemywebsite@myfooter.really> Wrote in message:
> Suggestions? I'm looking for something more specific and new than "Error-
> Correction Coding for Digital Communications" by Clark and Cain.
>
> I was going to ask for a really good book specifically on _Convolutional_
> coding, and that's kind of the direction that I'm leaning at the moment,
> but one book to bring them all and in the darkness bind them would be
> acceptable.
>
> I'm especially interested in non-binary convolutional coding. At least
> today.
>
> --
>
> Tim Wescott
> Wescott Design Services
> http://www.wescottdesign.com
>
> I'm looking for work -- see my website!
>
My favourite book on a programming language is the "Manual of the C++ Language by the White Team" (translated from Greek; I can't remember the title in English).

--

----Android NewsGroup Reader----
http://usenet.sinaapp.com/
Reply by December 19, 2016
On Mon, 19 Dec 2016 15:36:38 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>On Mon, 19 Dec 2016 19:34:39 +0000, Steve Pope wrote: > >> Tim Wescott <seemywebsite@myfooter.really> wrote: >> >>>I'm being jumped on to keep delay to a minimum, so a scheme that can >>>identify and cope with burst errors without adding delay is a plus. In >>>the end the customer may well accept a higher block error rate easier >>>than they'll accept the significantly higher delay that it'd take to >>>avoid it by interleaving. >> >> Is it the worst case delay that needs to be kept to a minimum, >> or the average delay? And, is there a backchannel? > >It's for down-hole communication. Worst-case delay is what needs to be >kept to a minimum, but the customer feedback that I get after any time >has passed is "no delay! no delay at all!!", and then I have to re- >explain the basics of coding theory. > >There's sort of a backchannel, but only by turning off the pumps that >power the down-hole machinery, and it's expensive. The basic plan is to >try to anticipate how difficult the communications are on the well, and >set things up in the equipment before it goes down, however, failing >that, they can change parameters to a very limited extent by powering >down the pumps for varying amounts of time (at great expense, and much >irritation).
Down-hole stuff is always fun. I've had some exposure to it, but not a lot in depth and no complete system designs. Enough to know that it's tricky, regardless of the many types of systems (mud pulse, wired pipe, etc.). I do not envy you this task.

Julius Kusuma lives and breathes that stuff these days, but even if he hung around here much anymore, I don't know how much he could share.
Reply by Tim Wescott December 19, 2016
On Mon, 19 Dec 2016 19:34:39 +0000, Steve Pope wrote:

> Tim Wescott <seemywebsite@myfooter.really> wrote:
>
>>I'm being jumped on to keep delay to a minimum, so a scheme that can
>>identify and cope with burst errors without adding delay is a plus. In
>>the end the customer may well accept a higher block error rate easier
>>than they'll accept the significantly higher delay that it'd take to
>>avoid it by interleaving.
>
> Is it the worst case delay that needs to be kept to a minimum,
> or the average delay? And, is there a backchannel?
It's for down-hole communication. Worst-case delay is what needs to be kept to a minimum, but the customer feedback that I get after any time has passed is "no delay! no delay at all!!", and then I have to re-explain the basics of coding theory.

There's sort of a backchannel, but only by turning off the pumps that power the down-hole machinery, and it's expensive. The basic plan is to try to anticipate how difficult the communications will be on the well and set things up in the equipment before it goes down; failing that, they can change parameters to a very limited extent by powering down the pumps for varying amounts of time (at great expense, and much irritation).

--
Tim Wescott
Control systems, embedded software and circuit design

I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
Reply by Tim Wescott December 19, 2016
On Mon, 19 Dec 2016 20:13:46 +0000, eric.jacobsen wrote:

> On Mon, 19 Dec 2016 00:12:59 -0600, Tim Wescott > <seemywebsite@myfooter.really> wrote: > >>On Sun, 18 Dec 2016 19:09:41 +0000, Steve Pope wrote: >> >>> Tim Wescott <tim@seemywebsite.com> wrote: >>> >>>>I have no problem with the soft decision making -- it's the question >>>>of whether and how I can map my set of symbols to a set of multi-digit >>>>binary numbers such that a short-distance ambiguity in the original >>>>symbol results in fewer ambiguous (or wrong) bits than would happen if >>>>a symbol were seriously off. >>>> >>>>If a symbol is completely trashed and I'm doing soft-decision making >>>>then the answer's easy: I just have a block of erasures. >>> >>> Why would a symbol be "completely trashed"? Is it a burst-error >>> channel? >>> >>> On an AWGN channel, the I and Q noise components are independent, >>> therefore if the I component is "trashed" there is no reason to >>> predict the Q component is trashed. >>> >>> And also, with AWGN, large-amplitude noise is exponentially less >>> likely than small-amplitude noise. In short, with AWGN you do not >>> worry about the "completely trashed" case as it is so unlikely -- most >>> raw errors are because the noise is just enough to push you over to >>> the next constellation point. >>> >>> But, if it's not an AWGN channel, and you can characterize the >>> channel, then your problem has gotten easier, as AWGN is the worst >>> case. >>> >>> If it's not AWGN, and you cannot characterize it, then you are >>> probably "Sync Outta Lock" as far as designing towards it. >> >>No Q. Sorry, this isn't Star Trek. Or radio. Or even telephony. >> >>AWGN, baseband, symbols can be anything you want as long as they're >>either a fixed magnitude or zero, with a minimum switching time and with >>a constant average DC value. > > Sounds like PSK? Or is it real-valued only so OOK?
Real-valued only. It's definitely the protocol of choice for the Librarian at Unseen University.

--
Tim Wescott
Control systems, embedded software and circuit design

I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
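For anyone following the slicing discussion quoted above, a small sketch of per-bit soft decisions for one Gray-mapped 4-PAM axis (one rail of 16-QAM), using the usual max-log approximation, is below. The levels, bit labels, and noise model are generic textbook choices, not anything from this thread; the real-valued on/off signalling described further down is simpler still.

    # Max-log per-bit LLRs for one Gray-mapped 4-PAM axis under AWGN.
    # Levels and Gray labels below are a common convention, not from the thread.
    LEVELS = [-3.0, -1.0, 1.0, 3.0]
    LABELS = [(0, 0), (0, 1), (1, 1), (1, 0)]   # Gray labels (b1, b0) per level

    def pam4_bit_llrs(y, noise_var):
        # log-likelihood of each level (terms common to all levels dropped)
        metrics = [-(y - a) ** 2 / (2.0 * noise_var) for a in LEVELS]
        llrs = []
        for bit in range(2):
            m0 = max(m for m, lab in zip(metrics, LABELS) if lab[bit] == 0)
            m1 = max(m for m, lab in zip(metrics, LABELS) if lab[bit] == 1)
            llrs.append(m0 - m1)                # positive means the bit is more likely 0
        return llrs                             # [LLR for b1, LLR for b0]

    print(pam4_bit_llrs(0.2, noise_var=0.5))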
Reply by Steve Pope December 19, 2016
One problem with adding a convolutional code to this 4b/6b
modulation is that you now have error propagation that did
not exist before.

So you may actually want something like a Reed-Solomon code,
such as a (15,k) code over GF(16), with k chosen from 13, 11, 9 ...  
with some sort of mixed error/erasure decoding strategy that 
takes some advantage of the available soft decisions.
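To make those parameters concrete, a bare-bones systematic RS(15,11) encoder over GF(16) can look roughly like the sketch below. The field polynomial, generator roots, and the choice k = 11 are ordinary textbook defaults rather than recommendations, and a real errors-and-erasures decoder is considerably more code.

    # GF(16) built on x^4 + x + 1, generator roots alpha^1..alpha^4.
    PRIM = 0b10011
    EXP = [0] * 30
    LOG = [0] * 16
    x = 1
    for i in range(15):
        EXP[i] = x
        LOG[x] = i
        x <<= 1
        if x & 0x10:
            x ^= PRIM
    for i in range(15, 30):                     # wrap-around for easy indexing
        EXP[i] = EXP[i - 15]

    def gf_mul(a, b):
        """Multiply in GF(16)."""
        if a == 0 or b == 0:
            return 0
        return EXP[LOG[a] + LOG[b]]

    def gen_poly(nparity):
        """(x + alpha)(x + alpha^2)...(x + alpha^nparity), highest degree first."""
        g = [1]
        for i in range(1, nparity + 1):
            r = EXP[i]
            g = [a ^ b for a, b in zip(g + [0], [0] + [gf_mul(r, c) for c in g])]
        return g

    def rs_encode(msg, n=15, k=11):
        """Systematic encode: msg is k GF(16) symbols; returns n symbols."""
        assert len(msg) == k and all(0 <= s < 16 for s in msg)
        g = gen_poly(n - k)
        work = list(msg) + [0] * (n - k)        # msg(x) * x^(n-k)
        for i in range(k):                      # long division by monic g(x)
            coef = work[i]
            if coef:
                for j in range(1, len(g)):
                    work[i + j] ^= gf_mul(coef, g[j])
        return list(msg) + work[k:]             # message followed by parity

    print(rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]))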

If latency requirements permit, block-interleaving this code to depth 2
may be wise.
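Depth 2 here just means alternating symbols from two consecutive codewords, so a burst of d adjacent channel symbols lands at most about d/2 symbols deep in either codeword; a throwaway sketch:

    # Toy depth-2 block interleaver over codeword symbols (e.g. GF(16) values).
    def interleave2(cw_a, cw_b):
        out = []
        for a, b in zip(cw_a, cw_b):
            out.extend((a, b))
        return out

    def deinterleave2(stream):
        return list(stream[0::2]), list(stream[1::2])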

Those soft decisions could be formed by brute-force ML in this 
case.
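A sketch of what that brute-force ML could look like for six-sample symbols on an AWGN channel is below. The on/off amplitude mapping and the illustrative codeword table are assumptions, not details taken from the thread.

    # Brute-force ML per-symbol soft decisions: compare the received 6-sample
    # block against all 16 candidate waveforms and keep the log-likelihoods.
    A = 1.0                                              # assumed "on" amplitude
    TABLE = [w for w in range(64) if bin(w).count("1") == 3][:16]
    CANDIDATES = [[A * ((w >> i) & 1) for i in reversed(range(6))] for w in TABLE]

    def symbol_log_likelihoods(rx_block, noise_var):
        """rx_block: six received samples.  Returns 16 log-likelihoods (AWGN)."""
        lls = []
        for cand in CANDIDATES:
            d2 = sum((r - c) ** 2 for r, c in zip(rx_block, cand))
            lls.append(-d2 / (2.0 * noise_var))          # up to a common constant
        return lls

    lls = symbol_log_likelihoods([0.9, 0.1, 1.1, 0.0, 0.8, 0.2], noise_var=0.25)
    print(lls.index(max(lls)))                           # index of the ML symbol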

Because of your parameters there are probably no Goppa / Hermitian /
Algebraic Geometry codes that are of any use.

Steve
Reply by December 19, 2016
On Mon, 19 Dec 2016 13:10:32 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Sun, 18 Dec 2016 21:18:21 +0000, eric.jacobsen wrote: > >> On Sun, 18 Dec 2016 12:52:59 -0600, Tim Wescott <tim@seemywebsite.com> >> wrote: >> >>>On Sun, 18 Dec 2016 18:39:26 +0000, eric.jacobsen wrote: >>> >>>> On Tue, 13 Dec 2016 19:16:15 -0600, Tim Wescott >>>> <seemywebsite@myfooter.really> wrote: >>>> >>>>>On Wed, 14 Dec 2016 00:32:08 +0000, eric.jacobsen wrote: >>>>> >>>>>> On Tue, 13 Dec 2016 16:48:29 -0600, Tim Wescott >>>>>> <seemywebsite@myfooter.really> wrote: >>>>>> >>>>>>>On Tue, 13 Dec 2016 19:48:04 +0000, Steve Pope wrote: >>>>>>> >>>>>>>> Tim Wescott <tim@seemywebsite.really> wrote: >>>>>>>> >>>>>>>>>I've got a severe constraint on the amount of delay that I can >>>>>>>>>add, >>>>>>>>>so I'm not working in the realm of codes that come closest to the >>>>>>>>>channel capacity -- rather, I'm working in the realm of codes that >>>>>>>>>come closest to the sphere-packing limit for the amount of delay >>>>>>>>>that I'm allowing myself. >>>>>>>> >>>>>>>>>It doesn't seem to be an area where a lot of work has been done. >>>>>>>> >>>>>>>> For the general constraint of code performance vs. clock length >>>>>>>> based on the sphere-packing limit, there is a paper by Fabrizio >>>>>>>> Pollara that I have linked to here before, one result of which is >>>>>>>> that for codeword lengths in the range of several dozen bits, >>>>>>>> conventional BCC's are the best available. >>>>>>>> >>>>>>>> I think this latency constraint makes it even less likely that >>>>>>>> you'll want to use a code over GF(16). Codes over GF(2) >>>>>>>> outperform codes over GF(N), N > 2, for the same codeword length >>>>>>>> -- so there'll have to be something very serindipitous about how >>>>>>>> the transmitted symbols map into code symbols. It would have to >>>>>>>> outperform feeding the MAP values into the binary decoder and >>>>>>>> there isn't a lot of room to improve on that. >>>>>>> >>>>>>>Here is my confusion: >>>>>>> >>>>>>>I am under the impression that the available decoder algorithms take >>>>>>>MAP values on a per-bit basis, but the structure of my symbols is >>>>>>>such that I cannot map my set of symbols to a collection of four >>>>>>>bits such that a "short distance" symbol error reliably results in >>>>>>>fewer bit errors. >>>>>>> >>>>>>>I already gave this example, but for the mapping that I'm currently >>>>>>>using, symbol #3 and symbol #4 are a short distance apart, so it >>>>>>>would be entirely natural for a decoder at the symbol level to come >>>>>>>up with an output that says that these two symbols are equally >>>>>>>likely to have been transmitted, and any other symbol values are >>>>>>>highly unlikely to have been transmitted. >>>>>>> >>>>>>>If I were to take that and turn it into binary with likelihoods, it >>>>>>>would say that bits 0, 1 and 2 are equally likely to be 0 or 1, and >>>>>>>that bit 3 is definitely a 0. >>>>>>> >>>>>>>So either I'm confused about why going from a 2-in-16 decode to an >>>>>>>8- in-16 decode is a good thing, or I'm confused about how this >>>>>>>binary MAP decoding stuff really works, and there is, indeed, a way >>>>>>>to do conditional probabilities between bits. >>>>>> >>>>>> Are you familiar with how soft decision is mapped for bits in a >>>>>> high-order constellation, e.g., 16-QAM? >>>>> >>>>>No, although I have this fuzzy notion that where possible it's mapped >>>>>out in some sort of gray-code order, so that adjacent dots on the >>>>>constellation are only one bit off. 
I don't think I can do that here, >>>>>although I haven't tried to solve that particular puzzle. >>>>> >>>>>> Even though particular symbols may be the same distance apart, the >>>>>> bit reliabilities for two adjacent symbols depend on the decision >>>>>> regions for the particular bits, >>>>>> not the symbols. The bit reliabilities are determined by the >>>>>> slicing patterns for those bits, not necessarily how far away the >>>>>> next closest symbol may be. >>>>>> >>>>>> It is the same when distances between symbols aren't equal, e.g., >>>>>> 16-APSK, or other oddball, crazy-shaped constellations. The >>>>>> individual bit soft-decisions are determined by the slicing pattern. >>>>>> >>>>>> In something like TCM the patterns are dynamic, i.e., depend on a >>>>>> previously decoded symbol in order to increase the distance between >>>>>> possible adjacent candidate symbols. If you don't use such a >>>>>> scheme you can just slice consistently and still do very well >>>>>> performance-wise (this is basically the idea behind BICM). >>>>> >>>>> >>>>>It's the end of the day and my brain wants me to lie down and watch >>>>>"Futerama" for a while. Can you point to a paper or tutorial for me >>>>>to read? I'd ask for words of one syllable, or that it only use the >>>>>ten- hundred* most common words in English, but I guess I can't hope >>>>>for that. >>>>> >>>>>* "Thousand" is not one of the thousand most common words in English. >>>> >>>> Sorry, I must have stepped away from usenet for a few days...that >>>> usually happens during a topic I'm actually participating in for some >>>> reason. >>>> >>>> Which parts are causing the most difficulty? The mapping or the soft >>>> decision generation? Both? Neither are very difficult in >>>> implementation, usually. For the soft-decision generation simple >>>> approximations often perform only negligibly different than optimal, >>>> complex. ones. >>>> >>>> Several of the decent, early papers on BICM cover both in that >>>> context, but the general ideas apply to non-BICM systems as well. >>>> >>>> In this paper Fig. 2.1 shows simple, practical soft-decision "curves" >>>> for 16-QAM: >>>> >>>> http://shannon.cm.nctu.edu.tw/html/thesis/chia-wei.pdf >>>> >>>> Basically, figure out which input bits make a slope between the >>>> decision regions and you're most of the way there. >>>> >>>> These papers discuss various mapping tricks for high-order modulations >>>> in the context of BICM and BICM-ID (Iterative Decoding), but the ideas >>>> apply generally to any high-order mapping system. I've found the >>>> first reference very useful. >>>> >>>> Design, Analysis, and Performance Evaluation for BICM-ID with Square >>>> QAM Constellations in Rayleigh Fading Channels, Chindapol and >>>> Ritcey, IEEE Journal on Selected Areas in Communications, Vol 19 No 5, >>>> May, 2001 >>>> >>>> Optimized Symbol Mappings for Bit-Interleaved Coded Modulation with >>>> Iterative Decoding, Schrekenback, Gortz, Hagenauer, Bauch, >>>> Proceedings Globecomm 2003. >>>> >>>> The seminal paper on BICM. See Fig 9 for an example of >>>> set-partitioning an 8-PSK constellation: >>>> >>>> 8PSK Trellis Codes for a Rayleigh Channel, Ephraim Zehavi, IEEE >>>> Trans. Comm, Vol 40 No 5, May 1992 >>>> >>>> A useful follow-up is Caire's paper, which I think I cited earlier >>>> somewhere: >>>> >>>> Bit-Interleave Coded Modulation, Caire, Taricco, Biglieri, IEEE Trans. >>>> Info Theory, Vol 44 No 3, May 1998. 
>>>> >>>> If any of those references make your head hurt, cherry pick the >>>> sections that don't or just move on to the next. The complicated >>>> parts are really mostly of academic interest. The important parts are >>>> really pretty simple. If we could meet for beer (I wish!) we could >>>> hammer the basic ideas out easily before a pint was drained, but since >>>> that's not practical at the moment the citations above will have to >>>> do. >>>> >>>> High-order mapping and soft-decision generation are the sort of thing >>>> that once you get it, for many systems it can be easily shown >>>> completely in a diagram or two. I was looking for examples on the >>>> intartubes but didn't see anything suitable. I have some that are, >>>> sadly, proprietary else I'd share. If I can find something sharable >>>> I will. >>> >>>I have no problem with the soft decision making -- it's the question of >>>whether and how I can map my set of symbols to a set of multi-digit >>>binary numbers such that a short-distance ambiguity in the original >>>symbol results in fewer ambiguous (or wrong) bits than would happen if a >>>symbol were seriously off. >> >> I'm still guessing what your signal space looks like, but for things >> like PSK/QAM, some of the papers I cited talk about the set partitioning >> or mapping for good performance in the interleaved case. This can be >> done for arbitrary constellations, assuming you actually have a >> constellation of some kind. >> >>>If a symbol is completely trashed and I'm doing soft-decision making >>>then the answer's easy: I just have a block of erasures. >> >> This is one of the things that bit interleaving helps with; a symbol or >> a few adjacent symbols that are heavily damaged have a reduced impact on >> the ability to properly decode the message. > >I'm being jumped on to keep delay to a minimum, so a scheme that can >identify and cope with burst errors without adding delay is a plus. In >the end the customer may well accept a higher block error rate easier >than they'll accept the significantly higher delay that it'd take to >avoid it by interleaving.
Yeah, the interleaving certainly does affect delay, but so will any decent coding system. If you want gain, you need the code to traverse a lot of bits (bit diversity), so that's a natural tradeoff, anyway. A Viterbi decoder with no interleaving will likely be as good as you can do, and TCM will allow higher-order modulation without really adding significant delay. TCM does want a relatively tame channel, though, but as long as your channel isn't too crazy it would probably work just fine.
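For reference, a toy soft-decision Viterbi decoder for the same illustrative rate-1/2, K=3 (7, 5 octal) code used in the encoder sketch near the top of the thread is below. It keeps whole survivor paths instead of doing a windowed traceback, which is fine for a sketch but not how a real decoder would be built.

    G = (0b111, 0b101)                   # same illustrative polynomials as above
    K = 3
    NSTATES = 1 << (K - 1)

    def build_trellis():
        """For each (state, input bit): ((coded bit pair), next state)."""
        table = {}
        for s in range(NSTATES):
            for b in (0, 1):
                reg = ((s << 1) | b) & ((1 << K) - 1)
                bits = tuple(bin(reg & g).count("1") & 1 for g in G)
                table[(s, b)] = (bits, reg & (NSTATES - 1))
        return table

    TRELLIS = build_trellis()

    def viterbi_decode(soft, nbits):
        """soft: 2*(nbits + K - 1) antipodal values (+ for coded 0, - for coded 1)."""
        metric = [float("-inf")] * NSTATES
        metric[0] = 0.0
        paths = [[] for _ in range(NSTATES)]
        for t in range(nbits + K - 1):
            r0, r1 = soft[2 * t], soft[2 * t + 1]
            new_metric = [float("-inf")] * NSTATES
            new_paths = [None] * NSTATES
            for s in range(NSTATES):
                if metric[s] == float("-inf"):
                    continue
                for b in (0, 1):
                    (c0, c1), ns = TRELLIS[(s, b)]
                    # correlation branch metric: expect +1 for a coded 0, -1 for a 1
                    m = metric[s] + r0 * (1 - 2 * c0) + r1 * (1 - 2 * c1)
                    if m > new_metric[ns]:
                        new_metric[ns] = m
                        new_paths[ns] = paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[0][:nbits]          # zero-flushed code ends in state 0

    # e.g. with the encoder sketch above:  tx = [1 - 2*c for c in conv_encode(bits)]
    #      bits_hat = viterbi_decode(tx, nbits=len(bits))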
Reply by eric.jacobsen December 19, 2016
On Mon, 19 Dec 2016 00:12:59 -0600, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Sun, 18 Dec 2016 19:09:41 +0000, Steve Pope wrote:
>
>> Tim Wescott <tim@seemywebsite.com> wrote:
>>
>>>I have no problem with the soft decision making -- it's the question of
>>>whether and how I can map my set of symbols to a set of multi-digit
>>>binary numbers such that a short-distance ambiguity in the original
>>>symbol results in fewer ambiguous (or wrong) bits than would happen if a
>>>symbol were seriously off.
>>>
>>>If a symbol is completely trashed and I'm doing soft-decision making
>>>then the answer's easy: I just have a block of erasures.
>>
>> Why would a symbol be "completely trashed"? Is it a burst-error channel?
>>
>> On an AWGN channel, the I and Q noise components are independent,
>> therefore if the I component is "trashed" there is no reason to predict
>> the Q component is trashed.
>>
>> And also, with AWGN, large-amplitude noise is exponentially less likely
>> than small-amplitude noise. In short, with AWGN you do not worry about
>> the "completely trashed" case as it is so unlikely -- most raw errors
>> are because the noise is just enough to push you over to the next
>> constellation point.
>>
>> But, if it's not an AWGN channel, and you can characterize the channel,
>> then your problem has gotten easier, as AWGN is the worst case.
>>
>> If it's not AWGN, and you cannot characterize it, then you are probably
>> "Sync Outta Lock" as far as designing towards it.
>
>No Q. Sorry, this isn't Star Trek. Or radio. Or even telephony.
>
>AWGN, baseband, symbols can be anything you want as long as they're
>either a fixed magnitude or zero, with a minimum switching time and with
>a constant average DC value.
Sounds like PSK? Or is it real-valued only so OOK?
>I'm currently using a fixed interval (for sanity), with a 4b6b encoding
>from the input data stream to the "symbols", which chooses 16 of the 20
>available combinations of six bits that consist of three 1's and three 0's.
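For anyone who wants to play with the quoted scheme: enumerating the 20 DC-balanced six-bit words and keeping 16 of them is a few lines of code. Which 16 to keep, and how to assign them to the 4-bit inputs, is a design choice the thread doesn't spell out; the sketch below just drops the last four.

    from itertools import combinations

    balanced = []
    for ones in combinations(range(6), 3):      # choose which 3 of 6 positions are 1
        word = 0
        for pos in ones:
            word |= 1 << pos
        balanced.append(word)

    assert len(balanced) == 20                  # C(6,3) = 20 DC-balanced words
    alphabet = balanced[:16]                    # arbitrary choice of 16 of them
    encode_4b6b = {nibble: cw for nibble, cw in enumerate(alphabet)}
    decode_6b4b = {cw: nibble for nibble, cw in enumerate(alphabet)}

    print([format(cw, "06b") for cw in alphabet])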