(This came to mind during a reply to Eric Jacobsen's comment in my thread about block codes.)

Anyone know of a treatment of Shannon's channel capacity theorem that adds a finite limit to the delay?

I.e., instead of saying that I have a linear channel with additive noise, defined power and bandwidth, say that I have all that PLUS a constraint on the total delay in the coding and decoding processes. Can I then define the best-case properties of any coding scheme?

That seems clear to me, but let me restate:

Say that I generate a bit at time t_0. That bit gets gathered up with other bits, coded, then decoded, and then is finally available to the receiver at time t_1. If I define a maximum delay T_d, and insist that t_1 - t_0 < T_d, what does the channel capacity theorem look like now?

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
Delay-limited channel capacity
Started by ●June 11, 2016
Reply by ●June 11, 2016
On Fri, 10 Jun 2016 22:28:19 -0500, Tim Wescott <tim@seemywebsite.com> wrote:

>Say that I generate a bit at time t_0. That bit gets gathered up with
>other bits, coded, then decoded, and then is finally available to the
>receiver at time t_1. If I define a maximum delay T_d, and insist that
>t_1 - t_0 < T_d, what does the channel capacity theorem look like now?

I think you're just describing latency, i.e., you need minimum latency.

There are several abstractions that have to be kept in mind.

You've almost eliminated block codes from consideration, since for block codes the bit generated at t_0 may not even be able to be encoded until its N-1 brethren (the rest of the block) are available to the encoder.

Likewise in the decoder: even if the bit is received at t_1, the decoder may not be able to start working on that block (depending on the code structure) until the other N-1 bits in the block have been received.

For minimizing latency this suggests the smallest block possible, or, even better, don't use a block code at all; use something like an ordinary convolutional code, which provides much lower latency at the potential expense of some coding gain.

I think the main thing relevant in the computation of capacity is the length of the block, and capacity increases with block length (due to the increase in bit diversity or mutual information). But as the block length increases, so does the latency, so there is a somewhat natural tradeoff there for capacity-approaching codes.

You'll also need to quantify how important the delay is compared to coding gain, which is system dependent.
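The block-length/latency tradeoff described above can be made quantitative. The finite-blocklength "normal approximation" (due to Polyanskiy, Poor, and Verdú) estimates the best rate any code of blocklength n can achieve at block error rate eps on an AWGN channel. The sketch below is not from the thread; it is a minimal Python rendering of that standard approximation:

```python
import math

def q_inv(eps):
    """Inverse of the Gaussian Q-function via bisection (valid for 0 < eps < 0.5)."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def awgn_normal_approx(snr, n, eps=1e-3):
    """Approximate best achievable rate (bits/channel use) on an AWGN channel
    at blocklength n and block error rate eps:
        R ~ C - sqrt(V/n) * Qinv(eps) + log2(n) / (2n)
    where C is the Shannon capacity and V the channel dispersion."""
    c = 0.5 * math.log2(1.0 + snr)
    v = (snr * (snr + 2.0)) / (2.0 * (snr + 1.0) ** 2) * math.log2(math.e) ** 2
    return c - math.sqrt(v / n) * q_inv(eps) + math.log2(n) / (2.0 * n)
```

For a rough sense of scale: at 0 dB SNR (capacity 0.5 bit/use) this gives roughly 0.26 bit/use at n = 100 and about 0.42 at n = 1000, so a tenfold increase in block length (and hence latency) recovers most of the gap to capacity.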
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 04:01:46 +0000, Eric Jacobsen wrote:

> You'll also need to quantify how important the delay is compared to
> coding gain, which is system dependent.

Block codes may still be in the running, if the blocks are short enough, because to some extent things could be arranged so that the Really Important Bits could fall on the last bits of a block. It'd make certain people at my customer site cranky, but it could be done.

I want lots of coding gain AND low delay, of course.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
I'm looking for work -- see my website!
Reply by ●June 11, 2016
On 11.06.2016 8:29, Tim Wescott wrote:

> Block codes may still be in the running, if the blocks are short enough,
> because to some extent things could be arranged so that the Really
> Important Bits could fall on the last bits of a block.
>
> I want lots of coding gain AND low delay, of course.

That depends on just how short "short" is.

There are works on non-binary turbo and LDPC codes which are specifically suited for short block lengths.

Also, with purely convolutional codes, you can use long constraint lengths, for which you'd need sequential decoders (because the complexity of a Viterbi decoder rises exponentially with the constraint length).

Gene
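The complexity point above can be made concrete: a Viterbi decoder for a binary code with constraint length K must track 2^(K-1) trellis states, so each added unit of constraint length doubles the per-bit decoding work. A two-line illustration:

```python
# Viterbi decoding work grows as 2**(K-1) states per decoded bit,
# which is why long constraint lengths push you toward sequential decoding.
for K in (3, 5, 7, 9, 11, 15):
    print(f"K = {K:2d}: {2 ** (K - 1):5d} trellis states")
```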
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 11:05:46 +0300, Evgeny Filatov <filatov.ev@mipt.ru> wrote:

>There are works on non-binary turbo and LDPC codes which are
>specifically suited for short block lengths.

Yes, there are definitely tricks that can be done for short blocks, but as you say it still depends on what "short" means.

>Also, with purely convolutional codes, you can use long constraint
>lengths, for which you'd need sequential decoders (because the complexity
>of a Viterbi decoder rises exponentially with the constraint length).
>
>Gene

It's hard to beat a convolutional code for low latency and decent coding gain, and for long constraint lengths a sequential decoder is much simpler than a Viterbi decoder. The performance increase with increasing constraint length diminishes quickly, though, else it'd be a very attractive approach for many applications.

In some common applications that support varying payload sizes, the system reverts to convolutional coding below some threshold message length, since the more advanced code may not offer any advantage there. 802.11n and later work this way.
Reply by ●June 11, 2016
On 11.06.2016 17:57, Eric Jacobsen wrote:

> In some common applications that support varying payload sizes, the
> system reverts to convolutional coding below some threshold message
> length, since the more advanced code may not offer any advantage there.
> 802.11n and later work this way.

Thanks, that's interesting.

Gene
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 00:29:23 -0500, Tim Wescott <seemywebsite@myfooter.really> wrote:

>Block codes may still be in the running, if the blocks are short enough,
>because to some extent things could be arranged so that the Really
>Important Bits could fall on the last bits of a block.
>
>I want lots of coding gain AND low delay, of course.

Can we ask what coding is used in the system now, if any?
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 15:25:53 +0000, Eric Jacobsen wrote:

> Can we ask what coding is used in the system now, if any?

Not an intentional one. It's mud-pulse communication for oil wells, and it's primarily designed to minimize battery consumption on an obsolete design of mud pulser which only consumed battery power during a '1'. Secondarily, it's an extension of an even older protocol that was close to being human readable.

It's basically an M-ary pulse-position coding, with guard intervals between words and a single parity bit. M can equal either 4 or 8 (to minimize delay, word lengths are variable). The even older protocol used M = 10, and you could count out numbers while watching the pressure gauge.

There is redundancy there (a pulse can't exist at positions 0 and 3 simultaneously, for instance, and a pulse is always a specific width, etc.), but it's not deliberate redundancy for coding gain.
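For illustration only, here is a toy sketch of the kind of M-ary pulse-position framing described above. The slot layout, guard length, and parity rule are all invented for the example; the actual protocol's details are not given in the thread.

```python
def ppm_encode(symbols, M=8, guard=2):
    """Toy M-ary pulse-position encoder: each symbol 0..M-1 becomes a frame
    of M slots containing a single pulse, followed by `guard` empty slots.
    One even-parity pulse slot is appended at the end.
    (Illustrative only; framing, guard length, and the parity rule here
    are hypothetical, not the real mud-pulse protocol.)"""
    chips = []
    parity = 0
    for s in symbols:
        frame = [0] * M
        frame[s] = 1               # pulse position encodes the symbol
        parity ^= s & 1            # hypothetical parity over symbol LSBs
        chips.extend(frame)
        chips.extend([0] * guard)  # guard interval between words
    chips.append(parity)
    return chips
```

Note how the scheme spends at most one pulse (one battery-draining '1') per word, which matches the design pressure described above.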
--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 11:44:51 -0500, Tim Wescott <tim@seemywebsite.com> wrote:

>It's basically an M-ary pulse-position coding, with guard intervals
>between words and a single parity bit. M can equal either 4 or 8 (to
>minimize delay, word lengths are variable).
>
>There is redundancy there (a pulse can't exist at positions 0 and 3
>simultaneously, for instance, and a pulse is always a specific width,
>etc.), but it's not deliberate redundancy for coding gain.

It seems like everybody winds up working on down-hole stuff at one time or other.

Since the current system is essentially uncoded, adding a basic convolutional code with a Viterbi decoder should provide a significant improvement in link reliability. Depending on available processing power and power consumption limits, there are common, easy-to-implement codes with constraint lengths of K = 4, 7, and 9, with each step up increasing decoding complexity and coding gain simultaneously. The beauty of convolutional codes in this case is that the latency can be very, very low. The gain compared to the current system would be expected to be significant.

While convolutional codes work well with hard decisions, they work much better with soft decisions (as do most codes). I'd think it wouldn't be too difficult to come up with a soft-decision mapper for that.
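As a concrete (hypothetical) starting point for the suggestion above, here is a sketch of a rate-1/2, K = 7 encoder using the widely used (133, 171) octal generator pair; whether this particular code fits the mud-pulse link's rate and power budget is a separate system question.

```python
G1, G2 = 0o133, 0o171   # common rate-1/2, K = 7 generator polynomials

def conv_encode(bits):
    """Rate-1/2 convolutional encoder: two output bits per input bit,
    each the parity of the 7-bit shift register masked by a generator."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0x7F   # 7-bit shift register
        out.append(bin(state & G1).count("1") & 1)
        out.append(bin(state & G2).count("1") & 1)
    return out
```

For soft decisions, only the decoder's branch metric changes (e.g. from Hamming distance to quantized log-likelihoods); the encoder stays the same.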
Reply by ●June 11, 2016
On Sat, 11 Jun 2016 19:57:04 +0000, Eric Jacobsen wrote:

> While convolutional codes work well with hard decisions, they work much
> better with soft decisions (as do most codes). I'd think it wouldn't
> be too difficult to come up with a soft-decision mapper for that.

When you have all the time in the world, soft decisions are easy.

I've done Viterbi decoders before; I can certainly do one again. It'd be easier than what's in there now, for sure.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
I'm looking for work -- see my website!
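A minimal hard-decision Viterbi decoder for the rate-1/2, K = 7 (133, 171) code discussed in this thread might look like the sketch below. The matching encoder is restated so the round-trip check is self-contained; a production decoder would add a traceback-depth limit and soft branch metrics.

```python
G1, G2 = 0o133, 0o171   # rate-1/2, K = 7 generators, as in the thread

def conv_encode(bits):
    """Matching rate-1/2, K = 7 encoder (restated for the round-trip check)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0x7F
        out += [bin(state & G1).count("1") & 1,
                bin(state & G2).count("1") & 1]
    return out

def viterbi_decode(rx):
    """Hard-decision Viterbi decoder: minimize Hamming distance over the
    64-state trellis, assuming the encoder started in the all-zero state."""
    INF = float("inf")
    metric = [0.0] + [INF] * 63          # path metric per 6-bit state
    paths = [[] for _ in range(64)]      # surviving input sequence per state
    for i in range(0, len(rx) - 1, 2):
        new_metric = [INF] * 64
        new_paths = [None] * 64
        for s in range(64):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = ((s << 1) | b) & 0x7F          # full 7-bit register
                o1 = bin(reg & G1).count("1") & 1
                o2 = bin(reg & G2).count("1") & 1
                m = metric[s] + (o1 != rx[i]) + (o2 != rx[i + 1])
                ns = reg & 0x3F                      # next 6-bit state
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(64), key=metric.__getitem__)
    return paths[best]
```

With the code's free distance of 10, a single channel bit error inside a terminated block is corrected, which is the sort of gain an uncoded link gives up.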