> "Randy Yates"<yates@digitalsignallabs.com> wrote in message
> news:871un92mef.fsf@randy.site...
>
>> Typically the decoding depth is set at 5 * K, where K is the constraint
>> length. It is also known that the shorter the decoding depth, the higher
>> the probability of bit errors Pe becomes.
>
> Actually, it depends on the input SNR, output error rate, code rate and
> particular code structure. The 5 x K is a ballpark; optimum decoding depth
> is application dependent. The 5 x K is a reasonable estimate for R ~ 1/2;
> higher rate codes require more decoding depth.
>
>> Have there been any studies or is there a rule of thumb on how the Pe
>> changes as D decreases from 5 * K?
>
> It is simple enough to derive an estimate for any particular code.
Simple for you, maybe. I'm glad to see you back!
Jerry
--
Engineering is the art of making what you want from things you can get.
><a
>href="http://www.ee.ucla.edu/~wesel/documents/Misc/eot309.pdf">CONVOLUTIONAL
>CODES</a> (pdf) state:
>
><i>As mentioned in Section 3.4, the specific decision depth
>used in finite traceback Viterbi is usually determined
>by simulation. However, a good estimate of the decision
>depth helps designers know where to begin simulations.
>For noncatastrophic feedforward encoders, Anderson and
>Balachandran [16] compute a useful lower bound on the
>required decision depth as a byproduct of the free-distance
>computation.</i>
>
>The Anderson and Balachandran paper recommends 4.54 times the constraint
>length for rate 1/2 codes and 2.87 times the constraint length for rate 1/3
>codes. Anderson and Balachandran don't use Pe as a measure though.
>
I agree that usually this is determined via simulation, since it will
depend partly on the implementation of the decoder (e.g., the number
of soft bits, internal precision, etc.), and also on the polynomial
and puncturing patterns for higher code rates. It's also a bit system
dependent to account for the tradeoff cost of the extra memory vs
latency, performance requirements, etc., etc.
The usual rules of thumb, e.g., the 5K that Randy cited, are general
and taken to be roughly the point where the performance returns
start to diminish with increasing memory depth. How to evaluate that
tradeoff is system dependent, but in general the 4K-7K region is
typical for most practical codes.
As I mentioned before, sometimes a decoder core vendor may have
selectable memory depth and publish performance curves to show the
difference for that particular decoder with a particular polynomial
and code rate.
Eric Jacobsen
Anchor Hill Communications
www.anchorhill.com
Reply by David Drumm●April 28, 2012
<a
href="http://www.ee.ucla.edu/~wesel/documents/Misc/eot309.pdf">CONVOLUTIONAL
CODES</a> (pdf) state:
<i>As mentioned in Section 3.4, the specific decision depth
used in finite traceback Viterbi is usually determined
by simulation. However, a good estimate of the decision
depth helps designers know where to begin simulations.
For noncatastrophic feedforward encoders, Anderson and
Balachandran [16] compute a useful lower bound on the
required decision depth as a byproduct of the free-distance
computation.</i>
The Anderson and Balachandran paper recommends 4.54 times the constraint
length for rate 1/2 codes and 2.87 times the constraint length for rate 1/3
codes. Anderson and Balachandran don't use Pe as a measure though.
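The decision-depth idea David quotes is easy to prototype. Below is a sketch (my construction, not from the thread or the paper; function names and the bitmask convention are mine, and the stopping criterion is a simplified form of the Anderson and Balachandran condition, whose published bound is defined somewhat differently). It computes d_free by a Dijkstra search over the trellis, then finds the smallest depth at which every path that diverged from the all-zeros path and has not yet remerged carries weight at least d_free, using Randy's rate-1/2, K=4 code.

```python
import heapq

# Rate-1/2 feedforward code, K=4 (memory 3), from Randy's post:
#   G0 = 1 + D + D^2 + D^3  ->  mask 0b1111  (bit i = coefficient of D^i)
#   G1 = 1 + D^2 + D^3      ->  mask 0b1101
M = 3
NSTATES = 1 << M
G = (0b1111, 0b1101)

def step(state, bit):
    """One encoder step: returns (next_state, branch Hamming weight)."""
    reg = bit | (state << 1)               # reg bit i = input from i steps ago
    w = sum(bin(reg & g).count("1") & 1 for g in G)
    return reg & (NSTATES - 1), w

def free_distance():
    """Min weight over paths that diverge from and remerge with state 0."""
    s0, w0 = step(0, 1)                    # force the divergence (input 1)
    pq, best = [(w0, s0)], {s0: w0}
    while pq:
        w, s = heapq.heappop(pq)
        if s == 0:                         # remerged: this weight is d_free
            return w
        if w > best[s]:
            continue
        for b in (0, 1):
            ns, dw = step(s, b)
            if w + dw < best.get(ns, float("inf")):
                best[ns] = w + dw
                heapq.heappush(pq, (w + dw, ns))

def decision_depth(dfree, max_depth=200):
    """Smallest L such that every still-unmerged incorrect path of length L
    has accumulated weight >= dfree (simplified Anderson/Balachandran-style
    criterion; assumes a noncatastrophic feedforward encoder)."""
    s0, w0 = step(0, 1)
    frontier = {s0: w0}                    # min weight per unmerged state
    for L in range(1, max_depth + 1):
        if not frontier or min(frontier.values()) >= dfree:
            return L
        nxt = {}
        for s, w in frontier.items():
            for b in (0, 1):
                ns, dw = step(s, b)
                if ns != 0 and w + dw < nxt.get(ns, float("inf")):
                    nxt[ns] = w + dw
        frontier = nxt

d = free_distance()
print("d_free =", d, " decision depth >=", decision_depth(d))
```

For this code the search returns d_free = 6, the known value for the K=4 (17,15) octal pair. The simplified criterion here lands well below the 4.54 x K figure quoted above, which is one reason to treat such numbers as starting points for simulation rather than final answers.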
Reply by robert bristow-johnson●April 27, 2012
On 4/27/12 3:09 PM, Vladimir Vassilevsky wrote:
> "Randy Yates"<yates@digitalsignallabs.com> wrote in message
> news:871un92mef.fsf@randy.site...
>
>> Typically the decoding depth is set at 5 * K, where K is the constraint
>> length. It is also known that the shorter the decoding depth, the higher
>> the probability of bit errors Pe becomes.
>
> Actually, it depends on the input SNR, output error rate, code rate and
> particular code structure. The 5 x K is a ballpark; optimum decoding depth
> is application dependent. The 5 x K is a reasonable estimate for R ~ 1/2;
> higher rate codes require more decoding depth.
>
>> Have there been any studies or is there a rule of thumb on how the Pe
>> changes as D decreases from 5 * K?
>
> It is simple enough to derive an estimate for any particular code.
>
> VLV
>
>
hey Vlad! good to see you here again. :-)
even though i might agree with some folks here about pooping into the
punch bowl, i am still happy to see you back. (please don't eviscerate
the noobs, biting might be okay, but don't leave them in a state
suitable only for scavengers.)
L8r,
--
r b-j rbj@audioimagination.com
"Imagination is more important than knowledge."
> "Randy Yates" <yates@digitalsignallabs.com> wrote in message
> news:871un92mef.fsf@randy.site...
>
>> Typically the decoding depth is set at 5 * K, where K is the constraint
>> length. It is also known that the shorter the decoding depth, the higher
>> the probability of bit errors Pe becomes.
>
> Actually, it depends on the input SNR, output error rate, code rate and
> particular code structure. The 5 x K is a ballpark; optimum decoding depth
> is application dependent. The 5 x K is a reasonable estimate for R ~ 1/2;
> higher rate codes require more decoding depth.
>
>> Have there been any studies or is there a rule of thumb on how the Pe
>> changes as D decreases from 5 * K?
>
> It is simple enough to derive an estimate for any particular code.
Thanks for the response, Vlad. I'd like to see how you would derive
an estimate. This is a rate 1/2, K=4 code with polynomials
G0 = 1 + D + D^2 + D^3
G1 = 1 + D^2 + D^3
--Randy
--
Randy Yates
DSP/Firmware Engineer
919-577-9882 (H)
919-720-2916 (C)
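One brute-force way to get the Pe-versus-D curve Randy is asking about is to simulate it directly: run a single Viterbi forward pass over a block, then make finite-traceback decisions at several depths from the same survivor memory. A minimal hard-decision sketch over a binary symmetric channel follows (my construction, not Vlad's derivation; the function names, the BSC model, and the choice of depths are assumptions), using Randy's polynomials.

```python
import random

# Hard-decision Viterbi for Randy's rate-1/2, K=4 code:
#   G0 = 1 + D + D^2 + D^3 -> mask 0b1111  (bit i = coefficient of D^i)
#   G1 = 1 + D^2 + D^3     -> mask 0b1101
M, NSTATES = 3, 8
G = (0b1111, 0b1101)
INF = float("inf")

def step(state, bit):
    """One encoder step: returns (next_state, (out0, out1))."""
    reg = bit | (state << 1)              # reg bit i = input from i steps ago
    outs = tuple(bin(reg & g).count("1") & 1 for g in G)
    return reg & (NSTATES - 1), outs

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, o = step(state, b)
        out.extend(o)
    return out

def viterbi_forward(rx):
    """Forward pass on hard bits rx (2 per symbol).  bp[t][s] = (prev_state,
    input_bit) on the survivor into state s at time t+1; best[t] is the
    minimum-metric state at time t."""
    T = len(rx) // 2
    metric = [0] + [INF] * (NSTATES - 1)  # encoder starts in state 0
    bp, best = [], [0]
    for t in range(T):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        nm, nb = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns, (o0, o1) = step(s, b)
                m = metric[s] + (o0 != r0) + (o1 != r1)
                if m < nm[ns]:
                    nm[ns], nb[ns] = m, (s, b)
        metric = nm
        bp.append(nb)
        best.append(min(range(NSTATES), key=lambda s: metric[s]))
    return bp, best

def traceback_decisions(bp, best, depth):
    """Decide bit t by tracing back from the best state at time min(t+depth, T)."""
    T, out = len(bp), []
    for t in range(T):
        e = min(t + depth, T)
        s = best[e]
        for k in range(e - 1, t, -1):
            s = bp[k][s][0]
        out.append(bp[t][s][1])
    return out

def ber_vs_depth(nbits=4000, p=0.05, depths=(2, 4, 8, 20), seed=1):
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(nbits)]
    rx = [c ^ (rng.random() < p) for c in encode(bits)]   # BSC(p)
    bp, best = viterbi_forward(rx)        # one forward pass serves every depth
    return {d: sum(x != y for x, y in zip(traceback_decisions(bp, best, d), bits))
               / nbits for d in depths}

print(ber_vs_depth())
```

Sweeping the depth this way shows the BER flattening out somewhere around the 4K-7K region Eric describes; the exact knee moves with the crossover probability, which is why the rule of thumb is only a starting point.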
Reply by Vladimir Vassilevsky●April 27, 2012
"Randy Yates" <yates@digitalsignallabs.com> wrote in message
news:871un92mef.fsf@randy.site...
> Typically the decoding depth is set at 5 * K, where K is the constraint
> length. It is also known that the shorter the decoding depth, the higher
> the probability of bit errors Pe becomes.
Actually, it depends on the input SNR, output error rate, code rate and
particular code structure. The 5 x K is a ballpark; optimum decoding depth
is application dependent. The 5 x K is a reasonable estimate for R ~ 1/2;
higher rate codes require more decoding depth.
> Have there been any studies or is there a rule of thumb on how the Pe
> changes as D decreases from 5 * K?
It is simple enough to derive an estimate for any particular code.
VLV
Reply by Eric Jacobsen●April 27, 2012
On Fri, 27 Apr 2012 12:02:32 -0400, Randy Yates
<yates@digitalsignallabs.com> wrote:
>Typically the decoding depth is set at 5 * K, where K is the constraint
>length. It is also known that the shorter the decoding depth, the higher
>the probability of bit errors Pe becomes.
>
>Have there been any studies or is there a rule of thumb on how the Pe
>changes as D decreases from 5 * K?
>--
>Randy Yates
>DSP/Firmware Engineer
>919-577-9882 (H)
>919-720-2916 (C)
If there are any studies I think they'd be pretty old. Elsewise it's
not hard to simulate that sort of thing for a typical code.
There are some commercial implementations that have selectable or
variable memory depth in order to adjust latency or complexity (for
soft cores). You may be able to find some simple performance
comparisons for those cases in related product literature.
Eric Jacobsen
Anchor Hill Communications
www.anchorhill.com
Reply by Randy Yates●April 27, 2012
Typically the decoding depth is set at 5 * K, where K is the constraint
length. It is also known that the shorter the decoding depth, the higher
the probability of bit errors Pe becomes.
Have there been any studies or is there a rule of thumb on how the Pe
changes as D decreases from 5 * K?
--
Randy Yates
DSP/Firmware Engineer
919-577-9882 (H)
919-720-2916 (C)