# Shortening Typical Viterbi Decoding Depth

Started by Randy Yates ●April 27, 2012

Typically the decoding depth is set at 5 * K, where K is the constraint length. It is also known that the shorter the decoding depth, the higher the probability of bit errors Pe becomes.

Have there been any studies, or is there a rule of thumb, on how the Pe changes as D decreases from 5 * K?

--
Randy Yates
DSP/Firmware Engineer
919-577-9882 (H)
919-720-2916 (C)

Reply by Eric Jacobsen ●April 27, 2012

On Fri, 27 Apr 2012 12:02:32 -0400, Randy Yates <yates@digitalsignallabs.com> wrote:

> Typically the decoding depth is set at 5 * K, where K is the constraint
> length. It is also known that the shorter the decoding depth, the higher
> the probability of bit errors Pe becomes.
>
> Have there been any studies or is there a rule of thumb on how the Pe
> changes as D decreases from 5 * K?

If there are any studies I think they'd be pretty old. Elsewise it's not hard to simulate that sort of thing for a typical code.

There are some commercial implementations that have selectable or variable memory depth in order to adjust latency or complexity (for soft cores). You may be able to find some simple performance comparisons for those cases in the related product literature.

Eric Jacobsen
Anchor Hill Communications
www.anchorhill.com

Reply by Vladimir Vassilevsky ●April 27, 2012

"Randy Yates" <yates@digitalsignallabs.com> wrote in message news:871un92mef.fsf@randy.site...

> Typically the decoding depth is set at 5 * K, where K is the constraint
> length. It is also known that the shorter the decoding depth, the higher
> the probability of bit errors Pe becomes.

Actually, it depends on the input SNR, output error rate, code rate and particular code structure. The 5 x K is a ballpark; the optimum decoding depth is application dependent. The 5 x K is a reasonable estimate for R ~ 1/2; higher-rate codes require more decoding depth.

> Have there been any studies or is there a rule of thumb on how the Pe
> changes as D decreases from 5 * K?

It is simple enough to derive an estimate for any particular code.

VLV

Reply by Randy Yates ●April 27, 2012

"Vladimir Vassilevsky" <nospam@nowhere.com> writes:

> Actually, it depends on the input SNR, output error rate, code rate and
> particular code structure. The 5 x K is a ballpark; optimum decoding depth
> is application dependent. The 5 x K is reasonable estimate for R ~ 1/2;
> higher rate codes require more decoding depth.
>
> It is simple enough to derive an estimate for any particular code.

Thanks for the response, Vlad. I'd like to see how you would derive an estimate. This is a rate 1/2, K = 4 code with polynomials

G0 = 1 + D + D^2 + D^3
G1 = 1 + D^2 + D^3

--
Randy Yates
DSP/Firmware Engineer
919-577-9882 (H)
919-720-2916 (C)

Reply by r b-j ●April 27, 2012

On 4/27/12 3:09 PM, Vladimir Vassilevsky wrote:

> Actually, it depends on the input SNR, output error rate, code rate and
> particular code structure. The 5 x K is a ballpark; optimum decoding depth
> is application dependent.
>
> It is simple enough to derive an estimate for any particular code.

hey Vlad! good to see you here again. :-)

even though i might agree with some folks here about pooping into the punch bowl, i am still happy to see you back. (please don't eviscerate the noobs, biting might be okay, but don't leave them in a state suitable only for scavengers.)

L8r,

--
r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."

Reply by David Drumm ●April 28, 2012

[CONVOLUTIONAL CODES](http://www.ee.ucla.edu/~wesel/documents/Misc/eot309.pdf) (pdf) states:

*As mentioned in Section 3.4, the specific decision depth used in finite traceback Viterbi is usually determined by simulation. However, a good estimate of the decision depth helps designers know where to begin simulations. For noncatastrophic feedforward encoders, Anderson and Balachandran [16] compute a useful lower bound on the required decision depth as a byproduct of the free-distance computation.*

The Anderson and Balachandran paper recommends 4.54 times the constraint length for rate 1/2 codes and 2.87 times the constraint length for rate 1/3 codes. Anderson and Balachandran don't use Pe as a measure, though.

Reply by Eric Jacobsen ●April 28, 2012

On Sat, 28 Apr 2012 16:15:58 -0500, "David Drumm" <drumm@n_o_s_p_a_m.coherentlogix.com> wrote:

> The Anderson and Balachandran paper recommends 4.54 times the constraint
> length for rate 1/2 codes and 2.87 times the constraint length for rate 1/3
> codes. Anderson and Balachandran don't use Pe as a measure though.

I agree that usually this is determined via simulation, since it will depend partly on the implementation of the decoder (e.g., the number of soft bits, internal precision, etc.), and also on the polynomial and puncturing patterns for higher code rates. It's also a bit system dependent, to account for the tradeoff cost of the extra memory vs. latency, performance requirements, etc.

The usual rules of thumb, e.g., the 5K that Randy cited, are general and taken to be roughly the point where the performance returns start to diminish with increasing memory depth. How to evaluate that tradeoff is system dependent, but in general the 4K-7K region is typical for most practical codes.

As I mentioned before, sometimes a decoder core vendor may have selectable memory depth and publish performance curves to show the difference for that particular decoder with a particular polynomial and code rate.

Eric Jacobsen
Anchor Hill Communications
www.anchorhill.com

Reply by Jerry ●May 6, 2012

On 4/27/2012 3:09 PM, Vladimir Vassilevsky wrote:

> It is simple enough to derive an estimate for any particular code.

Simple for you, maybe. I'm glad to see you back!

Jerry
--
Engineering is the art of making what you want from things you can get.