Reply by John May 7, 2009
On May 7, 9:23 pm, "Melinda" <melinda.m...@gmail.com> wrote:
> >"Decoding delay" would usually be a term related to latency, which is > >related to the traceback depth of the Viterbi decoder. Your solution > >actually has more latency; if a block is 128 bits long, you don't get > >any bits out until you have all 128. If you're doing partial > >traceback, over maybe 16 bits, then you start getting bits out of your > >decoder after only 16 information bit times. > >Jasno > > OK, I know that, thanks, but like I said in case of partial traceback, we > need larger overall!!!(this is a keyword) decoding delay i.e. time, to get > all!!!(this is also keyword) info bits out of decoder, because for every(!) > symbol period we need to go backwards through trellis K x Y times, as > opposite to full traceback when we just once go backwards through trellis. > > Am I right? > > So just hypothetically speaking if we need 128 seconds to calculate all > and get 128 bits out of decoder for fulltrace back, does we then for case > of partial traceback need (lets traceback depth be 15) 128*15 seconds to > get all!!! bits? > > Melinda > > Thanks and best regards
The "partial traceback" should not take 15 seconds, because those symbols have already arrived. That's why it is called traceBACK. John
Reply by Melinda May 7, 2009
>"Decoding delay" would usually be a term related to latency, which is >related to the traceback depth of the Viterbi decoder. Your solution >actually has more latency; if a block is 128 bits long, you don't get >any bits out until you have all 128. If you're doing partial >traceback, over maybe 16 bits, then you start getting bits out of your >decoder after only 16 information bit times.
>Jasno
OK, I know that, thanks, but like I said, in the case of partial
traceback we need a larger *overall* (this is a key word) decoding
delay, i.e. time, to get *all* (this is also a key word) info bits out
of the decoder, because for every symbol period we need to go backwards
through the trellis K x Y steps, as opposed to full traceback, where we
go backwards through the trellis just once.

Am I right?

So, just hypothetically speaking: if we need 128 seconds to calculate
everything and get 128 bits out of the decoder with full traceback, do
we then, for partial traceback (let the traceback depth be 15), need
128*15 seconds to get *all* the bits?

Melinda

Thanks and best regards
Reply by Clay May 7, 2009
On May 7, 12:35 pm, cincy...@gmail.com wrote:
> On May 7, 9:54 am, "Melinda" <melinda.m...@gmail.com> wrote:
> > Hi,
> >
> > Can someone explain the fact that, for longer messages, a traceback
> > depth of K x Y [...]
>
> "Decoding delay" would usually be a term related to latency, which is
> related to the traceback depth of the Viterbi decoder. Your solution
> actually has more latency; if a block is 128 bits long, you don't get
> any bits out until you have all 128. If you're doing partial
> traceback, over maybe 16 bits, then you start getting bits out of your
> decoder after only 16 information bit times.
>
> The diminishing returns in performance are due to a phenomenon called
> "path merging." [...]
>
> Lastly, although memory becomes more plentiful all the time, making a
> ridiculously long traceback buffer is not necessarily an option. [...]
>
> Jasno
A lot of packets in cellular and satcom are pretty short, so the single
traceback isn't that bad. I recall iDEN having only 60 symbols per
frame, and the traceback was pretty short. Now, on the other hand, if
your stream is thousands of symbols long, that's a horse of a different
color.

Clay
Reply by Eric Jacobsen May 7, 2009
On Thu, 7 May 2009 09:35:22 -0700 (PDT), cincydsp@gmail.com wrote:

>On May 7, 9:54 am, "Melinda" <melinda.m...@gmail.com> wrote:
>> Hi,
>>
>> Can someone explain the fact that, for longer messages, a traceback
>> depth of K x Y [...]
>
>"Decoding delay" would usually be a term related to latency [...]
>
>The diminishing returns in performance are due to a phenomenon called
>"path merging." The idea is, once you go far enough back into the
>trellis (hopefully less than your traceback depth!), all of the
>surviving paths will merge to a common ancestor with high probability.
>Then there's no benefit in tracing back any farther.
I think that's why sequential decoders can get a bit better performance
than a Viterbi: they don't exhibit this phenomenon, at the cost of
potentially long tracebacks and huge constraint lengths.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
Blog: http://www.dsprelated.com/blogs-1/hf/Eric_Jacobsen.php
Reply by Jasno May 7, 2009
On May 7, 9:54 am, "Melinda" <melinda.m...@gmail.com> wrote:
> Hi,
>
> Can someone explain the fact that, for longer messages, a traceback
> depth of K x Y (where K is the constraint length of the convolutional
> encoder) [...]
>
> So in short:
> 1) with a traceback depth of K x Y, if Y increases (but K x Y is not
> equal to full traceback!), then the overall(!) decoding delay and the
> decoder memory requirements increase;
> 2) full traceback: the decoder memory requirements are largest, but
> the overall decoding delay is smallest.
>
> [...]
>
> Am I right with my claims 1) and 2)?
> [...]
>
> Thanks and best regards
"Decoding delay" would usually be a term related to latency, which is related to the traceback depth of the Viterbi decoder. Your solution actually has more latency; if a block is 128 bits long, you don't get any bits out until you have all 128. If you're doing partial traceback, over maybe 16 bits, then you start getting bits out of your decoder after only 16 information bit times. The diminishing returns in performance is due to a phenomenon called "path merging." The idea is, once you go so far back into the trellis (hopefully less than your traceback depth!), all of the surviving paths will merge to a common ancestor with a high probability. Then, there's no benefit in tracing back any farther. Lastly, although memory becomes more plentiful all the time, making a ridiculously long traceback buffer is not necessarily an option. If performance is a concern, then you want to ensure that you use fast memory for as much of your data as possible. On most DSPs, there isn't all that much of the fastest memory available. You'll get better bang for your buck by using a reasonable traceback buffer and making that memory available for other uses in your receiver. Jasno
Reply by Vladimir Vassilevsky May 7, 2009

Melinda wrote:

> Hi,
>
> Can someone explain the fact
[...]

1. Get a book such as Larsen or Johannesson & Zigangirov.
2. Implement the Viterbi decoder yourself.

You won't have any further questions.
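In that spirit, here is a minimal hard-decision sketch for the classic
rate-1/2, K = 3 code with generators (7, 5) octal. The conventions
(state = last K-1 input bits with the newest bit in the MSB, encoder
starting in state 0, full traceback at the end, no tail bits) are
choices made for brevity here, not the only way to do it.

# Minimal hard-decision Viterbi encoder/decoder for the rate-1/2,
# K = 3 code with generators (7, 5) octal. A sketch for study, not
# production code.
from math import inf

K = 3
G = (0b111, 0b101)                 # generator polynomials (7, 5 octal)
NSTATES = 1 << (K - 1)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state       # newest bit enters at the top
        out += [parity(reg & g) for g in G]
        state = reg >> 1                   # next state: drop oldest bit
    return out

def viterbi_decode(rx, nbits):
    pm = [0] + [inf] * (NSTATES - 1)       # path metrics; start in state 0
    surv = []                              # surv[t][next] = (prev, input)
    for t in range(nbits):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new_pm = [inf] * NSTATES
        dec = [None] * NSTATES
        for s in range(NSTATES):           # add-compare-select
            if pm[s] == inf:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                nxt = reg >> 1
                m = pm[s] + (parity(reg & G[0]) != r0) \
                          + (parity(reg & G[1]) != r1)
                if m < new_pm[nxt]:
                    new_pm[nxt], dec[nxt] = m, (s, b)
        pm = new_pm
        surv.append(dec)
    state, bits = pm.index(min(pm)), []    # full traceback from best state
    for t in range(nbits - 1, -1, -1):
        state, b = surv[t][state]
        bits.append(b)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert viterbi_decode(encode(msg), len(msg)) == msg

A real implementation would flush the encoder with K - 1 tail zeros so
the traceback can start from a known end state; that is omitted here to
keep the sketch short.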
> So in short:
> 1) with a traceback depth of K x Y, if Y increases (but K x Y is not
> equal to full traceback!), then the overall(!) decoding delay and the
> decoder memory requirements increase
You are searching through all codewords within a radius of K x Y. There
is little point in searching beyond some maximum radius, since it
becomes less and less likely that you will find the right solution
there.
> 2) full traceback: the decoder memory requirements are largest, but
> the overall decoding delay is smallest.
The decoding delay is largest, since you have got to have the full
packet before you can do the full traceback.
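A back-of-envelope comparison using the 128-bit example from this
thread (the numbers are illustrative, and Tb is an assumed bit period)
shows what each scheme means for the first decoded bit:

# Latency to the FIRST decoded bit (illustrative numbers only).
N, D, Tb = 128, 15, 1.0      # block length, traceback depth, bit time

full_traceback_latency = N * Tb   # must receive the whole block first
partial_latency = D * Tb          # first bit commits after D symbols

print(full_traceback_latency)     # 128.0 bit times
print(partial_latency)            # 15.0 bit times

The total time to get all N bits is on the order of N bit times either
way; what full traceback maximizes is the wait for the first bit and
the memory held in the meantime, not the total decode time.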
> Am I right with my claims 1) and 2)?
I didn't get it. What are you claiming?

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by Melinda May 7, 2009
Hi,

Can someone explain the fact that, for longer messages, a traceback
depth of K x Y (where K is the constraint length of the convolutional
encoder) behaves as follows: if Y increases, then the decoding delay and
the decoder memory requirements increase, while the performance of the
decoder does not improve significantly. But if we use a full-traceback
(i.e. best-performance) Viterbi decoder, we need to build a large
trellis, i.e. the decoder memory requirements increase, but once we have
built the trellis we just need to go backwards through it once; whereas
in the K x Y case we need less memory but (in my opinion) also a larger
overall decoding delay, because for every symbol period we need to go
backwards through the trellis K x Y steps.

So in short:
1) with a traceback depth of K x Y, if Y increases (but K x Y is not
equal to full traceback!), then the overall(!) decoding delay and the
decoder memory requirements increase;
2) full traceback: the decoder memory requirements are largest, but the
overall decoding delay is smallest.

*And I suppose that most applications can't wait for the decoder to form
the whole trellis and only then output the whole block of info bits.

Am I right with my claims 1) and 2)?
If so, how do you explain my claim 2), "overall decoding delay is
smallest"? If this is true, why don't we then always use full traceback?
(Suppose we have enough memory!)

And if I'm right with 1) and 2), am I also right when I say that we
can't parallelize the traceback operations in hardware, because each
operation needs the preceding operation to be done?

Thanks and best regards