Reply by Eric Jacobsen August 4, 2005
Tailbiting also assumes a fixed block length, but it does eliminate the
transmission overhead of the tail bits.   A tailbiting system
initializes the encoder state with the last k bits of the message, so
that the start and end states are the same.
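A minimal sketch of that initialization, using a hypothetical rate-1/2 code with generator polynomials (111, 101) in binary and k = 2 memory bits (these parameters are illustrative, not from the thread):

```python
def tailbiting_encode(msg, k=2, gens=(0b111, 0b101)):
    mask = (1 << k) - 1
    # Initialize the shift register with the LAST k message bits,
    # so the encoder's start state equals its end state.
    state = 0
    for b in msg[-k:]:
        state = ((state << 1) | b) & mask
    start_state = state
    out = []
    for b in msg:
        reg = (state << 1) | b              # k+1 bits seen by the taps
        for g in gens:
            out.append(bin(reg & g).count("1") & 1)  # parity of tapped bits
        state = reg & mask                  # keep the newest k bits
    assert state == start_state             # the tail-biting property
    return out
```

Since the final state is just the last k input bits, feeding those same bits in as the starting state guarantees the trellis path closes on itself, which is what the decoder exploits.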

There are some obvious disadvantages, like the encoder has to have the
last bits of the message available in order to start encoding, which
can aggravate latency issues.

So there is some evil to deal with either way.

Many continuous-stream systems do neither and just use a sliding
window decoder to process the stream without any assumptions about
start or end states.   It takes the decoder a little bit to achieve
lock, naturally, but after that there is no additional overhead or
assumptions to make about how things were done.
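The sliding-window idea can be sketched as follows: start every state with an equal (zero) metric, and once the survivor memory is full, trace back from the current best state and release only the oldest bit. This is a toy hard-decision decoder for an assumed K = 3, rate-1/2 code with generators (111, 101) in binary; the parameters are illustrative, not anything the poster specified.

```python
K = 3                       # constraint length (assumed)
NSTATES = 1 << (K - 1)      # 4 states
GENS = (0b111, 0b101)       # assumed generator polynomials

def branch_bits(state, b):
    """Output pair for input bit b leaving `state` (newest bit in LSB)."""
    reg = (state << 1) | b
    return [bin(reg & g).count("1") & 1 for g in GENS]

def next_state(state, b):
    return ((state << 1) | b) & (NSTATES - 1)

def viterbi_window(received, window=15):
    """Sliding-window Viterbi: no assumption about the start state."""
    metrics = [0] * NSTATES             # all start states equally likely
    history = []                        # survivor (prev_state, bit) per step
    decoded = []
    pairs = [received[i:i + 2] for i in range(0, len(received), 2)]
    for r in pairs:
        new_metrics = [float("inf")] * NSTATES
        surv = [None] * NSTATES
        for s in range(NSTATES):
            for b in (0, 1):
                ns = next_state(s, b)
                d = sum(x != y for x, y in zip(branch_bits(s, b), r))
                if metrics[s] + d < new_metrics[ns]:
                    new_metrics[ns] = metrics[s] + d
                    surv[ns] = (s, b)
        metrics = new_metrics
        history.append(surv)
        if len(history) >= window:
            # Trace back from the current best state...
            s = min(range(NSTATES), key=lambda i: metrics[i])
            for surv_t in reversed(history):
                s, b = surv_t[s]
            decoded.append(b)           # ...and release only the oldest bit
            history.pop(0)
    return decoded
```

The first `window` steps are the "achieving lock" phase Eric mentions: no bits come out until the survivors have had a chance to merge, but after that the decoder emits one bit per trellis step with no framing assumptions.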



On 3 Aug 2005 23:55:01 -0700, porterboy76@yahoo.com wrote:

>It is common to return to the zero state after every D bits at the
>transmitter for precisely this reason. At the receiver you then know
>exactly from which state you should begin the traceback after D blocks.
>This comes at the expense of rate reduction, since the zero-tail
>sequence contains no information. Admittedly the rate reduction is
>quite low, if D is large. However, efforts have been made to avoid this
>rate reduction, the most successful of which is "tail-biting". I'm
>still not sure exactly how this works, but I do know that it allows
>continuous data transmission, without needing to return to the zero
>state. Have a look on the web, and if you do find any good explanations
>let me know!
>
>Porterboy
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
Reply by Raymond Toy August 4, 2005
>>>>> "compsavy" == compsavy <compsavy@rediffmail.com> writes:
compsavy> I am using traceback approach for viterbi decoding.
compsavy> After acquisition of a block of D data, I choose the
compsavy> minimum state as best state to start traceback. I am
compsavy> implementing normalisation on path metrics to constrain
compsavy> the bit width. Now my question is while finding the
compsavy> minimum state as best state, is there any chance of
compsavy> occurring more than two or three states having minimum
compsavy> value. If it happens then is it because of incorrect

Yes, it happens.  You can't avoid it.  But how often it happens
depends on the noise and how many bits you use to represent
everything.  The more noise and the more bits you use, the lower the
chance of two states having exactly the same metric.  And even if two
states have the same metric, I don't see why it matters.  You just
pick one.  Based on the criteria (best metric), you can't tell which
is better.

compsavy> acquisition depth or improper choice of acquisition depth
compsavy> or improper normalisation.

Yes, I think that can cause that too.

Ray
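Raymond's "just pick one" advice amounts to a deterministic tie-break. A toy illustration with made-up metrics for an assumed 4-state decoder:

```python
metrics = [7, 3, 3, 9]     # hypothetical path metrics; states 1 and 2 tie

# All states tied for the minimum metric are equally supported by the
# received data, so any consistent choice works.
tied = [s for s, m in enumerate(metrics) if m == min(metrics)]
best = tied[0]             # deterministic: pick the lowest-numbered state
```

Here `tied` is `[1, 2]` and `best` is `1`; starting traceback from either tied state is equally justified by the best-metric criterion.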
Reply by August 4, 2005
It is common to return to the zero state after every D bits at the
transmitter for precisely this reason. At the receiver you then know
exactly from which state you should begin the traceback after D blocks.
This comes at the expense of rate reduction, since the zero-tail
sequence contains no information. Admittedly the rate reduction is
quite low, if D is large. However, efforts have been made to avoid this
rate reduction, the most successful of which is "tail-biting". I'm
still not sure exactly how this works, but I do know that it allows
continuous data transmission, without needing to return to the zero
state. Have a look on the web, and if you do find any good explanations
let me know!

Porterboy
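The zero-tail scheme described above can be sketched in a few lines, assuming an encoder with k = 2 memory bits (an illustrative value): appending k zeros flushes the shift register back to the all-zero state, and the rate loss is the fraction k/(D + k) of transmitted bits that carry no information.

```python
def terminate(msg, k=2):
    """Append k zero tail bits so the encoder returns to state 0."""
    return msg + [0] * k

def next_state(state, b, k=2):
    return ((state << 1) | b) & ((1 << k) - 1)

# Feeding k zeros drives ANY state back to zero:
state = 0b11                       # arbitrary starting state
for b in [0] * 2:
    state = next_state(state, b)   # state is now 0

# Rate loss for a block of D information bits:
D, k = 100, 2
rate_loss = 1 - D / (D + k)        # about 2% here, vanishing as D grows
```

This makes Porterboy's point concrete: the overhead is small for large D, but it is never zero, which is what motivates tail-biting.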

Reply by August 3, 2005
Hi All,
       I am using the traceback approach for Viterbi decoding.
After acquisition of a block of D data, I choose the minimum-metric
state as the best state to start traceback. I am implementing
normalisation on path metrics to constrain the bit width.
Now my question is: while finding the minimum state as the best state,
is there any chance of two or three states having the minimum value?
If it happens, is it because of an incorrect choice of acquisition
depth or improper normalisation?

Thanks and Regards.
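For reference, a common way to do the normalisation the poster mentions (not necessarily their exact scheme) is to subtract the minimum metric from every state each step. This keeps the metrics in a bounded range for a fixed bit width without changing the decoding decisions, since Viterbi comparisons depend only on metric differences:

```python
def normalize(metrics):
    """Subtract the minimum so the smallest metric is always 0."""
    m = min(metrics)
    return [x - m for x in metrics]

# Hypothetical 4-state metrics approaching the bit-width limit:
metrics = normalize([130, 127, 135, 129])   # -> [3, 0, 8, 2]
```

Because every metric shrinks by the same amount, the add-compare-select outcomes (and hence the traceback) are unaffected; only the representable range is reclaimed.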