On 11/29/2011 01:21 PM, dvsarwate wrote:
> On Nov 29, 8:29 am, Randy Yates <ya...@digitalsignallabs.com> wrote:
>
>> Branch Metrics:
>>
>> If the inputs are hard bits, then the 2^N branch metrics for a rate 1/N
>> code are the 2^N Hamming distances between the N demodulated bits
>> and the 2^N possible values for those N bits.
>
> Randy:
>
> I thought at each stage, the trellis for a rate-(1/N) code has
> 2^{m} nodes where the code has constraint length m+1,
> with each node having 2 outgoing branches connecting
> to the next stage, and having two incoming branches from
> nodes at the previous stage. It is essentially a rolling out
> of what used to be studied once upon a time as the state
> transition diagram of a sequential machine. The nodes
> at each stage are labeled with the 2^{m} possible contents
> of the transmitter shift register. The two branches leaving
> a node are labeled as 0/X and 1/Y where X and Y are
> N-bit vectors -- the transmitted bits corresponding to
> input data bits 0 and 1 respectively when the transmitter
> shift register contents are the node label. The branch
> metrics for the branches are the Hamming distances
> between X and Z, and between Y and Z, where Z is the
> N-bit received vector corresponding to that particular
> *data* bit. The nice thing is that the Hamming distance
> is additive (just as the log likelihood is for soft
> decision decoding; the branch metrics are taken as the
> log likelihood of the soft-decision vector Z for parameters
> X and Y respectively.)
>
> In particular, there are only 2^{m+1} branches from the
> n-th stage to the (n+1)-th stage, and so, for, say, a rate 1/5
> code of constraint length 3, there are not 32 branches
> to be labeled with the 32 possible 5-bit binary vectors.

Dilip,

Thank you for your gentle guidance. Your corrections on the number of states and the trellis architecture are of course correct. I'm not sure why I made such a gross error in this post.
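For my own benefit, here is a quick Python sketch of the bookkeeping you describe, for a toy rate-1/2, constraint-length-3 (m = 2) code. The (7, 5) octal generators and the received pair z are just values I picked for illustration:

```python
# Toy trellis bookkeeping for a rate-1/2, K = 3 (m = 2) code.
# Generators and received vector are illustrative choices of mine.

G = (0b111, 0b101)  # generator taps, octal (7, 5)
m = 2               # memory; 2^m = 4 trellis states

def branch(state, bit):
    """Output label and next state for one trellis branch."""
    reg = (bit << m) | state          # shift data bit into register
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1              # next state drops oldest bit

def hamming(x, z):
    return sum(a != b for a, b in zip(x, z))

z = (1, 0)  # received hard-decision pair for this data-bit interval

edges = []
for state in range(2 ** m):
    for bit in (0, 1):
        out, nxt = branch(state, bit)
        edges.append((state, bit, out, nxt, hamming(out, z)))
        print(f"state {state:02b} --{bit}/{out}--> {nxt:02b}, "
              f"metric = {hamming(out, z)}")
```

Sure enough, only 2^{m+1} = 8 branches come out per stage, each labeled with its Hamming-distance metric, just as you say.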
It's probably best to just admit I made a mistake and move on.

However, I am still unclear regarding your explanation of the log likelihood:

> The nice thing is that the Hamming distance
> is additive (just as the log likelihood is for soft
> decision decoding; the branch metrics are taken as the
> log likelihood of the soft-decision vector Z for parameters
> X and Y respectively.)

I certainly see that log(v * w) = log(v) + log(w), and that if we actually had metrics that were probabilities, working with their logs would allow us to determine the optimal path through the trellis via addition.

What I don't understand is why the branch metrics are NECESSARILY log likelihoods. To be fair, you said they are "taken" as the log likelihood. Perhaps this is the very crux of the matter (for me). In our case, the inputs are simply the values out of the (FSK) demodulator - there is no log function applied to them - and the metrics are simple Euclidean distances. So they aren't log likelihoods. Has it been shown somewhere that such metrics are equivalent to log likelihoods for the purposes of establishing the most likely path through the trellis?

Further, our system engineers are calling the _inputs_ to the Viterbi decoder "LLRs" - NOT the branch metrics. And LLRs are "log likelihood RATIOS", not simply log likelihoods. So none of this makes sense to me.

I would appreciate any further insight you might have into this, and hope you are willing to share it.

--Randy
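P.S. While writing this up I convinced myself of at least the AWGN case numerically: since log p(z|x) = -||z - x||^2 / (2 sigma^2) plus terms common to every branch, ranking candidate branch labels by log likelihood and ranking them by Euclidean distance pick the same winner. A little Python check (the labels, noise sigma, and received pair are toy values of mine):

```python
# Sanity check: for an AWGN channel, maximizing the Gaussian log
# likelihood over candidate branch labels is the same as minimizing
# squared Euclidean distance, because the extra terms are common to
# every label. All numbers below are toy values.

import math

sigma = 0.8                                    # assumed noise std dev
labels = [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # candidate branch labels
z = (0.7, -0.2)                                # soft demodulator outputs

def loglik(x):
    # log of N(z; x, sigma^2 I), normalization constants included
    return sum(-((zi - xi) ** 2) / (2 * sigma ** 2)
               - 0.5 * math.log(2 * math.pi * sigma ** 2)
               for zi, xi in zip(z, x))

def euclid2(x):
    return sum((zi - xi) ** 2 for zi, xi in zip(z, x))

best_ll = max(labels, key=loglik)
best_ed = min(labels, key=euclid2)
print(best_ll, best_ed)  # same label either way
```

Whether that carries over to the noncoherent FSK demodulator outputs in our system is exactly what I'm unsure about.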