Reply by Raymond Toy May 12, 2005
>>>>> "porterboy76" == porterboy76 <porterboy76@yahoo.com> writes:
>>> This follows from the derivation.

porterboy76> I have seen plenty of explanations of the Viterbi
porterboy76> algorithm but no derivations. I thought it was a
porterboy76> heuristic algorithm? Is there a derivation in the
porterboy76> literature you could point me to?

I would think just about any book that describes the Viterbi algorithm
would derive it. Failing that, you can also look for Dijkstra's (?)
algorithm, which is essentially the same.

Ray
Reply by May 12, 2005
>>This follows from the derivation.
I have seen plenty of explanations of the Viterbi algorithm but no derivations. I thought it was a heuristic algorithm? Is there a derivation in the literature you could point me to?
Reply by Raymond Toy May 11, 2005
>>>>> "porterboy76" == porterboy76 <porterboy76@yahoo.com> writes:
porterboy76> and we can drop the -2 and search for maximum M
porterboy76> instead of minimum M. I like this approach, it is
porterboy76> easy to understand. I still don't know why some people
porterboy76> talk about logs though ;-)

This follows from the derivation. I won't go into all the details, but
the Viterbi is often used to find the maximum likelihood sequence.
There are lots of details missing, but the usual assumption is that the
noise is AWGN. So the final result is that the likelihood of a sequence
is basically

  exp(-sum (r[k] - s[k])^2 / (2*sigma^2))

where r[k] is the received symbol and s[k] is the expected symbol. We
want to maximize this. Take the LOG to get

  -sum (r[k] - s[k])^2 / (2*sigma^2)

The 2 and sigma^2 aren't important, so maximizing the log likelihood is
the same as minimizing

  sum (r[k] - s[k])^2

And hence, the Euclidean distance that you had. That's why it's log
likelihood. It also says that if you have AWGN (and probably other
assumptions), Euclidean distance is the optimum metric to use. Even
when the noise isn't AWGN, we sometimes use Euclidean distance
anyway. :-)

Ray
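[Added illustration, not from the original thread: a minimal C sketch of
the point above. It ranks candidate symbol pairs both by Gaussian log
likelihood and by squared Euclidean distance; the symbol values and
sigma are made-up examples.]

/* Sketch: the candidate with the smallest squared distance is the one
 * with the largest log likelihood, so the two metrics are
 * interchangeable for picking the survivor. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double r[2] = {0.9, 1.1};             /* received soft values        */
    double s[4][2] = {{-1,-1},{-1, 1},    /* candidate bipolar symbol    */
                      { 1,-1},{ 1, 1}};   /* pairs for one trellis step  */
    double sigma = 0.5;                   /* assumed AWGN std deviation  */

    for (int i = 0; i < 4; i++) {
        double d2 = 0.0;
        for (int k = 0; k < 2; k++)
            d2 += (r[k] - s[i][k]) * (r[k] - s[i][k]);

        /* log likelihood, dropping terms that don't depend on s */
        double loglik = -d2 / (2.0 * sigma * sigma);

        printf("candidate (%+.0f,%+.0f): dist^2 = %.3f, log-lik = %.3f\n",
               s[i][0], s[i][1], d2, loglik);
    }
    return 0;
}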
Reply by Clay May 10, 2005
Hello Porterboy,

The idea behind Viterbi decoding is to find the path with the least
accumulated error. So mapping the Euclidean norm via a log function
still results in an error function that increases as your received
symbol is moved away from the ideal symbol. So you don't have to worry
about logs other than from a theoretical viewpoint.

The C to I reference (carrier-to-interference ratio) was a bit of a
tangent. But what I was trying to get to is a measure of the
difference between the received symbol and the ideal symbol. A common
error measurement is the square of the distance between the two. In
other words, the square of (measured minus ideal) is a type of variance
measurement, and it is easy to relate variance to power. And thus you
can get a good estimate of the signal-to-noise ratio. The Viterbi algo
is one way to find the most likely sequence given noisy data by finding
the path of least error. I thought maybe you were familiar with this and
therefore this could have led to an understanding of the soft metric.
Sorry if I threw out a red herring.

I'm glad you found the trick with bipolar data where you can throw out
the constant terms and end up maximizing a correlation metric instead
of minimizing the distance.

Using soft decisioning buys you some gain over hard decisioning. For a
bipolar rate 1/2 K=7 code, the soft decisioning can be quantized to 32
steps and yield almost all of the advantage of a high precision soft
decision based decoder. Once you get it all working, try quantizing the
error function to see its effect.
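
[Added illustration, not from the original post: a minimal sketch of
what that 32-level quantization might look like. The 5-bit level count
and the [-1, +1] clamping range are assumptions.]

/* Sketch: quantize a soft input in [-1, +1] to 32 levels (5 bits)
 * before handing it to the branch-metric unit. */
#include <stdio.h>

/* Map x in [-1, +1] to an integer in [0, 31]. */
static int quantize5(double x)
{
    if (x < -1.0) x = -1.0;                 /* clamp out-of-range samples */
    if (x >  1.0) x =  1.0;
    int q = (int)((x + 1.0) * 15.5 + 0.5);  /* round to 0..31             */
    if (q > 31) q = 31;
    return q;
}

int main(void)
{
    double soft[] = {0.325, 0.5, -0.9, 1.2};
    for (int i = 0; i < 4; i++)
        printf("%+.3f -> %d\n", soft[i], quantize5(soft[i]));
    return 0;
}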

Clay

Reply by May 10, 2005
OK, I found some info in a Texas Instruments app note about the soft
decision metric.
(Google "viterbi decoding techniques for the TMS320C54x")

They use Euclidean distance squared, slightly modified to ease
computation.

Suppose the coding rate is 1/2, so bits are decoded in groups of 2. Now
suppose we receive (0.325,0.5) as the soft inputs to the decoder. And
suppose we want to find the metric for this decoder input compared to
encoder output 00 for example.

metric = (Euclidean distance)^2
       = (0 - 0.325)^2 + (0 - 0.5)^2

In general, the metric is

M = (b0 - s0)^2 + (b1-s1)^2

where b0,b1 are the reference (bipolar) bits and s0,s1 are the soft
decisions input to the decoder.

They simplify this to

M = -2(b0*s0 + b1*s1), since the other terms (b0^2 + b1^2 and
s0^2 + s1^2) are common to all metrics

and we can drop the -2 and search for maximum M instead of minimum M.
I like this approach, it is easy to understand. I still don't know why
some people talk about logs though ;-)
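
[Added illustration, not from the TI note: a minimal sketch comparing
the full squared-distance metric with the simplified correlation metric
b0*s0 + b1*s1 for the four possible encoder outputs, using the
(0.325, 0.5) soft inputs above. The bipolar mapping 0 -> -1, 1 -> +1 is
an assumption.]

/* Sketch: the branch with the smallest squared distance is also the
 * branch with the largest correlation, so either metric selects the
 * same survivor path. */
#include <stdio.h>

int main(void)
{
    double s0 = 0.325, s1 = 0.5;               /* soft decoder inputs   */
    int bits[4][2] = {{0,0},{0,1},{1,0},{1,1}};

    for (int i = 0; i < 4; i++) {
        double b0 = bits[i][0] ? 1.0 : -1.0;   /* bipolar reference bits */
        double b1 = bits[i][1] ? 1.0 : -1.0;

        double dist2 = (b0 - s0)*(b0 - s0) + (b1 - s1)*(b1 - s1);
        double corr  = b0*s0 + b1*s1;          /* simplified metric      */

        printf("branch %d%d: dist^2 = %.4f, correlation = %+.4f\n",
               bits[i][0], bits[i][1], dist2, corr);
    }
    return 0;
}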

Reply by May 10, 2005
Thanks Clay. I am actually using matlab, just to get a feel for the
algorithm. I understand the algorithm, and it seems easy enough to
implement. However, this soft metric is confusing me.

If I use sum of squares of distances for each received bit in a block,
I think it will work, and that I can make progress. However, I want to
be sure that this is valid, since I have seen a few references to "log
likelihood" metrics. I don't know what these are or why they are used.

Oh yeah, what do you mean by this???? ....
 >>This correlates well with C to I.

Reply by Clay S. Turner May 9, 2005
<porterboy76@yahoo.com> wrote in message 
news:1115654690.522491.277690@z14g2000cwz.googlegroups.com...
> How do you compute a soft Viterbi metric?
>
> I have gone through this excellent tutorial on the Viterbi decoder:
> http://home.netcom.com/~chip.f/viterbi/tutorial.html
> It is for a hard decision decoder, which makes it easy to compute the
> metrics.
>
> In the hard decision case, if I receive [1,1] say, then the metric
> compared to [0,0] is 2. Suppose I received [0.9,1.1] in the soft
> decision case. Is the metric compared to [0,0]...
>
> (0.9)^2 + (1.1)^2 ???
>
> Or are you supposed to use logs to get a log likelihood?
> I haven't been able to find a good example...
>
Hello Porterboy,

The standard use (one I've used in many cases) is the sum of the
distances of the measured points from their ideal points. And usually
this is done with the distances squared for faster computation. In some
forms of cellular an SQE (signal quality estimate) is formed by taking
the variance of each point relative to its nearest constellation point.
This correlates well with C to I.

In the case of bipolar data (i.e., plus and minus ones), you can just
take their linear distances away from plus or minus one.

I have some C code I wrote that does a K=7 Viterbi decoder for a rate
1/2 code. Do you have a particular code you need, or are you just
playing with examples to get a handle on the algo?

Clay
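
[Added illustration, not Clay's code: a minimal sketch of the SQE idea
for bipolar data. Slice each received sample to its nearest ideal point
(+/-1), treat the spread about those points as noise power, and use the
ratio of signal power to that variance as a rough C/I estimate. The
sample values are made up.]

/* Sketch: rough SQE / SNR estimate for bipolar (+/-1) data. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double rx[] = {0.9, -1.1, 0.8, 1.05, -0.95, -0.7};
    int n = sizeof rx / sizeof rx[0];

    double noise = 0.0;
    for (int i = 0; i < n; i++) {
        double ideal = (rx[i] >= 0.0) ? 1.0 : -1.0;  /* hard slice       */
        double e = rx[i] - ideal;                    /* error from ideal */
        noise += e * e;
    }
    noise /= n;                                      /* noise variance   */

    double snr_db = 10.0 * log10(1.0 / noise);       /* signal power = 1 */
    printf("estimated SNR (C/I proxy): %.1f dB\n", snr_db);
    return 0;
}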
Reply by May 9, 2005
How do you compute a soft Viterbi metric?

I have gone through this excellent tutorial on the Viterbi decoder:
http://home.netcom.com/~chip.f/viterbi/tutorial.html
It is for a hard decision decoder, which makes it easy to compute the
metrics.

In the hard decision case, if I receive [1,1] say, then the metric
compared to [0,0] is 2. Suppose I received [0.9,1.1] in the soft
decision case. Is the metric compared to [0,0]...

(0.9)^2 + (1.1)^2  ???

Or are you supposed to use logs to get a log likelihood?
I haven't been able to find a good example...