
Soft demapper - LLR calculations

Started by Melinda February 28, 2009
Hi all,

A few questions: I developed a soft demapper (exact LLR algorithm) and tested it with my own soft-input Viterbi decoder, and the results I get are very close to theoretical (almost identical). But after a while I asked myself whether my LLR algorithm is really correct. Why do I say that? For the exact LLR algorithm on the MathWorks site (just type "Mathworks exact LLR algorithm" into Google), you will see:

    L(b) = log[ Pr(b=0 | r=(x,y)) / Pr(b=1 | r=(x,y)) ]

and below it the full formula with comments. My question, actually, is: does Pr(b=0 | r=(x,y)) + Pr(b=1 | r=(x,y)) = 1?

Why do I ask? Look at the full formula on that page. Say we use BPSK modulation with the mapping (-1)^b0, i.e. bit 0 -> +1, bit 1 -> -1, and say we receive the AWGN channel symbol 0+0i. If we calculate those probabilities we get

    L(b) = log{ exp(-1/(No/2)) / exp(-1/(No/2)) }

and, as you can notice, the upper and lower exp(...) expressions are the same. (Someone could say they must each be 0.5, if Pr(b=0 | r=(x,y)) + Pr(b=1 | r=(x,y)) = 1 is correct; or is it?) Now let No = 1 (i.e. for SNR = 0 dB, No = 10^(-SNR/10) = 1), so exp(-1/(No/2)) = exp(-2) = 0.1353. So the two (upper and lower) probabilities are the same, but their sum is not 1!

Can you please explain whether I am correct or wrong in my claims? Must the sum of Pr(b=0 | r=(x,y)) and Pr(b=1 | r=(x,y)) equal 1 when we calculate LLRs? If that is true, how do you explain my simple example? And let me remind you: I get very good results with my LLR calculations (on QPSK, 16QAM...), even though, as I explained, the sum of the two (upper and lower) probabilities is not 1! Can you please give some kind of explanation of this.

Thanks and best regards.
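A quick numeric check of the example above (a minimal Python sketch, assuming the usual AWGN likelihood exp(-|r - s|^2 / (No/2)) for unit-energy BPSK symbols; the variable names are mine):

    import math

    # BPSK mapping (-1)^b: bit 0 -> +1, bit 1 -> -1
    No = 1.0   # noise power at SNR = 0 dB
    r = 0.0    # received symbol, exactly midway between the two points

    # Unnormalized AWGN likelihoods exp(-|r - s|^2 / (No/2))
    like_b0 = math.exp(-abs(r - 1.0) ** 2 / (No / 2))  # s = +1 (bit 0)
    like_b1 = math.exp(-abs(r + 1.0) ** 2 / (No / 2))  # s = -1 (bit 1)

    print(like_b0, like_b1)             # both 0.1353..., as in the post
    print(like_b0 + like_b1)            # 0.2707..., not 1
    print(math.log(like_b0 / like_b1))  # LLR = 0: no information about the bit

As the replies below point out, these two quantities are likelihoods, which need not sum to 1.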


Melinda <melinda.mel3@gmail.com> wrote:

> [original post quoted; snip] My question actually is: does
> Pr(b=0|r=(x,y)) + Pr(b=1|r=(x,y)) = 1 ?
Not necessarily. This is a case where people often say something is a probability when it is actually a measure. I think there was a discussion of exactly this question here a few weeks ago.

Steve
Melinda wrote:
> [original post quoted; snip] ... if we calculate those probabilities we
> will get L(b) = log{ exp(-1/(No/2)) / exp(-1/(No/2)) } ... the two (upper
> and lower) probabilities are the same, but their sum is not 1!
What you are calculating above is p(r|b) (the likelihood of b), not p(b|r) (the a posteriori probability of b). The two are related by Bayes' rule:

    p(b|r) = p(r|b) . p(b) / p(r)

It is true that p(b=1|r) + p(b=0|r) = 1, but it is not in general true that p(r|b=1) + p(r|b=0) = 1.

--
Oli
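A minimal sketch of this relationship (assuming equal priors p(b=0) = p(b=1) = 0.5 and an arbitrary received value picked for illustration):

    import math

    No = 1.0
    r = 0.3  # an arbitrary received value, for illustration

    # Likelihoods p(r|b), up to a common scale factor that cancels below
    like = {0: math.exp(-(r - 1.0) ** 2 / (No / 2)),
            1: math.exp(-(r + 1.0) ** 2 / (No / 2))}
    prior = {0: 0.5, 1: 0.5}

    # Bayes' rule: p(b|r) = p(r|b) p(b) / p(r), with p(r) = sum_b p(r|b) p(b)
    p_r = sum(like[b] * prior[b] for b in (0, 1))
    post = {b: like[b] * prior[b] / p_r for b in (0, 1)}

    print(like[0] + like[1])  # likelihoods: sum is whatever it is (~0.41 here)
    print(post[0] + post[1])  # posteriors: sum is exactly 1.0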
> Melinda wrote:
>> [original post quoted; snip]
>
> What you are calculating above is p(r|b) (the likelihood of b), not
> p(b|r) (the a posteriori probability of b). The two are related by
> Bayes' rule: p(b|r) = p(r|b) . p(b) / p(r). It is true that
> p(b=1|r) + p(b=0|r) = 1, but it is not in general true that
> p(r|b=1) + p(r|b=0) = 1.
>
> --
> Oli

Oli and Steve, thanks for the reply.
When you said "What you are calculating above is p(r|b) (the likelihood of b), not p(b|r) (the a posteriori probability of b)": 1) do you mean my procedure is right or not? 2) Or do you mean that in the general formula we have Pr(b=0/1 | r=(x,y)), not P(r=(x,y) | b=0/1)? Can you explain this to me one more time?

And one more question: can one (or maybe both) of the 'probabilities' - or, as Steve said, 'measures' - { Pr(b=0|r=(x,y)), Pr(b=1|r=(x,y)) } be > 1?

Thanks and best regards
Melinda wrote:
> [earlier quotes snipped]
>
> When you said "What you are calculating above is p(r|b) (the likelihood
> of b), not p(b|r) (the a posteriori probability of b)": 1) do you mean
> my procedure is right or not? 2) Or do you mean that in the general
> formula we have Pr(b=0/1 | r=(x,y)), not P(r=(x,y) | b=0/1)? Can you
> explain this to me one more time?
Your procedure is correct! You are calculating the log-likelihood ratio (LLR), which is defined as:

    LLR(r) = log[ p(r|b=1) / p(r|b=0) ]

When the prior probabilities of a 1 or a 0 are equal (i.e. p(b=0) = p(b=1) = 0.5), the LLR will happen to be equal to log[ p(b=1|r) / p(b=0|r) ], due to the relationship between p(b|r) and p(r|b). However, in the general case (where p(b=1) and p(b=0) are not equal), the relationship is more complicated.
> And one more question: can one (or maybe both) of the 'probabilities' -
> or, as Steve said, 'measures' - { Pr(b=0|r=(x,y)), Pr(b=1|r=(x,y)) } be > 1?
I don't really know much about measure theory, I'm afraid. But as far as I'm aware, it's always true that p(b=1|r) + p(b=0|r) = 1, and you can't have a negative probability, so neither of them can be > 1.

--
Oli
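To see the cancellation Oli mentions, one can compare the likelihood-based ratio with the posterior-based ratio under equal and unequal priors (a sketch with made-up numbers; the function name is mine):

    import math

    def bpsk_probs(r, priors, No=1.0):
        """Likelihoods p(r|b) and posteriors p(b|r) for BPSK over AWGN."""
        like = {0: math.exp(-(r - 1.0) ** 2 / (No / 2)),
                1: math.exp(-(r + 1.0) ** 2 / (No / 2))}
        p_r = sum(like[b] * priors[b] for b in (0, 1))
        post = {b: like[b] * priors[b] / p_r for b in (0, 1)}
        return like, post

    r = 0.3
    like, post = bpsk_probs(r, {0: 0.5, 1: 0.5})
    print(math.log(like[0] / like[1]),   # likelihood-based LLR
          math.log(post[0] / post[1]))   # identical under equal priors

    like, post = bpsk_probs(r, {0: 0.9, 1: 0.1})
    print(math.log(like[0] / like[1]),   # unchanged
          math.log(post[0] / post[1]))   # now differs, by log(0.9/0.1)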
> Your procedure is correct! You are calculating the log-likelihood ratio
> (LLR), which is defined as:
>
>     LLR(r) = log[ p(r|b=1) / p(r|b=0) ]
Hi Oli,

On the MathWorks site the LLR calculation is: the log-likelihood ratio (LLR) is the logarithm of the ratio of the probability of a 0 bit being transmitted versus a 1 bit being transmitted, for a received signal r=(x,y). The LLR for a bit b is defined as:

    LLR(b) = log[ Pr(b=0 | r=(x,y)) / Pr(b=1 | r=(x,y)) ]

whereas you wrote LLR(r) = log[ p(r|b=1) / p(r|b=0) ]. So I think the condition is r=(x,y), i.e. the received channel symbol, but you wrote that the condition is |b=1 or |b=0. And you wrote LLR(r)=...; did you mean LLR(b)? Because "the log-likelihood ratio (LLR) is the logarithm of the ratio of probabilities of a 0 BIT (not a distance, i.e. the received noisy channel symbol with coordinates (x,y)) being transmitted...".

So, in short: we receive a noisy channel symbol r=(x,y), and then we must calculate the two 'probabilities' that a specified (transmitted) bit - one of the K bits in an M-ary symbol - is zero, Pr(b=0 | r=(x,y)), or one, Pr(b=1 | r=(x,y)), based on what we received (i.e. r=(x,y)).

So what is correct? Maybe you can clear this up for me. Maybe you mean the same thing but wrote it in your own "way".

Thanks for the reply and best regards
Melinda wrote:
> [earlier quotes snipped]
>
> So what is correct? Maybe you can clear this up for me. Maybe you mean
> the same thing but wrote it in your own "way".
I believe the "definition" on the Mathworks page is not correct. p(b=1|r) is not the likelihood of b=1; see for instance:

* http://en.wikipedia.org/wiki/Log-likelihood_ratio
* http://en.wikipedia.org/wiki/Likelihood_function

I also disagree with the notation L(b), as it's not a function of b! (That's why I wrote LLR(r) in my previous posts.)

One very important point is that the following is true (ignoring scaling):

    p(r|b) = exp{ -|b-r|^2 / sigma^2 }

But the following is NOT true (in general):

    p(b|r) = exp{ -|b-r|^2 / sigma^2 }

Instead, to get p(b|r), you need to know p(r), i.e.

    p(b|r) = p(r|b) . p(b) / p(r),  where  p(r) = SUM_b p(r|b) . p(b)

But if you're calculating ratios, then p(r) cancels from top and bottom, and if p(b) = 0.5, then that cancels too.

DISCLAIMER: This may all be a matter of convention (i.e. different people are used to different terminology and notation). The actual maths on the Mathworks page looks correct.

--
Oli
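For the multi-bit cases in the original post (QPSK, 16QAM...), the same cancellation gives the exact per-bit soft demapper: sum the symbol likelihoods over the constellation points whose label carries a 0 in that bit position, versus a 1. A sketch for QPSK (the Gray mapping, unit-average-energy points, equal priors, and the exp(-|r-s|^2/No) likelihood are all my assumptions):

    import math

    # An assumed Gray mapping: 2-bit label -> unit-energy QPSK point
    CONST = {(0, 0): complex( 1,  1) / math.sqrt(2),
             (0, 1): complex( 1, -1) / math.sqrt(2),
             (1, 0): complex(-1,  1) / math.sqrt(2),
             (1, 1): complex(-1, -1) / math.sqrt(2)}

    def exact_llrs(r, No):
        """LLR(b_k) = log[ sum_{s: b_k=0} p(r|s) / sum_{s: b_k=1} p(r|s) ]."""
        llrs = []
        for k in range(2):  # bit position within the symbol label
            num = sum(math.exp(-abs(r - s) ** 2 / No)
                      for bits, s in CONST.items() if bits[k] == 0)
            den = sum(math.exp(-abs(r - s) ** 2 / No)
                      for bits, s in CONST.items() if bits[k] == 1)
            llrs.append(math.log(num / den))
        return llrs

    print(exact_llrs(complex(0.5, 0.2), No=1.0))  # both LLRs > 0: bits lean to 0

Because only ratios of the per-bit sums are taken, the common scale factor of the likelihoods (and the equal priors) drop out, exactly as in the BPSK discussion above.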
Oli Charlesworth <catch@olifilth.co.uk> wrote:

> One very important point is that the following is true (ignoring scaling):
>
>     p(r|b) = exp{ -|b-r|^2 / sigma^2 }
Good thing you said "ignoring scaling", since p(r|b) = 0, assuming p() is supposed to indicate probability. (For a continuous-valued r, the probability of receiving any exact value is zero; the expression above is a probability density, not a probability.)

Steve
> Oli Charlesworth <catch@olifilth.co.uk> wrote:
>> One very important point is that the following is true (ignoring scaling):
>>
>>     p(r|b) = exp{ -|b-r|^2 / sigma^2 }
>
> Good thing you said "ignoring scaling", since p(r|b) = 0.
> Assuming p() is supposed to indicate probability.
Hi guys,

When you say "scaling", do you mean the noise power No (i.e. sigma) as the scaling factor? And what do you mean by "...since p(r|b) = 0"?

Thanks for your time and replies, guys
Steve Pope wrote:
> Oli Charlesworth <catch@olifilth.co.uk> wrote:
>> One very important point is that the following is true (ignoring scaling):
>>
>>     p(r|b) = exp{ -|b-r|^2 / sigma^2 }
>
> Good thing you said "ignoring scaling", since p(r|b) = 0.
> Assuming p() is supposed to indicate probability.
Perhaps it's late, and I'm missing something?...

--
Oli
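To unpack the point Steve was making: a Gaussian expression like exp{-|b-r|^2/sigma^2} is, once properly normalized, a probability *density* in r; its value at any single point is not a probability, and it can even exceed 1 when the noise power is small. A throwaway sketch (using the standard Gaussian normalization and made-up numbers):

    import math

    def pdf(r, s=1.0, sigma2=0.05):
        """Real-Gaussian density N(s, sigma2) evaluated at r."""
        return math.exp(-(r - s) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

    print(pdf(1.0))  # ~1.78: a density value above 1 is perfectly legal

    # Riemann-sum check: the density integrates to ~1 over r
    dr = 0.001
    print(sum(pdf(-4 + i * dr) * dr for i in range(10000)))  # ~1.0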