Reply by Oli Charlesworth March 2, 2009
Melinda wrote:
>> Melinda wrote:
>>>> Steve Pope wrote:
>>>>> Oli Charlesworth <catch@olifilth.co.uk> wrote:
>>>>>
>>>>>> One very important point is that the following is true (ignoring scaling):
>>>>>>
>>>>>> p(r|b) = exp{-|b-r|^2 / sigma^2}
>>>>> Good thing you said ignoring scaling, since p(r|b) = 0.
>>>>> Assuming p() is supposed to indicate probability.
>>>> Perhaps it's late, and I'm missing something?...
>>>>
>>>> --
>>>> Oli
>>> Hi Oli,
>>> You wrote
>>> "One very important point is that the following is true (ignoring scaling):
>>>
>>> p(r|b) = exp{-|b-r|^2 / sigma^2}"
>>>
>>> and Steve just gave his comment on that.
>>>
>>> I'm not sure what you meant when you said "(ignoring scaling)".
>>> Can you explain this?
>> I don't understand Steve's comment, but what I was talking about is that
>> the Gaussian distribution always has a normalising constant on the
>> front, and I can't remember the precise arrangement. It's something
>> along the lines of 1/(sigma.sqrt(2.pi)).
>>
>> --
>> Oli
>
> Hi Oli,
> Thanks very much, and let me summarize this.
> The Gaussian function (G) is (from the Wikipedia page):
>
> G(x) = 1/(sigma*sqrt(2*pi)) * exp{-(b-r)^2 / (2*sigma^2)}
>
> so let me comment on each part of this formula. Using sigma = sqrt(sigma^2)
> and sigma^2 = No/2, the 1/(sigma*sqrt(2*pi)) part becomes:
>
> 1/[sqrt(sigma^2) * sqrt(2*pi)] = 1/sqrt(sigma^2 * 2*pi)
>                                = 1/sqrt((No/2) * 2*pi) = 1/sqrt(No*pi)
>
> --> so: 1/(sigma*sqrt(2*pi)) is 1/sqrt(No*pi).
>
> The exp{-(b-r)^2 / (2*sigma^2)} part becomes:
>
> exp(-Di^2 / (2*(No/2))) = exp(-Di^2 / No)
>
> where Di is the distance between the received and ideal constellation
> points. Note the factor of 2 beside sigma^2 when I wrote G(x) = ...;
> in your previous post you didn't write this factor of 2
> ("p(r|b) = exp{-|b-r|^2 / sigma^2}"), but in the Gaussian function
> there is a factor of 2 beside sigma^2.
>
> This is the way I calculate my p(r|b=0) and p(r|b=1), i.e.
>
>              {1/sqrt(No*pi) * SUM exp(-Di^2/No)}    --> Di(b=0)
> LLR(b) = log -------------------------------------
>              {1/sqrt(No*pi) * SUM exp(-Di^2/No)}    --> Di(b=1)
>
> Is my calculation good?
> And one more question: depending on No (the noise power level), the
> upper and lower 'probabilities' ('measures', as Steve said) can come
> out as 0.01345 or 0 or 12.42 or 47.456 and so on.
> { for example: LLR = log(44.56 / 0.008) }
>
> This is the reason for my questions and doubts, so can you one more
> time confirm (or not) my calculations?
The only thing I'd say is that you can safely eliminate the constant
scaling factor (1/sqrt(No*pi)), as it's the same in the numerator and
denominator!
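As a quick sanity check, here's a minimal Python sketch of that
cancellation (the constellation, received sample, and noise level are
made-up illustrative values, with a BPSK mapping assumed, not anything
from your actual simulation):

    import math

    No = 1.0         # assumed noise power, purely illustrative
    r = 0.3          # illustrative received (real-valued) sample
    pts_b0 = [+1.0]  # constellation points whose bit is 0 (BPSK assumed)
    pts_b1 = [-1.0]  # constellation points whose bit is 1

    def lik_sum(points, scaled):
        # Sum of Gaussian likelihoods of r over the candidate points,
        # with or without the constant 1/sqrt(No*pi) in front.
        c = 1.0 / math.sqrt(math.pi * No) if scaled else 1.0
        return sum(c * math.exp(-(r - s) ** 2 / No) for s in points)

    llr_scaled = math.log(lik_sum(pts_b0, True) / lik_sum(pts_b1, True))
    llr_plain = math.log(lik_sum(pts_b0, False) / lik_sum(pts_b1, False))
    print(llr_scaled, llr_plain)  # identical: the constant cancels

--
Oli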
Reply by Melinda March 2, 2009
>Melinda wrote:
>>> Steve Pope wrote:
>>>> Oli Charlesworth <catch@olifilth.co.uk> wrote:
>>>>
>>>>> One very important point is that the following is true (ignoring scaling):
>>>>>
>>>>> p(r|b) = exp{-|b-r|^2 / sigma^2}
>>>> Good thing you said ignoring scaling, since p(r|b) = 0.
>>>> Assuming p() is supposed to indicate probability.
>>> Perhaps it's late, and I'm missing something?...
>>>
>>> --
>>> Oli
>>
>> Hi Oli,
>> You wrote
>> "One very important point is that the following is true (ignoring scaling):
>>
>> p(r|b) = exp{-|b-r|^2 / sigma^2}"
>>
>> and Steve just gave his comment on that.
>>
>> I'm not sure what you meant when you said "(ignoring scaling)".
>> Can you explain this?
>
>I don't understand Steve's comment, but what I was talking about is that
>the Gaussian distribution always has a normalising constant on the
>front, and I can't remember the precise arrangement. It's something
>along the lines of 1/(sigma.sqrt(2.pi)).
>
>--
>Oli
Hi Oli,
Thanks very much, and let me summarize this.
The Gaussian function (G) is (from the Wikipedia page):

G(x) = 1/(sigma*sqrt(2*pi)) * exp{-(b-r)^2 / (2*sigma^2)}

so let me comment on each part of this formula. Using sigma = sqrt(sigma^2)
and sigma^2 = No/2, the 1/(sigma*sqrt(2*pi)) part becomes:

1/[sqrt(sigma^2) * sqrt(2*pi)] = 1/sqrt(sigma^2 * 2*pi)
                               = 1/sqrt((No/2) * 2*pi) = 1/sqrt(No*pi)

--> so: 1/(sigma*sqrt(2*pi)) is 1/sqrt(No*pi).

The exp{-(b-r)^2 / (2*sigma^2)} part becomes:

exp(-Di^2 / (2*(No/2))) = exp(-Di^2 / No)

where Di is the distance between the received and ideal constellation
points. Note the factor of 2 beside sigma^2 when I wrote G(x) = ...;
in your previous post you didn't write this factor of 2
("p(r|b) = exp{-|b-r|^2 / sigma^2}"), but in the Gaussian function
there is a factor of 2 beside sigma^2.

This is the way I calculate my p(r|b=0) and p(r|b=1), i.e.

             {1/sqrt(No*pi) * SUM exp(-Di^2/No)}    --> Di(b=0)
LLR(b) = log -------------------------------------
             {1/sqrt(No*pi) * SUM exp(-Di^2/No)}    --> Di(b=1)

Is my calculation good?
And one more question: depending on No (the noise power level), the
upper and lower 'probabilities' ('measures', as Steve said) can come
out as 0.01345 or 0 or 12.42 or 47.456 and so on.
{ for example: LLR = log(44.56 / 0.008) }

This is the reason for my questions and doubts, so can you one more
time confirm (or not) my calculations.
Thanks and best regards
Reply by Oli Charlesworth March 1, 2009
Melinda wrote:
>> Steve Pope wrote:
>>> Oli Charlesworth <catch@olifilth.co.uk> wrote:
>>>
>>>> One very important point is that the following is true (ignoring scaling):
>>>>
>>>> p(r|b) = exp{-|b-r|^2 / sigma^2}
>>> Good thing you said ignoring scaling, since p(r|b) = 0.
>>> Assuming p() is supposed to indicate probability.
>> Perhaps it's late, and I'm missing something?...
>>
>> --
>> Oli
>
> Hi Oli,
> You wrote
> "One very important point is that the following is true (ignoring scaling):
>
> p(r|b) = exp{-|b-r|^2 / sigma^2}"
>
> and Steve just gave his comment on that.
>
> I'm not sure what you meant when you said "(ignoring scaling)".
> Can you explain this?
I don't understand Steve's comment, but what I was talking about is that
the Gaussian distribution always has a normalising constant on the
front, and I can't remember the precise arrangement. It's something
along the lines of 1/(sigma.sqrt(2.pi)).
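Here's a quick numerical check of that arrangement (sigma = 0.7 is just
an arbitrary illustrative value): the area under exp{-x^2 / (2.sigma^2)}
comes out as sigma.sqrt(2.pi), which is exactly what the normalising
constant has to divide away so the density integrates to 1.

    import math

    sigma = 0.7   # arbitrary illustrative value
    dx = 1e-4
    n = int(10 * sigma / dx)
    # Riemann sum of exp(-x^2 / (2*sigma^2)) over [-10*sigma, 10*sigma]
    area = sum(math.exp(-(i * dx) ** 2 / (2 * sigma ** 2))
               for i in range(-n, n)) * dx
    print(area)                            # ~1.7545
    print(sigma * math.sqrt(2 * math.pi))  # the same value

--
Oli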
Reply by Melinda March 1, 2009
>Steve Pope wrote:
>> Oli Charlesworth <catch@olifilth.co.uk> wrote:
>>
>>> One very important point is that the following is true (ignoring scaling):
>>>
>>> p(r|b) = exp{-|b-r|^2 / sigma^2}
>>
>> Good thing you said ignoring scaling, since p(r|b) = 0.
>> Assuming p() is supposed to indicate probability.
>
>Perhaps it's late, and I'm missing something?...
>
>
>--
>Oli
Hi Oli,
You wrote
"One very important point is that the following is true (ignoring scaling):

p(r|b) = exp{-|b-r|^2 / sigma^2}"

and Steve just gave his comment on that.

I'm not sure what you meant when you said "(ignoring scaling)".
Can you explain this?
Thanks
Reply by Oli Charlesworth March 1, 2009
Steve Pope wrote:
> Oli Charlesworth <catch@olifilth.co.uk> wrote:
>
>> One very important point is that the following is true (ignoring scaling):
>>
>> p(r|b) = exp{-|b-r|^2 / sigma^2}
>
> Good thing you said ignoring scaling, since p(r|b) = 0.
> Assuming p() is supposed to indicate probability.
Perhaps it's late, and I'm missing something?...


--
Oli
Reply by Melinda March 1, 2009
>Oli Charlesworth <catch@olifilth.co.uk> wrote:
>
>>One very important point is that the following is true (ignoring scaling):
>>
>>p(r|b) = exp{-|b-r|^2 / sigma^2}
>
>Good thing you said ignoring scaling, since p(r|b) = 0.
>Assuming p() is supposed to indicate probability.
Hi guys,
When you say scaling, do you mean the noise power No (i.e. sigma) as the
scaling factor? And what do you mean by "...since p(r|b) = 0."?
Thanks for your time and replies, guys
Reply by Steve Pope March 1, 2009
Oli Charlesworth  <catch@olifilth.co.uk> wrote:

>One very important point is that the following is true (ignoring scaling):
>
>p(r|b) = exp{-|b-r|^2 / sigma^2}
Good thing you said ignoring scaling, since p(r|b) = 0.
Assuming p() is supposed to indicate probability.
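That is, for a continuous-valued r, the probability of receiving any one
exact value is zero; exp{-|b-r|^2 / sigma^2} is a probability *density*,
and a density can quite happily exceed 1. A small Python sketch (the
numbers are made up, not from this thread) shows the distinction:

    import math

    sigma = 0.1        # illustrative (small) noise standard deviation
    b, r = 1.0, 1.0    # hypothetical transmitted point and received value

    def density(x):
        # Gaussian pdf of the received value, given that b was sent
        return (math.exp(-(x - b) ** 2 / (2 * sigma ** 2))
                / (sigma * math.sqrt(2 * math.pi)))

    print(density(r))             # ~3.989: a density, greater than 1
    eps = 1e-6
    print(density(r) * 2 * eps)   # P(r within +/- eps): ~8e-6, an actual
                                  # probability, and vanishingly small

Steve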
Reply by Oli Charlesworth March 1, 2009
Melinda wrote:
>> Your procedure is correct! You are calculating the log-likelihood ratio
>> (LLR), which is defined as:
>>
>> LLR(r) = p(r|b=1) / p(r|b=0)
>
> Hi Oli,
>
> On the Mathworks site the LLR calculation is:
> "The log-likelihood ratio (LLR) is the logarithm of the ratio of
> probabilities of a 0 bit being transmitted versus a 1 bit being
> transmitted for a received signal (r=(x,y)). The LLR for a bit b is
> defined as:
>
> LLR(b) = log[ Pr(b=0|r=(x,y)) / Pr(b=1|r=(x,y)) ]"
>
> and you wrote:
>
> LLR(r) = p(r|b=1) / p(r|b=0),
>
> so I think the condition is r=(x,y), i.e. the received channel symbol,
> but you wrote that the condition is b=1 or b=0, and you wrote
> LLR(r)=... Did you mean LLR(b)? Because "the log-likelihood ratio (LLR)
> is the logarithm of the ratio of probabilities of a 0 BIT (not a
> distance, i.e. not the received noisy channel symbol with (x,y)
> coordinates!) being transmitted...".
>
> So in short, we receive a noisy channel symbol (r=(x,y)) and then we
> must calculate those two 'probabilities' that the specified
> (transmitted) bit, one of the K bits in an M-ary symbol, is zero -
> Pr(b=0|r=(x,y)) - or one - Pr(b=1|r=(x,y)) - based on what we received
> (i.e. r=(x,y)).
>
> So what is correct? Maybe you can clear this up for me.
> Maybe you mean the same thing but wrote it in your own "way".
>
I believe the "definition" on the Mathworks page is not correct.
p(b=1|r) is not the likelihood of b=1; see for instance:

* http://en.wikipedia.org/wiki/Log-likelihood_ratio
* http://en.wikipedia.org/wiki/Likelihood_function

I also disagree with the notation L(b), as it's not a function of b!
(That's why I wrote LLR(r) in my previous posts.)

One very important point is that the following is true (ignoring scaling):

p(r|b) = exp{-|b-r|^2 / sigma^2}

But the following is NOT true (in general):

p(b|r) = exp{-|b-r|^2 / sigma^2}

Instead, to get p(b|r), you need to know p(r), i.e.

         p(r|b).p(b)          p(r|b).p(b)
p(b|r) = ----------- = ----------------------
             p(r)       SUM_b' p(r|b').p(b')

But if you're calculating ratios, then the p(r) cancels from top and
bottom, and if p(b)=0.5, then that cancels too.

DISCLAIMER: This may all be a matter of convention (i.e. different
people are used to different terminology and notation). The actual
maths on the Mathworks page looks correct.
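To make those cancellations concrete, here's a small Python sketch
(the numbers are illustrative; a BPSK mapping and equal priors are
assumed, and the constant scaling factor is ignored as above):

    import math

    No = 1.0                     # assumed noise power, illustrative
    r = 0.3                      # illustrative received sample
    sym = {0: +1.0, 1: -1.0}     # assumed BPSK mapping: bit -> point
    prior = {0: 0.5, 1: 0.5}     # equal priors

    # Likelihoods p(r|b), ignoring the constant scaling factor
    lik = {b: math.exp(-(r - sym[b]) ** 2 / No) for b in sym}

    # Posteriors p(b|r) via the rule above; p(r) is the normalising sum
    p_r = sum(lik[b] * prior[b] for b in sym)
    post = {b: lik[b] * prior[b] / p_r for b in sym}

    print(post[0] + post[1])            # 1.0: posteriors sum to one
    print(math.log(lik[1] / lik[0]))    # log-ratio from likelihoods alone
    print(math.log(post[1] / post[0]))  # identical: p(r) and p(b) cancel

--
Oli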
Reply by Melinda March 1, 2009
>Your procedure is correct! You are calculating the log-likelihood ratio
>(LLR), which is defined as:
>
>LLR(r) = p(r|b=1) / p(r|b=0)
Hi Oli,

On the Mathworks site the LLR calculation is:
"The log-likelihood ratio (LLR) is the logarithm of the ratio of
probabilities of a 0 bit being transmitted versus a 1 bit being
transmitted for a received signal (r=(x,y)). The LLR for a bit b is
defined as:

LLR(b) = log[ Pr(b=0|r=(x,y)) / Pr(b=1|r=(x,y)) ]"

and you wrote:

LLR(r) = p(r|b=1) / p(r|b=0),

so I think the condition is r=(x,y), i.e. the received channel symbol,
but you wrote that the condition is b=1 or b=0, and you wrote
LLR(r)=... Did you mean LLR(b)? Because "the log-likelihood ratio (LLR)
is the logarithm of the ratio of probabilities of a 0 BIT (not a
distance, i.e. not the received noisy channel symbol with (x,y)
coordinates!) being transmitted...".

So in short, we receive a noisy channel symbol (r=(x,y)) and then we
must calculate those two 'probabilities' that the specified
(transmitted) bit, one of the K bits in an M-ary symbol, is zero -
Pr(b=0|r=(x,y)) - or one - Pr(b=1|r=(x,y)) - based on what we received
(i.e. r=(x,y)).

So what is correct? Maybe you can clear this up for me.
Maybe you mean the same thing but wrote it in your own "way".

Thanks for the reply and best regards
Reply by Oli Charlesworth March 1, 2009
Melinda wrote:
>> Melinda wrote:
>>> Hi all,
>>> A few questions: I developed a soft demapper (exact LLR algorithm) and
>>> tested it with my own soft-input Viterbi decoder, and the results I get
>>> are very close to theoretical (almost identical). But after a while, I
>>> asked myself whether my LLR algorithm is good. Why do I say that? The
>>> exact LLR algorithm on the Mathworks site (just type into Google:
>>> Mathworks exact LLR algorithm) shows:
>>>
>>> L(b) = log( Pr(b=0|r=(x,y)) / Pr(b=1|r=(x,y)) ),
>>>
>>> and below it, the full formula with comments. My question actually is:
>>> does Pr(b=0|r=(x,y)) + Pr(b=1|r=(x,y)) = 1? Why do I ask that? If you
>>> look at the full formula (on the web site), say we use BPSK modulation
>>> ((-1)^b0 is the mapping, i.e. bit 0 -> 1, bit 1 -> -1) and say we
>>> receive the channel (AWGN) symbol 0+0i. If we calculate those
>>> probabilities, we will get L(b) = log{ exp(-1/(No/2)) / exp(-1/(No/2)) },
>>> and if you notice, the upper and lower exp(...) expressions are the
>>> same (someone could say they must each be 0.5, if Pr(b=0|r=(x,y)) +
>>> Pr(b=1|r=(x,y)) = 1 is correct - or not, hmm?). Let's say that No = 1
>>> (i.e. for SNR = 0 dB, No = 10^(-SNR/10) = 1), so the expression
>>> exp(-1/(No/2)) will equal 0.1353, and we now have that those two (upper
>>> and lower) probabilities are the same, but their sum is not 1!!! Can
>>> you please explain to me whether I am correct or wrong with my claims.
>>> Does the sum of Pr(b=0|r=(x,y)) and Pr(b=1|r=(x,y)) have to equal 1
>>> when we calculate LLRs? If that is true, how do you view my simple
>>> example - am I correct? (And let me remind you, I get very good results
>>> with my calculation of these LLRs (on QPSK, 16QAM...), and in my case,
>>> as I explained, the sum of these two, upper and lower, probabilities is
>>> not 1!) Can you please give some kind of explanation of this.
>> What you are calculating above is p(r|b) (the likelihood of b), not
>> p(b|r) (the a posteriori probability of b). The two are related as:
>>
>> p(b|r) = p(r|b).p(b)/p(r)
>>
>> It is true that p(b=1|r) + p(b=0|r) = 1, but not in general true that
>> p(r|b=1) + p(r|b=0) = 1.
>>
> Oli and Steve, thanks for the replies.
> When you said "What you are calculating above is p(r|b) (the likelihood
> of b), not p(b|r) (the a posteriori probability of b)": 1) do you mean
> my procedure is right or not; 2) or do you mean that the general formula
> has Pr(b=0/1|r=(x,y)) in it, not P(r=(x,y)|b=0/1)? Can you one more time
> explain this to me?
Your procedure is correct! You are calculating the log-likelihood ratio
(LLR), which is defined as:

LLR(r) = p(r|b=1) / p(r|b=0)

When the prior probabilities of a 1 or a 0 are equal (i.e. p(b=0) =
p(b=1) = 0.5), the LLR will happen to be equal to p(b=1|r) / p(b=0|r),
due to the relationship between p(b|r) and p(r|b). However, in the
general case (where p(b=1) and p(b=0) are not equal), the relationship
is more complicated.
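To see that coincidence (and where it breaks), here's a short Python
sketch using the numbers from your earlier BPSK example (No = 1,
received symbol at the origin), plus a hypothetical unequal prior for
contrast:

    import math

    No = 1.0
    r = 0.0                   # received symbol at the origin, as in your example
    sym = {0: +1.0, 1: -1.0}  # BPSK mapping: bit 0 -> +1, bit 1 -> -1

    # Likelihoods p(r|b); both come out as exp(-1/(No/2)) ~ 0.1353 here,
    # and their sum is nowhere near 1 - they are densities, not posteriors.
    lik = {b: math.exp(-(r - sym[b]) ** 2 / (No / 2)) for b in sym}
    print(lik[0], lik[1], lik[0] + lik[1])

    def log_post_ratio(p1):
        # log[ p(b=1|r) / p(b=0|r) ] for prior p(b=1) = p1, via Bayes' rule
        return math.log((lik[1] * p1) / (lik[0] * (1.0 - p1)))

    print(math.log(lik[1] / lik[0]))  # log of the ratio above: 0.0 here
    print(log_post_ratio(0.5))        # equal priors: identical, 0.0
    print(log_post_ratio(0.9))        # unequal priors: no longer the same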
> And one more question: can one (or maybe both) of the 'probabilities'
> ('measures', as Steve said) { Pr(b=0|r=(x,y)) or Pr(b=1|r=(x,y)) }
> be > 1?
I don't really know much about measure theory, I'm afraid. But as far
as I'm aware, it's always true that p(b=1|r) + p(b=0|r) = 1, and you
can't have a negative probability, so neither of them can be > 1.

--
Oli