Reply by Tom Gardner December 10, 2013
You're welcome!

You can tell I'm not a "consultant" now; if I were, I would have used
that link myself and charged you for the information I discovered therein :)

On 10/12/13 12:40, jameer411@gmail.com wrote:
> Thanks a lot for your valuable suggestion. :)
>
> On Tuesday, December 10, 2013 4:57:12 PM UTC+5:30, Tom Gardner wrote:
>> On 10/12/13 10:08, jameer411@gmail.com wrote:
>>> Hi all,
>>>
>>> Can anyone suggest references about practical SOVA implementation?
>>>
>>> Thanks
>>> Ali
>>
>> One starting point would be
>> http://lmgtfy.com/?q=Practical+SOVA+Implementation
Reply by jameer411@gmail.com December 10, 2013
Thanks a lot for your valuable suggestion. :)

On Tuesday, December 10, 2013 4:57:12 PM UTC+5:30, Tom Gardner wrote:
> On 10/12/13 10:08, jameer411@gmail.com wrote:
>> Hi all,
>>
>> Can anyone suggest references about practical SOVA implementation?
>>
>> Thanks
>> Ali
>
> One starting point would be
> http://lmgtfy.com/?q=Practical+SOVA+Implementation
Reply by Tom Gardner December 10, 2013
On 10/12/13 10:08, jameer411@gmail.com wrote:
> Hi all,
>
> Can anyone suggest references about practical SOVA implementation?
>
> Thanks
> Ali
One starting point would be http://lmgtfy.com/?q=Practical+SOVA+Implementation
Reply by jameer411@gmail.com December 10, 2013
Hi all,

Can anyone suggest references about practical SOVA implementation?

Thanks
Ali

On Saturday, July 2, 2011 11:29:22 PM UTC+5:30, aizza ahmed wrote:
> hi all,
> i am back again with questions :-). as i am into the viterbi decoder, i
> am stuck with understanding of LLRs and soft demodulated output. the
> other day, my post on comp.dsp was filled with code; this time
> (according to Tim Wescott's advice) i am putting just the question.
>
> 1. I could see many papers proposing soft demodulated outputs. lets say
> a value after soft demodulation is fed into a viterbi decoder. say
> values between -1 and +1 volts are partitioned into 8 regions as shown
> below. They use it for SOVA.
>
>   regions:  -1   -0.75   -0.5   -0.25    0   0.25   0.50   0.75   +1
>   outputs:     111    110     101    100    011    010    001    000
>
> so these are used to compute euclidean distance when doing trellis
> decoding. Here we are nowhere doing any log-likelihood ratio
> computation (i meant bit-by-bit soft bits).
>
> 2. now the doubt is: what is an N-bit LLR, which they say they use for
> the viterbi decoding algorithm (soft variant)? they do one_distance -
> zero_distance for each bit and extract some value for each bit.
>
> say 000 will fetch Yr Yr Yr (Yr is the real part of the signal),
> according to this link.
>
> soft quantized output is not equal to LLR, right? how can both be sent
> to the decoding algorithm, or is the quantized output for one algorithm
> and the LLR for a different one? please explain..
>
> Thanks
> A.Ahmed
Reply by David Drumm December 4, 2013
You can go here: 

http://www.hindawi.com/journals/jece/2007/053517/abs/

and download the PDF, which has a nice explanation, at the top of page 2,
of the relationship between LLRs and Euclidean distance. The soft
demapping/decision is a quantization of the Euclidean distance.
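
To make that relationship concrete, here is a minimal Python sketch
(illustrative only, not from the paper; the BPSK mapping, the noise
variance sigma2, and the function names are assumptions): for BPSK in
AWGN the exact LLR reduces to a scaled difference of squared Euclidean
distances, and soft demapping is a coarse quantization of that value.

import numpy as np

def bpsk_llr(y, sigma2):
    """Exact LLR of one BPSK symbol in AWGN (bit 0 -> +1, bit 1 -> -1).

    Expanding the Gaussian likelihoods, the LLR is the difference of the
    two squared Euclidean distances, scaled by the noise variance:
        L(y) = (|y + 1|^2 - |y - 1|^2) / (2 * sigma2) = 2 * y / sigma2
    """
    return 2.0 * y / sigma2

def soft_demap(y, bits=3):
    """Quantize y in [-1, +1] to a signed soft decision with `bits` bits.

    This is the "soft demapping" step: it keeps the sign (the hard
    decision) plus a few bits of reliability, i.e. a coarse version of
    the LLR above.
    """
    levels = 2 ** (bits - 1)                       # 4 levels per sign
    q = np.clip(np.round(y * levels), -levels, levels - 1)
    return int(q)                                  # -4 .. +3 for 3 bits

# Example: a weak positive sample is a low-confidence bit-0 decision.
print(bpsk_llr(0.1, sigma2=0.5))   # 0.4 (small positive LLR)
print(soft_demap(0.1))             # 0   (lowest positive region)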

Reply by commsignal December 3, 2013
The term you need to search for is "soft demapping".

Reply by jameer411@gmail.com December 3, 2013
Hi Friends,

I am currently working on a SOVA implementation for MMSE-LE-based turbo equalization. I have one doubt. I understand that SOVA gives soft information about the decoded bits (LLRs), which is to be fed back to the SISO MMSE equalizer as soft input. Here the LLRs from SOVA are decoded information, whereas the SISO equalizer works on coded data symbols. So do I need to apply coding along with the interleaver to the LLRs from SOVA while feeding them to the SISO equalizer, or is the interleaver alone enough?

Thanks
Ali


On Saturday, July 2, 2011 11:29:22 PM UTC+5:30, aizza ahmed wrote:
> hi all,
> i am back again with questions :-). as i am into the viterbi decoder, i
> am stuck with understanding of LLRs and soft demodulated output. [...]
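
A minimal Python sketch of the feedback loop described in the question
above, as turbo equalization is commonly drawn. mmse_le_siso and
siso_decode are hypothetical placeholders for the blocks the question
names, not real library calls, and the arrangement shown (the decoder's
extrinsic LLRs on the coded bits going back through the interleaver,
with nothing re-encoded) is one common structure, not a definitive
answer.

import numpy as np

def mmse_le_siso(y, apriori_llr):
    """Placeholder for the SISO MMSE linear equalizer: takes received
    samples plus a priori LLRs on the interleaved coded bits and returns
    extrinsic LLRs on those same bits."""
    raise NotImplementedError("stand-in for the real equalizer")

def siso_decode(llr_coded):
    """Placeholder for the SISO decoder (e.g. SOVA): returns LLRs on the
    information bits and extrinsic LLRs on the coded bits."""
    raise NotImplementedError("stand-in for the real decoder")

def turbo_equalize(y, perm, n_iters=5):
    """One common shape of the MMSE-LE turbo-equalization loop; `perm`
    is the transmit interleaver (a permutation of coded-bit indices)."""
    inv_perm = np.argsort(perm)              # the matching deinterleaver
    llr_apriori = np.zeros(len(perm))        # no prior on the first pass
    llr_info = None
    for _ in range(n_iters):
        llr_eq = mmse_le_siso(y, llr_apriori)    # extrinsic, interleaved order
        llr_info, llr_coded_ext = siso_decode(llr_eq[inv_perm])  # code order
        llr_apriori = llr_coded_ext[perm]        # back to interleaved order
    return llr_info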
Reply by aizza ahmed July 3, 2011
On Jul 3, 10:32 am, aizza ahmed <aizzaah...@gmail.com> wrote:
> [...]
> Sir, please let me know whether my understanding is correct:
>
> LLR - MAP decoder
> quantized soft output - Soft Output Viterbi Algorithm (SOVA)
OK, i studied a little more and here is my understanding; after
completing my study i will post the final differences.

Convolutional encoder -----> decoders:

1. Viterbi (ML decoder): decodes a sequence of bits. Types of Viterbi
decoders: hard-decision and soft-decision (the soft one uses soft
demodulation quantized values).

2. Modified Viterbi (Soft Output Viterbi Algorithm): uses LLRs as inputs.

3. MAP decoder (BCJR algorithm): uses LLRs as inputs.

And a few notes from the earlier link: the MAP decoder is an inherently
soft-input, soft-output (SISO) algorithm, and it is very well suited to
iterative decoding (as it is used in turbo codes). The Viterbi algorithm
is an inherently hard-output algorithm and has to be modified to provide
soft outputs. (The modification resulted in the Soft Output Viterbi
Algorithm, generally called SOVA, which is approximately twice as complex
as the Viterbi algorithm, but not as complex as the MAP.)

Differences between the Viterbi algorithm and SOVA: for a convolutional
code, the exhaustive search for the MLSE can be avoided in the Viterbi
algorithm by making use of the trellis structure. For the MAP receiver,
the exhaustive search can be avoided in the BCJR (Bahl, Cocke, Jelinek,
Raviv) algorithm (Bahl et al. 1974). In contrast to the SOVA, it provides
us with the exact LLR value for a bit, not just an approximate one. The
price for this exact information is the higher complexity. The BCJR
algorithm has been known for a long time, but it did not become very
popular before its widespread application in turbo decoding.

Too much information to digest :-( Please clarify all the things i
pointed out above (if possible).

Thanks
A.Ahmed
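
To make item 1 above concrete, here is a small self-contained Python
sketch of a soft-decision Viterbi decoder. The rate-1/2, K=3 code with
generators (7,5) octal, the antipodal mapping (bit 0 -> +1.0), and all
names are illustrative assumptions, not anything specified in the
thread.

G = [(1, 1, 1), (1, 0, 1)]   # taps on (u, s1, s2): generators 7 and 5 octal

def encode_step(u, state):
    """One step of the rate-1/2, K=3 encoder: (output bit pair, next state)."""
    s1, s2 = state
    out = tuple((g[0] * u + g[1] * s1 + g[2] * s2) % 2 for g in G)
    return out, (u, s1)

def soft_viterbi(rx):
    """Soft-decision Viterbi over received pairs rx = [(y1, y2), ...],
    where the transmitter maps bit 0 -> +1.0 and bit 1 -> -1.0. The
    branch metric is the squared Euclidean distance to the ideal pair,
    i.e. exactly the "soft demodulation quantized values" case in item 1."""
    INF = float("inf")
    pm = [0.0, INF, INF, INF]     # path metrics; encoder starts in state 00
    paths = [[], [], [], []]      # surviving input sequences per state
    for y in rx:
        new_pm, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if pm[s] == INF:
                continue
            for u in (0, 1):
                out, (n1, n2) = encode_step(u, ((s >> 1) & 1, s & 1))
                ns = (n1 << 1) | n2
                ideal = [1.0 - 2.0 * b for b in out]      # 0 -> +1, 1 -> -1
                m = pm[s] + sum((yi - xi) ** 2 for yi, xi in zip(y, ideal))
                if m < new_pm[ns]:
                    new_pm[ns], new_paths[ns] = m, paths[s] + [u]
        pm, paths = new_pm, new_paths
    return paths[min(range(4), key=pm.__getitem__)]

# Noisy soft pairs for the input bits 1,0,1,1 (toy example, no tail bits);
# a hard-decision decoder would slice these to 0/1 first and throw away
# the per-symbol reliability that the Euclidean metric keeps.
rx = [(-0.9, -1.1), (-0.8, 0.9), (1.2, 0.7), (0.6, -1.0)]
print(soft_viterbi(rx))   # -> [1, 0, 1, 1]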
Reply by aizza ahmed July 3, 2011
On Jul 3, 2:20 am, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
> What really matters is what the decoder implementation is expecting.
> [...]

hello sir,

thanks for the explanation, and i dug out your posting from 2005:

----------
Do you mean to compare Viterbi and MAP decoders for a single
convolutional code used by itself? The difference isn't going to be
large, and will probably depend on the constraint length.

Or do you mean to compare the two when used in an iterative code like a
Turbo Code? There the difference is more significant.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
-------------

so what i understand from that conversation is that the LLR (as it is
also called MAP: LLR = MAP) is used in the MAP decoding algorithm, and
the soft quantized modulation output is used in the viterbi decoder
(SOVA).

now from this link

https://www.cresis.ku.edu/~rvc/projects/faq/map.htm#Question2

i found that:

1. The maximum likelihood algorithms (like the Viterbi algorithm) find
the most probable information sequence that was transmitted, while the
MAP algorithm finds the most probable information bit to have been
transmitted given the coded sequence. The information bits returned by
the MAP algorithm need not form a connected path through the trellis.

2. The error performance of the Viterbi and MAP algorithms is not much
different at high Eb/N0 and low BERs. But at low Eb/N0 and high BERs,
the MAP algorithm is found to outperform the Viterbi algorithm by quite
a margin.

3. The MAP algorithm is considerably more complex than the Viterbi
algorithm.

Sir, please let me know whether my understanding is correct:

LLR - MAP decoder
quantized soft output - Soft Output Viterbi Algorithm (SOVA)

Thanks
A.Ahmed
Reply by Eric Jacobsen July 2, 2011
On Sat, 2 Jul 2011 10:59:22 -0700 (PDT), aizza ahmed
<aizzaahmed@gmail.com> wrote:

>hi all,
>i am back again with questions :-). as i am into the viterbi decoder, i
>am stuck with understanding of LLRs and soft demodulated output. the
>other day, my post on comp.dsp was filled with code; this time
>(according to Tim Wescott's advice) i am putting just the question.
>
>1. I could see many papers proposing soft demodulated outputs. lets say
>a value after soft demodulation is fed into a viterbi decoder. say
>values between -1 and +1 volts are partitioned into 8 regions as shown
>below. They use it for SOVA.
>
>  regions:  -1   -0.75   -0.5   -0.25    0   0.25   0.50   0.75   +1
>  outputs:     111    110     101    100    011    010    001    000
>
>so these are used to compute euclidean distance when doing trellis
>decoding. Here we are nowhere doing any log-likelihood ratio
>computation (i meant bit-by-bit soft bits).
>
>2. now the doubt is: what is an N-bit LLR, which they say they use for
>the viterbi decoding algorithm (soft variant)? they do one_distance -
>zero_distance for each bit and extract some value for each bit.
>
>say 000 will fetch Yr Yr Yr (Yr is the real part of the signal),
>according to this link.
>
>soft quantized output is not equal to LLR, right? how can both be sent
>to the decoding algorithm, or is the quantized output for one algorithm
>and the LLR for a different one? please explain..
>
>Thanks
>A.Ahmed
What really matters is what the decoder implementation is expecting.
Usually, but not always, the three-bit soft decision values you used in
your example, or something similar, are provided in place of (i.e.,
essentially are used as) the LLR input values. Once in a while the
decoder will perform better if there's a remapping of the linear soft
decision to something else, but it really depends on what the decoder is
expecting.

If you're using a SOVA there probably exists some design example or a
white paper or an application note that describes what that particular
decoder expects, since if there is feedback (as for iterative decoding)
the output and input should be compatible.

Eric Jacobsen
http://www.ericjacobsen.org
http://www.dsprelated.com/blogs-1//Eric_Jacobsen.php
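
A minimal Python sketch of that "provided in place of the LLR" idea,
assuming the 8-region table from the question; the function names and
the recentring/scaling are illustrative assumptions, not a particular
decoder's interface.

def three_bit_code(y):
    """Quantize y in [-1, +1] into the question's 8-region table:
    111 for the most negative region down to 000 for the most positive."""
    idx = min(7, max(0, int((y + 1.0) // 0.25)))   # region index 0..7
    return 7 - idx                                 # 7 = 0b111 ... 0 = 0b000

def code_to_llr_surrogate(code):
    """Recentre the 3-bit code into a signed value usable in place of an
    LLR: sign = hard decision, magnitude = reliability. Any monotone
    remapping could be applied here if a particular decoder wants one."""
    return 2.0 * (3.5 - code)                      # 111 -> -7, ..., 000 -> +7

for y in (-0.9, -0.1, 0.1, 0.9):
    c = three_bit_code(y)
    print(f"y={y:+.1f}  code={c:03b}  llr-like={code_to_llr_surrogate(c):+.0f}")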