On Fri, 21 Apr 2006 16:01:49 -0700, Tim Wescott <tim@seemywebsite.com>
wrote:
>Eric Jacobsen wrote:
>
>> On Fri, 21 Apr 2006 16:28:51 GMT, Oli Filth <catch@olifilth.co.uk>
>> wrote:
>>>john said the following on 21/04/2006 16:59:
>>>
>>>>Oli Filth wrote:
>>>>
>>>>Thank you for the helpful replies. Intuitively, I think that the hard
>>>>decision coding gain should be the same whether FSK or BPSK is used, as
>>>>long as the uncoded BER entering the Viterbi decoder is the same in
>>>>each case. But I was wondering if the specifics of the noise
>>>>distribution influence the coding gain.
>>>
>>>As long as the errors are independent, I would imagine the distribution
>>>is irrelevant. By definition, in hard-decision decoding, the error
>>>magnitudes (statistically given by noise distribution) are not used.
>>>Therefore, the only thing directly affected by noise is pre-decoder bit
>>>errors.
>>
>> The distribution does matter for a lot of coding systems including
>> convolutional coding. If the errors are clumped and about as long as
>> the constraint length or longer, the decoder will have a much harder
>> time maintaining the proper path through the trellis than if the
>> errors are randomly distributed.
>>
>> This is often why convolutional codes are the inner codes for
>> concatenated coding systems in Gaussian channels...they work well on
>> randomly distributed errors. If the errors are clumped then block
>> codes (like RS) are often a better choice.
>>
>I think you're confusing the noise's probability distribution with the
>time-domain characteristics of the noise. Noise can be bursty or not,
>independently of whether it is Gaussian, uniformly distributed, a Cauchy
>density, bivalued, or anything else.
I thought it was pretty clear that this part of the discussion was
talking about bit error distribution. Clearly that's related to the
channel characteristics (often something other than the noise), but it
was on point to the OP's concerns.
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
Reply by Tim Wescott●April 21, 2006
Eric Jacobsen wrote:
> On Fri, 21 Apr 2006 16:28:51 GMT, Oli Filth <catch@olifilth.co.uk>
> wrote:
>
>
>>john said the following on 21/04/2006 16:59:
>>
>>>Oli Filth wrote:
>>>
>>>Thank you for the helpful replies. Intuitively, I think that the hard
>>>decision coding gain should be the same whether FSK or BPSK is used, as
>>>long as the uncoded BER entering the Viterbi decoder is the same in
>>>each case. But I was wondering if the specifics of the noise
>>>distribution influence the coding gain.
>>
>>As long as the errors are independent, I would imagine the distribution
>>is irrelevant. By definition, in hard-decision decoding, the error
>>magnitudes (statistically given by noise distribution) are not used.
>>Therefore, the only thing directly affected by noise is pre-decoder bit
>>errors.
>
>
> The distribution does matter for a lot of coding systems including
> convolutional coding. If the errors are clumped and about as long as
> the constraint length or longer, the decoder will have a much harder
> time maintaining the proper path through the trellis than if the
> errors are randomly distributed.
>
> This is often why convolutional codes are the inner codes for
> concatenated coding systems in Gaussian channels...they work well on
> randomly distributed errors. If the errors are clumped then block
> codes (like RS) are often a better choice.
>
I think you're confusing the noise's probability distribution with the
time-domain characteristics of the noise. Noise can be bursty or not,
independently of whether it is Gaussian, uniformly distributed, a Cauchy
density, bivalued, or anything else.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Posting from Google? See http://cfaj.freeshell.org/google/
Reply by Tim Wescott●April 21, 2006
john wrote:
> Oli Filth wrote:
>
>>Oli Filth wrote:
>>
>>>Tim Wescott wrote:
>>>
>>>>john wrote:
>>>>
>>>>
>>>>>Hello all,
>>>>>
>>>>>I have a question about coding gain for a convolutional code. In the
>>>>>books that I have, the coding gain is presented in the form of graph of
>>>>>BER vs Eb/No for a given modulation format, typically BPSK. The coding
>>>>>gain is the horizontal distance between the uncoded and coded curves on
>>>>>the graph.
>>>>>
>>>>>My question is, what if I change the modulation format to FSK instead?
>>>>>I understand that both curves (coded and uncoded) will shift to the
>>>>>right and change shape a bit, but will the coding gain (horizontal
>>>>>distance between them) stay the same?
>>>>>
>>>>>If it matters, at this point I am only considering hard decision, rate
>>>>>1/2, K=7.
>>>>>
>>>>
>>>>Since an
>>>>incoherently detected FSK signal isn't going to have the same shape to
>>>>its waterfall curve, the vertical distance (coding gain) will be different.
>>>
>>>Coding gain is generally defined as the *horizontal* gain, i.e. the
>>>reduction in Eb/No required for a given BER.
>>>
>>>In fact, I think the *vertical* gain (BER improvement at a given
>>>uncoded BER) will remain constant, at least for hard-decision.
>>
>>As an addendum, I believe that asymptotic coding gain (ACG) remains
>>constant too. As Eb/No goes to infinity, the probability of
>>non-nearest-neighbour errors goes to zero more quickly than that of
>>nearest-neighbour errors, and such errors are therefore negligible.
>>Nearest-neighbour error distances are defined by the minimum/free
>>distance of the code, which doesn't change with modulation scheme, so
>>the ACG will be the same.
>>
>>
>>--
>>Oli
>
>
> Thank you for the helpful replies. Intuitively, I think that the hard
> decision coding gain should be the same whether FSK or BPSK is used, as
> long as the uncoded BER entering the Viterbi decoder is the same in
> each case. But I was wondering if the specifics of the noise
> distribution influence the coding gain.
>
> For BPSK, assuming perfect synchronization, we apply a sin(x)/x filter to
> white Gaussian noise and take the real part. The result is real,
> Gaussian, and at zero hertz. For noncoherent FSK, we apply bandpass
> sin(x)/x filters at plus and minus the deviation, then take the
> magnitudes of each filter output and difference them. The resulting
> noise is not Gaussian -- I think it is Rayleigh or Rician but I
> honestly don't remember the details.
>
> John
>
Yes, distribution _does_ matter, sometimes a _lot_ -- see my post about
atmospheric discharge noise for an extreme example. Coding 'gain' is
only a valid measure for a given noise distribution, modulation format
and demodulator design. It's a handy measure for reducing overall
systems cost (should I go with the 1GW transmitter and no FEC, or the
1kW transmitter and a 4:1 code?), but beyond that it has little to
recommend itself.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Posting from Google? See http://cfaj.freeshell.org/google/
Reply by Tim Wescott●April 21, 2006
Oli Filth wrote:
> Tim Wescott wrote:
>
>>john wrote:
>>
>>
>>>Hello all,
>>>
>>>I have a question about coding gain for a convolutional code. In the
>>>books that I have, the coding gain is presented in the form of graph of
>>>BER vs Eb/No for a given modulation format, typically BPSK. The coding
>>>gain is the horizontal distance between the uncoded and coded curves on
>>>the graph.
>>>
>>>My question is, what if I change the modulation format to FSK instead?
>>>I understand that both curves (coded and uncoded) will shift to the
>>>right and change shape a bit, but will the coding gain (horizontal
>>>distance between them) stay the same?
>>>
>>>If it matters, at this point I am only considering hard decision, rate
>>>1/2, K=7.
>>>
>>
>>I doubt that the coding gain will stay the same. What the code really
>>does is shift the uncoded line to the left a certain amount.
>
>
> As far as I know, use of coding shifts the BER curve up and to the
> left, which is why it usually crosses over with the uncoded curve (this
> wouldn't happen if it were just a left shift).
>
Dangit -- I had my right and left mixed up.
>
>
>>Since an
>>incoherently detected FSK signal isn't going to have the same shape to
>>its waterfall curve, the vertical distance (coding gain) will be different.
>
>
> Coding gain is generally defined as the *horizontal* gain, i.e. the
> reduction in Eb/No required for a given BER.
>
> In fact, I think the *vertical* gain (BER improvement at a given
> uncoded BER) will remain constant, at least for hard-decision.
>
That's the horizontal axis in my books (or at least in the graph in my
head -- the one where I can't tell right from left).
I should have said that explicitly, instead of being obscure. So I'll
try again: The coder doesn't know squat about Eb, No, or anything else
on that side of the detector. All it knows is what comes out of the
detector. So the coding 'gain' is just a pretend number that depends
not only on the modulation scheme and the demodulator, but on the nature
of the channel.
In fact, my first experience with radio modems was at medium
frequencies (around 300 kHz). Radio on this band is dominated by
electrostatic discharge noise. Such noise has a Cauchy-like density
function with effectively an infinite variance -- so barring absurd
power increases on the transmitter you will _always_ have raw bit
errors, which means that the coding 'gain' is effectively infinite.
Since an infinite coding 'gain' is absurd, you find me using quote marks
around the 'g' word.
For details on the radio, its whys and wherefores, see my master's
thesis: http://www.wescottdesign.com/articles/MSK/mskTop.html.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Posting from Google? See http://cfaj.freeshell.org/google/
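Tim's heavy-tail point above can be made quantitative with a small sketch (an idealized model of mine, not from the thread): with additive standard Cauchy noise, the raw error probability of antipodal signaling falls off only like 1/A with signal amplitude A, instead of the exponential Gaussian waterfall, so extra transmit power buys almost nothing and some raw bit errors always remain.

```python
# Compare tail probabilities of standard Cauchy vs unit Gaussian noise for
# antipodal signaling at amplitude a: an error occurs when noise < -a.
import math

def pe_cauchy(a):
    # P(standard Cauchy < -a) = 1/2 - atan(a)/pi, decays only like 1/(pi*a)
    return 0.5 - math.atan(a) / math.pi

def pe_gauss(a, sigma=1.0):
    # P(N(0, sigma^2) < -a) = Q(a/sigma), decays exponentially in a^2
    return 0.5 * math.erfc(a / (sigma * math.sqrt(2)))

for a in (1, 10, 100):
    print(a, pe_cauchy(a), pe_gauss(a))
```

At amplitude 10 the Gaussian error probability is already below 1e-20, while the Cauchy one is still about 3%; multiplying the amplitude by 10 only divides the Cauchy error rate by about 10.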
Reply by john●April 21, 2006
Eric Jacobsen wrote:
> On Fri, 21 Apr 2006 16:28:51 GMT, Oli Filth <catch@olifilth.co.uk>
> wrote:
>
> >john said the following on 21/04/2006 16:59:
> >> Oli Filth wrote:
> >>>
> >> Thank you for the helpful replies. Intuitively, I think that the hard
> >> decision coding gain should be the same whether FSK or BPSK is used, as
> >> long as the uncoded BER entering the Viterbi decoder is the same in
> >> each case. But I was wondering if the specifics of the noise
> >> distribution influence the coding gain.
> >
> >As long as the errors are independent, I would imagine the distribution
> >is irrelevant. By definition, in hard-decision decoding, the error
> >magnitudes (statistically given by noise distribution) are not used.
> >Therefore, the only thing directly affected by noise is pre-decoder bit
> >errors.
>
> The distribution does matter for a lot of coding systems including
> convolutional coding. If the errors are clumped and about as long as
> the constraint length or longer, the decoder will have a much harder
> time maintaining the proper path through the trellis than if the
> errors are randomly distributed.
>
> This is often why convolutional codes are the inner codes for
> concatenated coding systems in Gaussian channels...they work well on
> randomly distributed errors. If the errors are clumped then block
> codes (like RS) are often a better choice.
>
> Eric Jacobsen
> Minister of Algorithms, Intel Corp.
> My opinions may not be Intel's opinions.
> http://www.ericjacobsen.org
I agree about bursty errors. It turns out that the noncoherent FSK
noise looks fairly white. The spectrum rolls off slowly, about 10 dB
over 0 to Fs/2. The kurtosis is about 3. So from that I'd say the
coding gain is not going to be much different from BPSK's.
John
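John's "kurtosis is about 3" measurement can be cross-checked with a quick Monte-Carlo sketch (an idealized model; the function names are mine, not from the thread). Treating each branch's filter output as independent complex Gaussian noise, the noise-only decision statistic is a difference of two Rayleigh envelopes, whose kurtosis works out near 3.1:

```python
# Sample kurtosis of the noise-only noncoherent FSK decision statistic,
# modeled as the difference of two independent Rayleigh envelopes.
import random

def sample_kurtosis(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n   # sample variance
    m4 = sum((x - m) ** 4 for x in xs) / n   # sample fourth moment
    return m4 / (m2 * m2)

def fsk_noise_stat(n, seed=3):
    # each branch envelope: magnitude of a unit-variance complex Gaussian
    rng = random.Random(seed)
    def env():
        return abs(complex(rng.gauss(0, 1), rng.gauss(0, 1)))
    return [env() - env() for _ in range(n)]

k = sample_kurtosis(fsk_noise_stat(100_000))
print(round(k, 2))   # typically near 3.1, consistent with "about 3"
```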
Reply by Eric Jacobsen●April 21, 2006
On Fri, 21 Apr 2006 16:28:51 GMT, Oli Filth <catch@olifilth.co.uk>
wrote:
>john said the following on 21/04/2006 16:59:
>> Oli Filth wrote:
>>>
>> Thank you for the helpful replies. Intuitively, I think that the hard
>> decision coding gain should be the same whether FSK or BPSK is used, as
>> long as the uncoded BER entering the Viterbi decoder is the same in
>> each case. But I was wondering if the specifics of the noise
>> distribution influence the coding gain.
>
>As long as the errors are independent, I would imagine the distribution
>is irrelevant. By definition, in hard-decision decoding, the error
>magnitudes (statistically given by noise distribution) are not used.
>Therefore, the only thing directly affected by noise is pre-decoder bit
>errors.
The distribution does matter for a lot of coding systems including
convolutional coding. If the errors are clumped and about as long as
the constraint length or longer, the decoder will have a much harder
time maintaining the proper path through the trellis than if the
errors are randomly distributed.
This is often why convolutional codes are the inner codes for
concatenated coding systems in Gaussian channels...they work well on
randomly distributed errors. If the errors are clumped then block
codes (like RS) are often a better choice.
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
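Eric's burst-versus-random point can be demonstrated with a toy hard-decision Viterbi decoder (a sketch of mine, not from the thread; it uses the small K=3, rate-1/2 code with octal generators 7 and 5 rather than the K=7 code under discussion, purely to keep the trellis small -- the behaviour is the same in kind):

```python
import random

G = [(1, 1, 1), (1, 0, 1)]              # generator taps, octal 7 and 5

def encode(bits):
    """Convolutionally encode, appending K-1 = 2 tail zeros to terminate."""
    state = (0, 0)                      # (previous bit, bit before that)
    out = []
    for b in list(bits) + [0, 0]:
        reg = (b,) + state
        for g in G:
            out.append(sum(r & t for r, t in zip(reg, g)) & 1)
        state = (b, state[0])
    return out

def viterbi_decode(rx, n_bits):
    """Hard-decision Viterbi: minimise Hamming distance through the trellis."""
    INF = float("inf")
    pm = [0, INF, INF, INF]             # path metrics, start in state 0
    paths = [[], [], [], []]
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_pm, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if pm[s] == INF:
                continue
            s1, s0 = (s >> 1) & 1, s & 1
            for b in (0, 1):
                reg = (b, s1, s0)
                exp = [sum(x & t for x, t in zip(reg, g)) & 1 for g in G]
                m = pm[s] + sum(a != e for a, e in zip(r, exp))
                ns = (b << 1) | s1
                if m < new_pm[ns]:
                    new_pm[ns], new_paths[ns] = m, paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[pm.index(min(pm))][:n_bits]   # drop the tail bits

def residual_errors(n_bits, n_errs, burst, seed=1):
    """Flip n_errs coded bits (contiguous if burst) and count decoded errors."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n_bits)]
    rx = encode(bits)
    if burst:
        start = rng.randrange(len(rx) - n_errs) if n_errs else 0
        flips = range(start, start + n_errs)
    else:
        flips = rng.sample(range(len(rx)), n_errs)
    for i in flips:
        rx[i] ^= 1
    dec = viterbi_decode(rx, n_bits)
    return sum(a != b for a, b in zip(dec, bits))
```

Comparing, say, `residual_errors(200, 6, burst=False)` with `residual_errors(200, 6, burst=True)` typically shows the scattered errors fully corrected while a burst of the same size, longer than the constraint length, leaves decoded errors behind.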
Reply by Oli Filth●April 21, 2006
john said the following on 21/04/2006 16:59:
> Oli Filth wrote:
>> Oli Filth wrote:
>>> Tim Wescott wrote:
>>>> john wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> I have a question about coding gain for a convolutional code. In the
>>>>> books that I have, the coding gain is presented in the form of graph of
>>>>> BER vs Eb/No for a given modulation format, typically BPSK. The coding
>>>>> gain is the horizontal distance between the uncoded and coded curves on
>>>>> the graph.
>>>>>
>>>>> My question is, what if I change the modulation format to FSK instead?
>>>>> I understand that both curves (coded and uncoded) will shift to the
>>>>> right and change shape a bit, but will the coding gain (horizontal
>>>>> distance between them) stay the same?
>>>>>
>>>>> If it matters, at this point I am only considering hard decision, rate
>>>>> 1/2, K=7.
>>>>>
>>>> Since an
>>>> incoherently detected FSK signal isn't going to have the same shape to
>>>> its waterfall curve, the vertical distance (coding gain) will be different.
>>> Coding gain is generally defined as the *horizontal* gain, i.e. the
>>> reduction in Eb/No required for a given BER.
>>>
>>> In fact, I think the *vertical* gain (BER improvement at a given
>>> uncoded BER) will remain constant, at least for hard-decision.
>> As an addendum, I believe that asymptotic coding gain (ACG) remains
>> constant too. As Eb/No goes to infinity, the probability of
>> non-nearest-neighbour errors goes to zero more quickly than that of
>> nearest-neighbour errors, and such errors are therefore negligible.
>> Nearest-neighbour error distances are defined by the minimum/free
>> distance of the code, which doesn't change with modulation scheme, so
>> the ACG will be the same.
>>
> Thank you for the helpful replies. Intuitively, I think that the hard
> decision coding gain should be the same whether FSK or BPSK is used, as
> long as the uncoded BER entering the Viterbi decoder is the same in
> each case. But I was wondering if the specifics of the noise
> distribution influence the coding gain.
As long as the errors are independent, I would imagine the distribution
is irrelevant. By definition, in hard-decision decoding, the error
magnitudes (statistically given by noise distribution) are not used.
Therefore, the only thing directly affected by noise is pre-decoder bit
errors.
--
Oli
Reply by john●April 21, 2006
Oli Filth wrote:
> Oli Filth wrote:
> > Tim Wescott wrote:
> > > john wrote:
> > >
> > > > Hello all,
> > > >
> > > > I have a question about coding gain for a convolutional code. In the
> > > > books that I have, the coding gain is presented in the form of graph of
> > > > BER vs Eb/No for a given modulation format, typically BPSK. The coding
> > > > gain is the horizontal distance between the uncoded and coded curves on
> > > > the graph.
> > > >
> > > > My question is, what if I change the modulation format to FSK instead?
> > > > I understand that both curves (coded and uncoded) will shift to the
> > > > right and change shape a bit, but will the coding gain (horizontal
> > > > distance between them) stay the same?
> > > >
> > > > If it matters, at this point I am only considering hard decision, rate
> > > > 1/2, K=7.
> > > >
> > > Since an
> > > incoherently detected FSK signal isn't going to have the same shape to
> > > its waterfall curve, the vertical distance (coding gain) will be different.
> >
> > Coding gain is generally defined as the *horizontal* gain, i.e. the
> > reduction in Eb/No required for a given BER.
> >
> > In fact, I think the *vertical* gain (BER improvement at a given
> > uncoded BER) will remain constant, at least for hard-decision.
>
> As an addendum, I believe that asymptotic coding gain (ACG) remains
> constant too. As Eb/No goes to infinity, the probability of
> non-nearest-neighbour errors goes to zero more quickly than that of
> nearest-neighbour errors, and such errors are therefore negligible.
> Nearest-neighbour error distances are defined by the minimum/free
> distance of the code, which doesn't change with modulation scheme, so
> the ACG will be the same.
>
>
> --
> Oli
Thank you for the helpful replies. Intuitively, I think that the hard
decision coding gain should be the same whether FSK or BPSK is used, as
long as the uncoded BER entering the Viterbi decoder is the same in
each case. But I was wondering if the specifics of the noise
distribution influence the coding gain.
For BPSK, assuming perfect synchronization, we apply a sin(x)/x filter to
white Gaussian noise and take the real part. The result is real,
Gaussian, and at zero hertz. For noncoherent FSK, we apply bandpass
sin(x)/x filters at plus and minus the deviation, then take the
magnitudes of each filter output and difference them. The resulting
noise is not Gaussian -- I think it is Rayleigh or Rician but I
honestly don't remember the details.
John
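The detector john describes can be sketched in simulation (an idealized model: the bandpass sin(x)/x filtering is abstracted away and each branch's filter output is modeled directly as complex Gaussian; function names are hypothetical, not from the thread):

```python
import math
import random

def branch_magnitudes(n, amp=0.0, sigma=1.0, seed=2):
    """One matched-filter branch: complex Gaussian noise, optionally plus a
    real signal amplitude. Returns envelopes: Rician when amp > 0,
    Rayleigh when amp == 0."""
    rng = random.Random(seed)
    return [abs(complex(amp + rng.gauss(0, sigma), rng.gauss(0, sigma)))
            for _ in range(n)]

def decision_stat(n, amp, seed=2):
    """Noncoherent FSK decision variable: signal-branch envelope minus
    noise-only-branch envelope (independent noise per branch)."""
    m1 = branch_magnitudes(n, amp=amp, seed=seed)
    m2 = branch_magnitudes(n, amp=0.0, seed=seed + 1)
    return [a - b for a, b in zip(m1, m2)]
```

With no signal, both envelopes are Rayleigh distributed and the difference is symmetric about zero but not Gaussian; with signal on one branch, that branch's envelope becomes Rician and the mean of the statistic moves positive, which is what makes the sign decision work.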
Reply by Oli Filth●April 21, 2006
Oli Filth wrote:
> Tim Wescott wrote:
> > john wrote:
> >
> > > Hello all,
> > >
> > > I have a question about coding gain for a convolutional code. In the
> > > books that I have, the coding gain is presented in the form of graph of
> > > BER vs Eb/No for a given modulation format, typically BPSK. The coding
> > > gain is the horizontal distance between the uncoded and coded curves on
> > > the graph.
> > >
> > > My question is, what if I change the modulation format to FSK instead?
> > > I understand that both curves (coded and uncoded) will shift to the
> > > right and change shape a bit, but will the coding gain (horizontal
> > > distance between them) stay the same?
> > >
> > > If it matters, at this point I am only considering hard decision, rate
> > > 1/2, K=7.
> > >
> > Since an
> > incoherently detected FSK signal isn't going to have the same shape to
> > its waterfall curve, the vertical distance (coding gain) will be different.
>
> Coding gain is generally defined as the *horizontal* gain, i.e. the
> reduction in Eb/No required for a given BER.
>
> In fact, I think the *vertical* gain (BER improvement at a given
> uncoded BER) will remain constant, at least for hard-decision.
As an addendum, I believe that asymptotic coding gain (ACG) remains
constant too. As Eb/No goes to infinity, the probability of
non-nearest-neighbour errors goes to zero more quickly than that of
nearest-neighbour errors, and such errors are therefore negligible.
Nearest-neighbour error distances are defined by the minimum/free
distance of the code, which doesn't change with modulation scheme, so
the ACG will be the same.
--
Oli
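For concreteness, the standard textbook asymptotic-coding-gain formulas (soft decision: 10·log10(R·d_free); hard decision: 10·log10(R·d_free/2)) can be applied to the thread's rate-1/2, K=7 code, whose free distance is 10 (a worked number added here, not stated in the thread):

```python
# Asymptotic coding gain from the usual textbook approximations.
import math

def acg_soft_db(rate, d_free):
    # soft-decision ACG = 10*log10(R * d_free)
    return 10 * math.log10(rate * d_free)

def acg_hard_db(rate, d_free):
    # hard-decision ACG = 10*log10(R * d_free / 2)
    return 10 * math.log10(rate * d_free / 2)

print(round(acg_soft_db(0.5, 10), 2))   # ~6.99 dB
print(round(acg_hard_db(0.5, 10), 2))   # ~3.98 dB
```

Note the roughly 3 dB that hard decisions give up versus soft, and that neither formula involves the modulation scheme, which is Oli's point about the ACG being unchanged.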
Reply by Oli Filth●April 21, 2006
Tim Wescott wrote:
> john wrote:
>
> > Hello all,
> >
> > I have a question about coding gain for a convolutional code. In the
> > books that I have, the coding gain is presented in the form of graph of
> > BER vs Eb/No for a given modulation format, typically BPSK. The coding
> > gain is the horizontal distance between the uncoded and coded curves on
> > the graph.
> >
> > My question is, what if I change the modulation format to FSK instead?
> > I understand that both curves (coded and uncoded) will shift to the
> > right and change shape a bit, but will the coding gain (horizontal
> > distance between them) stay the same?
> >
> > If it matters, at this point I am only considering hard decision, rate
> > 1/2, K=7.
> >
> I doubt that the coding gain will stay the same. What the code really
> does is shift the uncoded line to the left a certain amount.
As far as I know, use of coding shifts the BER curve up and to the
left, which is why it usually crosses over with the uncoded curve (this
wouldn't happen if it were just a left shift).
> Since an
> incoherently detected FSK signal isn't going to have the same shape to
> its waterfall curve, the vertical distance (coding gain) will be different.
Coding gain is generally defined as the *horizontal* gain, i.e. the
reduction in Eb/No required for a given BER.
In fact, I think the *vertical* gain (BER improvement at a given
uncoded BER) will remain constant, at least for hard-decision.
--
Oli
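The "horizontal distance" definition above can be made concrete with a small sketch (mine, not from the thread): pick a target BER, bisect each curve for the Eb/N0 that achieves it, and subtract. The uncoded curve is the standard BPSK formula Pb = Q(sqrt(2·Eb/N0)); the "coded" curve is a deliberately artificial stand-in (the same formula with a steeper argument), used only to exercise the measurement, not a real Viterbi-decoded curve.

```python
import math

def q(x):
    # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_uncoded_bpsk(ebno_db):
    ebno = 10 ** (ebno_db / 10)
    return q(math.sqrt(2 * ebno))

def ber_coded_toy(ebno_db):
    # toy stand-in for a coded curve: steeper argument, so a fixed
    # horizontal offset of 10*log10(2.5) ~ 3.98 dB at every BER
    ebno = 10 ** (ebno_db / 10)
    return q(math.sqrt(5 * ebno))

def ebno_for_ber(ber_fn, target, lo=-5.0, hi=20.0):
    # BER is monotonically decreasing in Eb/N0, so bisect
    for _ in range(100):
        mid = (lo + hi) / 2
        if ber_fn(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def coding_gain_db(target_ber):
    return (ebno_for_ber(ber_uncoded_bpsk, target_ber)
            - ebno_for_ber(ber_coded_toy, target_ber))
```

For a real code the coded curve has a different shape, so `coding_gain_db` would vary with the target BER (and crosses through zero at low Eb/N0, where coding hurts), which is exactly why coding gain must be quoted at a stated BER.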