Hi Sir!
I really want the *right* answer, not the answer *I want*.
I myself had some doubts about my simulation, so what I did was use an
RS code of (255,127), which is definitely a rate 1/2 code.
So now I am getting a BER of 1e-4 at 5.5 dB.
I *think* this is correct, but I might be wrong.
I would really appreciate your opinion. I don't want to report any wrong
results.
Regards,
Chintan
Reply by dvsa...@yahoo.com●October 29, 2008
On Oct 26, 11:24 pm, "cpshah99" <cpsha...@rediffmail.com> wrote:
> the channel is AWGN and BPSK modulation:
>
> data=randint(1,16,256);
> data_pad=[zeros(1,223) data];
> code=rsenc(data_pad,255,239);
> actual_code=code(224:end); % which gives me 32 symbols
> But the way I am scaling the noise is:
>
> sigma=sqrt(1/(2*R_c*snr_lin));
>
> where R_c=239/255. Doing it this way, at 4.5 dB, I am getting a BER of
> 9e-5.
>
> Ideally, R_c=1/2, but if I use R_c=0.5, at 4.5 dB I am getting a BER of
> 1e-2.
Chintan:
You ask, "Is this correct?"
Well, it depends on whether you want to get the right
answer or the answer that you *want* to get, viz., your
RS code will perform better than a rate 1/2 convolutional
code. The code that you are using is a rate 1/2 code,
and so you should use R_c = 1/2, and not R_c = 239/255,
shouldn't you?
Uncoded PSK on an AWGN channel has BER just a little
below 10^-2 at 4.5 dB. If a rate 1/2 code is used, the
raw channel bit error rate is much higher, about 0.046 or
so, and thus an 8-bit RS symbol has a high probability
of being incorrect, likely enough to overwhelm the error
correcting capability of the code. It is not surprising that
your simulation gives a rather large decoded BER
(assuming that it has been done correctly). Making the
answer more palatable (to yourself) by using the wrong
code rate and hence the wrong SNR is not the way to go.
It will make the denizens of comp.dsp very suspicious of
any results that you report in the future.
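[The numbers in the post above can be sanity-checked with a short Python sketch; Python is used here for illustration rather than the thread's MATLAB, and the values are approximate:]

```python
from math import erfc, sqrt

def qfunc(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

ebno = 10 ** (4.5 / 10)                  # Eb/N0 = 4.5 dB in linear units

p_uncoded = qfunc(sqrt(2 * ebno))        # uncoded BPSK: ~8.8e-3, "a little below 10^-2"
p_raw = qfunc(sqrt(2 * 0.5 * ebno))      # raw channel BER with a rate 1/2 code: ~0.046

# Probability that an 8-bit RS symbol contains at least one bit error
p_sym = 1 - (1 - p_raw) ** 8             # ~0.32

# Expected symbol errors per (32,16) codeword vs. its correction radius t = 8
expected_errors = 32 * p_sym             # ~10 > 8: the decoder is overwhelmed
print(p_uncoded, p_raw, p_sym, expected_errors)
```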
Dilip Sarwate
Reply by cpshah99●October 27, 2008
Hello
I have completed the simulation of the rate 1/2 RS code as Dr. Sarwate
suggested. The way I have done it is as follows; the channel is AWGN with
BPSK modulation:
data=randint(1,16,256);
data_pad=[zeros(1,223) data];
code=rsenc(data_pad,255,239);
actual_code=code(224:end); % which gives me 32 symbols
and exactly the opposite at the receiver. This works fine: if I remove the
noise, I get 0 BER.
But the way I am scaling the noise is:
sigma=sqrt(1/(2*R_c*snr_lin));
where R_c=239/255. Doing it this way, at 4.5 dB, I am getting a BER of
9e-5.
Ideally, R_c=1/2, but if I use R_c=0.5, at 4.5 dB I am getting a BER of
1e-2.
Is this correct?
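[The effect of the rate used in the noise scaling can be checked directly; this Python sketch mirrors the MATLAB sigma line above and shows that choosing R_c=239/255 quietly adds about 2.7 dB of SNR:]

```python
from math import sqrt, log10

snr_db = 4.5
snr_lin = 10 ** (snr_db / 10)

# Noise standard deviation for each choice of code rate,
# mirroring the MATLAB line sigma = sqrt(1/(2*R_c*snr_lin))
sigma_half = sqrt(1 / (2 * 0.5 * snr_lin))
sigma_wrong = sqrt(1 / (2 * (239 / 255) * snr_lin))

# Using R_c = 239/255 shrinks the noise, i.e. silently boosts the SNR:
snr_boost_db = 10 * log10((239 / 255) / 0.5)   # ~2.7 dB
print(sigma_half, sigma_wrong, snr_boost_db)
```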
Chintan
Reply by Steve Pope●October 26, 2008
dvsarwate@yahoo.com <dvsarwate@gmail.com> wrote:
>But an RS encoder/decoder for a (32,16) RS code over
>GF(256) obtained by shortening a (255,239) RS code has the
>same complexity as the original code.
Slight nitpick: on a per-decoded bit basis, it has higher
complexity. On a per-codeword basis, it has about the
same complexity, but the codewords have fewer bits.
>A (32,16) RS code over GF(256) is a very powerful code but
>with *many* weak spots for use on an AWGN channel where
>it would typically be implemented as a (256,128) binary code
>with antipodal PSK signaling. This code is guaranteed to
>correct all (single) bursts of errors as long as 50 bits of the 256
>bits in a block, and *some* bursts of longer lengths (up to 64
>bits) can also be corrected. But there are random bit errors
>of as few as 9 bits that the code is unable to correct.
On an AWGN channel, if you are using a RS code with a given
code rate and a given number of bits per codeword, you are
best off using the smallest possible field size. Thus,
this rate 1/2 code has 256 bits per codeword; a 6-bit
(42,21) RS code has roughly the same number of bits per codeword
but performs better. (And, a binary BCH code performs better
still, if the channel is truly AWGN without bursts.)
An interesting possibility for larger codeword sizes is
to use an algebraic geometry code.
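[For reference, the bit counts behind this comparison can be worked out in a couple of lines; a quick Python check shows both codes come out at rate 1/2 with roughly 256 bits per codeword:]

```python
# Bits per codeword and code rate for the two RS codes being compared
n1, k1, m1 = 32, 16, 8   # (32,16) over GF(2^8)
n2, k2, m2 = 42, 21, 6   # (42,21) over GF(2^6)

bits1, bits2 = n1 * m1, n2 * m2        # 256 vs 252 bits per codeword
rate1, rate2 = k1 / n1, k2 / n2        # both exactly 1/2
print(bits1, bits2, rate1, rate2)
```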
Steve
Reply by Eric Jacobsen●October 26, 2008
On Sun, 26 Oct 2008 08:05:48 -0700 (PDT), "dvsarwate@yahoo.com"
<dvsarwate@gmail.com> wrote:
>On Oct 24, 6:32 pm, Eric Jacobsen <eric.jacob...@ieee.org> wrote:
>
>>
>> The relative complexities (of the encoders and decoders) are an
>> important metric to consider as well. Running an RS encoder or
>> decoder at such a low rate is usually inefficient compared to a decent
>> Viterbi decoder. The RS encoder will always be far more complex than
>> the CC encoder, especially at R=1/2.
>>
>
>True. But an RS encoder/decoder for a (32,16) RS code over
>GF(256) obtained by shortening a (255,239) RS code has the
>same complexity as the original code.
Yes, and since a full-complexity decode now has to run far more
frequently (once every 32 bytes received rather than every 255 or 204 or
whatever), parallel instantiations may be needed in order
to keep up. That's just comparing to the typical 255- or 204-byte
codewords.
> On the other hand,
>a (256,128) RS code or (255,127) RS code would be totally
>impractical.
Agreed. I suspect that's why we don't see them used in practice. ;)
>A (32,16) RS code over GF(256) is a very powerful code but
>with *many* weak spots for use on an AWGN channel where
>it would typically be implemented as a (256,128) binary code
>with antipodal PSK signaling. This code is guaranteed to
>correct all (single) bursts of errors as long as 50 bits of the 256
>bits in a block, and *some* bursts of longer lengths (up to 64
>bits) can also be corrected. But there are random bit errors
>of as few as 9 bits that the code is unable to correct.
>Unfortunately,
>on an AWGN channel, random bit errors are generally more
>likely to occur than long bursts. A better modulation scheme for
>an RS code on an AWGN channel would be M-ary orthogonal
>FSK, but nobody wants to use that, do they?
>
>--Dilip Sarwate
Not if M is very large at all.
Agreed on your points, as usual. Depending on what the OP is doing,
I'm interested in his results. There are cases in industry where
RS-only FEC is used in practice, usually with reasonably high-order
modulation and the typical high code rates. I've not seen such a
low rate used before, though.
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
Blog: http://www.dsprelated.com/blogs-1/hf/Eric_Jacobsen.php
Reply by dvsa...@yahoo.com●October 26, 2008
On Oct 24, 6:32 pm, Eric Jacobsen <eric.jacob...@ieee.org> wrote:
>
> The relative complexities (of the encoders and decoders) are an
> important metric to consider as well. Running an RS encoder or
> decoder at such a low rate is usually inefficient compared to a decent
> Viterbi decoder. The RS encoder will always be far more complex than
> the CC encoder, especially at R=1/2.
>
True. But an RS encoder/decoder for a (32,16) RS code over
GF(256) obtained by shortening a (255,239) RS code has the
same complexity as the original code. On the other hand,
a (256,128) RS code or (255,127) RS code would be totally
impractical.
A (32,16) RS code over GF(256) is a very powerful code but
with *many* weak spots for use on an AWGN channel where
it would typically be implemented as a (256,128) binary code
with antipodal PSK signaling. This code is guaranteed to
correct all (single) bursts of errors as long as 50 bits of the 256
bits in a block, and *some* bursts of longer lengths (up to 64
bits) can also be corrected. But there are random bit errors
of as few as 9 bits that the code is unable to correct.
Unfortunately,
on an AWGN channel, random bit errors are generally more
likely to occur than long bursts. A better modulation scheme for
an RS code on an AWGN channel would be M-ary orthogonal
FSK, but nobody wants to use that, do they?
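[The burst and random-error claims above can be verified with a little arithmetic; this Python sketch assumes bounded-distance decoding with a correction radius of t = (n-k)/2 symbols:]

```python
from math import ceil

n, k, m = 32, 16, 8
t = (n - k) // 2                       # corrects up to t = 8 symbol errors

def symbols_spanned(burst_bits, m=8):
    # Worst case: the burst starts on the last bit of an 8-bit symbol
    return 1 + ceil((burst_bits - 1) / m)

# A single 50-bit burst touches at most 8 symbols -> always correctable
assert symbols_spanned(50) <= t

# But 9 scattered single-bit errors can land in 9 distinct symbols,
# and 9 > t, so such a pattern is beyond the correction radius
assert 9 > t
print(t, symbols_spanned(50))
```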
--Dilip Sarwate
Reply by cpshah99●October 25, 2008
Hi Eric
Definitely I will share my results with you guys once I am done.
Maybe I will write up a small report.
Chintan
Reply by Eric Jacobsen●October 24, 2008
On Fri, 24 Oct 2008 08:30:18 -0500, "cpshah99"
<cpshah99@rediffmail.com> wrote:
>Thanks for replying.
>
>I know that [255 239] is a rate-0.937 code, not a rate 1/2 code.
>
>I want to simulate RS code of rate 1/2 for AWGN channel.
>
>And if I am not wrong, a rate 1/2 RS code will perform better than a rate
>1/2 convolutional code on an AWGN channel.
>
>Thanks again.
>
>Chintan
I think which one outperforms the other is going to depend on the
codes, e.g., the constraint length of the convolutional code.
The relative complexities (of the encoders and decoders) are an
important metric to consider as well. Running an RS encoder or
decoder at such a low rate is usually inefficient compared to a decent
Viterbi decoder. The RS encoder will always be far more complex than
the CC encoder, especially at R=1/2.
I'd be interested in the conditions and the results if you can share
them when you're done.
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
Blog: http://www.dsprelated.com/blogs-1/hf/Eric_Jacobsen.php
Reply by cpshah99●October 24, 2008
Hi all
Thanks a lot.
I got the idea. I will try to implement, and will get back if I have any
doubts.
Thanks a lot.
Chintan
Reply by dvsa...@yahoo.com●October 24, 2008
On Oct 24, 9:40 am, kennheinr...@sympatico.ca wrote:
> Visually, take data
> DDDD
> add zeroes
> DDDD00000000...000000
> compute RS codeword with parity
> DDDD00000000...000000PPPP
> and abbreviate, transmitting
> DDDDPPPP
Actually, it is better to add the zeroes before
the data, i.e. 00000...0000DDDD, then encode
to get 0000...00000DDDDPPPP, and then
truncate, just transmitting DDDDPPPP. The
advantage is that in decoding, one can use a
standard decoder for the original code, just
truncating the syndrome calculation to pretend
that it has run for 223 clock cycles during which
0's were being fed in (thus not changing the state
of the circuit which has remained as it was at
the initialization) and then running for 32 clock cycles
to get the syndrome for the shortened code.
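[The "leading zeros leave the encoder state untouched" argument can be illustrated with a toy systematic cyclic code over GF(2) instead of the real GF(256) RS arithmetic; the generator x^3 + x + 1 below is an arbitrary toy choice, not one of the thread's codes:]

```python
def parity(bits, poly=[1, 0, 1, 1]):
    """Systematic cyclic-code parity via polynomial long division over GF(2).

    `poly` is the generator polynomial, highest degree first (here the toy
    generator x^3 + x + 1). Returns the len(poly)-1 parity bits.
    """
    r = len(poly) - 1
    reg = bits + [0] * r               # append r zeros, then divide
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[-r:]                    # remainder = parity bits

data = [1, 0, 1, 1, 0, 0, 1]
# Leading zeros leave the division register untouched until the data
# arrives, so the parity of the zero-padded message equals that of the
# data alone -- the same reason the shortened RS code can reuse the
# full-length encoder/decoder:
assert parity([0] * 50 + data) == parity(data)
print(parity(data))
```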