I've got an RS implementation that's producing correct error values for a
(204,188) code with generator g(x) = (1 - a^0 x)(1 - a x)...(1 - a^(2t-1) x).
What has me puzzled is that while the error values come out correct, the
roots of the error locator polynomial don't seem to be.
Example: I placed errors in samples 0-3 of the received codeword, so I
would expect to see roots 2^-0, 2^-1, 2^-2, and 2^-3, i.e. [1 142 71 173]
(2 is the primitive element).
But instead I see [20 40 80 160], yet these still produce the correct error
values after Forney's algorithm.
Can anyone explain why I'm seeing this?
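For reference, the expected roots can be checked with a short sketch. This assumes the field is GF(256) with reducing polynomial 0x11D (the field used by the DVB-T style RS(204,188) code); the helper name gf_mul2 and the table name antilog are mine, not from the original post.

```python
# Sketch: compute 2^-k for k = 0..3 in GF(256)/0x11D (assumed field poly).

def gf_mul2(x):
    # Multiply by the primitive element 2: shift left, reduce by 0x11D on overflow.
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
    return x & 0xFF

# Antilog table: antilog[k] = 2^k for k = 0..254.
antilog = [1]
for _ in range(254):
    antilog.append(gf_mul2(antilog[-1]))

# Since 2^255 = 1, the inverse power 2^-k equals 2^(255-k).
roots = [antilog[(255 - k) % 255] for k in range(4)]
print(roots)  # [1, 142, 71, 173]
```

This reproduces the [1 142 71 173] values quoted above, so the expectation itself is consistent with the 0x11D field.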
Reply by gct ● March 21, 2009
Interesting: looking more closely at the roots, they correspond to
2^-203, 2^-202, 2^-201, and 2^-200, which would indicate an endianness issue
with the received codeword...hmm
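That correspondence can be verified by taking discrete logs of the observed roots. Again this is a sketch assuming the GF(256)/0x11D field; gf_mul2 and the log table are my names.

```python
# Sketch: recover the error positions implied by the observed roots,
# assuming GF(256) with field polynomial 0x11D and primitive element 2.

def gf_mul2(x):
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
    return x & 0xFF

# Discrete log table: log[2^k] = k.
log = {}
p = 1
for k in range(255):
    log[p] = k
    p = gf_mul2(p)

# A root 2^-k implies error position k, and 2^-k = 2^(255-k).
positions = [(255 - log[r]) % 255 for r in [20, 40, 80, 160]]
print(positions)  # [203, 202, 201, 200]
```

In a 204-symbol codeword, positions 200-203 are exactly samples 0-3 counted from the opposite end (204 - 1 - 3 = 200), which is what you'd see if the decoder indexes the codeword in the reverse order from the error injection.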