Reply by RFBlackMagic October 21, 2008
In order to frame the questions, the following on-line reference articles
are considered:

1. 3/15/2001 EDN Self-correcting codes conquer noise, part 2: Reed-Solomon
codecs article located at (after skipping the annoying advert and pop-up):
http://www.edn.com/index.asp?layout=article&articleid=CA75096
2. Ms. Matache’s often-referenced Encoding/Decoding Reed Solomon Codes
on-line tutorial located at:
http://www.ee.ucla.edu/~matache/rsc/slide.html
(Worked examples based on Error and Erasure Decoding for RS (7,3) “j”
= 1)
3. BBC R&D White Paper WHP031, Reed-Solomon Error Correction, C.K.P.
Clarke, located at:
http://www.bbc.co.uk/rd/pubs/whp/whp031.shtml
(Worked examples based on Error Only Decoding for RS (15,11) “b” = 0)

The following first-tier questions revolve around Reed-Solomon Error and
Erasure Encoding/Decoding equations for the unconventional general case
where “j” does not equal 1 (or “b” in the case of reference article 3).
One of my professors once said that the most difficult engineering task
was to get around the “jargon”.

Encoding Equation Questions
Generator Polynomial g(X)
From reference article 1.
“In general, the polynomial g(X) for Reed-Solomon (n, K) codes is:
g(X) = (X - alpha^j)(X - alpha^(j+1))...(X - alpha^(j+2t-1)),
and, if j = 1,
g(X) = g_(n-k) X^(n-k) + g_(n-k-1) X^(n-k-1) + ... + g_2 X^2 +
g_1 X + g_0.
You can choose any integer value for j; however, it’s conventional to
use j = 1. Sometimes, by choosing j = 1, you can reduce the number and the
cost of circuit components.”
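For concreteness, the quoted construction is easy to sketch in Python. This is my own illustration, not code from any of the referenced articles; it builds g(X) over GF(2^4) (primitive polynomial x^4 + x + 1, alpha = 2) for an arbitrary starting root exponent j, including j = 0 and negative j:

```python
def gf_mul(a, b, prim=0b10011, m=4):
    """Multiply two elements of GF(2^4); primitive poly x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= prim
    return r

def alpha_pow(e, order=15):
    """alpha^e (alpha = 2), with the exponent reduced mod 2^m - 1."""
    x = 1
    for _ in range(e % order):
        x = gf_mul(x, 2)
    return x

def poly_eval(p, x):
    """Evaluate p (coefficients lowest degree first) at x by Horner."""
    acc = 0
    for c in reversed(p):
        acc = gf_mul(acc, x) ^ c
    return acc

def generator_poly(t, j):
    """g(X) = (X - alpha^j)...(X - alpha^(j+2t-1)); '-' is '+' in GF(2^m)."""
    g = [1]  # coefficients, lowest degree first
    for i in range(j, j + 2 * t):
        root = alpha_pow(i)
        shifted = [0] + g                             # X * g(X)
        scaled = [gf_mul(root, c) for c in g] + [0]   # alpha^i * g(X)
        g = [s ^ c for s, c in zip(shifted, scaled)]  # (X + alpha^i) g(X)
    return g
```

For any j the result is monic of degree 2t with alpha^j ... alpha^(j+2t-1) as roots, which can be confirmed with poly_eval.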

Question 1.
It is stated that j can be any integer value, presumably including 0 and
negative integers.
Although implied, wouldn’t alpha^j, alpha^(j+1), ..., alpha^(j+2t-1) each
need to be a symbol from the finite set of elements of GF(2^m), which are
characterized by the relation alpha^(2^m - 1) + 1 = 0?
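As a sanity check on that presumption (my own aside, not from the articles): the nonzero elements of GF(2^m) form a cyclic group of order 2^m - 1, so any integer exponent, zero or negative included, wraps back onto one of them:

```python
# Exponent wrap-around in GF(2^4): alpha has multiplicative order
# 2^4 - 1 = 15, so alpha^j is well defined for any integer j via j mod 15.
m = 4
order = (1 << m) - 1   # 15

for j in (-3, 0, 1, 17):
    print(f"alpha^{j} == alpha^{j % order}")
```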

Question 2.
The statement “Sometimes, by choosing j = 1, you can reduce the number
and the cost of circuit components.” implies that choosing j = 1 does not
always reduce the number and cost of circuit components.
Is there a general rule of thumb for choosing j relative to hardware
complexity and cost?

Decoding Equation Questions
I have not seen the derivation of the decoding equations, but I have a
strong intuition that the chosen value j = 1 is implicit in the equations
shown in reference articles 1 and 2.

Erasure Polynomial Equation
In reference articles 1 and 2 (not addressed in article 3, which decodes
errors only):
Gamma(x) = product from i = 1 to f of (1 - Y_i x), where Y_i is the i-th
erasure location and f is the number of erasures
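Building Gamma(x) is just repeated multiplication by (1 - Y_i x), which in GF(2^m) is the same as (1 + Y_i x). A minimal sketch of my own, over GF(2^4) with primitive polynomial x^4 + x + 1, taking the erasure locations Y_i as field elements:

```python
def gf_mul(a, b, prim=0b10011, m=4):
    """Multiply two elements of GF(2^4); primitive poly x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= prim
    return r

def erasure_poly(erasure_locs):
    """Gamma(x) = prod over i of (1 + Y_i x), lowest degree first."""
    gamma = [1]
    for Y in erasure_locs:
        # multiply gamma(x) by (1 + Y x)
        scaled = [0] + [gf_mul(Y, c) for c in gamma]      # Y x * gamma(x)
        gamma = [a ^ b for a, b in zip(gamma + [0], scaled)]
    return gamma
```

With erasures at Y_1 = alpha^2 and Y_2 = alpha^5, for example, this gives 1 + alpha x + alpha^7 x^2.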

Question 3.
Is there a general form of the Erasure Polynomial Equation for any value
of j when decoding both Errors and Erasures?

Modified Syndrome Polynomial Equation
In reference articles 1 and 2 (not addressed in article 3, which decodes
errors only):
Xi(x) = (Gamma(x)[1 + S(x)] - 1) mod x^(2t+1)
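Mechanically that equation is a polynomial product, a constant-term correction, and a truncation. A sketch of my own under the same j = 1 convention, with the syndromes supplied as [S_1, ..., S_2t] so that 1 + S(x) has constant term 1:

```python
def gf_mul(a, b, prim=0b10011, m=4):
    """Multiply two elements of GF(2^4); primitive poly x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= prim
    return r

def poly_mul(p, q):
    """Product of two GF(2^4) polynomials, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for k, b in enumerate(q):
            out[i + k] ^= gf_mul(a, b)
    return out

def modified_syndrome(gamma, syndromes, t):
    """Xi(x) = (Gamma(x)[1 + S(x)] - 1) mod x^(2t+1).
    syndromes = [S_1, ..., S_2t]; '-' is XOR in GF(2^m)."""
    prod = poly_mul(gamma, [1] + syndromes)
    prod[0] ^= 1                      # the '- 1' term
    return prod[: 2 * t + 1]
```

With no erasures (Gamma(x) = 1), Xi(x) collapses to S(x) itself, as expected.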

Question 4.
Is there a general form of the Modified Syndrome Polynomial Equation for
any value of j when decoding both Errors and Erasures?

Error/Erasure Polynomial
In reference articles 1 and 2 (not addressed in article 3, which decodes
errors only):
Psi(x) = Gamma(x)Lambda(x)

It has been observed that any Psi(x) terms of degree exceeding n - k come
out to 0.

Question 5.
Is it necessary to carry the Psi(x) terms of degree exceeding n - k into
the Forney algorithm?
Can the degree of Psi(x) be limited to the terms of
Psi(x) = [Gamma(x)Lambda(x)] mod x^(2t+1)?

Error Magnitude Polynomial Equation
In reference articles 1 and 2 (j = 1):
Omega(x) = Lambda(x)[1 + Xi(x)] mod x^(2t+1)

In reference article 3 (errors-only decoding with “b” = 0):
Omega(x) = [Lambda(x)S(x)] mod x^(2t), albeit obtained from the Euclidean
algorithm
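Whatever the convention, both quoted forms reduce to “multiply two polynomials, then discard the high-order terms”. A sketch of my own for the article-3 (errors-only, b = 0) form, with coefficient lists lowest degree first:

```python
def gf_mul(a, b, prim=0b10011, m=4):
    """Multiply two elements of GF(2^4); primitive poly x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= prim
    return r

def omega_errors_only(lam, S, t):
    """Omega(x) = [Lambda(x) S(x)] mod x^(2t)."""
    out = [0] * (2 * t)
    for i, a in enumerate(lam):
        for k, b in enumerate(S):
            if i + k < 2 * t:           # the mod x^(2t) truncation
                out[i + k] ^= gf_mul(a, b)
    return out
```

(The truncation is folded into the multiply loop, so terms of degree 2t and above are simply never formed.)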

Question 6.
Is there a general form of the Error Magnitude Polynomial Equation for any
value of j when decoding both Errors and Erasures?

The second-tier question revolves around what happens if the RS decoding
capability is exceeded.
From reference article 1.
"A Reed-Solomon (n, K) code can successfully correct as many as 2t = n -
K erasures if no errors are present. With both errors and erasures present,
the decoder can successfully decode if n - K is greater than or equal to
2v + f, where v is the number of errors, and f is the number of erasures."
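The quoted bound is easy to keep straight as a one-liner (my own helper, simply restating the condition 2v + f <= n - K):

```python
def can_decode(n, k, v, f):
    """True when an RS(n, k) decoder can correct v errors plus f
    erasures, i.e. 2*v + f <= n - k."""
    return 2 * v + f <= n - k

# RS(15, 11): n - k = 4 redundancy symbols.
print(can_decode(15, 11, 2, 0))   # two errors, no erasures -> True
print(can_decode(15, 11, 0, 4))   # four erasures, no errors -> True
print(can_decode(15, 11, 1, 3))   # 2*1 + 3 = 5 > 4 -> False
```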


Also, referring to Figure 3 of reference article 1, a flow chart for a
Berlekamp-Massey algorithm that can correct both errors and erasures:

“The end result of the BM unit is a polynomial Lambda(X). The check
“deg Lambda(X) = L” provides a clue as to whether the decoder will
successfully decode r(X) or declare a failure. Remember that although this
check is necessary (to proceed further), it is insufficient to determine
whether the decoder will decode correctly.”

Question 7.
Why is the check necessary? The final decision block of the flowchart of
Figure 3 does not indicate what to do in the event of “No”.
Can a Syndrome check of the Corrected Received Message be used in lieu of
the degree of Lambda(x)?
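For what it’s worth, the syndrome re-check contemplated here is straightforward to express: recompute S_i = r(alpha^(j+i)) on the corrected word and test for all zeros. A sketch of my own (GF(2^4), alpha = 2, primitive polynomial x^4 + x + 1); per the quoted caveat, an all-zero result only proves the output is *some* codeword, not necessarily the transmitted one:

```python
def gf_mul(a, b, prim=0b10011, m=4):
    """Multiply two elements of GF(2^4); primitive poly x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= prim
    return r

def syndrome_check(word, t, j=1):
    """True if S_i = word(alpha^(j+i)) == 0 for i = 0..2t-1.
    word is a coefficient list, lowest degree first."""
    for i in range(2 * t):
        x = 1
        for _ in range((j + i) % 15):   # x = alpha^(j+i)
            x = gf_mul(x, 2)
        acc = 0
        for c in reversed(word):        # Horner evaluation
            acc = gf_mul(acc, x) ^ c
        if acc != 0:
            return False
    return True
```

The all-zero word passes (it is a codeword of every linear code), while a single stray symbol fails the check.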

Question 8.
Should the Figure 3 decision block of reference article 1, “r = 2t”,
actually be “r greater than or equal to 2t” to account for the case where
the initial value of r (the number of erasures) equals 2t?

Question 9.
Should the “No” path of the Figure 3 decision block “r = 2t” in reference
article 1 actually go to the increment-r block (otherwise r would never
get incremented per the flowchart)?

Question 10.
The Holy Grail question!
What is sufficient other than erasures counts (if anything) to determine
whether the decoder will decode correctly when the decoding capability is
exceeded?

The third-tier question revolves (Galois must be spinning in his grave by
now with all these revolutions, LOL) around a specific case of incorrect
decode percentages when the RS decoding capability is exceeded.

Question 11.
With a simulated default message, the number of erasures equal to n - k,
and the number of errors greater than or equal to 1, over all combinations
of error values/locations and erasure locations, is the percentage of
incorrect decodes really 100 percent, as I am seeing?

Thanks,
RFBlackMagic