Silly me, I forgot to change one assignment, and this was causing this
weird issue:
Change L = i+1 - L into L = i+1 - L - erasures_count and it works
flawlessly.
Also I needed to change the condition 2*L[l] > K to 2*L[l] >
K+erasures_count (I didn't see any side effect without changing the
condition, but it's indicated in Blahut's schema, so I guess it's safer
to change it).
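In code terms, a minimal sketch of the corrected length update (the function name is mine, for illustration only; i is the iteration index, so K = i+1 in the loop):

```python
def corrected_length_update(L, i, erasures_count):
    # Corrected Berlekamp-Massey length update when the errata locator is
    # seeded with the erasures locator: the erasure prefix has to be
    # subtracted, as indicated in Blahut's schema.
    # Before the fix this was: i + 1 - L
    return i + 1 - L - erasures_count
```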
---------------------------------------
Posted through http://www.DSPRelated.com

Reply by lrq3000●June 12, 2015

Hi there,
I am trying to implement an errors-and-erasures Reed-Solomon decoder using
Berlekamp-Massey in Python.
I am using the algorithm depicted in "Algebraic Codes for Data
Transmission", Richard E. Blahut, 2003 (see the figure 7.10 in the
following post):
http://stackoverflow.com/questions/30215337/errata-erasureserrors-berlekamp-massey-for-reed-solomon-decoding/30468399#30468399
However, I am getting a very weird issue: errors-only decoding works
perfectly fine, erasures-only decoding (still calling BM even though
that's not necessary) works perfectly fine, but erasures-and-errors
decoding fails.
In fact, it doesn't fail completely; it's just that the bound is wrong:
normally, it should be able to correct up to 2*e + v <= min_dist, where e
is the number of errors and v the number of erasures. But here, my decoder
can only decode 2*(e + v) <= min_dist, which greatly limits its
usefulness...
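To make the difference between the two bounds concrete, here is a sketch (the min_dist value in the example is arbitrary; the function names are mine):

```python
def decodable(e, v, min_dist):
    # Expected errata bound: each error costs two units of redundancy,
    # each erasure only one (its position is already known).
    return 2*e + v <= min_dist

def decodable_buggy(e, v, min_dist):
    # What the buggy decoder effectively achieved: every erasure costs
    # as much as a full error.
    return 2*(e + v) <= min_dist
```

For example, with min_dist = 16, e = 2 errors and v = 10 erasures: 2*2 + 10 = 14 <= 16, so errata decoding should succeed, while the buggy bound gives 2*(2+10) = 24 > 16 and fails.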
From Blahut's book, I understood that the two conditions for
erasures-and-errors decoding are to:
1- compute the erasures locator polynomial and use it as a seed to
initialize BM's errors locator polynomial.
2- skip the first v iterations (where v is the number of erasures).
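For reference, step 1 can be sketched as follows. This is a self-contained stand-in (not the GF256int/Polynomial classes used below), assuming the common 0x11d field generator and alpha = 2:

```python
def gf_mul(a, b, prim=0x11d):
    # GF(2^8) multiplication (Russian peasant method)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= prim
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def erasures_locator(positions, alpha=2):
    # Lambda(x) = product over erasure positions p of (1 - alpha^p * x);
    # in GF(2^m) subtraction is XOR, so each factor is (1 + alpha^p * x).
    # Coefficients are stored lowest degree first.
    loc = [1]
    for p in positions:
        term = gf_pow(alpha, p)
        nxt = [0] * (len(loc) + 1)
        for i, c in enumerate(loc):
            nxt[i] ^= c                    # c * 1
            nxt[i + 1] ^= gf_mul(c, term)  # c * alpha^p * x
        loc = nxt
    return loc
```

The degree of the result equals the number of erasures, which is what makes the "skip the first v iterations" step consistent with the seeding.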
Here is my Python code (I tried to stay as close to the mathematical
formulation as possible, so it should be close to pseudo-code):
-------------------------------------------------------------------
def berlekamp_massey(self, s, n, k, erasures_loc=None, erasures_eval=None,
                     erasures_count=0):
    '''Implementation of errata (errors-and-erasures) Berlekamp-Massey.'''
    # Initialize, depending on whether we include erasures or not:
    if erasures_loc:
        sigma = Polynomial(erasures_loc.coefficients)
        sigmaprev = Polynomial(sigma.coefficients)
        B = Polynomial(sigma.coefficients)
        omega = Polynomial(erasures_eval.coefficients)
        omegaprev = Polynomial(omega.coefficients)
        A = Polynomial(omega.coefficients)
    else:
        sigma = Polynomial([GF256int(1)])
        sigmaprev = Polynomial([GF256int(1)])
        B = Polynomial([GF256int(1)])
        omega = Polynomial([GF256int(1)])
        omegaprev = Polynomial([GF256int(1)])
        A = Polynomial([GF256int(0)])
    L = 0

    # Polynomial constants:
    ONE = Polynomial([GF256int(1)])
    ZERO = GF256int(0)
    # Z is used to shift polynomials: simply multiply your poly * Z to shift.
    Z = Polynomial([GF256int(1), GF256int(0)])

    # Precaching
    s2 = ONE + s

    # Iteratively compute the polynomials 2s times; the last ones will be
    # correct. Skip the first erasures_count iterations because we already
    # computed the partial errata locator polynomial (by initializing with
    # the erasures locator polynomial).
    for l in xrange(n - k - erasures_count):
        K = erasures_count + l + 1  # skip the first erasures_count iterations
        # Goal for each iteration: compute sigma[K] and omega[K] such that
        # (1 + s)*sigma[l] == omega[l] in mod z^(K).
        # For this particular loop iteration, we have sigma[l] and
        # omega[l], and are computing sigma[K] and omega[K].

        # First find Delta, the non-zero coefficient of z^(K) in
        # (1 + s) * sigma[l]. This delta is valid for l (this iteration)
        # only. Delta is also known as the discrepancy, and is always a
        # scalar (not a polynomial).
        Delta = (s2 * sigma).get_coefficient(K)

        # Can now compute sigma[K] and omega[K] from
        # sigma[l], omega[l], B[l], A[l], and Delta.
        sigmaprev = sigma
        omegaprev = omega
        sigma = sigmaprev - (Z * B).scale(Delta)
        omega = omegaprev - (Z * A).scale(Delta)

        # Now compute the next B and A. There are two ways to do this,
        # based on a messy case analysis on the degrees of the four
        # polynomials sigma, omega, A and B, in order to minimize the
        # degrees of A and B. For more info, see
        # https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
        # In fact it ensures that the degrees of the final polynomials
        # aren't too large.
        if Delta == ZERO or 2*L > K:
            # Rule A
            B = Z * B
            A = Z * A
            #L = L
        else:
            #elif (Delta != ZERO and 2*L < K)
            #  or (2*L == K and M != 0):
            # Rule B
            B = sigmaprev.scale(Delta.inverse())
            A = omegaprev.scale(Delta.inverse())
            print "A", A
            L = K - L

    # Hack to fix the simultaneous computation of omega, the errata
    # evaluator polynomial: because A (the errata evaluator support
    # polynomial) is not correctly initialized (I could not find any info
    # in academic papers), at the end we get the correct errata evaluator
    # polynomial omega plus some higher-order terms that should not be
    # present. But since we know that sigma is always correct and its
    # maximum degree should be the same as omega's, we can fix omega by
    # truncating the too-high-order terms.
    if omega.degree > sigma.degree:
        omega = Polynomial(omega.coefficients[-(sigma.degree+1):])

    # Return the last result of the iterations (since BM computes
    # iteratively, the last iteration is correct - it may already be
    # correct earlier, but we can't be sure).
    return sigma, omega
-------------------------------------------------------------------
I don't really care about omega, the evaluator polynomial; I can compute
it later using another function. But I'm sure that the problem is in
sigma, the locator polynomial, because my Chien search (which I'm nearly
100% sure is correct) does not compute the correct positions (it only
includes the correct positions of the erasures, but for the errors it is
either completely off, or it decodes one or two errors and then a lot of
errors are missing).
The degree of the locator polynomial seems OK to me (same degree as the
number of errors), but the Chien search always finds far fewer positions
than it should.
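As a sanity check for this symptom, a brute-force stand-in for the Chien search can help (self-contained GF(256) helpers again, 0x11d generator and alpha = 2 assumed, not the post's classes): it simply evaluates the locator at alpha^(-i) for every position i, which is what a Chien search computes incrementally. If this brute-force version finds the same (too few) roots, the locator itself is wrong rather than the search.

```python
def gf_mul(a, b, prim=0x11d):
    # GF(2^8) multiplication (Russian peasant method)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= prim
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def poly_eval(poly, x):
    # Horner's scheme; poly holds coefficients lowest degree first.
    r = 0
    for c in reversed(poly):
        r = gf_mul(r, x) ^ c
    return r

def brute_chien_search(locator, nmess, alpha=2):
    # Positions i where locator(alpha^-i) == 0 are the errata positions;
    # alpha^-i is computed as alpha^(255 - i) since alpha^255 == 1.
    return [i for i in range(nmess)
            if poly_eval(locator, gf_pow(alpha, (255 - i) % 255)) == 0]
```

For example, the locator [1, 36, 128] (lowest degree first) factors as (1 + alpha^2 x)(1 + alpha^5 x), so the search should report positions 2 and 5.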
Any clue to what I did wrong?
Thank you very much for any help, this is really driving me crazy...