Can we correct 2t errors in a BCH code?
Started by ●July 18, 2008

Hi, I am facing a problem while using a BCH code in a project. The nature of the project demands that, instead of t, we have to correct 2t errors. I know this is not possible in general, but here we have t dependent errors, i.e., out of these 2t errors, only t are independent. There may be a way to find the locations of those t errors, and this will automatically give us the locations of the other t errors. In our case, if we find the t errors in the first half of the codeword, we will know the remaining errors: if we have an error at index i, then we will also have an error at index i+(n/2), where n is the size of the codeword. This is an interesting but very hard problem, I guess. Does anyone have any idea? Any help would be greatly appreciated. Thanks.

Regards,
Moon
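The mirrored-error structure described above can at least be written down concretely. Below is a minimal sketch (my own illustration, not from the thread, and not a BCH decoder): it assumes some decoder has already located the up-to-t errors in the first half, and mechanically expands them into the full 2t-error pattern.

```python
def expand_mirrored_locations(first_half_locs, n):
    """Given error locations in [0, n/2), return the full mirrored
    set of error locations in [0, n), per the i -> i + n/2 pairing."""
    assert n % 2 == 0
    full = []
    for i in first_half_locs:
        assert 0 <= i < n // 2
        full.append(i)
        full.append(i + n // 2)
    return sorted(full)

def correct_mirrored_errors(received, first_half_locs):
    """Flip both members of each mirrored error pair in a 0/1 list."""
    corrected = list(received)
    for i in expand_mirrored_locations(first_half_locs, len(received)):
        corrected[i] ^= 1
    return corrected
```

One further observation: if the pairing is exact (e_i = e_{i+n/2} for every i), then XORing the first and second halves of the received word cancels every error, leaving n/2 error-free linear constraints on the codeword; that property may itself be an exploitable handle on the problem.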
Reply by ●July 18, 2008
Broken Arrow wrote:
> Hi,
>
> I am facing a problem while using a BCH code in a project. The nature
> of the project demands that, instead of t, we have to correct 2t
> errors.

Why can't you simply pick a code which corrects 2t errors?

> I know this is not possible in general but here we have t
> dependent errors, i.e., out of these 2t errors, only t are
> independent.

Does it have anything to do with differentially encoded data? If so, the first thing to do would be to optimize the demodulator.

> Now there may be a way to find the locations of those t
> errors and this will automatically give us the location of the other t
> errors. In our case, if we find t errors in first half of the
> codeword, we will find the remaining errors. If we have an error at
> index i, then we will also have an error at index i+(n/2) where n is
> the size of the codeword. This is an interesting but a very hard
> problem, I guess. Anyone having any idea? This will be a great help.

The problem looks similar to burst error correction; codes can be designed to correct certain patterns of errors. The keyword is Fire code; a good book is "Error Correction Codes" by Blahut.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by ●July 19, 2008
> Why can't you simply pick a code which corrects 2t errors?

Actually, we are working on "Biometric Key Generation" (you can Google it to learn more). We need to generate a key from a user's biometric reading. In our case, multiple readings of the same user may differ by at most 32% due to lighting conditions, etc. We are using an ECC to correct these differences. We treat the biometric reading as a codeword, so the requirement is to find a code that can correct 32% errors in a codeword. In this way, we will be able to recognize all users correctly. As far as I know, there is no known code that can correct 32% errors. So the best bet is to find a code that can correct, say, 16% errors and use the correlation info to modify it to correct 2t = 32% errors.

> Does it have anything to do with differentially encoded data? If so,
> the first thing to do would be to optimize the demodulator.

No, this is not related to differentially encoded data.

> The problem looks similar to burst error correction; codes can be
> designed to correct certain patterns of errors. The keyword is Fire
> code; a good book is "Error Correction Codes" by Blahut.

This may be similar to burst error correction, but as I mentioned, every ith bit is related to the ((n/2)+i)th bit and there is no other relationship. So this would be a burst of just 2 bits. For 2-bit bursts, the value of n will be very small for any burst-error-correcting code, including the Fire code, but we need a much larger n. The biometric template size is 2047 bits, so n should be equal to 2047 or at least close to that.

Is there any error-correcting code that can correct 32% random errors? I have no idea about LDPC or turbo codes. Can they be used in this scenario? The short version of the question is: "You receive some codeword (of reasonably large length) that may have up to 32% errors (with only the correlation I mentioned already). Is there any method whatsoever to decode it correctly?"

Moon
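A quick back-of-envelope check (my own arithmetic, not from the thread) of what guaranteed, worst-case correction of a 32% error fraction would imply at this codeword length:

```python
# Guaranteed correction of t errors requires minimum distance d >= 2t + 1.
n = 2047
t = int(0.32 * n)    # 655 errors to correct
d = 2 * t + 1        # required minimum distance: 1311
# Singleton bound: at most k <= n - d + 1 information bits, even for an
# ideal (MDS) code -- binary codes fall well short of this.
k_max = n - d + 1    # 737
print(t, d, k_max)
```

Since the required d exceeds n/2 here, the Plotkin bound is even harsher for binary codes: it permits only a handful of codewords at this (n, d), so worst-case correction of 32% of the bits is essentially out of reach. Any workable scheme has to lean on the error correlation, or on decoding beyond the guaranteed radius with some failure probability.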
Reply by ●July 19, 2008
Broken Arrow wrote:
>> Why can't you simply pick a code which corrects 2t errors?
>
> Actually we are working on "Biometric Key Generation" (you can Google
> to know more about it). We need to generate key using a user's
> biometric reading.

How many useful bits?

> In our case, multiple readings of the same user may differ by at most
> 32% due to the lighting conditions, etc. We are using an ECC to
> correct these differences. We treat biometric reading as a codeword,
> so the requirement is to find a code that can correct 32% of the
> errors in a codeword. In this way, we will be able to recognize all
> the users correctly. As far as I know, there is no known code that
> can correct 32% errors.

A code can be designed to correct as many errors as needed; it is just a matter of the size of the codeword and the coding rate.

> So the best bet is to find a code that can correct, say, 16% errors
> and use the correlation info to modify it and correct 2t = 32% errors.

The problem is ill-stated. The coding should be considered together with the channel modulation and format.

> This may be similar to burst error correction but as I mentioned, we
> have every ith bit related with ((n/2)+i)th bit and there is no other
> relationship. So this will be a burst of just 2 bits. For bursts of 2
> bits, the value of n will be very small for any burst error correcting
> code including the fire code but we need to have much larger n. The
> biometric template size is 2047 so n should be equal to 2047 or at
> least close to that.
>
> Is there any error correcting code that can correct 32% random errors?

Certainly. Consider low-rate convolutional codes, for example.

> I have no idea about LDPC or Turbo codes. Can they be used in this
> scenario?

LDPC/turbo codes are useful if the data payload is at least 500 bits.

> The short version of the question is that "you receive some codeword
> (of reasonably large length) that may have up to 32% errors (with the
> only correlation that I mentioned already). Is there any method
> whatsoever to decode it correctly?"

Why not? Any correlation in the error pattern simplifies the task.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by ●July 19, 2008
On Fri, 18 Jul 2008 21:29:39 -0700 (PDT), Broken Arrow <moon.zia@gmail.com> wrote:

>> Why can't you simply pick a code which corrects 2t errors?
>
> Actually we are working on "Biometric Key Generation" (you can Google
> to know more about it). We need to generate key using a user's
> biometric reading. In our case, multiple readings of the same user may
> differ by at most 32% due to the lighting conditions, etc. We are
> using an ECC to correct these differences. We treat biometric reading
> as a codeword, so the requirement is to find a code that can correct
> 32% of the errors in a codeword. In this way, we will be able to
> recognize all the users correctly. As far as I know, there is no
> known code that can correct 32% errors. So the best bet is to find a
> code that can correct, say, 16% errors and using the correlations info
> to modify it and correct 2t=32% errors.

I'm confused. When is the code being applied? If this is a biometric reading, I don't understand where the code is applied so that it can correct reading errors. Are you trying to use the reading itself as a code?

>> The problem looks similar to the burst error correction; the codes can
>> be designed to correct certain patterns of errors. The keyword is
>> Fire code; a good book is "Error Correction Codes" by Blahut.
>
> This may be similar to burst error correction but as I mentioned, we
> have every ith bit related with ((n/2)+i)th bit and there is no other
> relationship. So this will be a burst of just 2 bits. For bursts of 2
> bits, the value of n will be very small for any burst error correcting
> code including the fire code but we need to have much larger n. The
> biometric template size is 2047 so n should be equal to 2047 or at
> least close to that.

Is that 2047 data (information) bits, or is the total codeword length 2047? If the total codeword length is 2047, how much of that is information and how much is parity (i.e., what is the code rate)?

> Is there any error correcting code that can correct 32% random errors?
> I have no idea about LDPC or Turbo codes. Can they be used in this
> scenario?

If the parity symbols are subject to error, then 32% is very difficult. If the parity symbols are not subject to error, then it's possible if you don't mind a low-rate code (i.e., there may be more parity bits than data bits). I don't know how you're applying the code and getting coded parity bits into a biometric reading, though, so I'm perhaps still a bit confused about how you're applying the code.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
Blog: http://www.dsprelated.com/blogs-1/hf/Eric_Jacobsen.php
Reply by ●July 20, 2008
Thanks, Vladimir and Eric, for your responses. I guess I need to explain the problem in more detail. The work is a practical implementation of Juels and Wattenberg's seminal paper "Fuzzy commitment" using an iris scan and some error control code. The scheme is as follows.

At enrollment:
1. When a user comes for enrollment, a random codeword is selected (a random info word is selected and encoded). Let's call the info word K and the codeword C.
2. His or her iris scan is taken. Let's name it W.
3. C and W are XORed to get P (this implies that C and W must be the same size).
4. P is stored on a smart card.

At verification:
1. When the user comes for verification, he or she presents the smart card and P is read from the card.
2. A fresh iris scan of the user is taken. Let's name it W'.
3. P and W' are XORed to get C'.
4. C' is decoded to get K'.

Now clearly, K = K' if the error correction capability of the code being used is greater than the difference between W and W'. We want decoding to succeed (i.e., K = K') exactly when both readings W and W' come from the same user, and vice versa.

Limitations:
1. The size of the iris template is 2047 bits; hence the codeword size should be 2047 bits.
2. The threshold for differentiating between same-class and different-class readings is 32%, so the code should be able to correct 0.32*2047 = 655 errors. The errors can be in the message part or the parity part (as Eric asked).
3. The message size is not fixed, but as it is used as the secret key (hence the term "biometric key generation"), it should be as long as possible to make brute-force attack infeasible. 250+ bits would be great.

I think it should now be very clear what the problem is. From your posts, it seems that LDPC and turbo codes can do the job. Can they? Sorry for my lack of knowledge about these codes; my research area is biometrics, not coding theory. Thanks.

Moon
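The enrollment and verification steps above can be sketched in a few lines. This is a minimal illustration only: `encode` and `decode` stand in for whatever ECC is eventually chosen, the key length is a placeholder, and the sketch works at the byte level purely for brevity.

```python
import secrets

def xor_bits(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def enroll(W, encode, k_bytes=32):
    """Enrollment: commit to a fresh random key K using template W."""
    K = secrets.token_bytes(k_bytes)  # random info word
    C = encode(K)                     # codeword; must match len(W)
    P = xor_bits(C, W)                # helper data stored on the card
    return K, P

def verify(P, W_prime, decode):
    """Verification: recover K' from the card data and a fresh scan W'."""
    C_prime = xor_bits(P, W_prime)    # equals C XOR (W XOR W')
    return decode(C_prime)            # K' == K iff the ECC corrects W XOR W'
```

The comment on the last line is the whole crux of the thread: C' is C corrupted by exactly the error pattern W XOR W', so the scheme works precisely when that pattern (up to 32% of the bits here) lies within the decoder's correction radius.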
Reply by ●July 20, 2008
On Sat, 19 Jul 2008 23:25:00 -0700 (PDT), Broken Arrow <moon.zia@gmail.com> wrote:

> Thanks Vladimir and Eric for your responses. I guess I need to
> explain the problem in more detail. The work is a practical
> implementation of Juels and Wattenberg's seminal paper named "Fuzzy
> commitment" using iris scan and some error control code. I explain the
> scheme below.
>
> At enrollment:
> 1. When a user comes for enrollment, a random codeword is selected (a
> random info word is selected and encoded). Let's call the info word K
> and the codeword C.
> 2. His or her iris scan is taken. Let's name it W.
> 3. C and W are XORed to get P (this implies that the size of C and W
> should be same).
> 4. Store P on a smart card.
>
> At verification:
> 1. When the user comes for verification, he or she presents the smart
> card and P is read from the card.
> 2. A fresh iris scan of the user is taken. Let's name it W'.
> 3. P and W' are XORed to get C'.
> 4. C' is decoded to get K'.
>
> Now clearly, K=K' if the error correction capability of the code being
> used is greater than the difference between W and W'. We want that if
> both the readings W and W' are from the same user, decoding should be
> successful i.e. K=K' and vice versa.
>
> Limitations:
> 1. Size of iris template is 2047 bits. Hence the codeword size should
> be 2047 bits.
> 2. The threshold for differentiating between same class and different
> class is 32%. So the code should be able to correct 0.32*2047=655
> errors. The errors can be in message part or the parity part (as Eric
> has asked).
> 3. The message size is not fixed but as it is used as the secret key
> (hence the term biometric key generation), it should be as long as
> possible to make the brute force attack infeasible. 250+ bits is
> great.
>
> I think now it should be very clear what the problem is. From your
> posts, it seems that LDPC and Turbo codes can do the job. Can they?
> Sorry for my lack of knowledge about these codes as my research area
> is biometrics and not the coding theory. Thanks.
>
> Moon

Yes, the problem is clear now. Thanks for the clarification. I am much less confused. ;)

First, Turbo Codes and LDPCs work best when soft information is available, i.e., a confidence level on each bit as to whether it is a one or a zero. Your application may lend itself to this, since the measurement of the biometric may not be binary for each bit that was originally quantized. In other words, if the biometric measurement is made on a scale rather than as a hard binary decision for each bit, using that "soft" scaling will significantly improve the performance of many decoders and enable the efficient use of Turbo Codes or LDPCs.

Second, would it not be possible to make just the information part of C the same length as W? Then XORing W and K (assuming C is a systematic code) would produce P, and P plus the parity bits from C could be stored together (or would that give away too much information about the key?). If that could be done, then the number of parity bits could be increased as necessary to get the level of correction you're looking for. Since the parity bits wouldn't be subject to error, the correcting capability would concentrate on the information field, which would reduce the number of parity bits needed.

Turbo Codes and LDPCs are systematic codes, and if the biometric lends itself to soft-decision confidence generation for each bit, then they would probably be good candidates for this sort of thing (assuming you can find something that corrects at 32%). That's quite a bad error rate to try to overcome. Usually "very bad" error rates are in the 10^-2 region, and 10^-1 would be "extremely awful". I think correcting an error rate of one in three with a codeword of only 2047 bits is going to be a tough nut to crack.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
Blog: http://www.dsprelated.com/blogs-1/hf/Eric_Jacobsen.php
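Eric's systematic-code variant can be sketched the same way (again purely illustratively: the `parity_of` and `decode_systematic` callables are assumptions standing in for a real systematic encoder/decoder, and whether exposing the parity in the clear leaks key material is exactly the open question in the parenthetical above):

```python
def xor_bits(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def enroll_systematic(W, K, parity_of):
    """K is the info word, same length as W; parity is stored in the clear."""
    P = xor_bits(W, K)                 # masks the information part only
    return P, parity_of(K)

def verify_systematic(P, parity, W_prime, decode_systematic):
    """The stored parity is error-free, so correction targets only K."""
    noisy_K = xor_bits(P, W_prime)     # equals K XOR (W XOR W')
    return decode_systematic(noisy_K, parity)
```

The design point is the one Eric makes: because the parity bits never pass through the noisy "channel" (the biometric comparison), all of the code's correcting power is spent on the information field, which reduces the parity overhead needed for a given correction level.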
Reply by ●July 21, 2008
> First, Turbo Codes and LDPCs work best when soft information is
> available, i.e., a confidence level on each bit as to whether it is a
> one or a zero. Your application may lend itself to this, since the
> measurement of the biometric may not be binary for each bit that was
> originally quantized. In other words, if the biometric measurement is
> made on a scale rather than as a hard binary decision for each bit,
> using that "soft" scaling will significantly improve the performance
> of many decoders and enable the efficient use of Turbo Codes or LDPCs.

Unfortunately, no soft information is available. The iris template is binary, with no systematic correlation.

> Second, would it not be possible to make just the information part of
> C the same length as W? Then XORing W and K (assuming C is a
> systematic code) would produce P, and P plus the parity bits from C
> could be stored together (or would that give away too much information
> about the key?)

That's true. Storing P in the clear would leak too much information.

> That's quite a bad error rate to try to overcome. Usually "very bad"
> error rates are in the 10^-2 region, and 10^-1 would be "extremely
> awful". I think correcting an error rate of one in three with a
> codeword of only 2047 bits is going to be a tough nut to crack.

We may be able to double the template size (to 4094 bits), but 32% error correction will still be needed. Will that help?

Moon
Reply by ●July 21, 2008
On Jul 20, 10:43 pm, Broken Arrow <moon....@gmail.com> wrote:
> We may be able to increase the template size to 2 times (4094 bits)
> but 32% error correction will still be needed. Will it help?
>
> Moon

It depends on whether the errors between the two templates are correlated or not. At any rate, it seems from the discussion so far that there's not much of a "clean model" that can be used. But do you have empirical data from which you can learn the statistics of the errors? Are they even independent of the "transmitted signal"? I suspect not, and this may be either an opportunity to do something clever or a problem in trying to cast this as a communication-channel problem.

Julius
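The empirical study Julius suggests is straightforward to set up once same-user template pairs are available. A sketch (the input format and the mirror statistic are my own framing, not from the thread):

```python
def error_stats(pairs):
    """Estimate the per-bit error rate and the i vs. i+n/2 mirroring,
    given a list of (W, W') same-user template pairs as 0/1 lists."""
    n = len(pairs[0][0])
    half = n // 2
    bit_errors = total = first_half_errors = mirrored = 0
    for w, w_prime in pairs:
        e = [a ^ b for a, b in zip(w, w_prime)]  # empirical error vector
        bit_errors += sum(e)
        total += n
        for i in range(half):
            if e[i]:
                first_half_errors += 1
                if e[i + half]:
                    mirrored += 1
    rate = bit_errors / total
    # P(error at i + n/2 | error at i): 1.0 would confirm the perfect
    # mirroring assumed earlier in the thread.
    mirror_prob = mirrored / first_half_errors if first_half_errors else float("nan")
    return rate, mirror_prob
```

If the measured mirror probability is well below 1.0, the "t dependent errors" premise of the original question weakens, and the problem reverts to plain 32% random-error correction.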