
Number Theory for Codes

Mike, October 22, 2015

Everything in the digital world is encoded.  ASCII and Unicode are combinations of bits that have specific meanings to us.  If we try to interpret a compiled program as Unicode, the result is a lot of garbage (and beeps!).  To reduce errors in transmissions over radio links we use Error Correction Codes, so that even when bits are lost we can recover the original ASCII or Unicode.  To prevent anyone from understanding a transmission, we can encrypt the raw data before adding the ECC.  Sometimes we use Elliptic Curve Cryptography to establish the key.  This all gets very confusing when the term "coding" is used around elliptic curves, because the abbreviation ECC has two meanings!

There are a great number of books on Number Theory for mathematicians, and many engineering books that skim over the top to give just the basics needed to implement codes in hardware or software.  What I'd like to do here is whet your appetite for Number Theory and help remove the "black magic" associated with Error Correction Codes and the Finite Fields used in cryptography.  I'll show how the CRC (Cyclic Redundancy Check) works as an example.

Let's start with the concept of "number".  A number is just a symbol that tells us how many "things" we have: 5 apples, 1 billion transistors, etc.  The symbols we normally use are base 10 - the digits 0 through 9.  With computers we use base 2 or base 16 (once upon a time base 8, but the use of that is pretty much obsolete).  The base determines the meaning of the number: 123 base 10 is $1\cdot10^2 + 2\cdot10^1 + 3\cdot10^0$.  In binary (base 2) this is $1\cdot2^6 + 1\cdot2^5 + 1\cdot2^4 + 1\cdot2^3 + 0\cdot2^2 + 1\cdot2^1 + 1\cdot2^0$.

One of the things we worry about when doing math on a microprocessor is overflow.  On a 16 bit micro, when we add two numbers whose result is bigger than 65535, what we get is the remainder of the result we want.  The result is what we call "congruent" to the full number modulo $2^{16}$.  This is why signed and unsigned arithmetic can be mixed: the representation of -30000 is 35536, so that when we "add" 31000 - 30000 we overflow into 1000: 0x7918 + 0x8AD0 = 0x03E8 with overflow.
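
To see the wraparound in action, here is a minimal C sketch using the hex constants from the example above:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* -30000 and 35536 share the same 16 bit pattern, 0x8AD0. */
    uint16_t a = 0x7918;   /* 31000 */
    uint16_t b = 0x8AD0;   /* 35536, i.e. -30000 in two's complement */
    uint16_t sum = a + b;  /* wraps around modulo 2^16 */

    printf("0x%04X + 0x%04X = 0x%04X (%u)\n", a, b, sum, sum);
    /* prints: 0x7918 + 0x8AD0 = 0x03E8 (1000) */
    return 0;
}
```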

This ability to do congruent modular arithmetic is really useful.  It gives us a self-contained system of numbers: a set together with the operations of addition and multiplication.  Because we have a fixed set of bits, the set is finite.  (Strictly speaking, arithmetic modulo $2^{16}$ only forms a finite ring; to get a true finite "field" the modulus must be prime, as we'll see in a moment.)  No matter what we do we always have a 16 bit result when we add or multiply two 16 bit values into a 16 bit register.  For processors which produce a 32 bit result we can just keep the low 16 bits and it will be the correct remainder.
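
The same trick works for multiplication.  A small C sketch showing that truncating a 32 bit product to 16 bits gives the remainder modulo $2^{16}$:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t full = (uint32_t)300 * 700;  /* 210000, needs more than 16 bits */
    uint16_t kept = (uint16_t)full;       /* keep only the low 16 bits */

    printf("low 16 bits = %u\n", kept);                   /* 13392 */
    printf("210000 mod 65536 = %u\n", 210000u % 65536u);  /* also 13392 */
    return 0;
}
```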

A finite field (also called a Galois Field after the mathematician who discovered it) cannot be just any size: the number of elements must be a prime or a power of a prime.  Fields of prime size have special properties; all the arithmetic is done modulo the prime.  For example, in a field of size 13, if we add 7 + 8 we get 2.  Let's see what happens when we compute $2^n \bmod 13$:

n          :  1   2   3   4   5   6   7   8   9  10  11  12  13
2^n mod 13 :  2   4   8   3   6  12  11   9   5  10   7   1   2

The top row holds the values of n and the bottom row the values of $2^n \bmod 13$.  We call 2 a "generator" because every value from 1 through 12 appears exactly once as a result of the exponentiation.  Zero is not included as an exponent because anything to the zero power is 1.
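
A few lines of C are enough to reproduce the table and check that 2 really does hit every value from 1 to 12:

```c
#include <stdio.h>

int main(void)
{
    /* Print 2^n mod 13 for n = 1..12. Each nonzero residue
       appears exactly once, so 2 is a generator. */
    int x = 1;
    for (int n = 1; n <= 12; n++) {
        x = (x * 2) % 13;
        printf("2^%-2d mod 13 = %2d\n", n, x);
    }
    return 0;
}
```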

The smallest prime, 2, is really convenient for use on computers.  The truth table for addition over a field of size 2 is

a  b  a+b
0  0   0
0  1   1
1  0   1
1  1   0

which is just exclusive or. 

If we choose a different base we can do some interesting things.  Base 2, 10 or 16 are fixed numbers, so when we encode integers we immediately know what the number means.  But suppose we choose as our base some number $\beta$ whose value we leave unspecified.  Then our "number" looks like this

$$ a_3\beta^3+a_2\beta^2+a_1\beta^1+a_0\beta^0 $$

This is called a "polynomial basis" because the number is a polynomial in $\beta$.  I don't even care what the actual value of $\beta$ is!  For coding theory, we never need to evaluate the number.

If the coefficients come from the size 2 finite field (which I'll call GF(2)), the values of $a_i$ are only 0 or 1.  What is nice about using GF(2) for our coefficients is that when we add two polynomials we can use the XOR rule on each power of $\beta$.  For example, if we take $\beta^2 + 1$ and add it to $\beta + 1$, we get $\beta^2 + \beta$.
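
Packing the coefficients into the bits of a machine word turns this addition into a single XOR instruction.  A quick C sketch, with masks of my own choosing to hold the example polynomials:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Bit i holds the coefficient of beta^i. */
    uint8_t p = 0x05;     /* beta^2 + 1 */
    uint8_t q = 0x03;     /* beta + 1 */
    uint8_t sum = p ^ q;  /* coefficient-wise addition mod 2 */

    printf("0x%02X + 0x%02X = 0x%02X\n", p, q, sum);  /* 0x06: beta^2 + beta */
    return 0;
}
```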

We can also factor polynomials just like we do in algebra.  The result of the example above factors as $\beta(\beta + 1)$.  The term $\beta + 1$ cannot be factored or reduced to a lower form; it is a "prime" or "irreducible" polynomial.  And just like prime numbers, irreducible polynomials have some special properties.

Let's look at the prime polynomial $\beta^4 + \beta + 1$.  Taking powers of $\beta$ modulo this prime polynomial leaves us with powers of $\beta$ that are 3 or less.  The reason is simple: when we divide $\beta^j$ by $\beta^4 + \beta + 1$ we get $\beta^{j-4}$ plus a remainder.  When we divide $\beta^4$ by $\beta^4 + \beta + 1$ the remainder is $\beta + 1$.

So let's take a look at what happens when we build a table of $\beta^j$ modulo the prime polynomial $\beta^4 + \beta + 1$.

$\beta^0 = 1$
$\beta^1 = \beta$
$\beta^2 = \beta^2$
$\beta^3 = \beta^3$
$\beta^4 = \beta + 1$
$\beta^5 = \beta^2 + \beta$
$\beta^6 = \beta^3 + \beta^2$
$\beta^7 = \beta^3 + \beta + 1$
$\beta^8 = \beta^2 + 1$
$\beta^9 = \beta^3 + \beta$
$\beta^{10} = \beta^2 + \beta + 1$
$\beta^{11} = \beta^3 + \beta^2 + \beta$
$\beta^{12} = \beta^3 + \beta^2 + \beta + 1$
$\beta^{13} = \beta^3 + \beta^2 + 1$
$\beta^{14} = \beta^3 + 1$
$\beta^{15} = 1$

This particular polynomial is called a "primitive" polynomial because $\beta$ is a generator: its powers produce every nonzero polynomial of degree 3 or less exactly once before cycling.  To see this, let's look at the table again, but with the exponents in hex and the polynomial coefficients packed into hex as well:

j (hex)         : 0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
$\beta^j$ (hex) : 1  2  4  8  3  6  C  B  5  A  7  E  F  D  9  1

This looks an awful lot like the $2^j \bmod 13$ table before!  And while we can use bits in a computer to represent the coefficients of polynomials over GF(2), the meaning is completely different: in the first case we are dealing with integers, and in the second case we are dealing with polynomials.
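
In fact the whole table can be generated with a shift and a conditional XOR.  A short C sketch (the mask 0x3 is the $\beta + 1$ remainder, and 0x10 flags a $\beta^4$ overflow):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Powers of beta modulo beta^4 + beta + 1, coefficients in bits. */
    uint8_t x = 1;                 /* beta^0 */
    for (int j = 0; j <= 15; j++) {
        printf("beta^%-2d = 0x%X\n", j, x);
        x <<= 1;                   /* multiply by beta */
        if (x & 0x10)              /* degree hit 4: reduce */
            x = (x & 0x0F) ^ 0x3;  /* replace beta^4 with beta + 1 */
    }
    return 0;
}
```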

The Cyclic Redundancy Check takes a message of any length and turns it into a giant polynomial.  It gets its name from the act of cycling in all the message bits one at a time.  If the primitive polynomial has $\beta^8$ as its highest term (the degree-8 case of an 8 bit CRC), the first 8 bits of the message are cycled in directly.  When the 9th bit gets rotated in, it is reduced by the primitive polynomial, similar to how the $\beta^4$ term was reduced in the table above.

The actual implementation of a CRC can be made quite efficient.  If we have an 8 bit remainder CRC, we can build a 256 entry table of remainders for what is really a 16 bit intermediate result.  We add the remainder of the upper 8 bits to the next 8 bits of the message using XOR.  This works because the remainder of the upper 8 bits modulo the primitive polynomial is a polynomial of 8 or fewer bits, and the polynomial coefficients are combined with addition modulo 2 - which as we saw above is just XOR.
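
As a concrete illustration, here is a table-driven CRC-8 sketch in C.  The polynomial 0x07 (that is, $x^8 + x^2 + x + 1$ with the implicit $x^8$ bit dropped) is just a common choice I picked for the example; any degree-8 polynomial drops in the same way, and real CRC standards add details like initial values and bit reflection that are omitted here:

```c
#include <stdint.h>
#include <stddef.h>

static uint8_t crc8_table[256];

/* Remainder of (i * x^8) modulo the polynomial, for every byte i. */
static void crc8_init(void)
{
    for (int i = 0; i < 256; i++) {
        uint8_t r = (uint8_t)i;
        for (int bit = 0; bit < 8; bit++)
            r = (r & 0x80) ? (uint8_t)((r << 1) ^ 0x07) : (uint8_t)(r << 1);
        crc8_table[i] = r;
    }
}

/* Process the message a byte at a time using the table. */
static uint8_t crc8(const uint8_t *msg, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++)
        crc = crc8_table[crc ^ msg[i]];  /* XOR in next byte, look up remainder */
    return crc;
}
```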

In hardware, the Linear Feedback Shift Register is simple and easy to implement in an FPGA.  When shifting bits over, if the most significant bit was set, we XOR in the remainder portion.  For our primitive polynomial above, this means an XOR on bits 0 and 1 driven by the output of bit 3: when bit 3 is set and we shift to get the next bit (multiplying by $\beta$), we add in $\beta + 1$ to our result.  This is exactly how the table above was computed.
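
In C, one clock of that shift register looks like this.  The sketch uses the degree-4 example polynomial, and `lfsr_step` is my own illustrative name rather than any standard API:

```c
#include <stdint.h>

/* One clock of the LFSR for beta^4 + beta + 1: shift in a message
   bit, and if a 1 fell out of bit 3, XOR beta + 1 back in. */
static uint8_t lfsr_step(uint8_t state, uint8_t bit_in)
{
    uint8_t overflow = state & 0x8;  /* output of bit 3 before the shift */
    state = (uint8_t)(((state << 1) | (bit_in & 1)) & 0xF);
    if (overflow)
        state ^= 0x3;                /* feedback taps on bits 0 and 1 */
    return state;
}
```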

There is a lot more to Number Theory and polynomials.  For example, Fermat's "little" theorem states that $a^{p-1} \equiv 1 \pmod{p}$ for any $a$ not divisible by the prime $p$.  The same idea carries over to finite fields, and it turns out to be a good way to compute an inverse modulo a polynomial: in GF($2^4$) every nonzero $\alpha$ satisfies $\alpha^{15} = 1$, so $\alpha^{14}$ must be $\alpha^{-1}$.  With multiplication and division we can do algebra.  Only instead of doing it on numbers, we do it on polynomials.
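
To make that concrete, here is a sketch in C for GF($2^4$) with the same $\beta^4 + \beta + 1$ polynomial: multiplication is shift-and-XOR with reduction, and the inverse is just 14 multiplications (the names `gf16_mul` and `gf16_inv` are my own):

```c
#include <stdio.h>
#include <stdint.h>

/* Multiply two elements of GF(2^4) modulo beta^4 + beta + 1. */
static uint8_t gf16_mul(uint8_t a, uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 4; i++) {
        if (b & 1)
            r ^= a;                    /* add a shifted copy of a */
        b >>= 1;
        a <<= 1;                       /* multiply a by beta */
        if (a & 0x10)
            a = (a & 0x0F) ^ 0x3;      /* reduce beta^4 to beta + 1 */
    }
    return r;
}

/* Inverse via the Fermat analog: a^14 = a^(-1), since a^15 = 1. */
static uint8_t gf16_inv(uint8_t a)
{
    uint8_t r = 1;
    for (int i = 0; i < 14; i++)
        r = gf16_mul(r, a);
    return r;
}

int main(void)
{
    uint8_t a = 0x6;  /* beta^2 + beta */
    printf("inv(0x%X) = 0x%X, check: 0x%X\n",
           a, gf16_inv(a), gf16_mul(a, gf16_inv(a)));  /* check: 0x1 */
    return 0;
}
```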

That's enough for now.  A web search on Number Theory, Cyclic Redundancy Check and polynomials over a Galois Field should keep anyone off the street for a while!

Comment by jms_nh, October 24, 2015
nice article!

Consider using MathJax for writing equations; it makes it really easy to display equations in a professional-looking form (e.g. $a_3\beta^3 + a_2\beta^2 + a_1\beta + a_0$)
Comment by drmike, October 25, 2015
Yeah - much nicer looking!
Comment by drmike, October 24, 2015
I did not realize there is a difference. Thanks, I'll check into it!
Comment by Rick Lyons, November 14, 2015
Hello Dr. Mike,
Thanks for the interesting tutorial. I wonder, are you the talented and friendly Mike Rosing I met at the 2004 COMP.DSP Conference in Minnesota? Or are you the other Mike Rosing I saw last night on the "America's Most Wanted" television show?

Regards,
[-Rick-]
Comment by drmike, November 14, 2015
:) Howdy Rick! It has been a long time since comp.dsp. Unfortunately I haven't done much DSP work in the last couple of years, but it hasn't stopped me from having fun with math. I do have a copy of "Streamlining DSP" on my shelf, congrats for that!
Mike
or as I used to sign
Patience, persistence, truth,
Dr. mike
Comment by Rick Lyons, November 14, 2015
Hi Dr. Mike,
Thanks. Mike, if you're interested in the errata for my "Streamlining DSP" book, send me an e-mail at 'R_dot_Lyons_at_ieee_dot_org'.
Regards,
[-Rick-]
