This question is about getting the concepts clear rather than the nitty-gritty details.
In convolutional coding, I am confused about how puncturing works: bits are removed at the encoder, yet the stream can still be decoded. Doesn't puncturing break the sequence at the encoder that the decoder relies on?
Another question: conceptually, how does a decoder work?
If I have a rate-1/3 code, does the decoder run three decoders in parallel, one for each of the three generator outputs, and then combine their values to come up with the actual output?
Check out page 22 of this: http://www.telecom.tuc.gr/Greek/Liavas/Seminars/co...
You are not removing bits of data; you are just not sending a code word. Code words are maps from the data, not the data itself. Because the coding recovers the data, the idea is that you can speed things up without losing information. The price you pay is a loss of redundancy. A rate-1/3 code means that for each bit of input there are 3 bits of output. There is only one encoder and one decoder; each code word tells you what bit should be used for data.
Conceptually, the decoder looks at the codes coming back and uses a lookup table to translate codes back to bits. The point is that you sometimes get codes which are not in the lookup table, so then you have to use some math to recover the data. The whole point of sending more bits than data is to allow you to do that.
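To make the "3 bits out per bit in" idea concrete, here is a minimal rate-1/3 convolutional encoder sketch in Python. The three generator taps are chosen purely for illustration and are not taken from any standard:

```python
# Hypothetical rate-1/3 convolutional encoder (constraint length 3).
# Taps below are illustrative only; real systems use standardized polynomials.

G = [0b111, 0b011, 0b101]  # three generator polynomials, one per output bit

def encode(bits):
    state = 0  # shift register holding the most recent input bits
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111  # shift the new bit in
        for g in G:
            # each output bit is the parity (XOR) of the tapped register bits
            out.append(bin(state & g).count("1") % 2)
    return out

print(encode([1, 0, 1, 1]))  # 12 coded bits for 4 data bits
```

Note that all three outputs are computed from the *same* shift register; the generators differ only in which register taps they XOR together.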
First I want to say that I understand your frustration in understanding the decoding process. Few references start with a more heuristic explanation. I will try to cover as much as possible, but you really should find a more detailed explanation. Unfortunately I do not have a good one at hand.
For all the experts, I will use simple terms here to make the explanation simpler to understand.
Regarding puncturing: an FEC code corrects errors, so if you remove a few bits, the code can correct them much as it corrects channel errors. Sounds simple enough, leaving aside how these bits are "presented" to the decoder. The logic is that if you remove a few bits and let the code correct them, the coding gain will be a little lower but the code rate will be higher, and maybe we can find a combination that works.
For a convolutional decoder, the decoder must see all encoded bits. The decoder is presented with the received bits as LLRs (Log Likelihood Ratios), or soft decisions. In most cases the soft-decision value represents either a 0 or a 1, with an amplitude signifying the "certainty". During de-puncturing, the punctured bits are presented to the decoder as "uncertain" bits. In some implementations the decoder also has a puncture-indicator input to signal that the input bit is punctured.
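A sketch of that de-puncturing step (the function name and LLR convention are illustrative, not from any particular library): punctured positions are filled with an LLR of 0.0, which carries no information about the bit, so the decoder treats it as completely uncertain.

```python
# Illustrative de-puncturing: re-insert "uncertain" LLRs at punctured positions.
# Convention assumed here: sign encodes the bit guess, magnitude the certainty,
# and 0.0 means "no information at all".

def depuncture(received, pattern):
    """Expand received LLRs using a repeating pattern (1 = sent, 0 = punctured)."""
    out, n, i = [], 0, 0
    while n < len(received):
        if pattern[i % len(pattern)]:
            out.append(received[n])  # this position was actually transmitted
            n += 1
        else:
            out.append(0.0)          # punctured: neutral LLR, decoder learns nothing
        i += 1
    return out

# Pattern [1, 0, 1]: every second bit of each triple was punctured at the encoder.
print(depuncture([2.5, -1.0, 0.8, 3.1], [1, 0, 1]))
```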
Regarding the operation of the decoder, that requires a much more detailed understanding, but I will try to simplify it. Take your example of a rate-1/3 encoder: each input bit generates 3 output bits from three generators. All three generators share the same state, which is the information-bit shift register. Each output bit depends on that generator state. The decoder's job is to estimate the best match for the data-bit SEQUENCE, not bit by bit. The decoder goes through all possible combinations of the received sequence and picks the sequence with the highest probability. In your example, the receiver should present three LLRs per input bit to the decoder. The decoder compares the received LLRs against all possible states and inputs (0 or 1) of the generator state (or simply the shift register). In theory the decoder should receive all transmitted bits, go through all possible combinations, and only then make a final decision. This is impractical in most cases, so the decoder makes a traceback decision every so many bits.
There are a few algorithms used to decode convolutional codes, but for the simple case of a plain convolutional code (not turbo coding) the most common is Viterbi decoding. This algorithm greatly simplifies the search over all possible combinations and is very efficient.
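As a rough illustration of the sequence search described above, here is a minimal hard-decision Viterbi sketch for a toy rate-1/3 code. The generator taps and the code structure are assumptions made for this example (a shift register of 3 bits, so 4 trellis states); real decoders use soft decisions and standardized polynomials:

```python
# Toy hard-decision Viterbi decoder for an illustrative rate-1/3 code.
# Assumptions: 3-bit encoder register (4 trellis states of the previous 2 bits),
# encoder starts in the all-zero state, taps G are for illustration only.

G = [0b111, 0b011, 0b101]

def expected(state, bit):
    # output triple produced when `bit` shifts into a register whose
    # previous two bits are `state`
    reg = ((state << 1) | bit) & 0b111
    return [bin(reg & g).count("1") % 2 for g in G]

def viterbi(received):
    INF = float("inf")
    metrics = [0.0, INF, INF, INF]   # path metric per state; start in state 0
    paths = [[], [], [], []]         # surviving bit sequence per state
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                ns = ((s << 1) | b) & 0b11
                # branch metric: Hamming distance to the expected output triple
                dist = sum(x != y for x, y in zip(triple, expected(s, b)))
                if metrics[s] + dist < new_m[ns]:
                    new_m[ns] = metrics[s] + dist
                    new_p[ns] = paths[s] + [b]
        metrics, paths = new_m, new_p
    best = min(range(4), key=lambda s: metrics[s])  # best final survivor
    return paths[best]

# Decodes 12 received bits (4 triples) back to 4 data bits.
print(viterbi([1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1]))  # → [1, 0, 1, 1]
```

Note how the decoder never looks at one triple in isolation: each survivor carries an accumulated metric over the whole received sequence, which is exactly the "sequence, not bit by bit" estimation described above.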
I hope I shed some light on this issue.
Dear @drmike, @motilito, @Tim Wescott,
Thanks a lot for spending your valuable time and providing inputs.
A few additional questions based on the inputs:
If there are 3 generators, then in the worst case (max. redundancy) we would get 3 output bits for every input bit. Is this correct?
From here, we can only reduce redundancy, by puncturing. Is this correct?
Puncturing involves dropping one full word at the output.
For example, for a given input x, if the output is abc, then one would completely drop abc when puncturing. Is this correct?
In wireless communication systems, would the punctured m/n rate change dynamically depending on the channel quality?
Is there some math involved in selecting the generator polynomials?
Thanks a lot ...
To your example: if the input is x and the output is abc, you might puncture one of those bits to raise the code rate to 1/2. E.g., if the output of the encoder around x is abc, you might transmit only ac. In the receiver, a'c' are received, and the decoder is then fed a'0c', where 0 represents a neutral, unweighted "unknown" symbol (as Tim put it). The decoder then decodes a'0c' and the surrounding (de-punctured) symbols into an estimate of the transmitted symbols. So no, a punctured word is not dropped entirely; only some of its bits are.
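That abc → ac → a'0c' round trip can be sketched as follows (the pattern, names, and the choice of 0 as the neutral value are all illustrative):

```python
# Round-trip sketch of puncturing the middle symbol of each output triple.
# Keeping 2 of every 3 coded bits turns a rate-1/3 code into a rate-1/2 code.

PATTERN = (1, 0, 1)  # 1 = transmit, 0 = punctured (illustrative choice)

def puncture(symbols, pattern=PATTERN):
    # transmitter side: simply skip the punctured positions
    return [s for i, s in enumerate(symbols) if pattern[i % len(pattern)]]

def depuncture(symbols, pattern=PATTERN):
    # receiver side: re-insert a neutral "unknown" value at punctured positions
    out, it = [], iter(symbols)
    for i in range(len(symbols) * len(pattern) // sum(pattern)):
        out.append(next(it) if pattern[i % len(pattern)] else 0)
    return out

tx = puncture(["a", "b", "c"])  # → ["a", "c"]
rx = depuncture(tx)             # → ["a", 0, "c"]
print(tx, rx)
```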
Also as Tim mentioned, the level of your question is of the sort easily learned from study of a good text.
Thanks a lot. You are right. I should probably spend a little more time on this topic. I have access to MIT lecture notes; I will have a look. Thanks again.
Punctured codes are done by pre-arrangement with the decoder. So you remove transmitted symbols from the encoded stream, and the decoder (or perhaps even the stage before the decoder) inserts them back in, but marked as "unknown". The decoder then deals with a fully populated stream of symbols.
You really need to pick up a book on error correcting codes. There are actually a bunch of different approaches to decoding convolutional codes. The most common one might be the Viterbi decoder, which maintains a bunch of possible decoded streams, along with the likelihood that each stream is the right one. As each symbol comes in, the decoder spawns as many new streams as necessary, recalculates the likelihood of each stream given the latest input symbol, and then trims the number of streams back down to whatever number it is designed to carry.
But -- you really, really need to get a book.