DSPRelated.com
Forums

Shannon's channel capacity equation

Started by Sharan123 November 18, 2015
Hello,

Probably a very fundamental question. The channel capacity is given by,

C = BW * log2(1 + S/N)

Here, for a given bandwidth, I can increase the rate of data transfer by
using different encoding schemes, 64-QAM, 256-QAM, etc. being some of them.
So, where is the encoding method taken into account in the above equation?
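For a quick feel for the formula, here is a minimal sketch (the 1 MHz
bandwidth and 20 dB SNR are just illustrative numbers, not from the thread):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = BW * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: a 1 MHz channel at 20 dB SNR (S/N = 100 linear)
snr = 10 ** (20 / 10)                 # dB -> linear
c = shannon_capacity(1e6, snr)
print(f"{c / 1e6:.3f} Mbit/s")        # about 6.658 Mbit/s
```

Note that the modulation never appears as an argument: the formula bounds
any scheme operating in that bandwidth at that SNR.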

Thanks,
---------------------------------------
Posted through http://www.DSPRelated.com
On 11/18/2015 10:27 AM, Sharan123 wrote:
> Here, for a given bandwidth, I can increase the rate of data transfer by
> using different encoding schemes, 64-QAM, 256-QAM, etc. being some of
> them. So, where is the encoding method taken into account in the above
> equation?

S/N

Different encoding (actually modulation) provides different degrees of
separation between the levels of modulation. I'm not clear on how that is
reflected in the above equation, though, so I expect it is not actually
indicated. That equation is an absolute maximum channel capacity, and any
given scheme will only approximate it.

--
Rick
On Wed, 18 Nov 2015 09:27:46 -0600, "Sharan123" <99077@DSPRelated>
wrote:

> So, where is the encoding method taken into account in the above
> equation?

It isn't. Capacity is a theoretical bound; how you get there is up to your
ability to build a system that achieves it. Modulation, error coding,
etc., are all up for grabs.

That said, you can also closely estimate the capacity of a given
modulation scheme using the constellation and Ungerboeck's method
described in his TCM paper:

G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals," IEEE
Trans. Inform. Theory, vol. IT-28, no. 1, pp. 55-67, Jan. 1982.

That still doesn't tell you how to approach capacity even for a given
modulation. You still need an effective, capacity-approaching forward
error correction code to get there. What you use is up to you.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
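The constellation-constrained capacity Eric refers to can be estimated
numerically. The sketch below is my own illustration (not code from the
paper): a Monte Carlo estimate of the mutual information of an
equiprobable constellation over a complex AWGN channel, which is the
quantity Ungerboeck-style analysis evaluates.

```python
import numpy as np

def constellation_capacity(points, snr_db, n_samples=200_000, rng=None):
    """Monte Carlo estimate of I(X;Y) in bits/symbol for equiprobable
    symbols drawn from `points`, sent over a complex AWGN channel."""
    rng = rng or np.random.default_rng(0)
    pts = points / np.sqrt(np.mean(np.abs(points) ** 2))  # normalize Es = 1
    m = len(pts)
    n0 = 10 ** (-snr_db / 10)                             # noise variance
    x = rng.choice(pts, size=n_samples)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(n_samples)
                               + 1j * rng.standard_normal(n_samples))
    y = x + noise
    # I(X;Y) = log2(M) + E[ log2( p(y|x) / sum_j p(y|x_j) ) ]
    d2 = np.abs(y[:, None] - pts[None, :]) ** 2           # (n_samples, M)
    log_num = -np.abs(y - x) ** 2 / n0
    log_den = np.logaddexp.reduce(-d2 / n0, axis=1)
    return np.log2(m) + np.mean(log_num - log_den) / np.log(2)

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
print(constellation_capacity(qpsk, snr_db=10))  # approaches 2 b/sym at high SNR
```

At low SNR the estimate tracks the unconstrained Shannon capacity; at high
SNR it saturates at log2(M), which is exactly the gap coded modulation
tries to manage.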
On Wed, 18 Nov 2015 09:27:46 -0600, Sharan123 wrote:

> So, where is the encoding method taken into account in the above
> equation?

Encoding (and modulation) is only mentioned in passing. The actual
fundamental measure (if you read the paper) is the number of discrete
regions that can be packed into the space that you can construct with a
given bandwidth and a given signal-to-noise ratio over some large amount
of time. Shannon used the word "spheres". The log2 part is just reducing
this number to a number of bits.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
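The sphere-counting view above can be turned into arithmetic: over an
interval T, roughly M = (1 + S/N)^(BW*T) waveforms are distinguishable,
and log2(M)/T recovers the capacity formula. A small sketch (the
telephone-channel-like numbers are my own illustration):

```python
import math

# Distinguishable messages in time T: M ~ (1 + S/N)^(BW * T)
# so bits = log2(M) = BW * T * log2(1 + S/N), and rate = bits / T.
bw_hz, t_s, snr = 3000.0, 1.0, 1000.0   # ~3 kHz channel at 30 dB SNR
bits = bw_hz * t_s * math.log2(1 + snr)
print(f"{bits / t_s:.0f} bit/s")        # about 29902 bit/s
```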
In article <n2i852$kf8$1@dont-email.me>, rickman <gnuarm@gmail.com> wrote:

>On 11/18/2015 10:27 AM, Sharan123 wrote:
>> So, where is the encoding method taken into account in the above
>> equation?
>
>Different encoding (actually modulation) provides different degrees of
>separation between the levels of modulation. I'm not clear on how that
>is reflected in the above equation though.

One way to view it is that the coding and modulation parameters become the
value C / BW in the expression above. Suppose, for example, the system is
64-QAM (6 bits per symbol) and your FEC coding is rate 2/3. Multiplying
these gives you 4 bits per symbol, so the equation becomes

4 = (C / BW) = log2(1 + S/N)

Solve this for S/N and you now have the SNR at which, per Shannon's
formula, you could ideally code 4 bits per symbol. (This makes the further
assumption that the symbol rate equals the bandwidth, which is a little
off, but close enough for a rough estimate.)

In reality, your coded modulation will fall short of this. If it requires
5 dB more SNR to perform, then your coded modulation can be said to be
5 dB away from the Shannon limit.

Stever
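Working the 64-QAM, rate-2/3 example above through numerically (my own
sketch of the same arithmetic):

```python
import math

bits_per_symbol = 6                         # 64-QAM
code_rate = 2 / 3                           # FEC code rate
info_bits = bits_per_symbol * code_rate     # 4 information bits/symbol

# Invert C/BW = log2(1 + S/N) for the ideal SNR at this spectral efficiency
snr_linear = 2 ** info_bits - 1             # = 15
snr_db = 10 * math.log10(snr_linear)
print(f"Shannon limit for {info_bits:.0f} b/s/Hz: {snr_db:.2f} dB")  # ~11.76 dB
```

A real 64-QAM rate-2/3 system needing, say, 16.76 dB would then be 5 dB
from this limit.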
Thank you all. In hindsight, I do realize that the equation cannot predict
the type of encoding one can employ.

I assume that in the absence of noise (N = 0), one can, in theory, achieve
a very large (infinite) rate ...
On Fri, 20 Nov 2015 09:44:45 -0600, Sharan123 wrote:

> I assume that in the absence of noise (N = 0), one can, in theory,
> achieve a very large (infinite) rate ...

Yes. The rate is predicated on the notion that you have a modulation
space whose size is determined by the available power, and that the
distance from one distinct symbol to the next is determined by the noise
level.

If you can find the original paper, or his tutorial paper, the basic
theory is really quite accessible.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
"Sharan123" <99077@DSPRelated> writes:

> I assume that in the absence of noise (N = 0), one can, in theory,
> achieve a very large (infinite) rate ...

... or with infinite signal power.

--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
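Both limits are visible in the formula itself: as N -> 0 (or S -> inf) the
capacity grows without bound, but only logarithmically in the SNR. A quick
table (my own illustration, normalized to 1 Hz of bandwidth):

```python
import math

bw_hz = 1.0   # normalized, so capacity is in b/s per Hz
for snr_db in (0, 20, 40, 60, 80, 100):
    snr = 10 ** (snr_db / 10)
    c = bw_hz * math.log2(1 + snr)
    print(f"{snr_db:3d} dB -> {c:6.2f} b/s/Hz")
```

Each extra 20 dB of SNR buys only about log2(100) ~ 6.6 b/s/Hz more,
which is why widening the bandwidth is usually cheaper than raising power.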
Dear Tim, Randy, 

Thanks again ...
On November 18, Tim Wescott wrote:

>> C = BW * log2(1 + S/N)
>> So, where is the encoding method taken into account in the above
>> equation?
>
> Encoding (and modulation) is only mentioned in passing. The actual
> fundamental measure is the number of discrete regions that can be
> packed into the space that you can construct with a given bandwidth
> and a given signal to noise ratio over some large amount of time.
> Shannon used the word "spheres".

Yes. However, there remains an enigma here. Shannon discussed both the
discrete and the continuous channel. For the former, he suggested the
obvious solution: error correction via redundancy.

Regarding the latter, Shannon framed it geometrically, as you alluded, to
produce a mathematical work of art. The messages reside in an
N-dimensional space. Then it becomes a packing problem: pack the
hyperspheres optimally into that space, in a minimax sense, to achieve
the capacity (or as close as feasible). Each modulated waveform
transmitted, representing a particular message, then fits into a unique
sphere.

Seen this way, Shannon is talking about analog waveforms and an analog
channel. Even though the information is measured in digital bits, the
encoding is analog. But in practice, engineers simply source encode with
redundancy, then send those bit sequences with a variety of modulation
schemes. No one has tried to implement Shannon's idea directly, i.e.,
encode blocks of source bits into these optimally separated hyperspheres.

--
Rich