DSPRelated.com
Forums

Shannon's channel capacity with arbitrary noise!!

Started by santosh nath September 13, 2003
Hi all,

It is not very common to discuss Shannon's channel capacity with
arbitrary noise. I guess we are more familiar with Shannon's channel
capacity perturbed by white Gaussian noise. I also think there have
been a lot of discussions of Shannon's limit, Shannon capacity, etc.
previously. My post is slightly different from those threads.

Shannon, in his classic "A Mathematical Theory of Communication", The
Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656, July and
October 1948, states Theorem 18 (p. 44) as:

"Theorem 18: The capacity of a channel of band W perturbed by an
arbitrary noise is bounded by the inequalities
W log2((P+N1)/N1) <= C <= W log2((P+N)/N1)

where
P = average transmitter power
N = average noise power
N1 = entropy power of the noise."

If N = N1, as for white noise, it gives our well known formula
C = Wlog2((1+P)/N).

But if N != N1 (which is possible for arbitrary noise), what is the
upper bound on channel capacity? Is it possible that
(P+N)/N1 > (P+N)/N, i.e. N1 < N, for a given P? If so, does it mean
the following:

1. There could exist a new maximum capacity that exceeds the maximum
capacity under white noise.

2. Every noise distribution (including white Gaussian) leads to its
own maximum channel capacity. For simplicity of discussion we assume
all noise sources are uncorrelated with the information signal - I
guess that does not necessarily mean white! (See the numerical sketch
below.)
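For a concrete feel, here is a quick numerical sketch (my own, not from Shannon's paper) that plugs zero-mean uniform noise into Theorem 18. The bandwidth W, the signal power P, and the choice of uniform noise are assumptions made purely for illustration; the point is just that the entropy power N1 comes out smaller than the average power N, so both bounds sit above the white-Gaussian capacity at the same P and N.

```python
import numpy as np

# Theorem 18 bounds for a channel of band W with arbitrary additive noise:
#   W*log2((P+N1)/N1) <= C <= W*log2((P+N)/N1)
# where N is the average noise power and N1 its entropy power.
# Illustrative (assumed) example: zero-mean uniform noise on [-a, a].

W = 1.0e6       # bandwidth in Hz (assumed)
P = 1.0         # average transmitter power (assumed)
a = np.sqrt(3)  # half-width chosen so the average noise power N = a**2/3 = 1

N  = a**2 / 3                             # average power of uniform noise
h  = np.log(2 * a)                        # differential entropy in nats
N1 = np.exp(2 * h) / (2 * np.pi * np.e)   # entropy power: exp(2h)/(2*pi*e)

C_lower = W * np.log2((P + N1) / N1)
C_upper = W * np.log2((P + N) / N1)
C_awgn  = W * np.log2((P + N) / N)        # white Gaussian noise of the same power

print(f"N = {N:.3f}, N1 = {N1:.3f}  (N1 < N since the noise is not Gaussian)")
print(f"lower bound : {C_lower/1e6:.3f} Mbit/s")
print(f"upper bound : {C_upper/1e6:.3f} Mbit/s")
print(f"AWGN, same P and N: {C_awgn/1e6:.3f} Mbit/s")
```

With these (assumed) numbers N1 is about 0.70 while N = 1, so the Theorem 18 bounds come out at roughly 1.28 and 1.51 Mbit/s against exactly 1 Mbit/s for white Gaussian noise of the same power.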

It just came to my mind - there could be something I am missing
somewhere! I hope somebody can shed more light on it.

Regards,
Santosh
Santosh:

Shannon's Additive White Gaussian Noise [AWGN] channel is really the
"worst" case channel one has to consider when determining capacity
limitations, since in a very real sense no other noise can ultimately
be any worse than white Gaussian.

This is true, since... in practice any other noise distribution can
always be "Gaussianized" and/or "whitened" by additional processing of
the signal plus noise before detection and decoding.

For example, simply passing noise with a non-Gaussian distribution
through a sufficient number of linear system "poles" will ultimately
change or "smear" the original arbitrary distribution and cause it to
become more nearly a Gaussian distribution, or at least as close to
Gaussian as one might desire. And of course, the whitening of the
spectrum is just as easily accomplished.
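A small sketch (mine, just to illustrate the "Gaussianization" point above; the uniform starting noise and the pole location are arbitrary choices): uniform noise is run through a cascade of identical one-pole sections, and its excess kurtosis drifts toward 0, the Gaussian value.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200_000)   # uniform noise: excess kurtosis = -1.2

def excess_kurtosis(v):
    """Fourth standardized moment minus 3; 0 for a Gaussian."""
    v = v - v.mean()
    return np.mean(v**4) / np.mean(v**2)**2 - 3.0

p = 0.9                                    # pole location (assumed)
b, a = [1.0 - p], [1.0, -p]                # one-pole section with unity DC gain

y = x
print(f"0 poles : excess kurtosis = {excess_kurtosis(y):+.3f}")
for k in range(1, 4):
    y = lfilter(b, a, y)                   # each pass "smears" the distribution
    print(f"{k} pole(s): excess kurtosis = {excess_kurtosis(y):+.3f}")
```

The output is of course coloured rather than white after the filtering; whitening the spectrum would be a separate (also linear) step, as noted above.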

Thus... for the additive noise channel, white Gaussian additive noise
creates the "worst" channel, and all other noise spectra and
distributions can ultimately be reduced to the capacity of the white
Gaussian noise channel with sufficient linear signal processing.

--
Peter
Consultant
Indialantic By-the-Sea, FL.


"santosh nath" <santosh.nath@ntlworld.com> wrote in message
news:6afd943a.0309131351.2d798818@posting.google.com...
> Hi all, > > It is not very common to discuss Shannon's channel capacity with an > arbitrary noise. I guess we are more familiar with Shannon's channel > capacity perturbed by > white Gaussian noise. I also think there had been lot of discussions > with Shannon's limit,Shannon capacity etc. previously. My post is > slightly different from those threads. > > Shannon, in his classic "A Mathematical Theory of Communication",The > Bell System Technical Journal,Vol. 27, pp. 379-423, 623-656, July, > October, 1948. > described theorem 18 page 44. as: > > "Theorem 18: The capacity of a channel of band W perturbed by an > arbitrary noise is bounded by the inequalities > Wlog2((P+N1)/N1) <= C < = Wlog2((P+N)/N1) > > where > P = average transmitter power > N = average noise power > N1 = entropy power of the noise." > > if N=N1 for white noise it gives our well known formula > C=Wlog2((1+P)/N). > > But if N!=N1(which is possible for arbitrary noise) what could be the > upper bound of channel capacity. Is it possible ((P+N)/N1)>((P+N)/N) > or N1<N for a given P? If so, does it mean the following thing: > 1. There could exist a new maximum capacity that might exceed white > noise's maximum capacity. > > 2. Every noise distribution can lead to (including white Gaussian) > its own maximum channel capacity. For simplicity of discussion we > assumed all noise > sources are uncorrelated with information signal - I guess that does > not necessarily mean white! > > It just came in my mind - could be something which I am missing > somewhere! > I hope somebody can give more light on it. > > Regards, > Santosh
"Peter O. Brackett" <ab4bc@ix.netcom.com> wrote in message news:<jEM8b.2577$Aq2.547@newsread1.news.atl.earthlink.net>...
> Shannon's Additive White Gaussian Noise [AWGN] channel is really the
> "worst" case channel one has to consider when determining capacity
> limitations, since in a very real sense no other noise can ultimately
> be any worse than white Gaussian.
If that is the case, one could think of a new detection/coding
mechanism tuned to the optimum noise distribution to get maximum
channel capacity. Questions may arise about the following:

1. Real-life noise distributions may not be of the "optimum" type, so
one would have to think of additional processing for noise
conversion(?).

2. If constraints like non-linearity arise, one needs to think of
proper detection, etc.

Regards,
Santosh
Correction: if N = N1 for white noise, it gives our well known formula
C = Wlog2((N+P)/N).
santosh.nath@ntlworld.com (santosh nath) wrote in message news:<6afd943a.0309132323.78d74f50@posting.google.com>...
> "Peter O. Brackett" <ab4bc@ix.netcom.com> wrote in message news:<jEM8b.2577$Aq2.547@newsread1.news.atl.earthlink.net>... > > Santosh: > > > > Shannon's Additive White Gaussian Noise [AWGN] channel is really the > > "worst" case channel one has to consider when determining capacity > > limitations, > > since in a very no other noise can ultimately be any worse than white > > gaussian.. > > If that is the case, one could think of a new detection/coding > mechanism > tuned to the optimum noise distribution to get maximum channel > capacity.
The optimum noise distribution to improve channel capacity is quite
easy: no noise at all. Of course, removing the noise completely is not
an option in real-life situations, so one tries to deal the best one
can with whatever noise one faces.

Shannon tells you the worst-case scenario, given SNR and bandwidth. If
you design a system that meets the specs in a "Shannon channel",
chances are that your system will meet the specs in the real world
too.

Now, communications is a bit peripheral to my main interests, but I
*think* taking advantage of whatever peculiarities there are in the
com channel is termed "channel coding". I can't think of any reason
why people would do this, though, except possibly for reducing the
power consumption in the transmitter.

Rune
If you have a covariance constraint, then this has been solved nicely
by Suhas Diggavi in his PhD thesis a few years ago:

Suhas N. Diggavi and Thomas M. Cover, "The worst additive noise under
a covariance constraint," IEEE Trans. Inform. Theory, vol. 47,
pp. 3072-3081, November 2001.

-- 
The most rigorous proofs will be shown by vigorous handwaving.
http://www.mit.edu/~kusuma

opinion of author is not necessarily of the institute
"James K." <txdiversity@hotmail.com> wrote in message news:<3f67350c@shknews01>...
> "Rune Allnor" <allnor@tele.ntnu.no> wrote in message > news:f56893ae.0309140725.22e4b481@posting.google.com... > > Maybe I don't fully catch up with this issue. > > > Shannon tells you the worst case scenario, given SNR and bandwidth. If you > > design a system that meets the specs in a "Shannon channel", chances are > > that your system will meet the specs in the real world too. > > However, is *the worst case* really worse than the case > where the noise is significantly correlated with the source data?
Again, this is slightly peripheral to my main interests, but I have a
vague recollection that Shannon's theory is valid in the simple,
linear case, i.e. where the received signal comprises one coherent
copy of the source signal and is only messed up by added white
Gaussian noise.

In the real world there may be other effects (multipath propagation,
fading channels), but I don't think they are included in the framework
of Shannon theory. Hence my proviso ("chances are") in my first post.

Of course, you should get second opinions from people who actually
know what they are talking about in these matters...

Rune
On Wed, 17 Sep 2003, James K. wrote:

> "Rune Allnor" <allnor@tele.ntnu.no> wrote in message > news:f56893ae.0309140725.22e4b481@posting.google.com... > > Maybe I don't fully catch up with this issue. > > > Shannon tells you the worst case scenario, given SNR and bandwidth. If you > > design a system that meets the specs in a "Shannon channel", chances are > > that your system will meet the specs in the real world too. > > However, is *the worst case* really worse than the case > where the noise is significantly correlated with the source data? >
If the noise is correlated with the data, you can use it to help
estimate your message. The problem is now how to construct the best
estimator.
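A tiny sketch (mine; the correlation model n = rho*s + sqrt(1-rho^2)*w is assumed purely for illustration) of how a linear MMSE estimator exploits noise that is correlated with the signal: at equal noise power, the correlated case gives a clearly lower estimation error.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000
s = rng.normal(0.0, 1.0, M)                  # unit-power signal (assumed Gaussian)
w = rng.normal(0.0, 1.0, M)                  # independent unit-power noise source

rho = 0.6                                    # assumed signal/noise correlation weight
n_corr = rho * s + np.sqrt(1 - rho**2) * w   # noise correlated with s, unit power
n_unc  = w                                   # uncorrelated noise, unit power

for name, n in [("correlated", n_corr), ("uncorrelated", n_unc)]:
    r = s + n                                # received sample
    g = np.mean(s * r) / np.mean(r**2)       # LMMSE gain E[sr]/E[r^2], from the data
    mse = np.mean((s - g * r)**2)
    print(f"{name:12s} noise: LMMSE gain = {g:.3f}, MSE = {mse:.3f}")
```

With these (assumed) numbers the uncorrelated case lands near MSE = 0.5 while the correlated case lands near 0.2, which is the sense in which the correlation "helps".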
-- 
The most rigorous proofs will be shown by vigorous handwaving.
http://www.mit.edu/~kusuma

opinion of author is not necessarily of the institute
allnor@tele.ntnu.no (Rune Allnor) wrote in message news:<f56893ae.0309170321.2a088de6@posting.google.com>...
> "James K." <txdiversity@hotmail.com> wrote in message news:<3f67350c@shknews01>... > > "Rune Allnor" <allnor@tele.ntnu.no> wrote in message > > news:f56893ae.0309140725.22e4b481@posting.google.com... > > > > Maybe I don't fully catch up with this issue. > > > > > Shannon tells you the worst case scenario, given SNR and bandwidth. If you > > > design a system that meets the specs in a "Shannon channel", chances are > > > that your system will meet the specs in the real world too. > > > > However, is *the worst case* really worse than the case > > where the noise is significantly correlated with the source data? > > Again, this is slightly peripheral to my main interests, but I have a vague > recollection of that Shannon's theory is valid in the simple, linear case, > i.e. where the recieved signal comprises one coherent copy of the source > signal and is only messed up by added white, Gaussian noise. > > In the real world there may be other effects (multipath propagation, > fading channels) but I don't think they are included in the framework of > Shannon theory. Hence my proviso ("chances are") in my first post. > > Of course, you should get second opinions from people who actually know > what they are talking about in these matters... > > Rune
Hi Rune,

My purpose in writing the article was to get away from all the "white
noise" discussions, since that has been covered so many times and is
well known to every reader familiar with basic Shannon capacity. At
the same time, I did not intend "multipath/fading/correlation"
channels to interfere with or dilute the main discussion. ISI is
treated/removed separately by an equalizer, and only additive white
(Gaussian) noise is assumed before the channel decoder (e.g. a Viterbi
decoder), which is quite robust against white noise.

My purpose is to start from the block where ISI due to the multipath
fading channel has already been removed, and where we have also got
rid of any substantial interference (ACI/CCI) etc., i.e. the channel
is only corrupted by noise. I guess you pointed out that white noise
is the worst case and close to real-life perturbation, and that
Shannon capacity is thus based on AWGN. I fully agree and did not deny
that in my post.

My question was: can we build a noise distribution and also a receiver
which can exceed the white-noise capacity? If so, the key challenges
would be:

1. White noise to optimum noise conversion (don't confuse this with
"no noise" - it should be some noise!).

2. A receiver tuned to the optimum noise distribution.

Regards,
Santosh
On 17 Sep 2003 15:14:46 -0700, santosh.nath@ntlworld.com (santosh
nath) wrote:

I'm still not completely sure what you're asking about, but if you're
asking whether it is possible to get more information through a
channel than what is described by Shannon's single-use AWGN model,
then the answer is yes. This is what MIMO
(multiple-input-multiple-output) is essentially about: it exploits
multiple uses of the same channel by taking advantage of channel
decorrelation due to multipath propagation.

The very general idea is that a system with NxN ports (i.e., outputs x
inputs) can get a roughly Nx improvement in channel capacity. There
are caveats, and to my knowledge no one has ever built a practical
system that really fully achieves this, but that's the general idea.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
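A rough numerical sketch (mine, not Eric's) of that roughly N-fold scaling, assuming an i.i.d. Rayleigh N x N channel, equal power per transmit antenna, a fixed total SNR, and perfect channel knowledge at the receiver:

```python
import numpy as np

# Average capacity of an N x N Rayleigh-fading MIMO channel with equal-power
# transmission:  C = log2 det(I + (SNR/N) * H H^H).  The mean grows roughly
# linearly in N at fixed total SNR.

rng = np.random.default_rng(2)
snr = 10.0        # total SNR, linear (about 10 dB, assumed)
trials = 2000

for N in (1, 2, 4, 8):
    caps = []
    for _ in range(trials):
        H = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
        M = np.eye(N) + (snr / N) * (H @ H.conj().T)
        caps.append(np.linalg.slogdet(M)[1] / np.log(2))  # log2 det, numerically safe
    print(f"{N}x{N}: mean capacity ~ {np.mean(caps):5.1f} bit/s/Hz")
```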
eric.jacobsen@ieee.org (Eric Jacobsen) wrote in message news:<3f6b564a.48641277@news.west.earthlink.net>...
I am aware of MIMO, since we are currently working on 2x2 MIMO as part
of a project. I definitely meant a single link - no resource reuse
like MIMO. My belief is that there could be some such system - maybe
we have to wait for the future!

Regards,
Santosh