Reply by Tom Gardner January 3, 2016
On 03/01/16 04:15, Sharan123 wrote:
> Dear all,
>
> I am re-opening this thread to clarify another point.
>
> I have been going through lectures on wireless communication, and it is
> mentioned that the multipath effect that exists in a wireless channel is
> more problematic than random noise in the channel.
>
> So, when it comes to wireless channels, considering the equation
> C = BW log2(1 + S/N):
>
> 1) I assume that the S/N takes the multipath factor into account. So,
> essentially, N is not only channel noise but also effects due to
> multipath.
No. To be overly simplistic, the S/N is measured at the "decision point" - and multipath may have been ignored or utilised earlier in the chain. Note that the "decision point" isn't at an ADC. It can be before or after conversion to a digital signal. But I expect you realise that.
> 2) Secondly, it is mentioned that SNR can be increased using signal
> power. While this is true when it comes to channel noise, this factor
> can have diminishing returns when it comes to multipath effects, as
> increasing signal power would proportionally increase the power of the
> multipath signals also.
Basically yes, provided you choose the definition of "noise" appropriately. Don't forget that multipath is A Good Thing - without it cellular phones wouldn't be possible. Oh, wait a minute, on second thoughts multipath is a bad thing :)
Reply by Sharan123 January 3, 2016
>>2) Secondly, it is mentioned that SNR can be increased using signal power.
>>While this is true when it comes to channel noise, this factor can have
>>diminishing returns when it comes to multipath effects, as increasing
>>signal power would proportionally increase the power of the multipath
>>signals also.
>
>Multipath isn't always bad. There is such a thing as "multipath
>gain", and how the receiver handles multipath (e.g., equalization) has
>a big impact on ultimate performance.
Dear Eric,

I agree. Multipath can add constructively or destructively (a small
numeric sketch of this follows below). But from a design perspective,
one has to assume the worst case, that is, assume more of the
destructive effect).
---------------------------------------
Posted through http://www.DSPRelated.com
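[A minimal numeric sketch of that constructive/destructive behavior, in
Python with numpy; the echo gain and delay here are arbitrary
illustrative values, not from the thread.]

    import numpy as np

    # Two-ray channel: direct path plus one echo a*exp(-j*2*pi*f*tau).
    a, tau = 0.8, 1e-6          # echo gain and 1 us excess delay (assumed)
    f = np.linspace(0, 2e6, 5)  # a few frequencies across 2 MHz

    H = 1 + a * np.exp(-2j * np.pi * f * tau)
    print(np.abs(H))            # alternates between 1+a = 1.8 and 1-a = 0.2

Raising transmit power scales both rays together, so the roughly 19 dB
swing between the constructive peaks and the destructive notches does
not change; that is the diminishing return being discussed.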
Reply by Eric Jacobsen January 3, 2016
On Sat, 02 Jan 2016 22:15:47 -0600, "Sharan123" <99077@DSPRelated>
wrote:

>Dear all,
>
>I am re-opening this thread to clarify another point.
>
>I have been going through lectures on wireless communication, and it is
>mentioned that the multipath effect that exists in a wireless channel is
>more problematic than random noise in the channel.
>
>So, when it comes to wireless channels, considering the equation
>C = BW log2(1 + S/N):
>
>1) I assume that the S/N takes the multipath factor into account. So,
>essentially, N is not only channel noise but also effects due to
>multipath.
Don't assume that. There aren't universal definitions for SNR in the presence of multipath, and this is often a source of contention/discussion/confusion/argument. For systems that have to deal with multipath it is useful to get clarification of how the multipath energy is accounted for in receiver metrics like SNR.
>2) Secondly, it is mentioned that SNR can be increased using signal power.
>While this is true when it comes to channel noise, this factor can have
>diminishing returns when it comes to multipath effects, as increasing
>signal power would proportionally increase the power of the multipath
>signals also.
Multipath isn't always bad. There is such a thing as "multipath gain", and how the receiver handles multipath (e.g., equalization) has a big impact on ultimate performance.
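[To make the equalization point concrete, a minimal sketch in Python
with numpy; the two-tap channel and the zero-forcing equalizer are
illustrative assumptions, not any particular receiver.]

    import numpy as np

    rng = np.random.default_rng(0)

    # BPSK symbols through a two-tap multipath channel h = [1, 0.9]:
    # the delayed echo nearly closes the eye at the decision point.
    symbols = rng.choice([-1.0, 1.0], size=1000)
    h = np.array([1.0, 0.9])
    received = np.convolve(symbols, h)[:len(symbols)]

    # Zero-forcing equalizer: truncate the impulse response of 1/H(z);
    # for H(z) = 1 + 0.9 z^-1 that is (-0.9)^n.
    eq = (-h[1]) ** np.arange(64)
    equalized = np.convolve(received, eq)[:len(symbols)]

    # Worst-case distance from the BPSK decision threshold:
    print(np.abs(received).min())   # ~0.1: almost no noise margin left
    print(np.abs(equalized).min())  # ~1.0: margin restored

A zero-forcing inverse like this amplifies noise near the channel's
spectral null, which is one reason how the receiver handles multipath
(MMSE equalization, diversity combining, etc.) matters so much for the
ultimate performance.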
>Thanks a lot ...
>---------------------------------------
>Posted through http://www.DSPRelated.com
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
Reply by Sharan123 January 3, 2016
Dear all,

I am re-opening this thread to clarify another point.

I have been going through lectures on wireless communication, and it is
mentioned that the multipath effect that exists in a wireless channel is
more problematic than random noise in the channel.

So, when it comes to wireless channels, considering the equation
C = BW log2(1 + S/N):

1) I assume that the S/N takes the multipath factor into account. So,
essentially, N is not only channel noise but also effects due to
multipath.

2) Secondly, it is mentioned that SNR can be increased using signal power.
While this is true when it comes to channel noise, this factor can have
diminishing returns when it comes to multipath effects, as increasing
signal power would proportionally increase the power of the multipath
signals also.
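
[For concreteness, a minimal sketch in Python of evaluating the formula
itself; the bandwidth and SNR values are arbitrary.]

    import math

    def shannon_capacity(bw_hz, snr_linear):
        # C = BW * log2(1 + S/N), in bits per second
        return bw_hz * math.log2(1.0 + snr_linear)

    snr = 10.0 ** (20.0 / 10.0)        # 20 dB -> S/N = 100
    print(shannon_capacity(1e6, snr))  # ~6.66 Mbit/s over a 1 MHz channel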

Thanks a lot ...
---------------------------------------
Posted through http://www.DSPRelated.com
Reply by Eric Jacobsen December 3, 2015
On Thu, 3 Dec 2015 10:46:08 -0800 (PST), RichD
<r_delaney2001@yahoo.com> wrote:

>On November 27, Eric Jacobsen wrote:
>>>>> C = BW . log2(1+S/N)
>>>>> So, where is the encoding method
>>>>> taking into account in the above equation?
>>
>>>> Encoding (and modulation) is only mentioned in passing.
>>>> The actual fundamental measure is the number of discrete regions
>>>> that can be packed into the space that you can construct
>>>> with a given bandwidth and a given signal to noise ratio over
>>>> some large amount of time. Shannon used the word "spheres".
>>
>>> Yes. However, there remains an enigma here.
>>> Shannon discussed both the discrete and continuous channel.
>>> For the former, he suggested the obvious solution:
>>> error correction via redundancy.
>>
>>> Regarding the latter, Shannon framed it
>>> geometrically, as you alluded, to produce a mathematical
>>> work of art. The messages reside in an N-dimensional space.
>>> Then, it becomes a packing problem: pack the hyper-
>>> spheres optimally into that space, in a minimax sense,
>>> to achieve the capacity (or as close as feasible). Each
>>> modulated waveform transmitted, representing a particular
>>> message, then fits into a unique sphere.
>>
>>> Seen this way, Shannon is talking about analog waveforms
>>> and an analog channel. Even though the information is
>>> measured in digital bits, the encoding is analog.
>>
>> In this context terminology is fairly important. I'm not sure
>> "encoding" is the right word here, as the distinction between
>> discrete and continuous wrt the signal carries many subtleties in
>> this context.
>
>I mean, a block of bits presented to the channel,
>to be transmitted, is mapped to a unique analog waveform.
>
>In this portion of his paper, Shannon says nothing
>about digital error-correction redundancy.
Yup, and since it is now known that the FEC/ECC plays a huge part in
achieving capacity, distinguishing which "encoding" is meant is useful.
These days, usually, the waveform is the "modulation", how the bits are
assigned within a constellation is "mapping", the FEC/ECC is the
"coding", etc. Terminology isn't hard and fast in this industry, which
is often a problem.
>>> But in practice, engineers simply source encode
>>> with redundancy, then send those bit sequences,
>>> with a variety of modulation schemes. No one has
>>> tried to implement Shannon's idea directly: i.e.,
>>> encode blocks of source bits into these optimally
>>> separated hyper-spheres.
>>
>> I think you mean channel coding rather than source coding,
>> as they're very different and do different things.
>>
>> The sphere thing is partly just a viewpoint. Managing distance
>> spectrums in algebraic codes is sort of a sphere-packing exercise,
>> and the existence of capacity-approaching codes suggests that the
>> sphere-packing is being done efficiently, even if the development of
>> the codes weren't done with that in mind.
>
>That may be so, but it's certainly a roundabout way to
>get there, almost miraculous.
Well, it took decades to figure it out, years to refine it, and research continues, so I wouldn't call it miraculous.
>It would be theoretically significant, a breakthrough,
>to show that these algebraic methods do indeed address
>the sphere-packing framework, that they are in fact
>equivalent (asymptotically). But perhaps someone has
>already done this, and I slept through it -
Sorry, I didn't mean that there were capacity-approaching algebraic
codes, although there may be (but none I can name off the top of my
head). I meant to say, separately, that designing an algebraic code
with good distance-spectrum properties is like a sphere-packing
exercise, and, also, that the existence of capacity-approaching codes
(separate from the algebraic code idea) suggests that they are
efficiently sphere-packed.

For the case of the algebraic codes, it is useful to try to have most
codewords at similar Hamming distances from each other (or greater), so
that a few codewords at or near a minimum distance that is smaller than
the more common distances don't dominate the code performance. From
this standpoint one is trying to make all the spheres (where the
spheres are codewords, with a radius of the Hamming distance to the
nearest spheres) roughly the same size, hopefully raising the overall
code performance in the process.

Capacity-approaching codes tend to be much more complex in analyzing
distances, even though they're systematic codes. The design criteria
and methodologies for things like Turbo Codes or LDPCs, at least not
that I've ever noticed, don't take into account sphere packing in any
significant sense beyond analyzing error distributions. There have been
some good code designs that were generated either automatically or
heuristically just by looking at performance and tweaking things until
it worked. Claude Berrou had incredible intuition in designing the
first Turbo Codes, as it took a fair amount of time (and other
researchers) to sort out why it worked, and that he'd picked
constituent codes with the right properties before knowing what the
right properties were.

Remember, many random codes are good codes, just some are better than
others. ;)

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
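[As a concrete illustration of the distance-spectrum idea above, a
minimal sketch in Python with numpy; the (7,4) Hamming code is a
standard example, but this particular enumeration is just for
illustration.]

    import itertools
    import numpy as np

    # Generator matrix of the (7,4) Hamming code, systematic form.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    # For a linear code, the Hamming-weight distribution over all 2^4
    # codewords is the distance spectrum (distances to the zero word).
    weights = {}
    for msg in itertools.product([0, 1], repeat=4):
        cw = np.dot(msg, G) % 2
        w = int(cw.sum())
        weights[w] = weights.get(w, 0) + 1

    print(sorted(weights.items()))  # [(0, 1), (3, 7), (4, 7), (7, 1)]

The nonzero weights cluster tightly at 3 and 4 with minimum distance 3,
which is the "spheres of roughly the same size" property described
above.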
Reply by RichD December 3, 2015
On November 27, Eric Jacobsen wrote:
>>>> C = BW . log2(1+S/N)
>>>> So, where is the encoding method
>>>> taking into account in the above equation?
>>
>>> Encoding (and modulation) is only mentioned in passing.
>>> The actual fundamental measure is the number of discrete regions
>>> that can be packed into the space that you can construct
>>> with a given bandwidth and a given signal to noise ratio over
>>> some large amount of time. Shannon used the word "spheres".
>>
>> Yes. However, there remains an enigma here.
>> Shannon discussed both the discrete and continuous channel.
>> For the former, he suggested the obvious solution:
>> error correction via redundancy.
>>
>> Regarding the latter, Shannon framed it
>> geometrically, as you alluded, to produce a mathematical
>> work of art. The messages reside in an N-dimensional space.
>> Then, it becomes a packing problem: pack the hyper-
>> spheres optimally into that space, in a minimax sense,
>> to achieve the capacity (or as close as feasible). Each
>> modulated waveform transmitted, representing a particular
>> message, then fits into a unique sphere.
>>
>> Seen this way, Shannon is talking about analog waveforms
>> and an analog channel. Even though the information is
>> measured in digital bits, the encoding is analog.
>
> In this context terminology is fairly important. I'm not sure
> "encoding" is the right word here, as the distinction between
> discrete and continuous wrt the signal carries many subtleties in
> this context.
I mean, a block of bits presented to the channel,
to be transmitted, is mapped to a unique analog waveform.

In this portion of his paper, Shannon says nothing
about digital error-correction redundancy.
>> But in practice, engineers simply source encode
>> with redundancy, then send those bit sequences,
>> with a variety of modulation schemes. No one has
>> tried to implement Shannon's idea directly: i.e.,
>> encode blocks of source bits into these optimally
>> separated hyper-spheres.
>
> I think you mean channel coding rather than source coding,
> as they're very different and do different things.
>
> The sphere thing is partly just a viewpoint. Managing distance
> spectrums in algebraic codes is sort of a sphere-packing exercise,
> and the existence of capacity-approaching codes suggests that the
> sphere-packing is being done efficiently, even if the development of
> the codes weren't done with that in mind.
That may be so, but it's certainly a roundabout way to
get there, almost miraculous.

It would be theoretically significant, a breakthrough,
to show that these algebraic methods do indeed address
the sphere-packing framework, that they are in fact
equivalent (asymptotically). But perhaps someone has
already done this, and I slept through it -

--
Rich
Reply by robert bristow-johnson November 30, 2015
On Saturday, November 28, 2015 at 6:41:23 PM UTC-5, Steve Pope wrote:
> robert bristow-johnson <rbj@audioimagination.com> wrote:
>
> >BTW, this is no *proof*, but a simple exercise (or "thought experiment")
> >that will come up with the
> >
> > C = BW log2(1 + S/N)
> >
> >equation, is imagining perfectly Huffman compressed data organized into
> >I-bit words (so each of the 2^I possible words have exactly the same
> >probability of 2^(-I) and have exactly I bits of information) and
> >transmitting those words out an ideal D/A converter and being received
> >by an ideal A/D converter.
> >
> >so far the bits are transmitted with no errors. now add a *uniform*
> >p.d.f. noise signal to the analog signal (in the "channel") before the
> >ideal A/D. if the width of the uniform p.d.f. error signal is \Delta,
> >the spacing between adjacent levels of the D/A or A/D, then you've added
> >as much random noise as you can without changing what is received out of
> >the A/D converter.
> >
> >you will see that the situation so described fits the channel capacity
> >equation exactly.
>
> Excellent. Thanks.
i realize now that i re-used the symbol "N" (which was originally the
noise energy in the Shannon channel capacity equations) to be the
number of bits per word, which above i renamed "I". that can cause
confusion if one actually does the math, because (with the correct
replacement) we can see that:

   noise power:                 N = (\Delta)^2/12

   signal power + noise power:  S+N = 2^(2I) * N

   channel capacity:            C = I times the word rate
                                  (I times the sample rate of the D/A and A/D)

and remember that BW is the *single-sided* bandwidth of the channel.
at baseband you have a channel with signal-to-noise ratio S/N from -BW
through DC to +BW. if the S/N is not constant, the channel capacity
formula becomes

        BW
   C = integral{ log2(1 + S(f)/N(f)) df }
        0

and it works in both directions (which tells us the theoretical maximum
S/N we can expect from noise-shaped quantizers such as sigma-delta
converters, given the bit rate bottleneck, which is C. this is called
the Gerzon/Craven limit in the audio world).

r b-j
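[A minimal numeric check of that bookkeeping, in Python; the specific
values of I, \Delta, and BW are arbitrary.]

    import math

    I = 4          # bits per word
    delta = 0.25   # spacing between adjacent D/A (and A/D) levels
    BW = 1000.0    # single-sided channel bandwidth, Hz
    fs = 2 * BW    # word rate = Nyquist sample rate

    N = delta**2 / 12.0      # power of uniform noise of width delta
    S = (2**(2*I) - 1) * N   # from S + N = 2^(2I) * N

    C = BW * math.log2(1 + S/N)
    print(C, I * fs)         # both print 8000.0 bits/s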
Reply by Steve Pope November 28, 2015
robert bristow-johnson  <rbj@audioimagination.com> wrote:

>BTW, this is no *proof*, but a simple exercise (or "thought experiment")
>that will come up with the
>
> C = BW log2(1 + S/N)
>
>equation is imagining perfectly Huffman compressed data organized into
>N-bit words (so each of the 2^N possible words have exactly the same
>probability of 2^(-N) and have exactly N bits of information) and
>transmitting those words out an ideal D/A converter and being received
>by an ideal A/D converter.
>
>so far the bits are transmitted with no errors. now add a *uniform*
>p.d.f. noise signal to the analog signal (in the "channel") before the
>ideal A/D. if the width of the uniform p.d.f. error signal is \Delta,
>the spacing between adjacent levels of the D/A or A/D, then you've added
>as much random noise as you can without changing what is received out of
>the A/D converter.
>
>you will see that the situation so described fits the channel capacity
>equation exactly.
Excellent. Thanks.

Steve
Reply by robert bristow-johnson November 28, 2015
On Thursday, November 26, 2015 at 11:29:16 PM UTC-5, RichD wrote:
> On November 18, Tim Wescott wrote:
> >> C = BW . log2(1+S/N)
> >> So, where is the encoding method
> >> taking into account in the above equation?
> >
> > Encoding (and modulation) is only mentioned in passing.
> > The actual fundamental measure is the number of discrete regions
> > that can be packed into the space that you can construct
> > with a given bandwidth and a given signal to noise ratio over
> > some large amount of time. Shannon used the word "spheres".
>
> Yes. However, there remains an enigma here.
>
> Shannon discussed both the discrete and continuous channel.
> For the former, he suggested the obvious solution:
> error correction via redundancy.
>
> Regarding the latter, Shannon framed it
> geometrically, as you alluded, to produce a mathematical
> work of art. The messages reside in an N-dimensional space.
> Then, it becomes a packing problem: pack the hyper-
> spheres optimally into that space, in a minimax sense,
> to achieve the capacity (or as close as feasible). Each
> modulated waveform transmitted, representing a particular
> message, then fits into a unique sphere.
>
> Seen this way, Shannon is talking about analog waveforms
> and an analog channel. Even though the information is
> measured in digital bits, the encoding is analog.
>
> But in practice, engineers simply source encode
> with redundancy, then send those bit sequences,
> with a variety of modulation schemes. No one has
> tried to implement Shannon's idea directly: i.e.,
> encode blocks of source bits into these optimally
> separated hyper-spheres.
BTW, this is no *proof*, but a simple exercise (or "thought experiment")
that will come up with the

   C = BW log2(1 + S/N)

equation is imagining perfectly Huffman compressed data organized into
N-bit words (so each of the 2^N possible words have exactly the same
probability of 2^(-N) and have exactly N bits of information) and
transmitting those words out an ideal D/A converter and being received
by an ideal A/D converter.

so far the bits are transmitted with no errors. now add a *uniform*
p.d.f. noise signal to the analog signal (in the "channel") before the
ideal A/D. if the width of the uniform p.d.f. error signal is \Delta,
the spacing between adjacent levels of the D/A or A/D, then you've added
as much random noise as you can without changing what is received out of
the A/D converter.

you will see that the situation so described fits the channel capacity
equation exactly.
Reply by Eric Jacobsen November 27, 2015
On Thu, 26 Nov 2015 20:29:08 -0800 (PST), RichD
<r_delaney2001@yahoo.com> wrote:

>On November 18, Tim Wescott wrote:
>>> C = BW . log2(1+S/N)
>>> So, where is the encoding method
>>> taking into account in the above equation?
>>
>> Encoding (and modulation) is only mentioned in passing.
>> The actual fundamental measure is the number of discrete regions
>> that can be packed into the space that you can construct
>> with a given bandwidth and a given signal to noise ratio over
>> some large amount of time. Shannon used the word "spheres".
>
>Yes. However, there remains an enigma here.
>
>Shannon discussed both the discrete and continuous channel.
>For the former, he suggested the obvious solution:
>error correction via redundancy.
>
>Regarding the latter, Shannon framed it
>geometrically, as you alluded, to produce a mathematical
>work of art. The messages reside in an N-dimensional space.
>Then, it becomes a packing problem: pack the hyper-
>spheres optimally into that space, in a minimax sense,
>to achieve the capacity (or as close as feasible). Each
>modulated waveform transmitted, representing a particular
>message, then fits into a unique sphere.
>
>Seen this way, Shannon is talking about analog waveforms
>and an analog channel. Even though the information is
>measured in digital bits, the encoding is analog.
In this context terminology is fairly important. I'm not sure
"encoding" is the right word here, as the distinction between
discrete and continuous wrt the signal carries many subtleties in
this context. Similarly, the word "encoding" has multiple but
distinct meanings.
>But in practice, engineers simply source encode
>with redundancy, then send those bit sequences,
>with a variety of modulation schemes. No one has
>tried to implement Shannon's idea directly: i.e.,
>encode blocks of source bits into these optimally
>separated hyper-spheres.
I think you mean channel coding rather than source coding, as they're
very different and do different things.

The sphere thing is partly just a viewpoint. Managing distance
spectrums in algebraic codes is sort of a sphere-packing exercise, and
the existence of capacity-approaching codes suggests that the
sphere-packing is being done efficiently, even if the development of
the codes weren't done with that in mind.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com