DSPRelated.com
Forums

Help w/ PSK, Nyquist filtering etc

Started by Shimbun May 13, 2005
Hi,

Some help with explaining some digital communications stuff would be much
appreciated. This is not really my field and I'm having a hard time
understanding some concepts that are probably pretty basic.

It's not really DSP related as such; it is more fundamental.

I think I understand the need for Nyquist filtering to reduce bandwidth
while maintaining the ability to unambiguously detect the transmitted
symbols in the receiver. However, my understanding only goes as far as
baseband PAM, and I can't quite grasp how this filtering is done for
carrier-modulated systems (except ASK).

Specifically, Proakis gives the following expression for M-PSK:

s(t) = Re[g(t)e^{j*2*pi*(m-1)/M} * e^{j*2*pi*f_c*t}]
     = g(t) * cos[2*pi*f_c*t + 2*pi*(m-1)/M]

where f_c is the carrier frequency, m=1, 2, ..., M are the M possible
phases, and g(t) is the pulse shape.
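
(Just to convince myself that those two lines really are the same thing,
here is a quick numerical check; the rectangular g(t), the carrier
frequency and the sample rate below are arbitrary choices, only for
illustration.)

import numpy as np

fs, fc, M, m = 1000.0, 50.0, 4, 2        # sample rate, carrier, M-PSK, symbol index
t = np.arange(0, 1.0, 1.0 / fs)          # one symbol period of 1 s
g = np.ones_like(t)                      # pulse shape (rectangular here, could be anything)
theta = 2 * np.pi * (m - 1) / M          # the information-bearing phase

# Complex-baseband form: Re[ g(t) e^{j theta} e^{j 2 pi fc t} ]
s_complex = np.real(g * np.exp(1j * theta) * np.exp(1j * 2 * np.pi * fc * t))

# Real-carrier form: g(t) cos(2 pi fc t + theta)
s_real = g * np.cos(2 * np.pi * fc * t + theta)

print(np.allclose(s_complex, s_real))    # True: the two expressions are identical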

All good and well, except here's what I don't understand:

Why is the pulse shape function g(t) applied in that way? It will
shape the amplitude but I would have thought it should rather affect
the phase. As far as I know, abrupt changes in either amplitude,
phase, or frequency result in larger bandwidth. Hence why in baseband
PAM, rectangular pulses require a wider spectrum than Nyquist pulses.
Since PSK modulates the phase rather than the amplitude, it would make
sense to me if g(t) were applied to the second term inside the cosine
expression, but this is apparently not the case? What am I missing?

In addition, if the amplitude is changed by way of g(t) as above,
wouldn't this induce a dual sideband as in AM? Thus actually reducing
spectral efficiency rather than improving it?!

Something is obviously very wrong in my understanding and I would be
very glad if someone could help clarify things for me.

Thanks!


PS. If someone could also explain linear vs. non-linear modulations I
would be even happier. Proakis states that "Linearity of a modulation
method requires that the principle of superposition applies in the
mapping of the digital sequence into successive waveforms." This is too
terse for me to grasp; I will need some expounding...


		
Shimbun wrote:
> [snip]
>
> Specifically, Proakis gives the following expression for M-PSK:
>
> s(t) = Re[g(t)e^{j*2*pi*(m-1)/M} * e^{j*2*pi*f_c*t}]
>      = g(t) * cos[2*pi*f_c*t + 2*pi*(m-1)/M]
>
> where f_c is the carrier frequency, m=1, 2, ..., M are the M possible
> phases, and g(t) is the pulse shape.
>
> All good and well, except here's what I don't understand:
>
> Why is the pulse shape function g(t) applied in that way? It will
> shape the amplitude but I would have thought it should rather affect
> the phase.
Either Proakis or your transcription is lacking something. I can't remember what the M- in M-PSK means, but when you represent PSK as a bunch of pulses you need to make one pulse for each symbol, so your s(t) would be correct if there were a chain of them coming out at the symbol rate.
> As far as I know, abrupt changes in either amplitude,
> phase, or frequency result in larger bandwidth. Hence why in baseband
> PAM, rectangular pulses require a wider spectrum than Nyquist pulses.
> Since PSK modulates the phase rather than the amplitude, it would make
> sense to me if g(t) were applied to the second term inside the cosine
> expression, but this is apparently not the case? What am I missing?
The fact that it's coming out in a chain, I think. When you demodulate simple binary PSK it's fairly easy to think of it as consisting of a bunch of individual rectangular pulses, each one representing a bit, that just happen to be jammed together and phase coherent. Add in the shaping and they overlap, but otherwise stay phase coherent.
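
Here is a minimal sketch of that "chain" picture, assuming an ideal
raised-cosine Nyquist pulse with beta = 0.35 (my choice; nothing in the
thread fixes the pulse): each bit contributes one amplitude-scaled copy
of the pulse, the copies overlap, the carrier phase never jumps, and yet
the original +/-1 values come back untouched at the symbol instants.

import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised-cosine (Nyquist) pulse; crude guard against the removable
    singularity at t = +/- T/(2*beta), which this grid never hits exactly."""
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)
    return np.sinc(t / T) * np.cos(np.pi * beta * t / T) / denom

T, sps = 1.0, 20                           # symbol period and samples per symbol
bits = np.array([1, -1, -1, 1, 1, -1])     # antipodal BPSK symbols
t = np.arange(-8 * T, (len(bits) + 8) * T, T / sps)

# Baseband: a chain of overlapping pulses, one per bit, all phase coherent
baseband = sum(b * raised_cosine(t - k * T) for k, b in enumerate(bits))

# Passband BPSK: the shaped baseband just scales the carrier's amplitude
fc = 5.0 / T
s = baseband * np.cos(2 * np.pi * fc * t)

# Zero ISI: sampling the baseband at the symbol instants returns the bits exactly
idx = [int(np.argmin(np.abs(t - k * T))) for k in range(len(bits))]
print(np.round(baseband[idx], 3))          # [ 1. -1. -1.  1.  1. -1.]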
> In addition, if the amplitude is changed by way of g(t) as above,
> wouldn't this induce a dual sideband as in AM? Thus actually reducing
> spectral efficiency rather than improving it?!
PSK is a dual sideband mode. It doesn't "reduce" spectral efficiency so
much as "fail to improve it". Trying to take out one of the sidebands
results in grief, if I recall correctly -- not least because with
rectangular pulses applying the Hilbert transform to the baseband
signals results in some inconvenient amplitude infinities.
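
For what it's worth, those inconvenient amplitudes are easy to see
numerically. A rough sketch (my own illustration, nothing from the
thread): form the analytic signal of a rectangular-pulse BPSK baseband
with scipy's hilbert and look at its envelope, which spikes at every
polarity transition; the continuous-time Hilbert transform of a
rectangular edge actually diverges there.

import numpy as np
from scipy.signal import hilbert

sps = 200                                     # samples per bit
bits = np.array([1, -1, 1, 1, -1, 1, -1, -1])
x = np.repeat(bits, sps).astype(float)        # rectangular-pulse BPSK baseband

# Analytic signal x + j*H{x}; an SSB transmitter would keep only its positive frequencies
xa = hilbert(x)
envelope = np.abs(xa)

# Roughly 1 in the middle of a bit, but much larger at the transitions,
# and the peaks keep growing as the time resolution is refined.
print(envelope[sps // 2], envelope.max())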
> [snip]
>
> PS. If someone could also explain linear vs. non-linear modulations I
> would be even happier. Proakis states that "Linearity of a modulation
> method requires that the principle of superposition applies in the
> mapping of the digital sequence into successive waveforms." This is too
> terse for me to grasp; I will need some expounding...
Take a system h that operates on a signal x to generate another signal
y, such that y(t) = h(x(t)). If x(t) = x1(t) + x2(t) then superposition
holds if y(t) = h(x1(t) + x2(t)) = h(x1(t)) + h(x2(t)).

So it holds in the BPSK system that I was babbling about above, but it
doesn't necessarily* hold in an FSK system because the value of one bit
might color the phase of all succeeding bits.

* there are FSK systems, such as MSK, where it does hold.

-------------------------------------------
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
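
A small numerical restatement of that test (a sketch only; the
rectangular pulses, the 0.35 frequency deviation and the sample rate are
arbitrary choices of mine): split a symbol sequence into two zero-padded
halves and compare modulating the sum against summing the two modulated
signals. The additive, PSK-style mapping passes, the phase-accumulating
FSK mapping does not.

import numpy as np

T, sps = 1.0, 100
fs = sps / T

def psk_baseband(symbols):
    """Linear mapping: each symbol just scales its own (rectangular) pulse."""
    t = np.arange(len(symbols) * sps) / fs
    s = np.zeros(len(t), dtype=complex)
    for k, a in enumerate(symbols):
        s += a * ((t >= k * T) & (t < (k + 1) * T))
    return s

def cpfsk_baseband(symbols, fdev=0.35):
    """Continuous-phase FSK: the phase is the running integral of the symbols."""
    inst_freq = fdev * np.repeat(symbols, sps)       # instantaneous frequency offset
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs    # phase accumulates across symbols
    return np.exp(1j * phase)

x1 = np.array([1, -1, 0, 0], dtype=float)            # first half, zero-padded
x2 = np.array([0, 0, 1, 1], dtype=float)             # second half, zero-padded

for name, modulate in [("PSK-style", psk_baseband), ("CPFSK", cpfsk_baseband)]:
    lhs = modulate(x1 + x2)                          # modulate the summed sequence
    rhs = modulate(x1) + modulate(x2)                # sum the two modulated signals
    print(name, "superposition holds:", np.allclose(lhs, rhs))
# Expected: PSK-style True, CPFSK False (earlier bits color the phase of later ones).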
Hello again,

I, of course, gained some understanding almost right after posting
my question... For example:

In a BPSK IQ-diagram, the constellation points will be located 180
degrees apart, and obviously, shaping the amplitude will allow for
smooth transition between the points.

I didn't think in terms of IQ-diagrams before so I didn't see this
relationship between amplitude and phase....

Still, sidebands should show up, right? And how does it work with
QPSK (and other MPSK with M > 2)?



		
>Shimbun wrote:
[This is me again but the web interface had some problems so I had to
re-register... Thanks for the answers!]
>> [snip: the original question and the M-PSK expression]
>
> Either Proakis or your transcription is lacking something. I can't
> remember what the M- in M-PSK means, but when you represent PSK as a
> bunch of pulses you need to make one pulse for each symbol, so your
> s(t) would be correct if there were a chain of them coming out at the
> symbol rate.
Yes, my fault. Proakis gives s_m(t) = [...], i.e. the expression is for
one symbol, not a train of symbols.
>> [snip: why isn't g(t) applied to the phase term inside the cosine?]
>
> The fact that it's coming out in a chain, I think. When you demodulate
> simple binary PSK it's fairly easy to think of it as consisting of a
> bunch of individual rectangular pulses, each one representing a bit,
> that just happen to be jammed together and phase coherent. Add in the
> shaping and they overlap, but otherwise stay phase coherent.
OK. But even though I can now see how changing the amplitude according
to a Nyquist shape while keeping the phase in discrete steps can work in
the case of BPSK, I'm having trouble understanding how this would work
in the case of QPSK etc. An IQ-diagram of a Nyquist-filtered QPSK system
with different roll-off factors can be seen at:

http://www.rootshell.be/~nocturne/tmp/QPSK.jpg

How does this work if one does not also smooth the phase transition?

I think I misunderstand something fundamental here...
>> In addition, if the amplitude is changed by way of g(t) as above,
>> wouldn't this induce a dual sideband as in AM? Thus actually reducing
>> spectral efficiency rather than improving it?!
>
> PSK is a dual sideband mode. It doesn't "reduce" spectral efficiency so
> much as "fail to improve it".
Right, understood.
> Trying to take out one of the sidebands results in grief, if I recall
> correctly -- not least because with rectangular pulses applying the
> Hilbert transform to the baseband signals results in some inconvenient
> amplitude infinities.
OK.
>> [snip: the PS about linear vs. non-linear modulations]
>
> Take a system h that operates on a signal x to generate another signal
> y, such that y(t) = h(x(t)). If x(t) = x1(t) + x2(t) then superposition
> holds if y(t) = h(x1(t) + x2(t)) = h(x1(t)) + h(x2(t)). So it holds in
> the BPSK system that I was babbling about above, but it doesn't
> necessarily* hold in an FSK system because the value of one bit might
> color the phase of all succeeding bits.
See, I know what superposition means and I know PSK is linear and FSK is
supposed to be non-linear, and I can follow your argument about h(t),
x(t) etc. and when I apply it to the PSK equation I can see that the
superposition holds. The problem is that when I apply it to FSK, the
same thing happens and it appears superposition holds in this case too!

I must not understand how to apply it correctly, so is there any chance
you could walk me through the process?
>* there are FSK systems, such as MSK, where it does hold.
Wait, MSK is linear? I'm pretty sure I have seen it referred to as
non-linear?
[Once again realizing something shortly after posting...]

>> The fact that it's coming out in a chain, I think. When you demodulate
>> simple binary PSK it's fairly easy to think of it as consisting of a
>> bunch of individual rectangular pulses, each one representing a bit,
>> that just happen to be jammed together and phase coherent. Add in the
>> shaping and they overlap, but otherwise stay phase coherent.
I think I have figured this out now... I wasn't considering the chain
correctly before. As you point out, this is the key.
On Sun, 15 May 2005 07:31:19 -0500, "MouIkkai" <blink@metawire.org>
wrote:

> Hello again,
>
> I, of course, gained some understanding almost right after posting
> my question... For example:
>
> In a BPSK IQ-diagram, the constellation points will be located 180
> degrees apart, and obviously, shaping the amplitude will allow for
> smooth transition between the points.
>
> I didn't think in terms of IQ-diagrams before so I didn't see this
> relationship between amplitude and phase....
>
> Still, sidebands should show up, right? And how does it work with
> QPSK (and other MPSK with M > 2)?
It's the same, only you apply the filter in both the I and Q dimensions.
If you do that (correctly) you can maintain zero-ISI detectability for
M-PSK and M-QAM. Pretty much any orthogonal modulation, really... ;)

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
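
A sketch of what applying the filter in both the I and Q dimensions buys
you (my own example, with an arbitrary raised-cosine pulse and random
data): shape the real and imaginary parts of the symbol stream with the
same Nyquist pulse, and sampling the filtered complex baseband at the
symbol instants lands exactly back on the QPSK constellation points,
i.e. zero ISI.

import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)   # guard the removable singularity
    return np.sinc(t / T) * np.cos(np.pi * beta * t / T) / denom

T, sps = 1.0, 20
rng = np.random.default_rng(0)
symbols = (rng.integers(0, 2, 16) * 2 - 1) + 1j * (rng.integers(0, 2, 16) * 2 - 1)

t = np.arange(-8 * T, (len(symbols) + 8) * T, T / sps)

# The same Nyquist pulse shapes I (real part) and Q (imaginary part) independently
baseband = sum(c * raised_cosine(t - k * T) for k, c in enumerate(symbols))

# Zero ISI: at t = kT the filtered complex baseband is exactly the k'th constellation point
idx = [int(np.argmin(np.abs(t - k * T))) for k in range(len(symbols))]
print(np.allclose(baseband[idx], symbols))                # True

In passband this is just s(t) = I(t)*cos(2*pi*f_c*t) - Q(t)*sin(2*pi*f_c*t),
so the shaping once again only touches the amplitudes of the two
quadrature carriers.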
MouIkkai wrote:


> OK. But even though I can now see how changing the amplitude according
> to a Nyquist shape while keeping the phase in discrete steps can work
> in the case of BPSK, I'm having trouble understanding how this would
> work in the case of QPSK etc. An IQ-diagram of a Nyquist-filtered QPSK
> system with different roll-off factors can be seen at:
>
> http://www.rootshell.be/~nocturne/tmp/QPSK.jpg
>
> How does this work if one does not also smooth the phase transition?
>
> I think I misunderstand something fundamental here...
In the case of bandwidth-limited PSK with sharp phase transitions the
amplitude goes to zero whenever the phase goes through a discontinuity.
With no amplitude at the phase discontinuity, no high-frequency energy
is generated.
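
A quick check of that for the simplest case, a single 180-degree BPSK
transition shaped with a symmetric Nyquist pulse (my own sketch; the
pulse and roll-off are arbitrary): halfway between the two symbols the
two pulses cancel exactly, so the envelope is zero at the very instant
the phase flips. (For QPSK the dip is deep but generally not exactly
zero, since I and Q need not cross zero at the same instant.)

import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)
    return np.sinc(t / T) * np.cos(np.pi * beta * t / T) / denom

T = 1.0
t = np.arange(-4 * T, 5 * T, T / 100)

# A +1 symbol at t = 0 followed by a -1 symbol at t = T: a 180-degree BPSK transition
baseband = raised_cosine(t) - raised_cosine(t - T)

mid = int(np.argmin(np.abs(t - T / 2)))               # halfway between the two symbols
print(abs(baseband[mid]))                             # ~0: the envelope passes through zero
print(baseband[mid - 5] > 0, baseband[mid + 5] < 0)   # sign flip = 180-degree phase jump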
-snip-
>>> [snip: the PS about linear vs. non-linear modulations]
>>
>> Take a system h that operates on a signal x to generate another signal
>> y, such that y(t) = h(x(t)). If x(t) = x1(t) + x2(t) then superposition
>> holds if y(t) = h(x1(t) + x2(t)) = h(x1(t)) + h(x2(t)). So it holds in
>> the BPSK system that I was babbling about above, but it doesn't
>> necessarily* hold in an FSK system because the value of one bit might
>> color the phase of all succeeding bits.
>
> See, I know what superposition means and I know PSK is linear and FSK
> is supposed to be non-linear, and I can follow your argument about
> h(t), x(t) etc. and when I apply it to the PSK equation I can see that
> the superposition holds. The problem is that when I apply it to FSK,
> the same thing happens and it appears superposition holds in this case
> too!
>
> I must not understand how to apply it correctly, so is there any chance
> you could walk me through the process?
In the case of PSK (B-, Q- or M-, you choose) the contribution of the
n'th symbol is always the same thing for the same symbol value, and it's
always additive. In the general case for FSK the n'th symbol causes the
phase of the signal to rotate through a certain amount, starting from
wherever it left off at the end of the (n-1)'th bit-time. Thus
superposition (in general) fails to hold.

Note that differential PSK, where the information is encoded as either
the presence or absence of a phase shift, is nonlinear by the above
definition, but it's just a little nonlinear lump on an otherwise linear
system.
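
To make "starting from wherever it left off" concrete, here is a small
sketch (mine; the rectangular pulses, the 0.35 deviation and the data
are arbitrary): flip only the first symbol of a sequence and see which
symbol intervals of the output change. For the additive PSK-style
mapping only that one interval changes; for continuous-phase FSK every
later interval changes too, because the phase offset left behind by the
flipped bit never goes away.

import numpy as np

T, sps = 1.0, 100
fs = sps / T

def psk_baseband(symbols):
    t = np.arange(len(symbols) * sps) / fs
    s = np.zeros(len(t), dtype=complex)
    for k, a in enumerate(symbols):
        s += a * ((t >= k * T) & (t < (k + 1) * T))    # each symbol only touches its own slot
    return s

def cpfsk_baseband(symbols, fdev=0.35):
    # fdev chosen so the modulation index 2*fdev*T is not an integer; otherwise
    # the leftover phase would be a multiple of 2*pi and the memory invisible
    inst_freq = fdev * np.repeat(symbols, sps)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs      # memory: phase carries over between bits
    return np.exp(1j * phase)

a = np.array([1, -1, 1, 1, -1], dtype=float)
b = a.copy()
b[0] = -1                                              # flip only the first symbol

for name, modulate in [("PSK-style", psk_baseband), ("CPFSK", cpfsk_baseband)]:
    diff = np.abs(modulate(a) - modulate(b)) > 1e-12
    affected = [bool(diff[k * sps:(k + 1) * sps].any()) for k in range(len(a))]
    print(name, affected)
# PSK-style: [True, False, False, False, False]   (only its own interval changes)
# CPFSK:     [True, True, True, True, True]       (the phase offset persists forever)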
>> * there are FSK systems, such as MSK, where it does hold.
>
> Wait, MSK is linear? I'm pretty sure I have seen it referred to as
> non-linear?
Argh -- according to my "note" above MSK is nonlinear, but only sorta.

MSK, AKA "minimum-shift-keying", which is defined as frequency shift
keying where the frequency shift is equal to _exactly_ 1/2 the bit rate,
can be expressed as differential offset quadrature PSK with a symbol
weighting of 1/2 of a cycle of a sine wave. If you start with the OQPSK
and do a whole bunch of math you'll find out that's true. So the only
"nonlinear" part is at the very end, where you apply the differential
decoding.

-------------------------------------------
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
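
This isn't the OQPSK derivation itself (the differential bookkeeping is
exactly the fiddly part), but a small sketch of the definition above,
with parameters of my own choosing: build MSK as continuous-phase FSK
whose two tones are spaced by exactly half the bit rate, then check that
the complex envelope has constant magnitude and that every bit advances
the phase by exactly a quarter turn.

import numpy as np

T, sps = 1.0, 1000                        # bit period and samples per bit
fs = sps / T
bits = np.array([1, -1, -1, 1, 1, -1, 1], dtype=float)

# MSK: FSK tones at +/- 1/(4T), i.e. a tone spacing of 1/(2T) = half the bit rate
fdev = 1.0 / (4.0 * T)
inst_freq = fdev * np.repeat(bits, sps)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
u = np.exp(1j * phase)                    # complex baseband; multiply by the carrier for passband

print(np.allclose(np.abs(u), 1.0))        # True: constant envelope

# Each bit rotates the phase by exactly +/- pi/2 (a quarter turn per bit)
steps = np.diff(phase[::sps])
print(np.allclose(np.abs(steps), np.pi / 2, atol=1e-2))   # True, up to the one-sample grid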
Hello again,

I asked some questions about a month ago. First, thanks for all the
answers!

I've been keeping busy with some other stuff but now I want to return
to this subject once more. Unfortunately I still don't seem to be able
to grasp much. To reiterate, this is not my field at all! If anyone does
take time to help me, please feel free to treat me as much like an idiot
as you feel is necessary for me to have a chance at understanding this
stuff :)

>>> Take a system h that operates on a signal x to generate another
>>> signal y, such that y(t) = h(x(t)). If x(t) = x1(t) + x2(t) then
>>> superposition holds if y(t) = h(x1(t) + x2(t)) = h(x1(t)) + h(x2(t)).
>>> So it holds in the BPSK system that I was babbling about above, but
>>> it doesn't necessarily* hold in an FSK system because the value of
>>> one bit might color the phase of all succeeding bits.
>>
>> See, I know what superposition means and I know PSK is linear and FSK
>> is supposed to be non-linear, and I can follow your argument about
>> h(t), x(t) etc. and when I apply it to the PSK equation I can see that
>> the superposition holds. The problem is that when I apply it to FSK,
>> the same thing happens and it appears superposition holds in this case
>> too!
>>
>> I must not understand how to apply it correctly, so is there any
>> chance you could walk me through the process?
>
> In the case of PSK (B-, Q- or M-, you choose) the contribution of the
> n'th symbol is always the same thing for the same symbol value, and
> it's always additive. In the general case for FSK the n'th symbol
> causes the phase of the signal to rotate through a certain amount,
> starting from wherever it left off at the end of the (n-1)'th bit-time.
> Thus superposition (in general) fails to hold.
I'm afraid I still don't understand you. Can you help me through it
again, with tiny, tiny baby steps this time? :) I'm not mathematically
gifted but I think it still might be easier for me to follow equations.

Let's consider a general BFSK system where the modulated signal is
generated by switching between two LOs, i.e. not phase-continuous.
Using some "pseudo"-LaTeX notation, I represent such an FSK signal by:

s(t) = Re[ sum_{m} e^{j*2*pi*a_{m}*f*(t - mT)} * e^{j*2*pi*f_{c}*t} ]

where a_{m} are the symbols, f is the frequency deviation, f_{c} is the
carrier frequency, and T is the symbol period.

Say that we want to transmit the sequence a={1, -1}. Using the criterion
for linearity given above, the superposition

h(x1(t) + x2(t)) = h(x1(t)) + h(x2(t))

should hold. Plugging in the appropriate values from the BFSK equation
gives:

x1(t) = e^{j*2*pi*f*t}
x2(t) = e^{-j*2*pi*f*(t-T)}
h(u(t)) = Re[ u(t) * e^{j*2*pi*f_{c}*t} ]

which results in:

y(t) = h(x1(t) + x2(t))
     = Re[ (x1(t) + x2(t)) * e^{j*2*pi*f_{c}*t} ]
     = Re[ x1(t) * e^{j*2*pi*f_{c}*t} ] + Re[ x2(t) * e^{j*2*pi*f_{c}*t} ]
     = h(x1(t)) + h(x2(t))

=> superposition holds => linear modulation?!

Where my error(s) at? :(