Understanding the 'Phasing Method' of Single Sideband Demodulation

Posted by Rick Lyons on Aug 8 2012 under Communication   


There are four ways to demodulate a transmitted single sideband (SSB) signal. Those four methods are:

  • synchronous detection,
  • phasing method,
  • Weaver method, and
  • filtering method.

Here we review synchronous detection in preparation for explaining, in detail, how the phasing method works. This blog contains lots of preliminary information, so if you're already familiar with SSB signals you might want to scroll down to the 'SSB DEMODULATION BY SYNCHRONOUS DETECTION' section.

BACKGROUND
I was recently involved in trying to understand the operation of a discrete SSB demodulation system that was being proposed to replace an older analog SSB demodulation system. Having never built an SSB system, I wanted to understand how the "phasing method" of SSB demodulation works.

However, in searching the Internet for tutorial SSB demodulation information I was shocked at how little information was available. Wikipedia's 'single-sideband modulation' page gives the mathematical details of SSB generation [1], but its SSB demodulation information is terribly sparse. In my Internet searching, I found the SSB information available on the net to be either badly confusing in its notation or downright ambiguous. That web-based material showed SSB demodulation block diagrams, but it didn't show spectra at various stages in the diagrams to help me understand the details of the processing.

A typical example of what was frustrating me about the web-based SSB information is given in the analog SSB generation network shown in Figure 1.

In reading the text associated with that figure, the left 90° rectangle was meant to represent a Hilbert transform. Well, in that case, the "90°" label should more correctly be "-90°" because in the time domain a Hilbert transformer shifts a sinusoid by -90°. In Figure 1, assuming the rightmost 90° rectangle means some sort of 90° phase-delay element, then its output would not be sin(ωct), it would be -sin(ωct). Ambiguous "90°" notation often occurs in the literature of SSB systems. (Reading Internet SSB material is like reading a medical billing statement; the information is confusing! So much of it doesn't "add up".) OK, enough of my ranting.

TRANSMITTED SSB SIGNALS
Before we illustrate SSB demodulation, it's useful to quickly review the nature of standard double-sideband amplitude modulation (AM) commercial broadcast transmissions that your car radio is designed to receive. In standard AM communication systems, an analog real-valued baseband input signal may have a spectral magnitude, for example, like that shown in Figure 2(a). Such a signal might well be a 4 kHz-wide audio output of a microphone having no spectral energy at DC (zero Hz). This baseband audio signal is multiplied, in the time domain, by a pure-tone carrier to generate what's called the modulated signal whose spectral magnitude content is given in Figure 2(b).

In this example the carrier frequency is 80 kHz, thus the transmitted AM signal contains pure-tone carrier spectral energy at ±80 kHz. The purpose of a remote AM receiver, then, is to demodulate that transmitted DSB AM signal and generate the baseband signal given in Figure 2(c). The analog demodulated audio signal could then be amplified and routed to a loudspeaker. We note at this point that the two transmitted sidebands, on either side of ±80 kHz, each contain the same audio information.
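If you'd like to experiment, here's a minimal Python sketch of that AM example (my own illustration, not part of the original article). The 1 kHz tone, the 0.5 modulation index, and the 1 MHz sample rate are arbitrary illustrative choices; the point is simply that the product signal contains the carrier plus two sidebands carrying the same information.

    import numpy as np

    fs = 1_000_000                        # sample rate, Hz (illustrative choice)
    t = np.arange(0, 0.02, 1/fs)          # 20 ms of signal
    fm, fc = 1_000, 80_000                # baseband tone and carrier, Hz

    baseband = np.cos(2*np.pi*fm*t)                  # stand-in for the 4 kHz-wide audio
    am = (1 + 0.5*baseband) * np.cos(2*np.pi*fc*t)   # standard DSB AM signal

    spec  = np.abs(np.fft.rfft(am)) / len(am)
    freqs = np.fft.rfftfreq(len(am), 1/fs)
    for f in (fc - fm, fc, fc + fm):      # 79, 80, and 81 kHz
        k = np.argmin(np.abs(freqs - f))
        print(f, spec[k])                 # carrier plus the two redundant sidebands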

In an SSB communication system the baseband audio signal modulates a carrier, in what's called the "upper sideband" (USB) mode of transmission, such that the transmitted analog signal would have the spectrum shown in Figure 3(b). Notice in this scenario, the lower (upper) frequency edge of the baseband signal’s USB (LSB) has been translated in frequency so that it’s located at 80 kHz (-80 kHz). (The phasing method of SSB radio frequency (RF) generation is given in Appendix A.)

The purpose of a remote SSB receiver is to demodulate that transmitted SSB signal, generating the baseband audio signal given in Figure 3(c). The analog demodulated baseband signal can then be amplified and drive a loudspeaker.

In a "lower sideband" (LSB) mode of SSB transmission, the transmitted analog signal would have the spectrum shown in Figure 4(b). In this case, the upper (lower) frequency edge of the baseband signal’s LSB (USB) has been translated in frequency so that it’s located at 80 kHz (-80 kHz). The baseband signal in Figure 4(a) is real-valued, so the positive-frequency portion of its spectrum is the complex conjugate of the negative-frequency portion. Both sidebands contain the same information, and that's why LSB transmission and USB transmission communicate identical information.

And again, in the LSB mode of transmission, the remote receiver must demodulate that transmitted LSB SSB signal and generate the baseband audio signal given in Figure 4(c).

WHY BOTHER USING SSB SYSTEMS?
Standard broadcast AM signal transmission, Figure 2, wastes a lot of transmitter power. At a minimum, two thirds of an AM transmitter's power is used to transmit the 80 kHz carrier signal which contains no information. And half of the remaining one third of the transmitted power is wasted by radiating a redundant sideband. So why are standard commercial AM broadcast systems used at all? It's because DSB AM broadcast receivers are simple and inexpensive.
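To make that two-thirds claim concrete, here's a quick worked example of my own for the best case of a single baseband tone at 100% modulation:

     s(t) = [1 + cos(ωmt)]·cos(ωct)
          = cos(ωct) + (1/2)cos[(ωc + ωm)t] + (1/2)cos[(ωc - ωm)t].

The carrier term has an average power of 1/2 and each sideband term has an average power of 1/8, so the total transmitted power is 3/4. The carrier's share is (1/2)/(3/4) = 2/3 of the total, and each of the two redundant sidebands carries half of the remaining 1/3.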

In SSB transmission systems, 100% of their transmitter power is used to transmit a single baseband sideband. Thus they exhibit no wasted transmitter power as do AM systems. In addition, due to their narrower bandwidth, SSB systems can have twice the number of transmitted signals over a given RF range than standard double-sideband AM signals. The disadvantage of SSB communications, however, is that the remote receiver's demodulation circuitry is more complicated than that needed by AM receivers.

SSB DEMODULATION BY SYNCHRONOUS DETECTION
One method, called "synchronous detection", to implement the demodulation process in Figure 3 is shown in Figure 5. This method is relatively simple. In Figure 5 the analog RF input USB SSB signal has a carrier frequency of 80 kHz, so ωc = 2π•80000 radians/second. We multiply that input SSB signal by what’s called a “beat frequency oscillator” (BFO) signal, cos(ωct), to translate the SSB signal’s USB (LSB) down (up) in frequency toward zero Hz. That multiplication also produces spectral energy in the vicinity of ±160 kHz. The analog lowpass filter (LPF), whose frequency magnitude response is shown at the upper right side of Figure 5, attenuates the high frequency spectral energy producing our desired baseband audio signal.

A DSP version of our simple Figure 5 USB demodulation process is shown in Figure 6 where, for example, we chose the A/D converter’s sample rate to be 200 kHz. Notice the spectral wrap-around that occurs at half the sample rate, ±100 kHz, in the multiplier’s output signal. The digital LPF, having a cutoff frequency of just a bit greater than 4 kHz, serves two purposes. It attenuates any unwanted out-of-baseband spectral energy in the down-converted signal, and eliminates any spectral aliasing caused by decimation. The decimation-by-10 process reduces the baseband signal’s sample rate to 20 kHz.

The analog LPF in Figure 6 attenuates the unwanted high-frequency analog spectral images that are produced, at multiples of 20 kHz, by the D/A conversion process.
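Here's a brief Python sketch (mine, not the article's code) of the Figure 6 processing: sample at 200 kHz, multiply by a cos() BFO at the 80 kHz carrier frequency, lowpass filter to just above 4 kHz, then decimate by 10 down to a 20 kHz sample rate. The 3 kHz test tone, the 129-tap filter length, and the 5 kHz cutoff are my assumptions for illustration.

    import numpy as np
    from scipy.signal import firwin, lfilter

    fs, fc = 200_000, 80_000                    # A/D sample rate and carrier, Hz
    t = np.arange(40_000) / fs

    rf_usb = np.cos(2*np.pi*(fc + 3_000)*t)     # USB signal carrying a 3 kHz tone

    mixed = rf_usb * np.cos(2*np.pi*fc*t)       # BFO mixer: tones at 3 kHz and 163 kHz
                                                # (163 kHz wraps to 37 kHz at fs = 200 kHz)
    lpf = firwin(129, 5_000, fs=fs)             # digital LPF, cutoff a bit above 4 kHz
    baseband = lfilter(lpf, 1.0, mixed)         # removes the 37 kHz image energy

    audio = baseband[::10]                      # decimate by 10: output rate is 20 kHz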

Returning to the analog demod process in Figure 5, had the incoming SSB signal been a lower sideband (LSB) transmission our analog processing would be that shown in Figure 7. The processing performed in Figure 7 is identical to that shown in Figure 5. So, happily, our simple ‘down-convert and lowpass filter’ synchronous detection demodulation process works for both USB and LSB transmitted signals.

THERE'S TROUBLE IN PARADISE
The simple demodulation process in Figure 7 has one unpleasant shortcoming that renders it impractical in real-world SSB communications. Here’s the story.

In the United States, commercial AM radio broadcasting is carefully regulated in that radio stations are assigned a specific RF carrier frequency at which they can transmit their radio programs. Those carrier frequencies are always at multiples of 10 kHz. So it’s possible for us to receive one AM radio signal at a carrier frequency of, say, 1200 kHz while another AM radio station is transmitting its program at a carrier frequency of 1210 kHz. (Other parts of the world use a 9 kHz carrier spacing for their commercial radio broadcasts.)

[In the States, those commercial AM broadcast carrier frequencies are monitored with excruciating rigor. Many years ago while attending college I worked part time at a commercial radio station in Ohio. One of my responsibilities was to monitor the station’s transmitter’s output power level and carrier frequency, and record those values in a log book. Those power and frequency measurements, by law, had to be performed every 15 minutes, 24 hours a day!]

That careful control of transmitted signal carrier frequencies does not exist in today’s world of SSB communications. Think about the situation where two independent, unrelated, SSB Users are transmitting their signals as shown in Figure 8(a). User# 1 is transmitting a USB signal at a carrier frequency of 80 kHz and User# 2 is transmitting an LSB signal at a carrier frequency of 80 kHz. The operation of our simple ‘down-convert and lowpass filter’ demod process is given in Figure 8(b). There we see that spectral overlap prevents us from demodulating either of the two SSB signals.

This troublesome overlapped-spectra problem in Figure 8(b) can be solved by a clever quadrature processing scheme. Here's how.

QUADRATURE PROCESSING TO THE RESCUE
Our dual-User SSB problem has been solved by a quadrature processing technique, called the "phasing method," which makes use of the Hilbert transform. See Appendix B for a brief explanation of the Hilbert transform.

To explain the details of that process, let’s assume that User# 1 and User# 2 have transmitted two sinusoidal signals whose baseband spectra are those shown in Figure 9(a). User# 1’s baseband signal is a sinewave tone whose frequency is ±3 kHz and it’s transmitted as a USB signal at a carrier frequency of 80 kHz, as shown in Figure 9(b). Let’s also assume that User# 2’s baseband signal is a lower-amplitude cosine wave tone whose frequency is ±1 kHz, and it’s transmitted as an LSB signal also at a carrier frequency of 80 kHz.

To understand the phasing method of SSB demodulation, we must pay attention to the real and imaginary parts of our spectra, as is done in Figure 9(b).

Figure 10 presents the block diagram of a “phasing method” demodulator.

What the Figure 10 quadrature processing does for us, to eliminate the overlapped-spectral component problem in Figure 8, is to generate two down-converted signals (i(t) and q(t)) with appropriate phase relationships so that selected spectral components either reinforce or cancel each other at the final output addition and subtraction operations. Let's see how this all works.

The real and imaginary parts of the transmitted RF spectra from the bottom of Figure 9 are shown at the lower left side of Figure 11.

In the phasing method of SSB demodulation, we perform a complex down-conversion of the real-valued RF input, using a complex-valued BFO of e^(-jωct) = cos(ωct) - jsin(ωct), to generate a complex i(t) + jq(t) signal whose spectrum is shown at the upper right side of Figure 11. That spectrum, belonging to a complex-valued time sequence, is merely the demodulator's input spectrum shifted down in frequency by 80 kHz.

Figure 12 shows the spectra at the output of the mixers, the output of the Hilbert transformer, and the final baseband spectra. There we see that the output of the upper signal path produces User# 1’s baseband signal, with no interference from User# 2. And the output of the lower signal path yields User# 2’s baseband signal with no interference from User# 1. That’s the phasing method of SSB demodulation.

A DSP SSB DEMODULATOR
Figure 13 shows an example of a DSP SSB phasing method demodulator. Once the complex-valued BFO of e^(-jωc·nts) = cos(ωc·nts) - jsin(ωc·nts) down-converts the RF SSB signal to zero Hz, it’s sensible to decimate the multipliers’ outputs to a lower sample rate, fs, to reduce the processing workload of the Hilbert transformer. We could have performed decimation by a factor greater than 10, but doing so would make the design of the post-D/A analog lowpass filter more complicated. The digital LPFs, whose positive-frequency cutoff frequency is slightly greater than 4 kHz, attenuate any unwanted out-of-baseband spectral energy in the down-converted signal and eliminate any spectral aliasing caused by decimation.

The Delay element in the upper path in Figure 13 is needed to maintain data synchronization with the time-delayed Hilbert transformer output sequence in the bottom path. For example, if a 21-tap digital Hilbert transformer is used, then the upper path’s Delay element would be a 10-stage delay line [2].
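As a concrete illustration of Figures 10 and 13, here's a rough Python sketch of my own. It down-converts with cos() and -sin() BFOs, lowpass filters and decimates by 10, applies a 21-tap FIR Hilbert transformer in the q(t) path with the matching 10-sample delay in the i(t) path, and then forms the sum and difference outputs. The particular add/subtract sign convention that selects USB versus LSB, the filter lengths, and the test tones are my assumptions, not a claim about the exact figures; also note that a short 21-tap Hilbert transformer gives only partial rejection of the unwanted sideband at low frequencies.

    import numpy as np
    from scipy.signal import firwin, lfilter

    fs, fc, D = 200_000, 80_000, 10
    t = np.arange(200_000) / fs

    # Two overlapping users as in Figure 9: User# 1 sends a 3 kHz tone as USB,
    # User# 2 sends a lower-amplitude 1 kHz tone as LSB, both at an 80 kHz carrier.
    rf = np.sin(2*np.pi*(fc + 3_000)*t) + 0.5*np.cos(2*np.pi*(fc - 1_000)*t)

    i = rf * np.cos(2*np.pi*fc*t)               # in-phase mixer output
    q = rf * -np.sin(2*np.pi*fc*t)              # quadrature mixer output

    lpf = firwin(129, 5_000, fs=fs)             # cutoff a bit above 4 kHz (illustrative)
    i = lfilter(lpf, 1.0, i)[::D]               # filter, then decimate to 20 kHz
    q = lfilter(lpf, 1.0, q)[::D]

    # 21-tap FIR Hilbert transformer: truncated ideal impulse response, Hamming-windowed
    k = np.arange(21) - 10
    ht = np.zeros(21)
    ht[k % 2 != 0] = 2.0 / (np.pi * k[k % 2 != 0])
    ht *= np.hamming(21)

    q_h = lfilter(ht, 1.0, q)                        # Hilbert-transformed q(t) path
    i_d = np.concatenate((np.zeros(10), i[:-10]))    # matching 10-sample delay in i(t) path

    usb_audio = i_d - q_h                       # mostly User# 1's 3 kHz tone (assumed sign)
    lsb_audio = i_d + q_h                       # mostly User# 2's 1 kHz tone (assumed sign)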

With DSP techniques enabling us to implement high performance, guaranteed linear-phase, Hilbert transformation, the phasing method of SSB demodulation has become popular in modern times.

LISTENING TO DONALD DUCK
You’ll notice that the phasing method of SSB demodulation assumes we have a BFO available in our receiver that’s identical in frequency and phase with the ωc oscillator in the SSB transmitter. If this is not the case, then our demodulated baseband signals may have both frequency and phase errors. Those potential errors can be described as follows:

Let's assume an SSB transmitter baseband signal contains a single sinusoid of cos(ωmt + φ). If the demodulator’s local BFO, the cos() and -sin() oscillator combination, has a frequency error of Δω radians/second and a phase error of θ radians, then the SSB demodulated baseband sinusoids will be

     USB demod sinusoid = cos[(ωm - Δω)t + φ - θ],

and an LSB mode demodulated baseband signal will be

     LSB demod sinusoid = cos[(ωm + Δω)t + φ + θ].

The origin of those expressions is given in Appendix C.

If Δω = 0, a constant phase error of θ radians over the demodulated baseband signal’s full frequency range is not a problem in voice communications. The human ear/brain combination can tolerate audio phase errors, so we can correctly interpret such demodulated speech signals. I’m not a digital communications guy but I imagine that a few degrees of BFO phase error would render any sort of digital phase-modulated baseband signal useless in an SSB receiver.

When θ = 0, a BFO’s +Δω frequency error causes pitch shifting in that the demodulated baseband signal will be shifted in frequency. Figure 14(b) shows the situation where a BFO’s +Δω frequency error causes the positive- and negative-frequency components of the baseband signal to overlap at zero Hz. In this situation a +Δω frequency error greater than roughly 75 to 100 Hz renders the demodulated voice baseband signal unintelligible.

Figure 14(c) shows the demodulated baseband spectrum when a BFO’s -Δω frequency error causes the positive- and negative-frequency components of the baseband signal to be shifted away from zero Hz. This distorts the harmonic relation between baseband voice spectral components. In this scenario a -Δω frequency error greater than roughly 150 to 200 Hz causes a demodulated voice baseband signal to sound like Donald Duck.

Intelligibility tests indicate that a Figure 14(c) BFO -Δω frequency error of less than, say, 150 Hz can be tolerated. The bottom line here is that, using modern-day high-precision frequency synthesis techniques, the Δω error of receiver BFOs can be kept small, making SSB systems, with their narrow RF bandwidth requirement and transmission power efficiency, quite useful for voice communications over radio links.

CONCLUSION
So now we know how the synchronous detection and phasing methods of SSB demodulation work. We'll leave the "Weaver method" of SSB demodulation, itself a form of quadrature processing, as a topic for another blog. The "filtering method", as far as I can tell, doesn't seem to be used in modern digital implementations of SSB communications systems. If you'd like to review the mathematics of SSB systems, I recommend you check out the Internet references [3] and [4].

ACKNOWLEDGMENTS
I say “Thanks” to Tauno Voipio and Mark Goldberg for explaining so much SSB theory to me. You guys rock! Without your help this blog would not exist.

REFERENCES
[1] http://en.wikipedia.org/wiki/Single-sideband_modulation
[2] R. Lyons, “Understanding Digital Signal Processing”, 2nd & 3rd Editions, Prentice Hall Publishing, Chapter 9.
[3] http://local.eleceng.uct.ac.za/courses/EEE3086F/notes/508-AM_SSB_2up.pdf
[4] http://www.ece.umd.edu/~tretter/commlab/c6713slides/ch7.pdf

APPENDIX A – GENERATING SSB SIGNALS
The phasing method of SSB generation is shown in Figure A-1(a), where m(t) is some generic baseband modulating signal. Some people call Figure A-1(a) a "Hartley modulator." A specific SSB generation example is given in Figure A-1(b). In that figure the baseband input is a single low-frequency analog cosine wave whose frequency is ωm radians/second. The output carrier frequency is ωc = 2π•80000 radians/second (80 kHz).
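To see the Hartley modulator's arithmetic in action, here's a small Python sketch of my own using scipy's analytic-signal routine as a stand-in for the Hilbert transformer. The tone frequency and sample rate are illustrative, and I'm not asserting which sum/difference arrangement Figure A-1 actually uses; the math below simply shows that the difference yields the upper sideband and the sum yields the lower sideband.

    import numpy as np
    from scipy.signal import hilbert

    fs, fc, fm = 1_000_000, 80_000, 3_000       # illustrative sample rate and frequencies
    t = np.arange(0, 0.02, 1/fs)

    m   = np.cos(2*np.pi*fm*t)                  # baseband modulating cosine, m(t)
    m_h = np.imag(hilbert(m))                   # Hilbert transform of m(t), i.e. sin()

    usb = m*np.cos(2*np.pi*fc*t) - m_h*np.sin(2*np.pi*fc*t)   # single tone at 83 kHz
    lsb = m*np.cos(2*np.pi*fc*t) + m_h*np.sin(2*np.pi*fc*t)   # single tone at 77 kHz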

A real-world example of a DSP version of this SSB generation method is shown in Figure A-2, where interpolation is needed so that multiplication by the high-frequency oscillator signals does not cause spectral wrap-around errors, as would happen if no interpolation was performed.

The baseband input sequence m(n) had a one-sided bandwidth of 3 kHz, and the final SSB output carrier frequency is 9 MHz. The interpolation by 3000 was performed by a cascade of three interpolation stages (interpolation factors 15, 25, and 8), with each stage using CIC lowpass filters. The output sample rate was chosen to be 36 MHz so that the oscillators' cos() and sin() sequences were [1,0,-1,0,...] and [0,1,0,-1,...], which eliminated the need for high-frequency multiplication.
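Here's a quick numerical check of that fs/4 oscillator trick (my own snippet): with a 36 MHz sample rate and a 9 MHz carrier, the sampled cos() and sin() oscillators reduce to the four-sample sequences mentioned above, so the "multiplications" become mere sign flips and zeroings.

    import numpy as np

    fs, fc = 36_000_000, 9_000_000              # output sample rate and carrier, Hz
    n = np.arange(8)
    print(np.round(np.cos(2*np.pi*fc*n/fs)).astype(int))   # [ 1  0 -1  0  1  0 -1  0]
    print(np.round(np.sin(2*np.pi*fc*n/fs)).astype(int))   # [ 0  1  0 -1  0  1  0 -1]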

APPENDIX B – THE HILBERT TRANSFORM AS A TRANSFER FUNCTION
In the time domain, the Hilbert transform (HT) of a real-valued cosine wave is a real-valued sinewave of the same frequency. And the HT of a real-valued sinewave is a real-valued negative cosine wave of the same frequency. Stated in different words, in the time domain the HT of a real-valued sinusoid is another real-valued sinusoid of the same frequency whose phase has been shifted by -90° relative to the original sinusoid. We validate these statements as follows:

If we treat the HT as a frequency-domain H(ω) transfer function, its |H(ω)| magnitude response is unity as shown in Figure B-1(b).

The phase response of H(ω) is that shown in Figure B-1(c), which we can describe using

     arg[H(ω)] = -90° for ω > 0, and arg[H(ω)] = +90° for ω < 0,

where "arg" means the argument, or angle, of H(ω). This means that the HT of a real-valued cosine wave is

     HT{cos(ωt)} = cos(ωt - 90°) = sin(ωt),

and the HT of a real-valued sinewave is

     HT{sin(ωt)} = sin(ωt - 90°) = -cos(ωt).

A detailed description of the HT and techniques for designing digital Hilbert transformers are given in reference [2].
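If you'd like to verify those HT statements numerically, here's a short check of my own using scipy's analytic-signal routine (scipy.signal.hilbert returns x(t) + j·xH(t), so its imaginary part is the HT of x(t)). The 10 Hz tone and 1 kHz sample rate are arbitrary.

    import numpy as np
    from scipy.signal import hilbert

    t = np.arange(0, 1, 1/1000)                 # one second sampled at 1 kHz
    w = 2*np.pi*10                              # 10 Hz test sinusoid

    print(np.allclose(np.imag(hilbert(np.cos(w*t))),  np.sin(w*t), atol=1e-2))   # True
    print(np.allclose(np.imag(hilbert(np.sin(w*t))), -np.cos(w*t), atol=1e-2))   # True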

I'll briefly mention that there are three reasonable ways to depict the HT in block diagrams. Those ways are shown in Figure B-2, where signal xH(t) represents the HT of input x(t) signal.

Although I understand why an author might use it, I don't particularly like the Figure B-2(a) notation. I prefer the notation in Figure B-2(b). By the way, I encountered the interesting Figure B-2(c) depiction of the HT on a web page produced by a professor at the University of Maryland. (It shows the professor's inclination to describe things in strictly mathematical terms.)

APPENDIX C – THE EFFECT OF LOCAL BFO FREQUENCY AND PHASE ERRORS
Using the phrase "BFO" to represent our phasing method demodulator's cos() and -sin() oscillators, Figure C-1 shows the USB-mode demodulated output baseband signal under the conditions:

  • The transmitter's baseband signal is a single cos(ωmt + φ) sinusoid,
  • The transmitted USB RF signal is a cos[(ωc + ωm)t + φ] sinusoid,
  • The demodulator's BFO has a frequency error of Δω radians/second and a phase error of θ radians.

If Δω = 0 and θ = 0, then the demodulated output signal would be the original cos(ωmt + φ) baseband signal.
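For readers who want the algebra behind that USB-mode result, here is a short derivation of my own. It assumes the Figure 10 arrangement forms the USB output as i(t) minus the Hilbert transform of q(t), and that ωm > Δω so the wanted difference tone stays at a positive frequency:

     RF input:  cos[(ωc + ωm)t + φ]

     i(t) = RF input × cos[(ωc + Δω)t + θ]    → after the LPF →  (1/2)cos[(ωm - Δω)t + φ - θ]
     q(t) = RF input × (-sin[(ωc + Δω)t + θ]) → after the LPF →  (1/2)sin[(ωm - Δω)t + φ - θ]

     HT{q(t)} = -(1/2)cos[(ωm - Δω)t + φ - θ]

     i(t) - HT{q(t)} = cos[(ωm - Δω)t + φ - θ],

which is the USB demod sinusoid given earlier. The LSB-mode expression follows the same way with the transmitted cos[(ωc - ωm)t - φ] signal as the RF input.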

Figure C-2 gives a graphical derivation of a demodulated LSB-mode output signal when frequency and phase errors exist in the local BFO. Notice in the LSB-mode case, if an LSB transmitter's baseband signal contains a single sinusoid of cos(ωmt + φ), the transmitted RF LSB signal will be cos[(ωc - ωm)t - φ], having a negative initial phase angle.



posted by Rick Lyons
Richard Lyons is a Contracting Systems Engineer and Lecturer at Besser Associates, Mountain View, Calif. He has written over 30 articles and conference papers on DSP topics, and authored Amazon.com's top selling DSP book "Understanding Digital Signal Processing, 3rd Ed.". He served as an Associate Editor at IEEE Signal Processing Magazine, for nine years, where he created and edited the "DSP Tips & Tricks" column. Lyons is the editor of, and contributor to, the book "Streamlining Digital Signal Processing-A Tricks of the Trade Guidebook, 2nd Ed." (Wiley & Sons, 2012).


Comments / Replies


Ravi
Said:
Superb article as usual. Eagerly waiting for the post on Weaver method of SSB demodulation.
2 years ago
cfelton
Said:
Rick,

Thanks for the thorough and well-written article. In appendix B you indicate a fifth reference. But in your reference list there are only four.
2 years ago
Rick Lyons
Replied:
Hello" Eagle-Eye" Chris,
That '[5]' in Appendix B should be a '[2]'. I'll have that typo
corrected. Thanks for pointing that out to me.

[-Rick Lyons-]
2 years ago
cfelton
Replied:
@rick Perfect! Ah the information is in my favorite DSP book. I always forget the range of information available in that great reference
2 years ago
Ravi72
Said:
In a dual-user scenario there would be overlap of spectra; how can we effectively separate the sidebands of interest? Secondly, how do I combine real and imaginary parts to form a complex signal in hardware?
2 years ago
Rick Lyons
Said:
Hello Ravi72,
Sorry, I don't understand your 1st question. Perhaps you could be more specific and tell me which paragraph, or figure, in my blog you're referring to when you ask that 1st question.

Your 2nd question is a super valid question. Here's the best answer I have:
Thinking about a complex signal that is A+jB, you cannot go to your local electronics store and buy a 'j-operator' component and solder it to a printed circuit board to form a complex signal. Implementing the j-operator is a conceptual "definition" that everyone agrees to.

For example:
Let's say you build a circuit that generates two real valued signals, Signal_1 and Signal_2. Then you bring everyone into the lab and say, "I am going to define a complex signal. That complex signal is Signal_1 + jSignal_2. Does everyone understand?" They all nod their heads in agreement. Then you continue with, "From now on, my Signal_1 is the real part (in-phase part, East-West part) of my complex signal and my Signal_2 is the imaginary part (quadrature phase part, North-South part) of my complex signal. Does everyone agree to this?" Everyone in the room nods their heads in agreement.
By having everyone agree to your definitions of what are the real and imaginary parts of your complex signal you have implemented the j-operator.

Another way to view your 2nd question is: Let's say real-valued Signal_1 and real-valued Signal_2 are analog sinewaves of the same frequency but with 90-degree phase between them. Apply Signal_2 to the vertical input of an oscilloscope and apply Signal_1 to the external horizontal input of the oscilloscope. By doing this, the screen of the oscilloscope becomes your complex plane and the scope's electron beam position on the screen is the instantaneous value of your complex signal Signal_1 + jSignal_2. If the frequency of the two sinusoids was 2 Hz, the electron beam will orbit two revolutions/second around in a circle on the oscilloscope's screen. So the act of connecting the two cables to the oscilloscope was in itself implementing the j-operator.

Hope that helps,
[-Rick-]
2 years ago