# Understanding the 'Phasing Method' of Single Sideband Demodulation

There are four ways to demodulate a transmitted single sideband (SSB) signal. Those four methods are:

• synchronous detection,
• phasing method,
• Weaver method, and
• filtering method.

Here we review synchronous detection in preparation for explaining, in detail, how the phasing method works. This blog contains lots of preliminary information, so if you're already familiar with SSB signals you might want to scroll down to the 'SSB DEMODULATION BY SYNCHRONOUS DETECTION' section.

BACKGROUND
I was recently involved in trying to understand the operation of a discrete SSB demodulation system that was being proposed to replace an older analog SSB demodulation system. Having never built an SSB system, I wanted to understand how the "phasing method" of SSB demodulation works.

However, in searching the Internet for tutorial SSB demodulation information I was shocked at how little information was available. Wikipedia's 'single-sideband modulation' page gives the mathematical details of SSB generation [1], but that page's SSB demodulation information was terribly sparse. In my Internet searching, I found the SSB information available on the net to be either badly confusing in its notation or downright ambiguous. That web-based material showed SSB demodulation block diagrams, but it didn't show the spectra at various stages of the diagrams to help me understand the details of the processing.

A typical example of what was frustrating me about the web-based SSB information is given in the analog SSB generation network shown in Figure 1.

In reading the text associated with that figure, the left 90° rectangle was meant to represent a Hilbert transform. Well, in that case, the "90°" label should more correctly be "-90°" because in the time domain a Hilbert transformer shifts a sinusoid by -90°. In Figure 1, assuming the rightmost 90° rectangle means some sort of 90° phase-delay element, then its output would not be sin(ωct), it would be -sin(ωct). Ambiguous "90°" notation occurs often in the literature of SSB systems. (Reading Internet SSB material is like reading a medical billing statement; the information is confusing! So much of it doesn't "add up".) OK, enough of my ranting.

TRANSMITTED SSB SIGNALS
Before we illustrate SSB demodulation, it's useful to quickly review the nature of standard double-sideband amplitude modulation (AM) commercial broadcast transmissions that your car radio is designed to receive. In standard AM communication systems, an analog real-valued baseband input signal may have a spectral magnitude, for example, like that shown in Figure 2(a). Such a signal might well be a 4 kHz-wide audio output of a microphone having no spectral energy at DC (zero Hz). This baseband audio signal is effectively multiplied, in the time domain, by a pure-tone carrier to generate what's called the modulated signal whose spectral magnitude content is given in Figure 2(b).

In this example the carrier frequency is 80 kHz, thus the transmitted AM signal contains pure-tone carrier spectral energy at ±80 kHz plus sideband energy. The purpose of a remote AM receiver, then, is to demodulate that transmitted DSB AM signal and generate the baseband signal given in Figure 2(c). The analog demodulated audio signal could then be amplified and routed to a loudspeaker. We note at this point that the two transmitted sidebands, on either side of ±80 kHz, each contain the same audio information.
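The DSB arithmetic above is easy to verify numerically. In this pure-Python sketch (the 3 kHz tone and the 200 kHz sample rate are assumed test values), multiplying a baseband tone by an 80 kHz carrier produces the two redundant sidebands of Figure 2(b). Note that a bare product modulator yields no carrier energy; a broadcast AM transmitter adds the carrier term separately.

```python
import math

fs, fc, fm = 200_000, 80_000, 3_000   # sample rate, carrier, assumed test tone (Hz)
N = 4000
audio   = [math.cos(2*math.pi*fm*n/fs) for n in range(N)]   # baseband tone
carrier = [math.cos(2*math.pi*fc*n/fs) for n in range(N)]
dsb     = [a*c for a, c in zip(audio, carrier)]             # modulated signal

def amp(x, f, fs):
    """Amplitude of the component of x at f Hz (DFT-style correlation)."""
    re = sum(v*math.cos(2*math.pi*f*n/fs) for n, v in enumerate(x))
    im = sum(v*math.sin(2*math.pi*f*n/fs) for n, v in enumerate(x))
    return 2*math.hypot(re, im)/len(x)

print(amp(dsb, 83_000, fs))  # ~0.5: upper sideband
print(amp(dsb, 77_000, fs))  # ~0.5: lower sideband (same information)
print(amp(dsb, 80_000, fs))  # ~0: no carrier from the product alone
```

Both sidebands come out with identical amplitude, which is the redundancy SSB exploits.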

In an SSB communication system the baseband audio signal modulates a carrier, in what's called the "upper sideband" (USB) mode of transmission, such that the transmitted analog signal would have the spectrum shown in Figure 3(b). Notice in this scenario, the lower (upper) frequency edge of the baseband signal’s USB (LSB) has been translated in frequency so that it’s located at 80 kHz (-80 kHz). (The phasing method of SSB radio frequency (RF) generation is given in Appendix A.)

The purpose of a remote SSB receiver is to demodulate that transmitted SSB signal, generating the baseband audio signal given in Figure 3(c). The analog demodulated baseband signal can then be amplified and drive a loudspeaker.

In a "lower sideband" (LSB) mode of SSB transmission, the transmitted analog signal would have the spectrum shown in Figure 4(b). In this case, the upper (lower) frequency edge of the baseband signal’s LSB (USB) has been translated in frequency so that it’s located at 80 kHz (-80 kHz). The baseband signal in Figure 4(a) is real-valued, so the positive-frequency portion of its spectrum is the complex conjugate of the negative-frequency portion. Both sidebands contain the same information, and that's why LSB transmission and USB transmission communicate identical information.

And again, in the LSB mode of transmission, the remote receiver must demodulate that transmitted LSB SSB signal and generate the baseband audio signal given in Figure 4(c).

WHY BOTHER USING SSB SYSTEMS?
Standard broadcast AM signal transmission, Figure 2, wastes a lot of transmitter power. At a minimum, two thirds of an AM transmitter's power is used to transmit the 80 kHz carrier signal which contains no information. And half of the remaining one third of the transmitted power is wasted by radiating a redundant sideband. So why are standard commercial AM broadcast systems used at all? It's because DSB AM broadcast receivers are simple and inexpensive.

In SSB transmission systems, 100% of the transmitter power is used to transmit a single baseband sideband, so none of it is wasted as in AM systems. In addition, due to their narrower bandwidth, SSB systems can fit twice as many transmitted signals into a given RF range as standard double-sideband AM systems can. The disadvantage of SSB communications, however, is that the remote receiver's demodulation circuitry is more complicated than that needed by AM receivers.

SSB DEMODULATION BY SYNCHRONOUS DETECTION
One method to implement the demodulation process in Figure 3, sometimes called "synchronous detection", is shown in Figure 5. This method is relatively simple. In Figure 5 the analog RF input USB SSB signal has a carrier frequency of 80 kHz, so ωc = 2π•80000 radians/second. We multiply that input SSB signal by what’s called a “beat frequency oscillator” (BFO) signal, cos(ωct), to translate the SSB signal’s USB (LSB) down (up) in frequency toward zero Hz. That multiplication also produces spectral energy in the vicinity of ±160 kHz. The analog lowpass filter (LPF), whose frequency magnitude response is shown at the upper right side of Figure 5, attenuates the high frequency spectral energy, producing our desired baseband audio signal.

A DSP version of our simple Figure 5 USB demodulation process is shown in Figure 6 where, for example, we chose the A/D converter’s sample rate to be 200 kHz. Notice the spectral wrap-around that occurs at half the sample rate, ±100 kHz, in the multiplier’s output signal. The digital LPF, having a cutoff frequency of just a bit greater than 4 kHz, serves two purposes. It attenuates any unwanted out-of-baseband spectral energy in the down-converted signal, and eliminates any spectral aliasing caused by decimation. The decimation-by-10 process reduces the baseband signal’s sample rate to 20 kHz.
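To make that arithmetic concrete, here is a minimal pure-Python sketch of the Figure 6 mixing step (the 200 kHz sample rate follows the Figure 6 example; the 3 kHz test tone is an assumption). Multiplying an 83 kHz USB tone by the 80 kHz BFO produces the desired 3 kHz baseband tone plus a 163 kHz product that, sampled at 200 kHz, wraps around to 37 kHz; removing that second component is the digital LPF's job.

```python
import math

fs, fc, fm = 200_000, 80_000, 3_000   # sample rate, carrier, assumed test tone (Hz)
N = 4000

# Received USB signal: the fm baseband tone appears at fc + fm = 83 kHz
usb = [math.cos(2*math.pi*(fc + fm)*n/fs) for n in range(N)]

# Multiply by the BFO cos(wc t). The product is
#   0.5*cos(2*pi*fm*t) + 0.5*cos(2*pi*(2*fc + fm)*t),
# and the 163 kHz term wraps around to 200 - 163 = 37 kHz at fs = 200 kHz.
mixed = [x*math.cos(2*math.pi*fc*n/fs) for n, x in enumerate(usb)]

def amp(x, f, fs):
    """Amplitude of the component of x at f Hz (DFT-style correlation)."""
    re = sum(v*math.cos(2*math.pi*f*n/fs) for n, v in enumerate(x))
    im = sum(v*math.sin(2*math.pi*f*n/fs) for n, v in enumerate(x))
    return 2*math.hypot(re, im)/len(x)

print(amp(mixed, 3_000, fs))    # ~0.5: the desired demodulated baseband tone
print(amp(mixed, 37_000, fs))   # ~0.5: the wrapped-around image the LPF removes
```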

The analog LPF in Figure 6 attenuates the unwanted high-frequency analog spectral images that are produced, at multiples of 20 kHz, by the D/A conversion process.

Returning to the analog demod process in Figure 5, had the incoming SSB signal been a lower sideband (LSB) transmission our analog processing would be that shown in Figure 7. The processing performed in Figure 7 is identical to that shown in Figure 5. So, happily, our simple ‘down-convert and lowpass filter’ synchronous detection demodulation process works for both USB and LSB transmitted signals.

The simple demodulation process in Figure 7 has one unpleasant shortcoming that renders it impractical in real-world SSB communications. Here’s the story.

In the United States, commercial AM radio broadcasting is carefully restricted in that radio stations are assigned a specific RF carrier frequency at which they can transmit their radio programs. Those carrier frequencies are always at multiples of 10 kHz. So it’s possible for us to receive one AM radio signal at a carrier frequency of, say, 1200 kHz while another AM radio station is transmitting its program at a carrier frequency of 1210 kHz. (Other parts of the world use a 9 kHz carrier spacing for their commercial radio broadcasts.)

[In the States, those commercial AM broadcast carrier frequencies are monitored with excruciating rigor. Many years ago while attending college I worked part time at a commercial radio station in Ohio. One of my responsibilities was to monitor the station’s transmitter’s output power level and carrier frequency, and record those values in a log book. Those power and frequency measurements, by law, had to be performed every 15 minutes, 24 hours a day!]

That careful control of transmitted signal carrier frequencies does not exist in today’s world of SSB communications. Think about the situation where two independent, unrelated SSB Users are transmitting their signals as shown in Figure 8(a). User# 1 is transmitting a USB signal at a carrier frequency of 80 kHz and User# 2 is transmitting an LSB signal at a carrier frequency of 80 kHz. The operation of our simple ‘down-convert and lowpass filter’ demod process is given in Figure 8(b). There we see that spectral overlap prevents us from demodulating either of the two SSB signals.

This troublesome overlapped-spectra problem in Figure 8(b) can be solved by a clever quadrature processing scheme. Here's how.

Our dual-User SSB problem has been solved by a quadrature processing technique, called the “phasing method,” which makes use of the Hilbert transform. See Appendix B for a brief explanation of the Hilbert transform.

To explain the details of that process, let’s assume that User# 1 and User# 2 have transmitted two sinusoidal signals whose baseband spectra are those shown in Figure 9(a). User# 1’s baseband signal is a sinewave tone whose frequency is ±3 kHz, and it’s transmitted as a USB signal at a carrier frequency of 80 kHz, as shown in Figure 9(b). Let’s also assume that User# 2’s baseband signal is a lower-amplitude cosine wave tone whose frequency is ±1 kHz, and it’s transmitted as an LSB signal, also at a carrier frequency of 80 kHz.

To understand the phasing method of SSB demodulation, we must pay attention to the real and imaginary parts of our spectra, as is done in Figure 9(b).

Figure 10 presents the block diagram of a “phasing method” demodulator.

What the Figure 10 quadrature processing does for us, to eliminate the overlapped-spectral component problem in Figure 8, is to generate two down-converted signals (i(t) and q(t)) with appropriate phase relationships so that selected spectral components either reinforce or cancel each other at the final output addition and subtraction operations. Let's see how this all works.

The real and imaginary parts of the transmitted RF spectra from the bottom of Figure 9 are shown at the lower left side of Figure 11.

In the phasing method of SSB demodulation, we perform a complex down-conversion of the real-valued RF input, using a complex-valued BFO of e^(-jωct) = cos(ωct) - jsin(ωct), to generate a complex i(t) + jq(t) signal whose spectrum is shown at the upper right side of Figure 11. That spectrum, belonging to a complex-valued time signal, is merely the demodulator's input spectrum shifted down in frequency by 80 kHz.

Figure 12 shows the spectra at the output of the mixers, the output of the Hilbert transformer, and the final baseband spectra. There we see that the output of the upper signal path produces User# 1’s baseband signal, with no interference from User# 2. And the output of the lower signal path yields User# 2’s baseband signal with no interference from User# 1. That’s the phasing method of SSB demodulation.

A DSP SSB DEMODULATOR
Figure 13 shows an example of a DSP SSB phasing method demodulator. Once the complex-valued BFO of e^(-jωc·n·ts) = cos(ωc·n·ts) - jsin(ωc·n·ts) down-converts the RF SSB signal to zero Hz, it’s sensible to decimate the multipliers’ outputs to a lower sample rate to reduce the processing workload of the Hilbert transformer. We could have performed decimation by a factor greater than 10, but doing so would make the design of the post-D/A analog lowpass filter more complicated. The digital LPFs, whose positive-frequency cutoff frequency is slightly greater than 4 kHz, attenuate any unwanted out-of-baseband spectral energy in the down-converted signal and eliminate any spectral aliasing caused by decimation.

The Delay element in the upper path in Figure 13 is needed to maintain data synchronization with the time-delayed Hilbert transformer output sequence in the bottom path. For example, if a 21-tap digital Hilbert transformer is used, then the upper path’s Delay element would be a 10-stage delay line [2].
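The whole Figure 13 signal flow can be sketched in pure Python. This is illustrative only: the two-user RF test signal, filter lengths, and cutoff frequencies are my assumptions, and which of the final sum/difference outputs yields which user depends on the BFO and Hilbert-transformer sign conventions chosen here.

```python
import math

fs, fc = 200_000, 80_000          # sample rate and carrier from the Figure 13 example
N = 8000

# Two co-channel users, as in Figure 9 (amplitudes/frequencies assumed):
# User# 1 sends a 3 kHz tone as USB (RF at 83 kHz); User# 2 sends a
# half-amplitude 1 kHz tone as LSB (RF at 79 kHz).
rf = [math.cos(2*math.pi*(fc + 3000)*n/fs)
      + 0.5*math.cos(2*math.pi*(fc - 1000)*n/fs) for n in range(N)]

# Complex down-conversion by the BFO: i(t) + jq(t) = rf(t)*e^(-j*wc*t)
i_t = [ x*math.cos(2*math.pi*fc*n/fs) for n, x in enumerate(rf)]
q_t = [-x*math.sin(2*math.pi*fc*n/fs) for n, x in enumerate(rf)]

def fir(x, h):
    """Plain FIR filtering (truncated direct convolution)."""
    return [sum(h[k]*x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

def lowpass_taps(fcut, fs, taps=201):
    """Hamming-windowed-sinc lowpass coefficients (a simple assumed design)."""
    m = taps//2
    h = []
    for k in range(taps):
        n = k - m
        s = 2*fcut/fs if n == 0 else math.sin(2*math.pi*fcut*n/fs)/(math.pi*n)
        h.append(s*(0.54 - 0.46*math.cos(2*math.pi*k/(taps - 1))))
    return h

lp  = lowpass_taps(6000, fs)      # removes the mixing images near 2*fc
i_d = fir(i_t, lp)[::10]          # then decimate by 10 down to 20 kHz
q_d = fir(q_t, lp)[::10]

# 201-tap FIR Hilbert transformer: h[n] = 2/(pi*n) for odd n, Hamming windowed
ht = 201
m = ht//2
hh = [0.0 if (k - m) % 2 == 0 else 2/(math.pi*(k - m)) for k in range(ht)]
hh = [c*(0.54 - 0.46*math.cos(2*math.pi*k/(ht - 1))) for k, c in enumerate(hh)]
q_h = fir(q_d, hh)
i_dly = [0.0]*m + i_d[:-m]        # matching m-sample Delay in the i(t) path

user1 = [a - b for a, b in zip(i_dly, q_h)]   # USB user's 3 kHz tone
user2 = [a + b for a, b in zip(i_dly, q_h)]   # LSB user's 1 kHz tone

def amp(x, f, fs):
    """Amplitude of the component of x at f Hz (DFT-style correlation)."""
    re = sum(v*math.cos(2*math.pi*f*n/fs) for n, v in enumerate(x))
    im = sum(v*math.sin(2*math.pi*f*n/fs) for n, v in enumerate(x))
    return 2*math.hypot(re, im)/len(x)

print(amp(user1[300:700], 3000, 20000), amp(user1[300:700], 1000, 20000))  # ~1.0, ~0
print(amp(user2[300:700], 1000, 20000), amp(user2[300:700], 3000, 20000))  # ~0.5, ~0
```

With these conventions, i(t) minus the Hilbert-transformed q(t) reinforces the USB user's tone while cancelling the LSB user's, and the sum does the reverse, just as the reinforce-or-cancel description above says.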

With DSP techniques enabling us to implement high performance, guaranteed linear-phase, Hilbert transformation, the phasing method of SSB demodulation has become popular in modern times.

LISTENING TO DONALD DUCK
You’ll notice that the phasing method of SSB demodulation assumes we have a BFO available in our receiver that’s identical in frequency and phase with the ωc oscillator in the SSB transmitter. If this is not the case, then our demodulated baseband signals may have both frequency and phase errors. Those potential errors can be described as follows:

Let's assume an SSB transmitter baseband signal contains a single sinusoid of cos(ωmt + φ). If the demodulator’s local BFO, the cos() and -sin() oscillator combination, has a frequency error of Δω radians/second and a phase error of θ radians, then the SSB demodulated baseband sinusoids will be

USB demod sinusoid = cos[(ωm - Δω)t + φ - θ],

and an LSB mode demodulated baseband signal will be

LSB demod sinusoid = cos[(ωm + Δω)t + φ + θ].

The origin of those expressions is given in Appendix C.
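The USB-mode expression is easy to check numerically. Below, a USB tone with baseband phase φ = 25° is demodulated with a BFO that is 50 Hz high in frequency and 40° off in phase (all values are arbitrary test choices); the recovered tone lands at exactly fm - Δf with phase φ - θ, as the formula predicts.

```python
import math, cmath

fs, fc, fm = 200_000, 80_000, 3_000      # Hz; illustrative values
phi   = math.radians(25)                 # transmitter's baseband phase
df    = 50.0                             # BFO frequency error, Hz
theta = math.radians(40)                 # BFO phase error

N = 4000    # 20 ms: a whole number of cycles of both 3000 Hz and 2950 Hz
rf    = [math.cos(2*math.pi*(fc + fm)*n/fs + phi) for n in range(N)]    # USB tone
bfo   = [math.cos(2*math.pi*(fc + df)*n/fs + theta) for n in range(N)]  # erroneous BFO
mixed = [a*b for a, b in zip(rf, bfo)]

def tone(x, f, fs):
    """Complex amplitude of the component of x at f Hz."""
    c = sum(v*cmath.exp(-2j*math.pi*f*n/fs) for n, v in enumerate(x))
    return 2*c/len(x)

c = tone(mixed, fm - df, fs)             # look where the text predicts: fm - df
print(abs(c))                            # ~0.5 (the mixing product's 1/2 factor)
print(math.degrees(cmath.phase(c)))      # ~ phi - theta = 25 - 40 = -15 degrees
```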

If Δω = 0, a constant phase error of θ radians over the demodulated baseband signal’s full frequency range is not a problem in voice communications. The human ear/brain combination can tolerate audio phase errors, so we can correctly interpret such demodulated speech signals. I’m not a digital communications guy but I imagine that a few degrees of BFO phase error would render any sort of digital phase-modulated baseband signal useless in an SSB receiver.

When θ = 0, a BFO’s Δω frequency error shifts the demodulated baseband signal in frequency, changing the pitch of demodulated speech. Figure 14(b) shows the situation where a BFO’s +Δω frequency error causes the positive- and negative-frequency components of the baseband signal to overlap at zero Hz. In this situation a +Δω frequency error greater than roughly 75 Hz to 100 Hz renders the demodulated voice baseband signal unintelligible.

Figure 14(c) shows the demodulated baseband spectrum when a BFO’s -Δω frequency error causes the positive- and negative-frequency components of the baseband signal to be shifted away from zero Hz. This distorts the harmonic relationships between baseband voice spectral components. In this scenario a -Δω frequency error greater than roughly 150 Hz to 200 Hz causes a demodulated voice baseband signal to sound like Donald Duck.

Intelligibility tests indicate that a Figure 14(c) BFO -Δω frequency error of less than, say, 150 Hz can be tolerated. The bottom line here is that using modern-day high-precision frequency synthesis techniques, the Δω error of receiver BFOs can be kept small, making SSB systems, with their narrow RF bandwidth requirement and transmission-power efficiency, quite useful for voice communications over radio links.

CONCLUSION
So now we know how the synchronous detection and phasing methods of SSB demodulation work. We'll leave the "Weaver method" of SSB demodulation, itself a form of quadrature processing, as a topic for another blog. The "filtering method", as far as I can tell, doesn't seem to be used in modern digital implementations of SSB communications systems. If you'd like to review the mathematics of SSB systems, I recommend you check out the Internet references [3] and [4].

ACKNOWLEDGMENTS
I say “Thanks” to Tauno Voipio and Mark Goldberg for explaining so much SSB theory to me. You guys rock! Without your help this blog would not exist.

REFERENCES
[1] http://en.wikipedia.org/wiki/Single-sideband_modulation
[2] R. Lyons, “Understanding Digital Signal Processing”, 2nd & 3rd Editions, Prentice Hall Publishing, Chapter 9.
[3] http://local.eleceng.uct.ac.za/courses/EEE3086F/notes/508-AM_SSB_2up.pdf
[4] http://www.ece.umd.edu/~tretter/commlab/c6713slides/ch7.pdf

APPENDIX A – GENERATING SSB SIGNALS
The phasing method of SSB generation is shown in Figure A-1(a), where m(t) is some generic baseband modulating signal. Some people call Figure A-1(a) a "Hartley modulator." A specific SSB generation example is given in Figure A-1(b). In that figure the baseband input is a single low-frequency analog cosine wave whose frequency is ωm radians/second. The output carrier frequency is ωc = 2π•80000 radians/second (80 kHz).

A real-world example of a DSP version of this SSB generation method is shown in Figure A-2, where interpolation is needed so that multiplication by the high-frequency oscillator signals does not cause spectral wrap-around errors, as would happen if no interpolation was performed.

The baseband input sequence m(n) had a one-sided bandwidth of 3 kHz, and the final SSB output carrier frequency is 9 MHz. The interpolation by 3000 was performed by a cascade of three interpolation stages (interpolation factors 15, 25, and 8), with each stage using CIC lowpass filters. The output sample rate was chosen to be 36 MHz so that the oscillators' cos() and sin() sequences were [1,0,-1,0,...] and [0,1,0,-1,...], which eliminated the need for high-frequency multiplication.
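Here's a quick numerical check of the Hartley modulator idea, using one common sign convention (the actual Figure A-1 convention may differ) and an assumed 3 kHz test tone: multiplying the baseband tone by cos(ωct) and its Hilbert transform by sin(ωct), then subtracting, cancels the lower sideband exactly and leaves only the 83 kHz upper sideband.

```python
import math

fs, fc, fm = 200_000, 80_000, 3_000   # Hz; 80 kHz carrier per the appendix, tone assumed
N = 4000
m_t = [math.cos(2*math.pi*fm*n/fs) for n in range(N)]   # baseband cosine tone
m_h = [math.sin(2*math.pi*fm*n/fs) for n in range(N)]   # its Hilbert transform

# Hartley modulator (one sign convention): m(t)cos(wc*t) - HT{m}(t)sin(wc*t).
# By the identity cos(a)cos(b) - sin(a)sin(b) = cos(a+b), only fc + fm remains.
ssb = [a*math.cos(2*math.pi*fc*n/fs) - b*math.sin(2*math.pi*fc*n/fs)
       for n, (a, b) in enumerate(zip(m_t, m_h))]

err = max(abs(x - math.cos(2*math.pi*(fc + fm)*n/fs)) for n, x in enumerate(ssb))
print(err)   # ~0: a pure 83 kHz USB tone; the 77 kHz lower sideband is cancelled
```

Flipping the subtraction to an addition selects the lower sideband instead.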

APPENDIX B – THE HILBERT TRANSFORM AS A TRANSFER FUNCTION
In the time domain, the Hilbert transform (HT) of a real-valued cosine wave is a real-valued sinewave of the same frequency. And the HT of a real-valued sinewave is a real-valued negative cosine wave of the same frequency. Stated in different words, in the time domain the HT of a real-valued sinusoid is another real-valued sinusoid of the same frequency whose phase has been shifted by -90° relative to the original sinusoid. We validate these statements as follows:

If we treat the HT as a frequency-domain H(ω) transfer function, its |H(ω)| magnitude response is unity as shown in Figure B-1(b).

The phase response of H(ω) is that shown in Figure B-1(c), which we can describe using

arg[H(ω)] = -90° for ω > 0, and arg[H(ω)] = +90° for ω < 0

where "arg" means the argument, or angle, of H(ω). This means that the HT of a real-valued cosine wave is

HT[cos(ωt)] = cos(ωt - 90°) = sin(ωt).

And the HT of a real-valued sinewave is

HT[sin(ωt)] = sin(ωt - 90°) = -cos(ωt).
A detailed description of the HT and techniques for designing digital Hilbert transformers are given in reference [2].
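Here's a small numerical confirmation, using an assumed Hamming-windowed FIR Hilbert-transformer design of the kind described in [2]: filtering a cosine through it yields, to within the filter's ripple, a sine of the same frequency, delayed by the filter's group delay.

```python
import math

# An assumed 63-tap Hamming-windowed FIR Hilbert transformer:
# ideal h[n] = 2/(pi*n) for odd n, 0 for even n, then windowed.
taps = 63
m = taps//2
h = [0.0 if (k - m) % 2 == 0 else 2/(math.pi*(k - m)) for k in range(taps)]
h = [c*(0.54 - 0.46*math.cos(2*math.pi*k/(taps - 1))) for k, c in enumerate(h)]

f_rel = 0.1          # test frequency as a fraction of the sample rate (midband)
N = 400
x = [math.cos(2*math.pi*f_rel*n) for n in range(N)]
y = [sum(h[k]*x[n - k] for k in range(taps) if 0 <= n - k < len(x))
     for n in range(N)]

# The output should be a sine at the same frequency, m samples later (group delay)
err = max(abs(y[n] - math.sin(2*math.pi*f_rel*(n - m))) for n in range(taps, N))
print(err)    # small, limited only by the FIR design's passband ripple
```

The m-sample group delay here is exactly why the upper path of Figure 13 needs its compensating Delay element.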

I'll briefly mention that there are three reasonable ways to depict the HT in block diagrams. Those ways are shown in Figure B-2, where signal xH(t) represents the HT of input x(t) signal.

Although I understand why an author might use it, I don't particularly like the Figure B-2(a) notation. I prefer the notation in Figure B-2(b). By the way, I encountered the interesting Figure B-2(c) depiction of the HT on a web page produced by a professor at the University of Maryland. (It shows the professor's inclination to describe things in strictly mathematical terms.)

APPENDIX C – THE EFFECT OF LOCAL BFO FREQUENCY AND PHASE ERRORS
Using the phrase "BFO" to represent our phasing method demodulator's cos() and -sin() oscillators, Figure C-1 shows the USB-mode demodulated output baseband signal under the conditions:

• The transmitter's baseband signal is a single cos(ωmt + φ) sinusoid,
• The transmitted USB RF signal is a cos[(ωc + ωm)t + φ] sinusoid,
• The demodulator's BFO has a frequency error of Δω radians/second and a phase error of θ radians.

If Δω = 0 and θ = 0, then the demodulated output signal would be the original cos(ωmt + φ) baseband signal.

Figure C-2 gives a graphical derivation of a demodulated LSB-mode output signal when frequency and phase errors exist in the local BFO. Notice that in the LSB-mode case, if an LSB transmitter's baseband signal contains a single sinusoid of cos(ωmt + φ), the transmitted RF LSB signal will be cos[(ωc - ωm)t - φ], having a negative initial phase angle.


Ravi
Said:
Superb article as usual. Eagerly waiting for the post on Weaver method of SSB demodulation.
3 years ago
cfelton
Said:
Rick,

Thanks for the thorough and well writing article. In appendix B you indicate a fifth reference. But in your reference list there are only four.
3 years ago
Rick Lyons
Replied:
Hello "Eagle-Eye" Chris,
That '[5]' in Appendix B should be a '[2]'. I'll have that typo
corrected. Thanks for pointing that out to me.

[-Rick Lyons-]
3 years ago
cfelton
Replied:
@rick Perfect! Ah the information is in my favorite DSP book. I always forget the range of information available in that great reference
3 years ago
Ravi72
Said:
In a dual-user scenario there would be overlap of spectra, how can we effectively separate sidebands of interest. Secondly, how do I combine real-imaginary to form a complex signal in Hardware ?
3 years ago
Rick Lyons
Said:
Hello Ravi72,
Sorry, I don't understand your 1st question. Perhaps you could be more specific and tell me which paragraph, or figure, in my blog you're referring to when you ask that 1st question.

Your 2nd question is a super valid question. Here's the best answer I have:
Thinking about a complex signal that is A+jB, you cannot go to your local electronics store and buy a 'j-operator' component and solder it to a printed circuit board to form a complex signal. Implementing the j-operator is a conceptual "definition" that everyone agrees to.

For example:
Let's say you build a circuit that generates two real valued signals, Signal_1 and Signal_2. Then you bring everyone into the lab and say, "I am going to define a complex signal. That complex signal is Signal_1 + jSignal_2. Does everyone understand?" They all nod their heads in agreement. Then you continue with, "From now on, my Signal_1 is the real part (in-phase part, East-West part) of my complex signal and my Signal_2 is the imaginary part (quadrature phase part, North-South part) of my complex signal. Does everyone agree to this?" Everyone in the room nods their heads in agreement.
By having everyone agree to your definitions of what are the real and imaginary parts of your complex signal you have implemented the j-operator.

Another way to view your 2nd question is: Let's say real-valued Signal_1 and real-valued Signal_2 are analog sinewaves of the same frequency but with 90-degree phase between them. Apply Signal_2 to the vertical input of an oscilloscope and apply Signal_1 to the external horizontal input of the oscilloscope. By doing this, the screen of the oscilloscope becomes your complex plane and the scope's electron beam position on the screen is the instantaneous value of your complex signal Signal_1 + jSignal_2. If the frequency of the two sinusoids was 2 Hz, the electron beam will orbit two revolutions/second around in a circle on the oscilloscope's screen. So the act of connecting the two cables to the oscilloscope was in itself implementing the j-operator.

Hope that helps,
[-Rick-]
3 years ago
Videogamer555
Said:
There is another way to accomplish this without the ridiculous (almost impossible to implement with discrete/computer math) construct called the Hilbert Transform. You can use instead the much simpler Intermediate frequency filter technique. First downconvert your radio signal to something that will fit in the frequency space of 0 to 24khz for a standard 48khz soundcard (or 0 to 96khz for a professional 192khz soundcard). Do this with the direct conversion method by multiplying the RF signal with your BFO's signal, then output this to the soundcard after proper attenuation. Record it to a WAV file. Then use GoldWave audio processing software and perform an FFT filter on it to remove the unwanted signal. Then again perform multiplication (this time with the software) between the signal and a sinewave at the proper frequency. Lastly, you will again use the FFT filter to remove the higher frequency signal generated in this multiplication process, and keep the lower frequency (baseband) signal.

Or write your own software to do this (but I recommend avoiding FFTs, as they are VERY VERY VERY difficult to write code to do), and use FIR Sinc filters in place of the FFT filters.

This is what I usually do for this kind of thing. I don't waste my time implementing Hilbert transforms and other nasty mathematical constructs for this arduous task of "quadrature", "complex", phase stuff. I just do the straight forward approach to removing unwanted nearby signals.
2 years ago
N2CQR
Said:
I think there are some really fundamental flaws in the presentation of SSB signals: Audio baseband signals coming from a microphone do not contain sidebands. It is just audio. The sidebands are created when the audio mixes with the RF carrier. And these sidebands appear above and below the carrier. So if your carrier were to be 7.2 MHz, and your audio tone was 1000 Hz, the sidebands would be at 7.201 and 7.199 MHz. Your diagrams show two carriers of 80KHz with two sets of sidebands around them. That doesn't happen. I think all the diagrams in figure 2 are wrong, as is the explanation of AM and SSB.
2 months ago
Rick Lyons
Replied:
Hello N2CQR. My Figure 2 is correct. In that figure, thanks to dead Swiss mathematician Herr Leonhard Euler, I'm using the interpretation that a real-valued signal contains both positive and negative spectral components. (Hopefully you've heard of 'Euler's equations'.) The explanation of why my spectral plots look the way they do is given in my long-winded blog at:
http://www.dsprelated.com/showarticle/192.php
2 months ago
Mochael Black
Said:
This is incredibly garbled.

All AM signals require mixing the sideband(s) down to audio.

With "AM", ie carrier with two sidebands, the carrier of the transmitted signal mixes with the sidebands on the receiver's detector to get back to "baseband".

With SSB, ie a single sideband with no carrier, the "carrier" has to be inserted at the receiver end, so the sideband can be mixed down to "baseband". With SSB, there is nothing to synchronize the reinserted carrier with the missing carrier.

Thus "synchronous detection" doesn't work with SSB. If the reinserted carrier is not in the right place, the result is odd sounding "baseband".

The only way to get "perfectly" tuned SSB is to either keep to very absolute frequencies, in both transmitter and receiver, or send the carrier from the transmitter (which wastes transmitter power, and causes heterodynes)

If you send both sidebands, the reinserted carrier at the receiver can be set just right because the two sidebands provide enough information. That's synchronized detection.

If both sidebands are sent, you cannot place the carrier properly unless you synchronize the reinserted carrier to the incoming signal. Mistuning the reinserted carrier means not only that the audio tones at "baseband" are in the wrong place, but each sideband (which are mirror images of each other) land in the wrong place at baseband so it's all comprehensible. A 1KHz tone at the transmitter should be at 1KHz at the receiver, but if the reinserted carrier at the receiver is mistuned, say by one Hertz, one sideband would convert to 1001KHz at baseband, and the other sideband to 999KHz.

One common trick is to convert the DSB signal to SSB at the receiver, using an SSB filter, so the receiver only sees an SSB signal. Though, sending both sidebands can be of value, if demodulated properly, the redundant sideband can improve reception (despite the cost of the wider bandwidth and the power used at the transmitter to send that extra sideband.

If the carrier is sent along with the sidebands, the reinserted carrier can be placed properly, either by syncing to the carrier from the transmitter, or from the two sidebands. There is use to this, if the carrier fades.

The filter method, the phasing method and the Weaver method are not about getting the signal back to baseband, but methods eliminating the unwanted sideband. Note that in the case of an SSB signal from the transmitter, it's about knocking out the interfering signal the other side of zero beat.

Mixing causes sidebands or images, whether in a superheterodyne or balanced modulator in an SSB transmitter, or getting signal back down to baseband. The terminology changes, even if it really is the same thing. You can use filters or the phasing method to get rid of the image in a receiver, or use the same thing to get rid of the unwanted sideband.

Michael
2 months ago
Rick Lyons
Said:
Michael, I regret that you're unhappy with my blog. I'll try harder in my future blogs to make you happy.
2 months ago
KC4AF
Said:
Rick, thanks for your Blog. I find it very informative. I believe the tutorial could be enhanced and made easier to understand by adding some oscilloscope displays of the RF signal. These displays would make it easier for folks to understand and follow. I have a math degree and I find it difficult to follow. The pictorials would help explain things. Your "Blog", which I prefer to call a tutorial would make a nice chapter in the ARRL handbook - or perhaps some other text book on SSB. Thanks again for the information. Bob Thomas
2 months ago