For the sake of discussion, let's say I sample a signal at a rate Fs, and the signal contains a spurious tone at 0.9*Fs. The signal first passes through a low-pass filter (LPF), whose roll-off reaches -30 dB at 0.9*Fs. The signal then continues through the system and eventually reaches a high-pass filter (HPF), whose roll-off reaches -30 dB at 0.1*Fs.
I'm looking for a reality check on how to model this system, to understand how an aliased spurious tone appears at the output. I can think of two approaches:
Approach 1: For the ease of mathematics, I'm tempted to simply model the system as a band-pass filter (BPF), whose high- and low-frequency cutoffs are defined by the LPF and HPF respectively. In this case, I *think* the spurious tone would simply be attenuated by 30 dB (from the LPF roll off) and appear (aliased) at 0.1*Fs in the output spectrum.
Approach 2: I could also model each filter separately, in the order the signal encounters them as it passes through the system. When the spurious tone passes the LPF, it aliases to 0.1*Fs with 30 dB less magnitude. Then, sometime later, it passes through the HPF and appears at the output at 0.1*Fs, another 30 dB down (for a total of 60 dB down from its original magnitude).
If there's no signal above Fs/2, I think there's no issue because both approaches give the same result. But if there IS energy above Fs/2, as discussed above, is approach 2 the correct way to model this?
Approach 2 is the correct one. There may be different stylistic approaches to arranging the math for the same result, but 60 dB of attenuation is the correct figure.
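As a quick numerical sanity check, here's a minimal numpy sketch. It doesn't model the actual filters, just the stated -30 dB figures and the aliasing at the sample instants:

```python
import numpy as np

n = np.arange(2048)                        # sample indices at rate Fs

# At the sample instants, a 0.9*Fs tone is indistinguishable from a
# 0.1*Fs tone: cos(2*pi*0.9*n) == cos(2*pi*(1 - 0.1)*n) == cos(2*pi*0.1*n).
x_spur = np.cos(2 * np.pi * 0.9 * n)
x_alias = np.cos(2 * np.pi * 0.1 * n)
print(np.allclose(x_spur, x_alias))        # True

# Approach 2: cascaded attenuations multiply, so the dB figures add.
lpf_db = -30.0                             # LPF response at 0.9*Fs (pre-alias)
hpf_db = -30.0                             # HPF response at 0.1*Fs (post-alias)
print(lpf_db + hpf_db)                     # -60.0
```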
Thanks, Tim. Let me dig a little deeper, if I may.
The example above is based on a clock (carrier) signal with frequency Fs. This clock is phase modulated with a tone at 0.9*Fs. By this I mean, if we look at the clock signal on a spectrum analyzer, there's the fundamental tone at Fs, and a spur at 1.9*Fs. However, if we look at the phase modulated signal alone, which rides on top of the clock, for example by down converting the clock to baseband, this modulation spectrum simply contains a spur at 0.9*Fs.
The LPF is an analog PLL, whose phase-detector (PD) samples the clock signal at its rising edges ONLY (e.g. the PD ignores the falling edges). Thus, the PD samples the modulated signal at Fs, and as usual, the PLL's filter is applied to the phase-modulated (baseband) input signal (as opposed to the carrier signal).
If you've followed my setup so far, then my confusion relates to how the PLL "sees" the spurious tone to filter it. Let me try to explain:
It's important to understand here that an FFT of the modulated signal that drives the PD (computed using both rising AND falling edges, i.e. sampled at 2*Fs) contains the spurious tone at 0.9*Fs.
However, the PD only "sees" rising edges, because it samples the modulated signal at Fs. So, my question: does the LPF of the PLL get applied to (A) the spectrum computed from both rising and falling edges, producing a spur at 0.1*Fs with a magnitude 30 dB down, or (B) the already-aliased spur (since this is the only signal the PD sees), which appears in the passband of its LPF, so the PLL simply outputs the aliased spur at 0.1*Fs at its ORIGINAL magnitude? Perhaps it's a question of whether the PLL filters the spur before or after it aliases?
Hopefully I'm explaining this clearly (let me know if not).
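To make the two sampling rates concrete, here's a small numpy sketch of what rising-edge-only sampling does to the spur (pure tones only, no PLL dynamics):

```python
import numpy as np

m = np.arange(2048)                   # indices at rate 2*Fs (both edges)
x2 = np.cos(2 * np.pi * 0.45 * m)     # 0.9*Fs tone = 0.45 cycles/sample at
                                      # 2*Fs: below Nyquist, so no alias yet
x1 = x2[::2]                          # keep rising edges only -> rate Fs
alias = np.cos(2 * np.pi * 0.1 * np.arange(x1.size))
print(np.allclose(x1, alias))         # True: at Fs the spur IS a 0.1*Fs tone
```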
If the clock signal truly shows up in a spectrum analyzer as a tone at \( 1 F_s \) and a tone at \( 1.9 F_s \), then when you sample it with the phase detector what comes out should have a "tone" at DC (from the \( 1 F_s \) component) and a tone at \( 0.1 F_s \) (from the \( 1.9 F_s \) component).
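A quick FFT check of that claim. The unit carrier amplitude and 0.1 spur amplitude below are arbitrary illustration values:

```python
import numpy as np

n = np.arange(1000)                    # sample instants at rate Fs
# Components at 1*Fs (carrier) and 1.9*Fs (spur), evaluated only at the
# Fs-rate sample instants.
x = np.cos(2 * np.pi * 1.0 * n) + 0.1 * np.cos(2 * np.pi * 1.9 * n)

X = np.abs(np.fft.rfft(x)) / len(n)    # single-sided magnitude (scaled)
freqs = np.arange(len(X)) / len(n)     # bin frequencies in units of Fs
peaks = freqs[X > 0.01]
print(peaks)                           # two components: DC and 0.1*Fs
```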
Yes, good catch. There's a tone at DC as well. Same question though: does the PLL see the spurious tone originally at 0.9*Fs, in the roll-off region of its LPF (and therefore attenuate it by 30 dB), or already aliased into its passband (and therefore not attenuate it)?
What confuses me is that the second filter in the original question filters the spurious tone at its aliased frequency (0.1*Fs). That makes me wonder: if the first filter is a PLL operating only on the rising edges, does it "see" the spurious tone at 0.1*Fs, in which case the tone passes through its passband, or does the PLL still see the tone at 0.9*Fs and attenuate it by 30 dB?
Ah. I see your problem. You are trying to model the PLL as a linear, time-invariant lowpass filter, which it isn't. Because of the mixing action of the phase detector, it's time-varying.
The PLL is going to "see" the tone at \( 0.1 F_s \).
The way to work this out on your own is to block diagram the whole system, while paying attention to your assumptions (like assuming that you can treat the PLL as a linear time-invariant low-pass filter for everything that might be riding on the clock phase).
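For example, a minimal phase-domain sketch: once you model the PD as an Fs-rate sampler, the loop filter acts on a sequence in which the spur is ALREADY a 0.1*Fs tone. The one-pole filter and the corner around 0.2*Fs below are arbitrary stand-ins, not your actual loop filter:

```python
import numpy as np

# Hypothetical one-pole loop LPF with its corner around 0.2*Fs, chosen
# arbitrarily for illustration.
fc = 0.2
a = np.exp(-2 * np.pi * fc)

def H(f):
    """Response of H(z) = (1 - a) / (1 - a * z**-1) at frequency f (units of Fs)."""
    z = np.exp(2j * np.pi * f)
    return (1 - a) / (1 - a / z)

# Evaluate at the ALIASED spur frequency, 0.1*Fs, because that is where
# the spur sits by the time the loop filter acts on it.
gain_db = 20 * np.log10(abs(H(0.1)))
print(round(gain_db, 1))   # a fraction of a dB down: it's in the passband
```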
Can you help me understand how the time-variant nature of a PLL translates to seeing the tone at 0.1*Fs?
"Can you help me understand how the time-variant nature of a PLL translates to seeing the tone at 0.1*Fs?"
Because your phase detector is acting like a sampler, and samplers alias. Feed a sampler running at Fs a tone at 0.9*Fs, or at 1.9*Fs, and what comes out is a tone at 0.1*Fs.
I think that you really need to draw a block diagram of your PLL and ask yourself what each block does in frequency-domain terms.
Great, thank you Tim!
This may not relate exactly to your problem, but take a look at this article. It shows how to plot the image response of a decimator. This method automatically provides the image rejection with no hand calculations required.