DSPRelated.com
Forums

FM demodulation at high frequency - technology limitations?

Started by Bernhard Holzmayer February 1, 2007
Hi fellows,

since I appreciate your opinion and expertise very much, I'd like to launch
some brainstorming around a future project we're planning. This is the idea:

We have a sine wave signal @ 1 MHz which carries analog modulation and -
naturally - lots of unwanted noise etc.
The frequency of the unmodulated signal is known; the phase, however, isn't
(it might vary, but can be assumed stable, though unknown).
We're interested in the analog modulation, which ranges from DC...20 kHz,
so it's like stereo audio on an IF carrier?!

Current products do I/Q demodulation in the analog realm, then feed the
"audio" signal into a high-quality stereo ADC (which achieves around 100 dB
dynamic range, which we need). Signal processing is then done in a DSP.

We found that the demodulator is rather sensitive to phase errors,
even steady offsets, but most of all to phase jitter (must be < 10 ps).

Now the conversion shall be done at the carrier frequency.
This implies that demodulation has to be done in the digital realm,
ideally in an FPGA, and triggered by the frequency of the sine wave.

Can anybody give me an idea whether there is any chance to achieve a
demodulated signal with the required dynamic range - or is it too far
beyond today's technology?

The idea is to sample the signal at a high rate, 10...70 MS/s (not yet
decided), since we don't know the phase relations, and we expect to gain
more phase stability by using a higher sample rate.
Sampling will certainly be triggered by the sine wave's frequency or a
multiple of it (we have the required reference signal).
An FPGA should then take the signal, perform the demodulation and pass
the result to a DSP for signal processing on the "audio" stream.

Now my questions:
1) Will sampling at a higher sampling rate have any advantage for the
demodulation, or would sampling at 4 times the signal frequency
(0°, 90°, 180°, 270°) produce the same quality (provided a converter
resolution according to the frequency)? (See the sketch after this list.)
2) Will demodulation in the digital realm be achievable with today's FPGAs
(e.g. Xilinx SPARTAN), with good enough SNR?
3) What about phase issues? (10ps sounds terribly small.)
4) Is a dynamic range of 100dB (on the low freq. output stream) feasible?
5) If we had 2 sine waves (1MHz, 5MHz) in the same signal, each modulated,
is it possible to achieve both demodulations if we don't separate the
signals before conversion?
(This makes sense if we have more than two frequencies where parallel paths
would become expensive, or if this had to be decided/selected at run time).
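
To make question 1 a bit more concrete: with fs exactly 4 times the carrier,
the local oscillator sequence degenerates to 1, -j, -1, +j, so the I/Q split
needs no real multipliers. A rough numpy sketch of that scheme (the signal
parameters are only illustrative) would be:

import numpy as np

def fs4_downconvert(x):
    """Quadrature downconversion when fs is exactly 4x the carrier.
    The local oscillator e^(-j*pi*n/2) cycles through 1, -j, -1, +j,
    so mixing reduces to sign flips and I/Q swaps (no multipliers)."""
    lo = np.tile([1.0, -1.0j, -1.0, 1.0j], len(x) // 4 + 1)[:len(x)]
    return x * lo  # complex baseband; still needs lowpass filtering/decimation

# Illustrative test: a 1 MHz carrier with a little low-frequency phase
# modulation, sampled at 4 MS/s.
fs, fc = 4e6, 1e6
t = np.arange(0, 1e-3, 1 / fs)
x = np.cos(2 * np.pi * fc * t + 0.2 * np.sin(2 * np.pi * 1e3 * t))
bb = fs4_downconvert(x)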

I'm very interested in your responses.
Best regards - and sorry for the long posting.

Bernhard

The direct conversion from RF to I/Q is an inherently bad idea. It
guarantees poor performance. It does not really matter whether
it is done in the digital or the analog domain. The bottlenecks are the
nonlinearity of the mixer, the DC offset, the AC noise, the modulation
of your own heterodyne by all kinds of nonlinear devices around, and the
influence from one device to another. In other words, there is no
way to make candy from a turd.



Bernhard Holzmayer wrote:

> 1) Will sampling at a higher sampling rate have any advantage for the
> demodulation, or would sampling at 4 times the signal frequency
> (0°, 90°, 180°, 270°) produce the same quality (provided a converter
> resolution according to the frequency)?

It depends.

> 2) Will demodulation in the digital realm be achievable with today's FPGAs
> (e.g. Xilinx SPARTAN), with good enough SNR?

It depends.

> 3) What about phase issues? (10ps sounds terribly small.)

It depends.

> 4) Is a dynamic range of 100dB (on the low freq. output stream) feasible?

The dynamic range is determined by the processing at the input.

> 5) If we had 2 sine waves (1MHz, 5MHz) in the same signal, each modulated,
> is it possible to achieve both demodulations if we don't separate the
> signals before conversion?

It depends.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com

Vladimir Vassilevsky wrote:
 >
 > The direct conversion from RF to I/Q is an inherently bad idea. It
 > guarantees poor performance. It does not really matter whether
 > it is done in the digital or the analog domain. The bottlenecks are the
 > nonlinearity of the mixer, the DC offset, the AC noise, the modulation
 > of your own heterodyne by all kinds of nonlinear devices around, and the
 > influence from one device to another. In other words, there is no
 > way to make candy from a turd.

Sure there is - it's called Tootsie Roll. ;)

RF->IQ isn't fundamentally flawed. It works. I've done it. Hundreds of 
products based on my implementation are working fine in the field, and 
my office phone isn't ringing.

Here's the process:

- If you have to, bandpass filter your source RF signal to eliminate 
harmonics and other ugliness.
- Sample RF with a high speed ADC (e.g., TI ADS55x2 family) fed with a 
high quality, quartz-derived sample clock.
- Feed the digitized RF into an FPGA (e.g., Xilinx Spartan3)
- Mix (complex multiply) the RF signal with a local quadrature DDS, 
yielding high speed I/Q.
- Downsample (CIC->FIR) the I/Q result to a usable bandwidth for 
subsequent DSP demodulation or whatever.
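
In software terms, that chain boils down to something like the following
rough numpy/scipy model. A single FIR stands in here for the CIC plus
compensation FIR pair, and the sample rates are only illustrative:

import numpy as np
from scipy import signal

def digital_downconvert(x, fs, fc, decim):
    """Rough model of the chain above: quadrature DDS mix, lowpass, decimate.
    In the FPGA the lowpass/decimation would be a CIC followed by a
    compensating FIR, usually in more than one stage; a single FIR stands
    in for all of that here."""
    n = np.arange(len(x))
    lo = np.exp(-2j * np.pi * fc * n / fs)   # local quadrature DDS
    bb = x * lo                              # complex mix: high-rate I/Q
    taps = signal.firwin(127, 0.8 / decim)   # anti-alias lowpass (cutoff
                                             # normalized to Nyquist)
    bb = signal.lfilter(taps, 1.0, bb)
    return bb[::decim]                       # narrowband I/Q at fs / decim

# Illustrative usage: 1 MHz carrier sampled at 40 MS/s, decimated by 16
# to 2.5 MS/s of complex I/Q (further decimation would follow).
fs, fc = 40e6, 1e6
t = np.arange(0, 2e-3, 1 / fs)
x = np.cos(2 * np.pi * fc * t)               # stand-in for the ADC samples
iq = digital_downconvert(x, fs, fc, decim=16)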

There are a couple of catches - DC offset on your RF ADC will be modulated 
into a spurious tone by your channel selection DDS. But that's not too 
hard to fix - I've used a first order IIR highpass on the incoming ADC 
samples to cut out the DC before the mixing stage.
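
That DC blocker can be a simple one-pole, one-zero highpass; a quick numpy
sketch (the pole position alpha is just an example value):

import numpy as np

def dc_block(x, alpha=0.999):
    """First-order IIR highpass: y[n] = x[n] - x[n-1] + alpha * y[n-1].
    With alpha close to 1 the corner sits very near DC (roughly
    fs * (1 - alpha) / (2 * pi)), so only the ADC offset is removed."""
    y = np.empty(len(x))
    prev_x, prev_y = 0.0, 0.0
    for n, xn in enumerate(x):
        prev_y = xn - prev_x + alpha * prev_y
        prev_x = xn
        y[n] = prev_y
    return y

# Example: removes a constant 0.1 offset from a test tone.
n = np.arange(10000)
clean = dc_block(np.sin(0.02 * np.pi * n) + 0.1)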

You'll also want to read up on CIC filters - they're not perfect and may 
require amplitude compensation. Be sure to overlay the compensating and 
CIC responses on top of each other to confirm you're attenuating 
what you want to.
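
For that overlay check, something along these lines does the job in
numpy/scipy; the CIC parameters and the retained band edge are only
example values:

import numpy as np
from scipy import signal

N, R, M = 4, 16, 1   # example CIC: stages, decimation ratio, differential delay

def cic_mag(f_in):
    """CIC magnitude response vs. frequency normalized to the input rate."""
    f_in = np.asarray(f_in, dtype=float) + 1e-12   # avoid 0/0 at DC
    return np.abs(np.sin(np.pi * f_in * R * M) /
                  (R * M * np.sin(np.pi * f_in))) ** N

# Design an inverse-sinc compensator at the decimated rate (1.0 = Nyquist
# there), flattening the band kept after decimation and killing the rest.
f_dec = np.linspace(0.0, 1.0, 128)
gain = np.where(f_dec < 0.8, 1.0 / cic_mag(f_dec / (2 * R)), 0.0)
comp_taps = signal.firwin2(63, f_dec, gain)

# Overlay: cascade of CIC droop and compensator across the retained band.
w, comp = signal.freqz(comp_taps, worN=1024, fs=2.0)   # w in [0, 1)
cascade_db = 20 * np.log10(cic_mag(w / (2 * R)) * np.abs(comp) + 1e-12)
# cascade_db should be roughly flat below 0.8 and fall away above it.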

There you go.

GM

Gary Marsh wrote:

> > The direct conversion from RF to I/Q is an inherently bad idea. It
> > guarantees poor performance.
>
> RF->IQ isn't fundamentally flawed.

The dynamic range sucks. The input is going to be overwhelmed by the
2nd-order nonlinearity products from the adjacent channels.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Hi Gary, Vladimir 

thanks for your inputs.
I'll think it over...

Bernhard



Vladimir Vassilevsky <antispam_bogus@hotmail.com> wrote:

> The dynamic range sucks. The input is going to be overwhelmed by the
> 2nd-order nonlinearity products from the adjacent channels.

It does not sound like a communications project that Bernhard is trying
to solve... so I guess there will be no adjacent channel; that is the
stuff we radio/comms guys have to cope with.
Ralph A. Schmid, DK5RAS wrote:

> Vladimir Vassilevsky <antispam_bogus@hotmail.com> wrote:
>
>> The dynamic range sucks. The input is going to be overwhelmed by the
>> 2nd-order nonlinearity products from the adjacent channels.
>
> It does not sound like a communications project that Bernhard is trying
> to solve... so I guess there will be no adjacent channel; that is the
> stuff we radio/comms guys have to cope with.
You're right, Ralph. We're measuring the impact of surface effects on
electromagnetic transmission. The goal is to assess the quality of the
material (NDT). It just happens that the modulation, which carries the
desired information, is so much like an audio signal that we can benefit
from the technological efforts in that range. Only the fact that our range
goes down to DC sucks!

Therefore it's very enticing to overcome this handicap by digitizing the
modulated carrier, doing the demodulation in the digital realm, and thus
avoiding having to deal with DC voltages. However, the uncertainty
principle makes me suspect that every advantage comes with a trade-off
which might hurt us somewhere else...

Bernhard