# Spectral Purity Measurement

Started by rickman, December 19, 2014
```I want to analyze the output of a DDS circuit and am wondering if an FFT
is the best way to do this.  I'm mainly concerned with the "close in"
spurs that are often generated by a DDS.  My analysis of the errors
involved in the sine generation is that they will be on the order of 1
ppm which I believe will be -240 dBc.  Is that right?  Sounds far too
easy to get such good results.  I guess I'm worried that it will be hard
to measure such low levels.

Any suggestions?  I'll be coding both the implementation and the
measurement code.  The implementation will be synthesizable and the
measurement code will not.  I'm thinking a fairly large FFT, 2048 or
maybe 4096 bins in floating point.

--

Rick
```
```Maybe this work from AD will help:

http://www.analog.com/static/imported-files/tutorials/450968421DDS_Tutorial_rev12-2-99.pdf

kaz

_____________________________
Posted through www.DSPRelated.com
```
```>> of 1 ppm which I believe will be -240 dBc

shouldn't that be 20*log10(1e-6) = -120 dBc ?
For comparison, double precision floating point achieves around -300 dBc,
53 bits * 6 dB/bit, give or take some.

What I'd do is take a full cycle of the steady state output, correlate with
sine and cosine of the known frequency (i.e. FFT and pick the correct bin),
reconstruct and subtract. The remaining signal energy is unwanted.

6 dB/bit + 1.76 dB is the theoretical SNR limit across the whole
bandwidth for a sine wave, and less if dithering is involved.

_____________________________
Posted through www.DSPRelated.com
```
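mnentwig's correlate-reconstruct-subtract measurement can be sketched in a few lines of NumPy (a sketch only, not anyone's actual measurement code; it assumes a coherent capture with exactly k cycles in the record, and the 16-bit quantized test signal is an arbitrary example):

```python
import numpy as np

def residual_dbc(x, k):
    """Remove the fundamental in FFT bin k from signal x and return
    the remaining (unwanted) energy relative to the carrier, in dBc.
    Assumes a coherent capture: exactly k cycles in len(x) samples."""
    n = len(x)
    X = np.fft.rfft(x)
    carrier = X[k]                      # complex amplitude of the wanted tone
    # reconstruct the fundamental in the time domain and subtract it
    t = np.arange(n)
    fund = (2.0 / n) * np.abs(carrier) * np.cos(
        2 * np.pi * k * t / n + np.angle(carrier))
    resid = x - fund
    return 10 * np.log10(np.sum(resid**2) / np.sum(fund**2))

# Example: a 16-bit quantized sine, 64 cycles in 4096 samples.
n, k = 4096, 64
x = np.round(np.sin(2 * np.pi * k * np.arange(n) / n) * 32767) / 32767
level = residual_dbc(x, k)
print(round(level, 1))   # around -98 dBc, the 16-bit quantization limit
```

Everything left after the subtraction -- spurs, quantization noise, leakage -- counts against the budget, which is exactly the point of the method.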
```On Fri, 19 Dec 2014 10:10:57 -0600, "mnentwig" <24789@dsprelated>
wrote:

>>> of 1 ppm which I believe will be -240 dBc
>
>shouldn't that be 20*log10(1e-6) = -120 dBc ?
>For comparison, double precision floating point achieves around -300 dBc,
>53 bits * 6 dB/bit, give or take some.
>
>What I'd do is take a full cycle of the steady state output, correlate with
>sine and cosine of the known frequency (i.e. FFT and pick the correct bin),
>reconstruct and subtract. The remaining signal energy is unwanted.
>
>6 dB / bit + 1.77 dB is the theoretical SNR limit across the whole
>bandwidth for a sine wave, and less if dithering is involved.

For a DDS you really want a long output sample, of many cycles, to get
a lot of averaging on the spurs, wherever they may be.  These days
doing long FFTs, much longer than the 2k or 4k proposed by the OP, is
not difficult and should be strongly considered.   The longer the FFT
the more processing gain, which is what you want to be able to see the
low-level spurs.

The paper kaz linked is good, but there are better ones out there that
include spur reduction techniques and how to test for them.   Between
ADI, TI, Qualcomm, et al, there are a lot of good white papers,
tutorials, etc., to get a good idea of what you may need, both for
implementation and verification.

But, yeah, -120 dBc or -240 dBc is aggressive unless you have a lot of
output bits.

There are some spur reduction techniques that provide very good bang
for the buck, but some are application or implementation dependent.
Having a really good understanding of trigonometry helps, as does
understanding of the details of numerical precision and error sources,
etc.   It's not as hard as it may sound to make a very clean DDS, but
tracking down the sources of the remaining spurs can be trickier than
actually getting the DDS working well.  ;)

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
```
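Eric's point about processing gain can be illustrated with a quick NumPy experiment (illustrative only; the -80 dBFS white-noise level and the record lengths are arbitrary assumptions). The per-bin noise floor drops roughly 3 dB every time the record length doubles, so a fixed spur stands further out of the floor in a longer FFT:

```python
import numpy as np

rng = np.random.default_rng(1)

def median_floor_dbc(n):
    """Median per-bin floor, in dBc, for a full-scale tone plus white
    noise at -80 dBFS, captured coherently (n/8 cycles in n samples)."""
    k = n // 8
    t = np.arange(n)
    x = np.sin(2 * np.pi * k * t / n) + 1e-4 * rng.standard_normal(n)
    X = np.abs(np.fft.rfft(x)) / (n / 2)    # carrier bin normalizes to ~1
    X[k] = 0                                # exclude the carrier itself
    return 20 * np.log10(np.median(X[1:]))

for n in (2048, 16384, 131072):
    print(n, round(median_floor_dbc(n), 1))
```

Going from 2k to 128k points buys about 18 dB (10*log10(64)) of processing gain against the noise floor, while discrete spurs keep their level.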
```On Fri, 19 Dec 2014 10:06:50 -0500, rickman wrote:

> I want to analyze the output of a DDS circuit and am wondering if an FFT
> is the best way to do this.  I'm mainly concerned with the "close in"
> spurs that are often generated by a DDS.  My analysis of the errors
> involved in the sine generation is that they will be on the order of 1
> ppm which I believe will be -240 dBc.  Is that right?  Sounds far too
> easy to get such good results.  I guess I'm worried that it will be hard
> to measure such low levels.
>
> Any suggestions?  I'll be coding both the implementation and the
> measurement code.  The implementation will be synthesizable and the
> measurement code will not.  I'm thinking a fairly large FFT, 2048 or
> maybe 4096 bins in floating point.

If you mean a real circuit and not an FPGA configuration, and if you have
any analog components in there, then you need to measure the thing with a
spectrum analyzer.  No spectrum analyzer in the world has a 240dB dynamic
range, so you'd need to notch out the carrier with something absurdly deep
and narrow-band, like a crystal filter.  Measuring spurs down to that
level would be a significant challenge for an experienced RF engineer -- I
don't know that I could, or if I'd trust my results without double-
checking from someone who did it every day.

Even if you're measuring this numerically I think you need to do some
careful and close analysis of whatever method you choose.

An FFT that short will only be good to -240dBc if it collects an exact
integer number of cycles of the waveform -- if it collects more or less,
the artifacts from truncating the series will overwhelm any real effects.

-240dBc implies 40 bits of precision, so you'll need to be sure that the
error build-up in your FFT (or whatever) doesn't exceed that.  You're
talking a 12-stage FFT, and double-precision floating point has a 52-bit
mantissa, so if everything stacks up wrong you've just blown your error
budget.  Such errors tend to be smeared out rather than to build up -- but
you need to check with analysis to be sure.

If you can, it may be best to generate a file of DDS outputs, and then do
the analysis in some separate package like Scilab, Octave or Matlab.  Even
there, however, I would be concerned about the needed precision, and I'd
seriously consider finding an FFT package that is, or can be compiled to,
extended or quad precision.

All of this really makes me want to ask _why_ -- if you're working in some
application where you need to keep your DDS that spectrally pure, then
chances are good that even with an absolutely perfect DDS, you're already
screwed.  You may want to review how well this thing is going to work when
your input signal has noise, and has the inevitable distortion that comes
from being measured by analog components.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
```
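Tim's warning about capturing an exact integer number of cycles is easy to demonstrate (a sketch; the 4096-point size and the 0.3-cycle offset are arbitrary choices):

```python
import numpy as np

n = 4096
t = np.arange(n)

def level_near_carrier(cycles, offset_bins=5):
    """Spectrum level a few bins away from the carrier, in dBc."""
    x = np.sin(2 * np.pi * cycles * t / n)
    X = np.abs(np.fft.rfft(x)) / (n / 2)    # carrier bin normalizes to ~1
    return 20 * np.log10(X[int(cycles) + offset_bins])

coherent = level_near_carrier(64)    # exactly 64 cycles in the record
leaky = level_near_carrier(64.3)     # 0.3 cycle off: leakage skirts
print(round(coherent, 1), round(leaky, 1))
```

With a coherent capture the bins near the carrier sit down at double-precision numerical noise; with a 0.3-cycle truncation error the leakage skirts sit only a few tens of dB below the carrier and would bury any spur at the levels being discussed.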
```>> For a DDS you really want a long output sample, of many cycles, to get
a lot of averaging on the spurs

Agree.
Now if the _spurs_ are known to be periodic (discrete lines), there is some
cycle length involved, a least common multiple of the contributing periods.
For example, if a simple modulo phase accumulator is guaranteed to be
bitwise identical every n cycles, looking at one period of the spurs gives
me all the information.
In this case, the output signal (wanted and spurs) is strictly bit-wise
periodic.

In a non-trivial design this approach may be cumbersome / impossible. If
so, the longer the observed time interval, the better.

_____________________________
Posted through www.DSPRelated.com
```
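The bit-exact repeat length mnentwig describes is easy to compute for a simple modulo-2**N phase accumulator (a sketch; the tuning words below are arbitrary examples):

```python
from math import gcd

def repeat_length(ftw, acc_bits):
    """Clock cycles until a modulo-2**acc_bits phase accumulator driven
    by frequency tuning word ftw repeats bit-exactly.  The whole output
    (wanted tone and all deterministic spurs) has this period."""
    m = 1 << acc_bits
    return m // gcd(ftw, m)

print(repeat_length(0x1000, 32))   # power-of-two word: short period, 2**20
print(repeat_length(0x1001, 32))   # odd word: full 2**32-cycle period
```

An odd tuning word exercises every accumulator state, so one full period can be astronomically long -- which is why the "capture one exact period" approach is often impractical and a long-but-partial capture is used instead.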
```On Fri, 19 Dec 2014 13:32:21 -0600, "mnentwig" <24789@dsprelated>
wrote:

>>> For a DDS you really want a long output sample, of many cycles, to get
>a lot of averaging on the spurs
>
>Agree.
>Now if the _spurs_ are known to be periodic (discrete lines), there is some
>cycle length involved, some smallest common multiple.
>For example, if a simple modulo phase accumulator is guaranteed to be
>bitwise identical every n cycles, looking at one period of the spurs gives
>me all the information.
>In this case, the output signal (wanted and spurs) is strictly bit-wise
>periodic.
>
>In a non-trivial design this approach may be cumbersome / impossible. If
>so, the longer the observed time interval, the better.

That's part of why the observation time needs to be long;  the phase
accumulator is essentially an address generator into a polyphase
representation of a sine LUT.   Depending on the source(s) of the
spur(s), the periodicity may depend on the realized phases in the LUT
or the frequency of the repetition in the LUT addresses, or other
things.   The effects can be very subtle when looking for the sources
of the low spurs, but getting a long observation helps to see them
both for the processing gain as well as likelihood of detection.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
```
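One concrete source of the LUT-addressing spurs Eric describes is phase truncation. A minimal sketch (assumed parameters: 32-bit accumulator, 12-bit LUT address, an ideal full-precision sine LUT, and a tuning word chosen as a multiple of 2**16 so a 2**16-sample capture is coherent):

```python
import numpy as np

acc_bits, lut_bits = 32, 12
ftw = 0x10010000            # multiple of 2**16: 4097 cycles in 2**16 samples
n = 1 << 16

phase = (np.arange(n, dtype=np.uint64) * ftw) % (1 << acc_bits)
addr = phase >> (acc_bits - lut_bits)     # phase truncation happens here
x = np.sin(2 * np.pi * addr.astype(float) / (1 << lut_bits))

X = 20 * np.log10(np.abs(np.fft.rfft(x)) / (n / 2) + 1e-300)
carrier = int(np.argmax(X))
X[carrier] = -300.0
worst_spur = X.max()        # roughly -6 dB per LUT address bit, ~ -72 dBc here
print(carrier, round(worst_spur, 1))
```

The sine samples themselves are exact here, so every spur in the plot comes from the truncated phase bits alone -- the deterministic, close-in lines the thread is concerned with.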
```-240 dBc is a very low signal level and will be below the noise floor of
the environment being tested in. With a good spectrum analyser you may
get down to -160 dBm. Are you really sure about the power level?

Compare with
http://www.rohde-schwarz.co.uk/en/product/fsu-productstartpage_63493-7993.html

On 19/12/14 15:06, rickman wrote:
> I want to analyze the output of a DDS circuit and am wondering if an FFT
> is the best way to do this.  I'm mainly concerned with the "close in"
> spurs that are often generated by a DDS.  My analysis of the errors
> involved in the sine generation is that they will be on the order of 1
> ppm which I believe will be -240 dBc.  Is that right?  Sounds far too
> easy to get such good results.  I guess I'm worried that it will be hard
> to measure such low levels.
>
> Any suggestions?  I'll be coding both the implementation and the
> measurement code.  The implementation will be synthesizable and the
> measurement code will not.  I'm thinking a fairly large FFT, 2048 or
> maybe 4096 bins in floating point.
>

```
```Andy Botterill wrote:
> -240dbc is a very low signal level and will be below the noise floor of
> the environment being tested in. With a good spectrum analyser you may
> get down to -160dbm. Are you really sure about the power level.
>
> Compare with
> http://www.rohde-schwarz.co.uk/en/product/fsu-productstartpage_63493-7993.html
>
>
> On 19/12/14 15:06, rickman wrote:
>> I want to analyze the output of a DDS circuit and am wondering if an FFT
>> is the best way to do this.  I'm mainly concerned with the "close in"
>> spurs that are often generated by a DDS.  My analysis of the errors
>> involved in the sine generation is that they will be on the order of 1
>> ppm which I believe will be -240 dBc.  Is that right?  Sounds far too
>> easy to get such good results.  I guess I'm worried that it will be hard
>> to measure such low levels.
>>
>> Any suggestions?  I'll be coding both the implementation and the
>> measurement code.  The implementation will be synthesizable and the
>> measurement code will not.  I'm thinking a fairly large FFT, 2048 or
>> maybe 4096 bins in floating point.
>>
>

Are decibels used differently for dBc than for other usages?  I would
have thought that 6 orders of magnitude (1 ppm) was -120 dB not -240 dB
20 * log10 (10**-6) = 20 * -6 = -120

--
Gabor
```
```On Fri, 19 Dec 2014 17:22:14 -0500, GaborSzakacs wrote:

> Andy Botterill wrote:
>> -240dbc is a very low signal level and will be below the noise floor of
>> the environment being tested in. With a good spectrum analyser you may
>> get down to -160dbm. Are you really sure about the power level.
>>
>> Compare with
>> http://www.rohde-schwarz.co.uk/en/product/fsu-
productstartpage_63493-7993.html
>>
>>
>> On 19/12/14 15:06, rickman wrote:
>>> I want to analyze the output of a DDS circuit and am wondering if an
>>> FFT is the best way to do this.  I'm mainly concerned with the "close
>>> in" spurs that are often generated by a DDS.  My analysis of the
>>> errors involved in the sine generation is that they will be on the
>>> order of 1 ppm which I believe will be -240 dBc.  Is that right?
>>> Sounds far too easy to get such good results.  I guess I'm worried
>>> that it will be hard to measure such low levels.
>>>
>>> Any suggestions?  I'll be coding both the implementation and the
>>> measurement code.  The implementation will be synthesizable and the
>>> measurement code will not.  I'm thinking a fairly large FFT, 2048 or
>>> maybe 4096 bins in floating point.
>>>
>>>
>>
> Are decibels used differently for dBc than for other usages?  I would
> have thought that 6 orders of magnitude (1 ppm) was -120 dB not -240 dB
> 20 * log10 (10**-6) = 20 * -6 = -120

No, Rick made an arithmetic mistake, or he doubled his dB twice.  And I
didn't notice in my posting where I went on and on about the difficulty of
verifying -240dBc, and the uselessness thereof.  (-120dBc is still
exceedingly hard to achieve in analog-land, and not necessarily useful in
digital-land unless your goal is to be so damned good that you never have