Forums

fft frequency and phase resolution.

Started by Unknown July 21, 2015
hey guys,

This should be a simple question for the pros here. In an ideal situation, a noise-free sinusoid of 2 V amplitude and 2 degrees phase at 10 MHz is sampled at 50 MSa/s. I use the Goertzel algorithm to find the magnitude and phase of this 10 MHz tone from 500 samples. What is the maximum precision of the magnitude and phase calculated using this method, i.e. how close will the magnitude and phase estimates be to their true values of 2 V and 2 degrees?

Another question pestering me is the smallest change in magnitude and phase that can be detected using the above method, i.e. if the signal deviates from 2 V and 2 degrees, what is the smallest deviation the Goertzel algorithm can detect from 500 samples? Thanks very much.

Best wishes
Zoul
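[For concreteness, here is a minimal sketch of the setup being asked about: the standard Goertzel recurrence run over the 500 noise-free samples, in double precision, so it illustrates the ideal case rather than a fixed-point implementation. The scaling by 2/N to recover peak amplitude assumes the tone falls exactly on a bin, which it does here.]

```python
import math

fs = 50e6                 # sample rate, 50 MSa/s
f0 = 10e6                 # tone frequency, 10 MHz
N = 500                   # number of samples
A = 2.0                   # true amplitude, volts
phi = math.radians(2.0)   # true phase, 2 degrees

# Noise-free samples of the sinusoid.
x = [A * math.cos(2 * math.pi * f0 * n / fs + phi) for n in range(N)]

# Goertzel recurrence for bin k = N*f0/fs (exactly 100 here, so the
# tone falls dead-center on a DFT bin).
k = round(N * f0 / fs)
w = 2 * math.pi * k / N
coeff = 2 * math.cos(w)
s1 = s2 = 0.0
for v in x:
    s1, s2 = v + coeff * s1 - s2, s1

# Convert the two state variables to the complex bin value, then scale
# by 2/N to read off the peak amplitude directly.
re = math.cos(w) * s1 - s2
im = math.sin(w) * s1
mag = 2 * math.hypot(re, im) / N          # recovers ~2.0 V
phase = math.degrees(math.atan2(im, re))  # recovers ~2.0 degrees
```

In double precision the estimates match the true 2 V and 2 degrees to roughly machine precision, which anticipates the answers below: the algorithm itself adds no error.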
On Tue, 21 Jul 2015 16:37:07 -0700, zoulzubazz wrote:

> hey guys,
>
> This should be a simple question for the pros here. In an ideal
> situation, a noise-free sinusoid of 2 V amplitude and 2 degrees phase at
> 10 MHz is sampled at 50 MSa/s. I use the Goertzel algorithm to find the
> magnitude and phase of this 10 MHz tone from 500 samples. What is the
> maximum precision of the magnitude and phase calculated using this
> method, i.e. how close will the magnitude and phase estimates be to
> their true values of 2 V and 2 degrees?
Well, if you calculate it with infinite precision, then exactly 0 degrees. The degradation comes either from using finite-precision arithmetic (i.e., using actual numbers in an actual computer), from noise in the signal, or both.
> Another question pestering me is the smallest change in magnitude and
> phase that can be detected using the above method, i.e. if the signal
> deviates from 2 V and 2 degrees, what is the smallest deviation the
> Goertzel algorithm can detect from 500 samples? Thanks very much.
Again, 0 degrees.

Unless you just want to verify that yes, the underlying algorithm is "perfect" and that any degradation in accuracy comes from noise, then perhaps you're not asking the right question?

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
> Well, if you calculate it with infinite precision, then exactly 0
> degrees.
>
> The degradation comes either from using finite-precision arithmetic
> (i.e., using actual numbers in an actual computer), from noise in the
> signal, or both.
>
> Again, 0 degrees.
>
> Unless you just want to verify that yes, the underlying algorithm is
> "perfect" and that any degradation in accuracy comes from noise, then
> perhaps you're not asking the right question?
Thanks Tim, I guess the question was wrong then. I was always under the impression that the magnitude and phase resolution was dependent on the sampling frequency.

Now, in the same noise-free case, if a 12-bit ADC is used at 50 MSa/s and signed fixed-point arithmetic (13 bits for the signed integer part and 18 bits for the decimal part, like my application) is used for computations, how would this affect the magnitude and phase resolution?

I got some clues earlier from this post, but in post #3 the author refers to a "temporal sampling interval"; does this have something to do with the sampling rate?

http://www.dsprelated.com/showthread/comp.dsp/40352-1.php

Thanks very much
On Wed, 22 Jul 2015 06:32:59 -0700, zoulzubazz wrote:


>> Well, if you calculate it with infinite precision, then exactly 0
>> degrees.
>>
>> The degradation comes either from using finite-precision arithmetic
>> (i.e., using actual numbers in an actual computer), noise in the
>> signal, or both.
>>
>> Again, 0 degrees.
>>
>> Unless you just want to verify that yes, the underlying algorithm is
>> "perfect" and that any degradation in accuracy comes from noise, then
>> perhaps you're not asking the right question?
>
> Thanks Tim, I guess the question was wrong then. Was always under the
> impression that the magnitude and phase resolution was dependent on the
> sampling frequency.
It is, but it multiplies rather than adds. If you know the signal frequency exactly and know that there is no DC bias, and if you make perfect measurements, then two measurements, 90 degrees apart, are all you need to know amplitude and phase exactly.

This is actually something you can verify if you want to go playing in Trig Land: take

x = A * cos(theta)
y = A * sin(theta)

and see if you can't get exact expressions for A and theta.

However, if the measurement is noisy you can only estimate A and theta, and for noise that is independent over measurements you can make a better estimate with more measurements.
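[The Trig Land exercise checks out directly. A sketch under the stated assumptions (known frequency, no DC bias, perfect measurements), using the sample values 2 V and 2 degrees from the question:]

```python
import math

A_true = 2.0
theta_true = math.radians(2.0)

# Two perfect measurements of the same sinusoid, 90 degrees apart.
x = A_true * math.cos(theta_true)   # in-phase sample
y = A_true * math.sin(theta_true)   # quadrature sample

# Exact inversion: amplitude by Pythagoras, phase from the ratio.
A_est = math.hypot(x, y)                    # sqrt(x^2 + y^2) == A
theta_est = math.degrees(math.atan2(y, x))  # atan2 keeps the quadrant right
```

With perfect measurements the inversion is exact up to floating-point rounding, which is the point: resolution limits come from the measurements, not the math.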
> Now, in the same noise-free case if a 12 bit ADC is used at 50MSa/sec
> and signed fixed point arithmetic (13 bits for signed integer part and
> 18 bits for decimal part, like my application) is used for computations
> how would this affect the magnitude and phase resolution?
Giving an exact number depends on a bunch of stuff, not least of which is way more of my time than I'm willing to give up for free.

Basically, the easy way is to model your system with quantization noise being injected at the ADC and at each summing junction in the block diagram for the filter. Analyze the thing for the in-phase and quadrature outputs' sensitivity to noise at each of those points, then analyze your phase and amplitude resolution algorithm for its sensitivity to noise. Multiply up all the sensitivities along with the quantization noise injected, and you'll get your numbers.

This should be discussed in any good book on signal processing -- it's even in my book (http://wescottdesign.com/actfes/actfes.html), which is really about digital control, but since noise matters in control systems, too, I needed to discuss it.
> I got some clues earlier from this post, but in post #3 the author
> refers to "temporal sampling interval" does this have something to do
> with the sampling rate?
He spells it out: T is the interval, 1/T is the sampling rate.

--
www.wescottdesign.com
> hey guys,
>
> This should be a simple question for the pros here, in an ideal
> situation a noise free sinusoid of 2V amplitude and 2 degrees @ 10MHz is
> sampled at 50MSa/sec. I use the Goertzel algorithm to find the magnitude
> and phase of this 10MHz from 500 samples, what is the maximum precision
> of the magnitude and phase calculated using this method i.e. how close
> will magnitude and phase estimate be to their true value of 2V and 2
> degrees?
>
> Another question pestering me is the smallest change in magnitude and
> phase that can be detected using the above method i.e if the signal
> deviates from 2V and 2 degrees what is the smallest deviation the
> Goertzel algorithm can detect from 500 samples? Thanks very much.
>
> Best wishes
> Zoul
Hi Zoul,

Your question is a little confusing because it conflicts with your title. In the title you mention "fft" and "frequency resolution", yet in the question you talk about the Goertzel algorithm and seem to assume a known frequency.

If the frequency is known and fixed at the values you have given, here is how I recommend you proceed. If you are sampling at 50 MHz on a 10 MHz signal, you have precisely five samples per cycle. With 500 samples, that covers 100 cycles. This is overkill. I have found with double-precision variables, which is more precision than you are working with, that with more than two hundred samples there is significant error due to truncation accumulation in a DFT calculation. If you reduce your number of samples to 100, or even 50, you will get just as good results.

What Tim Wescott refers to as "playing in trigonometry land" is really "playing in linear algebra land". If you will be performing the operation repeatedly, it is worth building two basis vectors in lookup tables. The two vectors should be sine and cosine functions of the frequency you are measuring. If your given frequency and sampling rates are exact, these basis vectors will span an integer number of cycles and thus be orthogonal, making your calculation much easier. This approach is equivalent to a single-bin DFT.

The precision of your results is going to be much better than the precision of your measurements.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
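[The basis-vector recipe can be sketched as follows, in floating point for clarity; a fixed-point version would store scaled integer tables. With 100 samples at five samples per cycle, both tables span an integer number of cycles and are therefore orthogonal:]

```python
import math

fs, f0, N = 50e6, 10e6, 100   # 100 samples = exactly 20 cycles of 10 MHz
A, phi = 2.0, math.radians(2.0)

# Precomputed basis vectors (lookup tables in a real implementation).
cos_tab = [math.cos(2 * math.pi * f0 * n / fs) for n in range(N)]
sin_tab = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]

x = [A * math.cos(2 * math.pi * f0 * n / fs + phi) for n in range(N)]

# Project the signal onto each basis vector: a single-bin DFT.
i_part = sum(xi * c for xi, c in zip(x, cos_tab)) * 2 / N
q_part = sum(xi * s for xi, s in zip(x, sin_tab)) * 2 / N

mag = math.hypot(i_part, q_part)                   # ~2.0 V
phase = math.degrees(math.atan2(-q_part, i_part))  # ~2.0 degrees

# Orthogonality over an integer number of cycles is what makes the
# two projections independent:
dot = sum(c * s for c, s in zip(cos_tab, sin_tab))   # ~0
```

The sign flip on `q_part` in the phase calculation comes from expanding `A*cos(w*n + phi)` as `A*cos(phi)*cos(w*n) - A*sin(phi)*sin(w*n)`, so the sine projection returns minus the sine component.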
On Wed, 22 Jul 2015 06:32:59 -0700 (PDT), zoulzubazz@googlemail.com wrote:

>> Well, if you calculate it with infinite precision, then exactly 0
>> degrees.
>>
>> The degradation comes either from using finite-precision arithmetic
>> (i.e., using actual numbers in an actual computer), noise in the
>> signal, or both.
>>
>> Again, 0 degrees.
>>
>> Unless you just want to verify that yes, the underlying algorithm is
>> "perfect" and that any degradation in accuracy comes from noise, then
>> perhaps you're not asking the right question?
>
> Thanks Tim, I guess the question was wrong then. Was always under the
> impression that the magnitude and phase resolution was dependent on the
> sampling frequency.
In a way it is, but only in the ways that it may affect the end game. You can always trade an increased sample rate for fewer bits at the ADC, and then apply decimation filtering to increase the precision (i.e., number of bits). You may get the same result as sampling with more bits at the slower rate, but maybe that's what you were thinking of in that context.
> Now, in the same noise-free case if a 12 bit ADC is used at 50MSa/sec
> and signed fixed point arithmetic (13 bits for signed integer part and
> 18 bits for decimal part, like my application) is used for computations
> how would this affect the magnitude and phase resolution?
You can visualize this pretty easily by imagining a two-dimensional grid of quantized numbers. If you only have two bits in each dimension, there are only sixteen points in the 4x4 grid. You can then sort out (or at least visualize) how vectors occupying all the spaces in between the points have error vectors in the magnitude and phase dimensions. As the grid density increases by increasing the number of bits in each dimension, the error vectors get smaller.
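[The grid picture can be made concrete with a quick sweep: quantize the I and Q components of a vector to b bits each and record the worst-case magnitude and phase errors over a circle of test angles. The ±2 V full scale and the step count are illustrative choices:]

```python
import math

def worst_errors(bits, amp=2.0, full_scale=2.0, steps=3600):
    """Worst-case magnitude and phase (degrees) error when I and Q are
    each rounded to a signed `bits`-bit grid spanning +/- full_scale."""
    lsb = 2 * full_scale / (1 << bits)
    worst_mag = worst_ph = 0.0
    for s in range(steps):
        th = 2 * math.pi * s / steps
        i, q = amp * math.cos(th), amp * math.sin(th)
        iq, qq = round(i / lsb) * lsb, round(q / lsb) * lsb
        worst_mag = max(worst_mag, abs(math.hypot(iq, qq) - amp))
        dph = abs(math.atan2(qq, iq) - th)
        worst_ph = max(worst_ph, min(dph, 2 * math.pi - dph))  # wraparound
    return worst_mag, math.degrees(worst_ph)

coarse = worst_errors(2)    # very coarse grid: errors of order the LSB
fine = worst_errors(12)     # denser grid: errors shrink with the LSB
```

Each added bit halves the LSB, and the worst-case magnitude and phase error vectors shrink with it, which is the grid intuition in numbers.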
> I got some clues earlier from this post, but in post #3 the author
> refers to "temporal sampling interval" does this have something to do
> with the sampling rate?
>
> http://www.dsprelated.com/showthread/comp.dsp/40352-1.php
Yes, that's the sampling rate in the time domain.
> thanks very much
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
zoulzubazz@googlemail.com wrote:

(snip, Tim wrote)
>> Well, if you calculate it with infinite precision, then exactly 0 degrees.
>> The degradation comes either with using finite precision arithmetic
>> (i.e., using actual numbers in an actual computer), noise in
>> the signal, or both.
> Thanks Tim, I guess the question was wrong then. Was always under
> the impression that the magnitude and phase resolution was
> dependent on the sampling frequency.
You conveniently use the case with a single sinusoid and infinite precision sampling.
> Now, in the same noise-free case if a 12 bit ADC is used
> at 50MSa/sec and signed fixed point arithmetic (13 bits for
> signed integer part and 18 bits for decimal part,
I presume the 18 bits are binary, and not decimal.
> like my application) is used for computations how would this
> affect the magnitude and phase resolution?
Quantizing adds quantization noise. Even more, you should add dither before quantization for best results. (Well, it might depend on the system, but for audio you want dither.)

If it is still the case of a single sinusoid, you can do a least-squares fit to the data points. In the usual case, that reduces your error by about sqrt(number of points). Because some points contribute more to the phase calculation and some more to the amplitude, I suspect the reduction is more like sqrt(number of points / 2).

Again, a single sinusoid is a special case.
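[Since the model A*cos(w*n + phi) is linear in the unknowns (A*cos(phi), A*sin(phi)), the least-squares fit reduces to the same two sine/cosine projections, and the roughly sqrt(N) improvement is easy to see numerically. A sketch; the noise level and trial counts are illustrative:]

```python
import math
import random

fs, f0 = 50e6, 10e6
A, phi = 2.0, math.radians(2.0)
sigma = 0.01   # illustrative additive white noise, volts RMS

def ls_fit(N, seed):
    """Least-squares amplitude estimate from N noisy samples. With an
    integer number of cycles the normal equations are diagonal, so the
    fit is just the in-phase and quadrature projections."""
    rng = random.Random(seed)
    w = 2 * math.pi * f0 / fs
    x = [A * math.cos(w * n + phi) + rng.gauss(0, sigma) for n in range(N)]
    c = sum(v * math.cos(w * n) for n, v in enumerate(x)) * 2 / N
    s = sum(v * math.sin(w * n) for n, v in enumerate(x)) * 2 / N
    return math.hypot(c, s)

def rms_amp_err(N, trials=50):
    """RMS amplitude error over repeated noisy trials."""
    return math.sqrt(sum((ls_fit(N, t) - A) ** 2
                         for t in range(trials)) / trials)

# More points -> smaller error, shrinking roughly as 1/sqrt(N).
err_50 = rms_amp_err(50)     # 50 samples = 10 whole cycles
err_500 = rms_amp_err(500)   # 500 samples = 100 whole cycles
```

Going from 50 to 500 samples should shrink the RMS amplitude error by roughly sqrt(10), matching the sqrt(number of points) rule of thumb for independent noise.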
> I got some clues earlier from this post, but in post #3 the
> author refers to "temporal sampling interval" does this have
> something to do with the sampling rate?
>
> http://www.dsprelated.com/showthread/comp.dsp/40352-1.php
-- glen
On Wed, 22 Jul 2015 17:31:41 GMT, eric.jacobsen@ieee.org (Eric Jacobsen) wrote:

(snip)

> You can visualize this pretty easily by imagining a two-dimensional
> grid of quantized numbers. If you only have two bits in each
> dimension, there are only sixteen points in the 4x4 grid. You can
> then sort out (or at least visualize) how vectors occupying all the
> spaces in between the points have error vectors in the magnitude and
> phase dimensions. As the grid density increases by increasing the
> number of bits in each dimension, the error vectors get smaller.
I should clarify that I meant the two-dimensional grid to be the I and Q components, so that a vector would be represented by the quantized elements and the magnitude and phase would be represented on the two-dimensional plane. I hope that helps; it probably didn't make much sense without that.
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com