
Accurate frequency measurement problem

Started by Unknown February 11, 2016
On Sat, 13 Feb 2016 00:24:30 +0000, Eric Jacobsen wrote:

> On Fri, 12 Feb 2016 16:48:42 -0600, Tim Wescott <tim@seemywebsite.com>
> wrote:
>
>>On Thu, 11 Feb 2016 20:35:33 +0000, Eric Jacobsen wrote:
>>
>>> There's really not much way around the time-frequency uncertainty
>>> principle; the more estimation accuracy you need, the longer you need
>>> to observe the signal.
>>
>>I beg to differ, at least on a detail: there's no way around the time-
>>frequency-noise uncertainty.  If you have a purely noiseless measurement
>>of a perfect sine wave, then you need only four measurements to get an
>>exact result (three, I'm pretty sure, if the middle measurement is not
>>at phase = 0 or phase = pi).
>
> I should add that this also requires some apriori knowledge of the
> frequency, i.e., the frequency uncertainty is already low.
My example is rather extreme (noiseless? perfect sine?), but the assertion
should be correct for any set of measurements that are all within 1/4 of a
cycle.

--
www.wescottdesign.com
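To make the noiseless three-sample claim concrete, here is a minimal sketch
(an illustration added here, not from the thread; Python/NumPy, with all
numbers made up). For equally spaced samples of a sinusoid,
x[0] + x[2] = 2*cos(2*pi*f*T)*x[1], so a single arccos recovers the frequency,
and it fails in exactly the ways discussed above: when the middle sample sits
at a zero crossing, when there is any noise, or when the aliasing ambiguity
(f vs. fs) hasn't already been resolved.

import numpy as np

# Hypothetical values chosen only for illustration.
f_true = 147356.2              # Hz, the "unknown" tone
fs     = 2.0e6                 # Hz, fast enough that 3 samples span < 1/4 cycle
A, phi = 1.7, 0.4              # arbitrary amplitude and starting phase
T      = 1.0 / fs

t = np.arange(3) * T
x = A * np.sin(2 * np.pi * f_true * t + phi)        # three noiseless samples

# For equally spaced samples of any sinusoid:
#     x[0] + x[2] = 2 * cos(2*pi*f*T) * x[1]
# so one arccos recovers the frequency, provided x[1] != 0 (the phase
# condition above) and f < fs/2 (no aliasing ambiguity).
f_est = np.arccos((x[0] + x[2]) / (2.0 * x[1])) / (2.0 * np.pi * T)

print(f_true, f_est)           # agree to within rounding; any noise ruins it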
On Fri, 12 Feb 2016 23:45:24 +0000, Eric Jacobsen wrote:

> On Fri, 12 Feb 2016 16:48:42 -0600, Tim Wescott <tim@seemywebsite.com>
> wrote:
>
>>On Thu, 11 Feb 2016 20:35:33 +0000, Eric Jacobsen wrote:
>>
>>> There's really not much way around the time-frequency uncertainty
>>> principle; the more estimation accuracy you need, the longer you need
>>> to observe the signal.
>>
>>I beg to differ, at least on a detail: there's no way around the time-
>>frequency-noise uncertainty.  If you have a purely noiseless measurement
>>of a perfect sine wave, then you need only four measurements to get an
>>exact result (three, I'm pretty sure, if the middle measurement is not
>>at phase = 0 or phase = pi).
>
> That requires that the frequency is not changing during the observation.
>
> Time-frequency uncertainty is just that you can't simultaneously get
> ultimate resolution in both domains.  So if the frequency is changing,
> at all, you can't get a simultaneous high-resolution measure of when it
> is at a particular frequency, or what the frequency might be at a given
> time.  It is intuitive in the difference between the shape of an
> impulse and a constant tone (or a horizontal line), which are, not
> coincidentally, transforms of each other.
>
> If you want the impulse for high time resolution, you can't
> simultaneously have high frequency resolution.
>
> It really is intuitive, and extending the logic it applies to many (or
> maybe even most) frequency estimation problems since the need for the
> estimation is because the frequency may change.
>
> In this case the frequency stability is high, so the time motion is slow
> and the ability to get high frequency resolution is very good.  In the
> limit of estimation capabilities, however, the principle still applies.
Ah.  That's what you meant.  Yes, a rapidly varying frequency would screw
things up, particularly if the amplitude varies with frequency (more missing
information in the problem statement!).  I think that a DPLL, or any number
of "batch" measurement methods, could still get an accurate-enough measure
of the average frequency within a 1-second window, assuming low enough noise
and distortion.
>>If it were just time-frequency then there wouldn't be enough information
>>in a 1-second chunk to make the determination.
>>
>>> 10ppb is pretty tight.
>>
>>Yea verily.
>>
>>> The DPLL does this by "observing" the signal from the time you turn it
>>> on, so it is continually converging up to the limits of the system
>>> (i.e., stability of the reference, numerical precision, etc.).
>>
>>Well...
>>
>>A DPLL will exponentially weight the effect of the samples, so it'll
>>effectively "forget" old readings, with the forgetting time being
>>proportional to the inverse of the loop bandwidth.  One easy way to
>>screw up the measurements would be to have a too-loose loop bandwidth
>>that lets too much noise through.  Another easy way would be to have a
>>too-tight bandwidth that doesn't track changes fast enough.
>
> Yup, hence careful analysis and design is imperative.
>
>>> An algorithmic method, like Kay's or other methods,
>>> gets better as the observation interval increases.  Usually a DPLL is
>>> a good choice because the effective observation interval is much
>>> longer than what is often practical for an algorithmic method.
>>
>>Maybe, maybe not.  It really depends on how rapidly the signal is
>>expected to change.  I'm still suspecting a DPLL is a good approach, but
>>there are things that could make my knee-jerk preference incorrect.
>
> We'd probably both agree that there hasn't been enough detail given yet
> to select a preferred approach, especially since the implementation
> constraints haven't been outlined, which could make a difference.
Absolutely.  If one ever wanted to teach a general engineering class, one
could present the problem as outlined and say "now, what's missing from the
problem statement?"
> I think it's a solvable problem, though. And an interesting one!
Unless there's something really dire that the OP hasn't stated, yes.

--
www.wescottdesign.com
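Since the DPLL option keeps coming up, here is a minimal second-order DPLL
sketch (an illustration added here, not anyone's posted design; Python/NumPy,
all parameter values assumed, and the input is a complex/analytic tone to
keep the phase detector simple). The loop-bandwidth trade-off described above
lives entirely in the two gains kp and ki.

import numpy as np

fs   = 5000.0                   # Hz, the OP's stated sample rate (assumed)
f_in = 2356.789                 # Hz, hypothetical tone at baseband
N    = int(fs)                  # one second of samples
f0   = 2350.0                   # Hz, initial NCO frequency guess

# Complex (analytic) input; a real signal would first need a Hilbert
# transform or quadrature mixer.
x = np.exp(1j * (2 * np.pi * f_in / fs * np.arange(N) + 0.3))

kp, ki = 0.05, 0.002            # loop gains: larger -> wider bandwidth, more noise
phase, integ = 0.0, 0.0
f_hat = np.empty(N)

for i in range(N):
    err    = np.angle(x[i] * np.exp(-1j * phase))    # phase detector, radians
    integ += ki * err                                 # integral path (frequency memory)
    phase += 2 * np.pi * f0 / fs + kp * err + integ   # NCO phase update
    f_hat[i] = f0 + fs * integ / (2 * np.pi)          # current frequency estimate, Hz

print(f_hat[-1000:].mean())     # settles toward f_in; averaging trades lag for noise

With noise added, shrinking ki narrows the loop and quiets the estimate at
the cost of slower pull-in and slower tracking, which is exactly the
"forgetting time" trade-off discussed above.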
On Fri, 12 Feb 2016 19:36:03 -0600, "Cedron" <103185@DSPRelated>
wrote:

>[...snip...]
>
>>>It is true that in the presence of noise,
>>>having more samples can mitigate the effect of the noise.  This is also
>>>true for imprecision.
>>
>>My statement was true for a fixed sample rate.  For arbitrary sample
>>rates, improvement in ultimate accuracy for a given SNR requires
>>additional observations (or samples), i.e., an increase in N.  The
>>proof of this is well-known in the CRLB.  It can't be avoided.
>>
>
>You simply restated what I said.  Yes, more points will reduce the effects
>of noise.  If there is no noise, more points will not improve accuracy.
>The same is true for imprecision.  Since the noise level is low according
>to the OP, there is a real trade off between selecting an interval long
>enough to minimize the effect of noise and imprecision, yet short enough
>that the varying frequency doesn't skew the results.
>
>If there is a varying frequency, the very question of "What is the
>frequency for the interval?" is meaningless.  You can talk about the
>center frequency, or the average frequency, but there is no singular
>frequency by definition.
>
>>>One final time, there is no uncertainty principle within a DFT
>>>calculation.
>>
>>So Heisenberg was wrong.  Got it.
>>
>>Sorry I'll stick with the well-known math.
>>
>>
>>Eric Jacobsen
>>Anchor Hill Communications
>>http://www.anchorhill.com
>
>We aren't talking about probability distributions here unless you consider
>the limits of the precision and the presence of the noise as such.
Or the real world.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
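As a concrete reference point for the CRLB being argued about, here is a
small sketch (an addition to the thread, Python; the scenario numbers are
assumptions, not the OP's measured figures) that evaluates the textbook
Cramer-Rao lower bound on the frequency of a single sinusoid in white
Gaussian noise, var(f_est) >= 12*fs^2 / ((2*pi)^2 * SNR * N * (N^2 - 1)),
with SNR = A^2/(2*sigma^2) (Kay, Fundamentals of Statistical Signal
Processing: Estimation Theory, eq. 3.41).

import numpy as np

def crlb_freq_std_hz(snr_db, fs, T):
    """Cramer-Rao lower bound (standard deviation, Hz) on the frequency of
    a single sinusoid in white Gaussian noise, with N = fs*T samples and
    SNR = A^2 / (2*sigma^2) as a power ratio."""
    N   = int(fs * T)
    snr = 10.0 ** (snr_db / 10.0)
    var_cycles = 12.0 / ((2 * np.pi) ** 2 * snr * N * (N**2 - 1))  # (cycles/sample)^2
    return fs * np.sqrt(var_cycles)                                # Hz

# Assumed scenario: 5 kHz sampling, 1-second record; the 150 kHz figure is
# only used to express the bound in ppb of the nominal carrier.
for snr_db in (20, 40, 60):
    s = crlb_freq_std_hz(snr_db, fs=5000.0, T=1.0)
    print(f"SNR {snr_db} dB: std >= {s:.2e} Hz (~{s / 150e3 * 1e9:.2f} ppb of 150 kHz)")

Read that way, 10 ppb in one second is not forbidden by the bound at the
SNRs printed above; whether a practical estimator gets close to it, with
spurs at -30 dB and an undersampled front end, is the real argument.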
[...snip...]

>Ah. That's what you meant. Yes, a rapidly varying frequency would screw
>things up, particularly if the amplitude varies with frequency
>--
[...snip...]
>www.wescottdesign.com
The OP stated that the frequency doesn't vary that rapidly and the amplitude
stays constant.

I come at this from a mathematical perspective, not a practical one, meaning
I don't have that deep an understanding of how ADC circuits really function
in these situations.  I have my doubts that any ADC circuit sampling at 5
kHz is going to accurately capture signal values from a ~150 kHz tone.

Since the OP only needs readings every second, I wonder why the signal
frequency needs to be so high.  It sounds like a system that is using a VCO
(voltage controlled oscillator) to measure some physical property.  It would
seem that he would be better off, if possible, to redesign the hardware with
a much lower frequency, and sample at a higher frequency.  If the frequency
were brought down to around 10 kHz, a standard 44.1 kHz sound card would be
more than adequate for the job.  The precision would probably improve too.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
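A quick sketch of the undersampling arithmetic at issue here (an addition to
the thread; Python, with made-up tone frequencies around the thread's ~150
kHz figure). The fold-down itself is harmless as long as the sample-and-hold
has the analog bandwidth to see the tone and nothing else lands on the same
alias; small changes in the input frequency map one-for-one (up to a sign)
onto the alias, so the absolute accuracy needed at baseband is unchanged.

def alias_frequency(f_signal, fs):
    """Apparent frequency of a real tone of frequency f_signal when sampled
    at fs, ignoring front-end bandwidth limits and interference."""
    f = f_signal % fs                    # wrap into [0, fs)
    return f if f <= fs / 2.0 else fs - f

fs = 5000.0                              # Hz, the OP's stated sample rate
for f in (150000.0, 150001.0, 147356.2): # hypothetical tone frequencies
    print(f"{f:.1f} Hz -> {alias_frequency(f, fs):.1f} Hz apparent")
# Folds to 0.0 Hz (worst case: exactly on a multiple of fs), 1.0 Hz,
# and 2356.2 Hz respectively.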
On Friday, February 12, 2016 at 10:39:21 AM UTC-8, Eric Jacobsen wrote:
> On Fri, 12 Feb 2016 08:03:37 -0800 (PST), radams2000@gmail.com wrote:
> >Mis-read the frequency as 150 Hz. But anyway I still think the tried-and
> >true FFT method is very robust
> >
> >Bob
> It is, and, for obvious reasons, I'm usually a big advocate of this
> approach.  To get to 10ppb this way, though, might be a stretch.  If
> the observation window is only a second, the interpolator has to
> increase the estimation accuracy from the 1 Hz bin width to 10ppb, which
> is asking a lot.  Increasing the observation window by a factor of N
> also increases the FFT size by a factor of N, so that's not
> necessarily a happy trend.
> Anything can be made to work, but I think in this case other
> techniques beckon.
> Eric Jacobsen
In comp.dsp almost all of the suggested bin interpolations have been
selected and evaluated for minimal computational load and easy
implementation. There are also high-accuracy methods.

If and only if there is no interference that prevents ppb-order accuracy,
Cedron's rectangular-window interpolator could be used. The OP's definition
of a pure sine wave is, however, rather loose:

"As I stated in OP, the measured signal is a pure sine wave (by pure I mean
the harmonics are below -30dB of the base frequency)."

"-30dB harmonics" might look good for a sine wave on an oscilloscope, but it
doesn't seem like an adequate understanding of the likely sources of
interference, or of the possibilities for mitigation, for 10ppb accuracy.
Understanding the effects of undersampling schemes, for example, requires
knowledge of the interference structure of the signal.

There are DFT bin interpolation algorithms, such as John Burgess's for
cosine-sum windows of up to 4 coefficients, that are quite accurate.
Burgess's formula can be accurate to 10E-6 to 10E-8 of a bin for common
windows. These numbers are of course without interference, but they allow
the use of windows for the purpose of reducing the effects of discrete
interferers. This would not require the use of greatly enlarged FFT sizes.

Burgess paper:
Accurate analysis of multitone signals using a DFT
John C. Burgess
J. Acoust. Soc. Am. 116 (1), July 2004

Still, for any approach to be implemented to the desired accuracy, much
better characterization of the noise and interference environment of the
signal is required.

Dale B. Dalrymple
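For scale: with a 1-second record the bin spacing is 1 Hz, and 10 ppb of
~150 kHz is about 1.5 mHz, so the interpolator has to place the peak to
about 0.0015 bin (roughly 1/700 of a bin). The sketch below (an addition to
the thread, not Burgess's formula or Cedron's method; Python/NumPy, signal
parameters assumed) is the garden-variety three-point parabolic interpolation
on the log magnitude of a Hann-windowed FFT; on a clean tone it lands within
a few hundredths of a bin, which shows both why window-plus-interpolation is
attractive and why a window-specific formula like Burgess's, or a refinement
stage, is needed to close the remaining gap.

import numpy as np

fs, T = 5000.0, 1.0                    # assumed rate and observation window
N     = int(fs * T)
f_in  = 2356.7891                      # Hz, hypothetical (aliased) tone position
n     = np.arange(N)
x     = np.sin(2 * np.pi * f_in / fs * n + 0.3)   # noiseless for illustration

w   = np.hanning(N)                    # any cosine-sum window would do
mag = np.abs(np.fft.rfft(x * w))
k   = int(np.argmax(mag))              # coarse estimate: nearest 1 Hz bin

# Three-point parabolic interpolation on log magnitude around the peak.
a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
delta   = 0.5 * (a - c) / (a - 2 * b + c)      # fractional-bin offset, |delta| <= 0.5
f_est   = (k + delta) * fs / N

print(f"true {f_in:.4f} Hz, estimated {f_est:.4f} Hz, "
      f"error {abs(f_est - f_in):.4f} Hz ({abs(f_est - f_in) / (fs / N):.4f} bins)")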
On 12.02.2016 1:13, Steve Pope wrote:
> Evgeny Filatov <filatov.ev@mipt.ru> wrote:
>
>> On 11.02.2016 23:40, Steve Pope wrote:
>
>>> Tim Wescott <seemywebsite@myfooter.really> wrote:
>
>>>> Basically I think it's very doable, but proving that it has a chance of
>>>> being doable, and knowing what additional data you need to do the proof,
>>>> isn't trivial.
>
>>> A bit of handwaving (definitely not a proof) suggests that given
>>> a one-second sample of a sinusoid, one can measure its frequency
>>> to a resolution of (1 / SNR) Hz, where SNR is expressed as a power
>>> ratio.
>
>> ... which is effectively the MCRB for variance of frequency estimation
>> (e.g., Mengali and D'Andrea "Synchronization Techniques..." pp. 60-61),
>> accurate to a factor of 3/pi.
>
> Thanks.  (When handwaving is accurate to a factor of 3/pi, that's
> really pretty successful handwaving...)
>
> S.
>
I admit being wrong.

The equation I referred to (2.4.23 at p. 60) describes the variance of
frequency estimation; that value is measured in Hz^2, rather than Hz. An
idea of "frequency resolution" is instead given by the standard deviation,
i.e. the square root of the variance. Essentially, for each 20 dB increase
in SNR you get 10 times more precise estimation.

So I decided to have some fun and plotted the dependence of the standard
deviation of the frequency estimate on the SNR of a real sine wave. Here's
what I got for frequency estimation over an observation interval of 1
second (for two values of the sampling frequency fs):

http://i.imgur.com/13tInuH.png

I've used the standard R&B estimator with K=2 to get an initial frequency
estimate, then ~25 binary divisions to improve the initial estimate. (I
don't suggest the OP use that approach in a real application; it's just
there to prove the concept.)

An interesting feature is that there is a "shelf" for large SNRs, which
decreases when you increase the sampling frequency fs. What does it mean?
I'm tempted to say it's due to time-frequency uncertainty, but it might
also be a faulty estimator.

Regards,
Evgeny.

p.s.
"As is" source code for that figure (with reduced number of iterations):
http://pastebin.com/jMe0gsgG
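For anyone who wants to poke at the same behaviour without the pastebin
(links go stale), here is a rough Python/NumPy sketch of the idea as
described above, not the original Scilab code: a coarse FFT-peak estimate
followed by repeated interval halving on the DTFT magnitude evaluated off
the bin grid. All scenario values are assumed. A similar "shelf" appears
here too, because at fine scales the comparisons are limited by noise and
by the slight asymmetry the negative-frequency image of a real sine adds to
the peak, which is consistent with estimator bias rather than a fundamental
limit.

import numpy as np

def dtft_mag(x, f, fs):
    """Magnitude of the DTFT of x evaluated at an arbitrary frequency f (Hz)."""
    n = np.arange(len(x))
    return np.abs(np.sum(x * np.exp(-2j * np.pi * f / fs * n)))

rng    = np.random.default_rng(0)
fs, T  = 5000.0, 1.0                       # assumed sample rate and record length
N      = int(fs * T)
f_in   = 2356.7891                         # Hz, hypothetical tone
snr_db = 40.0
sigma  = np.sqrt(0.5 / 10 ** (snr_db / 10.0))   # noise std for a unit-amplitude sine
x = np.sin(2 * np.pi * f_in / fs * np.arange(N)) + sigma * rng.standard_normal(N)

# Coarse estimate: nearest FFT bin (1 Hz spacing for a 1 s record).
f_est = np.argmax(np.abs(np.fft.rfft(x))) * fs / N

# Refinement: ~25-30 halvings of a +/- 1-bin bracket, keeping whichever side
# of the current estimate has the larger DTFT magnitude.
half = fs / N
for _ in range(30):
    half /= 2.0
    lo, hi = f_est - half, f_est + half
    f_est = lo if dtft_mag(x, lo, fs) > dtft_mag(x, hi, fs) else hi

print(f"true {f_in:.6f} Hz, estimated {f_est:.6f} Hz, error {f_est - f_in:+.2e} Hz")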
[...snip...]
>
>An interesting feature is that there is a "shelf" for large SNRs, which
>decreases when you increase the sampling frequency fs. What does it mean?
>I'm tempted to say it's due to time-frequency uncertainty, but it might
>also be a faulty estimator.
>
>Regards,
>Evgeny.
>
>p.s.
>"As is" source code for that figure (with reduced number of iterations):
>http://pastebin.com/jMe0gsgG
I think it's due to a faulty estimator.

Compare your graph with Figure 4 in Julien's comparison paper:

http://www.tsdconseil.fr/log/scriptscilab/festim/freqestim-comp.pdf

Notice that most of the estimators have the same shelf feature.  Note, too,
that mine remains the most accurate as the noise level decreases, towards
theoretical exactness.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
If you use a phase detector of the type used in GPS receivers with really
good carrier phase tracking, this just simplifies to a classical least
squares fit to a straight line.  I'm not a receiver designer, but I've
processed those detector outputs and gotten spectacular accuracies for
velocity time histories and off-vertical tilt -- from in-flight data in the
presence of severe vibration.

With a phase detector of that quality and your 5 kHz sampling rate you have,
every second, a vastly overdetermined set of linear equations in two
unknowns.  At time 't_m', for 'm' = 1 to 5000:

  [ y_m = 2*pi*x_2*t_m + x_1 ]

where 'x_2' is the best linear estimate for 'f_diff', 'x_1' is the phase at
zero time, and 'y_m' is that phase detector output, which includes the
cumulative effect due to frequency in addition to the "double-argument
arctan" from the partial-cycle remainder.  In the absence of any error a
plot of 'y_m' would be a ramp.  Every second you have 5000 equations with
just two unknowns, expressible as one vector equation [ Y = A X ] where 'Y'
is a 5000x1 vector, 'A' is a 5000x2 matrix, and 'X' is the 2x1 vector to be
solved for phase ('x_1') and frequency ('x_2').  The classical least squares
solution is [ X = A# Y ] where 'A#' is the Penrose generalized inverse of
'A' -- alternatively, in Matlab, just use " A\Y ".
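A minimal numerical version of that fit (a sketch added here in Python/NumPy
rather than the Matlab one-liner; the phase-detector output y_m is simulated
as an already-unwrapped ramp plus phase noise, since the actual detector
isn't specified in the thread):

import numpy as np

fs, T  = 5000.0, 1.0                 # assumed: the 5 kHz rate, 1 s window
f_diff = 2356.7891                   # Hz, hypothetical frequency to be estimated
phi0   = 0.4                         # rad, phase at time zero
rng    = np.random.default_rng(1)

t_m = np.arange(int(fs * T)) / fs                     # t_m, m = 1..5000
y_m = (2 * np.pi * f_diff * t_m + phi0                # ideal unwrapped phase ramp
       + 0.01 * rng.standard_normal(t_m.size))        # plus phase noise, rad

# Y = A X with X = [x_1, x_2] = [phase at t = 0 (rad), frequency (Hz)]
A = np.column_stack([np.ones_like(t_m), 2 * np.pi * t_m])
x_1, x_2 = np.linalg.lstsq(A, y_m, rcond=None)[0]     # same as A\Y in Matlab

print(f"phase {x_1:.4f} rad, frequency {x_2:.7f} Hz (error {x_2 - f_diff:+.2e} Hz)")

The whole trick, as the post says, is that y_m must already contain the
cumulative cycle count; the least squares step itself is trivial.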
On Fri, 12 Feb 2016 18:52:45 -0800 (PST), dbd
<d.dalrymple@sbcglobal.net> wrote:

>On Friday, February 12, 2016 at 10:39:21 AM UTC-8, Eric Jacobsen wrote:
>> On Fri, 12 Feb 2016 08:03:37 -0800 (PST), radams2000@gmail.com wrote:
>
>> >Mis-read the frequency as 150 Hz. But anyway I still think the tried-and
>> >true FFT method is very robust
>> >
>> >Bob
>
>> It is, and, for obvious reasons, I'm usually a big advocate of this
>> approach.  To get to 10ppb this way, though, might be a stretch.  If
>> the observation window is only a second, the interpolator has to
>> increase the estimation accuracy from the 1 Hz bin width to 10ppb, which
>> is asking a lot.  Increasing the observation window by a factor of N
>> also increases the FFT size by a factor of N, so that's not
>> necessarily a happy trend.
>
>> Anything can be made to work, but I think in this case other
>> techniques beckon.
>
>> Eric Jacobsen
>
>In comp.dsp almost all of the suggested bin interpolations have been
>selected and evaluated for minimal computational load and easy
>implementation. There are also high-accuracy methods.
>
>If and only if there is no interference that prevents ppb-order accuracy,
>Cedron's rectangular-window interpolator could be used. The OP's definition
>of a pure sine wave is, however, rather loose:
>
>"As I stated in OP, the measured signal is a pure sine wave (by pure I mean
>the harmonics are below -30dB of the base frequency)."
>
>"-30dB harmonics" might look good for a sine wave on an oscilloscope, but it
>doesn't seem like an adequate understanding of the likely sources of
>interference, or of the possibilities for mitigation, for 10ppb accuracy.
>Understanding the effects of undersampling schemes, for example, requires
>knowledge of the interference structure of the signal.
That's a good point. I'd missed the -30dB spec.
>There are DFT bin interpolation algorithms, such as John Burgess's for
>cosine-sum windows of up to 4 coefficients, that are quite accurate.
>Burgess's formula can be accurate to 10E-6 to 10E-8 of a bin for common
>windows. These numbers are of course without interference, but they allow
>the use of windows for the purpose of reducing the effects of discrete
>interferers. This would not require the use of greatly enlarged FFT sizes.
>
>Burgess paper:
>Accurate analysis of multitone signals using a DFT
>John C. Burgess
>J. Acoust. Soc. Am. 116 (1), July 2004
>
>Still, for any approach to be implemented to the desired accuracy, much
>better characterization of the noise and interference environment of the
>signal is required.
>
>Dale B. Dalrymple
I wonder if the OP has put a spectrum analyzer on the tone being measured to
see what the spurious levels actually are, or whether the -30dB figure was
arrived at some other way.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
On Sat, 13 Feb 2016 06:48:30 +0300, Evgeny Filatov
<filatov.ev@mipt.ru> wrote:

>On 12.02.2016 1:13, Steve Pope wrote:
>> Evgeny Filatov <filatov.ev@mipt.ru> wrote:
>>
>>> On 11.02.2016 23:40, Steve Pope wrote:
>>
>>>> Tim Wescott <seemywebsite@myfooter.really> wrote:
>>
>>>>> Basically I think it's very doable, but proving that it has a chance of
>>>>> being doable, and knowing what additional data you need to do the proof,
>>>>> isn't trivial.
>>
>>>> A bit of handwaving (definitely not a proof) suggests that given
>>>> a one-second sample of a sinusoid, one can measure its frequency
>>>> to a resolution of (1 / SNR) Hz, where SNR is expressed as a power
>>>> ratio.
>>
>>> ... which is effectively the MCRB for variance of frequency estimation
>>> (e.g., Mengali and D'Andrea "Synchronization Techniques..." pp. 60-61),
>>> accurate to a factor of 3/pi.
>>
>> Thanks.  (When handwaving is accurate to a factor of 3/pi, that's
>> really pretty successful handwaving...)
>>
>> S.
>>
>
>I admit being wrong.
>
>The equation I referred to (2.4.23 at p. 60) describes the variance of
>frequency estimation; that value is measured in Hz^2, rather than Hz. An
>idea of "frequency resolution" is instead given by the standard deviation,
>i.e. the square root of the variance. Essentially, for each 20 dB increase
>in SNR you get 10 times more precise estimation.
>
>So I decided to have some fun and plotted the dependence of the standard
>deviation of the frequency estimate on the SNR of a real sine wave. Here's
>what I got for frequency estimation over an observation interval of 1
>second (for two values of the sampling frequency fs):
>
>http://i.imgur.com/13tInuH.png
>
>I've used the standard R&B estimator with K=2 to get an initial frequency
>estimate, then ~25 binary divisions to improve the initial estimate. (I
>don't suggest the OP use that approach in a real application; it's just
>there to prove the concept.)
>
>An interesting feature is that there is a "shelf" for large SNRs, which
>decreases when you increase the sampling frequency fs. What does it mean?
>I'm tempted to say it's due to time-frequency uncertainty, but it might
>also be a faulty estimator.
Usually the flattening at high SNR is due to estimator bias that starts dominating the output error as the noise level decreases. That might scale with N depending on the algorithm.
>Regards,
>Evgeny.
>
>p.s.
>"As is" source code for that figure (with reduced number of iterations):
>http://pastebin.com/jMe0gsgG
I tried to run this but there is a term, sgn, near the end that appears to
be undefined?  I'm attempting to run it on Octave, so maybe there's a
difference there.

Cool stuff, though.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com