
Accurate frequency measurement problem

Started by Unknown February 11, 2016
On Fri, 12 Feb 2016 16:49:15 GMT, no-one@notreal.invalid (Robert
Scott) wrote:

>On Fri, 12 Feb 2016 07:04:50 -0800 (PST), makolber@yahoo.com wrote:
>
>>>>Well, my instinct is to not use a feedback-based algorithm in a
>>>>situation that can be fully addressed with a feed-forward
>>>>algorithm.
>>
>>Everyone sees the advantage of reducing the bandwidth to reduce noise.
>>You reduce the BW as much as possible before sampling, but that BW
>>needs to be at least as wide as the expected RANGE of the input
>>signal.
>>
>>The advantage of a tracking loop (feedback PLL) is that you can
>>further reduce the BW, since this narrow BW tracks the frequency as it
>>moves. The BW of the tracking filter needs to be only sufficiently
>>wide to track the RATE of movement. The pre-filter needs to be
>>sufficiently wide to cover the RANGE of movement. If the RATE is <<
>>the RANGE, then a tracking filter can be very helpful in increasing
>>the SNR by reducing the BW and hence the noise. A PLL is one way to
>>implement a tracking filter.
>
>Any advantage you can gain in S/N with a PLL you can also gain with
>proper filtered phase detection in a totally feed-forward process.
>Plus the feed-forward design gives you:
>
>1. Instant signal acquisition on start-up (no lock-up problem)
>
>2. No need for careful loop dynamics trimming.
>
>3. Lowest possible latency between frequency change and detection.
>
>-Robert Scott
> Hopkins, MN
You're still up against the problem of the observation time.  There's
that pesky T term in the CRLB.  Unless the SNR in the estimator
actually supports 10 ppb in a one-second observation, which must also
account for the quantization noise, you need to do something else.  I
haven't actually crunched the numbers here to see where the limits are.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
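To actually crunch those numbers, the standard single-tone frequency
CRLB (the Rife & Boorstyn form, for a complex tone in white noise) can
be evaluated directly.  A minimal sketch in Python; the sample rate and
SNR below are illustrative assumptions, and a real-valued tone roughly
doubles the variance bound:

    # Back-of-the-envelope CRLB check: can 10 ppb at 150 kHz be reached
    # in one second of observation?  snr_db is an assumed estimator SNR
    # (including quantization noise), not a measured figure.
    import numpy as np

    fs = 5000.0              # samples/s (the OP's undersampling rate)
    T = 1.0                  # observation time, s
    N = int(fs * T)          # number of samples
    snr_db = 30.0
    snr = 10 ** (snr_db / 10)

    # CRLB on the frequency of a single complex tone in AWGN:
    #   var(f_hat) >= 6 * fs^2 / ((2*pi)^2 * SNR * N * (N^2 - 1))
    sigma_f = np.sqrt(6 * fs**2 / ((2 * np.pi)**2 * snr * N * (N**2 - 1)))

    target = 10e-9 * 150e3   # 10 ppb of 150 kHz, in Hz
    print("CRLB sigma_f = %.2e Hz, 10 ppb target = %.2e Hz"
          % (sigma_f, target))

With these assumed numbers the bound comes out well below the 10 ppb
target, but the N*(N^2 - 1) term is exactly the pesky T dependence: cut
the observation time and the bound grows quickly.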
On Fri, 12 Feb 2016 01:45:52 -0800, a.turowski wrote:

> On Thursday, 11 February 2016 at 17:22:05 UTC, Tim Wescott wrote:
>> On Thu, 11 Feb 2016 07:33:24 -0800, a.turowski wrote:
>>
>>> On Thursday, 11 February 2016 at 13:42:02 UTC, mako...@yahoo.com
>>> wrote:
>>>> On Thursday, February 11, 2016 at 7:14:42 AM UTC-5, Tauno Voipio
>>>> wrote:
>>>>> On 11.2.16 12:21, a.turowski@ymail.com wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> Recently I've been faced with a problem of very accurate sine
>>>>>> wave frequency measurement. Let me give you some details:
>>>>>> I've got a sensor which outputs its measurement result as the
>>>>>> frequency of a sine wave. The frequency of the signal is within
>>>>>> the 149.4 kHz - 150.6 kHz range and is very stable, as the
>>>>>> sensed parameter changes very slowly. Say the frequency can
>>>>>> change no more than tens of ppb (parts per billion) per second.
>>>>>> Also the signal amplitude should be very stable - it may vary
>>>>>> very slowly with temperature. The goal is to measure the
>>>>>> frequency to 10-100 ppb accuracy at least every second - it may
>>>>>> be just the frequency change or, if possible, the absolute
>>>>>> frequency value. Let's denote the frequency we want to measure
>>>>>> as f.
>>>>>>
>>>>>> I spent some time thinking about a solution and here is the
>>>>>> idea I came up with. Let's assume that I undersample the signal
>>>>>> using a 5 kHz sampling frequency. Then I use an FFT to estimate
>>>>>> roughly what the signal frequency is. Let's denote it f_approx.
>>>>>> Using automatic gain control I make sure that the incoming
>>>>>> signal has a known and stable amplitude. Let's denote this
>>>>>> signal sig_in_agc. In the digital domain I generate a sine wave
>>>>>> of frequency f_approx and the same amplitude as sig_in_agc.
>>>>>> Let's denote this signal sig_approx. The next step is to
>>>>>> multiply sig_in_agc by sig_approx. This is standard mixing,
>>>>>> which gives me a signal that is the sum of two cosines: one
>>>>>> having frequency f_approx+f, the other f_approx-f. Then I
>>>>>> filter out the high-frequency component with a low-pass filter
>>>>>> and as a result I get only a signal whose frequency is the
>>>>>> difference f_approx-f. Let's denote it f_diff. Actually this
>>>>>> signal represents the instantaneous phase difference between
>>>>>> sig_in_agc and sig_approx. I sample this phase difference
>>>>>> periodically and can then calculate f_diff as the derivative of
>>>>>> the phase difference over time.
>>>>>>
>>>>>> Do you think the above makes any sense? Are there other
>>>>>> (better) methods that you can recommend to measure the
>>>>>> frequency accurately?
>>>>>>
>>>>>> Best regards,
>>>>>> Adam Turowski
>>>>>
>>>>> How about getting a reference-quality 150 kHz complex oscillator
>>>>> and doing I/Q mixing with it? You'll get an analytic signal (two
>>>>> channels) of -600 Hz to 600 Hz, which should then be sampled
>>>>> with good high-resolution A/D converters.
>>>>>
>>>>> You have to follow the phase of the signal to get ppb frequency
>>>>> changes every second.
>>>>>
>>>>> Your frequency reference will be critical at ppb accuracy
>>>>> levels.
>>>>>
>>>>> --
>>>>> -Tauno Voipio
>>>>
>>>> right..
>>>>
>>>> OP... fundamentally, if you are measuring the absolute frequency,
>>>> you will need a reference with accuracy better than what you are
>>>> trying to read....
>>>>
>>>> or
>>>>
>>>> if you are measuring only changes, then you will still need a
>>>> reference with STABILITY better than what you are trying to read.
>>>>
>>>> next, filter the BW down as low as possible; if the changes are
>>>> slow, use some type of PLL or tracking filter where the loop BW
>>>> is very narrow.
>>>>
>>>> no free lunch
>>>>
>>>> M
>>>
>>> Hi all,
>>>
>>> Thank you for the responses. Yes, I realize that the frequency
>>> reference is critical here (or at least its short-term stability
>>> if I only measure frequency changes).
>>>
>>> Evgeny,
>>>
>>> Thank you for the hint about the noise increase when
>>> undersampling. I think I will have to consider using, say, a 1 MHz
>>> sampling frequency and then down-converting, low-pass filtering
>>> and decimating in the digital domain.
>>>
>>> Tauno,
>>> I think that using an external analog I/Q mixer may not be a very
>>> good idea in my application. The reason is that I have to measure
>>> the phase very precisely, and I feel that having two mixers (one
>>> for I, the other for Q) will introduce phase mismatches obscuring
>>> the actual measurement. Anyway, thanks for the suggestion.
>>
>> I think you're best off keeping things in the digital domain as
>> much as possible. If you need mixing, do it in digital-land.
>>
>>> Mako,
>>> I fail to see how a PLL would help me in solving the problem. I
>>> understand that one could infer the phase difference between
>>> signals from the voltage/signal driving the VCO/NCO, but it
>>> wouldn't be easy. I think that the solution I proposed originally
>>> would give me the same answer without having to go through PLL
>>> behavior analysis (this is usually a very hard problem to crack).
>>
>> Phase-locked loop analysis is _easy_ if you have the chops. Lock to
>> the signal, then "infer" the frequency by looking at the frequency
>> of your reference.
>>
>> I don't think you need a super-fancy PLL here.
>>
>>> Any other ideas?
>>
>> Get a good book on PLL theory and read it.
>>
>> --
>> Tim Wescott
>> Wescott Design Services
>> http://www.wescottdesign.com
>
> Hi Tim,
>
> Can you recommend any good book on PLL theory and/or DPLL design?
The only book I use is "Phase-Locked Loop Circuit Design" by Wolaver --
and as the name implies, it's aimed squarely at circuit designers.  I
used to design digital PLLs by sketching a circuit diagram and then
writing code from it, which is probably not the best approach if you
just want to write code.  Other people with more current libraries have
recommended good-sounding books.  It may be worthwhile asking about PLL
books in a separate thread.

I do think that your biggest concern is to answer the question "is the
information there, and not masked by harmonics or random noise?".  If
the answer is "yes", then there are a whole bunch of different valid
ways to skin the cat (one of which is a PLL).  If the answer is "no",
then you can spin endlessly and never get anywhere.

--
www.wescottdesign.com
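Since a DPLL keeps coming up in this thread, here is a minimal sketch
of a second-order digital PLL used as a frequency estimator, in Python.
Every parameter here (sample rate, test tone, loop gains) is an
illustrative assumption, not a tuned design; choosing the loop
bandwidth against the signal's rate of change is exactly the analysis
being discussed above.

    # Second-order DPLL: multiplier phase detector -> PI loop filter
    # -> NCO.  Tracks a tone and reads the frequency off the NCO.
    import numpy as np

    fs = 5000.0                     # sample rate, Hz (assumed)
    f_true = 1200.0                 # tone to track, Hz (assumed)
    n = np.arange(50000)
    x = np.sin(2 * np.pi * f_true / fs * n)   # noiseless test tone

    kp, ki = 0.02, 2e-4             # loop gains set tracking bandwidth
    phase = 0.0                     # NCO phase, cycles
    freq = 1100.0 / fs              # initial NCO guess, cycles/sample
    hist = []

    for xn in x:
        # Multiplying by a quadrature NCO approximates a phase detector
        # for small errors; the double-frequency product term is
        # suppressed by the loop's own low-pass dynamics.
        err = xn * np.cos(2 * np.pi * phase)
        freq += ki * err            # integral path: frequency pull-in
        phase += freq + kp * err    # proportional path + NCO advance
        phase -= np.floor(phase)    # wrap phase into [0, 1)
        hist.append(freq)

    # Average the NCO frequency over the tail to smooth detector ripple.
    print("estimated frequency: %.3f Hz" % (np.mean(hist[-5000:]) * fs))

The exponential forgetting behavior Tim describes later in the thread
falls directly out of kp and ki: smaller gains mean a narrower loop,
more noise averaging, and slower response to frequency changes.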
On 2/12/2016 11:03 AM, radams2000@gmail.com wrote:
> Misread the frequency as 150 Hz. But anyway, I still think the
> tried-and-true FFT method is very robust.
The FFT would be hugely computationally expensive with 1 second of data
at even a 300 kHz sample rate.  The DFT is indicated here, where you
only need a few bins.

This reminded me of a couple of methods of pitch estimation I
considered once: the Average Magnitude Difference Function and the
Average Squared Difference Function.  I remember doing some simulation
that showed you could get a very sharp null in the AMDF, limited only
by the noise level.

Consider this.  Capture 1 second of data, then compute the AMDF on it
at a number of spacings corresponding to the 1200 Hz range of interest.
Calculating the value for every possible point would be of a similar
order of complexity as doing a DFT of the full spectrum.  This is
similar to only calculating the bins of interest in a DFT.  The
resolution of the bins will be similar to a DFT's, but I expect you can
interpolate the location of the null more accurately in the absence of
significant noise.

--
Rick
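A minimal sketch of that null search, using the squared-difference
variant (ASDF), since its null is quadratic in the lag error and so
suits parabolic interpolation (the plain AMDF null is V-shaped).  The
tone parameters below are toy values, chosen so the period spans many
samples:

    # ASDF null search with parabolic refinement of the null location.
    import numpy as np

    fs = 5000.0
    f_true = 98.7                         # Hz; period ~ 50.66 samples
    n = np.arange(int(fs))                # 1 second of data
    x = np.sin(2 * np.pi * f_true / fs * n)

    def asdf(x, lag):
        # Average squared difference at an integer lag; nulls occur
        # where the lag equals a whole number of periods.
        d = x[lag:] - x[:-lag]
        return np.mean(d * d)

    lags = np.arange(40, 62)              # search window around period
    d = np.array([asdf(x, k) for k in lags])
    k = np.argmin(d)                      # integer lag of deepest null

    # Fit a parabola through the three points around the minimum to
    # place the null to a fraction of a sample.
    denom = d[k - 1] - 2 * d[k] + d[k + 1]
    delta = 0.5 * (d[k - 1] - d[k + 1]) / denom
    period = lags[k] + delta              # samples
    print("estimated frequency: %.3f Hz" % (fs / period))

On noiseless data this recovers the toy tone to a small fraction of a
bin, which is the interpolation gain Rick is pointing at.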
[...snip...]

> There's really not much way
> around the time-frequency uncertainty principle; the more estimation
> accuracy you need, the longer you need to observe the signal.
[...snip...]
> Eric Jacobsen
> Anchor Hill Communications
> http://www.anchorhill.com
This statement is simply not true for a pure tone in a DFT.  In the
absence of noise, a very accurate value can be calculated with very few
data points, irrespective of how close the frequency is to being an
integer value within the frame.  It is true that in the presence of
noise, having more samples can mitigate the effect of the noise.  This
is also true for imprecision.

The proof of this can be found in my blog article titled "Exact
Frequency Formula for a Pure Real Tone in a DFT", which can be found
here:

http://www.dsprelated.com/showarticle/773.php

Now returning to the OP's original problem.  He has stated he has a
pure sinusoidal tone with the following features:

* 149.4 kHz - 150.6 kHz
* 5 kHz sampling frequency
* "Say the frequency can change no more than tens of ppb (parts per
  billion) per second."
* "The goal is to measure the frequency to 10-100 ppb accuracy at least
  every second."

Basically he is searching, via its alias, for a frequency that is
extremely high relative to his sampling rate, by a factor of about 30.
This means he is only getting one sample point for about every 30
cycles of his tone.  However, the range of possible values is only
1.2 kHz, which is considerably less than the Nyquist frequency of his
sampling rate.  Therefore, assuming that the sampling precision is high
enough, this should be doable.

Since the signal can vary its frequency, you actually want to limit the
sample interval to be as short as possible to get a good read.  My
frequency formula, like any estimator, is based on the assumption that
the signal is steady in amplitude and frequency within the sampling
interval.

Suppose the OP selects 64 data points as the sampling interval.  At
5 kHz sampling this is 0.0128 seconds, or just over a hundredth of a
second, so the signal should change by only about a ppb or less during
that interval.

He should use the formulas in my article to calculate the frequency.
When he gets to Equation 20, he will need to adjust the value returned
by the inverse cosine function so that the frequency he gets falls
within his range.

One final time, there is no uncertainty principle within a DFT
calculation.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
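The range adjustment Ced describes can be sketched independently of the
estimator itself (see the linked article for the Equation 20 details).
Whatever method produces an aliased estimate in [0, fs/2], mapping it
back to the true tone only uses the known 149.4-150.6 kHz range.  The
function below is a hypothetical helper illustrating that step, not
anything from the article:

    # Map an aliased frequency estimate back into the known tone range.
    def unfold(f_alias, fs=5000.0, f_lo=149.4e3, f_hi=150.6e3):
        # Return all true frequencies in [f_lo, f_hi] consistent with
        # an aliased estimate f_alias measured at sample rate fs.
        candidates = []
        k = int(f_lo // fs)
        while k * fs <= f_hi + fs:
            for f in (k * fs + f_alias, k * fs - f_alias):
                if f_lo <= f <= f_hi:
                    candidates.append(f)
            k += 1
        return sorted(candidates)

    # Example: an alias estimate of 400 Hz maps to two in-range
    # candidates, 149.6 kHz and 150.4 kHz.
    print(unfold(400.0))

Note that two candidates survive, which is exactly the disambiguation
problem raised in the next post.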
[...snip...]
> He should use the formulas in my article to calculate the frequency.
> When he gets to Equation 20, he will need to adjust the value returned
> by the inverse cosine function so that the frequency he gets falls
> within his range.
>
> One final time, there is no uncertainty principle within a DFT
> calculation.
>
> Ced
>
> ---------------------------------------
> Posted through http://www.DSPRelated.com
One more consideration.  Since his target value falls at an even
multiple of his sampling rate, there will be some disambiguation
problems.  He is much better off sampling at something like 24 kHz, so
his center frequency falls at half the Nyquist frequency.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
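A quick check of where the 150 kHz center aliases for each candidate
sample rate makes the point concrete (a small sketch, nothing more):

    # Where does a 150 kHz tone alias for each sample rate?
    def alias(f, fs):
        f = f % fs                 # fold into [0, fs)
        return min(f, fs - f)      # then into [0, fs/2]

    for fs in (5e3, 24e3):
        print("fs = %5.0f Hz: 150 kHz aliases to %6.0f Hz (Nyquist %5.0f Hz)"
              % (fs, alias(150e3, fs), fs / 2))

At fs = 5 kHz the alias lands at 0 Hz, folded onto DC where the sign of
a frequency deviation is ambiguous; at fs = 24 kHz it lands at 6 kHz,
the middle of the 0-12 kHz band, where the 1.2 kHz range unfolds
cleanly.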
On Thu, 11 Feb 2016 20:35:33 +0000, Eric Jacobsen wrote:

> There's really not much way around the
> time-frequency uncertainty principle; the more estimation accuracy you
> need, the longer you need to observe the signal.
I beg to differ, at least on a detail: there's no way around the time-
frequency-noise uncertainty.  If you have a purely noiseless
measurement of a perfect sine wave, then you need only four
measurements to get an exact result (three, I'm pretty sure, if the
middle measurement is not at phase = 0 or phase = pi).

If it were just time-frequency then there wouldn't be enough
information in a 1-second chunk to make the determination.
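The three-measurement claim can be made concrete: three equally spaced,
noiseless samples of a pure sine satisfy x[n-1] + x[n+1] =
2 cos(w) x[n], with w = 2*pi*f/fs in radians per sample, so the
frequency falls out of a single arccosine, provided the middle sample
is not at a zero crossing (the phase = 0 or pi caveat).  A toy
demonstration with made-up numbers:

    # Exact frequency from three noiseless, equally spaced samples.
    import numpy as np

    fs, f_true, phi = 5000.0, 1234.567, 0.7
    n = np.array([10, 11, 12])
    x = np.sin(2 * np.pi * f_true / fs * n + phi)

    # Identity: sin(a-w) + sin(a+w) = 2*sin(a)*cos(w).
    # Fails if x[1] == 0; w is only unambiguous for f below fs/2.
    w = np.arccos((x[0] + x[2]) / (2 * x[1]))
    print("recovered f = %.6f Hz" % (w * fs / (2 * np.pi)))

With any noise at all, of course, this three-sample estimate degrades
badly, which is where the time-frequency-noise trade comes back in.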
> 10ppb is pretty tight.
Yea verily.
> The DPLL does this by "observing" the signal from the time you
> turn it on, so it is continually converging up to the limits of the
> system (i.e., stability of the reference, numerical precision, etc.).
Well...

A DPLL will exponentially weight the effect of the samples, so it'll
effectively "forget" old readings, with the forgetting time being
proportional to the inverse of the loop bandwidth.  One easy way to
screw up the measurements would be to have a too-loose loop bandwidth
that lets too much noise through.  Another easy way would be to have a
too-tight bandwidth that doesn't track changes fast enough.
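Rough first-order numbers for that trade-off (all values illustrative):
a loop with noise bandwidth B_L effectively averages about 1/(2*B_L)
seconds of signal, and passes roughly the fraction B_L/(fs/2) of white
input noise:

    # Loop bandwidth vs. memory vs. noise, to first order.
    fs = 5000.0
    for B_L in (1.0, 10.0, 100.0):          # candidate loop BWs, Hz
        memory = 1.0 / (2.0 * B_L)          # effective averaging time, s
        noise_frac = B_L / (fs / 2.0)       # fraction of noise passed
        print("B_L = %6.1f Hz: memory ~ %6.3f s, noise fraction ~ %.4f"
              % (B_L, memory, noise_frac))

The design job is picking B_L wide enough to follow the tens-of-ppb-
per-second drift but no wider.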
> An algorithmic method, like Kay's or other methods,
> gets better as the observation interval increases.  Usually a DPLL is
> a good choice because the effective observation interval is much
> longer than what is often practical for an algorithmic method.
Maybe, maybe not.  It really depends on how rapidly the signal is
expected to change.  I'm still suspecting a DPLL is a good approach,
but there are things that could make my knee-jerk preference incorrect.

--
www.wescottdesign.com
On Fri, 12 Feb 2016 15:49:49 -0600, "Cedron" <103185@DSPRelated>
wrote:

>[...snip...]
>
>> There's really not much way
>> around the time-frequency uncertainty principle; the more estimation
>> accuracy you need, the longer you need to observe the signal.
>
>[...snip...]
>
>> Eric Jacobsen
>> Anchor Hill Communications
>> http://www.anchorhill.com
>
>This statement is simply not true for a pure tone in a DFT.  In the
>absence of noise, a very accurate value can be calculated with very few
>data points, irrespective of how close the frequency is to being an
>integer value within the frame.  It is true that in the presence of
>noise, having more samples can mitigate the effect of the noise.  This
>is also true for imprecision.
My statement was true for a fixed sample rate.  For arbitrary sample
rates, improvement in ultimate accuracy for a given SNR requires
additional observations (or samples), i.e., an increase in N.  The
proof of this is well known from the CRLB.  It can't be avoided.
>One final time, there is no uncertainty principle within a DFT
>calculation.
So Heisenberg was wrong.  Got it.

Sorry, I'll stick with the well-known math.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
On Fri, 12 Feb 2016 16:48:42 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>On Thu, 11 Feb 2016 20:35:33 +0000, Eric Jacobsen wrote:
>
>> There's really not much way around the
>> time-frequency uncertainty principle; the more estimation accuracy
>> you need, the longer you need to observe the signal.
>
>I beg to differ, at least on a detail: there's no way around the time-
>frequency-noise uncertainty.  If you have a purely noiseless
>measurement of a perfect sine wave, then you need only four
>measurements to get an exact result (three, I'm pretty sure, if the
>middle measurement is not at phase = 0 or phase = pi).
That requires that the frequency is not changing during the
observation.

Time-frequency uncertainty just says that you can't simultaneously get
ultimate resolution in both domains.  So if the frequency is changing
at all, you can't get a simultaneous high-resolution measure of when it
is at a particular frequency, or of what the frequency might be at a
given time.

It is intuitive in the difference between the shape of an impulse and a
constant tone (or a horizontal line), which are, not coincidentally,
transforms of each other.  If you want the impulse for high time
resolution, you can't simultaneously have high frequency resolution.

It really is intuitive, and extending the logic, it applies to many (or
maybe even most) frequency estimation problems, since the need for the
estimation arises because the frequency may change.  In this case the
frequency stability is high, so the time motion is slow and the ability
to get high frequency resolution is very good.  In the limit of
estimation capabilities, however, the principle still applies.
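A quick numerical illustration of that intuition: the spectral line of
a tone observed for T seconds is on the order of 1/T wide, so better
time localization directly costs frequency localization.  The
parameters below are arbitrary:

    # Main-lobe width of a T-second tone scales as 1/T.
    import numpy as np

    fs, f = 5000.0, 1200.0
    for T in (0.1, 1.0):
        N = int(fs * T)
        x = np.sin(2 * np.pi * f / fs * np.arange(N))
        X = np.abs(np.fft.rfft(x))
        half = X >= X.max() / 2                     # above half power
        width_hz = np.count_nonzero(half) * fs / N  # crude lobe width
        print("T = %.1f s: spectral line ~ %.1f Hz wide" % (T, width_hz))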
>If it were just time-frequency then there wouldn't be enough
>information in a 1-second chunk to make the determination.
>
>> 10ppb is pretty tight.
>
>Yea verily.
>
>> The DPLL does this by "observing" the signal from the time you
>> turn it on, so it is continually converging up to the limits of the
>> system (i.e., stability of the reference, numerical precision, etc.).
>
>Well...
>
>A DPLL will exponentially weight the effect of the samples, so it'll
>effectively "forget" old readings, with the forgetting time being
>proportional to the inverse of the loop bandwidth.  One easy way to
>screw up the measurements would be to have a too-loose loop bandwidth
>that lets too much noise through.  Another easy way would be to have a
>too-tight bandwidth that doesn't track changes fast enough.
Yup, hence careful analysis and design are imperative.
>> An algorithmic method, like Kay's or other methods,
>> gets better as the observation interval increases.  Usually a DPLL
>> is a good choice because the effective observation interval is much
>> longer than what is often practical for an algorithmic method.
>
>Maybe, maybe not.  It really depends on how rapidly the signal is
>expected to change.  I'm still suspecting a DPLL is a good approach,
>but there are things that could make my knee-jerk preference incorrect.
We'd probably both agree that there hasn't been enough detail given yet
to select a preferred approach, especially since the implementation
constraints haven't been outlined, which could make a difference.

I think it's a solvable problem, though.  And an interesting one!

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
On Fri, 12 Feb 2016 16:48:42 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>On Thu, 11 Feb 2016 20:35:33 +0000, Eric Jacobsen wrote:
>
>> There's really not much way around the
>> time-frequency uncertainty principle; the more estimation accuracy
>> you need, the longer you need to observe the signal.
>
>I beg to differ, at least on a detail: there's no way around the time-
>frequency-noise uncertainty.  If you have a purely noiseless
>measurement of a perfect sine wave, then you need only four
>measurements to get an exact result (three, I'm pretty sure, if the
>middle measurement is not at phase = 0 or phase = pi).
I should add that this also requires some a priori knowledge of the
frequency, i.e., the frequency uncertainty is already low.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
[...snip...]

>>It is true that in the presence of noise,
>>having more samples can mitigate the effect of the noise.  This is
>>also true for imprecision.
>
>My statement was true for a fixed sample rate.  For arbitrary sample
>rates, improvement in ultimate accuracy for a given SNR requires
>additional observations (or samples), i.e., an increase in N.  The
>proof of this is well known from the CRLB.  It can't be avoided.
You simply restated what I said.  Yes, more points will reduce the
effects of noise.  If there is no noise, more points will not improve
accuracy.  The same is true for imprecision.

Since the noise level is low according to the OP, there is a real
trade-off between selecting an interval long enough to minimize the
effect of noise and imprecision, yet short enough that the varying
frequency doesn't skew the results.

If the frequency is varying, the very question of "What is the
frequency for the interval?" is meaningless.  You can talk about the
center frequency, or the average frequency, but there is no single
frequency by definition.
>>One final time, there is no uncertainty principle within a DFT
>>calculation.
>
>So Heisenberg was wrong.  Got it.
>
>Sorry, I'll stick with the well-known math.
>
>Eric Jacobsen
>Anchor Hill Communications
>http://www.anchorhill.com
We aren't talking about probability distributions here, unless you
consider the limits of the precision and the presence of the noise as
such.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com