# frequency measurement and time-frequency uncertainty

Thread started July 25, 2018
```Randy Yates wrote:

> We have a requirement to measure a 32.768 KHz TTL output quickly
> and with a certain accuracy.

Can you expand on what is meant by "TTL output"?

Is this a 1-bit signal? A multi-bit PCM signal?

There is some merit to sampling a 1-bit signal at a fast sample
rate over a suitably long interval of time, and deriving a frequency
from that.  What could be called a "frequency counter".

Steve
```
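Steve's "frequency counter" idea -- sample the 1-bit signal at a fast rate over a long interval and derive a frequency from the capture -- can be sketched as follows. This is a hypothetical illustration; the function name, sample rate, and synthetic input are mine, not from the thread:

```python
import numpy as np

def count_frequency(bits, fs):
    """Estimate frequency of a 1-bit capture by counting rising edges
    over the whole record and dividing by the record duration."""
    # Cast to a signed type so a 1 -> 0 step doesn't wrap to 255.
    rising = np.count_nonzero(np.diff(bits.astype(np.int8)) == 1)
    return rising / (len(bits) / fs)   # edges per second ~ Hz

# Synthesize one second of an ideal 32.768 kHz square wave at 2 MS/s.
fs, f0 = 2_000_000, 32_768
t = np.arange(fs) / fs
bits = (np.sin(2 * np.pi * f0 * t) >= 0).astype(np.uint8)

print(count_frequency(bits, fs))
```

The result can be off by one count depending on where the gate lands on the waveform, which is exactly the one-count granularity the thread gets into below.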
```theman@ericjacobsen.org (Eric Jacobsen) writes:

> On Wed, 25 Jul 2018 12:27:58 -0400, Randy Yates
> <randyy@garnerundergroundinc.com> wrote:
>
>>We have a requirement to measure a 32.768 KHz TTL output quickly
>>and with a certain accuracy.
>>
>>If one used an N-bit ADC sampling at F samples per second, what is the
>>relationship between T_m and F_delta, where T_m is the minimum
>>measurement time (say, in seconds) for a maximum frequency error of
>>F_delta (say, in PPM)?
>>
>>For this discussion assume the frequency is stable over the measurement
>>time. Also assume the only noise is the quantization noise of the ADC.
>>
>>What is making my head hurt is some (seemingly) contradictory pieces of
>>information I've come across over the years:
>>
>>  1. If the input signal was noiseless and known to be a sinusoid, it
>>  only requires 3 samples to determine the frequency.
>>
>>  2. The signal isn't noiseless, so I think we're getting into some
>>  estimation theory here?
>>
>>  3. How can you square up the time-frequency uncertainty principle,
>>  which I take to mean that in order to reduce the uncertainty in
>>  frequency of a measurement, we have to increase the measurement
>>  time (with some magic proportion involved), with 1? It seems that
>>  if the assumptions of 1 were made, we can make arbitrarily faster
>>  measurements by increasing the sample rate.
>>
>>Can you guys set me straight?
>>--
>>Randy Yates
>>Embedded Linux Developer
>>http://www.garnerundergroundinc.com
>
> The description of the signal sounds like a watch crystal clock
> oscillator.   You said it is TTL levels, so is it a rectangular wave?
> Or is it filtered before you process it, or do you have control of
> that?
>
> If you can sample that signal at a reasonably high rate and just look
> for the transitions, if it really is NRZ rather than a waveform, then
> you can estimate the period twice per signal period (each half period)
> and filter those to increase the accuracy with time.   If you know
> something about the jitter behavior you may be able to optimize the
> averaging filter to improve performance.
>
> There will be some dependence on how fast you can sample the signal.
> If you can get phase lock, you really only need to sample quickly
> around the transitions, but that's probably not that helpful.
>
> But if it isn't an NRZ signal, then ignore most of what I just said.

Eric,

It's just a square wave. I said TTL but I think it's actually open-drain
CMOS. It is the clock output from an Intersil ISL1208 Real-Time Clock.
--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
```
```makolber@yahoo.com writes:

>>
>> > On 25.07.2018 19:27, Randy Yates wrote:
>> >> We have a requirement to measure a 32.768 KHz TTL output quickly
>> >> and with a certain accuracy.
>> >>
>> >> If one used an N-bit ADC sampling at F samples per second, what is the
>> >> relationship between T_m and F_delta, where T_m is the minimum
>> >> measurement time (say, in seconds) for a maximum frequency error of
>> >> F_delta (say, in PPM)?
>
> does the Cramer Rao bound fundamentally apply here?
> mark

It's been so long since I had my course in Detection/Estimation that I
can't really say, Mark. I think it would if we actually had a specific
estimator for estimating frequency and it was unbiased, but as of yet I
haven't really specified the actual estimator.

One estimator would be to simply count the number of pulses (cycles) in
a perfectly known period T. Let's assume everything is perfect: no
noise, no waveform variation, etc. So if there were N cycles in T
seconds, the estimated frequency would be N/T.

Is that estimator unbiased? I don't think so. I'm thinking this way: the
number of cycles counted, N, depends on the phase of the input relative
to the window T. Thinking of a perfect input of exactly 32.768 kHz, we
would get an N of 32,768 for -Tf/2 < phi <= 0 and 32,767 for 0 <= phi <
+Tf/2, where Tf = 1/32,768.

So the average, 32,767.5, would be biased.

Note that we are in fact performing the current estimate this way. We
are sampling for 20 seconds to be able to detect a minimum error of 2
PPM (actually a little less).

In fact the question which prompted me to post this was, if we used an
ADC, could we do the estimate in significantly less time? This is one step
in our manufacturing process in which we are determining the ATR value to
program in and the less time, the better. (See my post to Eric where
I stated this is an Intersil ISL1208 Real-Time Clock).
--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
```
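Randy's bias question can be probed with a quick Monte Carlo over the starting phase. The sketch below is hypothetical: it assumes the phase is uniform over one cycle, and detunes the clock slightly so that the gated count actually toggles between two adjacent integers:

```python
import numpy as np

rng = np.random.default_rng(0)
f_true, T = 32_768.3, 1.0        # slightly detuned clock, 1 s gate

def count_edges(phase):
    """Count rising edges t_k = (k - phase)/f_true in the window [0, T)."""
    first = int(np.ceil(phase))                   # smallest k with t_k >= 0
    last = int(np.ceil(f_true * T + phase)) - 1   # largest k with t_k < T
    return last - first + 1

counts = np.array([count_edges(p) for p in rng.uniform(0.0, 1.0, 100_000)])
print(counts.mean())   # near f_true * T = 32768.3
```

Averaged over a uniform phase the mean of N/T tracks the true frequency, but any single gate is still off by up to one whole count; whether one calls that "bias" depends on the assumed phase distribution, which is what the half-count argument above is wrestling with.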
```Gene Filatov <evgeny.filatov@ieee.org> writes:

> On 25.07.2018 23:38, Randy Yates wrote:
>> Gene Filatov <evgeny.filatov@ieee.org> writes:
>>
>>> On 25.07.2018 19:27, Randy Yates wrote:
>>>> We have a requirement to measure a 32.768 KHz TTL output quickly
>>>> and with a certain accuracy.
>>>>
>>>> If one used an N-bit ADC sampling at F samples per second, what is the
>>>> relationship between T_m and F_delta, where T_m is the minimum
>>>> measurement time (say, in seconds) for a maximum frequency error of
>>>> F_delta (say, in PPM)?
>>>>
>>>> For this discussion assume the frequency is stable over the measurement
>>>> time. Also assume the only noise is the quantization noise of the ADC.
>>>>
>>>> What is making my head hurt is some (seemingly) contradictory pieces of
>>>> information I've come across over the years:
>>>>
>>>>     1. If the input signal was noiseless and known to be a sinusoid, it
>>>>     only requires 3 samples to determine the frequency.
>>>>
>>>>     2. The signal isn't noiseless, so I think we're getting into some
>>>>     estimation theory here?
>>>>
>>>>     3. How can you square up the time-frequency uncertainty principle,
>>>>     which I take to mean that in order to reduce the uncertainty in
>>>>     frequency of a measurement, we have to increase the measurement
>>>>     time (with some magic proportion involved), with 1? It seems that
>>>>     if the assumptions of 1 were made, we can make arbitrarily faster
>>>>     measurements by increasing the sample rate.
>>>>
>>>> Can you guys set me straight?
>>>>
>>>
>>> I've opened the "Synchronization Techniques for Digital Receivers" by
>>> Mengali and D'Andrea, just to see that the estimation of symbol rate
>>> is not considered there.
>>>
>>> It can definitely be done (and probably has been done by someone in
>>> the past), just I cannot point you to the ready answer.
>>>
>>> As a rough guess, you can split the observation interval into two
>>> parts, estimate the symbol timing for each, and derive the symbol rate
>>> from those two measurements. That would be a suboptimal estimator, but
>>> if you need the answer tomorrow, might work for you.
>>>
>>> In that case, you could use pp. 64-67 of Mengali and D'Andrea's book.
>>
>> Er..., did you hit reply to the wrong post? There is no data on this
>> signal, just a sinusoid.
>>
>
> Did you say it's a TTL signal? So I assumed it's a square wave signal,
> in which case you can approach it as a baseband PAM signal, and use
> data-aided estimators.
>
> A square wave signal is not a sinusoid, because it occupies a very
> wide (in theory, infinite) region of the spectrum.
>
> However, if your signal is just a sinusoid, then I have been gravely
> wrong, and none of what I've said above applies to your case.

Gene,

Yes, it is a TTL signal, or more precisely, a square wave signal from a1
to a2 volts (ideally a1 = 0 and a2 = 3.3V). (Actually it is an
open-drain CMOS signal, see the Intersil ISL1208 IRQ*/FOUT pin.)

So yes, I guess you could analyze it this way. Sorry, my error.
--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
```
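Gene's split-the-interval idea can be sketched for the plain edge-time case. The numbers below are hypothetical, and the per-half "timing estimate" is just the mean edge time, standing in for a proper Mengali-D'Andrea estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
f0 = 32_768.0                 # true rate
M = 2_000                     # edges observed
jitter = 20e-9                # assumed RMS timing error per edge (s)

# Noisy edge timestamps t_k = k / f0 + noise
k = np.arange(M)
t = k / f0 + rng.normal(0.0, jitter, M)

# Timing estimate for each half = mean edge time; the rate follows from
# how far apart the two estimates are in time versus in edge index.
half = M // 2
period_hat = (t[half:].mean() - t[:half].mean()) / (k[half:].mean() - k[:half].mean())
print(1.0 / period_hat)       # close to 32768 Hz
```

As Gene says, this is suboptimal (it weights all edges in a half equally), but averaging over halves of 1000 edges already pushes the timing noise down by roughly sqrt(1000).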
```Randy Yates <yates@digitalsignallabs.com> writes:
> [...]
> One estimator would be to simply count the number of pulses (cycles) in
> a perfectly known period T. Let's assume everything is perfect: no
> noise, no waveform variation, etc. So if there were N cycles in T
> seconds, the estimated frequency would be N/T.
>
> Is that estimator unbiased? I don't think so. I'm thinking this way: the
> number of cycles counted, N, depends on the phase of the input relative
> to the window T. Thinking of a perfect input of exactly 32.768 kHz, we
> would get an N of 32,768 for -Tf/2 < phi <= 0 and 32,767 for 0 <= phi <
> +Tf/2, where Tf = 1/32,768.
>
> So the average, 32,767.5, would be biased.

Could we make it a biased estimator by simply adding 0.5?
--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
```
```Randy Yates <yates@digitalsignallabs.com> writes:

> Randy Yates <yates@digitalsignallabs.com> writes:
>> [...]
>> One estimator would be to simply count the number of pulses (cycles) in
>> a perfectly known period T. Let's assume everything is perfect: no
>> noise, no waveform variation, etc. So if there were N cycles in T
>> seconds, the estimated frequency would be N/T.
>>
>> Is that estimator unbiased? I don't think so. I'm thinking this way: the
>> number of cycles counted, N, depends on the phase of the input relative
>> to the window T. Thinking of a perfect input of exactly 32.768 kHz, we
>> would get an N of 32,768 for -Tf/2 < phi <= 0 and 32,767 for 0 <= phi <
>> +Tf/2, where Tf = 1/32,768.
>>
>> So the average, 32,767.5, would be biased.
>
> Could we make it a biased estimator by simply adding 0.5?

I meant could we make it an UNbiased estimator...
--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
```
```On Wednesday, July 25, 2018 at 12:28:07 PM UTC-4, Randy Yates wrote:
> We have a requirement to measure a 32.768 KHz TTL output quickly
> and with a certain accuracy.
>
> If one used an N-bit ADC sampling at F samples per second, what is the
> relationship between T_m and F_delta, where T_m is the minimum
> measurement time (say, in seconds) for a maximum frequency error of
> F_delta (say, in PPM)?
>
> For this discussion assume the frequency is stable over the measurement
> time. Also assume the only noise is the quantization noise of the ADC.
>
> What is making my head hurt is some (seemingly) contradictory pieces of
> information I've come across over the years:
>
>   1. If the input signal was noiseless and known to be a sinusoid, it
>   only requires 3 samples to determine the frequency.
>
>   2. The signal isn't noiseless, so I think we're getting into some
>   estimation theory here?
>
>   3. How can you square up the time-frequency uncertainty principle,
>   which I take to mean that in order to reduce the uncertainty in
>   frequency of a measurement, we have to increase the measurement
>   time (with some magic proportion involved), with 1? It seems that
>   if the assumptions of 1 were made, we can make arbitrarily faster
>   measurements by increasing the sample rate.
>
> Can you guys set me straight?
> --
> Randy Yates
> Embedded Linux Developer
> http://www.garnerundergroundinc.com
========================================================
You're right -- estimation theory.  There's a known closed form solution.
First, the time of each zero crossing is accurately interpolated and the RMS
error in it is a function of the sample rate.  So you have a 2-state system
with a sequence of observables, each being the sum of 'X(1)' which is the time
of the first zero crossing and an integer multiple of the period 'X(2)' giving
a vector expression for any number 'M' of consecutive zero crossings:
z = HX + (measurement error)
with dimensions Mx1 for 'z' and Mx2 for 'H'.  All elements in the first column
of 'H' are 1 and the 'jth' element in the second column is just (j-1).  For
uniform variance in all measurement errors, unweighted least squares readily
provides the solution, and also its variance, as a function of 'M'.
I coauthored a more general case (bit sync, with arbitrary presence/absence of
zero crossings instead of having an observable available every time) appearing
in "Statistical Bit Synchronization in Digital Communications" IEEE Trans-COM,
August 1971.  Also a very closely related form, involving position and speed
along a line, appears (along with variance expressions) on pages 289-290 of
```
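The least-squares formulation above drops straight into code. This is a hypothetical simulation (the crossing count M, the per-crossing RMS timing error, and the starting offset are made-up values): H has a column of ones and a column running 0..M-1, and an unweighted fit of the crossing times recovers the first-crossing time X(1) and the period X(2).

```python
import numpy as np

rng = np.random.default_rng(2)
f0 = 32_768.0
M = 200                  # consecutive zero crossings observed
sigma = 20e-9            # assumed RMS interpolation error per crossing (s)

# Simulated observables: z_j = X(1) + (j-1) * X(2) + measurement error
j = np.arange(M)
z = 1.234e-3 + j / f0 + rng.normal(0.0, sigma, M)

# z = H X, with H of size M x 2: first column all ones, second column j-1
H = np.column_stack([np.ones(M), j])
(x1_hat, x2_hat), *_ = np.linalg.lstsq(H, z, rcond=None)

f_hat = 1.0 / x2_hat     # estimated frequency from the fitted period
print(f_hat)             # close to 32768 Hz
```

The covariance of the fit, sigma^2 * inv(H^T H), gives the variance-as-a-function-of-M expressions the post refers to.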
```On Wed, 25 Jul 2018 19:30:53 -0400, Randy Yates
<yates@digitalsignallabs.com> wrote:

>theman@ericjacobsen.org (Eric Jacobsen) writes:
>
>> On Wed, 25 Jul 2018 12:27:58 -0400, Randy Yates
>> <randyy@garnerundergroundinc.com> wrote:
>>
>>>We have a requirement to measure a 32.768 KHz TTL output quickly
>>>and with a certain accuracy.
>>>
>>>If one used an N-bit ADC sampling at F samples per second, what is the
>>>relationship between T_m and F_delta, where T_m is the minimum
>>>measurement time (say, in seconds) for a maximum frequency error of
>>>F_delta (say, in PPM)?
>>>
>>>For this discussion assume the frequency is stable over the measurement
>>>time. Also assume the only noise is the quantization noise of the ADC.
>>>
>>>What is making my head hurt is some (seemingly) contradictory pieces of
>>>information I've come across over the years:
>>>
>>>  1. If the input signal was noiseless and known to be a sinusoid, it
>>>  only requires 3 samples to determine the frequency.
>>>
>>>  2. The signal isn't noiseless, so I think we're getting into some
>>>  estimation theory here?
>>>
>>>  3. How can you square up the time-frequency uncertainty principle,
>>>  which I take to mean that in order to reduce the uncertainty in
>>>  frequency of a measurement, we have to increase the measurement
>>>  time (with some magic proportion involved), with 1? It seems that
>>>  if the assumptions of 1 were made, we can make arbitrarily faster
>>>  measurements by increasing the sample rate.
>>>
>>>Can you guys set me straight?
>>>--
>>>Randy Yates
>>>Embedded Linux Developer
>>>http://www.garnerundergroundinc.com
>>
>> The description of the signal sounds like a watch crystal clock
>> oscillator.   You said it is TTL levels, so is it a rectangular wave?
>> Or is it filtered before you process it, or do you have control of
>> that?
>>
>> If you can sample that signal at a reasonably high rate and just look
>> for the transitions, if it really is NRZ rather than a waveform, then
>> you can estimate the period twice per signal period (each half period)
>> and filter those to increase the accuracy with time.   If you know
>> something about the jitter behavior you may be able to optimize the
>> averaging filter to improve performance.
>>
>> There will be some dependence on how fast you can sample the signal.
>> If you can get phase lock, you really only need to sample quickly
>> around the transitions, but that's probably not that helpful.
>>
>> But if it isn't an NRZ signal, then ignore most of what I just said.
>
>Eric,
>
>It's just a square wave. I said TTL but I think it's actually open-drain
>CMOS. It is the clock output from an Intersil ISL1208 Real-Time Clock.
>--
>Randy Yates, DSP/Embedded Firmware Developer
>Digital Signal Labs
>http://www.digitalsignallabs.com

I'll back up a little bit and first throw out some opinions on your
three bullet points:

>>>  1. If the input signal was noiseless and known to be a sinusoid, it
>>>  only requires 3 samples to determine the frequency.

Of academic interest but not of much practical value, especially since
you're dealing with a clock signal.

>>>  2. The signal isn't noiseless, so I think we're getting into some
>>>  estimation theory here?

IMHO it's *always* estimation.   There are no pure frequencies or
noiseless signals, and we are always limited to observing them over a
finite time.   So, yeah, estimation.

>>>  3. How can you square up the time-frequency uncertainty principle,
>>>  which I take to mean that in order to reduce the uncertainty in
>>>  frequency of a measurement, we have to increase the measurement
>>>  time (with some magic proportion involved), with 1? It seems that
>>>  if the assumptions of 1 were made, we can make arbitrarily faster
>>>  measurements by increasing the sample rate.

People have pointed out previously that it kind of boils down to the
number of observations made rather than the time.   That's a pretty
easy thought experiment to sort out, since in DSP-world time is pretty
arbitrary, but the number of samples isn't.

I like the idea of the time-frequency uncertainty principle and I
think it's a very useful tool to explain and understand a lot of
things, but it isn't such a concrete thing that one should get hung up
on it.   e.g., time is abstract in DSP, but the idea that getting
resolution in one domain across a transform usually degrades
resolution in the other is useful, even though not always absolute.
You gave a good supporting example that is pertinent, though.

But, back to the task at hand, you haven't really said much as far as
details of any constraints or specs you're working with or what you're
actually trying to accomplish.    If you want to calibrate or test the
accuracy of the clock frequency, you'll need a reliable outside
reference for comparison to either drive a sampling clock or a PLL to
lock with or something.   The pps from a GPS is often used as a stable
reference for that kind of thing (and, yeah, the longer you let it
settle and stabilize, the more accurate the test can be, so t-f
uncertainty is useful there, too).

Once you have a stable reference, you can make a PLL to lock to the
signal under test and just measure the offset in the loop filter (the
integrator tells you how far off they are from each other).

Or, you can use the outside reference to drive a fast sampling clock
and count zero crossings or measure periods and filter the jitter.

Or insert other favorite method here.

Which one works fastest or easiest or meets all your requirements is
something you have to evaluate against your constraints.

```
```I think in this case the length of the observation window is
important, but one does not need to sample continuously
throughout the observation window -- all you need is the
time-position of one edge near the beginning of the window,
another near the end of the window, and a count of the number
of in-between edges.  This gives you the frequency.

I've used this for frequency-tracking, it can work perfectly well.

Steve
```
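Steve's recipe reduces to one line once the two edge timestamps are in hand. A minimal sketch (the helper name and numbers are mine), assuming n_between counts the edges strictly between the first and last observed edges:

```python
def frequency_from_edges(t_first, t_last, n_between):
    """Two interpolated edge times plus a count of the edges between
    them: n_between + 1 full periods elapse from t_first to t_last."""
    return (n_between + 1) / (t_last - t_first)

# Ideal check: 32768 edges observed over roughly one second.
f0 = 32_768.0
n_total = 32_768
t_first, t_last = 0.0, (n_total - 1) / f0
print(frequency_from_edges(t_first, t_last, n_total - 2))   # 32768.0
```

Only the two endpoint edges need accurate interpolation; the in-between edges contribute nothing but an integer count, which is why no continuous sampling is needed across the window.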
```On Thu, 26 Jul 2018 04:59:36 +0000 (UTC), spope384@gmail.com (Steve
Pope) wrote:

>I think in this case the length of the observation window is
>important, but one does not need to sample continuously
>throughout the observation window -- all you need is the
>time-position of one edge near the beginning of the window,
>another near the end of the window, and a count of the number
>of in-between edges.  This gives you the frequency.
>
>I've used this for frequency-tracking, it can work perfectly well.
>
>Steve

Yeah, just depends on how much accuracy is needed in how much time.

```