# frequency measurement and time-frequency uncertainty

Started by Randy Yates on July 25, 2018
```We have a requirement to measure a 32.768 KHz TTL output quickly
and with a certain accuracy.

If one used an N-bit ADC sampling at F samples per second, what is the
relationship between T_m and F_delta, where T_m is the minimum
measurement time (say, in seconds) for a maximum frequency error of
F_delta (say, in PPM)?

For this discussion assume the frequency is stable over the measurement
time. Also assume the only noise is the quantization noise of the ADC.

What is making my head hurt is some (seemingly) contradictory pieces of
information I've come across over the years:

1. If the input signal was noiseless and known to be a sinusoid, it
only requires 3 samples to determine the frequency.

2. The signal isn't noiseless, so I think we're getting into some
estimation theory here?

3. How can you square up the time-frequency uncertainty principle,
which I take to mean that in order to reduce the uncertainty in
frequency of a measurement, we have to increase the measurement
time (with some magic proportion involved), with 1? It seems that
if the assumptions of 1 were made, we can make arbitrarily faster
measurements by increasing the sample rate.

Can you guys set me straight?
--
Randy Yates
Embedded Linux Developer
http://www.garnerundergroundinc.com
```
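Point 1 in the post above can be checked directly: for a noiseless sinusoid x[n] = A*sin(w*n + phi), three consecutive samples satisfy x[n-1] + x[n+1] = 2*cos(w)*x[n], which pins down w regardless of amplitude and phase. A minimal sketch, with an assumed 1 MHz sample rate and arbitrary amplitude/phase (none of these values come from the thread):

```python
import math

def freq_from_three_samples(x0, x1, x2, fs):
    """Recover the frequency of a noiseless sinusoid from three
    consecutive samples via x0 + x2 = 2*cos(w)*x1 (requires x1 != 0
    and less than half a cycle per sample)."""
    w = math.acos((x0 + x2) / (2.0 * x1))
    return w * fs / (2.0 * math.pi)

# Demo: a 32768 Hz tone sampled at 1 MHz; amplitude and phase are
# arbitrary and drop out of the estimate.
fs, f, amp, phi = 1.0e6, 32768.0, 0.7, 0.3
x = [amp * math.sin(2 * math.pi * f * n / fs + phi) for n in range(3)]
print(freq_from_three_samples(x[0], x[1], x[2], fs))  # ~32768.0
```

Note the "three samples suffice" claim really does depend on the noiseless assumption: `acos` amplifies any perturbation of its argument enormously near the endpoints.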
```OK, here is one thing jumping out at me.

If the frequency is known (and it is, within some fairly narrow
bandwidth), you can bandpass filter the ADC signal to reduce the noise
and increase the measurement accuracy (speaking roughly). But we all
know that the narrower the filter, the longer it will take for the
output to settle. So there you go, time-frequency uncertainty.

Randy Yates <randyy@garnerundergroundinc.com> writes:

> We have a requirement to measure a 32.768 KHz TTL output quickly
> and with a certain accuracy.
>
> If one used an N-bit ADC sampling at F samples per second, what is the
> relationship between T_m and F_delta, where T_m is the minimum
> measurement time (say, in seconds) for a maximum frequency error of
> F_delta (say, in PPM)?
>
> For this discussion assume the frequency is stable over the measurement
> time. Also assume the only noise is the quantization noise of the ADC.
>
> What is making my head hurt is some (seemingly) contradictory pieces of
> information I've come across over the years:
>
>   1. If the input signal was noiseless and known to be a sinusoid, it
>   only requires 3 samples to determine the frequency.
>
>   2. The signal isn't noiseless, so I think we're getting into some
>   estimation theory here?
>
>   3. How can you square up the time-frequency uncertainty principle,
>   which I take to mean that in order to reduce the uncertainty in
>   frequency of a measurement, we have to increase the measurement
>   time (with some magic proportion involved), with 1? It seems that
>   if the assumptions of 1 were made, we can make arbitrarily faster
>   measurements by increasing the sample rate.
>
> Can you guys set me straight?

--
Randy Yates
Embedded Linux Developer
http://www.garnerundergroundinc.com
```
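The narrower-filter/longer-settling tradeoff in the post above can be made concrete with a one-pole smoother: its bandwidth scales with (1 - a) while its settling time scales with 1/(1 - a), so narrowing one stretches the other by the same factor. A rough numerical sketch (the pole values and the 1 MHz rate are assumed for illustration):

```python
import math

def settle_samples(a, frac=0.99):
    """Samples until the step response y[n] = 1 - a**n of the one-pole
    smoother y[n] = (1 - a)*x[n] + a*y[n-1] first reaches `frac`."""
    y, n = 0.0, 0
    while y < frac:
        y = (1 - a) + a * y
        n += 1
    return n

fs = 1.0e6  # assumed sample rate
for a in (0.99, 0.999):
    bw_hz = fs * (1 - a) / (2 * math.pi)  # approximate -3 dB bandwidth
    print(f"a={a}: bandwidth ~{bw_hz:.0f} Hz, "
          f"99% settling in {settle_samples(a)} samples")
```

Ten times narrower bandwidth costs ten times the settling time, which is the time-frequency tradeoff in its crudest form.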
```Am 25.07.2018 um 18:27 schrieb Randy Yates:
> We have a requirement to measure a 32.768 KHz TTL output quickly
> and with a certain accuracy.
>
> If one used an N-bit ADC sampling at F samples per second, what is the
> relationship between T_m and F_delta, where T_m is the minimum
> measurement time (say, in seconds) for a maximum frequency error of
> F_delta (say, in PPM)?

Basically the inaccuracy is 1 cycle, i.e. if you measure 32 kHz for one
second you get about 1/32k = 30 ppm.

The situation changes if the waveform is *exactly* known. In this case
less than a single cycle might be sufficient to get the same accuracy.
It all depends on the definition of /exactly/ and, of course, on whether
your ADC is precise enough.

> What is making my head hurt is some (seemingly) contradictory pieces of
> information I've come across over the years:
>
>    1. If the input signal was noiseless and known to be a sinusoid, it
>    only requires 3 samples to determine the frequency.

In theory yes, unless your samples are degenerate.

>    2. The signal isn't noiseless, so I think we're getting into some
>    estimation theory here?

If you know the amplitude of your noise you can calculate its effect on
the result by error propagation with partial derivatives.
The more difficult part is often getting reliable values for your noise.
E.g. how closely does your source follow a sinusoidal waveform?

>    3. How can you square up the time-frequency uncertainty principle,
>    which I take to mean that in order to reduce the uncertainty in
>    frequency of a measurement, we have to increase the measurement
>    time (with some magic proportion involved), with 1? It seems that
>    if the assumptions of 1 were made, we can make arbitrarily faster
>    measurements by increasing the sample rate.

Increasing the sample rate is not sufficient unless noise is entirely
absent. The closer together two samples are, the smaller their expected
difference, and the more the noise dominates.

It all depends on your measurement setup. With a smart setup a great
deal is possible. I have measured a delay of 5 ns on a short cable with
an ordinary on-board AC97 sound device (i.e. approx. 50 µs per sample).

Marcel
```
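Marcel's back-of-the-envelope number comes from the one-cycle ambiguity of a plain cycle counter: over a gate time T the count is uncertain by about one cycle, so delta_f ≈ 1/T, independent of how fast you sample:

```python
f0 = 32768.0          # Hz, nominal crystal frequency
gate = 1.0            # s, measurement (gate) time
delta_f = 1.0 / gate  # one-cycle counting ambiguity, in Hz
ppm = delta_f / f0 * 1e6
print(f"{ppm:.1f} ppm")  # about 30.5 ppm
```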
```On 25.07.2018 19:27, Randy Yates wrote:
> We have a requirement to measure a 32.768 KHz TTL output quickly
> and with a certain accuracy.
>
> If one used an N-bit ADC sampling at F samples per second, what is the
> relationship between T_m and F_delta, where T_m is the minimum
> measurement time (say, in seconds) for a maximum frequency error of
> F_delta (say, in PPM)?
>
> For this discussion assume the frequency is stable over the measurement
> time. Also assume the only noise is the quantization noise of the ADC.
>
> What is making my head hurt is some (seemingly) contradictory pieces of
> information I've come across over the years:
>
>    1. If the input signal was noiseless and known to be a sinusoid, it
>    only requires 3 samples to determine the frequency.
>
>    2. The signal isn't noiseless, so I think we're getting into some
>    estimation theory here?
>
>    3. How can you square up the time-frequency uncertainty principle,
>    which I take to mean that in order to reduce the uncertainty in
>    frequency of a measurement, we have to increase the measurement
>    time (with some magic proportion involved), with 1? It seems that
>    if the assumptions of 1 were made, we can make arbitrarily faster
>    measurements by increasing the sample rate.
>
> Can you guys set me straight?
>

I've opened "Synchronization Techniques for Digital Receivers" by
Mengali and D'Andrea, only to find that estimation of the symbol rate is
not considered there.

It can definitely be done (and probably has been done by someone in the
past), just I cannot point you to the ready answer.

As a rough guess, you can split the observation interval into two parts,
estimate the symbol timing for each, and derive the symbol rate from
those two measurements. That would be a suboptimal estimator, but if you
need the answer tomorrow, might work for you.

In that case, you could use pp. 64-67 of Mengali and D'Andrea's book.

Gene.

```
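Gene's split-the-interval idea can be sketched for a plain tone: correlate each half of the record against a coarse frequency guess, then refine the frequency from the residual phase advance between the two half-interval correlations. All numbers here (1 MHz rate, record length, the 0.4 Hz offset) are assumed illustration values, not from the thread:

```python
import cmath
import math

def freq_from_split_phase(x, fs, f_coarse):
    """Correlate each half of x against f_coarse; the phase advance of
    the second half-correlation over the first, divided by the half
    length, gives the residual frequency offset."""
    n = len(x) // 2
    w = 2 * math.pi * f_coarse / fs
    c1 = sum(x[t] * cmath.exp(-1j * w * t) for t in range(n))
    c2 = sum(x[t + n] * cmath.exp(-1j * w * (t + n)) for t in range(n))
    dphi = cmath.phase(c2 / c1)  # must stay within +/- pi to be unambiguous
    return f_coarse + dphi * fs / (2 * math.pi * n)

# Demo: true 32768.4 Hz tone, coarse guess 32768 Hz, 20 ms of data.
fs, f_true, f_coarse = 1.0e6, 32768.4, 32768.0
x = [math.cos(2 * math.pi * f_true * t / fs) for t in range(20000)]
print(freq_from_split_phase(x, fs, f_coarse))  # close to 32768.4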
```Hi Marcel,

Marcel Mueller <news.5.maazl@spamgourmet.org> writes:

> Am 25.07.2018 um 18:27 schrieb Randy Yates:
>> We have a requirement to measure a 32.768 KHz TTL output quickly
>> and with a certain accuracy.
>>
>> If one used an N-bit ADC sampling at F samples per second, what is the
>> relationship between T_m and F_delta, where T_m is the minimum
>> measurement time (say, in seconds) for a maximum frequency error of
>> F_delta (say, in PPM)?
>
> Basically the inaccuracy is 1 cycle, i.e. if you measure 32 kHz for
> one second you get about 1/32k = 30ppm.

Where do you get this from?

> The situation changes if the waveform is *exactly* known. In this
> case less than a single cycle might be sufficient to get the same
> accuracy. It all depends on the definition of /exactly/ and, of
> course, on whether your ADC is precise enough.

I'm not sure I follow you here, but let's say we know it's a sinusoid
(the shape is sinusoid), but we don't know its amplitude, frequency, or
phase. However we do know its frequency within a certain range.

>> What is making my head hurt is some (seemingly) contradictory pieces of
>> information I've come across over the years:
>>
>>    1. If the input signal was noiseless and known to be a sinusoid, it
>>    only requires 3 samples to determine the frequency.
>
> In theory yes, unless your samples are degenerate.
>
>>    2. The signal isn't noiseless, so I think we're getting into some
>>    estimation theory here?
>
> If you know the amplitude of your noise you can calculate its effect
> on the result by error propagation with partial derivatives.
> The more difficult part is often getting reliable values for your
> noise. E.g. how closely does your source follow a sinusoidal waveform?

This doesn't move my Understand-O-Meter at all.

>>    3. How can you square up the time-frequency uncertainty principle,
>>    which I take to mean that in order to reduce the uncertainty in
>>    frequency of a measurement, we have to increase the measurement
>>    time (with some magic proportion involved), with 1? It seems that
>>    if the assumptions of 1 were made, we can make arbitrarily faster
>>    measurements by increasing the sample rate.
>
> Increasing the sample rate is not sufficient unless noise is entirely
> absent.

"Sufficient"? It is a fact that increasing the sample rate will indeed
reduce the amount of noise within a fixed bandwidth, since the
quantization noise power spectral density decreases with increasing
sample rate. So increasing the sample rate is going to help reduce
the noise. Seems like that ought to help out somehow.

> [...]

Thanks for the response, Marcel.
--
Randy Yates
Embedded Linux Developer
http://www.garnerundergroundinc.com
```
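Randy's point about quantization noise can be quantified with the usual white-noise model: the total quantization power q²/12 is spread uniformly over [0, fs/2], so the noise that lands in a fixed signal band halves every time the sample rate doubles. A sketch, with assumed 12-bit, ±1 V, 1 kHz-band values:

```python
def inband_quant_noise(nbits, full_scale, fs, band_hz):
    """In-band quantization noise power under the usual white-noise
    model: total power q**2/12 spread uniformly over [0, fs/2]."""
    q = 2.0 * full_scale / (2 ** nbits)  # LSB size for a +/-full_scale ADC
    total = q * q / 12.0
    return total * band_hz / (fs / 2.0)

# Doubling fs halves the noise that falls in a fixed 1 kHz band.
p1 = inband_quant_noise(12, 1.0, 1.0e6, 1.0e3)
p2 = inband_quant_noise(12, 1.0, 2.0e6, 1.0e3)
print(p2 / p1)  # 0.5
```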
```Gene Filatov <evgeny.filatov@ieee.org> writes:

> On 25.07.2018 19:27, Randy Yates wrote:
>> We have a requirement to measure a 32.768 KHz TTL output quickly
>> and with a certain accuracy.
>>
>> If one used an N-bit ADC sampling at F samples per second, what is the
>> relationship between T_m and F_delta, where T_m is the minimum
>> measurement time (say, in seconds) for a maximum frequency error of
>> F_delta (say, in PPM)?
>>
>> For this discussion assume the frequency is stable over the measurement
>> time. Also assume the only noise is the quantization noise of the ADC.
>>
>> What is making my head hurt is some (seemingly) contradictory pieces of
>> information I've come across over the years:
>>
>>    1. If the input signal was noiseless and known to be a sinusoid, it
>>    only requires 3 samples to determine the frequency.
>>
>>    2. The signal isn't noiseless, so I think we're getting into some
>>    estimation theory here?
>>
>>    3. How can you square up the time-frequency uncertainty principle,
>>    which I take to mean that in order to reduce the uncertainty in
>>    frequency of a measurement, we have to increase the measurement
>>    time (with some magic proportion involved), with 1? It seems that
>>    if the assumptions of 1 were made, we can make arbitrarily faster
>>    measurements by increasing the sample rate.
>>
>> Can you guys set me straight?
>>
>
> I've opened the "Synchronization Techniques for Digital Receivers" by
> Mengali and D'Andrea, just to see that the estimation of symbol rate
> is not considered there.
>
> It can definitely be done (and probably has been done by someone in
> the past), just I cannot point you to the ready answer.
>
> As a rough guess, you can split the observation interval into two
> parts, estimate the symbol timing for each, and derive the symbol rate
> from those two measurements. That would be a suboptimal estimator, but
> if you need the answer tomorrow, might work for you.
>
> In that case, you could use pp. 64-67 of Mengali and D'Andrea's book.

Er..., did you hit reply to the wrong post? There is no data on this
signal, just a sinusoid.
--
Randy Yates
Embedded Linux Developer
http://www.garnerundergroundinc.com
```
```>
> > On 25.07.2018 19:27, Randy Yates wrote:
> >> We have a requirement to measure a 32.768 KHz TTL output quickly
> >> and with a certain accuracy.
> >>
> >> If one used an N-bit ADC sampling at F samples per second, what is the
> >> relationship between T_m and F_delta, where T_m is the minimum
> >> measurement time (say, in seconds) for a maximum frequency error of
> >> F_delta (say, in PPM)?

does the Cramér-Rao bound fundamentally apply here?
mark
```
```On Wednesday, July 25, 2018 at 4:51:53 PM UTC-4, mako...@yahoo.com wrote:
> >
> > > On 25.07.2018 19:27, Randy Yates wrote:
> > >> We have a requirement to measure a 32.768 KHz TTL output quickly
> > >> and with a certain accuracy.
> > >>
> > >> If one used an N-bit ADC sampling at F samples per second, what is the
> > >> relationship between T_m and F_delta, where T_m is the minimum
> > >> measurement time (say, in seconds) for a maximum frequency error of
> > >> F_delta (say, in PPM)?
>
> does the Cramér-Rao bound fundamentally apply here?
> mark

ps

http://www.dtic.mil/dtic/tr/fulltext/u2/a167992.pdf

mark
```
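To mark's question: yes, the Cramér-Rao bound applies, and it ties the thread together. For a single tone in white Gaussian noise the Rife & Boorstyn bound has the frequency variance shrinking as 1/(SNR · N³) over N samples, so for a fixed sample rate the error falls as T^(-3/2), faster than the 1/T of plain cycle counting, yet it still blows up as the record shrinks, which is the uncertainty tradeoff in statistical form. A numeric sketch (the 20 dB SNR and 1 MHz rate are assumed):

```python
import math

def crb_freq_std_hz(snr_linear, n_samples, fs):
    """Cramér-Rao lower bound on the standard deviation of a single-tone
    frequency estimate in white Gaussian noise (Rife & Boorstyn form):
    var(w) >= 6 / (snr * N * (N**2 - 1)) in (rad/sample)**2,
    with snr = A**2 / (2*sigma**2)."""
    var_w = 6.0 / (snr_linear * n_samples * (n_samples ** 2 - 1))
    return math.sqrt(var_w) * fs / (2.0 * math.pi)

fs, snr = 1.0e6, 100.0  # assumed: 1 MHz sampling, 20 dB SNR
for n in (100, 1000, 10000):
    print(n, crb_freq_std_hz(snr, n, fs))
```

Ten times the samples buys roughly 31.6x (10^1.5) less standard deviation, not just 10x.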
```On Wed, 25 Jul 2018 12:27:58 -0400, Randy Yates
<randyy@garnerundergroundinc.com> wrote:

>We have a requirement to measure a 32.768 KHz TTL output quickly
>and with a certain accuracy.
>
>If one used an N-bit ADC sampling at F samples per second, what is the
>relationship between T_m and F_delta, where T_m is the minimum
>measurement time (say, in seconds) for a maximum frequency error of
>F_delta (say, in PPM)?
>
>For this discussion assume the frequency is stable over the measurement
>time. Also assume the only noise is the quantization noise of the ADC.
>
>What is making my head hurt is some (seemingly) contradictory pieces of
>information I've come across over the years:
>
>  1. If the input signal was noiseless and known to be a sinusoid, it
>  only requires 3 samples to determine the frequency.
>
>  2. The signal isn't noiseless, so I think we're getting into some
>  estimation theory here?
>
>  3. How can you square up the time-frequency uncertainty principle,
>  which I take to mean that in order to reduce the uncertainty in
>  frequency of a measurement, we have to increase the measurement
>  time (with some magic proportion involved), with 1? It seems that
>  if the assumptions of 1 were made, we can make arbitrarily faster
>  measurements by increasing the sample rate.
>
>Can you guys set me straight?
>--
>Randy Yates
>Embedded Linux Developer
>http://www.garnerundergroundinc.com

The description of the signal sounds like a watch crystal clock
oscillator.   You said it is TTL levels, so is it a rectangular wave?
Or is it filtered before you process it, or do you have control of
that?

If you can sample that signal at a reasonably high rate and just look
for the transitions (assuming it really is NRZ rather than a smooth
waveform), then you can estimate the period twice per signal period
(once each half period) and filter those estimates to increase the
accuracy over time. If you know something about the jitter behavior you
may be able to optimize the averaging filter to improve performance.

There will be some dependence on how fast you can sample the signal.
If you can get phase lock, you really only need to sample quickly
around the transitions, but that's probably not that helpful.

But if it isn't an NRZ signal, then ignore most of what I just said.

```
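The transition-counting approach in the post above is easy to sketch for an ideal square wave: locate rising edges in the sampled stream, then divide the number of whole cycles by the time between the first and last edge. All parameters (1 MHz sampling, 0.1 s record) are illustrative assumptions:

```python
def freq_from_edges(samples, fs):
    """Estimate the frequency of a clean 0/1 square wave by timing the
    first and last rising edges and counting whole cycles in between."""
    rising = [i for i in range(1, len(samples))
              if samples[i - 1] == 0 and samples[i] == 1]
    cycles = len(rising) - 1
    span_s = (rising[-1] - rising[0]) / fs
    return cycles / span_s

# Demo: ideal 32.768 kHz square wave sampled at 1 MHz for 0.1 s.
fs, f = 1.0e6, 32768.0
sq = [1 if (t * f / fs) % 1.0 < 0.5 else 0 for t in range(int(0.1 * fs))]
print(freq_from_edges(sq, fs))  # within about 1 Hz of 32768
```

The edge times are quantized to the sample grid, so each edge carries up to half a sample of error; averaging over many edges, as the post suggests, is what beats that down.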
```On 25.07.2018 23:38, Randy Yates wrote:
> Gene Filatov <evgeny.filatov@ieee.org> writes:
>
>> On 25.07.2018 19:27, Randy Yates wrote:
>>> We have a requirement to measure a 32.768 KHz TTL output quickly
>>> and with a certain accuracy.
>>>
>>> If one used an N-bit ADC sampling at F samples per second, what is the
>>> relationship between T_m and F_delta, where T_m is the minimum
>>> measurement time (say, in seconds) for a maximum frequency error of
>>> F_delta (say, in PPM)?
>>>
>>> For this discussion assume the frequency is stable over the measurement
>>> time. Also assume the only noise is the quantization noise of the ADC.
>>>
>>> What is making my head hurt is some (seemingly) contradictory pieces of
>>> information I've come across over the years:
>>>
>>>     1. If the input signal was noiseless and known to be a sinusoid, it
>>>     only requires 3 samples to determine the frequency.
>>>
>>>     2. The signal isn't noiseless, so I think we're getting into some
>>>     estimation theory here?
>>>
>>>     3. How can you square up the time-frequency uncertainty principle,
>>>     which I take to mean that in order to reduce the uncertainty in
>>>     frequency of a measurement, we have to increase the measurement
>>>     time (with some magic proportion involved), with 1? It seems that
>>>     if the assumptions of 1 were made, we can make arbitrarily faster
>>>     measurements by increasing the sample rate.
>>>
>>> Can you guys set me straight?
>>>
>>
>> I've opened the "Synchronization Techniques for Digital Receivers" by
>> Mengali and D'Andrea, just to see that the estimation of symbol rate
>> is not considered there.
>>
>> It can definitely be done (and probably has been done by someone in
>> the past), just I cannot point you to the ready answer.
>>
>> As a rough guess, you can split the observation interval into two
>> parts, estimate the symbol timing for each, and derive the symbol rate
>> from those two measurements. That would be a suboptimal estimator, but
>> if you need the answer tomorrow, might work for you.
>>
>> In that case, you could use pp. 64-67 of Mengali and D'Andrea's book.
>
> Er..., did you hit reply to the wrong post? There is no data on this
> signal, just a sinusoid.
>

Did you say it's a TTL signal? So I assumed it's a square wave signal,
in which case you can approach it as a baseband PAM signal, and use
data-aided estimators.

A square wave signal is not a sinusoid, because it occupies a very wide
(in theory, infinite) region of the spectrum.

However, if your signal is just a sinusoid, then I have been gravely
wrong, and none of what I've said above applies to your case.

Gene.

```
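Gene's closing point, that a square wave is spectrally wide, follows from its Fourier series: a 50% duty square wave contains only odd harmonics, and their amplitudes fall off only as 1/k:

```python
import math

def square_harmonic(k):
    """Fourier-series magnitude of the k-th harmonic of a unit-amplitude
    50% duty square wave: 4/(pi*k) for odd k, zero for even k."""
    return 4.0 / (math.pi * k) if k % 2 else 0.0

for k in (1, 3, 5, 99):
    print(k, square_harmonic(k))  # falls off only as 1/k
```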