## Measuring Allan Variance of atomic clock

I need to measure the Allan Variance of 2-3 atomic clocks for a remote application, so we can't use the traditional test equipment. We need to collect the data over several weeks so we can measure long-term stability. Our plan is to sample 2 of the clocks with ADCs and use the third clock as the reference for the ADC clocks. I have two options for ADCs and was wondering which would be better. The first choice would be a 14-bit ADC that can be sampled at 9-10x the atomic clock rate, using clock multipliers to get the reference. The second choice would be a 12-bit ADC that can be sampled at >400x the atomic clock rate, but the clock for this would be generated from a PLL, which might have an Allan Variance higher than what we are trying to measure.

So I guess the question is: for collecting the data needed to measure Allan Variance, is a lower sample rate with cleaner clocks and higher precision better than a higher sample rate with lower precision and possibly noisier clocks? We will have an FPGA that won't be doing much else, so there should be plenty of processing power for the calculations.

Are there any issues with the approach we are trying to take? Is there a better way to make this measurement?

Thanks,

Brian

What signal would you be sampling where an ADC would be of help? A traditional square wave will look like "small" followed by "big".

I suppose that band-pass filtering your test clocks and measuring them with ADCs clocked off of your reference should work. My inclination is toward frequency multipliers -- probably just three doublers, because it's straightforward and because eight samples per cycle is nice and digital.

Either way I'd analyze the crap out of it -- even your multiplied clock is going to have some cyclic variation to it, but I think that'll mostly show up as a slight anomaly in the phase differences that you calculate, not as actual Allan variance. There will be noise in the doublers and whatever circuit you use to square up your multiplied reference, and that noise will show up as variance.

Fortunately, if you mostly care about long-term variation then either of the methods you're contemplating should work.

The signals are sinusoidal, so the idea of using an ADC was to measure the phase as accurately as possible and to be able to do some signal processing on the clock source if needed. This was modeled after a particular piece of test equipment designed to measure Allan Variance, but I'm not sure that's the best model for what we are trying to accomplish.
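For what it's worth, one common way to get phase from sinusoid samples is I/Q correlation against quadrature references at the nominal frequency. This is a minimal sketch of my own (the function name and `samples_per_cycle` default are assumptions, not from any particular instrument), assuming an integer number of whole cycles per block:

```python
import numpy as np

def estimate_phase(samples, samples_per_cycle=8):
    """Estimate the phase of a sampled sinusoid by correlating
    against in-phase and quadrature references at the nominal
    frequency. Assumes `samples` covers whole cycles."""
    n = np.arange(samples.size)
    ref = 2 * np.pi * n / samples_per_cycle
    i = np.dot(samples, np.cos(ref))   # in-phase correlation
    q = np.dot(samples, np.sin(ref))   # quadrature correlation
    # for s[n] = cos(ref[n] + phi): i ~ cos(phi), q ~ -sin(phi)
    return np.arctan2(-q, i)           # phase in radians
```

Tracking this phase block by block against the ADC's reference clock gives the phase-difference time series that the Allan variance calculation works from.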

Brian

It makes sense to me. No matter what you do you're going to have lots of nit-picky work to get it all right, and there will be a point where the variance of the clocks is washed out by the variance of the measurement method. Just closing my eyes and doing the math on the inside of my eyeballs, the measurement's variance should mostly take the form of white noise in the phase measurement, which will translate into time variance only in the very short term.

No matter how you implement the measurement system, the Allan Variance of your reference clock must be better than that of the clock that you're attempting to measure, at least over the averaging time of interest.
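For concreteness, the overlapping Allan variance can be estimated from a time series of phase readings via second differences. A minimal sketch, with names of my own choosing:

```python
import numpy as np

def allan_variance(x, tau0, m=1):
    """Overlapping Allan variance estimate from phase data.

    x    -- phase readings in seconds, taken every tau0 seconds
    tau0 -- basic measurement interval in seconds
    m    -- averaging factor; averaging time is tau = m * tau0
    """
    tau = m * tau0
    # second differences of the phase over the averaging time
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sum(d2 ** 2) / (2 * tau ** 2 * d2.size)
```

Note that a pure frequency offset (a linear phase ramp) cancels in the second difference, so this measures stability rather than accuracy.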

Hugh

Given time bases \(A\), \(B\), and \(C\), for any time interval \(\tau\), the measured variance between two instruments is \( \sigma_{AB}^2(\tau) = \sigma_A^2(\tau) + \sigma_B^2(\tau) \). If you know \( \sigma_{AB}^2(\tau) \), \( \sigma_{AC}^2(\tau) \), and \( \sigma_{BC}^2(\tau) \), then you can calculate the individual instruments' variances. That's just garden variety statistics.

I guess it does require not using just one as a time base against which to compare the other two -- you'd need to swap around which one you used as a time base.

Or -- ooh! ooh! -- you could just measure all three simultaneously with one ADC that's clocked by any old thing, and choose which two instruments you're comparing to each other via software.
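The pairwise relation above inverts directly -- this is the "three-cornered hat" method. A minimal sketch (function name is mine):

```python
def three_cornered_hat(s2_ab, s2_ac, s2_bc):
    """Recover the three individual variances from the pairwise
    measured variances, using sigma_AB^2 = sigma_A^2 + sigma_B^2
    (and likewise for AC and BC) at a fixed averaging time tau."""
    s2_a = 0.5 * (s2_ab + s2_ac - s2_bc)
    s2_b = 0.5 * (s2_ab + s2_bc - s2_ac)
    s2_c = 0.5 * (s2_ac + s2_bc - s2_ab)
    return s2_a, s2_b, s2_c
```

With noisy estimates a recovered variance can come out negative, which is a sign that the measurement noise dominates at that \(\tau\).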

So, I was wondering if I could have time base C be the ADC clock and just measure A and B through the ADC. It seems like the direct calculation of the Allan Variance would produce \( \sigma_{AC}^2(\tau) \) and \( \sigma_{BC}^2(\tau) \). Then I can calculate the relative variance \( \sigma_{AB}^2(\tau) \) to come up with the individual variances.

I'm not sure exactly what you're proposing, but you'd need to get \(\sigma_{BC}^2\) by measurement. Basically, you'll need three equations in three unknowns (with four clocks you'd have six equations in four unknowns, which, while it's over-constrained, also gives you more useful info). However, if you measure both clocks with the same ADC, or with ADCs that are both clocked off the same clock, then you could get their relative phase.

However, if you did this then the ADC clock couldn't be any old clock -- it would have to have an Allan variance that is at least almost as good as the ones you're measuring. If you don't have a good third reference clock then you'll be doing one of those "big number minus big number equals trash" sort of calculations. In particular, if you're measuring variances, there's variance in the variance measurement itself, which takes a lot of repeated measurements to beat down.

So, if your ADC clock is coming from an atomic clock or maybe from a GPS disciplined* clock then you might make the measurement work.

* The phrase "Disciplined clock" makes me wish I was a cartoonist, good at anthropomorphizing satellites and drawing convincing cartoons of shiny black leather.

Since we are dealing with atomic clocks, the three sources will have comparable variances. In my research into this problem, it sounded like if you measure the relative variance of 3 or more clocks simultaneously, you can do some math to get the absolute variance of each clock. I'm not sure whether I need a fourth clock or whether I can use the ADC sample clock as one of the clocks, though.

Brian

If you have known-identical clocks then two should be enough (unless their variance is synchronized somehow). I can see how three comparable clocks should let you figure out each one in detail -- the devil would be if one or more had significantly more variance than the other two.

I have no experience measuring Allan Variance, but I guess you need a very stable time interval against which to measure all the atomic clocks. So the question is how to generate a time interval with a relative variance much lower than that of any of the atomic clocks. I was considering a laser pulse traveling a long distance, hitting a mirror, and returning to a sensor. I guess the time interval between triggering the laser pulse and the sensor response has a relative variance that decays with the distance the laser travels, since the speed of light is not subject to noise (I'm not a physicist).

So you can count the number of atomic clock cycles between these two pulses to measure the atomic clock's frequency.

The problem with that logic is that it breaks down when you invent the World's Best Clock -- because then you have nothing to check it against. In your example, your laser pulse thing is the World's Best Clock, and how do you figure out how good *it* is?

(In reality your laser pulse thing would be very dependent for accuracy on maintaining the dimensional stability of the mirror mounts, and if that was the best way to get the job done, that's probably how we'd make super-accurate clocks).

Fortunately, you *can* measure these things if you have either two clocks of identical design, or better yet, three clocks of comparable capabilities.

The proposed laser thing is not a clock, since it only generates two pulses. It generates a single time period with very low variation between experimental realizations.

By increasing the distance and treating the speed of light as a fixed value, we can make every other source of time variation, like the laser triggering time and the sensor response, negligible compared to the time period between the pulses. So there is no limit on the accuracy. The problem is the implementation -- as you said, the mirror precision needed to reflect the laser back over a long distance.

The relative variance is

$$ \frac{\sigma(T)}{E[T]} $$

where \( T \) is the random variable representing the period between the two pulses. \( T = t_1 + T_2 \), where \( t_1 \) is a constant (the light travel time) and \( T_2 \) is a random variable representing the laser triggering time plus the sensor response time.

\( \sigma(T) = \sigma(T_2) \), and \( E[T] \) is approximately equal to the laser travel time for a large enough distance.
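Making the distance dependence explicit -- with \( d \) the one-way distance to the mirror (a symbol I'm introducing here) and \( c \) the speed of light, so the constant travel time is \( t_1 = 2d/c \):

$$ \frac{\sigma(T)}{E[T]} = \frac{\sigma(T_2)}{2d/c + E[T_2]} \approx \frac{c\,\sigma(T_2)}{2d} $$

so doubling the distance roughly halves the relative variance of the interval, for a fixed trigger-and-sensor jitter \( \sigma(T_2) \).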

First, if it measures time, it's a clock.

Second, if it makes one precise interval, you can easily modify it to keep on making precise intervals -- then it's a clock by anyone's definition.

Third, the only way there is no limit to the accuracy is if you've found a magical way to keep the distance between the mirrors constant, and if you have a good hard vacuum in the path. If it were a viable approach then *clocks would already be made that way*. Things move. Thermal expansion is a thing, so are trucks passing by, and so is plate tectonics. So are tides.

And, at any rate, there's a known, obvious, and understood method to do this with three clocks -- so why does one need a special test article?