[time-nuts] Characterising frequency standards
bruce.griffiths at xtra.co.nz
Tue Apr 7 12:39:10 UTC 2009
Steve Rooke wrote:
> A while back when we were discussing the performance of the Shortt
> free pendulum clock a reference was made to tvb's paper on Allan
> deviation, http://www.leapsecond.com/hsn2006/ch2.pdf, which I found to
> be an excellent primer on the subject. It was interesting to see that
> with only a subset of the data, the Allan deviations up to about the
> total of the data collection period could be calculated with
> reasonable accuracy. This had me thinking that if just a proportion of
> the data covering up to a specific averaging time gave good results,
> would disconnected data amounting to the same period give the same
> results. To me it seems that accuracy of the results is not related to
> the need to capture every event consecutively, it is more a case of
> collecting the same size data set even though samples were not
> consecutive. My reasoning behind this is that any set of data for a
> DUT should give the same results even though the data sets are not
> related time-wise. OK, there are effects caused by different
> environmental conditions and drift but these can be calculated out.
> The only thing that would shoot a big hole in this is if there was a
> repeatable difference between alternate cycles.
> So why am I saying this, well from what I have read on this group and
> on the web, I have been left with a feeling that it was vital to
> capture every event over a sampling period to ensure an accurate
> measurement. This requires equipment capable of time-stamping each
> event or employing such techniques as picket-fence. This is due to the
> limitations of most counters being unable to reset in time to measure
> the next time period of an input. At this stage I cannot see why it is
> not possible to just measure a cycle, let the counter/timer reset and
> then let it measure the next full cycle that follows. Agreed this
> would mean that alternate cycles were lost (assuming the counter/timer
> can reset within the space of one cycle) but the measurement could
> still collect the same amount of data points, it would just take twice
> as long. In fact, it could be possible to make the counter/timer
> measure alternate cycles on the opposite transitions, thereby reducing
> the total measurement time to just one and a half times the 'normal'
> time. With respect to any problem related to alternate cycles, the
> measurement system could be made to collect two data sets with single
> cycle skipped between each set.
> The difference will be that the data set would consist of measurements
> of each individual non-sequential cycle as opposed to a history of the
> start times of each cycle.
> So the short story is: does the data stream really have to consist of
> sequential samples, or is it just a statistical thing, so that for the
> same size of data set the results should be similar?
It is essential to measure the phase differences between every Nth zero
crossing without missing any such cycles.
You don't have to time stamp every zero crossing; every Nth one will
suffice, but one then has no information for time intervals shorter than
N cycles. More accurate estimation of the Allan deviation is possible if
the time interval between time stamps is shorter.
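Given a contiguous series of phase samples x[i] taken every tau0 seconds, the Allan deviation at tau = m*tau0 comes from second differences of the phase. A minimal sketch in Python (function name and the NumPy dependency are my own choices, not from the post):

```python
import numpy as np

def allan_deviation(phase, tau0, m=1):
    """Overlapping Allan deviation from phase samples (seconds),
    taken every tau0 seconds, at averaging time tau = m * tau0."""
    x = np.asarray(phase, dtype=float)
    # Second differences of the phase at spacing m:
    # d[i] = x[i + 2m] - 2*x[i + m] + x[i]
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    tau = m * tau0
    return np.sqrt(np.mean(d ** 2) / (2.0 * tau ** 2))
```

A sanity check on the formula: a source with a pure constant frequency offset has linearly ramping phase, so its second differences (and hence its Allan deviation) are zero at every tau.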
The reason that you can't omit one of the time stamps in the sequence
(if you wish to accurately characterise the frequency stability of the
source under test) is that the process isn't stationary.
Estimates of classical measures such as the mean and standard deviation
from the samples diverge as the number of samples increases.
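This divergence is easy to see numerically. Under random-walk FM noise the sample standard deviation of the fractional frequency keeps growing as more data are included, while the Allan variance, built from first differences of the frequency, stays bounded. A sketch under an illustrative noise model (the magnitudes and seed are arbitrary, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-walk FM: fractional frequency y[k] is a cumulative sum of
# white noise, a non-stationary process.
y = np.cumsum(rng.normal(0.0, 1e-12, 100_000))

# The classical standard deviation grows with the sample size ...
sd_short = np.std(y[:100])
sd_long = np.std(y)

# ... while the Allan variance uses first differences of y, which are
# white (stationary), so its estimate does not diverge.
dy = np.diff(y)
avar = 0.5 * np.mean(dy ** 2)
```

Here dy recovers the underlying white-noise steps, so sqrt(avar) settles near step_sigma/sqrt(2) regardless of record length, while sd_long keeps growing roughly as the square root of the record length.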
Whilst attempts have been made to estimate the error due to deadtime,
the corrections require that the phase noise characteristics of the 2
(or more) sources being compared are accurately known.
Avoiding deadtime problems is fairly easy if you use an instrument that
can timestamp events on the fly.
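The key idea is a free-running counter that never stops: each input zero crossing latches the current count, so successive timestamps differ by the elapsed interval and no cycle is lost to reset dead time. A toy model of the principle in Python (the class, clock rate, and helper are illustrative, not an actual instrument design):

```python
# Toy model of a dead-time-free timestamper: a counter clocked at
# clock_hz runs continuously and is latched (read, never reset) at
# every input event.
class TimestampCounter:
    def __init__(self, clock_hz):
        self.clock_hz = clock_hz

    def latch(self, event_times):
        """Return the counter reading captured at each event time (s)."""
        return [int(round(t * self.clock_hz)) for t in event_times]

# Intervals come from differences of successive timestamps, so a slow
# readout path never creates dead time between measurements.
def intervals(counts, clock_hz):
    return [(b - a) / clock_hz for a, b in zip(counts, counts[1:])]
```

Because the counter is only ever read, not reset, the question of whether it can "reset in time" for the next cycle never arises; that is what makes the FPGA approach attractive.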
It is almost trivial to build such an instrument within a single FPGA or