[time-nuts] Characterising frequency standards

Magnus Danielson magnus at rubidium.dyndns.org
Sun Apr 12 13:10:04 UTC 2009


Steve Rooke skrev:
> 2009/4/11 Magnus Danielson <magnus at rubidium.dyndns.org>:
>> Tom Van Baak skrev:
>>>> Nevertheless leaving every second sample out is NOT exactly the same as
>>>> continous data with Tau0 = 2 s. Instead it is data with Tau0 = 1 s and a
>>>> DEAD TIME of 1s. There are dead time correction schemes available in the
>>>> literature.
>>> Ulrich, and Steve,
>>>
>>> Wait, are we talking phase measurements here or frequency
>>> measurements? My assumption with this thread is that Steve
>>> is simply taking phase (time error) measurements, as in my
>>> GPS raw data page, in which case there is no such thing as
>>> dead time.
>> I agree. I was also considering this earlier but put my mind to rest by
>> assuming phase/time samples.
>>
>> Dead time is when the counter loses track of time in between two
>> consecutive measurements. A zero dead-time counter uses the stop of one
>> measurement as the start of the next.
> 
> This becomes very important when the data to be measured has a degree
> of randomness and it is therefore important to capture all the data
> without any dead time. In the case of measurements of phase error in
> an oscillator, it should be possible to miss some data points provided
> that the frequency of capture is still known (assuming that accuracy
> of drift measurements is required).

Depending on the dominant noise type, the ADEV measure will be biased.

>> If you have a series of time-error values taken each second and then
>> drop every other sample and just recall that the time between the
>> samples is now 2 seconds, then the tau0 has become 2s without causing
>> dead-time. However, if the original data would have been kept, better
>> statistical properties would be given, unless there is a strong
>> repetitive disturbance at 2 s period, in which case it would be filtered
>> out.
> 
> Indeed, there would be a loss of statistical data but this could be
> made up by sampling over a period of twice the time. This system is
> blind to noise at 1/2 f but ways and means could be taken to account
> for that, i.e. taking two data sets with a single cycle space between
> them or taking another small data set with 2 cycles skipped between
> each measurement.

Actually, you can take any number of 2-cycle measures and still fail to 
detect the 1/2 f oscillation. To detect it you need to take two measures 
with an odd number of cycles of trigger difference between them to have 
a chance.

The trouble is that the modulation is at the Nyquist frequency of the 1 
cycle data, so it will fold down to DC when sampled at half rate. 
Separating it from other DC offset errors could be challenging.

Sampling it at 1/3 rate would reveal it, though.
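The folding is easy to see in a few lines (a hypothetical 1 ns disturbance alternating each second stands in for the 2 s period modulation; the numbers are illustrative, not from the post):

```python
import numpy as np

n = np.arange(12)
# Hypothetical disturbance with a 2 s period in 1 s phase samples:
# it sits exactly at the Nyquist frequency of the 1 s data.
mod = 1e-9 * (-1.0) ** n

every_2nd = mod[::2]   # constant sequence: the tone folds down to DC
every_3rd = mod[::3]   # still alternating: the tone remains visible
```

Sampled at half rate the sequence is constant, indistinguishable from a DC offset; at 1/3 rate it still alternates and can be detected.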

>> An example where one does get dead-time: consider a frequency counter
>> which measures frequency with a gate time of, say, 2 s. However, before
>> it re-arms and starts the next measurement it takes 300 ms. Two
>> consecutive samples will have 2.3 s between their starts and actually
>> span 4.3 seconds rather than 4 seconds. When doing Allan Deviation
>> calculations on such a measurement series, it will be biased. The bias
>> may be compensated for, but these days counters with zero dead-time are
>> readily available, or the problem can be avoided by careful
>> consideration.
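The timing in the quoted example works out as follows (a small illustrative calculation, not anything from the counter itself):

```python
gate = 2.0   # s, gate (measurement) time
dead = 0.3   # s, re-arm time between measurements

# Start times of two consecutive measurements
starts = [k * (gate + dead) for k in range(2)]   # 0.0 s and 2.3 s
span = starts[-1] + gate - starts[0]             # 4.3 s, not 4.0 s

# Ratio of sample interval to averaging time, r = T/tau, which is
# the parameter the NBS bias-function tables use (r = 1 means zero dead time)
r = (gate + dead) / gate
```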
> 
> I'm looking at what can be achieved by a budget-strapped amateur who
> would have trouble purchasing a newer counter capable of measuring
> with zero dead time.

Believe me, that's where I am too. Patience and saving money for things 
I really want, accumulated over time, have allowed me some pretty fancy 
tools in my private lab. In fact I have to lend some of my gear to 
commercial labs as I outperform them...

>> I believe Grenhall made some extensive analysis of the biasing of
>> dead-time, so it should be available from NIST F&T online library.
> 
> I'll see what I can find.

I recalled wrong. You should look for Barnes, "Tables of Bias Functions, 
B1 and B2, for Variances Based on Finite Samples of Processes with Power 
Law Spectral Densities", NBS Technical Note 375, January 1969, as well 
as Barnes and Allan, "Variances Based on Data with Dead Time Between the 
Measurements", NIST Technical Note 1318, 1990.

A short intro to the subject is found in NIST Special Publication 1065 
by W.J. Riley, as found on http://www.wriley.com along with other 
excellent material. The good thing about that material is that he gives 
good references, as one should.

>> Before zero dead-time counters were available, a setup of two counters
>> was used, interleaved so that the dead time of one was the measurement
>> time of the other.
> 
> I could look at doing that perhaps.

You should have two counters of equivalent performance, preferably same 
model. It's a rather expensive approach IMHO.

Have a look at the possibility of picking up an HP 5371A or 5372A. You 
can usually snag one for about 600 USD or 1000 USD respectively on eBay.

Cheers,
Magnus
