[time-nuts] uncertainty calculations

Bill Byrom time at radio.sent.com
Sat Apr 15 21:35:30 EDT 2017


I believe that the problem is that the error in any one measurement is
not uniformly distributed in exactly that way. If you are trying to
count how many 10 MHz clock periods (100 ns each) occur between the 1
PPS edges in a period counter, you have to deal with the following:


* You might have any possible phase relationship between the two
  signals. If they are exactly related by a 10^7 ratio, it's possible
  for the 1 PPS edges to coincide exactly with the 10 MHz edges.
  Depending on the type of gating circuit, you will have jitter and
  possibly metastability in resolving which edge occurred first. The
  same thing happens at the end of the measured interval, but (depending
  on how it's set up) the propagation delays, metastability, and jitter
  might be different. So you could get millions of sequential counts
  which were 1 count low, followed by millions of counts which were one
  count high, with no counts exactly at 10^7 (see the first sketch after
  this list).


* To stay away from such problems, most precision counters add a small
  amount of controlled jitter (phase modulation) to the clock. When
  averaged over many measurements, the effects of the two edges (gate
  and clock) lining up exactly are greatly reduced, since you are
  sliding one back and forth across the other with the modulation, and
  the chance of metastability is small (assuming the signal being
  measured doesn't happen to match the phase modulation frequency). The
  first sketch after this list also shows how averaging the dithered
  readings recovers the fractional part of the count.


* The metastability problem depends on how the edges are compared. Some
  traditional flip-flops and latches can be thought of as analog gain
  elements connected so that they tend to sit in state A or state B,
  states which involve analog voltages and currents. If you graph the
  energy in the system, the energy is low in state A, rises to a peak
  halfway between A and B, and falls to a low value at state B. If the
  recognition of the timing edge occurs early enough, the system remains
  in state A. If the timing is later, the system is pushed toward the
  peak but doesn't get over it and returns to state A. But if the timing
  lands exactly at the balance point, the system sits at the potential
  energy peak, and only random noise can push it into a final state A or
  B, which can take a significant length of time (see the second sketch
  below).
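Here is a small Python sketch of the first two points. It is purely
illustrative -- the 33 ns fractional gate length, the 3 ns phase offset,
and the one-clock-period dither amplitude are made-up numbers, not any
particular counter's design. It counts 10 MHz clock edges falling inside
a gate, first with a fixed phase relationship (every reading returns the
same integer, so averaging never removes the quantization error) and
then with the clock dithered by one full period (readings toggle between
adjacent integers and their mean recovers the fractional count):

import numpy as np

F_CLK = 10e6           # 10 MHz timebase
T_CLK = 1.0 / F_CLK    # 100 ns clock period
GATE = 1.0 + 33e-9     # made-up gate length: 1 s plus a 33 ns fraction

def count_edges(phase_s, dither_s=0.0):
    # Count clock edges n*T_CLK + d falling inside [phase_s, phase_s + GATE).
    # phase_s is where the gate opens relative to the undithered edge grid;
    # dither_s is the peak of a uniform phase dither applied to the clock.
    d = np.random.uniform(-dither_s, dither_s) if dither_s else 0.0
    first = np.ceil((phase_s - d) / T_CLK)
    last = np.ceil((phase_s + GATE - d) / T_CLK)
    return int(last - first)

np.random.seed(1)

# Fixed phase, no dither: every reading is the same integer (here 10000000),
# so taking millions of samples never averages the quantization error away.
plain = [count_edges(3e-9) for _ in range(1000)]
print("no dither  :", min(plain), "...", max(plain))

# One full clock period (+/-50 ns) of dither: readings toggle between the
# two adjacent integers and their mean recovers the 0.33-count fraction.
dith = [count_edges(3e-9, dither_s=50e-9) for _ in range(20000)]
print("with dither:", np.mean(dith), "(true value 10000000.33)")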
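And a toy model of the third point. The double-well potential
U(v) = v^4/4 - v^2/2, the noise level, and the step sizes are arbitrary
choices for illustration, not a model of any real flip-flop. The state v
relaxes along -dU/dv plus a small noise term; started well to one side
of the energy peak it settles quickly and predictably, but started
exactly at the peak only the noise decides the outcome, and the settling
time varies wildly from trial to trial:

import numpy as np

def settle_time(v0, noise=0.02, dt=1e-3, max_steps=200_000):
    # Toy latch: state v relaxes in the double-well potential
    # U(v) = v**4/4 - v**2/2, which is low at v = -1 (state A) and
    # v = +1 (state B) and peaks at v = 0, plus a small noise term.
    v = v0
    for step in range(max_steps):
        v += (v - v**3) * dt + noise * np.sqrt(dt) * np.random.standard_normal()
        if abs(v) >= 0.99:                   # effectively settled in A or B
            return step, ("B" if v > 0 else "A")
    return max_steps, "unresolved"

np.random.seed(42)

# Edge clearly early (or late): the latch starts well off the peak and
# settles quickly, always into the same state.
print("v0 = -0.2:", settle_time(-0.2))

# Edge landing exactly at the balance point: only noise decides the
# outcome, and the settling time varies wildly from trial to trial.
for _ in range(5):
    print("v0 =  0.0:", settle_time(0.0))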


Sorry if this is considered obvious or trivial.
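In case a quick numerical check is useful: the sketch below (my own
illustration, not part of the quoted message) simulates the situation
the quoted formula assumes -- each 1 s measurement carrying an
independent error uniform over 0.1 ppm (one 100 ns clock period) -- and
confirms the 1/sqrt(12) and 1/sqrt(N) factors. The point of the
discussion above is that a real counter's errors are not necessarily
independent like this, in which case averaging does not buy you the full
1/sqrt(N).

import numpy as np

np.random.seed(7)

width_ppm = 0.1     # assumed full width of the uniform per-measurement error
N = 100             # measurements averaged together
trials = 20000      # number of simulated 100-sample averages

errors = np.random.uniform(-width_ppm / 2, width_ppm / 2, size=(trials, N))
means = errors.mean(axis=1)

print("single-shot sigma:", errors.std(),
      " theory:", width_ppm / np.sqrt(12))
print("sigma of the mean:", means.std(),
      " theory:", width_ppm / np.sqrt(12) / np.sqrt(N))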

--

Bill Byrom N5BB







----- Original message -----

From: jimlux <jimlux at earthlink.net>

To: Discussion of precise time and frequency measurement <time-nuts at febo.com>
Subject: Re: [time-nuts] uncertainty calculations

Date: Fri, 14 Apr 2017 08:49:07 -0700



On 4/14/17 8:37 AM, jimlux wrote:

> If one is counting an unknown 1pps source with a counter that runs at
> 10 MHz (i.e. the error in any one measurement is uniformly distributed
> over 0.1 ppm) and you collect 100 samples, is the (1 sigma)
> measurement uncertainty 0.1 ppm * sqrt(100)/sqrt(12)?
>
> (standard deviation of a uniform distribution is 1/sqrt(12))
>
> (assuming for the moment that both sources have no underlying
> variability - we're talking about the *measurement uncertainty*)



Oops..

0.1 ppm * 1/sqrt(N) * 1/sqrt(12)



That is, the standard deviation of the mean goes down as 1/sqrt(N)




