[time-nuts] looking for good description/generalized model for time adjustments

Hal Murray hmurray at megapathdsl.net
Mon Aug 3 04:20:33 UTC 2009


This has been a fun problem to think about.  Thanks.

Let's see if I understand things...

You have code running on a system with a crappy clock.  This system is the 
Master.  Its clock is, by definition, the reference that runs everything.

The crappy clock has temperature fluctuations with a period in the ballpark of 
an hour.  Temperature, and hence frequency, is roughly a square wave, so the 
time offset (relative to UTC) will be triangular.

Master talks to devices with clocks that are much better than Master's clock. 
 Master can tell the devices to "Do X at time T".  (Where T is Master time.)

One of the Xs is to wiggle a wire that the master can get a good timestamp 
on.  The master can use that to read the device clocks.

Master wants to say to Device A, "Do A at time T1" where T1 is Master time.  
So it actually says "Do A at time T1+a" where a is the fudge factor to 
correct for the clock offsets.

Master needs to know a accurately enough that A happens within a ms of T1.



Suppose the offset of Master from UTC is M(t) and the offset of Device A is 
A(t).  From Master's point of view, the clock on A will look like it has an 
offset of A(t)-M(t).  (If Master's clock is fast, it looks like A's clock is 
slow.)
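
The sign convention above can be checked with a tiny numeric sketch (the 
offsets here are invented for illustration):

```python
# Made-up example offsets from UTC, in seconds.
M = 3.0e-3   # Master is 3 ms fast of UTC
A = 1.0e-3   # Device A is 1 ms fast of UTC

# From Master's point of view, A's clock appears offset by A(t) - M(t).
apparent = A - M   # -2 ms: Master is fast, so A looks slow
```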

I'm assuming you (somehow) have a rough idea of a.


Plan 1:

Shortly before T1, ask A what time it is to get an updated reading on a.  Then tell A, "Do A at time T1+a" with the up-to-date a.

I think you will have to poll A regularly to track the clock.  I'm thinking of, say, once per second.  As long as you can poll fast enough this should work.  You can work out the math on how fast you have to poll.  You need the max error on reading the clock and the max drift over the time between reading a and using it.
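
That budget math might look something like this (all the numbers below are 
made-up assumptions, just to show the shape of the calculation):

```python
# Poll-rate budget sketch.  Every number here is an assumption.
error_budget = 1e-3      # total allowed error at command time: 1 ms
read_error   = 100e-6    # assumed worst-case error reading the device clock
freq_offset  = 50e-6     # assumed worst-case frequency offset: 50 ppm

# After a reading, the time error grows at freq_offset seconds per second,
# so a reading of a stays usable for this long:
max_age = (error_budget - read_error) / freq_offset   # 18 seconds
```

With numbers like these, once-per-second polling has plenty of margin.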

"Fast enough" isn't really the right term.  You could poll slower to roughly track a, but then slip in an extra calibration cycle just before you need an accurate a to send the command.

I think this is just a simple PLL with a fast time constant (high cutoff frequency) on the filter.
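
A minimal sketch of that loop (the readings and gain are illustrative, not 
from any real device):

```python
def update_offset(a_est, a_measured, gain=0.5):
    """Blend a fresh reading of the device clock into the running estimate
    of the fudge factor a.  A gain near 1 gives the fast time constant
    (high cutoff frequency) described above."""
    return a_est + gain * (a_measured - a_est)

# Once-per-second readings of a (invented numbers, in seconds):
a = 0.0
for reading in [2.0e-3, 2.1e-3, 1.9e-3, 2.0e-3]:
    a = update_offset(a, reading)

# Then schedule the command using the up-to-date estimate, something like:
# send(device_A, "Do A", T1 + a)
```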

Plan 2:

If that's not good enough, you have to model the clock.  Hopefully that's as simple as estimating the frequency offset as well as estimating the time offset.

That frequency offset is a square wave with rounded corners.  I'm not sure what happens if the corners are too sharp.  I'm assuming the corners are rounded enough that you can track them.

The idea would be to say "Do A at time T+a0+a1" where a0 is the a above and a1 is the current freq-offset multiplied by the time between the measurement and T.

I'm not sure how to think about this as a PLL.  The filter cutoff frequency would be lower than above, averaging over several samples.  But it has to be fast enough to track the corners.
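
A least-effort sketch of the a0+a1 correction, estimating the frequency 
offset from just two clock readings (times and offsets below are invented):

```python
def predict_a(t1, a_at_t1, t2, a_at_t2, T):
    """Estimate the frequency offset from two readings of a, then
    extrapolate the offset out to command time T."""
    freq = (a_at_t2 - a_at_t1) / (t2 - t1)   # estimated frequency offset
    a0 = a_at_t2                             # most recent offset reading
    a1 = freq * (T - t2)                     # drift between measurement and T
    return a0 + a1

# Example: a grew 10 us over 10 s (1 ppm), command fires 5 s after the
# second reading, so the prediction adds another 5 us.
a_pred = predict_a(0.0, 2.000e-3, 10.0, 2.010e-3, 15.0)
```

A real implementation would average over several samples rather than use the 
last two, which is where the lower filter cutoff comes in.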


-- 
These are my opinions, not necessarily my employer's.  I hate spam.
