[time-nuts] Software Sawtooth correction prerequisites?

Dr Bruce Griffiths bruce.griffiths at xtra.co.nz
Sat May 12 03:53:16 EDT 2007


Tom Clark, K3IO wrote:
> Bruce Griffiths wrote:
>
>   
>> The Dallas delay lines aren't all that accurate, you need to calibrate
>> them to acheive 1ns accuracy (read the specs) and then you have to
>> worry about temperature variations.
>> To use them you need to decode the sawtooth correction message from
>> the GPS timing receiver.
>> If you've decoded this message then you have all the information
>> needed to make a software correction to the measured phase error.
>>     
> I need to correct some impressions that seem to have gone astray. To
> help me, I refer you to a PowerPoint presentation that I gave to the
> technicians and operators at the world's VLBI (Very Long Baseline
> Interferometry) sites. The presentation is available at
> http://gpstime.com as the 2007 version of "Timing for VLBI".
>
> [Aside -- If you are interested in learning about some of VLBI's
> buzz-words, I also gave a tutorial "What's all this VLBI stuff, anyway?"
> that was intended as a view of the Physics and Radio Astronomy of making
> VLBI measurements. Some people find my de-mystifying of Heisenberg's
> Uncertainty Principle interesting -- especially the Schroedinger quotes
> at #21. This "plays" best if you view it as a PPT presentation.]
>
> Starting on Slide #20, I describe the reason that the Motorola receivers
> have the sawtooth "dither". Basically, the clock edges of the receiver's 1PPS
> pulse are locked to a crystal oscillator in the receiver and that
> oscillator is on a frequency that is not neatly commensurate with the
> "true" second marks. As has been pointed out in these discussions,
> Motorola reports an estimate of the error on the NEXT 1PPS tick. Slides
> 21 and 22 show some of the pathological examples we have seen on typical
> receivers. AFAIK, all the bizarre behavior has been traced to firmware
> problems.
>
> The reason for making sawtooth corrections (and not simply averaging
> multiple samples) can be seen in the "hanging bridges" (22:34 to 22:36
> on #22, 01:04:30 to 01:05:30 on #23) when the 1PPS signal went thru a
> zero-beat. For these 1-2 minute windows, all statistical averaging
> breaks down and typical GPSDO's perform badly. However, when the
> sawtooth is corrected in software (blue line on #23) the resulting
> "paper clock" is well behaved (at ~1.5 nsec RMS level).
>
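To make that next-pulse bookkeeping concrete, here is a minimal Python
sketch of a clock-quantized 1PPS with the reported correction applied to
the following pulse. The clock frequency, its offset and the 1 ns report
resolution are invented illustrative values, not Motorola specifications.

import numpy as np

# A receiver clock whose period does not divide 1 s exactly, so each 1PPS
# edge is forced onto the nearest clock edge and the residual error
# "sawtooths" back and forth.  The closer the clock comes to an exact number
# of cycles per second, the slower the sawtooth sweeps -- the "hanging
# bridge" case where plain averaging of the raw 1PPS breaks down.
f_clk = 9.54e6 + 0.013            # Hz (made-up value, not a receiver spec)
t_clk = 1.0 / f_clk               # clock period, roughly 105 ns

n = np.arange(3600)                              # one hour of 1PPS ticks
ideal = n.astype(float)                          # true second marks, in s
quantized = np.round(ideal / t_clk) * t_clk      # edges snapped to the clock
pps_error = quantized - ideal                    # the sawtooth, +/- t_clk/2

# The message issued around second k describes the error of the NEXT pulse
# (k+1); here it is idealized as the exact negative sawtooth, rounded to
# 1 ns to mimic a finite-resolution report.
reported = -np.roll(pps_error, -1)
reported = np.round(reported * 1e9) * 1e-9

# So the correction applied to pulse n must come from the message of second
# n-1 -- that pairing is the whole point of the "NEXT 1PPS tick" wording.
corrected = pps_error + np.roll(reported, 1)
corrected[0] = pps_error[0]                      # no prior message for tick 0

print("raw sawtooth RMS: %6.1f ns" % (1e9 * pps_error.std()))
print("corrected RMS:    %6.1f ns" % (1e9 * corrected.std()))
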
> Slides #24 & #25 describe an annoying problem in VLBI -- we want to be
> able to blindly trust ANY 1PPS pulse whenever (rarely) we need to reset
> the "working" VLBI clock. Slide #26 is the block diagram of the circuit
> that Rick has implemented in his newest clock. Slide #29 shows a (more
> noisy than normal) comparison between the hardware and software
> correction performance with only 0.3 nsec RMS noise between the two.
>
> Bruce noted a misconception that may have come from our earlier
> implementation of the correction algorithm. What we found was that EVERY
> sample of the 1 nsec step Dallas/Maxim delay line showed considerably
> more scatter. On closer examination, it seems that the DSI delay line
> chip defines "one nsec" about 10% differently than Motorola's "one
> nsec". After correcting for this "definition" problem, as you see in
> #30, the hardware and software corrections are in agreement, with an
> observed regression coefficient of 0.9962 (on this sample, which shows
> a correlation coefficient > 0.999) and good tracking between samples.
>
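That scale-factor check amounts to a straight-line fit between the two
correction series. A throwaway sketch, where hw_correction.txt and
sw_correction.txt stand in for hypothetical logs of the hardware
(delay-line) and software (sawtooth-message) corrections in ns:

import numpy as np

# Hypothetical single-column log files, one sample per 1PPS tick, in ns.
hw_ns = np.loadtxt("hw_correction.txt")  # correction realized by the delay line
sw_ns = np.loadtxt("sw_correction.txt")  # correction computed from the message

slope, offset = np.polyfit(sw_ns, hw_ns, 1)
r = np.corrcoef(sw_ns, hw_ns)[0, 1]
print("scale %.4f  offset %.2f ns  correlation %.4f" % (slope, offset, r))

# A slope near 0.9 rather than 1.0 would be the "one nsec defined ~10%
# differently" effect described above; dividing hw_ns by the fitted slope
# puts the two corrections on the same scale before comparing residuals.
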
> Bruce also made some disparaging comments on the stability of the delay
> line. I can say that we have not seen any stability problems at all.
> This is quite logical when you carefully reverse engineer the DSI chip
> based on its data sheets. The delay inside the chip is really an analog
> delay. The 8-bit number you send to the chip programs a D/A converter to
> produce a (256 step) constant current source. When the input pulse is
> applied to the DSI delay line, the constant current charges an on-chip
> capacitor. When the resulting ramp matches the level defined by a
> comparator, the output is changed. The comparator level and capacitor
> value are temperature compensated by a second, fixed rate ramp. This is
> pretty much the same thing that has already been described here.
>
>   
I know exactly how these delay devices work. The problem is that using
them in this way relies on a one-time calibration of the device delays;
it would be far better if delay calibration cycles could be interspersed
between PPS transitions. That technique would cope with any ageing or
temperature drift. The variable-slope technique (see Dallas application
note AN107) used to set the delays in these devices means that the delay
jitter increases faster with increasing delay than in the equivalent
fixed-slope, variable-threshold ramp-timed delay technique. The DS1020-15
datasheet specifies a 4 ns maximum deviation from the programmed delay.
The datasheet does not state whether this error is due to scale factor
error, offset error, integral linearity error, or differential linearity
error. If one doesn't calibrate each individual chip, how can one rely
on achieving a delay accurate to within 1 ns? The datasheet certainly
gives one no confidence that this is indeed possible. Surely one cannot
rely on the manufacturer achieving significantly better performance than
the specified datasheet limits for every chip produced?
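To put rough numbers on the variable-slope point, here is a toy model in
Python; the threshold and noise values are invented for illustration and
ignore current-source noise and other real-world effects, so treat it as
a sketch of the scaling only:

# Toy model of ramp-timed delay jitter.  V_TH and V_NOISE are invented
# illustrative values, not DS1020 characteristics.
V_TH = 1.0        # comparator threshold, volts (assumed)
V_NOISE = 2e-3    # RMS noise referred to the comparator input, volts (assumed)

def jitter_variable_slope(delay_s):
    # DAC programs the charging current, so the ramp slope is V_TH / delay:
    # longer delays use shallower ramps, and the same voltage noise becomes
    # proportionally more timing jitter.
    slope = V_TH / delay_s
    return V_NOISE / slope

def jitter_fixed_slope(full_scale_s=256e-9):
    # Alternative scheme: keep the ramp slope fixed (sized for the longest
    # delay) and program the comparator threshold instead; the noise-to-time
    # conversion is then the same for every delay setting.
    slope = V_TH / full_scale_s
    return V_NOISE / slope

for d in (10e-9, 50e-9, 100e-9, 250e-9):
    print("%4.0f ns setting: variable slope %5.2f ns, fixed slope %5.2f ns RMS"
          % (d * 1e9, 1e9 * jitter_variable_slope(d), 1e9 * jitter_fixed_slope()))

In this simple model the same comparator noise converts into timing
jitter in proportion to the programmed delay when the slope is varied,
whereas the fixed-slope, variable-threshold scheme converts it the same
way at every setting.
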
> The place where I suspect that there may be some temperature sensitivity
> is in the modular GPS receivers. If you look at my slide #19 from late
> 2000, the really great "Never Happened" receiver had to be temperature
> controlled (to ~1 °C), otherwise it showed diurnal room temperature
> variations. All these receivers have a bandpass filter ~1.8 MHz wide
> somewhere in their IF chain; this filter's bandwidth is matched to
> the 1.023 MHz C/A code chip rate that is the root of the timing
> performance. Heisenberg would argue that a filter this wide will show a
> group delay ~500 nsec and it is often implemented as a SAW (Surface
> Acoustic Wave) device at an IF in the 50-200 MHz range. This is a
> measurement topic itching for some work! Regarding the SAW filters, on
> slide #33 you will see that the 4 M12+ receivers that Rick tested at
> USNO fell into two groups with ~4-5 nsec "DC" timing difference between
> them. You will also note on #36 that the one sample of the new iLotus
> M12M that I've seen has ~30 nsec of bias.
>   
There are receiver installations where everything, including the
antenna, the coax connecting the antenna to the receiver, and the
receiver itself, is temperature controlled.
The variation in the group delay of the ceramic bandpass filters
typically used in timing antennas may be problematic; it would be nice
if there were specifications or measurements for these.
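
As a back-of-envelope check on the ~500 nsec group delay figure quoted
above: a filter cannot respond much faster than the reciprocal of its
bandwidth, which is the "Heisenberg" argument, e.g. in Python:

B = 1.8e6                                    # IF filter bandwidth, Hz
print("group delay ~ 1/B = %.0f ns" % (1e9 / B))   # prints ~556 ns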

Bruce


