[time-nuts] How does sawtooth compensation work?

Tom Van Baak tvb at LeapSecond.com
Mon Jul 18 19:29:49 EDT 2016


> I've read Tom's page about sawtooth PPS jitter and I believe I understand where it comes from.

A GPS timing receiver solves a bunch of equations at least once a second and it ends up with a pretty good idea, numerically speaking, of what the time is internally, relative to its local oscillator.

It conveys this precise time to the user through a 1PPS signal. That pulse has to come from somewhere, and in practice the chip uses a gated edge of its LO clock to create the 1PPS edge. That means the 1PPS has some granularity. For example, if the LO is 25 MHz then the period is 40 ns which means the physical 1PPS will be somewhere between -20 ns and +20 ns of the numerical ideal. Similarly, if a mythical GPS receiver had a 500 MHz LO, then the 1PPS could be +/- 1 ns of ideal.

So does that make sense so far? These GPS boards have no way to generate an electrical pulse with arbitrary sub-ns phase from a periodic clock edge. They just "cheat" and pick the closest LO clock edge and call that the 1PPS. Some receivers get a 2x advantage by using either clock edge. So this is the jitter part of 1PPS.
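
To put numbers on it, here is a tiny C sketch of the nearest-edge trick. This is illustration only, not any receiver's actual firmware; the 25 MHz LO and the pulse times are made up:

    #include <stdio.h>
    #include <math.h>

    /* Illustrative sketch only: quantize an ideal 1PPS time to the
       nearest edge of a 25 MHz LO (40 ns period). All values made up. */
    int main(void)
    {
        const double period_ns = 1e9 / 25e6;                 /* 40 ns */
        const double ideal_ns[] = { 3.1, 12.3, 27.7, 38.2 }; /* ideal pulse times within one LO cycle */

        for (int i = 0; i < 4; i++) {
            /* "cheat": pick the closest LO clock edge */
            double edge_ns = round(ideal_ns[i] / period_ns) * period_ns;
            double qerr_ns = edge_ns - ideal_ns[i];          /* always within +/- 20 ns */
            printf("ideal %5.1f ns -> edge %5.1f ns, error %+6.1f ns\n",
                   ideal_ns[i], edge_ns, qerr_ns);
        }
        return 0;
    }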

Now the other factors: 1) the solutions of the equations tend to wander due to changing reception and changing SV positions; 2) the LO is likely not on-frequency, or is even deliberately off-frequency, so you get a modulus or beat-note effect; 3) the LO tends to wander in frequency, especially since it is usually just a cheap XO or TCXO. This is the wander part of 1PPS.

Combine all effects and you get a sawtooth pattern. It varies in look & feel quite a bit. I have some weird and wonderful plots at http://leapsecond.com/pages/MG1613S/ that show how the character of the sawtooth varies. The direction and pitch of the sawtooth depend on the sign and magnitude of the LO frequency error.
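
You can watch the beat-note effect produce the sawtooth with a rough C simulation. All numbers are invented: a 25 MHz LO running 1 ppb fast, so the ideal pulse slips 1 ns per second through the 40 ns clock period:

    #include <stdio.h>
    #include <math.h>

    /* Illustrative simulation: an LO that is slightly off-frequency makes
       the ideal 1PPS slide through the LO period, so the nearest-edge
       quantization error ramps and wraps -- a sawtooth. Numbers invented. */
    int main(void)
    {
        const double period_ns = 40.0; /* 25 MHz LO */
        const double slip_ns   = 1.0;  /* 1 ppb fast: ideal pulse slips 1 ns/s vs the edges */

        for (int sec = 0; sec < 100; sec++) {
            double phase = fmod(sec * slip_ns, period_ns); /* where the ideal pulse falls, 0..40 ns */
            /* nearest edge is at 0 if phase <= 20 ns, else at 40 ns */
            double qerr = (phase <= period_ns / 2) ? -phase : period_ns - phase;
            printf("t=%3d s  sawtooth error %+6.2f ns\n", sec, qerr);
        }
        return 0;
    }

The printed error ramps down to -20 ns, jumps to about +20 ns, and ramps down again, one full sawtooth cycle every 40 seconds with these made-up numbers.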

> What I'd like to understand is how sawtooth compensation works with receivers that support it. Is it that I expect an NMEA sentence with a nanosecond offset value that I add to 

The compensation is simple. The receiver knows the time internally. It picks the closest edge that it can, it knows which edge it picked, and so it knows how far that edge is from the ideal. It just outputs that number to the user in some binary message.

The user then uses a TIC to compare the GPS/1PPS against the OCXO/1PPS, reads the binary quantization correction, and applies it to the TIC reading. With this scheme there is no need for the 1PPS to be *electrically* right, as long as the GPS receiver also tells you *numerically* how far from right it is.
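
In code the user side is just one addition per second. A minimal sketch with made-up values; note that the sign convention, and whether the message refers to the previous or the next pulse, vary by receiver, so check the manual:

    #include <stdio.h>

    /* Illustrative sketch: combine a TIC reading with the receiver's
       reported quantization error. Values made up; the sign convention
       and which pulse the message refers to vary by receiver. */
    int main(void)
    {
        double tic_ns  = 137.4; /* TIC reading: GPS/1PPS vs OCXO/1PPS */
        double qerr_ns = -12.0; /* sawtooth correction from the binary message */

        /* corrected reading behaves as if the GPS edge had been electrically ideal */
        double corrected_ns = tic_ns + qerr_ns;

        printf("raw %.1f ns + correction %+.1f ns = corrected %.1f ns\n",
               tic_ns, qerr_ns, corrected_ns);
        return 0;
    }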

Does that help?

There are some tangents we could go down:
1) There are cases where the inherent dithering you get from sawtooth error is actually hugely beneficial to the design of a GPSDO.
2) One GPSDO design (the Trimble Thunderbolt) is unique in that it has no sawtooth problem, or TIC, or XO or TCXO at all. Instead it directly uses the high-quality OCXO as the receiver's LO. They get away with this clean solution because they are a company that makes their own receiver h/w.
3) Carrier phase receivers with external clock input.

/tvb

----- Original Message ----- 
From: "Nick Sayer via time-nuts" <time-nuts at febo.com>
To: "Chris Arnold via time-nuts" <time-nuts at febo.com>
Sent: Monday, July 18, 2016 3:31 PM
Subject: [time-nuts] How does sawtooth compensation work?


> I've read Tom's page about sawtooth PPS jitter and I believe I understand where it comes from. My current GPSDOs ignore the phenomenon. Certainly at the moment, I'm satisfied with that. The systems gravitate towards PLL time constants that average it all away.
> 
> What I'd like to understand is how sawtooth compensation works with receivers that support it. Is it that I expect an NMEA sentence with a nanosecond offset value that I add to any phase difference observation that I get?
> 
> Sent from my iPhone


