[time-nuts] GPSDO using 100Hz

WarrenS warrensjmail-one at yahoo.com
Mon Nov 24 22:39:30 UTC 2008


Warren

The optimum loop time constant depends on the quality of the local
oscillator and of the GPS receiver's timing signals.
A time constant of several hours is only useful with a very high quality OCXO.

The 100Hz output of an M12+T is phase-jerked into alignment with the GPS
second once every second, as is the 10kHz output from the Jupiter-T GPS receiver.
The variable pulse width of the 100Hz (and 10kHz) outputs does no favours
to an XOR phase detector; it's better to use the leading edges of these signals.

When one uses a low-resolution phase detector with dither, as in the
Brooks Shera circuit, then making 100 or 10,000 measurements of the phase
error every second can, if the dither is of the right form, improve the
effective resolution. However, surely the timing quantisation error of
the leading edges of the 100Hz (or 10kHz) outputs limits the potential
improvement?
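
A minimal Python sketch of that averaging idea (all numbers are
illustrative, not M12+T specifications): with about one LSB of uniform
dither the mean of many coarse readings converges on the true phase,
while without dither averaging gains nothing.

    import random

    true_phase_ns = 13.7   # hypothetical phase error to be measured, ns
    lsb_ns = 100.0         # hypothetical coarse phase-detector resolution, ns
    n_samples = 10_000     # e.g. one second of 10kHz comparisons

    def quantize(x, step):
        # Round to the nearest multiple of step: the coarse detector.
        return step * round(x / step)

    # Without dither every reading is identical, so averaging gains nothing.
    plain = sum(quantize(true_phase_ns, lsb_ns)
                for _ in range(n_samples)) / n_samples

    # With uniform dither spanning one LSB the quantisation error averages
    # out (up to the receiver's own edge jitter, which this sketch ignores).
    dithered = sum(quantize(true_phase_ns + random.uniform(-lsb_ns/2, lsb_ns/2),
                            lsb_ns)
                   for _ in range(n_samples)) / n_samples

    print(plain, round(dithered, 2))   # 0.0 versus roughly 13.7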

One can do much better with an inexpensive processor with little
external hardware other than a high resolution DAC (even that can be
implemented in software and hardware within the processor, together with
a couple of opamps).
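
One common way to get that software DAC (a sketch of one possibility,
not necessarily the scheme meant above) is a first-order delta-sigma
modulator: the processor emits a one-bit stream on a pin, and an opamp
low-pass filter recovers its long-term average as the EFC voltage.

    def delta_sigma_bits(target, n):
        # First-order delta-sigma: the 1-bit stream's average equals
        # `target` (0.0..1.0); resolution is set by the filtering time,
        # not by a DAC chip.
        acc, bits = 0.0, []
        for _ in range(n):
            acc += target
            if acc >= 1.0:
                acc -= 1.0
                bits.append(1)
            else:
                bits.append(0)
        return bits

    stream = delta_sigma_bits(0.30517, 100_000)
    print(sum(stream) / len(stream))   # ~0.30517 after filtering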

Bruce

******************
Bruce
It would seem we are now in agreement and on the same track in most areas.

   >B) A time constant of several hours is only useful with a very high quality OCXO.
Agreed: a high quality osc needs a long TC, or else the loop will degrade its noise performance.
On the other hand, if using a short time constant (for whatever reason) there is little
need for a high quality OCXO; a short-term-stable osc will give about the same results.
AND if you don't care about short-term noise, such as when you are only averaging the counts
over say an hour or a day to compare small phase shifts and get very accurate frequency results,
then any osc, even the most crappy VCO, will do if it is updated fast enough to keep it from skipping counts.
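
For example (illustrative numbers only): if each phase reading is good
to about 100 ns, then two readings taken a day apart pin down the average
frequency to roughly 1e-7 s / 86,400 s, i.e. about 1.2e-12, no matter
how noisy the oscillator is from second to second, as long as it never
slips a whole count between updates.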

  >B) The variable pulse width of the 100Hz (and 10kHz) outputs does no favours
     to an XOR phase detector; it's better to use the leading edges of these signals.
We agree again. That is why they MUST first be divided by two, using the correct edge.
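
To make that concrete, here is a toy Python model of the divide-by-two
(in hardware it is a single flip-flop toggled on the chosen edge): it
turns the narrow, variable-width pulses into a 50% duty-cycle square
wave that an XOR phase detector can digest.

    def divide_by_two(samples):
        # Toggle flip-flop: the output changes state on each rising edge,
        # so only the leading-edge timing matters, not the pulse width.
        out, prev, q = [], 0, 0
        for s in samples:
            if s and not prev:
                q ^= 1
            out.append(q)
            prev = s
        return out

    # Narrow, variable-width pulses (like the receiver outputs)...
    pulses = [1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0]
    print(divide_by_two(pulses))   # -> [1,1,1,1, 0,0,0,0, 1,1,1,1]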

  >B) making 100 or 10,000 measurements of the phase error every second can, 
        if the dither is of the right form, improve the effective resolution. 
        However surely the timing quantisation error of the leading edges of the 100Hz (or 10kHz) 
       outputs limits the potential improvement?  
Yep, there is a limit to how much improvement is available; it can't get better than perfect.
The rule of thumb is that the improvement is the square root of the number of samples for random noise.
For the non-random noise of the 100 Hz signal, the improvement can be anywhere from zero to 1/(number
of samples). Typically I'm seeing about a 50-to-one improvement, with a worst case of no improvement
for short periods lasting under a minute (without the addition of a simple processor).
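
If it is the 100Hz output being averaged, N = 100 per one-second update:
the random-noise rule predicts sqrt(100) = 10, and the best case for a
well-behaved deterministic error is the full 100, which is why the
observed 50-to-one figure sits between the two.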

   >B) One can do much better with an inexpensive processor...
I completely agree; one can always do better with something.
But my point is that one can do 'good enough' for many applications with a lot less.

 >B)  with little external hardware other than a high resolution DAC
      (even that can be implemented in software and hardware within 
      the processor together with a couple of opamps).
Don't even need that much, most of the time, if you provide a seldom-changed
coarse adjustment along with the fine adjustment.
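
As a sketch of that coarse/fine scheme (component values and bit widths
hypothetical): two low-resolution outputs are summed through a resistor
ratio so the fine channel spans only a small slice of the coarse range,
and the loop bumps the coarse value only when the fine one nears a rail.

    def efc_volts(coarse, fine, vref=5.0, bits=8, fine_span=0.02):
        # Two hypothetical 8-bit channels: ~19.5 mV coarse steps and
        # ~0.39 mV fine steps, with the fine range overlapping several
        # coarse steps so a coarse bump never strands the loop.
        coarse_v = vref * coarse / 2**bits
        fine_v = vref * fine_span * fine / 2**bits
        return coarse_v + fine_v

    print(round(efc_volts(128, 100), 4))   # 2.5391 V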


Warren


