[time-nuts] Space mission comes to an end because of a "computer time tagging" problem

Jim Lux jimlux at earthlink.net
Sat Sep 21 07:32:03 EDT 2013


On 9/21/13 2:30 AM, Rob Kimberley wrote:
> David,
>
> The satellite has probably got a Rb as its clock (hopefully more than one).

Very, very few deep space probes carry a Rb (I can't think of any off 
hand).  Regular old quartz, usually some sort of TCXO.  If they are 
doing radio science, then it might carry a USO, which is essentially a 
high quality OCXO.

There's relatively little on a spacecraft that needs precision timing. 
What typically happens is that the telemetry coming down includes the 
"current time" from whatever clock is on board.  The "spacecraft time" 
tag will be referenced to a particular bit in the telemetry frame. In 
the "at the tone the time is" model, the "tone" is a particular bit.

Then on the ground, we time tag (with an atomic clock) when the 
telemetry frame is received, giving you "Earth Received Time" or ERT. 
Someone on the ground then does a time correlation, figuring out 
what spacecraft time corresponds to what TAI time, allowing for the 
various factors like the light time from the spacecraft to the earth station.
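
Just to make that concrete, here's a minimal sketch of the correlation 
step in Python. The frame fields, the linear SCLK-to-TAI fit, and all 
the numbers are invented for illustration; real missions use proper 
SCLK kernels and much more careful light-time modeling:

# Hypothetical time-correlation sketch: pair the spacecraft clock (SCLK)
# value tagged to a reference bit in each telemetry frame with the Earth
# Received Time (ERT) stamped by the station's clock, back out the
# one-way light time, and fit a linear SCLK-to-TAI model.

C = 299_792_458.0  # speed of light, m/s

def transmit_time_tai(ert_tai_s, range_m):
    """TAI (seconds) at the spacecraft when the reference bit left the antenna."""
    return ert_tai_s - range_m / C

def fit_sclk_to_tai(pairs):
    """Least-squares fit tai ~= offset + rate * sclk from (sclk, tai) pairs."""
    n = len(pairs)
    sx = sum(s for s, _ in pairs)
    sy = sum(t for _, t in pairs)
    sxx = sum(s * s for s, _ in pairs)
    sxy = sum(s * t for s, t in pairs)
    rate = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - rate * sx) / n
    return offset, rate

# Three frames: (SCLK count, ERT as TAI seconds, range in meters) -- all invented
frames = [(1_000.0, 500_000_100.0, 1.2e12),
          (2_000.0, 500_001_100.0, 1.2e12),
          (3_000.0, 500_002_100.0, 1.2e12)]
pairs = [(sclk, transmit_time_tai(ert, rng)) for sclk, ert, rng in frames]
offset, rate = fit_sclk_to_tai(pairs)
print(f"TAI ~= {offset:.3f} + {rate:.6f} * SCLK")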

When sequences (lists of commands and the times at which they are to be 
executed) are built, they're in spacecraft time. So the folks on the 
ground take the current best estimate of the transformation between 
"earth time" and "spacecraft time" and convert the desired execution 
times accordingly.

Since spacecraft operations are fairly slow paced, the accuracy needed 
is on the order of seconds.  Even for fairly critical things like 
trajectory correction maneuvers, I think they've got a fair amount of 
slop in the system (tens of seconds?) for when the burn has to occur 
(the thrust is small and they run for a long time).

This is partly why autonomous entry, descent and landing (EDL) for 
something like Curiosity is so impressive and useful. They spend a lot 
of time just before EDL carefully correlating the time tags and the 
measurements, using Doppler and range to get the state vector refined as 
well as possible, lining everything up with the internal inertial 
measurement unit, and giving it the best starting point they can.  Then 
they send up their last best estimates and the process starts.





> All I can imagine is that there has been a major clock failure of some sort,
> and everything is in free run and unable to sync up with ground.


Not likely. Nothing in the comm process requires time sync.

The radio on the spacecraft is always on, listening for a signal. It 
sits at a fixed frequency (determined by the TCXO inside the radio).
The ground station transmits, and the frequency is swept slowly across a 
range where the spacecraft is likely to be listening. Usually we have 
been keeping track of what the "best lock frequency" is vs. temperature, 
so with a temperature estimate and a Doppler estimate, the range to 
sweep isn't all that big.
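
A rough Python sketch of how that sweep plan falls out of the 
predictions; the uplink frequency, the temperature model, and the 
uncertainties are all invented for illustration:

# Hypothetical uplink sweep plan: center on the predicted best lock
# frequency (from a temperature model) adjusted for predicted Doppler,
# padded by the uncertainty in both.
# Sign convention: range_rate > 0 means receding.

F_UPLINK_NOMINAL = 7.15e9   # Hz, illustrative X-band uplink carrier
C = 299_792_458.0           # m/s

def best_lock_freq(temp_c):
    """Invented linear temperature model: -0.1 ppm/degC about 20 degC."""
    return F_UPLINK_NOMINAL * (1.0 - 0.1e-6 * (temp_c - 20.0))

def sweep_plan(temp_c, temp_sigma_c, range_rate_mps, range_rate_sigma_mps):
    # transmit so the frequency *arriving* at the spacecraft hits best lock
    center = best_lock_freq(temp_c) * (1.0 + range_rate_mps / C)
    pad = (F_UPLINK_NOMINAL * 0.1e-6 * 3 * temp_sigma_c          # 3-sigma temp
           + F_UPLINK_NOMINAL * 3 * range_rate_sigma_mps / C)    # 3-sigma Doppler
    return center - pad, center + pad

lo, hi = sweep_plan(temp_c=5.0, temp_sigma_c=2.0,
                    range_rate_mps=-12_000.0, range_rate_sigma_mps=50.0)
print(f"sweep {lo/1e6:.3f} to {hi/1e6:.3f} MHz ({(hi - lo)/1e3:.1f} kHz wide)")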

As the signal sweeps across the receiver's bandwidth, the oscillator in 
the receiver locks to the uplink signal (and follows it). It's a fairly 
straightforward PLL, with a 2nd or 3rd order loop filter. The loop 
bandwidth depends on the received signal strength but is in the 10 Hz range.
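
For a textbook second-order loop (not the actual flight transponder 
design), the loop (noise) bandwidth, damping, and natural frequency are 
tied together roughly like this; a sketch assuming 10 Hz and a damping 
factor of 0.707:

import math

def natural_freq(bn_hz, zeta=0.707):
    """Second-order PLL: Bn = (wn/2) * (zeta + 1/(4*zeta)); solve for wn (rad/s)."""
    return 2.0 * bn_hz / (zeta + 1.0 / (4.0 * zeta))

wn = natural_freq(10.0)
print(f"wn ~= {wn:.1f} rad/s (~{wn / (2 * math.pi):.1f} Hz)")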

The radio's transmitter is driven by the same oscillator (the VCTCXO) 
as the receiver's carrier tracking loop PLL.  So the transmitted 
signal frequency and phase have a fixed relationship with the received 
frequency and phase. This is called the "turnaround ratio" and is 
880/749 for deep space X band.  By comparing the phase of the uplink 
signal at the DSN and the downlink signal received, we can measure the 
range and Doppler very precisely.  (For instance, we know the range to 
Cassini to within a few cm, even though it's at Saturn, some billion km away.)
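
A small numeric illustration of the coherent turnaround in Python; the 
880/749 ratio is the standard deep-space X-band value mentioned above, 
while the uplink frequency and range rate are made up:

# Coherent two-way link: the downlink carrier is slaved to the received
# uplink carrier by the turnaround ratio, so two-way phase/Doppler
# measures range and range rate.
# Sign convention: range_rate > 0 means receding.

C = 299_792_458.0
TURNAROUND = 880.0 / 749.0        # deep-space X band up -> X band down

f_uplink = 7.15e9                 # Hz, illustrative uplink carrier
range_rate = -12_000.0            # m/s, negative = approaching (illustrative)

f_at_spacecraft = f_uplink * (1.0 - range_rate / C)    # one-way Doppler up
f_downlink_sent = f_at_spacecraft * TURNAROUND         # coherent turnaround
f_received = f_downlink_sent * (1.0 - range_rate / C)  # one-way Doppler down

two_way_doppler = f_received - f_uplink * TURNAROUND   # shift vs. static case
print(f"downlink near {f_received/1e9:.6f} GHz, "
      f"two-way Doppler {two_way_doppler:+,.0f} Hz")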

What does happen over the life of the mission is that the "rest 
frequency" of the VCTCXO in the radio gradually shifts (radiation, aging, 
etc.). It's possible that it shifts so far that we can't find it 
(unlikely, because what you'd do is start sweeping a wider range), or 
that something in the carrier tracking loop has drifted so far that it 
can't acquire and track the uplink carrier (the tuning voltage can't 
change far enough or fast enough), or something like that. Or the 
frequency has gotten out of the "sweet spot": at some point the gain 
falls off a bit, and if you're operating on the low gain antenna, even 
if you turn the power to 11 at DSN, there's just not enough SNR to get 
the carrier to lock.

These things are operating at the ragged edge of performance. When we 
test them, we're using received signal powers in the -160 to -150 dBm 
range ("threshold testing"). With a noise figure of a few dB and a loop 
(detection) bandwidth of 10 Hz, that's where the noise floor is. There 
are a few dB of variation in the "threshold Prec" as a function of 
receive frequency (hence the term "best lock frequency").
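
The arithmetic behind that threshold, as a quick check (290 K thermal 
noise density, an assumed 3 dB noise figure, and the 10 Hz loop bandwidth):

import math

kT_dbm_per_hz = -174.0   # thermal noise density at ~290 K
noise_figure_db = 3.0    # assumed "few dB" receiver noise figure
loop_bw_hz = 10.0        # carrier tracking (detection) bandwidth

noise_floor_dbm = kT_dbm_per_hz + noise_figure_db + 10 * math.log10(loop_bw_hz)
print(f"noise floor in the loop bandwidth ~= {noise_floor_dbm:.0f} dBm")
# ~ -161 dBm, which is why threshold testing lives around -160 to -150 dBm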

We typically monitor the "loop stress" on the receiver (essentially like 
watching the EFC voltage on a GPSDO) and if the locked tuning voltage 
starts getting close to the rail, people start to worry.


