[time-nuts] Timing on Ethernet

Javier Serrano Javier.Serrano at cern.ch
Sun Aug 5 05:44:02 EDT 2007


Hi nuts,

I'm working with Pablo at CERN on the General Machine Timing (GMT)
system, and since this subject has generated quite a lot of questions I
thought I'd give you some more context information:

- CERN is a big (in fact the world's biggest) complex of particle
accelerators. The main end product is proton collisions at the LHC at an
energy of 7 TeV, but that's not the only one. A typical trip of a proton
would begin in Linac 2 (linear accelerator), then pass to the Booster
(small synchrotron), then to the PS (a bigger synchrotron), then to the
SPS (yet bigger), and then to the LHC (the biggest one). But if the LHC is
happy and collisions are ongoing, we can send the Linac 2 protons down
another path. For instance, from the PS they could go towards the
Antiproton Decelerator, where they would hit a target producing
anti-protons, which would be decelerated almost to a standstill and then
mixed with positrons (anti-electrons) to create anti-hydrogen, i.e.
anti-matter.
All this is clearly explained in Dan Brown's 'Angels & Demons' ;)

- So CERN really looks like a factory with several production lines for
different types of particles. Orchestrating all this time-multiplexed
traffic of particles is a card (in fact a set of cards) called the
Central Timing Generator, which drives (mainly for legacy reasons) a set
of networks, fiber first and then multi-drop RS-422, carrying messages
sent at very precise
times. On the receiving end, VME, PCI and PMC cards can listen to these
messages and be pre-programmed to react to a given message by a certain
action. These actions can be generating a pulse on the front panel to
synchronize external hardware, or generating a bus interrupt to
synchronize real-time tasks running in different computers all around the
complex.
These modules also contain complicated counters with many modes of
operation, and they can time-tag events precisely using CERN-made HPTDC
(High Performance Time to Digital Converter) chips. The time-tagging
precision is constrained by our timing distribution network rather than
by the HPTDC performance, which can go down into the 10s of ps region.

- For the LHC, due to begin operation in 2008, we took the safest route and
installed the same timing system. This made managing injection from the
SPS straightforward, but it brought some other problems. One of them is
that we are requested, as Pablo said, to guarantee that every timing
receiver board (around 500 of them in the LHC) will receive the timing
messages at the same time, within 1 us. Our timing network does not have
two-way calibration capability, so we end up walking around with a
battery-powered Cs4000 from Symmetricom and calibrating all the outputs.
The problem with this is that cabling people can change the fiber
routing without asking us during the winter shutdown (CERN is a big
place) and there is no way for us to know.
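Just to put a number on that, here is a back-of-the-envelope sketch (my
own figures, assuming a group index of roughly 1.47 for standard
single-mode fiber, not anything taken from the GMT design) of how much
rerouted fiber would eat the whole 1 us budget:

C = 299792458.0                  # speed of light in vacuum, m/s
GROUP_INDEX = 1.47               # assumed group index of the fiber
ns_per_meter = GROUP_INDEX / C * 1e9   # roughly 4.9 ns of delay per meter

budget_ns = 1000.0               # the 1 us alignment requirement, in ns
print(f"{ns_per_meter:.2f} ns/m -> about {budget_ns / ns_per_meter:.0f} m "
      f"of extra fiber uses up the full budget")

So a reroute of a couple hundred meters, which is nothing on the scale of
the CERN site, already consumes the whole tolerance.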

- While I agree that a 1 us change in cabling would indeed be a major one, we
do have places at CERN which use the same timing system and would
greatly benefit from a 1 ns two-way calibrated scheme. One of them is the
SPS extraction towards Gran Sasso: SPS protons are extracted towards a
target which generates muons, which, after being stopped by some meters
of concrete, only leave a trace of neutrinos that happily make it to
the Gran Sasso National Laboratory in Italy after a 732 km trip through
the crust of the Earth. We have a simple GPS time transfer scheme with
our friends in Gran Sasso to cross-correlate neutrino spills with events
seen in Gran Sasso and discriminate against events coming from cosmic rays, but our
GPS station is in the CERN Control Center (CCC), 3 km away from the SPS
extraction line where we need to do the time-tagging. We will be testing
a two-way scheme based on fiber and circulators at each end this autumn
for the Gran Sasso problem. Any experience we gain from that will
directly apply to the broader problem of re-inventing our timing system.
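For those who have not played with two-way schemes, here is a minimal
sketch of the arithmetic we would rely on (the names and numbers are mine
and purely illustrative, not a CERN design). A sends at t1 on its own
clock, B receives at t2 on its clock, B replies at t3, and A receives at
t4; if the fiber delay is the same in both directions, the four
timestamps give both the clock offset and the one-way delay:

def two_way(t1, t2, t3, t4):
    # Assumes the propagation delay is identical in both directions.
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # B's clock minus A's clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way propagation delay
    return offset, delay

# Example: about 3 km of fiber (roughly 15 us one way), with B's clock
# running 250 ns ahead of A's.
t1 = 0.0
t2 = 15e-6 + 250e-9         # arrival at B, read on B's (offset) clock
t3 = t2 + 1e-6              # B turns the message around 1 us later
t4 = t3 - 250e-9 + 15e-6    # arrival back at A, read on A's clock
print(two_way(t1, t2, t3, t4))   # roughly (2.5e-07, 1.5e-05)

Any asymmetry between the two directions shows up as an error of half
that amount in the offset estimate; putting both directions on the same
fiber, with circulators at each end, keeps the two paths on the same
glass and so keeps that asymmetry small.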

- To answer Bruce's question on LHC timing distribution: there are 8
surface buildings spaced at regular intervals around the 27 km
circumference. Fiber takes timing messages from the CCC to each of these
buildings in a star configuration. Then we go down around 100 meters to
the tunnel, where we use shielded twisted pair for compatibility with
the legacy standard we chose to maintain.
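As an aside, even that final drop is not negligible against 1 us.
Assuming a velocity factor of about 0.66 for the shielded twisted pair
(my assumption, not a measured value):

C = 299792458.0             # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66      # assumed for the shielded twisted pair
drop_length_m = 100.0       # the run from a surface building to the tunnel
delay_ns = drop_length_m / (VELOCITY_FACTOR * C) * 1e9
print(f"100 m of twisted pair is roughly {delay_ns:.0f} ns, "
      f"about half of the 1 us budget")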

- This year our Hardware and Timing team has also been assigned
responsibility for real-time field buses, so we figured we could put a
timing renovation project in the same basket and look for some synergies.
For instance, we could come up with a very deterministic field bus which
could also be used as a timing system.
Two-way calibration would be a requirement, and we could also do with a
bit more bandwidth than at present (500 kb/s). Since we have complete
freedom to choose a physical layer, we wanted to first have a hard look
at Ethernet, which is probably the only physical layer I'd bet would
still exist in 20 years. There was some vague feeling that choosing
Ethernet would enable us to interface more easily to the rest of the
world, but under close scrutiny we arrived at the same conclusion as
Magnus and Jack: Ethernet brings us quite a lot of complications, and we
could interface to the outside in other ways. The debate is ongoing at
CERN. Ethernet's biggest point in its favor is that it could merge two
cables into one: data and timing. Cabling at CERN is a major source of
expenditure,
so any way to go from a two-cable to a one-cable solution is interesting
for us. Before moving on, we need to understand exactly what Ethernet
can give us and what the price to pay would be. This is why we are so
interested in Sam Stein's paper.  
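To make one of those complications concrete, one illustrative number
(standard frame sizes assumed, nothing specific to any network of ours):
if a timing message queues behind a single maximum-size frame in a
switch, the serialization time alone blows well past 1 us unless the
traffic is strictly controlled or every hop accounts for its own queuing
delay:

# Serialization time of one maximum-size Ethernet frame: 1518 bytes of
# frame plus 8 bytes of preamble and a 12-byte inter-frame gap.
FRAME_BITS = (1518 + 8 + 12) * 8

for name, rate in (("10 Mb/s", 10e6), ("100 Mb/s", 100e6), ("1 Gb/s", 1e9)):
    print(f"{name}: one queued frame adds up to "
          f"{FRAME_BITS / rate * 1e6:.1f} us of delay variation")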

- In any event, we have never contemplated mixing timing with the
technical network supported by IT. First of all, I am not sure they'd let
us do that. Secondly, I have a hard time imagining how we could do any
kind of worst-case analysis on it. The solution we are considering is
offering our users a field bus based on Ethernet that can also serve as a
timing network through appropriate usage. But it is clear that each user
would have to build their own dedicated network.

So we're still in the brainstorming phase, and we are very happy to
listen to the very knowledgeable people on this list; it's really a treat.
Thank you very much!

Javier  


