The Advent Of Atomic Time

Apollo 11 launch (NASA image S69-39961)

This is, to a large extent, a companion piece to my post about leap seconds, in which I described how the irregular rotation of the Earth means that the time as measured by our atomic clocks would fall out of synchrony with the actual movement of the sun in the sky, were it not for the occasional addition of a leap second. In this post, I’m going to look back at how the various systems of time measurement we inherited from the nineteenth century were forced to adjust to the advent of extremely accurate atomic clocks in the 1950s.

But this is also, as you might have guessed from the head photograph, relevant to my continuing project to derive Project Apollo orbital data. NASA’s early space programme was conducted during a period when time-keeping standards were in a state of flux, as I’ll describe, and that has implications for how we accurately specify the timing of significant orbital events, like translunar injection and atmospheric re-entry.

But first, something about the various timescales we use.

In the nineteenth century, time was a pretty straightforward thing. A day was the length of time it took the Earth to rotate on its axis relative to the sun. Because that duration varies a little during the course of a year, clocks were set according to the average position of the sun—to a “mean time”. And the mean time measured at the Greenwich Observatory in the UK was Greenwich Mean Time (GMT), which was adopted as an international standard at the Meridian Conference in Washington, in 1884.

This type of day, measured relative to the position of the sun, is strictly called a solar day, and it’s the only kind of day relevant to most people. So GMT was the basis for civil time—the time displayed on public clocks.

Astronomers are also interested in another type of day, however—the time it takes the Earth to rotate once on its axis relative to the fixed stars. This is called a sidereal day, and it’s about four minutes shorter than the solar day. Like the solar day, the sidereal day is measured in hours, minutes and seconds, but each of these measures is just 99.7% as long as the ones we’re used to. There’s a Greenwich Mean Sidereal Time (GMST) that describes the Earth’s rotational position relative to the stars, and that’s what tells astronomers in which direction they need to point their telescopes.
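The relationship between the two kinds of day is easy to check with a few lines of Python. The 86,164.0905-second figure for the mean sidereal day is the conventional modern value, which I've supplied here for illustration:

```python
# Mean solar day versus mean sidereal day, in SI seconds.
SOLAR_DAY = 86400.0
SIDEREAL_DAY = 86164.0905  # conventional value for the mean sidereal day

ratio = SIDEREAL_DAY / SOLAR_DAY      # the "99.7%" mentioned above
shortfall = SOLAR_DAY - SIDEREAL_DAY  # the "about four minutes"

print(f"sidereal/solar ratio: {ratio:.5f}")
print(f"sidereal day shorter by {shortfall / 60:.2f} minutes")
```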

But astronomers are also interested in the solar day, if only because they need to know when the sun rises and sets. And for slightly complicated reasons that I’ve fudged straight past in the preceding three paragraphs*, they took to referring to their own version of GMT as Universal Time (UT) during the 1920s.

It turned out there was a certain irony to the “universal” bit of that designation, because since the nineteenth century there had been a suspicion that the Earth’s rate of rotation was not constant, and that therefore any definition of time based on the Earth’s rotation would be similarly inconstant. And so it proved to be—by the 1920s, it was evident that the movement of the moon and planets ran to a more steady timescale than the rotation of the Earth. While astronomers still needed Universal Time and Mean Sidereal Time, they also needed a more precise timescale against which to measure the dynamics of the solar system.

And so was born the idea of Dynamical Time—time calibrated by the observation of solar system events, particularly the movement of the moon. This was formally adopted under the name Ephemeris Time (ET) in 1952.

It was immediately evident that a clock ticking out Universal Time would diverge steadily from one marking Ephemeris Time—UT seconds were longer than ET seconds, because the Earth was rotating progressively more slowly. From the way in which Ephemeris Time was formally defined, it turned out that UT and ET had been in perfect agreement at some moment in 1902 but that, by the 1950s, the slower ticking of a UT clock meant that Universal Time was about 30 seconds behind Ephemeris Time, and that difference has been increasing almost ever since.

So we embarked on the 1950s with a set of timescales based on the rotation of the Earth (GMT, GMST, UT), and one based on the movement of the moon and planets (ET). Universal Time soon separated into three flavours: UT0, the raw value observed at a particular site, uncorrected for the wobble of the Earth's poles; UT1, which is UT0 corrected for that polar motion, and so directly tracks the Earth's rotation; and UT2, which is UT1 with a small correction applied to smooth out the predictable seasonal changes in Earth's rotation. UT0 is no longer used, and won't bother us here. UT2 has likewise fallen into disuse, but at the time I'm discussing was considered the gold standard for civil time-keeping, so will feature prominently in what follows.
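For the curious, the seasonal correction that turns UT1 into UT2 is a fixed trigonometric formula. The sketch below uses the coefficients conventionally quoted for it (in seconds); check a standard reference before relying on them:

```python
import math

def ut2_minus_ut1(t):
    """Conventional seasonal correction from UT1 to UT2, in seconds.

    t is the fraction of the Besselian year elapsed (0 to 1).
    Coefficients are the conventionally quoted values.
    """
    return (0.022 * math.sin(2 * math.pi * t)
            - 0.012 * math.cos(2 * math.pi * t)
            - 0.006 * math.sin(4 * math.pi * t)
            + 0.007 * math.cos(4 * math.pi * t))
```

The correction never amounts to more than a few hundredths of a second, which gives a sense of how small the seasonal wobble in Earth's rotation actually is.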

But the time-keeping game changed forever during the 1950s, with the invention of the caesium atomic clock, which soon proved to be a more precise time-keeper than anything that could be achieved by the most careful astronomical observations. Given the variability of the UT second, atomic clocks were calibrated to tick off the standard second defined by Ephemeris Time.

The Système International (SI) system of units was a little slow to catch up with the advantages of atomic time. Until 1960, it continued to define the second as 1/86,400 of a mean solar day. Then it shifted to the Ephemeris Time definition, which tied the second to the Earth's orbital period (specifically, to a fraction of the tropical year 1900). And then, only seven years later, it switched to a definition of the second based on the calibration of the caesium atomic clock, which it has stuck with ever since.
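The three successive definitions can be summarized as constants. The tropical-year fraction is the figure written into the 1960 definition, and the caesium figure is the one still in force today:

```python
# The three successive SI definitions of the second:
SECOND_AS_SOLAR_FRACTION = 1 / 86_400            # of a mean solar day (until 1960)
SECOND_AS_YEAR_FRACTION = 1 / 31_556_925.9747    # of the tropical year 1900 (1960-1967)
CAESIUM_PERIODS_PER_SECOND = 9_192_631_770       # Cs-133 hyperfine periods (since 1967)
```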

It wasn’t until 1970 that we’d see a standard definition of atomic time, International Atomic Time (TAI), but TAI was just the culmination of a series of other atomic timescales used during the ’50s and ’60s, and continues seamlessly from them. As a result, we find that TAI was effectively synchronized with Universal Time (specifically, UT2) back in 1958.

Trouble was, of course, that Universal Time (and GMT) immediately started to drift away from the time kept by the atomic clocks. So during the 1960s we saw a struggle to come up with a way of somehow applying the extremely regular output of atomic clocks to the slippery and evolving timescale of the rotating Earth. This hybrid of atomic time and UT was eventually named Coordinated Universal Time (UTC). It consisted of a set of instructions issued by the Bureau International de l’Heure (BIH), telling people how to modify the output of an atomic clock in order to produce a time-signal that closely matched UT2. From 1961 to 1972, this consisted of a frequency shift and the occasional step-change of a twentieth or a tenth of a second, but in 1972 the BIH shifted to one-second steps designed to keep UTC within 0.9 seconds of UT1—the “leap seconds” I’ve previously written about. The BIH was dissolved in 1987, handing over leap-second duties to the International Earth Rotation Service, but UTC continues as the civil time standard applied around the world.

So that’s how atomic time became the basis for civil time. It soon also took over the role of Ephemeris Time. In 1976 an atomic standard called Terrestrial Dynamical Time (TDT) was synchronized with ET, and later renamed to just plain Terrestrial Time (TT). This is the timescale we currently use when figuring out orbital motions in the vicinity of the Earth. There’s another one, Barycentric Dynamical Time (TDB), used for calculating high-precision orbits in the rest of the solar system—it exists because of General Relativity, and is always within a couple of milliseconds of TT, so can often be neglected.

Because TAI and TT tick at the same rate, they bear a constant relationship to each other: TT = TAI + 32.184 seconds. Where does that offset come from? It’s because Ephemeris Time (uniform with TT) was synchronized with UT back in 1902, whereas TAI was synchronized with UT in 1958. And during that time, UT had drifted away from ET by 32.184 seconds.
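In code, the conversion is a one-liner (the function name here is my own):

```python
TT_MINUS_TAI = 32.184  # seconds; fixed by definition

def tai_to_tt(tai_seconds):
    """Convert an epoch expressed in TAI seconds to TT seconds."""
    return tai_seconds + TT_MINUS_TAI
```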

So nowadays our timescales look very different from the way they were in 1950. We have the Earth’s rotation, defining UT1, monitored by Very Long Baseline Interferometry, and reported by the International Earth Rotation Service. UT1 is interconvertible with GMST, so if we need to calculate GMST, we go through UT1. Civil times everywhere are based on UTC, which is an atomic timescale with added leap seconds to keep it close to the observed values of UT1. And the fine detail of solar system dynamics is calculated using TT and its associated atomic timescales.
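As an illustration of that UT1-to-GMST conversion, here’s a sketch using the IAU 1982 expression; the coefficients are the standard published ones, but treat this as illustrative rather than authoritative:

```python
def gmst_hours(jd_ut1):
    """Greenwich Mean Sidereal Time from a UT1 Julian date.

    Uses the IAU 1982 polynomial expression; returns hours in [0, 24).
    """
    t = (jd_ut1 - 2451545.0) / 36525.0  # Julian centuries from J2000.0
    gmst_sec = (67310.54841
                + (876600.0 * 3600 + 8640184.812866) * t
                + 0.093104 * t * t
                - 6.2e-6 * t ** 3)
    return (gmst_sec / 3600.0) % 24.0
```

At the J2000.0 epoch itself this gives the familiar value of about 18.697 hours.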

If I draw a diagram of my discussion so far, you can see that the 1950s, ’60s and ’70s were a period of intense flux for time measurement.

The advent of atomic timekeeping

This has relevance to my long, slow project of extracting orbital data from NASA’s original documentation, because NASA’s engineers would have been navigating to the moon using Ephemeris Time (which we can retrospectively call Terrestrial Time, because the one is a continuation of the other), but the mission documentation gives times according to GMT, or a time zone derived from it—either the time at the Florida launch site, or at Mission Control in Texas. So to correctly derive my Apollo orbits, I need to convert from GMT to TT—just at a time when the relationship between GMT and TT was at its most complicated.

To add to the complexity, the name “Greenwich Mean Time” was sometimes applied to UT1 when used for navigational purposes, so there’s a potential ambiguity to NASA’s usage of “GMT” in its Apollo documentation—did they mean UT1 or the evolving UTC standard? After wading through a lot of documents, I eventually turned up an answer to that question in the hefty Proceedings of the Apollo Unified S-Band Conference, which took place at the Goddard Space Flight Center in July 1965. In the chapter entitled “Apollo Precision Frequency Source And Time Standard”, by R.L. Granata, we’re told that:

The method of obtaining time synchronization is to employ the WWV, HF signals.

WWV is the radio station used by the National Institute of Standards and Technology to broadcast time and frequency standards for the USA. These broadcasts were coordinated internationally by the BIH during the 1960s, and by 1965 had already been tied to the A3 atomic timescale, which was a direct precursor of TAI—so WWV was broadcasting the atomic-based civil time that would soon be known as UTC, and that is almost certainly what is meant by “GMT” in the Apollo documentation.

How do I get from UTC to TT? I need to turn to the Earth Orientation Center at the Paris Observatory, who maintain a dataset called the Earth Orientation Parameters, series C04. This provides daily values for UT1-TAI and UT1-UTC, stretching back to 1962. Bearing in mind the fixed relationship between TT and TAI, this is all I need to create a graph showing how UT1 and UTC were drifting away from TT during the course of the manned Apollo missions:

Time scales during the manned Apollo missions, 1968-72

You can see how, by dint of frequent small step changes and adjustments in clock rate, the BIH kept UTC extremely close to UT1 and UT2 right up to the start of 1972, which was when the leap second was introduced—a small step change at the start of that year brought UTC to exactly 10 seconds away from TAI (42.184 seconds from TT), and the first leap second then occurred at the start of July.

Pulling up the data for 16 July 1969, the date on which Apollo 11 launched, the Paris Observatory tells me that:

UT1-TAI = -7.5505119 s
UT1-UTC = 0.0115221 s

And we know that

TT-TAI = 32.184 s

So:

(TT-TAI) - (UT1-TAI) + (UT1-UTC) = TT-UTC = 39.746 s

This value is essentially the quantity sometimes symbolized ΔT (strictly speaking, ΔT is TT-UT1, which differs here only by the tiny value of UT1-UTC), and it’s our route to converting NASA’s quoted GMT times to Ephemeris Time, or TT. And for strict accuracy, we also need to take note of the value for UT1-UTC, sometimes called ΔUT1. This is the conversion from UTC to UT1, and thence to GMST, which will be needed when I’m converting NASA’s state vectors to orbital elements in my next post on this topic.
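Putting the numbers together for the Apollo 11 launch date, using the values quoted above:

```python
TT_MINUS_TAI = 32.184        # seconds; fixed by definition
UT1_MINUS_TAI = -7.5505119   # seconds; C04 series, 1969-07-16
UT1_MINUS_UTC = 0.0115221    # seconds; likewise

tt_minus_utc = TT_MINUS_TAI - UT1_MINUS_TAI + UT1_MINUS_UTC
print(f"TT-UTC = {tt_minus_utc:.3f} s")  # TT-UTC = 39.746 s
```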

So when NASA tells us Apollo 11 launched at 13:32:00 GMT, we can say that’s equivalent to 13:32:39.746 TT. It’s a significant difference—if we neglect it, it’s equivalent to a 16 kilometre displacement of the launch pad, and a 40 kilometre displacement of the moon. By comparison, the conversion to 13:32:00.012 UT1 is trivial, and could easily be ignored, given the uncertainties in other data. More about that next time.
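The same conversion can be done with Python’s datetime module, which rounds timedelta values to whole microseconds—ample precision here:

```python
from datetime import datetime, timedelta

# Apollo 11 launch, quoted by NASA as 13:32:00 GMT (i.e. UTC)
launch_utc = datetime(1969, 7, 16, 13, 32, 0)

TT_MINUS_UTC = 39.746      # seconds, derived for 1969-07-16
UT1_MINUS_UTC = 0.0115221  # seconds, from the C04 series

launch_tt = launch_utc + timedelta(seconds=TT_MINUS_UTC)
launch_ut1 = launch_utc + timedelta(seconds=UT1_MINUS_UTC)

print(launch_tt)   # 1969-07-16 13:32:39.746000
print(launch_ut1)  # 1969-07-16 13:32:00.011522
```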


* Not like me, I know. If you want to know more about how the name “Greenwich Mean Time” came to be confusingly applied to several slightly different timescales, forcing the astronomers to give their own version a unique name, take a look at Dennis McCarthy’s “Evolution of Timescales from Astronomy to Physical Metrology”.
The progressive divergence of Universal Time stalled and reversed itself during 2020—solar days are getting shorter again, for reasons that are unclear.
