# Tertiary Rainbows, etc

In my last two posts about rainbows, I discussed the formation of the primary and secondary rainbows, respectively, tracing their origins to specific light paths through falling raindrops.

The primary rainbow ray follows a path like this:
For a raindrop at the apex of the rainbow arc, sunlight enters near the top of the drop, bounces once off the back, and then exits the bottom, descending towards the observer’s eye, making an angle of around 41.5º for the green ray shown.

For the secondary rainbow, sunlight enters near the bottom of the drop, is reflected internally twice, and then exits the front of the drop, descending towards the observer at an angle of around 51º.

These rainbow rays are special, representing the maximum or minimum angles of deflection of the incoming ray. And they are associated with a particular offset of the incoming ray in its interaction with the raindrop. I measured offset like this:

The rainbow ray for the primary enters the raindrop at an offset of about 0.86; the secondary rainbow ray at about 0.95.
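If you'd like to check these offsets for yourself, they have a neat closed form: for k internal reflections, the extremal deflection occurs where the squared cosine of the entry angle equals (n² − 1)/k(k + 2), and the offset is just the sine of that angle. Here's a short Python sketch (the function name is mine, and I've assumed a refractive index of 1.337 for green light):

```python
from math import sqrt

def rainbow_offset(k, n):
    """Offset of the extremal-deflection ray for k internal reflections,
    from cos^2(entry angle) = (n^2 - 1) / (k * (k + 2))."""
    cos_sq = (n ** 2 - 1) / (k * (k + 2))
    return sqrt(1 - cos_sq)        # offset = sin(entry angle)

n_green = 1.337                    # assumed refractive index for green light
for k in (1, 2, 3):
    print(k, round(rainbow_offset(k, n_green), 2))
```

For one, two and three reflections this gives offsets of about 0.86, 0.95 and 0.97, matching the rainbow rays of the primary, the secondary, and the tertiary bow discussed below.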

For more about these topics, in particular the importance of maximum and minimum deflections, I direct you back to my previous posts.

I finished my most recent post on this topic with a question: If a single internal reflection produces a primary rainbow, and two internal reflections produce a secondary rainbow, is there such a thing as a tertiary rainbow? And if so, where is it?

The path of the rainbow ray for a tertiary bow looks like this:

The three reflections carry it almost all the way around the raindrop, so that (in contrast to the primary and secondary bows) it leaves the drop heading away from the sun. This tells us that, to see a tertiary bow, we’d need to look towards the sun, rather than (as for the primary and secondary) towards the antisolar point.

Plotting my usual graph of light deflection and transmission against the full range of ray offsets with three internal reflections, I get this:

In contrast to my graphs for the primary and secondary, this one shows the deflection from the “solar point”—the position of the sun. The maximum occurs at an offset of about 0.97, reaching 42.5º for red light, and 37.7º for violet. The angular distance between red and violet is therefore about two-and-a-half times what we see in the primary bow. The light transmission is scaled to match my previous two graphs, and it shows that, because so much light is lost during multiple internal reflections, only about 1% of the tertiary rainbow ray survives to exit the drop. (But that’s equivalent to about 24% of the transmission for the primary rainbow ray, so not catastrophically dim.)
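Those figures can be reproduced from Snell's law and the Fresnel equations, using the refractive indices given in the note at the end of this post. The sketch below is my own check rather than the code behind the graphs, so the extremum it finds can differ from the graph-derived figures above by a couple of tenths of a degree:

```python
import math

N_RED, N_VIOLET = 1.33141, 1.34451   # indices from the note at the end of this post

def extremal_angles(k, n):
    """Entry and refraction angles (radians) of the order-k rainbow ray."""
    theta_i = math.acos(math.sqrt((n ** 2 - 1) / (k * (k + 2))))
    return theta_i, math.asin(math.sin(theta_i) / n)

def tertiary_angle(n):
    """Tertiary rainbow-ray angle (degrees), measured from the solar point."""
    theta_i, theta_r = extremal_angles(3, n)
    deviation = 2 * (theta_i - theta_r) + 3 * (math.pi - 2 * theta_r)
    return 360 - math.degrees(deviation)      # the ray leaves heading sunwards

def ray_transmission(k, n):
    """Unpolarized fraction of light surviving the order-k rainbow-ray path."""
    theta_i, theta_r = extremal_angles(k, n)
    rs = ((math.cos(theta_i) - n * math.cos(theta_r)) /
          (math.cos(theta_i) + n * math.cos(theta_r))) ** 2
    rp = ((n * math.cos(theta_i) - math.cos(theta_r)) /
          (n * math.cos(theta_i) + math.cos(theta_r))) ** 2
    # transmit in, reflect k times, transmit out; every face has the same coefficients
    return ((1 - rs) ** 2 * rs ** k + (1 - rp) ** 2 * rp ** k) / 2

offset = math.sin(extremal_angles(3, N_RED)[0])
red, violet = tertiary_angle(N_RED), tertiary_angle(N_VIOLET)
ratio = ray_transmission(3, N_RED) / ray_transmission(1, N_RED)
print(round(offset, 2), round(red, 1), round(violet, 1), round(100 * ratio, 1))
```

For me this lands at an offset of about 0.97, a red-to-violet width of about 4.8º, and a transmission just under a quarter of the primary's.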

So it seems straightforward enough. We should see a broad, faint tertiary bow, about the same diameter as the primary but centred on the sun. Have you seen that? No, me neither.

The problem is that there’s a lot of light in the sky around the sun, particularly when rain is falling through the line of sight. Les Cowley at Atmospheric Optics calls this the “zero-order glow”, because it is formed from sunlight that passes through the raindrop without being reflected. So there’s always a directly transmitted, zero-order light ray parallel to the tertiary rainbow ray, like this:

Much more light pours through the raindrop in the zero-order glow than survives through three internal reflections. This makes the tertiary rainbow very difficult to see.

But not impossible. There have been sporadic reports over the last few decades, by careful observers who knew what they were looking for. And finally, in 2011, a paper* appeared in the “Light And Color In The Open Air” edition of Applied Optics, entitled “Photographic evidence for the third-order rainbow”. The authors describe taking a photograph under favourable conditions (the sun obscured, a dark cloud in the region that would be occupied by the tertiary bow). The photographer could barely discern a hint of the tertiary rainbow—“only a faint trace of it at the limit of visibility for about 30 seconds”. But after a bit of image processing, a rainbow arc appeared in the resulting photograph. And, after careful analysis, the authors confirmed they had taken the first known photograph of a tertiary rainbow.

You’ll have discerned a pattern. Each successive rainbow (primary, secondary, tertiary) is produced by a rainbow ray which is increasingly offset from the centre of the drop, undergoing one more reflection than its predecessor. The increasing offset means that refraction is greater, with a larger difference between red and violet rays, leading to a broader rainbow. Each additional reflection along the light path means more light lost, and so a fainter rainbow.

Rather than bore you with light paths for higher-order rainbows, I’ll show you them all on one diagram. (The design is based on Jearl Walker’s “Amateur Scientist” column in the July 1977 issue of Scientific American—the calculations and drawing are all my own.)

The rosette shows the first twenty rainbows, produced by the first twenty internal reflections of light that enters the upper half of a water droplet. Light bounces around the drop clockwise, and the difference in deflection for successive rainbows quite quickly converges to settle at a little less than a right angle. So you can see that the primary (labelled “1” in the lower left quadrant) is followed by the secondary in the upper left quadrant, the tertiary in the upper right quadrant, and the quaternary in the lower right. The quinary rainbow (number five) brings the progression almost full circle, and appears just a little anticlockwise of the primary. I won’t bore you with the names for higher orders of reflection, but you should be able to pick out how they go around again, and again, and again, becoming successively wider and fainter.
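That “little less than a right angle” has a tidy explanation. As the order increases, the rainbow ray enters ever closer to grazing incidence, so the internal angle approaches the critical angle arcsin(1/n), and each extra reflection then adds close to 180º − 2·arcsin(1/n) of deviation. A quick check in Python (assuming n = 1.337 for green light):

```python
import math

n = 1.337  # assumed index for green light

def extremal_deviation(k, n):
    """Total deviation (degrees) of the order-k rainbow ray."""
    theta_i = math.acos(math.sqrt((n ** 2 - 1) / (k * (k + 2))))
    theta_r = math.asin(math.sin(theta_i) / n)
    return math.degrees(2 * (theta_i - theta_r) + k * (math.pi - 2 * theta_r))

limit = 180 - 2 * math.degrees(math.asin(1 / n))     # about 83 degrees
steps = [extremal_deviation(k + 1, n) - extremal_deviation(k, n)
         for k in range(1, 20)]
print(round(limit, 1), [round(s, 1) for s in steps[:3]], round(steps[-1], 1))
```

The step from primary to secondary is about 93º, but by the twentieth order the increment has settled to within half a degree of the limiting value.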

Because violet is refracted more than red, all the rainbows have the same layout, with violet always lying clockwise of red.

Rainbows in the lower left quadrant are being reflected downwards, and back towards the light source. So an observer would look for them as circles around the antisolar point. Because the red light is descending more steeply than the violet, red will appear on the outer edges of all these rainbows. Rainbows in the lower right quadrant are formed from light that has been reflected downwards and away from the light source—so the observer must look towards the sun to see them. In this position, violet light is descending more steeply than red light, so all these rainbows have violet on their outer edges.

Light in the top half of the diagram is all spraying upwards—these rainbows would not be visible to an observer standing below the drop. But we’ve neglected the light that enters the lower half of the raindrop, and bounces anticlockwise. It produces its own rosette of rainbows, identical to the one above, except mirrored in the horizontal plane, like this:

The reversed light path means that violet always lies anticlockwise of red for this family of rainbows. So antisolar rainbows with this light path have violet as their outer colour, and solar rainbows have red outermost.

So the rainbows we see in the sky come from a mixture of the two sets of possible light paths in the two diagrams above. So let’s stack them together, and switch from a view centred on the water drop to a view centred on the observer:

The sky is full of overlapping rainbows! You should think of the upper and lower “copies” of each rainbow as being linked by a vertical circular arc sticking out at right angles to your screen, which the observer sees as a (potential) circular rainbow.

My diagram has the sun directly behind the observer, in which case the rainbows in the lower part of the diagram would be invisible under normal circumstances, superimposed on the ground, and each rainbow would form a semicircular arc against the sky. But the sun is usually some distance above the horizon—as it climbs higher, it pushes the antisolar rainbows lower in the sky, but carries the solar rainbows higher. So antisolar rainbows generally form less than a semicircular arc, and when the sun is higher above the horizon than the radius of the rainbow they will drop entirely below the horizon. But solar rainbows will be lifted above the horizon by the sun, forming more than a semicircular arc, and at the extreme when the sun is higher above the horizon than the radius of the rainbow they can form complete circles. (Complete circles are also possible for antisolar rainbows, but only when the observer is looking down on water droplets suspended below the horizon, as from a plane flying over clouds.)
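The geometry of that last paragraph is easy to put into code. In a simple flat-horizon approximation (my own sketch, treating sky angles as if drawn on a flat chart), the visible fraction of a rainbow circle depends only on its angular radius and the sun's altitude:

```python
import math

def visible_fraction(radius, sun_altitude, solar=False):
    """Approximate fraction of a rainbow circle above the horizon.

    The circle is centred at altitude +sun_altitude (solar bows) or
    -sun_altitude (antisolar bows); flat-horizon approximation.
    """
    centre = sun_altitude if solar else -sun_altitude
    if centre >= radius:
        return 1.0                    # complete circle above the horizon
    if centre <= -radius:
        return 0.0                    # dropped entirely below the horizon
    return math.acos(-centre / radius) / math.pi

print(visible_fraction(42.4, 0))                  # sun on the horizon: a semicircle
print(visible_fraction(42.4, 20))                 # antisolar bow: less than half
print(visible_fraction(42.5, 50, solar=True))     # solar bow: a complete circle
```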

Are any of them visible to the naked eye? We know that the tertiary has been spotted very rarely. The quaternary sits right next to it, farther out in the zero-order glow, but additional light losses mean its rainbow ray transmission is just 15% of the primary. I don’t know of anyone who has seen it, but it has been photographed. In fact, the article entitled “Photographic observation of a natural fourth-order rainbow” appeared in the same themed edition of Applied Optics as the tertiary rainbow report. (Early reports of the tertiary photographs had inspired the author to search out the quaternary using image stacking.)

The quinary bow has also been photographed. With rainbow-ray transmission sitting at just 10% of the primary, it is also partially obscured by the brighter secondary bow. But its green, blue and violet portions sit within Alexander’s Dark Band, giving reasonable hope of detecting those colours. And Harald Edens managed to (just) pick out the green stripe of the quinary rainbow in a photograph taken in 2012.

The sixth-order rainbow sits within the bright sky at the inner edge of the primary bow, and seems like a poor candidate for detection. The seventh is better separated from the zero-order glow than the third and fourth, but its rainbow ray transmission is only 6% of the primary. However, it seems likely that some of the higher order bows will yield to the sort of photographic techniques commonly employed in astrophotography these days. Watch this space. Meanwhile, for your delectation, I’ve appended some basic descriptive data for the first twenty rainbows, as featured in my diagrams above.

* Großmann M, Schmidt E, Haußmann A. Applied Optics 50(28): F134-F141
Theusner M. Applied Optics 50(28): F129-F133
Oh, alright, I will. Beyond quinary (five), the sequence goes senary, septenary, octonary, nonary, denary, undenary, duodenary … at which point we need to start making up names until we get to vigenary (twenty).

Note: All values, here and in previous posts, are calculated using a refractive index for red light (wavelength 700nm) of 1.33141, and for violet light (wavelength 400nm) of 1.34451. See the page dealing with refractive index on Philip Laven’s excellent website, The Optics Of A Water Drop, for further information and references.

| Order | Outer Radius (º) | Width (º) | Outer Colour | Solar/Antisolar | Transmission (% Primary) |
| ----: | ----: | ----: | :---- | :---- | ----: |
| 1 | 42.4 | 1.9 | Red | Antisolar | 100 |
| 2 | 53.8 | 3.4 | Violet | Antisolar | 42.6 |
| 3 | 42.5 | 4.8 | Red | Solar | 23.5 |
| 4 | 48.9 | 6.1 | Violet | Solar | 14.9 |
| 5 | 52.9 | 7.4 | Red | Antisolar | 10.3 |
| 6 | 39.6 | 8.7 | Violet | Antisolar | 7.6 |
| 7 | 65.6 | 10.0 | Red | Solar | 5.8 |
| 8 | 29.0 | 11.2 | Violet | Solar | 4.5 |
| 9 | 79.1 | 12.5 | Red | Antisolar | 3.7 |
| 10 | 17.8 | 13.8 | Violet | Antisolar | 3.0 |
| 11 | 93.1* | 15.1 | Red | Solar | 2.6 |
| 12 | 10.1 | 10.1† | Red | Solar | 2.2 |
| 13 | 90.3* | 17.6 | Violet | Solar | 1.8 |
| 14 | 24.4 | 18.9 | Red | Antisolar | 1.6 |
| 15 | 78.5 | 20.1 | Violet | Antisolar | 1.4 |
| 16 | 38.8 | 21.4 | Red | Solar | 1.3 |
| 17 | 66.6 | 22.7 | Violet | Solar | 1.2 |
| 18 | 53.3 | 23.9 | Red | Antisolar | 1.0 |
| 19 | 54.6 | 25.2 | Violet | Antisolar | 0.9 |
| 20 | 67.9 | 26.5 | Red | Solar | 0.8 |

* The order-11 and order-13 rainbows lie predominantly in the solar hemisphere, but extend slightly into the antisolar
† The order-12 rainbow spans the solar point, and therefore overlaps itself—a small disc of blue-violet rainbow (radius 6.1º) is superimposed on a larger disc of green-yellow-red rainbow (radius 10.1º)

# Secondary Rainbows

In my previous post about rainbows, I described how the light of the rainbow was reflected back to our eyes by falling water droplets. For a raindrop at the top of the rainbow arc, light follows a path that enters near the top of the raindrop, bounces off the back, and then exits from the bottom:

The angle between the incoming light ray and the reflected ray ranges from 42.4º for red light to 40.5º for violet light. All other light rays, entering the drop either closer to its centre or closer to its edge, are reflected back at smaller angles—and it’s their smeared and superimposed light which accounts for the white glow visible within the arc of the rainbow, above. I used the name “Offset” for the parameter that measures how close to central the incoming light ray hits the droplet. It’s measured like this:

And by plotting the deflection of light rays at various offsets, along with light transmission along the reflected pathway, I showed how the rainbow forms at the point of maximum deflection, corresponding to an offset of about 0.86:

I called the ray that follows this maximum-deflection route the “rainbow ray”. (See my previous post for much more detail.)

I also produced a little diagram of how light is lost from the rainbow ray each time it encounters a surface at which it is either reflected or transmitted, like this:

So the rainbow ray arrives at your eye containing only about 4.5% of the light that entered the water drop.
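You can verify that 4.5% with a few lines of Python. This is my own back-of-envelope version using the Fresnel equations, assuming n = 1.337 for green light; it relies on the handy fact (covered in my previous post) that every surface along the path has the same reflection coefficients:

```python
import math

def primary_ray_transmission(n=1.337):      # assumed index for green light
    """Unpolarized fraction of incident light surviving the primary
    rainbow-ray path: refract in, reflect once inside, refract out."""
    theta_i = math.acos(math.sqrt((n ** 2 - 1) / 3))   # rainbow-ray entry angle
    theta_r = math.asin(math.sin(theta_i) / n)
    rs = ((math.cos(theta_i) - n * math.cos(theta_r)) /
          (math.cos(theta_i) + n * math.cos(theta_r))) ** 2
    rp = ((n * math.cos(theta_i) - math.cos(theta_r)) /
          (n * math.cos(theta_i) + math.cos(theta_r))) ** 2
    ts = (1 - rs) ** 2 * rs      # s-polarized: transmit, reflect, transmit
    tp = (1 - rp) ** 2 * rp      # p-polarized likewise
    return (ts + tp) / 2

print(round(100 * primary_ray_transmission(), 1))      # ~4.5 per cent
```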

What I want to talk about this time is the fate of the light that undergoes a second internal reflection (labelled “0.5%” in the diagram above).

It’s possible for light from this second reflection to exit the drop when it next encounters the water-air interface, like this:

The second reflection takes the light to the front of the raindrop, and it exits on an upward course, crossing the path of the incoming ray. For this ray to reach the eyes of an observer looking towards the antisolar point (the centre of the rainbow arc), we have to flip the geometry upside-down, like this:

So now the incoming light enters near the bottom of the raindrop, and the reflected light is deflected downwards, towards an observer on the ground.

This light is the source of the secondary rainbow, which is larger and fainter than the primary rainbow I’ve been describing so far. (A secondary rainbow is dimly and partially visible in the photograph at the head of this post.)

Like the singly reflected light that forms the primary bow, the doubly reflected light that forms the secondary bow has its own characteristic angle of deflection, but this time (because of the flipped light-path described above) it’s the minimum angle of deflection from the antisolar point where light is concentrated to form the secondary rainbow ray:

I’ve kept the scale of the transmission curve the same, to allow comparison with that of the primary rainbow. And the range of the angle-of-deflection axis is the same, but it spans from 45º to 90º this time, rather than the 0º to 45º of the primary plot.

The minimum deflection occurs at an offset of about 0.95. The deflection is 50.4º for red light, and 53.8º for violet.
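Again, these numbers fall straight out of the geometry. Here's a sketch of my own (small rounding differences from the figures above are possible):

```python
import math

N_RED, N_VIOLET = 1.33141, 1.34451   # indices for red (700 nm) and violet (400 nm) light

def secondary_angle(n):
    """Deflection (degrees) of the secondary rainbow ray from the antisolar point."""
    theta_i = math.acos(math.sqrt((n ** 2 - 1) / 8))   # extremal offset, two reflections
    theta_r = math.asin(math.sin(theta_i) / n)
    deviation = 2 * (theta_i - theta_r) + 2 * (math.pi - 2 * theta_r)
    return math.degrees(deviation) - 180    # fold back past the antisolar point

offset = math.sin(math.acos(math.sqrt((N_RED ** 2 - 1) / 8)))
print(round(offset, 2),
      round(secondary_angle(N_RED), 1),
      round(secondary_angle(N_VIOLET), 1))
```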

So there are several things going on with this secondary rainbow. Firstly, because the light enters closer to the edge of the raindrop, the refraction is greater, and that causes the separation between red and violet light to be greater—so the secondary rainbow has a width of 3.4º, compared to just 1.9º for the primary rainbow. Secondly, the fact the light-path is flipped over compared to the primary reverses the colour sequence—red is on the inside of the secondary bow, but on the outside of the primary. Thirdly, the additional reflection means more light is lost from the rainbow ray as it passes through the drop. This is partially offset by the fact that the rainbow-ray offset is closer to the offset of maximum light transmission—so the light transmitted into the secondary bow works out to be about 43% of that transmitted into the primary. Finally, because the rainbow ray occurs at a minimum deflection from the antisolar point, the rest of the sunlight entering the drop is reflected to greater angles than the rainbow ray—it lights up the sky outside the secondary bow.

So if we imagine a raindrop falling vertically towards the antisolar point, it at first sends a doubly reflected mixture of light, appearing white, towards the observer’s eye. When it has fallen to 53.8º from the antisolar point, it sends a relatively pure, doubly reflected violet light to the observer—the top of the secondary bow. As it falls from 53.8º to 50.4º, it reflects all the spectral colours in sequence, ending with red light at the inner rim of the secondary bow. Then, it quite literally goes dark. It is too low to send any doubly reflected light in the observer’s direction, but too high to send any singly reflected light. The region of sky between the secondary and primary bow is therefore noticeably darker than either the region outside the secondary, or inside the primary.

This dark region is called Alexander’s Dark Band*, in honour of the Greek philosopher Alexander of Aphrodisias, who described it in about 200AD.

After passing through the Dark Band, the raindrop lights up with singly reflected red wavelengths when it reaches 42.4º, and then runs through the spectral colours until it reaches violet at 40.5º. Below that point, it reflects a mixture of wavelengths (white light again) until it hits the ground.

The whole sequence looks like this:

The obvious question now is this: if the primary rainbow forms from singly reflected light, and the secondary from doubly reflected light, what happens to triply reflected light? Is there a tertiary rainbow? And if so, where is it?

That’s the topic for next time.

* Not to be confused with the 1911 Irving Berlin hit, Alexander’s Ragtime Band.

# Rainbow Rays

The COVID-19 lockdown, in my part of the world, has produced an outpouring of children’s rainbow art—often stuck up in people’s windows, but sometimes sketched on the pavements, too.

I’ve been struck by the generally good command of spectral colours on display, with red on the outside and an appropriate progression towards violet on the inside. I was amused by the one above, which is a pretty flawless piece of artistry, undermined only by the position of the sun.

It reminded me that I had been planning to write about rainbow colours for years, ever since I wrote about converging rainbows back in 2015.

That was when I posted this diagram, showing the relationship between the sun’s position and the rainbow arc:

The rainbow is a complete circle of coloured light, centred on the antisolar point—the direction exactly opposite the position of the sun, which is marked by the shadow of your head. We usually only see an upper arc, because the area below the horizon usually contains too few raindrops along the line of sight to generate bright colours.

Every raindrop 42½º away from the antisolar point reflects red light towards our eyes; every raindrop at 40½º reflects violet light in the same way. And between those extremes, raindrops reflect all the spectral colours in turn, changing their apparent colour as they fall. But quite why that happens is a little complicated, and that’s what I want to write about this time.

Here’s the route that a ray of green light takes when it passes through a rainbow-forming raindrop and bounces back towards our eyes. Let’s call this particular trajectory the “rainbow ray” for short:

If the raindrop is falling through the top of the rainbow arc, the light enters near the top of the drop, is refracted as it crosses the air-water interface, then reflected from the back of the drop, then exits through the bottom of the drop, being refracted again as it moves from water back to air. The angle between the incoming ray and the outgoing is about 41½º for green light. At the sides of the rainbow, you have to imagine the diagram above lying horizontal—the ray enters the outward-facing side of the drop, and is reflected towards you sideways. And, obviously, the rest of the rainbow is formed by light paths that are more or less tilted between the horizontal and the vertical.

Violet light is refracted more than green light, which is in turn refracted more than red light. So a ray of white light (drawn in black below) is split into a fan of coloured rays as it enters the raindrop. These follow slightly different courses within the drop, and exit at different angles (exaggerated here for clarity):

So we see red at 42½º, and violet at 40½º, with green between. Simple.

But why choose to consider just those rainbow rays? What about all the light that enters the drop closer to its rim, or closer to its centre? I’ll call this general group of light rays “reflected rays”, of which the rainbow ray is only one example.

If I plot a couple of examples, you can see that rays entering the drop farther out than the rainbow ray are refracted more strongly, and end up exiting the drop at an angle less than the rainbow ray. Those that enter the drop nearer the centre are refracted less, but bounce back at a narrower angle from the back of the drop, and also exit the drop at an angle less than the rainbow ray. So for a given colour of light, it turns out that the rainbow ray is the light path that results in the maximum deflection angle from the antisolar point. All other reflected rays exit at narrower angles, and so should appear within the visible coloured arc of the rainbow itself.

How does that help, though? Why is the rainbow ray’s role as the maximum angle of deflection important?

To show why, I’m going to make a plot of what happens to all the reflected rays, using a parameter I’ll call Offset*, which works like this:

It’s just the proportional distance from the centre of the raindrop at which the ray enters—a zero offset means that the ray spears straight into the centre of the drop; an offset of one means the ray just grazes the edge of the drop.
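With the offset defined, the whole deflection curve is just two applications of trigonometry: refract at entry, reflect at the back, refract at exit. Here's a minimal Python version (the function name is mine, and n = 1.337 for green light is my assumption):

```python
import math

def deflection(offset, n=1.337):
    """Angle (degrees) between the antisolar direction and a singly
    reflected ray that enters the drop at the given offset."""
    theta_i = math.asin(offset)          # incidence at the drop's surface
    theta_r = math.asin(offset / n)      # refraction angle, by Snell's law
    # total deviation is 2*(theta_i - theta_r) + (180 - 2*theta_r);
    # measured from the antisolar point, that folds to:
    return math.degrees(4 * theta_r - 2 * theta_i)

# brute-force scan for the maximum -- the rainbow ray
best = max((i / 1000 for i in range(1001)), key=deflection)
print(round(best, 2), round(deflection(best), 1))
```

The scan lands on an offset of about 0.86 and a deflection of about 41½º, just as described above.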

So here’s how red and violet reflected rays are deflected, for the full range of offsets:

Deflection peaks at an offset of about 0.86, the location of the rainbow ray, with lesser deflection occurring on either side, as shown in the diagram above. Red and violet are at their maximum separation at the peak, and the rounded peak of the curves means that a lot of rays close to the rainbow ray end up reflected in the same part of the sky as the rainbow ray. You can see from the chart that all the rays with offsets from 0.75 to 0.95 end up within two degrees of the rainbow ray; a similar span from 0.2 to 0.4 is spread over fifteen degrees.

So, in the vicinity of the rainbow ray, there’s a lot of light in a small area of sky, and the spectral colours are well separated. Farther from the rainbow ray, the deflected light is smeared over a large area of sky within the rainbow arc, and the spectral colours are not well separated—all these other rays average out to a patch of white light filling the curve of the rainbow, with no colour separation. This bright area within the rainbow can often be strikingly visible, if the rainbow has dark clouds behind it.

One other thing contributes to the colour intensity of the rainbow—oddly, that’s the fact that some light is lost from the reflected rays every time they interact with an air/water or water/air interface. Here’s a diagram of how much light goes missing from the green rainbow ray as it passes through the raindrop:

The large proportion of light that shoots straight out the back of the drop, doubly refracted but without being internally reflected, creates a bright patch around the sun that appears whenever the solar disc is viewed through falling rain. Les Cowley at the excellent Atmospheric Optics site has dubbed this the “zero-order glow”.

Interestingly, for a given ray each interaction with an interface results in exactly the same ratio of reflection to transmission (though not quite in my diagram, which features rounded figures). This is unexpected (at least to me), because the reflective properties of a water/air interface are generally different from those of an air/water interface; the former features the phenomenon of total internal reflection, for instance. But it turns out that the first passage through the air/water interface changes the angle of the ray just enough to make it interact with subsequent water/air interfaces in exactly the same way as its initial air/water encounter.
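That's easy to confirm with the Fresnel equations. In this sketch (my own, with n = 1.337 assumed), the reflectances for the first air-to-water encounter and the subsequent water-to-air encounters come out identical for both polarizations:

```python
import math

n = 1.337  # assumed index for green light

def air_to_water(theta_i, n):
    """(s, p) power reflectances entering the drop at incidence theta_i."""
    theta_t = math.asin(math.sin(theta_i) / n)
    rs = ((math.cos(theta_i) - n * math.cos(theta_t)) /
          (math.cos(theta_i) + n * math.cos(theta_t))) ** 2
    rp = ((n * math.cos(theta_i) - math.cos(theta_t)) /
          (n * math.cos(theta_i) + math.cos(theta_t))) ** 2
    return rs, rp

def water_to_air(theta_i, n):
    """(s, p) power reflectances striking the inside of the drop at theta_i
    (below the critical angle, so some light is transmitted)."""
    theta_t = math.asin(n * math.sin(theta_i))   # Snell's law, leaving the drop
    rs = ((n * math.cos(theta_i) - math.cos(theta_t)) /
          (n * math.cos(theta_i) + math.cos(theta_t))) ** 2
    rp = ((math.cos(theta_i) - n * math.cos(theta_t)) /
          (math.cos(theta_i) + n * math.cos(theta_t))) ** 2
    return rs, rp

# a ray entering at offset 0.86 strikes every subsequent internal face
# at the refraction angle -- with identical reflectances each time
entry = math.asin(0.86)
inside = math.asin(math.sin(entry) / n)
print(air_to_water(entry, n))
print(water_to_air(inside, n))
```

Both calls print the same pair of numbers, for any offset you care to try.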

If I plot the final amount of transmitted light for all the different offset rays, and add it to my previous graph, it looks like this:

The transmission data are in brown, and refer to the new, brown axis on the right side of the chart. You can see that transmission starts to ramp up just as we get into the vicinity of the rainbow ray, boosting the brightness in the rainbow’s part of the sky. The peak of transmission does occur at very high offsets, beyond the rainbow ray, but in that region the angle of deflection changes very rapidly with slight changes in offset, which diffuses that light over a large arc.

The calculations I did to produce the transmission graph above involved Fresnel’s equations, so I had to track two different polarizations of light independently. For light reflecting from a surface between two transparent mediums, there’s a critical angle of incidence called Brewster’s angle, at which the reflected light becomes totally polarized. At that angle, the reflected light is entirely s-polarized; light polarized at right angles to this (p-polarized) is completely transmitted through the reflective surface. (Your polarizing sunglasses are designed to filter out s-polarized light reflected from horizontal surfaces, to reduce glare.)

The Brewster angle for an air/water interface is around 53º; for water/air it is about 37º. And it turns out that any light entering the water at an angle of incidence of 53º has its angle changed to 37º by refraction. So in the case of our raindrop, a ray that strikes the surface of the drop at 53º (corresponding to an offset of about 0.8) will continue through the drop and strike the water/air interface at 37º—it hits two Brewster angles in succession! This means that p-polarized light that hits the drop at Brewster’s angle is entirely refracted into the drop—none of it escapes by reflection from the air/water interface. But then it hits the back of the drop, and now none of it is re-reflected—it is all transmitted, again. So at an offset of 0.8, no p-polarized light gets into the reflected ray—it is all lost out the back of the raindrop.
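The double-Brewster coincidence is a one-liner to verify: the Brewster angle for air to water is arctan(n), for water to air it is arctan(1/n), and refracting the first gives exactly the second. (Again, n = 1.337 for green light is my assumption; the quoted 53º and 37º round from values near 53.2º and 36.8º with this index.)

```python
import math

n = 1.337                                 # assumed index for green light

brewster_outside = math.atan(n)           # air -> water, around 53 degrees
brewster_inside = math.atan(1 / n)        # water -> air, around 37 degrees

# refract the externally incident Brewster-angle ray into the drop:
refracted = math.asin(math.sin(brewster_outside) / n)

print(round(math.degrees(brewster_outside), 1),
      round(math.degrees(brewster_inside), 1),
      round(math.degrees(refracted), 1))   # the last two agree exactly
print(round(math.sin(brewster_outside), 2))   # the corresponding offset, ~0.8
```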

So now if I mark up the total amount of transmitted light with its s- and p-polarized components, you can see that the light making up the rainbow will be strongly s-polarized, because the rainbow rays are pretty close to Brewster’s angle:

The resulting polarization follows the curve of the rainbow. Your polarizing sunglasses will largely block the light coming from the top curve of the rainbow, but will let light through from the sides. However, if you tilt your head, you’ll remove light from the sides of the rainbow, and bring the upper curve into view.

Here’s a nice short little YouTube video, by James Sheils, demonstrating how to make the lower curve of a rainbow appear and disappear using a polarizing filter:

And that’s it, for now. Some time in the future I’ll get around to discussing the secondary rainbow.

* The measure I’ve called Offset is sometimes called the “impact parameter”, a term borrowed from nuclear physics. While the analogy is strong, if you know its original application, I’m not sure the phrase itself helps with visualization, so I’m sticking with Offset in this and subsequent posts.

# Does The Sun Set On The British Empire?

In short, taking every thing into consideration, the British empire in power and strength may be stated as the greatest that ever existed on earth, as it far surpasses them all in knowledge, moral character, and worth. On her dominions the sun never sets. Before his evening rays leave the spires of Quebec, his morning beams have shone three hours on Port Jackson, and, while sinking from the waters of Lake Superior, his eye opens upon the mouth of the Ganges.

Caledonian Mercury, 15 October 1821, page 4: “The British Empire”

It’s noticeable, when reading the above, that none of the places it mentions by name still belong to the United Kingdom. The British empire is now much reduced in size; in fact, its overseas possessions are confined to a scatter of places that few people could reliably place on a map:

Overseas Territories
● Anguilla
● Bermuda
● British Virgin Islands
● Cayman Islands
● Falkland Islands
● Gibraltar
● Montserrat
● Pitcairn Islands
● Saint Helena (with Ascension & Tristan da Cunha)
● Turks and Caicos Islands
● British Indian Ocean Territory
● South Georgia and South Sandwich Islands
● British Antarctic Territory (in abeyance under Antarctic Treaty)

Dependent territory
● Sovereign Base Areas of Dhekelia & Akrotiri (Cyprus)

If you’re one of the people who would have trouble placing these names on a map, here’s a map:

And if you’d like to know more about all these places, I heartily recommend Stewart McPherson’s marvellous book, Britain’s Treasure Islands: A Journey To The UK Overseas Territories, as well as the accompanying BBC television series.

What stands out from the map above is that the UK still has the Atlantic, Caribbean and Mediterranean pretty well covered. There’s a solitary (and I do mean solitary) British possession in the Pacific, the Pitcairn Island group. (I wrote about Pitcairn and its neighbouring islands a couple of years ago, when we were lucky enough to visit them.) And there’s another single possession in the Indian Ocean, the catchily named British Indian Ocean Territory (BIOT). BIOT occupies the whole of the Chagos Archipelago, and is inhabited entirely by British and American military personnel and contractors, based on the largest island, Diego Garcia. It used to be home to 2000 Chagossians, who were chucked out around 1970 to make way for the UK/US military installations. The poor Chagossians are still grinding through the courts attempting to get their homeland returned to them.

Anyway. Pitcairn and BIOT, which are a long way west and east of most UK territories, look like the key locations to examine when it comes to deciding whether the sun still “never sets on the British empire”. With Pitcairn’s time zone of GMT-8 and BIOT’s of GMT+6, the two territories are separated by only ten hours, measured the short way around the globe, which should mean that the sun is visible from both locations for a couple of hours a day. But there’s a potential problem with the seasonal variation in day length—while BIOT sits close to the equator and won’t have much variation in the times at which the sun rises and sets, Pitcairn is south of the tropics, and so we can expect its sunsets to be noticeably earlier in June than they are in December.
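The size of that seasonal effect is easy to estimate with the standard sunrise equation, which gives the half-day length from latitude and solar declination. A rough sketch (ignoring atmospheric refraction, the sun's finite disc and the equation of time; the latitudes are approximate):

```python
import math

def day_length_hours(latitude_deg, declination_deg):
    """Approximate day length from the sunrise equation:
    cos(hour angle) = -tan(latitude) * tan(declination)."""
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(declination_deg))
    x = max(-1.0, min(1.0, x))                  # clamp for polar day / polar night
    return 2 * math.degrees(math.acos(x)) / 15  # the sun moves 15 degrees per hour

JUNE, DECEMBER = 23.44, -23.44                  # solstice declinations

print(round(day_length_hours(-25.07, JUNE), 1))      # Pitcairn midwinter: ~10.4 h
print(round(day_length_hours(-25.07, DECEMBER), 1))  # Pitcairn midsummer: ~13.6 h
print(round(day_length_hours(-7.3, JUNE), 1))        # Diego Garcia barely varies
```

So Pitcairn's days swing by more than three hours between solstices, while Diego Garcia's stay within about half an hour of twelve hours.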

So we’re going to need to plot daylight charts for the whole year. Here’s one for Greenwich:

Along the x-axis we have the months of the year, numbered from 1 to 12. On the y-axis, Greenwich Mean Time. The lower curve marks the time of sunrise, throughout the year, at Greenwich. The upper curve is sunset. The yellow area between the curves therefore represents the totality of daylight seen in Greenwich throughout the course of a year.

OK. Let’s superimpose the sunrise and sunset curves for Adamstown on Pitcairn, giving times in GMT:

The Pitcairn sunrise and sunset curves are in red, and Pitcairn daylight extends a long way through the Greenwich night. But sunset on Pitcairn always occurs before sunrise in Greenwich, so there’s a brief period when the sun is shining in neither location.

Will BIOT, with its sunrise earlier than Greenwich, fill the gap? Here are the BIOT curves (calculated for Diego Garcia) added in green:

It’s a close-run thing. Pitcairn’s midwinter sunset on 21 June 2020 comes just 38 minutes after BIOT’s sunrise. Here’s a south polar view of the Earth on that date, capturing the brief period when both territories are in sunlight:

But there’s no doubt the chart is full of daylight, and the sun still never sets on the British empire!

# Helium

I had a photograph of my own to illustrate this post, but it was a bit rubbish. I was inspired to write about helium when I discovered the wreckage of a mylar-foil helium balloon, like the one pictured above, tangled in a gorse bush on the slopes of Newtyle Hill. It’s the second foil balloon I’ve discovered on the hill, and (like the first one) I stuffed it into my rucksack and carried it down for disposal. I took a photograph to illustrate what a non-biodegradable blot on the landscape these things are, but in the photo the balloon looked like just another bit of plastic debris.

The picture above is actually more useful, because it demonstrates the key fact about helium gas, the one thing that pretty much everyone knows about it, and the property from which many of its other interesting qualities derive—it’s lighter than air.

The reason it’s lighter than air is that its atoms are considerably less massive than the molecules that make up air. Helium is a monatomic gas, made up of individual atoms, and the mass of a single helium atom is about four daltons.* (For comparison, the mass of a common carbon atom is 12 daltons, and the commonest kind of hydrogen atom weighs in at around one dalton.) Air, on the other hand, is mainly composed of two diatomic gases, nitrogen and oxygen. Their molecules, N2 and O2, come in at 28 and 32 daltons, respectively, giving air an average molecular mass of 29 daltons.
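That average of 29 daltons can be checked with a quick weighted mean. The composition figures below are standard textbook values for dry air, assumed here rather than taken from the text above.

```python
# Approximate composition of dry air by mole fraction, with molecular
# masses in daltons; the mean mass is the composition-weighted average.
composition = [
    (0.781, 28.0),  # nitrogen, N2
    (0.209, 32.0),  # oxygen, O2
    (0.009, 40.0),  # argon, Ar
]
mean_mass = sum(frac * mass for frac, mass in composition)
print(round(mean_mass, 1))  # about 29 daltons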

The fact that individual helium atoms have a low mass feeds into two other important properties of helium.

Firstly, its atoms are small—just a single electron shell containing two electrons. A small atom with tightly bound electrons is reluctant to redistribute its charge in response to nearby polar molecules. This means that it’s relatively immune to the intermolecular van der Waals forces which cause atoms and molecules to transiently adhere to each other, which in turn means that helium gas isn’t very soluble.

Secondly, at any given temperature the atoms in helium gas move faster, on average, than the atoms or molecules of heavier gases. This is because temperature is a measure of the mean kinetic energy of gas particles, and kinetic energy scales with both velocity squared and mass. A low mass means velocity must be higher to produce the same kinetic energy. Since helium is only 4/29 the mass of an average air molecule, the mean velocity of its atoms is correspondingly higher by a factor of the square root of 29/4, or about 2.7.
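The factor of 2.7 falls straight out of the kinetic-energy argument:

```python
import math

# Equal mean kinetic energy, (1/2) m v^2, at equal temperature means
# that mean speed scales as 1 / sqrt(mass).
mass_air, mass_helium = 29, 4  # daltons
speed_ratio = math.sqrt(mass_air / mass_helium)
print(round(speed_ratio, 1))  # 2.7
```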

So: helium is light, fast and not very soluble. I’ll come back to each of these as we go along.

Firstly, lightness. It turns out that, at equal temperature and pressure, equal volumes of different gases contain the same number of particles (atoms or molecules), to a good first approximation. So a litre of helium is only 4/29 the mass of a litre of air. The only less dense gas is hydrogen, which has diatomic molecules massing about two daltons. Both hydrogen and helium are so buoyant in air that they’re able to lift considerable additional mass as they rise—making them ideal fillers for balloons, large and small. Hydrogen, at half the mass of helium, is the slightly better lifting agent (lift depends on the difference in density from air, so its advantage is only about eight percent), but it has one significant disadvantage:
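A sketch of the lift calculation, taking the density of air as about 1.29 g/L at 0ºC and one atmosphere (an assumed textbook figure). What matters is the difference in density from air, which is why hydrogen beats helium by much less than its factor-of-two mass advantage might suggest.

```python
# Buoyant lift per litre: the difference between the density of air and
# that of the filling gas, with gas density scaling as molecular mass.
air_density = 1.29  # g/L at 0 deg C and 1 atm
lift = {}
for name, mass_daltons in [("hydrogen", 2), ("helium", 4)]:
    gas_density = air_density * mass_daltons / 29
    lift[name] = air_density - gas_density
    print(name, round(lift[name], 2), "g of lift per litre")
```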

That’s a photograph of the German dirigible “Hindenburg”, fatally aflame at Lakehurst, New Jersey, in 1937. Hydrogen is flammable; helium is not. In fact, helium is notoriously chemically unreactive, being the lightest of the so-called “noble gases” (the others are neon, argon, krypton, xenon and radon). All of these elements have full outer electron shells, rendering them almost completely chemically inert. Which is why modern balloons and dirigibles are filled with helium, not hydrogen.

Next, speed. The faster gas molecules move, the more readily they diffuse through a barrier—which is why a rubber balloon full of helium will lose its shape within a day, and why helium balloons are often made of less-permeable mylar foil, like the one in the photograph at the head of this post. (Because they’re not biodegradable, foil balloons are supposed to be used only indoors—my experience of finding two on the open hillside shows how well that rule is working in practice.)

The rapid movement of helium gas atoms also affects the speed of sound, because sound waves travel through a gas at a velocity roughly comparable to the average speed of the gas molecules. At 0ºC, the speed of sound in air is about 330m/s; for helium it’s 970m/s, almost three times faster. So if you have a resonant cavity full of helium, it will resonate at a frequency about three times higher than it would if filled with air. And that’s what causes the “duck voice” effect we hear when someone breathes a gas mixture containing helium. Their vocal cords vibrate at exactly the same frequency as usual—but the resonant gas cavities of their larynx and airways pick out and emphasize the higher-pitched harmonics of their voice.
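The textbook ideal-gas formula for the speed of sound, v = √(γRT/M), reproduces both of those figures. Note that helium, being monatomic, has a heat-capacity ratio γ of 5/3 rather than air’s 1.4, which is part of why the speed-of-sound ratio comes out nearer three than the 2.7 molecular-speed ratio mentioned earlier.

```python
import math

# Ideal-gas speed of sound: v = sqrt(gamma * R * T / M), with gamma the
# heat-capacity ratio (5/3 for monatomic helium, 1.4 for diatomic air).
R = 8.314    # J/(mol K)
T = 273.15   # 0 deg C in kelvin
speeds = {}
for name, gamma, molar_mass_kg in [("air", 1.4, 0.029), ("helium", 5 / 3, 0.004)]:
    speeds[name] = math.sqrt(gamma * R * T / molar_mass_kg)
    print(name, round(speeds[name]), "m/s")
```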

Some people achieve this effect by taking a breath from a helium-filled party balloon, which is very much not a good idea, since it violates The Oikofuge’s First Law:

Never breathe anything that contains no oxygen

Breathing gas that contains no oxygen causes oxygen to leave your circulation and diffuse into the gas in your lungs—your circulating oxygen levels therefore fall very rapidly indeed, and a single deep breath can take you to the edge of unconsciousness.

To illustrate the duck-voice effect of someone breathing helium, here’s a recording of a saturation diver, breathing a helium/oxygen mixture in a pressurized underwater habitat:

Which leads us to wonder why deep divers breathe helium and oxygen (a mixture referred to as Heliox), rather than air.

The ambient pressure rises with depth underwater, by about one atmosphere for every ten metres of descent. To counterbalance this, divers must breathe gas at the ambient pressure. But the higher the pressure of gas we breathe, the more of it dissolves in our tissues—and it turns out nitrogen is an anaesthetic agent at high pressures. Its effects are detectable at depths as shallow as ten metres, where the pressure is twice that at the surface. And by the time a diver descends to 30 or 40 metres (four or five atmospheres), their judgement becomes sufficiently impaired by nitrogen narcosis that they’re a potential danger to themselves and others.

So for deep diving, nitrogen has to go. But it can’t be replaced by pure oxygen, because oxygen is toxic at higher-than-normal pressures, damaging the lungs and causing convulsions. Indeed, the need to keep the partial pressure of oxygen in the breathing mixture close to what we’re used to at the surface means that, with increasing depth and pressure, oxygen must make up a lower and lower percentage of the breathing mixture by volume.

Helium is a good replacement for nitrogen, for several reasons. Firstly, its low solubility and chemical inertness mean that it doesn’t produce any anaesthetic effect. Secondly, because helium is less soluble than nitrogen, less of it dissolves in the diver’s tissues during a long dive at high ambient pressure, so there’s less of it to get rid of during decompression at the end of the dive, and therefore less risk of gas-bubble formation in the blood and tissues as ambient pressure decreases. Such bubbles are the cause of decompression sickness (“the bends”), and to avoid their formation divers are forced to make their return to the surface slowly; helium’s lower solubility therefore also permits a faster safe decompression. And finally, the low density of helium comes into play again—because it’s less dense, it’s easier to breathe at high pressures.

Indeed, that last advantage is present even at one atmosphere of pressure. When a person’s airways are narrowed by disease or inflammation, air flow through the narrowed regions can shift from smooth, laminar flow to turbulent flow, which produces a higher resistance to flow through the airways and makes breathing more difficult. The transition from laminar to turbulent flow is determined, in part, by the density of the breathing gas. And, once turbulent flow occurs, the resistance to flow is higher for a denser gas. Substituting helium for nitrogen in the patient’s breathing gas drops its density by 60%, which delays the onset of turbulent flow, and causes less resistance to flow if turbulence occurs. That serves to reduce the work of breathing, decrease distress, and get a bit more oxygen into the patient—which is all good stuff.
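As a sanity check on that density figure, here’s the calculation for a 70/30 helium/oxygen mixture (a commonly used clinical ratio, assumed here; the exact reduction depends on the oxygen fraction chosen):

```python
# Mean molecular mass (daltons) of air versus a 70/30 heliox mixture;
# at fixed temperature and pressure, gas density scales with mean mass.
def mean_mass(fractions):
    return sum(frac * mass for frac, mass in fractions)

air = mean_mass([(0.78, 28), (0.21, 32), (0.01, 40)])  # N2, O2, Ar
heliox = mean_mass([(0.70, 4), (0.30, 32)])            # He, O2
reduction = 100 * (1 - heliox / air)
print(round(reduction))  # percent reduction in density
```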

So are there any disadvantages for divers breathing helium (apart from the funny voices)? There are. One is caused by that high average velocity of helium atoms—as well as conducting sound faster, helium is also more conductive of heat, with a thermal conductivity almost six times that of nitrogen. Divers in a helium atmosphere find it harder to stay warm, and when submerged they lose heat to the water very quickly if they have helium filling their dry-suits. (So they often fill the insulating space in their dry-suits with argon, which has an even lower thermal conductivity than air.)

And finally, it turns out that the absence of anaesthetic effects with helium is actually a disadvantage for the deepest of dives. Below depths of about 150-300 metres (fifteen to thirty atmospheres of pressure), divers breathing Heliox develop a condition called High Pressure Nervous Syndrome (HPNS), associated with an apparent overactivity of the nervous system—tremors, muscle jerks, nausea, dizziness and cognitive impairment. No-one’s quite sure why this happens—it was at first blamed on a stimulant effect of helium that appeared only at high pressure, but it now seems more likely that it’s a direct pressure effect on nerve cell membranes, which are reduced in volume by such high ambient pressures. Ironically, the symptoms of HPNS can be damped down by introducing the sedative effects of nitrogen back into the mix, using a breathing mixture of nitrogen, helium and oxygen generically referred to as Trimix. Things get very technical at that point—not only must the ratio of helium and nitrogen be adjusted to minimize the effects of HPNS, but the proportion of oxygen in the mixture must be reduced with increasing depth, in order to limit the pressure of oxygen to a non-toxic level.

But there’s a problem with Trimix, which is that nitrogen at high pressures is difficult to breathe because of its density. What low-density gas could we substitute for nitrogen? Hydrogen, half the density of helium and a fourteenth as dense as nitrogen, turns out to be mildly anaesthetic at high pressures, and therefore it also limits the symptoms of HPNS.

But wait a minute, I hear you cry, glancing back at that photograph of the Hindenburg. Hydrogen is flammable. Can a breathing mixture containing hydrogen and oxygen be safe?

Well, yes it can. Remember that we have to wind down the proportion of oxygen in the breathing mixture as we go to greater depths, to keep the partial pressure of oxygen within safe limits. At thirty atmospheres pressure, a gas mixture containing just 1% oxygen provides an oxygen pressure equivalent to 30% oxygen at sea level—a little more than the 21% we’re used to, but within safe limits. Hydrogen/oxygen mixtures are flammable over a wide range of proportions—from 4% hydrogen in 96% oxygen to 95% hydrogen in 5% oxygen. But not at lower proportions of oxygen. So the low proportion of oxygen required for safety at great depth means that the hydrogen/oxygen ratio sits outside the flammable range. These Hydreliox mixtures are very experimental, but they’ve been used successfully, with 1% oxygen and roughly equal proportions of helium and hydrogen, at depths in excess of 500 metres.
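The arithmetic of that 1%-oxygen example, using the rule of thumb from earlier of one extra atmosphere of pressure per ten metres of depth:

```python
# Partial pressure of oxygen in a breathing mixture at depth: the
# oxygen fraction multiplied by the ambient pressure in atmospheres.
def oxygen_partial_pressure(depth_m, oxygen_fraction):
    ambient_atm = 1 + depth_m / 10
    return oxygen_fraction * ambient_atm

print(oxygen_partial_pressure(290, 0.01))  # 0.3 atm: 1% oxygen at 30 atm total
print(oxygen_partial_pressure(0, 0.21))    # 0.21 atm: ordinary sea-level air
```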

And that’s about it for helium gas. Liquid helium, of course, has all sorts of interesting properties, but that’s perhaps a topic for another day.

* The dalton, also called the Atomic Mass Unit, is named after John Dalton, who first codified the idea that chemical reactions are the result of atoms combining with each other in systematic ways.

There’s an important distinction here, though. The advantages of helium’s lower solubility only appear in what’s called “saturation diving”—when divers stay at depth in pressurized habitats for long periods, so that their tissues become saturated with dissolved breathing gas. But divers who descend and then reascend relatively quickly (called “bounce diving”) are never at depth long enough for their tissues to become saturated with nitrogen. For them, helium paradoxically produces a worse risk of decompression sickness than nitrogen, because helium diffuses so much faster than nitrogen. The volume dissolved in the tissues rises very quickly initially, and in the short term may exceed what would be reached by nitrogen in the same time period. Like this:

# Leap Seconds

The year 2020, newly begun as this post is published, is a leap year. I’ve written before about leap years, and how the occasional leap day added to the end of February keeps our calendar year synchronized with the seasons. For more on that topic, see my posts about February 30th and the Equinox.

But this year we are also fairly likely to observe a leap second. (I’ll come back later to the reason for that “fairly likely”.) A leap second is an additional second which will be added to either June 30th or December 31st, and it serves to keep our clocks synchronized with the rotation of the Earth.

The fundamental problem is that the Earth’s rotation is getting slower, primarily because the tidal bulges raised in the Earth’s oceans by the moon and sun generate friction in the ocean beds as the Earth rotates. Over the last couple of millennia the rate of slowing has averaged about 1.7 milliseconds per day, per century. Which sounds trivial, but it adds up to more than three hours over the last two thousand years. We can detect this problem if we look back at early records of astronomical events, particularly solar eclipses, which are visible from only a very limited region of the Earth’s surface. We know, for instance, that a solar eclipse was observed in Babylon early in the morning of 15 April, 136BC. But if we calculate back to the relative positions of the sun, Earth and moon on that date, and assume the Earth has rotated at a constant rate during the intervening centuries, we find the eclipse shadow sweeping through the Atlas Mountains of Morocco, 48.8º of longitude west of Babylon. The difference in longitude represents a three-hour lag in rotation—the naive calculation, ignoring the tidal slowing of the Earth’s rotation, has allowed Babylon to rotate out from under the eclipse track. That mismatch is one of the ways we know about the long-term slowing of the Earth’s rotation.
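Both figures are easy to check. The Earth turns through 15º of longitude per hour, and the lag from tidal slowing accumulates quadratically (each day is slightly longer than the one before, and the excesses add up). The exact total depends on which epoch you take as the zero point, so the calculation below is only a few-hours ballpark figure.

```python
# The Babylon eclipse: degrees of longitude converted to hours of rotation.
print(round(48.8 / 15, 2))  # hours of lag implied by the eclipse track

# Accumulated lag from tidal slowing, 1.7 ms of extra day length per
# day per century, summed century by century over about two millennia.
slowing = 1.7e-3        # seconds of extra day length per day, per century
days_per_century = 36525
centuries = 21          # roughly 136 BC to the present

lag = 0.0
for c in range(centuries):
    excess = slowing * (c + 0.5)   # mean excess day length during century c
    lag += excess * days_per_century
print(round(lag / 3600, 1))  # hours of accumulated lag
```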

Here and now, we have three important ways by which we measure the passage of time. The first, and most important in everyday life, is by the rotation of the Earth. We define local noon as being the time at which the sun reaches its highest point in the sky, and we define a solar day as being the time between successive noons. Well, sort of. Because the Earth’s orbit is elliptical, and the Earth’s axis is tilted relative to its orbit, the elapsed time between successive noons varies during the course of the year. So we average those noon passages over a long time period in order to come up with a definition for the day—specifically, it’s a mean solar day.

This mean solar day is conventionally divided into the familiar hours, minutes and seconds, giving 24×60×60=86400 seconds per day. But you can see that there’s a problem with that, because seconds are of a fixed duration, established and defined as part of the Système International (SI) units in 1960. Whereas we now know that the length of the mean solar day is increasing as the Earth’s rotation slows.

Scientists were aware of this problem when the SI units were being defined, and decided they needed to use some other source, with a more fixed and regular motion, in order to define a constant second. Initially, they resorted to our second important means of time-keeping—the movement of the Earth around the sun. The length of the year is rather closer to being constant than the rotation period of the Earth. So the duration of the second was defined as being 1/31,556,925.9747 of a tropical year. (The tropical year is a measure of the passage of the seasons—it’s the year that our calendar strives to approximate with all those leap days.)
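As a check, dividing that number of seconds by the 86,400 seconds in a day recovers the familiar length of the tropical year:

```python
# One second was defined as 1/31,556,925.9747 of a particular tropical
# year; dividing by 86,400 gives the year length in mean solar days.
tropical_year_seconds = 31_556_925.9747
days = tropical_year_seconds / 86_400
print(round(days, 4))  # 365.2422 days
```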

So that was fine, then. But not quite, because the tropical year is itself a little variable. So what was adopted as the standard was derived from a very specific (and sort of fictitious) tropical year, based on formulae given in Simon Newcomb’s astronomical opus, The Elements Of The Four Inner Planets And The Fundamental Constants Of Astronomy, published in 1895 and based on astronomical observations made between 1750 and 1892. Specifically, the tropical year on which the SI second was based was something produced by Newcomb’s formulae if you plugged in a precise time and date near the start of 1900. So there was no actual year that corresponded to the value used in the definition. And of course Newcomb, and the observers who provided his data, had never used a constant definition of the second. Their definition was based on a second that was exactly 1/86400 of a mean solar day—so seconds, as defined in 1750, were a tiny bit shorter than seconds as defined in 1892. When Newcomb tabulated all these observational data and produced his summary formulae, he effectively averaged out the very slight drift in the value of the second over the observation period. Newcomb’s second, which became the SI second, was only ever exactly 1/86400 of a mean solar day somewhere around the year 1820. So even at the moment of its adoption in 1960, the SI second was slightly adrift from the duration of a mean solar day.

The astronomical definition of the SI second was always a bit unwieldy for general use. Fortunately, there was a third method of measuring time, which had been growing steadily more precise around the time the SI units were introduced—the atomic clock. So in 1967 the definition of the second was transferred to something you could actually measure in the laboratory—the behaviour of a particular kind of clock based on the element caesium. Thenceforth, the SI second was defined as:

… the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.

So now we have a precise and portable definition of the second, which carries over its duration from a previous astronomical definition, based on nineteenth-century observations. This is the basis for a standard called International Atomic Time (abbreviated TAI, for Temps Atomique International), which is based on pooled readings from multiple atomic clocks around the world.

But the Earth’s rotation is steadily lagging behind TAI. So to keep our everyday clocks in synchrony with the slowly lengthening mean solar day, we use a different time scale called Coordinated Universal Time (confusingly abbreviated UTC), which is the basis for Greenwich Mean Time and all the various time zones around the world.

UTC uses the same SI seconds as TAI, but every now and then needs to pause for a moment to allow the Earth’s rotation to “catch up”. Which is where the leap seconds come in—when required, at the end of a month, we add an extra second just before midnight, Greenwich Mean Time. A clock that displays such leap seconds reads 23:59:60 (as at the head of this post) before cycling through to 00:00:00. This leap second is added everywhere, simultaneously, so it occurs in the afternoon or evening in the Americas, but in the morning in Asia and Australia.

At present, the Earth’s rotation is slowing at about 1.4 milliseconds per day, per century (a little slower than the millennial trend I mentioned earlier). Since two centuries have elapsed since the second was exactly 1/86400 of the mean solar day, the day should now be 86400.0028 seconds long, which corresponds to almost exactly one extra second per year.
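The arithmetic, for those who want to check it:

```python
# 1.4 ms per day of slowing per century, and about two centuries since
# the day was exactly 86,400 SI seconds long (around 1820).
excess_per_day = 1.4e-3 * 2               # seconds of excess day length
day_length = 86_400 + excess_per_day      # 86400.0028 seconds
drift_per_year = excess_per_day * 365.25
print(round(drift_per_year, 2))  # seconds of drift per year, close to 1
```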

So why don’t we just schedule an extra second for every December 31st and have done with it? Because the Earth’s rotation rate varies irregularly, from day to day and year to year, around the long term mean rate of slowing. This happens because stuff (air, water, rock) is always moving around, sometimes shifting closer to the Earth’s axis of rotation, and sometimes farther away. If mass moves closer to the axis, the Earth speeds up a little; if mass moves away from the axis, the Earth slows down—this is caused by the same conservation of angular momentum that allows figure skaters and acrobatic divers to modify their rate of rotation by drawing in or spreading their arms. So earthquakes, glaciers melting and seasonal shifts in the air mass all contribute to the variability of the Earth’s rotation rate.

In the early days of the leap second (which was introduced in 1972) we did indeed have a leap second every year. But the Earth’s rotation rate has actually bucked the trend and speeded up a little of late, so leap seconds have become more sporadic—we had no leap seconds at all between 1999 and 2004, and the most recent was in 2016. The aim of the leap second is to keep UTC correct to within 0.9 seconds of the mean solar day, and the situation is constantly reviewed by the International Earth Rotation and Reference Systems Service, which issues a six-monthly Bulletin C, declaring or omitting a leap second at the end of each six-month period. Which is why I can’t (at time of writing) say for sure if we’ll have a leap second in 2020.

I’ll add a footnote as soon as I know. Meanwhile, have a Happy New Year.

Footnote: No leap second at the end of 2020, or at the end of June 2021.

# The Coordinate Axes Of Apollo-Saturn: Part 2

In my previous post on this topic, I described how flight engineers working on the Apollo programme assigned XYZ coordinate axes to the Saturn V launch vehicle and to the two Apollo spacecraft, the Command/Service Module (CSM) and the Lunar Module (LM). This time, I’m going to talk about how these axes came into play when the launch vehicle and spacecraft were in motion. At various times during an Apollo mission, they would need to orientate themselves with an axis pointing in a specific direction, or rotate around an axis so as to point in a new direction. These axial rotations were designated roll, pitch and yaw, and the names were assigned in a way that would be familiar to the astronauts from their pilot training. To pitch an aircraft, you move the nose up or down; to yaw, you move the nose to the left or right; and to roll, you rotate around the long axis of the vehicle.

These concepts translated most easily to the axes of the CSM (note the windows on the upper right surface of the conical Command Module, which indicate the orientation of the astronauts while “flying” the spacecraft):

With the astronauts’ feet pointing in the +Z direction as they looked out of the windows on the -Z side of the spacecraft, they could pitch the craft by rotating it around the Y axis, yaw around the Z axis, and roll around the X axis.

The rotation axes of the LM were similarly defined by the position of the astronauts:

Looking out of the windows in the +Z direction, with their heads pointing towards +X, they yawed the LM around the X axis, pitched it around the Y axis, and rolled it around the Z axis.

For the Saturn V, the roll axis was obviously along the length of the vehicle, its X axis.

But how do you decide which is pitch and which is yaw, in a vehicle that is superficially rotationally symmetrical? It turns out that the Saturn V was designed with a side that was intended to point down—its downrange side, marked by the +Z axis, which pointed due east when the vehicle was on the launch pad. This is the direction in which the space vehicle would travel after launch, in order to push the Apollo spacecraft into orbit—and to do that it needed to tilt over as it ascended, until its engines were pointing west and accelerating it eastwards. So the +Z side gradually became the down side of the vehicle, and various telemetry antennae were positioned on that side so that they could communicate with the ground. You’ll therefore sometimes see this side referred to as the “belly” of the space vehicle. And with +Z marking the belly, we can now tell that the vehicle will pitch around the Y axis, and yaw around the Z axis.

If you have read my previous post on this topic, you’ll know that the astronauts lay on their couches on the launch pad with their heads pointing east.* So as the space vehicle “pitched over” around its Y axis, turning its belly towards the ground, the astronauts ended up with their heads pointing downwards, all the way to orbit. This was done deliberately, so that they could have a view of the horizon during this crucial period.

But the first thing the Saturn V did, within a second of starting to rise from the launch pad, was yaw. It pivoted through a degree or so around its Z axis, tilting southwards and away from the Launch Umbilical Tower on its north side. Here you can see the Apollo 13 space vehicle in the middle of its yaw manoeuvre:

This was carried out so as to nudge the vehicle clear of any umbilical arms on the tower that had failed to retract.

Then, once clear of the tower, the vehicle rolled, turning on its vertical X axis. This manoeuvre was carried out because, although the belly of the Saturn V pointed east, the launch azimuth could actually be anything from 72º to 108º, depending on the timing of the launch within the launch window. (See my post on How Apollo Got To The Moon for more about that.) Here’s an aerial view of the two pads at Launch Complex 39, from which the Apollo missions departed, showing the relevant directions:

An Apollo launch which departed at the start of the launch window would be directed along an azimuth close to 72º, and so needed to roll anticlockwise (seen from above) through 18º to bring its +Z axis into alignment with the correct azimuth, before starting to pitch over and accelerate out over the Atlantic.

Once in orbit, the S-IVB stage continued to orientate with its belly towards the Earth, so that the astronauts could see the Earth and horizon from their capsule windows. This orientation was maintained right through to Trans-Lunar Injection (TLI), which sent the spacecraft on their way to the moon.

During the two hours after TLI, the CSM performed a complicated Transposition, Docking and Extraction manoeuvre, in which it turned around, docked nose to “roof” with the LM, and pulled the LM away from the S-IVB.

This meant that the X axes of CSM and LM were now aligned but opposed—their +X axes pointing towards each other. But they were also oddly rotated relative to each other. Here’s a picture from Apollo 9, taken by Rusty Schweickart, who was outside the LM hatch looking towards the CSM, where David Scott was standing up in the open Command Module hatch.

The Z axes of the two spacecraft are not aligned, nor are they at right angles to each other. In fact, the angle between the CSM’s -Z axis and the LM’s +Z axis is 60º. This odd relative rotation meant that, during docking, the Command Module Pilot, sitting in the left-hand seat of the Command Module and looking out of the left-hand docking window, had a direct line of sight to the docking target on the LM’s “roof”, directly to the left of the LM’s docking port.

Once the spacecraft were safely docked, roll thrusters on the CSM were fired to make them start rotating around their shared X axis. This was called the “barbecue roll” (formally, Passive Thermal Control), because it distributed solar heating evenly by preventing the sun shining continuously on one side of the spacecraft.

Once in lunar orbit, the LM separated from the CSM and began its powered descent to the lunar surface. This was essentially the reverse of the process by which the Saturn V pushed the Apollo stack into Earth orbit. Initially, the LM had to fire its descent engine in the direction in which it was orbiting, so as to cancel its orbital velocity and begin its descent. So its -X axis had to be pointed ahead and horizontally. During this phase the Apollo 11 astronauts chose to point their +Z axis towards the lunar surface, so that they could observe landmarks through their windows—they were flying feet-first and face-down. Later in the descent, as its forward velocity decreased, the LM needed to rotate to assume an ever more upright position (-X axis down) until it came to a hover and descended vertically to the lunar surface. So later in the powered descent, Armstrong and Aldrin had to roll the LM around its X axis into a “windows up” position, facing the sky. Then, as the LM gradually pitched into the vertical position, with its -X axis down, the +Z axis rotated to face forward, giving the astronauts the necessary view ahead towards their landing zone.

Finally, at the end of the mission, the XYZ axes turn out to be important for the re-entry of the Command Module (CM) into the Earth’s atmosphere. The CM hit the atmosphere blunt-end first, descending at an angle of about 6º to the horizontal. But it was also tilted slightly relative to the local airflow, with the +Z edge of its basal heat-shield a little ahead of the -Z edge. This tilt occurred because the centre of mass of the CM was deliberately offset very slightly in the +Z direction, so that the airflow pushed the CM into a slightly tilted position. This tilt, in turn, generated a bit of lift in the +Z direction—which made the Command Module steerable. It entered the atmosphere with its +Z axis pointing upwards (and the astronauts head-down, again, with a view of the horizon through their windows). The upward-directed lift prevented the CM diving into thicker atmosphere too early, and reduced the rate of heating from atmospheric compression.

Later in re-entry, the astronauts could use their roll thrusters to rotate the spacecraft around its X axis, using lift to steer the spacecraft right or left, or even rolling it through 180º so as to direct lift downwards, steepening their descent if they were in danger of overshooting their landing zone.

* As described in my previous post on this topic, the coordinate axes of the CSM were rotated 180º relative to those of the Saturn V—the astronauts’ heads pointed in the -Z direction of the CSM, but the +Z direction of the Saturn V.

I’m missing out a couple of steps here, in an effort to be succinct. (I know, I know … that’s not like me. Take a look at NASA Technical Memorandum X-58040 if you want to know all the details.)

# The Coordinate Axes Of Apollo-Saturn: Part 1

As a matter arising from my long, slow build of a Saturn V model, I became absorbed in the confusing multiplicity of coordinate systems and axes applied to the Apollo launch vehicle and spacecraft. So I thought I’d provide a guide to what I’ve learned, before I forget it all again. (Note, I won’t be talking about all the other coordinate systems used by Apollo, relating to orbital planes, the Earth and the Moon—just the ones connected to the machinery itself. And I’m going to talk only about the Saturn V launch vehicle, though much of what I write can be transferred to the Saturn IB, which launched several uncrewed Apollo missions, as well as Apollo 7.)

First up, some terminology. The Saturn V that sent Apollo on its way to the Moon is called the launch vehicle, consisting of three booster stages, with an Instrument Unit on top, responsible for controlling what the rest of the launch vehicle does. Sitting on top of the launch vehicle, mated to the Instrument Unit, is the spacecraft—all the specifically Apollo-related hardware that the launch vehicle launches. This bit is sometimes also called the Apollo stack, since it will eventually split up into two independent spacecraft—the Lunar Module (LM) and the Command/Service Module (CSM). The combination of launch vehicle and spacecraft (that is, the whole caboodle as it sat on the launch pad) is called the space vehicle.

The easiest set of coordinate axes to see and understand were the position numbers and fin letters which were labelled in large characters on the base of the Saturn V’s first stage, the S-IC. You can see them here, in my own model of the S-IC:

In this view you can see fins labelled C and D, and the marker for Position IIII, midway between them.

The numbering and lettering ran anticlockwise around the launch vehicle when looking down from above, creating an eight-point coordinate system of lettered quadrants (A to D) with numbered positions (I to IIII) between them, which applied to the whole launch vehicle. They marked out the distribution of black and white stripes—each stripe occupied the span between a letter and a number, with white stripes always to the left of the position numbers, and black stripes to the right. The five engines of the S-IC and S-II stages were each numbered according to the lettered quadrant in which they lay, with Engine 5 in the centre, Engine 1 in the A quadrant, Engine 2 in the B quadrant, and so on. The curious chequer pattern of the S-IVB aft interstage (the “shoulder” where the launch vehicle narrows down between the second and third stages) is distributed in the lettered quadrants, with A all black, B black high and white low, C white high and black low, and D all white.*

Position II of the launch vehicle was the side facing the Launch Umbilical Tower (LUT), so that side of the Saturn V was dotted with umbilical connections and personnel access hatches, as well as a prominent vertical dashed line painted on the second stage, called the vertical motion target, which made it easy for cameras to detect the first upward movement as the space vehicle left the launch pad. You don’t often get a clear view of the real thing from the Position II side, so I’ve marked up the appropriate view of my model instead, at left.

The two Cape Kennedy launch pads used for Apollo (39A and 39B) were oriented on a north-south axis, with the LUT positioned on the north side of the Saturn V, so Position II faced north. Position IIII, on the opposite side, faced south, looking back down the crawler-way along which the Saturn V had been transported on its Mobile Launcher Platform. Position IIII was also the side that faced the Mobile Service Structure, which was rolled up to service the Saturn V in its launch position, and then rolled away again before launch. And so Position I faced east, which was the direction in which the space vehicle had to travel in order to push the Apollo stack into orbit.

These letters and numbers seem to have been largely a reference for the contractors and engineers responsible for assembling and mating the different launch vehicle stages. Superimposed on them were the reference axes used by the flight engineers, who used them to talk about the orientation and movements of the launch vehicle and the two Apollo spacecraft. These axes were labelled X, Y and Z.

For the launch vehicle, LM and CSM, the positive X axis was defined as pointing in the direction of thrust of the rocket engines. So the end with the engines was always -X, and the other end was +X. The +Z direction was defined as “the preferred down range direction for each vehicle, when operating independently”. For the launch vehicle, that’s straightforward—downrange is to the east as it sits on the pad (the direction in which it will travel after launch), so +Z corresponds to Position I, and -Z to Position III. The Y axis was always chosen to make a “right-handed” coordinate system, so +Y points south through Position IIII.
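We can check that this all hangs together with a quick sketch. I’ve chosen a local east/north/up frame for the pad (my own convention, not NASA’s): on the pad, +X (the thrust direction) points up and +Z (downrange) points east, so right-handedness fixes where +Y must point.

```python
# Check the Saturn V body axes form a right-handed set, using a local
# east/north/up frame of my own choosing: (east, north, up) components.
def cross(a, b):
    """3D vector cross product."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

up, east = (0, 0, 1), (1, 0, 0)
plus_x, plus_z = up, east      # +X = thrust direction, +Z = downrange

# In a right-handed system X × Y = Z, so Y = Z × X.
plus_y = cross(plus_z, plus_x)
print(plus_y)  # (0, -1, 0): minus north, i.e. south, through Position IIII
```

The cross product comes out pointing south, which is exactly where Position IIII sits.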

In the image below, we’re looking north. Once the Saturn V has launched it will tip over and head eastwards (to the right) to inject the Apollo stack into orbit.

These axes were actually labelled on the outside of the Instrument Unit (IU), at the very top of the launch vehicle. Here’s one in preparation, with the +Z label flanked by the casings of two chunky directional antennae—a useful landmark I’ll come back to later.

So here’s a summary of all the axes of the Saturn V:

Moving on to the Lunar Module, its downrange direction is the direction in which it travels during landing, when it is orientated with its two main windows facing forward—so +Z points in that direction, out the front. The right-hand coordinate system then puts +Y to the astronauts’ right as they stand looking out the windows.

The landing legs were designated according to their coordinate axis locations. In the descent stage, between the legs, were storage areas called quads—they were numbered from 1 to 4 anticlockwise (looking down), starting with Quad 1 between the +Z and -Y leg. The ascent stage, sitting on top of the descent stage, had four clusters of Reaction Control System (RCS) thrusters, which were situated between the principal axes and numbered with the same scheme as the descent-stage quads.

But it’s not clear that there is a natural downrange direction for the CSM—the +Z direction is defined (fairly randomly, I think) as pointing towards the astronauts’ feet, with -Z therefore corresponding to the position of the Command Module hatch. That places +Y to the astronauts’ right side as they lie in their couches.

The Command Module was fairly symmetrical around its Z axis, and its RCS thrusters were neatly placed on the Z and Y axes. Not so the Service Module, which was curiously skewed. Its RCS thrusters, arranged in groups of four called quads, were offset from the principal axes by 7º15′ in a clockwise direction when viewed from ahead (that is, looking towards the pointed end of the CSM). The RCS quad next to the -Z axis was designated Quad A; Quad B was near the +Y axis, and the lettering continued in an anticlockwise direction through C and D. I’ve yet to find out why the RCS system was offset in this way, since it would necessarily produce translations and rotations that were offset from the “natural” orientation of the crew compartment, and from the translations and rotations produced by the RCS system of the Command Module.

The Service Module also contained six internal compartments, called sectors, numbered from 1 to 6. These were symmetrically placed relative to the RCS system, rather than the spacecraft’s principal axes. Finally, the prominent external umbilical tunnel connecting the Service Module to the Command Module wasn’t quite on the +Z axis, but offset by 2º20′ in the same sense as the RCS offset.

So those are the axes for the launch vehicle and spacecraft. But how did they line up when the Saturn V and Apollo stack were assembled? Badly, as it turns out.

First, the good news—all the X axes align, because the spacecraft and launch vehicle are all positioned engines-down for launch, for structural support reasons, if nothing else.

With regard to Y and Z, it’s easy to see the CSM’s orientation on the launch pad. Here’s a view from the Launch Umbilical Tower, which we’ve established (see above) is on the -Y side of the launch vehicle. The tunnel allowing access to the crew hatch of the Command Module (-Z) is on the left, and the umbilical tunnel connecting the Service Module to the Command Module is on the right (+Z), so the CSM +Y axis is pointing towards us.

Oops. The CSM YZ axes are rotated 180º relative to those of the Saturn V launch vehicle.

It’s more difficult to find out the orientation of the Lunar Module within the Apollo stack, since it’s concealed inside the shroud of the Spacecraft/Lunar Module Adapter. Various diagrams depict it as facing in any number of directions relative to the CSM, but David Weeks’s authoritative drawings show it turned so that its +Z and +Y axes align with those of the CSM—facing to the right in the picture above, then, with its YZ axes rotated 180º relative to those of the Saturn V launch vehicle below. We can check that this is actually the case by looking at photographs of the LM when it’s exposed on top of the S-IVB and Instrument Unit, during the transposition and docking manoeuvre. The viewing angles are never very favourable, but the big pair of directional antennae flanking the +Z direction on the IU are useful landmarks (see above).

We can see that the front of the Lunar Module (+Z) is indeed pointing in the opposite direction to the directional antennae marking the +Z axis of the IU and the rest of the launch vehicle. Weeks’s drawings are correct.

So, sitting on the launch pad, the axes of the launch vehicle are pointing in the opposite direction to those of the spacecraft. NASA rationalized this situation by stating that:

A Structural Body Axes coordinate system can be defined for each multi-vehicle stack. The Standard Relationship defining this coordinate system requires that it be identical with the Structural Body Axes system of the primary or thrusting vehicle.

NASA, Project Apollo Coordinate System Standards (June 1965)

So the whole space vehicle used the coordinate system of the Saturn V launch vehicle, and the independent coordinates of the LM and CSM didn’t apply until they were manoeuvring under their own power.

So, beware—there’s real potential for confusion here, when modelling the Apollo-Saturn space vehicle, because different sources use different coordinates; and many diagrams, even those prepared by NASA, do not reflect the final reality.

In Part 2, I write about what happens to all those XYZ axes once the vehicles start moving around.

* I suspect I’m not the first person to notice that the S-IVB aft interstage chequer can be interpreted as sequential two-digit binary numbers, with black signifying zero and white representing one. Reading the least significant digit in the “low” positions, we have 00 in the A quadrant, 01 in the B quadrant, 10 in C and 11 in D—corresponding to 0, 1, 2, 3 in decimal. (I doubt if it actually means anything, but it’s a useful aide-memoire. Well, if you have a particular kind of memory, I suppose.)
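The binary reading can be written out explicitly. The quadrant patterns below are just a transcription of the description earlier in the post, with black as zero, white as one, and the “low” half of each quadrant as the least significant digit:

```python
# Decode the S-IVB aft interstage chequer as two-digit binary numbers.
chequer = {             # (high, low) pattern in each lettered quadrant
    "A": ("black", "black"),
    "B": ("black", "white"),
    "C": ("white", "black"),
    "D": ("white", "white"),
}

bit = {"black": 0, "white": 1}   # black = 0, white = 1
values = {q: 2 * bit[high] + bit[low] for q, (high, low) in chequer.items()}
print(values)  # {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```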

# Relativistic Ringworlds

No matter how many times he considered it, Jophiel shivered with awe. It was obviously an artefact, a made thing two light years in diameter. A ring around a supermassive black hole.

Stephen Baxter, Xeelee: Redemption (2018)

I’ve written about rotating space habitats in the past, and I’ve written about relativistic starships, so I guess it was almost inevitable I’d end up writing about the effect of relativity on space habitats that rotate really, really rapidly.

What inspired this post was my recent reading of Stephen Baxter’s novel Xeelee: Redemption. I’ve written about Baxter before—he specializes in huge vistas of space and time, exotic physics, and giant mysterious alien artefacts. This novel is part of his increasingly complicated Xeelee sequence, which I won’t even attempt to summarize for you. What intrigued me on this occasion was Baxter’s invocation of a relativistic ringworld, briefly described in the quotation above.

Ringworlds are science fiction’s big rotating space habitats, originally proposed by Larry Niven in his novel Ringworld (1970). Instead of spinning a structure a few tens of metres in diameter to produce centrifugal gravity, like the space station in the film 2001: A Space Odyssey, Niven imagined one that circled a star, with a radius comparable to Earth’s distance from the sun. Spin one of those so that it rotates once every nine days or so, and you have Earthlike centrifugal gravity on its inner, sun-facing surface.

If we stipulate that we want one Earth gravity (henceforth, 1g), then there are simple scaling laws to these things—the bigger they are, the longer it takes for them to rotate, but the faster the structure moves. The 11-metre diameter centrifuge in 2001: A Space Odyssey would have needed to rotate 13 times a minute, with a rim speed of 7m/s, to generate 1g.

Estimates vary for the “real” size of the space station in the same movie, but if we take the diameter of “300 yards” from Arthur C. Clarke’s novel, it would need to rotate once every 23.5 seconds, with a rim speed of 37m/s.

Niven’s Ringworld takes nine days to revolve, but has a rim speed of over 1,000 kilometres per second.

You get the picture. For any given level of centrifugal gravity, the rotation period and the rotation speed both vary with the square root of the radius.
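The figures above are easy to reproduce. A sketch, taking g = 9.81 m/s², the 2001 centrifuge radius as 5.5 metres, the station radius as 150 yards, and Ringworld’s radius as one astronomical unit (all approximations, as discussed above):

```python
import math

G = 9.81          # centrifugal "gravity" we want, m/s^2

def spin_for_1g(radius_m):
    """Return (rotation period in s, rim speed in m/s) for 1g at this radius."""
    rim_speed = math.sqrt(G * radius_m)             # from v^2 / r = g
    period = 2 * math.pi * radius_m / rim_speed
    return period, rim_speed

for name, radius in [("2001 centrifuge", 5.5),
                     ("2001 station", 150 * 0.9144),    # 150 yards in metres
                     ("Niven's Ringworld", 1.496e11)]:  # 1 AU in metres
    period, v = spin_for_1g(radius)
    print(f"{name}: period {period:.1f} s, rim speed {v:.1f} m/s")
```

The centrifuge comes out at about 4.7 seconds per rotation (roughly 13 rpm) and 7.3 m/s; the station at 23.5 seconds and 37 m/s; the Ringworld at just under nine days and about 1,200 km/s.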

So what Baxter noticed is that if you make a ringworld with a radius of one light-year, and rotate it with a rim speed equal to the speed of light, it will produce a radial acceleration of 1g.* In a sense, he pushed the ringworld concept to its extreme conclusion, since nothing can move faster than light. Indeed, nothing can move at the speed of light—so Baxter’s ring is just a hair slower. By my estimate, from figures given in the novel, the lowest “deck” of his complicated ringworld is moving at 99.999999999998% of light speed (that’s thirteen nines).

And this truly fabulous velocity is to a large extent the point. Clocks moving at close to the speed of light run slow, when checked by a stationary observer. This effect becomes more extreme with increasing velocity. The usual symbol for velocity when given as a fraction of the speed of light is β (beta), and from beta we can calculate the time dilation factor γ (gamma):

\Huge \gamma =\frac{1}{\sqrt{1-\beta ^2}}

Here’s a graph of how gamma behaves with increasing beta—it hangs about very close to one for a long time, and then starts to rocket towards infinity as velocity approaches lightspeed (beta approaches one).

Plugging the mad velocity I derived above into this equation, we find that anyone inhabiting the lowest deck of Baxter’s giant alien ringworld would experience time dilation by a factor of five million—for every year spent in this extreme habitat, five million years would elapse in the outside world. This ability to “time travel into the far future” is a key plot element.
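That factor of five million can be checked directly. One wrinkle: with beta this close to one, computing 1−β² naively in floating point loses all precision, so the sketch below uses the factored form (1−β)(1+β) instead:

```python
import math

# beta with "thirteen nines": 1 - beta = 2e-14
one_minus_beta = 2e-14

# gamma = 1 / sqrt(1 - beta^2); write 1 - beta^2 = (1 - beta)(1 + beta)
# to avoid catastrophic cancellation when beta is this close to 1.
gamma = 1.0 / math.sqrt(one_minus_beta * (2.0 - one_minus_beta))
print(f"gamma = {gamma:,.0f}")  # about five million
```

Which agrees with the factor of five million quoted above.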

But there’s a problem. Quite a big one, actually.

The quantity gamma has wide relevance to relativistic transformations (even though I managed to write four posts about relativistic optics without mentioning it). As I’ve already said, it appears in the context of time dilation, but it is also the conversion factor for that other well-known relativistic transformation, length contraction. Objects moving at close to the speed of light are shortened (in the direction of travel) when measured by an observer at rest. A moving metre stick, aligned with its direction of flight, will measure only 1/γ metres to a stationary observer. Baxter also incorporates this into his story, telling us that the inhabitants of his relativistic ringworld measure its circumference to be much greater than what’s apparent to an outside observer.

So far so good. But acceleration is also affected by gamma, for fairly obvious reasons. It’s measured in metres per second squared, and those metres and seconds are subject to length contraction and time dilation. An acceleration in the line of flight (for instance, a relativistic rocket boosting to even higher velocity) will take place using shorter metres and longer seconds, according to an unaccelerated observer nearby. So there is a transformation involving gamma cubed, between the moving and stationary reference frames, with the stationary observer always measuring lower acceleration than the moving observer. A rocket accelerating at a steady 1g (according to those aboard) will accelerate less and less as it approaches lightspeed, according to outside observers. The acceleration in the stationary reference frame decays steadily towards zero, the faster the rocket moves—which is why you can’t ever reach the speed of light simply by using a big rocket for a long time.
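The gamma-cubed fall-off is easy to see numerically. A rough sketch, integrating dv/dt = g/γ³ (the acceleration as measured by the stationary observer) for a rocket holding a steady 1g of proper acceleration:

```python
import math

C = 299_792_458.0     # speed of light, m/s
G = 9.81              # proper acceleration held by the rocket, m/s^2

# Euler-integrate the coordinate acceleration dv/dt = G / gamma^3
# over one year of coordinate time.
dt, v = 100.0, 0.0                       # time step (s), velocity (m/s)
for _ in range(int(365.25 * 86400 / dt)):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    v += G / gamma**3 * dt

print(f"after one year: v = {v / C:.3f} c")   # well short of lightspeed

# Analytic result for comparison: v(t) = G t / sqrt(1 + (G t / c)^2)
t = 365.25 * 86400
v_exact = G * t / math.sqrt(1.0 + (G * t / C) ** 2)
```

After a year at a constant 1g the rocket is still only at about 0.72c, and the coordinate acceleration keeps decaying towards zero from there.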

That’s not relevant to Baxter’s ringworld, which is spinning at constant speed. But the centripetal acceleration, experienced by those aboard the ringworld as “centrifugal gravity”, also undergoes a conversion between the moving and stationary reference frames. Because this acceleration is always transverse to the direction of movement of the ringworld “floor” at any given moment, it’s unaffected by length contraction, which only happens in the direction of movement. But things that occur in one second of external time will occur in less than a second of time-dilated ringworld time—the ringworld inhabitants will experience an acceleration greater than that observed from outside, by a factor of gamma squared.

So the 1g centripetal acceleration required in order to keep something moving in a circle at close to lightspeed would be crushingly greater for anyone actually moving around that circle. In Baxter’s extreme case, with a gamma of five million, his “1g” habitat would experience 25 trillion gravities. Which is quite a lot.

To get the time-travel advantage of γ=5,000,000 without being catastrophically crushed to a monomolecular layer of goo, we need to make the relativistic ringworld a lot bigger. For a 1g internal environment, it needs to rotate to generate only one 25-trillionth of a gravity as measured by a stationary external observer. Keeping the floor velocity the same (to keep gamma the same), that means it has to be 25 trillion times bigger. Which is a radius of 25 trillion light-years, or 500 times the size of the observable Universe.
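Here’s the arithmetic behind that estimate, as a sketch (taking the observable Universe’s radius as roughly 46.5 billion light-years):

```python
LIGHT_YEAR = 9.461e15        # metres
C = 299_792_458.0            # speed of light, m/s
G = 9.81                     # desired onboard "gravity", m/s^2
GAMMA = 5e6                  # time-dilation factor we want

# Onboard centripetal acceleration is gamma^2 times the externally
# measured value, so the external figure must be only G / gamma^2.
# With rim speed ~c, radius = v^2 / a_external = c^2 * gamma^2 / G.
radius_m = C**2 * GAMMA**2 / G
radius_ly = radius_m / LIGHT_YEAR

print(f"radius ~ {radius_ly:.2e} light-years")
print(f"~ {radius_ly / 46.5e9:.0f} times the observable Universe's radius")
```

With these constants it comes out at a little over 24 trillion light-years, around 520 times the radius of the observable Universe; the figures in the text are rounded.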

Even by Baxter’s standards, that would be … ambitious.

* This neat correspondence between light-years, light speed and one Earth gravity is a remarkable coincidence, born of the fact that a year is approximately 30,000,000 seconds, light moves at approximately 300,000,000 metres per second, and the acceleration due to Earth’s gravity is about 10 metres per second squared. Divide light-speed by the length of Earth’s year, and you have Earth’s gravity; the units match. This correspondence was a significant plot element in T.J. Bass’s excellent novel Half Past Human (1971).

Baxter’s novel is full of plot homages to Niven’s original Ringworld, including a giant mountain with a surprise at the top.

As Baxter also notes, this mismatch between the radius and circumference of a rapidly rotating object generates a fruitful problem in relativity called the Ehrenfest Paradox.

# How Apollo Got To The Moon

I’m posting this at 13:32 GMT on 16th July 2019—exactly fifty years after the launch of Apollo 11. It’s the last part of a loose trilogy of posts about Apollo—the first two being M*A*S*H And The Moon Landings and The Strange Shadows Of Apollo. This one’s about the rather complicated sequence of events required to get the Apollo spacecraft safely to the moon.

To get from the Earth to the moon, Apollo needed to be accelerated into a long elliptical orbit. The low point of this orbit was close to the Earth’s surface (for Apollo 11, the 190-kilometre altitude of its initial parking orbit); the high point of the ellipse had to reach out to the moon’s distance (380,000 kilometres), or even farther.

Extremely diagrammatically, it looked like this:

To be maximally fuel-efficient, the acceleration necessary to convert the low, circular parking orbit into the long, elliptical transfer orbit needs to be imparted at the lowest point of the ellipse—that is, on exactly the opposite side of the Earth from the planned destination. Since the moon is moving continuously in its orbit, the translunar trajectory actually has to “lead” the moon, and aim for where it will be when the spacecraft arrives at lunar orbit, about three days after leaving Earth.
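To get a feel for how much the trajectory has to “lead” the moon: in the three days of the transit, the moon moves through a fair fraction of its orbit. A sketch, taking the sidereal month as 27.32 days:

```python
SIDEREAL_MONTH = 27.32   # days for the moon to complete one orbit

transit_days = 3
lead_angle = 360 * transit_days / SIDEREAL_MONTH
print(f"lead angle ~ {lead_angle:.0f} degrees")  # roughly 40 degrees
```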

Here’s the real elliptical transfer orbit followed by Apollo 11, drawn with the moon in the position it occupied at the time of launch (you’ll need to enlarge it to see detail):

(For reasons I’ll come back to, NASA gave the Apollo spacecraft a little extra acceleration, lengthening its translunar transfer ellipse so that it would peak well beyond the moon’s orbit.)

And here’s the situation three days later, with Apollo 11 arriving at the moon’s orbit just as the moon arrives in the right place for a rendezvous:

With the proximity of the moon at this point, lunar gravity in fact pulled the Apollo spacecraft away from the simple ellipse I’ve charted, warping its trajectory to wrap around the moon—something else I’ll come back to.

In the meantime, let’s go back to the fact that NASA needed to manoeuvre the Apollo spacecraft to a very exact position, on the opposite side of the Earth from the position the moon would occupy in three days’ time, and then accelerate it into the long elliptical orbit you can see in my diagrams. The process of accelerating from parking orbit to transfer orbit is called translunar injection, or TLI.

The point on the Earth’s surface opposite the moon at any given time is called the lunar antipode. (This is a horrible word, born of a misunderstanding of the word antipodes—I’ve written more about that topic in a previous post about words.) But, given that I don’t want to keep repeating the phrase “on the opposite side of the Earth from where the moon will be in three days’ time”, from now on I’ll use the word antipode with that meaning.

So TLI had to happen at this antipode, and NASA therefore needed to launch the Apollo lunar spacecraft into an Earth orbit that at some point passed through the antipode. Not only that, but they needed to do so using a minimum of fuel, and needed to get the spacecraft to the antipode reasonably quickly, so as to economize on consumables like air and food, thereby keeping the spacecraft’s launch weight as low as possible.

Now, the moon orbits the Earth in roughly the plane of the Earth’s orbit around the sun—the ecliptic plane. But the moon can stray 5.1º above or below the ecliptic. And the ecliptic is inclined at about 23.4º to the plane of the Earth’s equator. So the moon’s orbital plane can be inclined to the Earth’s equator at anything from 18.3º to 28.5º. This means the moon can never be overhead in the sky anywhere outside of a band between 28.5º north and south of the equator, and therefore its antipode is confined in the same way—always drifting around the Earth somewhere within, or just outside, the tropics.

The Cape Kennedy launch complex (now Cape Canaveral) lies at 28.6ºN. The most energy-efficient way to get a spacecraft into Earth orbit is to launch it due east, taking advantage of the Earth’s rotation to boost its speed. Such a trajectory puts the spacecraft into an orbit inclined at 28.6º to the equator. So a launch from Kennedy put a spacecraft into an orbit inclined relative to the plane of the moon’s orbit. The inclination might be a fractional degree, if the moon’s orbit were tilted favourably close to Kennedy; but generally it would be significantly larger than that, with the spacecraft’s orbit passing through the plane of the moon’s orbit at just two points.

As it happens, the situation at the time of the Apollo 11 mission shows all these angles between equator, ecliptic, moon’s orbit and Apollo parking orbit quite clearly, because all the tilts were roughly aligned with each other. Here’s a view from above the east Pacific at the time of Apollo 11’s launch: 13:32 GMT, 16 July 1969:

The red line is the ecliptic, the plane of Earth’s orbit around the sun. From the latitude and longitude grid I’ve laid on to the Earth, you can see how the Earth’s northern hemisphere is tilted towards the sun, enjoying northern summer. The plane of the moon’s orbit (in cyan) is carrying the moon above the ecliptic plane on the illuminated side of the Earth, so that the angle between the Apollo 11 parking orbit and the moon’s orbital plane is relatively small.

It wasn’t always like that, though. Here’s the situation at Apollo 14’s launch: 21:03 GMT, 31 January 1971. It’s southern summer, and the plane of the moon’s orbit crosses Australia, so Apollo 14’s parking orbit passes through the plane of the moon’s orbit at a fairly steep angle.

Whatever the crossing angle, NASA needed to launch the Apollo moon missions so that the spacecraft’s orbit took it through the moon’s orbital plane at the same moment the antipode drifted through that crossing point. And in order to economize on consumables, that needed to happen within the time it took to make two or three spacecraft orbits, each lasting an hour and a half. This requirement dictated that there was always a launch window for each lunar mission—any launch that didn’t take place within a very specific time frame had no chance of bringing the spacecraft and the antipode together to allow a successful TLI.

At first sight, it seems like the launch window should be vanishingly narrow, given that the parking orbit intersects the moon’s orbital plane at only two points, only one of which can be suitable for a TLI at any given time. In fact, by varying the direction in which the Saturn V launched, NASA was able to hit a fairly broad sector of the lunar orbital plane. Launching in any direction except due east was less energy-efficient, but with additional fuel the Apollo spacecraft could still be placed in orbit using launch directions 18º either side of due east. The technical name for the launch direction, as measured in the horizontal plane, is the launch azimuth. So Apollo could be launched on azimuths anywhere between 72º and 108º east of north.

You can see this range of orbital options drawn out in sinusoids on Apollo 11’s Earth Orbit Chart:

Cape Kennedy is at the extreme left edge of the chart, and all the options for launch azimuths between 72º and 108º are marked. Here’s a detail from that edge:

Notice how launches directed either north or south of east take the spacecraft to a higher latitude than Cape Kennedy’s, and therefore into a more inclined orbit—at the extremes, Apollo orbits were inclined at close to 33º.
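That inclination figure follows from the standard spherical-trigonometry relation (not something specific to the Apollo documents): for a launch from latitude φ on azimuth A, cos i = cos φ · sin A. A quick check:

```python
import math

def inclination(lat_deg, azimuth_deg):
    """Orbital inclination for a launch from lat_deg on azimuth_deg (east of north)."""
    cos_i = math.cos(math.radians(lat_deg)) * math.sin(math.radians(azimuth_deg))
    return math.degrees(math.acos(cos_i))

lat = 28.6                                  # Cape Kennedy
print(inclination(lat, 90))                 # due east: inclination equals latitude
print(inclination(lat, 72))                 # edge of the window: about 33.4 degrees
print(inclination(lat, 108))                # the other edge: the same, by symmetry
```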

So NASA could take aim at the antipode by adjusting the launch direction. By launching north of east, they could hit a more easterly antipode; by launching south of east, they could hit a more westerly antipode. This range of options allowed a launch window spanning about four hours. A launch early in the launch window would involve an azimuth close to 72º, as the launch vehicle was aimed at the antipode in its most extreme accessible eastern position. During the four-hour window, as the moon moved across the sky from east to west, the antipode would track across the Earth’s surface in the same direction, and the required launch azimuth would gradually increase, until the launch window closed when an azimuth of 108º was reached. NASA planned to have their launch vehicle ready to go just as the launch window opened, to give themselves maximum margin for delays. Apollo 11 launched on time, and so departed along an azimuth very close to 72º.

Here’s the Apollo 11 launch trajectory:

The huge S-IC stage (the first stage of the Saturn V) shut down and dropped away with its fuel exhausted after just 2½ minutes, falling into the western Atlantic (where one of its engines was recently retrieved from 4.3 kilometres underwater). The S-II second stage then burned for 6½ minutes before falling away in turn, dropping in a long trajectory that ended in mid-Atlantic. Meanwhile, the S-IVB third stage fired for another two minutes, shoving the Apollo spacecraft into Earth orbit before shutting down at a moment NASA calls Earth Orbit Insertion (EOI). The astronauts then had about two-and-a-half hours in orbit (completing about one-and-three-quarter revolutions around the Earth) before their scheduled rendezvous with the lunar antipode over the Pacific. This gave them time to check out the spacecraft systems and make sure everything was working properly before committing to the long translunar trajectory.

At two hours and forty-four minutes into the mission, the S-IVB engine was fired up again, and worked continuously for six minutes as Apollo 11 arced across the night-time Pacific. Here’s that trajectory with the S-IVB ignition and cutoff (TLI proper) marked, as well as the plane of the moon’s orbit and the position(s) of the antipode(s). On this occasion I’ve marked the true lunar antipode as “Antipode”, and the antipode of the moon’s position in 3 days’ time as “Antipode+3”.

See how Apollo 11 accelerated continuously through the lunar orbital plane, clipping neatly past the three-day antipode. The velocity change in those six minutes took the spacecraft from 7.8 kilometres per second (the orbital speed of the parking orbit) to the 10.8 kilometres per second necessary for the planned translunar trajectory.
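Those two velocities bracket the local escape speed neatly. A sketch using Earth’s standard gravitational parameter and the 190-kilometre parking orbit (figures rounded, so good to a couple of significant figures only):

```python
import math

MU = 3.986e14            # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m
r = R_EARTH + 190_000    # parking-orbit radius, m

v_circular = math.sqrt(MU / r)        # speed of the circular parking orbit
v_escape = math.sqrt(2 * MU / r)      # escape speed at the same altitude

print(f"circular: {v_circular / 1000:.1f} km/s")   # ~7.8
print(f"escape:   {v_escape / 1000:.1f} km/s")     # ~11.0
# The 10.8 km/s TLI cutoff sits just below escape speed, so the transfer
# orbit is a bound but very elongated ellipse; this close to escape, the
# apogee is extremely sensitive to the last fraction of the cutoff speed.
```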

I promised I’d come back to the reason NASA used extra energy to propel the spacecraft into an orbit that would take it well past the moon, if it were not captured by the moon’s gravity. In part, this was because it sped the journey—Apollo took three days to reach its destination, rather than five. But the main reason was to put Apollo on to a free-return trajectory. It shaved past the eastern limb of the moon and then (held by the moon’s gravity) looped around behind it. If it had not fired its engine to slow down into lunar orbit at that point, it would have reemerged from behind the western limb of the moon and come straight back to Earth. So there was a safety feature built in, in case the astronauts encountered a problem with the main engine of their spacecraft; any other arrival speed would have resulted in a free-return orbit that missed the Earth.

Another safety feature of the Apollo orbits was their inclination of around 30º to the equator, which was maintained as the spacecraft entered its transfer orbit. This meant that the spacecraft avoided most of the dangerous radiation trapped in Earth’s Van Allen Belts.

The Van Allen belts are trapped in the Earth’s magnetic field, which is tilted at about 10º relative to Earth’s rotation axis—and the tilt is almost directly towards Cape Kennedy, with the north geomagnetic pole sitting just east of Ellesmere Island in the Canadian Arctic.

This means that a spacecraft launched from Cape Kennedy, with an orbital inclination of 30º to Earth’s equator, has an inclination of about 40º to the geomagnetic equator. A departure orbit with that inclination rises up and over the Van Allen belts, passing through their fringes rather than through the middle. Of course, since the Earth rotates while the spacecraft’s orbital plane remains more or less fixed in space, it needs to depart within a few hours, otherwise it will lose the advantageous tilt of the radiation belts—but Apollo already had good reason to get going so as not to waste precious consumables.

To finish, here are a couple of diagrams I’ve prepared with Celestia, using an add-on created by user Cham. The add-on shows the Earth’s magnetic field lines, and the calculated trajectory of a few charged particles trapped in the radiation belt. I’ve used a subset of Cham’s particle tracks, so I can show the position of the inner Van Allen Belt clearly—it’s the one that contains the high-energy protons which were of most danger to the astronauts.

Here’s Apollo 11’s departure orbit (red line) seen from above the Pacific; the plane of the moon’s orbit is also shown, in cyan. The plot is for the time of translunar injection.

And here’s a side view.

(You can ignore the lower part of the orbit, which is only there to show the full elliptical shape—Apollo 11 followed the upper, northern trajectory, starting from the vicinity of the equator.)

So that’s how Apollo got to the moon.