
Rainbow Rays

Chalked lockdown rainbow

The COVID-19 lockdown, in my part of the world, has produced an outpouring of children’s rainbow art—often stuck up in people’s windows, but sometimes sketched on the pavements, too.

I’ve been struck by the generally good command of spectral colours on display, with red on the outside and an appropriate progression towards violet on the inside. I was amused by the one above, which is a pretty flawless piece of artistry, undermined only by the position of the sun.

It reminded me that I had been planning to write about rainbow colours for years, ever since I wrote about converging rainbows back in 2015.

That was when I posted this diagram, showing the relationship between the sun’s position and the rainbow arc:

Formation of rainbow

The rainbow is a complete circle of coloured light, centred on the antisolar point—the direction exactly opposite the position of the sun, which is marked by the shadow of your head. We usually only see an upper arc, because the area below the horizon usually contains too few raindrops along the line of sight to generate bright colours.

Every raindrop 42½º away from the antisolar point reflects red light towards our eyes; every raindrop at 40½º reflects violet light in the same way. And between those extremes, raindrops reflect all the spectral colours in turn, changing their apparent colour as they fall. But quite why that happens is a little complicated, and that’s what I want to write about this time.

Here’s the route that a ray of green light takes when it passes through a rainbow-forming raindrop and bounces back towards our eyes. Let’s call this particular trajectory the “rainbow ray” for short:

Green rainbow ray

If the raindrop is falling through the top of the rainbow arc, the light enters near the top of the drop, is refracted as it crosses the air-water interface, then reflected from the back of the drop, then exits through the bottom of the drop, being refracted again as it moves from water back to air. The angle between the incoming ray and the outgoing ray is about 41½º for green light. At the sides of the rainbow, you have to imagine the diagram above lying horizontal—the ray enters the outward-facing side of the drop, and is reflected towards you sideways. And, obviously, the rest of the rainbow is formed by light paths that are more or less tilted between the horizontal and the vertical.

Violet light is refracted more than green light, which is in turn refracted more than red light. So a ray of white light (drawn in black below) is split into a fan of coloured rays as it enters the raindrop. These follow slightly different courses within the drop, and exit at different angles (exaggerated here for clarity):

Violet, green and red rainbow rays, exaggerated

So we see red at 42½º, and violet at 40½º, with green between. Simple.
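If you want to check those numbers for yourself, the geometry boils down to a few lines of Python. This is just a sketch: the refractive indices I've used for red and violet light (about 1.331 and 1.343) are commonly quoted round figures, not anything special to this post.

```python
import math

def rainbow_angle(n):
    """Angle (degrees) between the antisolar point and the rainbow ray,
    for one internal reflection in a spherical drop of refractive index n."""
    # The deflection D(i) = 4*asin(sin(i)/n) - 2*i is maximized where
    # cos(i) = sqrt((n**2 - 1) / 3); that maximum is the rainbow ray.
    i = math.acos(math.sqrt((n**2 - 1) / 3))
    r = math.asin(math.sin(i) / n)
    return math.degrees(4 * r - 2 * i)

red = rainbow_angle(1.331)     # about 42.4 degrees
violet = rainbow_angle(1.343)  # about 40.7 degrees
```

The two-degree spread between those answers is the whole width of the coloured band in the sky.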

But why choose to consider just those rainbow rays? What about all the light that enters the drop closer to its rim, or closer to its centre? I’ll call this general group of light rays “reflected rays”, of which the rainbow ray is only one example.

If I plot a couple of examples, you can see that rays entering the drop farther out than the rainbow ray are refracted more strongly, and end up exiting the drop at an angle less than the rainbow ray. Those that enter the drop nearer the centre are refracted less, but bounce back at a narrower angle from the back of the drop, and also exit the drop at an angle less than the rainbow ray. So for a given colour of light, it turns out that the rainbow ray is the light path that results in the maximum deflection angle from the antisolar point. All other reflected rays exit at narrower angles, and so should appear within the visible coloured arc of the rainbow itself.

Rainbow ray and other rays
How does that help, though? Why is the rainbow ray’s role as the maximum angle of deflection important?

To show why, I’m going to make a plot of what happens to all the reflected rays, using a parameter I’ll call Offset*, which works like this:

Rainbow offset parameter
It’s just the proportional distance from the centre of the raindrop at which the ray enters—a zero offset means that the ray spears straight into the centre of the drop; an offset of one means the ray just grazes the edge of the drop.

So here’s how red and violet reflected rays are deflected, for the full range of offsets:

Deflection of red and violet light relative to antisolar point for various ray offsets

Deflection peaks at an offset of about 0.86, the location of the rainbow ray, with lesser deflection occurring on either side, as shown in the diagram above. Red and violet are at their maximum separation at the peak, and the rounded peak of the curves means that a lot of rays close to the rainbow ray end up reflected in the same part of the sky as the rainbow ray. You can see from the chart that all the rays with offsets from 0.75 to 0.95 end up within two degrees of the rainbow ray; a similar span from 0.2 to 0.4 is spread over fifteen degrees.
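You can reproduce the shape of that curve with a quick numerical scan over offsets. Again a sketch, using n = 1.333 for water; the exact numbers shift slightly with the refractive index you choose.

```python
import math

N = 1.333  # approximate refractive index of water

def deflection(offset):
    """Deflection (degrees) from the antisolar point for a ray entering
    the drop at the given offset (0 = dead centre, 1 = grazing the rim)."""
    i = math.asin(offset)      # angle of incidence at the drop's surface
    r = math.asin(offset / N)  # refracted angle inside the drop
    return math.degrees(4 * r - 2 * i)

offsets = [k / 1000 for k in range(1, 1000)]
rainbow_offset = max(offsets, key=deflection)  # peaks near 0.86

# Rays near the peak bunch into a narrow band of sky...
near_spread = deflection(rainbow_offset) - min(deflection(0.8), deflection(0.92))
# ...while an equally wide span of low offsets smears over many degrees
far_spread = deflection(0.4) - deflection(0.2)
```

The flat top of the curve is doing all the work: near the maximum, big changes in offset produce only tiny changes in deflection, so the light piles up there.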

So, in the vicinity of the rainbow ray, there’s a lot of light in a small area of sky, and the spectral colours are well separated. Farther from the rainbow ray, the deflected light is smeared over a large area of sky within the rainbow arc, and the spectral colours are not well separated—all these other rays average out to a patch of white light filling the curve of the rainbow, with no colour separation. This bright area within the rainbow can often be strikingly visible, if the rainbow has dark clouds behind it.

Rainbow enclosing bright sky
Photo by Binyamin Mellish via Good Free Photos

One other thing contributes to the colour intensity of the rainbow—oddly, it’s the fact that some light is lost from the reflected rays every time they interact with an air/water or water/air interface. Here’s a diagram of how much light goes missing from the green rainbow ray as it passes through the raindrop:

Light losses from rainbow ray

The large proportion of light that shoots straight out the back of the drop, doubly refracted but without being internally reflected, creates a bright patch around the sun that appears whenever the solar disc is viewed through falling rain. Les Cowley at the excellent Atmospheric Optics site has dubbed this the “zero-order glow”.

Interestingly, for a given ray each interaction with an interface results in exactly the same ratio of reflection to transmission (though not quite in my diagram, which features rounded figures). This is unexpected (at least to me), because the reflective properties of a water/air interface are generally different from those of an air/water interface; the former features the phenomenon of total internal reflection, for instance. But it turns out that the first passage through the air/water interface changes the angle of the ray just enough to make it interact with subsequent water/air interfaces in exactly the same way as its initial air/water encounter.
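That equality is easy to verify numerically with the standard Fresnel formulas. Here's a sketch for unpolarized light in the rainbow-ray geometry, at an offset of 0.86:

```python
import math

def fresnel_R(n1, n2, theta_i):
    """Unpolarized Fresnel reflectance at a planar interface (radians)."""
    s = n1 / n2 * math.sin(theta_i)
    if s >= 1.0:
        return 1.0  # total internal reflection
    theta_t = math.asin(s)
    rs = ((n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) /
          (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))) ** 2
    rp = ((n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) /
          (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))) ** 2
    return 0.5 * (rs + rp)

n = 1.333
theta_in = math.asin(0.86)          # rainbow-ray angle of incidence
theta_inside = math.asin(0.86 / n)  # refracted angle inside the drop

entry = fresnel_R(1.0, n, theta_in)         # air -> water, entering the drop
internal = fresnel_R(n, 1.0, theta_inside)  # water -> air, back and exit surfaces
```

The two reflectances come out identical, because swapping the indices and swapping the incident and refracted angles leaves the Fresnel expressions unchanged.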

If I plot the final amount of transmitted light for all the different offset rays, and add it to my previous graph, it looks like this:

Transmission, and deflection of red and violet light relative to antisolar point for various ray offsets

The transmission data are in brown, and refer to the new, brown axis on the right side of the chart. You can see that transmission starts to ramp up just as we get into the vicinity of the rainbow ray, boosting the brightness in the rainbow’s part of the sky. The peak of transmission does occur at very high offsets, beyond the rainbow ray, but in that region the angle of deflection changes very rapidly with slight changes in offset, which diffuses that light over a large arc.

The calculations I did to produce the transmission graph above involved Fresnel’s equations, so I had to track two different polarizations of light independently. For light reflecting from a surface between two transparent media, there’s a critical angle of incidence called Brewster’s angle, at which the reflected light becomes totally polarized. At that angle, the reflected light is entirely s-polarized; light polarized at right angles to this (p-polarized) is completely transmitted through the reflective surface. (Your polarizing sunglasses are designed to filter out s-polarized light reflected from horizontal surfaces, to reduce glare.)

The Brewster angle for an air/water interface is around 53º; for water/air it is about 37º. And it turns out that any light entering the water at an angle of incidence of 53º has its angle changed to 37º by refraction. So in the case of our raindrop, a ray that strikes the surface of the drop at 53º (corresponding to an offset of about 0.8) will continue through the drop and strike the water/air interface at 37º—it hits two Brewster angles in succession! This means that p-polarized light that hits the drop at Brewster’s angle is entirely refracted into the drop—none of it escapes by reflection from the air/water interface. But then it hits the back of the drop, and now none of it is re-reflected—it is all transmitted, again. So at an offset of 0.8, no p-polarized light gets into the reflected ray—it is all lost out the back of the raindrop.
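The arithmetic behind that happy coincidence is short enough to check directly (n = 1.333 is the usual round figure for water):

```python
import math

n = 1.333  # refractive index of water

brewster_in = math.degrees(math.atan(n))       # air -> water, about 53 degrees
brewster_out = math.degrees(math.atan(1 / n))  # water -> air, about 37 degrees

# Snell's law: refract a ray that arrives at the first Brewster angle...
refracted = math.degrees(math.asin(math.sin(math.radians(brewster_in)) / n))
# ...and it continues at exactly the second Brewster angle
```

It's not a numerical fluke, either: tan θ = n forces sin θ / n to equal 1/√(1+n²), which is precisely the sine of arctan(1/n).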

So now if I mark up the total amount of transmitted light with its s- and p-polarized components, you can see that the light making up the rainbow will be strongly s-polarized, because the rainbow rays are pretty close to Brewster’s angle:

Polarization of light transmitted by the rainbow

The resulting polarization follows the curve of the rainbow. Your polarizing sunglasses will largely block the light coming from the top curve of the rainbow, but will let light through from the sides. However, if you tilt your head, you’ll remove light from the sides of the rainbow, and bring the upper curve into view.

Here’s a nice short YouTube video, by James Sheils, demonstrating how to make the lower curve of a rainbow appear and disappear using a polarizing filter:

And that’s it, for now. Some time in the future I’ll get around to discussing the secondary rainbow.


* The measure I’ve called Offset is sometimes called the “impact parameter”, a term borrowed from nuclear physics. The analogy is a strong one if you know the term’s original application, but I’m not sure the phrase itself helps with visualization, so I’m sticking with Offset in this and subsequent posts.

Does The Sun Set On The British Empire?

British Empire map, 1886
Walter Crane’s map of the British Empire, 1886

In short, taking every thing into consideration, the British empire in power and strength may be stated as the greatest that ever existed on earth, as it far surpasses them all in knowledge, moral character, and worth. On her dominions the sun never sets. Before his evening rays leave the spires of Quebec, his morning beams have shone three hours on Port Jackson, and, while sinking from the waters of Lake Superior, his eye opens upon the mouth of the Ganges.

Caledonian Mercury, 15 October 1821, page 4: “The British Empire”

It’s noticeable, when reading the above, that none of the places it mentions by name still belong to the United Kingdom. The British empire is now much reduced in size; in fact, its overseas possessions are confined to a scatter of places that few people could reliably place on a map:

Overseas Territories
● Anguilla
● Bermuda
● British Virgin Islands
● Cayman Islands
● Falkland Islands
● Gibraltar
● Montserrat
● Pitcairn Islands
● Saint Helena (with Ascension & Tristan da Cunha)
● Turks and Caicos Islands
● British Indian Ocean Territory
● South Georgia and South Sandwich Islands
● British Antarctic Territory (in abeyance under Antarctic Treaty)

Dependent territory
● Sovereign Base Areas of Dhekelia & Akrotiri (Cyprus)

If you’re one of the people who would have trouble placing these names on a map, here’s a map:

British overseas territories
(Source of base map)

Cover of Britain's Treasure Islands by Stewart McPherson
And if you’d like to know more about all these places, I heartily recommend Stewart McPherson’s marvellous book, Britain’s Treasure Islands: A Journey To The UK Overseas Territories, as well as the accompanying BBC television series.

What stands out from the map above is that the UK still has the Atlantic, Caribbean and Mediterranean pretty well covered. There’s a solitary (and I do mean solitary) British possession in the Pacific, the Pitcairn Island group. (I wrote about Pitcairn and its neighbouring islands a couple of years ago, when we were lucky enough to visit them.) And there’s another single possession in the Indian Ocean, the catchily named British Indian Ocean Territory (BIOT). BIOT occupies the whole of the Chagos Archipelago, and is inhabited entirely by British and American military personnel and contractors, based on the largest island, Diego Garcia. It used to be home to 2000 Chagossians, who were chucked out around 1970 to make way for the UK/US military installations. The poor Chagossians are still grinding through the courts attempting to get their homeland returned to them.

Anyway. Pitcairn and BIOT, which are a long way west and east of most UK territories, look like the key locations to examine when it comes to deciding whether the sun still “never sets on the British empire”. With Pitcairn’s time zone of GMT-8, and BIOT’s of GMT+6, the two territories are fourteen hours apart going one way around the globe—which means only ten hours apart going the other way. Since each sees roughly twelve hours of daylight, the sun should be visible from both locations for a couple of hours a day. But there’s a potential problem with the seasonal variation in day length—while BIOT sits close to the equator and won’t have much variation in the times at which the sun rises and sets, Pitcairn is south of the tropics, and so we can expect its sunsets to be noticeably earlier in June than they are in December.
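Before plotting the real curves, it's worth sanity-checking that "couple of hours" estimate with a deliberately crude model: assume both places get exactly twelve hours of daylight with sunrise at 6 a.m. local time, and let the time zones stand in for longitude. All of those assumptions are rough (Pitcairn in particular doesn't get twelve-hour days in June), but they give the right ballpark.

```python
def daylight_gmt(tz_offset_hours, day_length=12.0):
    """GMT window of daylight, assuming sunrise at 06:00 local time
    and the given day length. Crude, but fine for a first estimate."""
    sunrise = (6.0 - tz_offset_hours) % 24
    return sunrise, sunrise + day_length

def overlap(a, b):
    """Hours per day during which both windows see the sun (24 h wrap)."""
    total = 0.0
    for minute in range(24 * 60):  # step through the day in one-minute ticks
        t = minute / 60
        in_a = (t - a[0]) % 24 < a[1] - a[0]
        in_b = (t - b[0]) % 24 < b[1] - b[0]
        total += in_a and in_b
    return total / 60

pitcairn = daylight_gmt(-8)  # daylight roughly 14:00-02:00 GMT
biot = daylight_gmt(+6)      # daylight roughly 00:00-12:00 GMT
both = overlap(pitcairn, biot)
```

The toy model predicts about two hours a day of shared sunlight, which is what the full calculation below has to confirm (or demolish) once real day lengths are included.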

So we’re going to need to plot daylight charts for the whole year. Here’s one for Greenwich:

Sunrise and sunset in Greenwich

Along the x-axis we have the months of the year, numbered from 1 to 12. On the y-axis, Greenwich Mean Time. The lower curve marks the time of sunrise, throughout the year, at Greenwich. The upper curve is sunset. The yellow area between the curves therefore represents the totality of daylight seen in Greenwich throughout the course of a year.

OK. Let’s superimpose the sunrise and sunset curves for Adamstown on Pitcairn, giving times in GMT:

Sunrise and sunset in Greenwich & Pitcairn

The Pitcairn sunrise and sunset curves are in red, and Pitcairn daylight extends a long way through the Greenwich night. But sunset on Pitcairn always occurs before sunrise in Greenwich, so there’s a brief period when the sun is shining in neither location.

Will BIOT, with its sunrise earlier than Greenwich, fill the gap? Here are the BIOT curves (calculated for Diego Garcia) added in green:

Sunrise and sunset in Greenwich, Pitcairn & BIOT

It’s a close-run thing. Pitcairn’s midwinter sunset on 21 June 2020 comes just 38 minutes after BIOT’s sunrise. Here’s a south polar view of the Earth on that date, capturing the brief period when both territories are in sunlight:

Diego Garcia and Adamstown both in sunlight on 21 June 2020
(Prepared using Celestia)

But there’s no doubt the chart is full of daylight, and the sun still never sets on the British empire!

Helium

Helium balloon
Source

I had a photograph of my own to illustrate this post, but it was a bit rubbish. I was inspired to write about helium when I discovered the wreckage of a mylar-foil helium balloon, like the one pictured above, tangled in a gorse bush on the slopes of Newtyle Hill. It’s the second foil balloon I’ve discovered on the hill, and (like the first one) I stuffed it into my rucksack and carried it down for disposal. I took a photograph to illustrate what a non-biodegradable blot on the landscape these things are, but in the photo the balloon looked like just another bit of plastic debris.

The picture above is actually more useful, because it demonstrates the key fact about helium gas, the one thing that pretty much everyone knows about it, and the property from which many of its other interesting qualities derive—it’s lighter than air.

The reason it’s lighter than air is that its atoms are considerably less massive than the molecules that make up air. Helium is a monatomic gas, made up of individual atoms, and the mass of a single helium atom is about four daltons.* (For comparison, the mass of a common carbon atom is 12 daltons, and the commonest kind of hydrogen atom weighs in at around one dalton.) Air, on the other hand, is mainly composed of two diatomic gases, nitrogen and oxygen. Their molecules, N2 and O2, come in at 28 and 32 daltons, respectively, giving air an average molecular mass of 29 daltons.
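That 29-dalton average comes straight from air's composition. A quick check, using round figures for the main components of dry air:

```python
# Mole fraction and molecular mass (daltons) for the bulk of dry air;
# trace gases are ignored, which is fine at this precision.
composition = {
    "N2": (0.781, 28.01),
    "O2": (0.209, 32.00),
    "Ar": (0.009, 39.95),
}
mean_mass = sum(f * m for f, m in composition.values())  # about 29 daltons
```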

The fact that individual helium atoms have a low mass feeds into two other important properties of helium.

Firstly, its atoms are small—just a single electron shell containing two electrons. A small atom with tightly bound electrons is reluctant to redistribute its charge in response to nearby polar molecules. This means that it’s relatively immune to the intermolecular van der Waals forces which cause atoms and molecules to transiently adhere to each other, which in turn means that helium gas isn’t very soluble.

Secondly, at any given temperature the atoms in helium gas move faster, on average, than the atoms or molecules of heavier gases. This is because temperature is a measure of the kinetic energy of gas particles, and kinetic energy scales with both velocity squared and mass. A low mass means velocity must be higher to produce the same kinetic energy. Since helium is only 4/29 the mass of an average air molecule, the mean velocity of its atoms is correspondingly higher by a factor of the square root of 29/4, or about 2.7.
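Here's that factor of 2.7, derived from the kinetic-theory expression for the mean molecular speed, √(8RT/πM). The temperature cancels out of the ratio, but I've put in room temperature to show the actual speeds involved:

```python
import math

R = 8.314  # gas constant, J/(mol K)
T = 293.0  # room temperature, K

def mean_speed(molar_mass_kg):
    """Mean molecular speed in an ideal gas (Maxwell-Boltzmann), m/s."""
    return math.sqrt(8 * R * T / (math.pi * molar_mass_kg))

v_helium = mean_speed(0.004)  # roughly 1250 m/s
v_air = mean_speed(0.029)     # roughly 460 m/s
ratio = v_helium / v_air      # sqrt(29/4), about 2.7
```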

So: helium is light, fast and not very soluble. I’ll come back to each of these as we go along.

Firstly, lightness. It turns out that, at equal temperature and pressure, equal volumes of different gases contain the same number of particles—atoms or molecules—to a good first approximation. So a litre of helium is only 4/29 the weight of a litre of air. The only less dense gas is hydrogen, which has diatomic molecules massing about two daltons. So both hydrogen and helium are so buoyant in air that they’re able to lift considerable additional mass as they rise—making them ideal fillers for balloons, large and small. Hydrogen, being half the mass of helium, is by far the better lifting agent, but it has one significant disadvantage:

Zeppelin LZ129 "Hindenburg" burning at Lakehurst, New Jersey, 6 May 1937
Source

That’s a photograph of the German dirigible “Hindenburg”, fatally aflame at Lakehurst, New Jersey, in 1937. Hydrogen is flammable; helium is not. In fact, helium is notoriously chemically unreactive, being the lightest of the so-called “noble gases” (the others are neon, argon, krypton, xenon and radon). All of these elements have full outer electron shells, rendering them almost completely chemically inert. Which is why modern balloons and dirigibles are filled with helium, not hydrogen.

Next, speed. The faster gas molecules move, the more readily they diffuse through a barrier—which is why a rubber balloon full of helium will lose its shape within a day, and why helium balloons are often made of less-permeable mylar foil, like the one in the photograph at the head of this post. (Because they’re not biodegradable, foil balloons are supposed to be used only indoors—my experience of finding two on the open hillside shows how well that rule is working in practice.)

The rapid movement of helium gas atoms also affects the speed of sound, because sound waves travel through a gas at a velocity roughly comparable to the average speed of the gas molecules. At 0ºC, the speed of sound in air is about 330m/s; for helium it’s 970m/s, almost three times faster. So if you have a resonant cavity full of helium, it will resonate at a frequency about three times higher than it would if filled with air. And that’s what causes the “duck voice” effect we hear when someone breathes a gas mixture containing helium. Their vocal cords vibrate at exactly the same frequency as usual—but the resonant gas cavities of their larynx and airways pick out and emphasize the higher-pitched harmonics of their voice.
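The speed-of-sound figures can be checked with the ideal-gas formula √(γRT/M), where γ is 5/3 for a monatomic gas like helium and about 7/5 for the diatomic mixture that makes up air:

```python
import math

R = 8.314  # gas constant, J/(mol K)
T = 273.15  # 0 degrees C, in kelvin

def sound_speed(gamma, molar_mass_kg):
    """Speed of sound in an ideal gas, m/s."""
    return math.sqrt(gamma * R * T / molar_mass_kg)

v_air = sound_speed(7 / 5, 0.02896)      # about 330 m/s
v_helium = sound_speed(5 / 3, 0.004003)  # about 970 m/s
```

Note the ratio comes out a little higher than the 2.7 factor for molecular speeds, because helium's γ is larger than air's.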

Some people achieve this effect by taking a breath from a helium-filled party balloon, which is very much not a good idea, since it violates The Oikofuge’s First Law:

Never breathe anything that contains no oxygen

Breathing gas that contains no oxygen causes oxygen to leave your circulation and diffuse into the gas in your lungs—your circulating oxygen levels therefore fall very rapidly indeed, and a single deep breath can take you to the edge of unconsciousness.

To illustrate the duck-voice effect of someone breathing helium, here’s a recording of a saturation diver, breathing a helium/oxygen mixture in a pressurized underwater habitat:

Which leads us to wonder why deep divers breathe helium and oxygen (a mixture referred to as Heliox), rather than air.

The ambient pressure rises with depth underwater, by about one atmosphere for every ten metres of descent. To counterbalance this, divers must breathe gas at the ambient pressure. But the higher the pressure of gas we breathe, the more of it dissolves in our tissues—and it turns out nitrogen is an anaesthetic agent at high pressures. Its effects are detectable at depths as shallow as ten metres, where the pressure is twice that at the surface. And by the time divers descend to 30 or 40 metres (four or five atmospheres), their judgement becomes sufficiently impaired by nitrogen narcosis that they’re a potential danger to themselves and others.

So for deep diving, nitrogen has to go. But it can’t be replaced by pure oxygen, because oxygen is toxic at higher-than-normal pressures, damaging the lungs and causing convulsions. Indeed, the need to keep the partial pressure of oxygen in the breathing mixture close to what we’re used to at the surface means that, with increasing depth and pressure, oxygen must make up a lower and lower percentage of the breathing mixture by volume.

Helium is a good replacement for nitrogen, for several reasons. Firstly, its low solubility and chemical inertness mean that it doesn’t produce any anaesthetic effect. Secondly, because helium is less soluble than nitrogen, less of it dissolves in the diver’s tissues during a long dive at high ambient pressure, so there’s less of it to get rid of during decompression at the end of the dive. Dissolved gas coming out of solution as bubbles in the blood and tissues is the cause of decompression sickness (“the bends”), and to avoid bubble formation divers are forced to make their return to the surface slowly—so with less dissolved helium to offload, a safe decompression can be faster. And finally, the low density of helium comes into play again—because it’s less dense, it’s easier to breathe at high pressures.

Indeed, that last advantage is present even at one atmosphere of pressure. When a person’s airways are narrowed by disease or inflammation, air flow through the narrowed regions can shift from smooth, laminar flow to turbulent flow, which produces a higher resistance to flow through the airways and makes breathing more difficult. The transition from laminar to turbulent flow is determined, in part, by the density of the breathing gas. And, once turbulent flow occurs, the resistance to flow is higher for a denser gas. Substituting helium for nitrogen in the patient’s breathing gas drops its density by 60%, which delays the onset of turbulent flow, and causes less resistance to flow if turbulence occurs. That serves to reduce the work of breathing, decrease distress, and get a bit more oxygen into the patient—which is all good stuff.

So are there any disadvantages for divers breathing helium (apart from the funny voices)? There are. One is caused by that high average velocity of helium atoms—as well as conducting sound faster, helium is also more conductive of heat, with a thermal conductivity almost six times that of nitrogen. Divers in a helium atmosphere find it harder to stay warm, and when submerged they lose heat to the water very quickly if they have helium filling their dry-suits. (So they often fill the insulating space in their dry-suits with argon, which has an even lower thermal conductivity than air.)

And finally, it turns out that the absence of anaesthetic effects with helium is actually a disadvantage for the deepest of dives. Below depths of about 150-300 metres (fifteen to thirty atmospheres of pressure), divers breathing Heliox develop a condition called High Pressure Nervous Syndrome (HPNS), associated with an apparent overactivity of the nervous system—tremors, muscle jerks, nausea, dizziness and cognitive impairment. No-one’s quite sure why this happens—it was at first blamed on a stimulant effect of helium that appeared only at high pressure, but it now seems more likely that it’s a direct pressure effect on nerve cell membranes, which are reduced in volume by such high ambient pressures. Ironically, the symptoms of HPNS can be damped down by introducing the sedative effects of nitrogen back into the mix, using a breathing mixture of nitrogen, helium and oxygen generically referred to as Trimix. Things get very technical at that point—not only must the ratio of helium and nitrogen be adjusted to minimize the effects of HPNS, but the proportion of oxygen in the mixture must be reduced with increasing depth, in order to limit the pressure of oxygen to a non-toxic level.

But there’s a problem with Trimix, which is that nitrogen at high pressures is difficult to breathe because of its density. What low-density gas could we substitute for nitrogen? Hydrogen, half the density of helium and a fourteenth as dense as nitrogen, turns out to be mildly anaesthetic at high pressures, and therefore it also limits the symptoms of HPNS.

But wait a minute, I hear you cry, glancing back at that photograph of the Hindenburg. Hydrogen is flammable. Can a breathing mixture containing hydrogen and oxygen be safe?

Well, yes it can. Remember that we have to wind down the proportion of oxygen in the breathing mixture as we go to greater depths, to keep the partial pressure of oxygen within safe limits. At thirty atmospheres pressure, a gas mixture containing just 1% oxygen provides an oxygen pressure equivalent to 30% oxygen at sea level—a little more than the 21% we’re used to, but within safe limits. Hydrogen/oxygen mixtures are flammable over a wide range of proportions—from 4% hydrogen in 96% oxygen to 95% hydrogen in 5% oxygen. But not at lower proportions of oxygen. So the low proportion of oxygen required for safety at great depth means that the hydrogen/oxygen ratio sits outside the flammable range. These Hydreliox mixtures are very experimental, but they’ve been used successfully, with 1% oxygen and roughly equal proportions of helium and hydrogen, at depths in excess of 500 metres.
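The partial-pressure bookkeeping in that argument is simple enough to set out explicitly. A sketch, using the one-atmosphere-per-ten-metres rule from earlier (real dive planning is considerably more careful than this):

```python
def ambient_pressure(depth_m):
    """Absolute pressure in atmospheres: 1 at the surface, +1 per 10 m."""
    return 1 + depth_m / 10

def safe_o2_fraction(depth_m, target_ppo2=0.21):
    """O2 fraction that keeps its partial pressure at the sea-level value."""
    return target_ppo2 / ambient_pressure(depth_m)

# 1% oxygen at 30 atmospheres gives a partial pressure of 0.30 atm,
# equivalent to breathing 30% oxygen at the surface
ppo2 = 0.01 * 30

# at 500 m the required fraction is below half a percent, far outside
# hydrogen's flammability window
deep_fraction = safe_o2_fraction(500)
```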

Gas mix considerations for diving

And that’s about it for helium gas. Liquid helium, of course, has all sorts of interesting properties, but that’s perhaps a topic for another day.


* The dalton, also called the Atomic Mass Unit, is named after John Dalton, who first codified the idea that chemistry was due to atoms interacting with each other in a very systematic way.

There’s an important distinction here, though. The advantages of helium’s lower solubility only appear in what’s called “saturation diving”—when divers stay at depth in pressurized habitats for long periods, so that their tissues become saturated with dissolved breathing gas. But divers who descend and then reascend relatively quickly (called “bounce diving”) are never at depth long enough for their tissues to become saturated with nitrogen. For them, helium paradoxically produces a worse risk of decompression sickness than nitrogen, because helium diffuses so much faster than nitrogen. The volume dissolved in the tissues rises very quickly initially, and in the short term may exceed what would be reached by nitrogen in the same time period. Like this:

Diagrammatic comparison of helium and nitrogen tissue saturation in diving

Leap Seconds

Leap second clock
The year 2020, newly begun as this post is published, is a leap year. I’ve written before about leap years, and how the occasional leap day added to the end of February keeps our calendar year synchronized with the seasons. For more on that topic, see my posts about February 30th and the Equinox.

But this year we are also fairly likely to observe a leap second. (I’ll come back later to the reason for that “fairly likely”.) A leap second is an additional second which will be added to either June 30th or December 31st, and it serves to keep our clocks synchronized with the rotation of the Earth.

The fundamental problem is that the Earth’s rotation is getting slower, primarily because the tidal bulges raised in the Earth’s oceans by the moon and sun generate friction in the ocean beds as the Earth rotates. Over the last couple of millennia the rate of slowing has averaged about 1.7 milliseconds per day, per century. Which sounds trivial, but it adds up to more than three hours over the last two thousand years. We can detect this problem if we look back at early records of astronomical events, particularly solar eclipses, which are visible from only a very limited region of the Earth’s surface. We know, for instance, that a solar eclipse was observed in Babylon early in the morning of 15 April 136 BC. But if we calculate back to the relative positions of the sun, Earth and moon on that date, and assume the Earth has rotated at a constant rate during the intervening centuries, we find the eclipse shadow sweeping through the Atlas Mountains of Morocco, 48.8º of longitude west of Babylon. The difference in longitude represents a three-hour lag in rotation—the naive calculation, ignoring the tidal slowing of the Earth’s rotation, has allowed Babylon to rotate out from under the eclipse track. That mismatch is one of the ways we know about the long-term slowing of the Earth’s rotation.

Babylon eclipse, 136 BC
Source of base map
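The size of that three-hour lag is easy to estimate. If the day lengthens steadily by 1.7 milliseconds per century, the accumulated discrepancy grows as the square of the elapsed time, because it's the sum of a linearly growing daily excess. A rough sketch, measuring from the early nineteenth century (roughly the era when the mean solar day matched our modern standard second):

```python
RATE = 1.7e-3  # day-lengthening in seconds per day, per century
DAYS_PER_CENTURY = 36525

def accumulated_lag(centuries):
    """Total clock drift (seconds) accumulated over the given number of
    centuries of steady slowing: the area of a triangle, 1/2 * rate * t**2."""
    return 0.5 * RATE * DAYS_PER_CENTURY * centuries ** 2

lag = accumulated_lag(19.5)  # from 136 BC to about AD 1820
hours = lag / 3600           # a little over three hours
degrees = lag / 86400 * 360  # in the right region of the observed 48.8 degrees
```

A back-of-envelope parabola like this can't reproduce the eclipse figure exactly (the real slowing hasn't been perfectly steady), but it lands in the right place.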

Here and now, we have three important ways by which we measure the passage of time. The first, and most important in everyday life, is by the rotation of the Earth. We define local noon as being the time at which the sun reaches its highest point in the sky, and we define a solar day as being the time between successive noons. Well, sort of. Because the Earth’s orbit is elliptical, and the Earth’s axis is tilted relative to its orbit, the elapsed time between successive noons varies during the course of the year. So we average those noon passages over a long time period in order to come up with a definition for the day—specifically, it’s a mean solar day.

This mean solar day is conventionally divided into the familiar hours, minutes and seconds, giving 24×60×60=86400 seconds per day. But you can see that there’s a problem with that, because seconds are of a fixed duration, established and defined as part of the Système International (SI) units in 1960. Whereas we now know that the length of the mean solar day is increasing as the Earth’s rotation slows.

Scientists were aware of this problem when the SI units were being defined, and decided they needed to use some other source, with a more fixed and regular motion, in order to define a constant second. Initially, they resorted to our second important means of time-keeping—the movement of the Earth around the sun. The length of the year is rather closer to being constant than the rotation period of the Earth. So the duration of the second was defined as being 1/31,556,925.9747 of a tropical year. (The tropical year is a measure of the passage of the seasons—it’s the year that our calendar strives to approximate with all those leap days.)
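It's easy to confirm that this odd-looking denominator is just the familiar length of the year expressed in seconds:

```python
TROPICAL_YEAR = 31_556_925.9747  # seconds, per the original SI definition

days = TROPICAL_YEAR / 86_400    # divide by seconds per mean solar day
# about 365.2422 days, the usual figure quoted for the tropical year
```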

So that was fine, then. But not quite, because the tropical year is itself a little variable. So what was adopted as the standard was derived from a very specific (and sort of fictitious) tropical year, based on formulae given in Simon Newcomb’s astronomical opus, The Elements Of The Four Inner Planets And The Fundamental Constants Of Astronomy, published in 1895 and based on astronomical observations made between 1750 and 1892. Specifically, the tropical year on which the SI second was based was something produced by Newcomb’s formulae if you plugged in a precise time and date near the start of 1900. So there was no actual year that corresponded to the value used in the definition. And of course Newcomb, and the observers who provided his data, had never used a constant definition of the second. Their definition was based on a second that was exactly 1/86400 of a mean solar day—so seconds, as defined in 1750, were a tiny bit shorter than seconds as defined in 1892. When Newcomb tabulated all these observational data and produced his summary formulae, he effectively averaged out the very slight drift in the value of the second over the observation period. Newcomb’s second, which became the SI second, was only ever exactly 1/86400 of a mean solar day somewhere around the year 1820. So even at the moment of its adoption in 1960, the SI second was slightly adrift from the duration of a mean solar day.
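As a quick sanity check (my own arithmetic, not part of the original definition), the number in that defining fraction does correspond to a year of the familiar length:

```python
# The defining tropical year, expressed in 86,400-second days.
SECONDS_PER_TROPICAL_YEAR = 31_556_925.9747
days = SECONDS_PER_TROPICAL_YEAR / 86_400   # the familiar ~365.2422 days
```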

The astronomical definition of the SI second was always a bit unwieldy for general use. Fortunately, there was a third method of measuring time, which had been growing steadily more precise around the time the SI units were introduced—the atomic clock. So in 1967 the definition of the second was transferred to something you could actually measure in the laboratory—the behaviour of a particular kind of clock based on the element caesium. Thenceforth, the SI second was defined as:

… the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.

So now we have a precise and portable definition of the second, which carries over its duration from a previous astronomical definition, based on nineteenth-century observations. This is the basis for a standard called International Atomic Time (abbreviated TAI, for Temps Atomique International), which is based on pooled readings from multiple atomic clocks around the world.

But the Earth’s rotation is steadily lagging behind TAI. So to keep our everyday clocks in synchrony with the slowly lengthening mean solar day, we use a different time scale called Coordinated Universal Time (confusingly abbreviated UTC), which is the basis for Greenwich Mean Time and all the various time zones around the world.

UTC uses the same SI seconds as TAI, but every now and then needs to pause for a moment to allow the Earth’s rotation to “catch up”. Which is where the leap seconds come in—when required, at the end of a month, we add an extra second just before midnight, Greenwich Mean Time. A clock that displays such leap seconds reads 23:59:60 (as at the head of this post) before cycling through to 00:00:00. This leap second is added everywhere, simultaneously, so it occurs in the afternoon or evening in the Americas, but in the morning in Asia and Australia.

At present, the Earth’s rotation is slowing at about 1.4 milliseconds per day, per century (a little slower than the millennial trend I mentioned earlier). Since two centuries have elapsed since the second was exactly 1/86400 of the mean solar day, the day should now be 86400.0028 seconds long, which corresponds to almost exactly one extra second per year.
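The arithmetic in that last paragraph is easy to sketch (my own figures, derived only from the rates quoted above):

```python
# Two centuries of slowing at 1.4 ms per day, per century, since the
# mean solar day last matched 86,400 SI seconds (around 1820).
excess_per_day = 1.4e-3 * 2              # 2.8 ms of extra day length
day_length = 86_400 + excess_per_day     # 86400.0028 seconds
lag_per_year = excess_per_day * 365.25   # accumulates to ~1 s per year
```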

So why don’t we just schedule an extra second for every December 31st and have done with it? Because the Earth’s rotation rate varies irregularly, from day to day and year to year, around the long-term mean rate of slowing. This happens because stuff (air, water, rock) is always moving around, sometimes shifting closer to the Earth’s axis of rotation, and sometimes farther away. If mass moves closer to the axis, the Earth speeds up a little; if mass moves away from the axis, the Earth slows down—this is caused by the same conservation of angular momentum that allows figure skaters and acrobatic divers to modify their rate of rotation by drawing in or spreading their arms. So earthquakes, glaciers melting and seasonal shifts in the air mass all contribute to the variability of the Earth’s rotation rate.

In the early days of the leap second (which was introduced in 1972) we did indeed have a leap second every year. But the Earth’s rotation rate has actually bucked the trend and speeded up a little of late, so leap seconds have become more sporadic—we had no leap seconds at all between 1999 and 2004, and the most recent was in 2016. The aim of the leap second is to keep UTC correct to within 0.9 seconds of the mean solar day, and the situation is constantly reviewed by the International Earth Rotation and Reference Systems Service, which issues a six-monthly Bulletin C, declaring or omitting a leap second at the end of each six-month period. Which is why I can’t (at time of writing) say for sure if we’ll have a leap second in 2020.

I’ll add a footnote as soon as I know. Meanwhile, have a Happy New Year.

The Coordinate Axes Of Apollo-Saturn: Part 2

In my previous post on this topic, I described how flight engineers working on the Apollo programme assigned XYZ coordinate axes to the Saturn V launch vehicle and to the two Apollo spacecraft, the Command/Service Module (CSM) and the Lunar Module (LM). This time, I’m going to talk about how these axes came into play when the launch vehicle and spacecraft were in motion. At various times during an Apollo mission, they would need to orientate themselves with an axis pointing in a specific direction, or rotate around an axis so as to point in a new direction. These axial rotations were designated roll, pitch and yaw, and the names were assigned in a way that would be familiar to the astronauts from their pilot training. To pitch an aircraft, you move the nose up or down; to yaw, you move the nose to the left or right; and to roll, you rotate around the long axis of the vehicle.

These concepts translated most easily to the axes of the CSM (note the windows on the upper right surface of the conical Command Module, which indicate the orientation of the astronauts while “flying” the spacecraft):

XYZ axes of CSM
Click to enlarge
Apollo 15, AS15-88-11961

With the astronauts’ feet pointing in the +Z direction as they looked out of the windows on the -Z side of the spacecraft, they could pitch the craft by rotating it around the Y axis, yaw around the Z axis, and roll around the X axis.

The rotation axes of the LM were similarly defined by the position of the astronauts:

XYZ axes of LM
Click to enlarge
Apollo 9, AS09-21-3212

Looking out of the windows in the +Z direction, with their heads pointing towards +X, they yawed the LM around the X axis, pitched it around the Y axis, and rolled it around the Z axis.

For the Saturn V, the roll axis was obviously along the length of the vehicle, its X axis.

XYZ axes of Saturn V launch vehicle
Click to enlarge
Apollo 8, S68-55416

But how do you decide which is pitch and which is yaw, in a vehicle that is superficially rotationally symmetrical? It turns out that the Saturn V was designed with a side that was intended to point down—its downrange side, marked by the +Z axis, which pointed due east when the vehicle was on the launch pad. This is the direction in which the space vehicle would travel after launch, in order to push the Apollo spacecraft into orbit—and to do that it needed to tilt over as it ascended, until its engines were pointing west and accelerating it eastwards. So the +Z side gradually became the down side of the vehicle, and various telemetry antennae were positioned on that side so that they could communicate with the ground. You’ll therefore sometimes see this side referred to as the “belly” of the space vehicle. And with +Z marking the belly, we can now tell that the vehicle will pitch around the Y axis, and yaw around the Z axis.

If you have read my previous post on this topic, you’ll know that the astronauts lay on their couches on the launch pad with their heads pointing east.* So as the space vehicle “pitched over” around its Y axis, turning its belly towards the ground, the astronauts ended up with their heads pointing downwards, all the way to orbit. This was done deliberately, so that they could have a view of the horizon during this crucial period.

But the first thing the Saturn V did, within a second of starting to rise from the launch pad, was yaw. It pivoted through a degree or so around its Z axis, tilting southwards and away from the Launch Umbilical Tower on its north side. Here you can see the Apollo 13 space vehicle in the middle of its yaw manoeuvre:

Apollo 13 yaw manoeuvre
Click to enlarge
Apollo 13, KSC-70PC-107

This was carried out so as to nudge the vehicle clear of any umbilical arms on the tower that had failed to retract.

Then, once clear of the tower, the vehicle rolled, turning on its vertical X axis. This manoeuvre was carried out because, although the belly of the Saturn V pointed east, the launch azimuth could actually be anything from 72º to 108º, depending on the timing of the launch within the launch window. (See my post on How Apollo Got To The Moon for more about that.) Here’s an aerial view of the two pads at Launch Complex 39, from which the Apollo missions departed, showing the relevant directions:

Launch Complex 39
Click to enlarge
Based on NASA image by Robert Simmon, using Advanced Land Imager data distributed by the USGS Global Visualization Viewer.

An Apollo launch which departed at the start of the launch window would be directed along an azimuth close to 72º, and so needed to roll anticlockwise (seen from above) through 18º to bring its +Z axis into alignment with the correct azimuth, before starting to pitch over and accelerate out over the Atlantic.
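The size of the required roll is simple to sketch in code. This is a toy calculation of my own, with my own sign convention (negative meaning anticlockwise, seen from above); it isn’t NASA’s formulation:

```python
# On the pad the Saturn V's belly (+Z) points due east, azimuth 90 degrees.
# The roll manoeuvre swings +Z to the launch azimuth for that moment in
# the launch window.
def roll_to_azimuth(launch_azimuth_deg, pad_belly_azimuth_deg=90.0):
    return launch_azimuth_deg - pad_belly_azimuth_deg

roll_to_azimuth(72.0)    # -18.0: anticlockwise, start of the window
roll_to_azimuth(108.0)   # +18.0: clockwise, end of the window
```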

Once in orbit, the S-IVB stage continued to orientate with its belly towards the Earth, so that the astronauts could see the Earth and horizon from their capsule windows. This orientation was maintained right through to Trans-Lunar Injection (TLI), which sent the spacecraft on their way to the moon.

During the two hours after TLI, the CSM performed a complicated Transposition, Docking and Extraction manoeuvre, in which it turned around, docked nose to “roof” with the LM, and pulled the LM away from the S-IVB.

Transposition, Docking & Extraction
Click to enlarge
Apollo 11 Press Kit

This meant that the X axes of CSM and LM were now aligned but opposed—their +X axes pointing towards each other. But they were also oddly rotated relative to each other. Here’s a picture from Apollo 9, taken by Rusty Schweickart, who was outside the LM hatch looking towards the CSM, where David Scott was standing up in the open Command Module hatch.

Principal axes of docked CSM & LM
Click to enlarge
Apollo 9, AS09-20-3064

The Z axes of the two spacecraft are not aligned, nor are they at right angles to each other. In fact, the angle between the CSM’s -Z axis and the LM’s +Z axis is 60º. This odd relative rotation meant that, during docking, the Command Module Pilot, sitting in the left-hand seat of the Command Module and looking out of the left-hand docking window, had a direct line of sight to the docking target on the LM’s “roof”, directly to the left of the LM’s docking port.

Alignment of CSM/LM axes during docking
Click to enlarge
CSM/LM Operational Data Book, Vol. III

Once the spacecraft were safely docked, roll thrusters on the CSM were fired to make them start rotating around their shared X axis. This was called the “barbecue roll” (formally, Passive Thermal Control), because it distributed solar heating evenly by preventing the sun shining continuously on one side of the spacecraft.

Once in lunar orbit, the LM separated from the CSM and began its powered descent to the lunar surface. This was essentially the reverse of the process by which the Saturn V pushed the Apollo stack into Earth orbit. Initially, the LM had to fire its descent engine in the direction in which it was orbiting, so as to cancel its orbital velocity and begin its descent. So its -X axis had to be pointed ahead and horizontally. During this phase the Apollo 11 astronauts chose to point their +Z axis towards the lunar surface, so that they could observe landmarks through their windows—they were flying feet-first and face-down. Later in the descent, as its forward velocity decreased, the LM needed to rotate to assume an ever more upright position (-X axis down) until it came to a hover and descended vertically to the lunar surface. So later in the powered descent, Armstrong and Aldrin had to roll the LM around its X axis into a “windows up” position, facing the sky. Then, as the LM gradually pitched into the vertical position, with its -X axis down, the +Z axis rotated to face forward, giving the astronauts the necessary view ahead towards their landing zone.

Lunar Module powered descent
Click to enlarge
The LM pitches towards the vertical as it descends (NASA TM X-58040)

Finally, at the end of the mission, the XYZ axes turn out to be important for the re-entry of the Command Module (CM) into the Earth’s atmosphere. The CM hit the atmosphere blunt-end first, descending at an angle of about 6º to the horizontal. But it was also tilted slightly relative to the local airflow, with the +Z edge of its basal heat-shield a little ahead of the -Z edge. This tilt occurred because the centre of mass of the CM was deliberately offset very slightly in the +Z direction, so that the airflow pushed the CM into a slightly tilted position. This tilt, in turn, generated a bit of lift in the +Z direction—which made the Command Module steerable. It entered the atmosphere with its +Z axis pointing upwards (and the astronauts head-down, again, with a view of the horizon through their windows). The upward-directed lift prevented the CM diving into thicker atmosphere too early, and reduced the rate of heating from atmospheric compression.

Apollo Command Module generating lift
Click to enlarge

Later in re-entry, the astronauts could use their roll thrusters to rotate the spacecraft around its X axis, using lift to steer the spacecraft right or left, or even rolling it through 180º so as to direct lift downwards, steepening their descent if they were in danger of overshooting their landing zone.


* As described in my previous post on this topic, the coordinate axes of the CSM were rotated 180º relative to those of the Saturn V—the astronauts’ heads pointed in the -Z direction of the CSM, but the +Z direction of the Saturn V.

I’m missing out a couple of steps here, in an effort to be succinct. (I know, I know … that’s not like me. Take a look at NASA Technical Memorandum X-58040 if you want to know all the details.)

The Coordinate Axes Of Apollo-Saturn: Part 1

As a matter arising from my long, slow build of a Saturn V model, I became absorbed in the confusing multiplicity of coordinate systems and axes applied to the Apollo launch vehicle and spacecraft. So I thought I’d provide a guide to what I’ve learned, before I forget it all again. (Note, I won’t be talking about all the other coordinate systems used by Apollo, relating to orbital planes, the Earth and the Moon—just the ones connected to the machinery itself. And I’m going to talk only about the Saturn V launch vehicle, though much of what I write can be transferred to the Saturn IB, which launched several uncrewed Apollo missions, as well as Apollo 7.)

First up, some terminology. The Saturn V that sent Apollo on its way to the Moon is called the launch vehicle, consisting of three booster stages, with an Instrument Unit on top, responsible for controlling what the rest of the launch vehicle does. Sitting on top of the launch vehicle, mated to the Instrument Unit, is the spacecraft—all the specifically Apollo-related hardware that the launch vehicle launches. This bit is sometimes also called the Apollo stack, since it will eventually split up into two independent spacecraft—the Lunar Module (LM) and the Command/Service Module (CSM). The combination of launch vehicle and spacecraft (that is, the whole caboodle as it sat on the launch pad) is called the space vehicle.

Components of Apollo-Saturn
Click to enlarge
From NASA Technical Note D-5399

The easiest set of coordinate axes to see and understand were the position numbers and fin letters which were labelled in large characters on the base of the Saturn V’s first stage, the S-IC. You can see them here, in my own model of the S-IC:

Position and fin labels, Saturn V
Click to enlarge

In this view you can see fins labelled C and D, and the marker for Position IIII, equidistant between them.

The numbering and lettering ran anticlockwise around the launch vehicle when looking down from above, creating an eight-point coordinate system of lettered quadrants (A to D) with numbered positions (I to IIII) between them, which applied to the whole launch vehicle. They marked out the distribution of black and white stripes—each stripe occupied the span between a letter and a number, with white stripes always to the left of the position numbers, and black stripes to the right. The five engines of the S-IC and S-II stages were each numbered according to the lettered quadrant in which they lay, with Engine 5 in the centre, Engine 1 in the A quadrant, Engine 2 in the B quadrant, and so on. The curious chequer pattern of the S-IVB aft interstage (the “shoulder” where the launch vehicle narrows down between the second and third stages) is distributed in the lettered quadrants, with A all black, B black high and white low, C white high and black low, and D all white.*

S-IVB Aft Interstage axes and paint
Click to enlarge
Umbilicals & Hatches, Saturn V Pos. II
Click to enlarge
Umbilical connections (red) and personnel hatches (blue), Apollo-Saturn Pos. II

Position II of the launch vehicle was the side facing the Launch Umbilical Tower (LUT), so that side of the Saturn V was dotted with umbilical connections and personnel access hatches, as well as a prominent vertical dashed line painted on the second stage, called the vertical motion target, which made it easy for cameras to detect the first upward movement as the space vehicle left the launch pad. You don’t often get a clear view of the real thing from the Position II side, so I’ve marked up the appropriate view of my model instead, at left.

The two Cape Kennedy launch pads used for Apollo (39A and 39B) were oriented on a north-south axis, with the LUT positioned on the north side of the Saturn V, so Position II faced north. Position IIII, on the opposite side, faced south, looking back down the crawler-way along which the Saturn V had been transported on its Mobile Launcher Platform. Position IIII was also the side that faced the Mobile Service Structure, which was rolled up to service the Saturn V in its launch position, and then rolled away again before launch. And so Position I faced east, which was the direction in which the space vehicle had to travel in order to push the Apollo stack into orbit.

These letters and numbers seem to have been largely a reference for the contractors and engineers responsible for assembling and mating the different launch vehicle stages. Superimposed on them were the reference axes used by the flight engineers, who used them to talk about the orientation and movements of the launch vehicle and the two Apollo spacecraft. These axes were labelled X, Y and Z.

For the launch vehicle, LM and CSM the positive X axis was defined as pointing in the direction of thrust of the rocket engines. So the end with the engines was always -X, and the other end was +X. The +Z direction was defined as “the preferred down range direction for each vehicle, when operating independently”. For the launch vehicle, that’s straightforward—downrange is to the east as it sits on the pad (the direction in which it will travel after launch), so +Z corresponds to Position I, and -Z to Position III. The Y axis was always chosen to make a “right-handed” coordinate system, so +Y points south through Position IIII.
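That “right-handed” claim is easy to verify with a cross product. Here’s a sketch in Python, using an east/north/up frame of my own choosing:

```python
# On the pad, +X is up and +Z is east. A right-handed system requires
# X cross Y = Z, which only works out if +Y points south.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

EAST, UP, SOUTH = (1, 0, 0), (0, 0, 1), (0, -1, 0)

assert cross(UP, SOUTH) == EAST   # X cross Y = Z, so +Y = south
```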

In the image below, we’re looking north. Once the Saturn V has launched it will tip over and head eastwards (to the right) to inject the Apollo stack into orbit.

XYZ axes of Saturn V launch vehicle
Click to enlarge
Apollo 8, S68-55416

These axes were actually labelled on the outside of the Instrument Unit (IU), at the very top of the launch vehicle. Here’s one in preparation, with the +Z label flanked by the casings of two chunky directional antennae—a useful landmark I’ll come back to later.

Saturn V instrument unit
Click to enlarge
Source

So here’s a summary of all the axes of the Saturn V:

Saturn V principal axes
Click to enlarge

Moving on to the Lunar Module, its downrange direction is the direction in which it travels during landing, when it is orientated with its two main windows facing forward—so +Z points in that direction, out the front. The right-hand coordinate system then puts +Y to the astronauts’ right as they stand looking out the windows.

XYZ axes of LM
Click to enlarge
Apollo 9, AS09-21-3212

The landing legs were designated according to their coordinate axis locations. In the descent stage, between the legs, were storage areas called quads—they were numbered from 1 to 4 anticlockwise (looking down), starting with Quad 1 between the +Z and -Y leg. The ascent stage, sitting on top of the descent stage, had four clusters of Reaction Control System (RCS) thrusters, which were situated between the principal axes and numbered with the same scheme as the descent-stage quads.

Lunar Module principal axes
Click to enlarge

But it’s not clear that there is a natural downrange direction for the CSM—the +Z direction is defined (fairly randomly, I think) as pointing towards the astronauts’ feet, with -Z therefore corresponding to the position of the Command Module hatch. That places +Y to the astronauts’ right side as they lie in their couches.

XYZ axes of CSM
Click to enlarge
Apollo 15, AS15-88-11961

The Command Module was fairly symmetrical around its Z axis, and its RCS thrusters were neatly placed on the Z and Y axes. Not so the Service Module, which was curiously skewed. Its RCS thrusters, arranged in groups of four called quads, were offset from the principal axes by 7º15′ in a clockwise direction when viewed from ahead (that is, looking towards the pointed end of the CSM). The RCS quad next to the -Z axis was designated Quad A; Quad B was near the +Y axis, and the lettering continued in an anticlockwise direction through C and D. I’ve yet to find out why the RCS system was offset in this way, since it would necessarily produce translations and rotations that were offset from the “natural” orientation of the crew compartment, and from the translations and rotations produced by the RCS system of the Command Module.

The Service Module also contained six internal compartments, called sectors, numbered from 1 to 6. These were symmetrically placed relative to the RCS system, rather than the spacecraft’s principal axes. Finally, the prominent external umbilical tunnel connecting the Service Module to the Command Module wasn’t quite on the +Z axis, but offset by 2º20′ in the same sense as the RCS offset.

Command/Service Module principal axes
Click to enlarge

So those are the axes for the launch vehicle and spacecraft. But how did they line up when the Saturn V and Apollo stack were assembled? Badly, as it turns out.

First, the good news—all the X axes align, because the spacecraft and launch vehicle are all positioned engines-down for launch, for structural support reasons, if nothing else.

With regard to Y and Z, it’s easy to see the CSM’s orientation on the launch pad. Here’s a view from the Launch Umbilical Tower, which we’ve established (see above) is on the -Y side of the launch vehicle. The tunnel allowing access to the crew hatch of the Command Module (-Z) is on the left, and the umbilical tunnel connecting the Service Module to the Command Module is on the right (+Z), so the CSM +Y axis is pointing towards us.

YZ axes of CSM on launch pad
Click to enlarge
Apollo 11, 69-HC-718

Oops. The CSM YZ axes are rotated 180º relative to those of the Saturn V launch vehicle.

It’s more difficult to find out the orientation of the Lunar Module within the Apollo stack, since it’s concealed inside the shroud of the Spacecraft/Lunar Module Adapter. Various diagrams depict it as facing in any number of directions relative to the CSM, but David Weeks’s authoritative drawings show it turned so that its +Z and +Y axes align with those of the CSM—facing to the right in the picture above, then, with its YZ axes rotated 180º relative to those of the Saturn V launch vehicle below. We can check that this is actually the case by looking at photographs of the LM when it’s exposed on top of the S-IVB and Instrument Unit, during the transposition and docking manoeuvre. The viewing angles are never very favourable, but the big pair of directional antennae flanking the +Z direction on the IU are useful landmarks (see above).

XY axes of LM and IU
Click to enlarge
Apollo 9, AS09-19-2925

We can see that the front of the Lunar Module (+Z) is indeed pointing in the opposite direction to the directional antennae marking the +Z axis of the IU and the rest of the launch vehicle. Weeks’s drawings are correct.

So, sitting on the launch pad, the axes of the launch vehicle are pointing in the opposite direction to those of the spacecraft. NASA rationalized this situation by stating that:

A Structural Body Axes coordinate system can be defined for each multi-vehicle stack. The Standard Relationship defining this coordinate system requires that it be identical with the Structural Body Axes system of the primary or thrusting vehicle.

NASA, Project Apollo Coordinate System Standards (June 1965)

So the whole space vehicle used the coordinate system of the Saturn V launch vehicle, and the independent coordinates of the LM and CSM didn’t apply until they were manoeuvring under their own power.

So, beware—there’s real potential for confusion here, when modelling the Apollo-Saturn space vehicle, because different sources use different coordinates; and many diagrams, even those prepared by NASA, do not reflect the final reality.

In Part 2, I write about what happens to all those XYZ axes once the vehicles start moving around.


* I suspect I’m not the first person to notice that the S-IVB aft interstage chequer can be interpreted as sequential two-digit binary numbers, with black signifying zero and white representing one. Reading the least significant digit in the “low” positions, we have 00 in the A quadrant, 01 in the B quadrant, 10 in C and 11 in D—corresponding to 0, 1, 2, 3 in decimal. (I doubt if it actually means anything, but it’s a useful aide-memoire. Well, if you have a particular kind of memory, I suppose.)
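For what it’s worth, the binary reading is easy to check in code (the colour table below is just the paint scheme described in the text):

```python
# Each quadrant's chequer as a two-digit binary number: black = 0,
# white = 1, least significant digit in the "low" position.
bit = {"black": 0, "white": 1}
paint = {                      # (high, low) panels per quadrant
    "A": ("black", "black"),
    "B": ("black", "white"),
    "C": ("white", "black"),
    "D": ("white", "white"),
}
values = {q: 2 * bit[hi] + bit[lo] for q, (hi, lo) in paint.items()}
# values == {"A": 0, "B": 1, "C": 2, "D": 3}
```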

Relativistic Ringworlds

Cover of Xeelee Redemption by Stephen Baxter

No matter how many times he considered it, Jophiel shivered with awe. It was obviously an artefact, a made thing two light years in diameter. A ring around a supermassive black hole.

Stephen Baxter, Xeelee: Redemption (2018)

 

I’ve written about rotating space habitats in the past, and I’ve written about relativistic starships, so I guess it was almost inevitable I’d end up writing about the effect of relativity on space habitats that rotate really, really rapidly.

What inspired this post was my recent reading of Stephen Baxter’s novel Xeelee: Redemption. I’ve written about Baxter before—he specializes in huge vistas of space and time, exotic physics, and giant mysterious alien artefacts. This novel is part of his increasingly complicated Xeelee sequence, which I won’t even attempt to summarize for you. What intrigued me on this occasion was Baxter’s invocation of a relativistic ringworld, briefly described in the quotation above.

Ringworlds are science fiction’s big rotating space habitats, originally proposed by Larry Niven in his novel Ringworld (1970). Instead of spinning a structure a few tens of metres in diameter to produce centrifugal gravity, like the space station in the film 2001: A Space Odyssey, Niven imagined one that circled a star, with a radius comparable to Earth’s distance from the sun. Spin one of those so that it rotates once every nine days or so, and you have Earthlike centrifugal gravity on its inner, sun-facing surface.

If we stipulate that we want one Earth gravity (henceforth, 1g), then there are simple scaling laws to these things—the bigger they are, the longer it takes for them to rotate, but the faster the structure moves. The 11-metre diameter centrifuge in 2001: A Space Odyssey would have needed to rotate 13 times a minute, with a rim speed of 7m/s, to generate 1g.

Estimates vary for the “real” size of the space station in the same movie, but if we take the diameter of “300 yards” from Arthur C. Clarke’s novel, it would need to rotate once every 23.5 seconds, with a rim speed of 37m/s.

Space Station V from 2001 A Space Odyssey
Niven’s Ringworld takes nine days to revolve, but has a rim speed of over 1,000 kilometres per second.

Ringworld
Image by Hill, used under the Creative Commons Attribution-Share Alike 3.0 Unported licence.

You get the picture. For any given level of centrifugal gravity, the rotation period and the rotation speed both vary with the square root of the radius.
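Those scaling laws are easy to play with in code. Here’s a sketch (my own arithmetic, using the radii quoted above; the 137.2 m is half of Clarke’s “300 yards”):

```python
import math

G = 9.81  # m/s^2, the target centrifugal "gravity"

def spin_for_1g(radius_m):
    """Rim speed (m/s) and rotation period (s) for 1g at this radius."""
    v = math.sqrt(G * radius_m)           # from a = v^2 / r
    period = 2 * math.pi * radius_m / v
    return v, period

spin_for_1g(5.5)        # 2001 centrifuge: ~7 m/s, ~4.7 s (about 13 rpm)
spin_for_1g(137.2)      # Space Station V: ~37 m/s, ~23.5 s
spin_for_1g(1.496e11)   # Ringworld at 1 AU: ~1,200 km/s, ~9 days
```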

So what Baxter noticed is that if you make a ringworld with a radius of one light-year, and rotate it with a rim speed equal to the speed of light, it will produce a radial acceleration of 1g.* In a sense, he pushed the ringworld concept to its extreme conclusion, since nothing can move faster than light. Indeed, nothing can move at the speed of light—so Baxter’s ring is just a hair slower. By my estimate, from figures given in the novel, the lowest “deck” of his complicated ringworld is moving at 99.999999999998% of light speed (that’s thirteen nines).
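Baxter’s observation is a one-liner to verify (my own arithmetic):

```python
# Rim speed c on a ring one light year in radius gives a centripetal
# acceleration of v^2/r = c^2/(1 ly), which comes out close to 1g.
C = 299_792_458.0                  # speed of light, m/s
LIGHT_YEAR = C * 365.25 * 86_400   # metres
a = C**2 / LIGHT_YEAR              # about 9.5 m/s^2
```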

And this truly fabulous velocity is to a large extent the point. Clocks moving at close to the speed of light run slow, when checked by a stationary observer. This effect becomes more extreme with increasing velocity. The usual symbol for velocity when given as a fraction of the speed of light is β (beta), and from beta we can calculate the time dilation factor γ (gamma):

Formula for relativistic gamma

Here’s a graph of how gamma behaves with increasing beta—it hangs about very close to one for a long time, and then starts to rocket towards infinity as velocity approaches lightspeed (beta approaches one).

Relationship between relativistic beta and gamma
Click to enlarge

Plugging the mad velocity I derived above into this equation, we find that anyone inhabiting the lowest deck of Baxter’s giant alien ringworld would experience time dilation by a factor of five million—for every year spent in this extreme habitat, five million years would elapse in the outside world. This ability to “time travel into the far future” is a key plot element.

But there’s a problem. Quite a big one, actually.

The quantity gamma has wide relevance to relativistic transformations (even though I managed to write four posts about relativistic optics without mentioning it). As I’ve already said, it appears in the context of time dilation, but it is also the conversion factor for that other well-known relativistic transformation, length contraction. Objects moving at close to the speed of light are shortened (in the direction of travel) when measured by an observer at rest. A moving metre stick, aligned with its direction of flight, will measure only 1/γ metres to a stationary observer. Baxter also incorporates this into his story, telling us that the inhabitants of his relativistic ringworld measure its circumference to be much greater than what’s apparent to an outside observer.

So far so good. But acceleration is also affected by gamma, for fairly obvious reasons. It’s measured in metres per second squared, and those metres and seconds are subject to length contraction and time dilation. An acceleration in the line of flight (for instance, a relativistic rocket boosting to even higher velocity) will take place using shorter metres and longer seconds, according to an unaccelerated observer nearby. So there is a transformation involving gamma cubed, between the moving and stationary reference frames, with the stationary observer always measuring lower acceleration than the moving observer. A rocket accelerating at a steady 1g (according to those aboard) will accelerate less and less as it approaches lightspeed, according to outside observers. The acceleration in the stationary reference frame decays steadily towards zero, the faster the rocket moves—which is why you can’t ever reach the speed of light simply by using a big rocket for a long time.

That’s not relevant to Baxter’s ringworld, which is spinning at constant speed. But the centripetal acceleration, experienced by those aboard the ringworld as “centrifugal gravity”, also undergoes a conversion between the moving and stationary reference frames. Because this acceleration is always transverse to the direction of movement of the ringworld “floor” at any given moment, it’s unaffected by length contraction, which only happens in the direction of movement. But something that occurs in one second of external time will occupy less than a second of time-dilated ringworld time—so the ringworld inhabitants will experience an acceleration greater than that observed from outside, by a factor of gamma squared.

So the 1g centripetal acceleration required in order to keep something moving in a circle at close to lightspeed would be crushingly greater for anyone actually moving around that circle. In Baxter’s extreme case, with a gamma of five million, his “1g” habitat would experience 25 trillion gravities. Which is quite a lot.

To get the time-travel advantage of γ=5,000,000 without being catastrophically crushed to a monomolecular layer of goo, we need to make the relativistic ringworld a lot bigger. For a 1g internal environment, it needs to rotate to generate only one 25-trillionth of a gravity as measured by a stationary external observer. Keeping the floor velocity the same (to keep gamma the same), that means it has to be 25 trillion times bigger. Which is a radius of 25 trillion light-years, or 500 times the size of the observable Universe.
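As a sanity check on those two numbers, here’s the gamma-squared arithmetic sketched in Python (taking one gravity as 9.81 m/s², and a light-year derived from a 365.25-day year):

```python
C = 299_792_458.0                  # speed of light, m/s
G = 9.81                           # one Earth gravity, m/s^2
LIGHT_YEAR = C * 365.25 * 86_400   # metres

gamma = 5_000_000.0

# External observers see 1g of centripetal acceleration at the rim;
# those aboard feel gamma^2 times more:
felt = gamma ** 2 * 1.0            # in units of g
print(f"felt acceleration = {felt:.3g} g")    # 2.5e13 -- 25 trillion g

# To feel only 1g aboard, while keeping the rim speed (and so gamma)
# fixed, the radius must grow by that same factor of gamma^2:
radius = gamma ** 2 * C ** 2 / (G * LIGHT_YEAR)   # in light-years
print(f"required radius = {radius:.3g} light-years")
```

The radius comes out at about 2.4×10¹³ light-years—the round figure of 25 trillion in the text comes from the handy near-coincidence, described in the footnote, that one light-year of radius at a rim speed of c corresponds to almost exactly 1g.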

Even by Baxter’s standards, that would be … ambitious.


* This neat correspondence between light-years, light speed and one Earth gravity is a remarkable coincidence, born of the fact that a year is approximately 30,000,000 seconds, light moves at approximately 300,000,000 metres per second, and the acceleration due to Earth’s gravity is about 10 metres per second squared. Divide light-speed by the length of Earth’s year, and you have Earth’s gravity; the units match. This correspondence was a significant plot element in T.J. Bass’s excellent novel Half Past Human (1971).
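The coincidence is quickly confirmed with exact figures:

```python
c = 299_792_458.0          # speed of light, metres per second
year = 365.25 * 86_400     # seconds in a Julian year

# Dividing lightspeed by the length of a year gives an acceleration:
print(f"{c / year:.2f} m/s^2")   # about 9.5, close to Earth's 9.8
```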

Baxter’s novel is full of plot homages to Niven’s original Ringworld, including a giant mountain with a surprise at the top.

As Baxter also notes, this mismatch between the radius and circumference of a rapidly rotating object generates a fruitful problem in relativity called the Ehrenfest Paradox.

How Apollo Got To The Moon

Apollo 11 launch
Click to enlarge
NASA image S69-39961

I’m posting this at 13:32 GMT on 16th July 2019—exactly fifty years after the launch of Apollo 11. It’s the last part of a loose trilogy of posts about Apollo—the first two being M*A*S*H And The Moon Landings and The Strange Shadows Of Apollo. This one’s about the rather complicated sequence of events required to get the Apollo spacecraft safely to the moon.

To get from the Earth to the moon, Apollo needed to be accelerated into a long elliptical orbit. The low point of this orbit was close to the Earth’s surface (for Apollo 11, the 190-kilometre altitude of its initial parking orbit); the high point of the ellipse had to reach out to the moon’s distance (380,000 kilometres), or even farther.

Extremely diagrammatically, it looked like this:

Diagrammatic Apollo translunar trajectory

To be maximally fuel-efficient, the acceleration necessary to convert the low, circular parking orbit into the long, elliptical transfer orbit needs to be imparted at the lowest point of the ellipse—that is, on exactly the opposite side of the Earth from the planned destination. Since the moon is moving continuously in its orbit, the translunar trajectory actually has to “lead” the moon, and aim for where it will be when the spacecraft arrives at lunar orbit, about three days after leaving Earth.

Here’s the real elliptical transfer orbit followed by Apollo 11, drawn with the moon in the position it occupied at the time of launch (you’ll need to enlarge it to see detail):

Apollo 11 transfer orbit (1)
Click to enlarge
Prepared using Celestia

(For reasons I’ll come back to, NASA gave the Apollo spacecraft a little extra acceleration, lengthening its translunar transfer ellipse so that it would peak well beyond the moon’s orbit.)

And here’s the situation three days later, with Apollo 11 arriving at the moon’s orbit just as the moon arrives in the right place for a rendezvous:

Apollo 11 transfer orbit (2)
Click to enlarge
Prepared using Celestia

With the proximity of the moon at this point, lunar gravity in fact pulled the Apollo spacecraft away from the simple ellipse I’ve charted, warping its trajectory to wrap around the moon—something else I’ll come back to.

In the meantime, let’s go back to the fact that NASA needed to manoeuvre the Apollo spacecraft to a very exact position, on the opposite side of the Earth from the position the moon would occupy in three days’ time, and then accelerate it into the long elliptical orbit you can see in my diagrams. The process of accelerating from parking orbit to transfer orbit is called translunar injection, or TLI.

The point on the Earth’s surface opposite the moon at any given time is called the lunar antipode. (This is a horrible word, born of a misunderstanding of the word antipodes—I’ve written more about that topic in a previous post about words.) But, given that I don’t want to keep repeating the phrase “on the opposite side of the Earth from where the moon will be in three days’ time”, from now on I’ll use the word antipode with that meaning.

So TLI had to happen at this antipode, and NASA therefore needed to launch the Apollo lunar spacecraft into an Earth orbit that at some point passed through the antipode. Not only that, but they needed to do so using a minimum of fuel, and needed to get the spacecraft to the antipode reasonably quickly, so as to economize on consumables like air and food, thereby keeping the spacecraft’s launch weight as low as possible.

Now, the moon orbits the Earth in roughly the plane of the Earth’s orbit around the sun—the ecliptic plane. But the moon can stray 5.1º above or below the ecliptic. And the ecliptic is inclined at about 23.4º to the plane of the Earth’s equator. So the moon’s orbital plane can be inclined to the Earth’s equator at anything from 18.3º to 28.5º. This means the moon can never be overhead in the sky anywhere outside of a band between 28.5º north and south of the equator, and therefore its antipode is confined in the same way—always drifting around the Earth somewhere within, or just outside, the tropics.

The Cape Kennedy launch complex (now Cape Canaveral) lies at 28.6ºN. The most energy-efficient way to get a spacecraft into Earth orbit is to launch it due east, taking advantage of the Earth’s rotation to boost its speed. Such a trajectory puts the spacecraft into an orbit inclined at 28.6º to the equator. So a launch from Kennedy put a spacecraft into an orbit inclined relative to the plane of the moon’s orbit. The mismatch might be only a fraction of a degree, if the moon’s orbital plane happened to be tilted favourably towards Kennedy; but generally it was significantly larger than that, with the spacecraft’s orbit passing through the plane of the moon’s orbit at just two points.

As it happens, the situation at the time of the Apollo 11 mission shows all these angles between equator, ecliptic, moon’s orbit and Apollo parking orbit quite clearly, because all the tilts were roughly aligned with each other. Here’s a view from above the east Pacific at the time of Apollo 11’s launch: 13:32 GMT, 16 July 1969:

Relevant orbital planes at time of Apollo 11 launch
Click to enlarge
Prepared using Celestia

The red line is the ecliptic, the plane of Earth’s orbit around the sun. From the latitude and longitude grid I’ve laid on to the Earth, you can see how the Earth’s northern hemisphere is tilted towards the sun, enjoying northern summer. The plane of the moon’s orbit (in cyan) is carrying the moon above the ecliptic plane on the illuminated side of the Earth, so that the angle between the Apollo 11 parking orbit and the moon’s orbital plane is relatively small.

It wasn’t always like that, though. Here’s the situation at Apollo 14’s launch: 21:03 GMT, 31 January 1971. It’s southern summer, and the plane of the moon’s orbit crosses Australia, so Apollo 14’s parking orbit passes through the plane of the moon’s orbit at a fairly steep angle.

Apollo 14 orbit and plane of moon's orbit
Click to enlarge
Prepared using Celestia

Whatever the crossing angle, NASA needed to launch the Apollo moon missions so that the spacecraft’s orbit took it through the moon’s orbital plane at the same moment the antipode drifted through that crossing point. And in order to economize on consumables, that needed to happen within the time it took to make two or three spacecraft orbits, each lasting an hour and a half. This requirement dictated that there was always a launch window for each lunar mission—any launch that didn’t take place within a very specific time frame had no chance of bringing the spacecraft and the antipode together to allow a successful TLI.

At first sight, it seems like the launch window should be vanishingly narrow, given that the parking orbit intersects the moon’s orbital plane at only two points, only one of which can be suitable for a TLI at any given time. In fact, by varying the direction in which the Saturn V launched, NASA was able to hit a fairly broad sector of the lunar orbital plane. Launching in any direction except due east was less energy-efficient, but with additional fuel the Apollo spacecraft could still be placed in orbit using launch directions 18º either side of due east. The technical name for the launch direction, as measured in the horizontal plane, is the launch azimuth. So Apollo could be launched on azimuths anywhere between 72º and 108º east of north.

You can see this range of orbital options drawn out in sinusoids on Apollo 11’s Earth Orbit Chart:

Apollo 11 Earth Orbit chart
Click to enlarge

Cape Kennedy is at the extreme left edge of the chart, and all the options for launch azimuths between 72º and 108º are marked. Here’s a detail from that edge:

Detail of Apollo 11 Earth Orbit chart
Click to enlarge

Notice how launches directed either north or south of east take the spacecraft to a higher latitude than Cape Kennedy’s, and therefore into a more inclined orbit—at the extremes, Apollo orbits were inclined at close to 33º.
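The relationship between launch azimuth and orbital inclination follows from spherical trigonometry: cos(inclination) = cos(latitude) × sin(azimuth). Here’s a quick sketch, using the Cape Kennedy latitude and the azimuth limits from the text:

```python
import math

def inclination(lat_deg: float, azimuth_deg: float) -> float:
    """Orbital inclination (degrees) for a launch at the given latitude
    and azimuth, from cos(i) = cos(lat) * sin(az)."""
    cos_i = math.cos(math.radians(lat_deg)) * math.sin(math.radians(azimuth_deg))
    return math.degrees(math.acos(cos_i))

LAT = 28.6  # Cape Kennedy, degrees north

for az in (72, 90, 108):
    print(f"azimuth {az:3d} deg -> inclination {inclination(LAT, az):.1f} deg")
```

A due-east launch (azimuth 90º) reproduces the site latitude of 28.6º, while the extreme azimuths of 72º and 108º both give inclinations of about 33.4º, matching the “close to 33º” visible on the chart.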

So NASA could take aim at the antipode by adjusting the launch direction. By launching north of east, they could hit a more easterly antipode; by launching south of east, they could hit a more westerly antipode. This range of options allowed a launch window spanning about four hours. A launch early in the launch window would involve an azimuth close to 72º, as the launch vehicle was aimed at the antipode in its most extreme accessible eastern position. During the four-hour window, as the moon moved across the sky from east to west, the antipode would track across the Earth’s surface in the same direction, and the required launch azimuth would gradually increase, until the launch window closed when an azimuth of 108º was reached. NASA planned to have their launch vehicle ready to go just as the launch window opened, to give themselves maximum margin for delays. Apollo 11 launched on time, and so departed along an azimuth very close to 72º.

Here’s the Apollo 11 launch trajectory:

Apollo 11 launch sequence
Click to enlarge
Prepared using Celestia

The huge S-IC stage (the first stage of the Saturn V) shut down and dropped away with its fuel exhausted after just 2½ minutes, falling into the western Atlantic (where one of its engines was recently retrieved from 4.3 kilometres underwater). The S-II second stage then burned for 6½ minutes before falling away in turn, dropping in a long trajectory that ended in mid-Atlantic. Meanwhile, the S-IVB third stage fired for another two minutes, shoving the Apollo spacecraft into Earth orbit before shutting down at a moment NASA calls Earth Orbit Insertion (EOI). The astronauts then had about two-and-a-half hours in orbit (completing about one-and-three-quarter revolutions around the Earth) before their scheduled rendezvous with the lunar antipode over the Pacific. This gave them time to check out the spacecraft systems and make sure everything was working properly before committing to the long translunar trajectory.

At two hours and forty-four minutes into the mission, the S-IVB engine was fired up again, and worked continuously for six minutes as Apollo 11 arced across the night-time Pacific. Here’s that trajectory with the S-IVB ignition and cutoff (TLI proper) marked, as well as the plane of the moon’s orbit and the position(s) of the antipode(s). On this occasion I’ve marked the true lunar antipode as “Antipode”, and the antipode of the moon’s position in 3 days’ time as “Antipode+3”.

Apollo 11 TLI
Click to enlarge
Prepared using Celestia

See how Apollo 11 accelerated continuously through the lunar orbital plane, clipping neatly past the three-day antipode. The velocity change in those six minutes took the spacecraft from 7.8 kilometres per second (the orbital speed of the parking orbit) to the 10.8 kilometres per second necessary for the planned translunar trajectory.
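Those two speeds can be checked against the vis-viva equation, v² = GM(2/r − 1/a). Here’s a rough sketch, assuming a 190-kilometre parking orbit and an apogee right at the moon’s distance; the real TLI cutoff came at a somewhat higher altitude and aimed well beyond the moon, so the figures differ a little:

```python
import math

GM = 398_600.0          # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0       # mean Earth radius, km
MOON_DIST = 384_400.0   # mean Earth-moon distance, km

r = R_EARTH + 190.0     # parking-orbit radius, km

# Circular parking-orbit speed:
v_circular = math.sqrt(GM / r)
print(f"parking orbit: {v_circular:.1f} km/s")   # ~7.8

# Perigee speed of a transfer ellipse whose apogee just reaches the moon:
a = (r + MOON_DIST) / 2.0                        # semi-major axis
v_tli = math.sqrt(GM * (2.0 / r - 1.0 / a))
print(f"after TLI: {v_tli:.1f} km/s")            # ~10.9
```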

I promised I’d come back to the reason NASA used extra energy to propel the spacecraft into an orbit that would take it well past the moon, if it were not captured by the moon’s gravity. In part, it was because this speeded the journey—Apollo took three days to reach its destination, rather than five. But the main reason was to put Apollo on to a free-return trajectory. It shaved past the eastern limb of the moon and then (held by the moon’s gravity) looped around behind it. If it had not fired its engine to slow down into lunar orbit at that point, it would have reemerged from behind the western limb of the moon and come straight back to Earth. So there was a safety feature built in, in case the astronauts encountered a problem with the main engine of their spacecraft—any other arrival speed would have resulted in a free-return orbit that missed the Earth.

Another safety feature of the Apollo orbits was their inclination of around 30º to the equator, which was maintained as the spacecraft entered its transfer orbit. This meant that the spacecraft avoided most of the dangerous radiation trapped in Earth’s Van Allen Belts.

The Van Allen belts are trapped in the Earth’s magnetic field, which is tilted at about 10º relative to Earth’s rotation axis—and the tilt is almost directly towards Cape Kennedy, with the north geomagnetic pole sitting just east of Ellesmere Island in the Canadian Arctic.

Location of Cape Kennedy relative to VAB
Source (modified)

This means that a spacecraft launched from Cape Kennedy, with an orbital inclination of 30º to Earth’s equator, has an inclination of about 40º to the geomagnetic equator. A departure orbit with that inclination rises up and over the Van Allen belts, passing through their fringes rather than through the middle. Of course, since the Earth rotates while the spacecraft’s orbital plane remains more or less fixed in space, it needs to depart within a few hours, otherwise it will lose the advantageous tilt of the radiation belts—but Apollo already had good reason to get going so as not to waste precious consumables.

To finish, here are a couple of diagrams I’ve prepared with Celestia, using an add-on created by user Cham. The add-on shows the Earth’s magnetic field lines, and the calculated trajectory of a few charged particles trapped in the radiation belt. I’ve used a subset of Cham’s particle tracks, so I can show the position of the inner Van Allen Belt clearly—it’s the one that contains the high-energy protons which were of most danger to the astronauts.

Here’s Apollo 11’s departure orbit (red line) seen from above the Pacific; the plane of the moon’s orbit is also shown, in cyan. The plot is for the time of translunar injection.

Apollo 11's departure orbit relative to Van Allen Belts (1)
Click to enlarge
Prepared using Celestia

And here’s a side view.

Apollo 11's departure orbit relative to Van Allen Belts (2)
Click to enlarge
Prepared using Celestia

(You can ignore the lower part of the orbit, which is only there to show the full elliptical shape—Apollo 11 followed the upper, northern trajectory, starting from the vicinity of the equator.)

So that’s how Apollo got to the moon.

The Strange Shadows Of Apollo

AS11-40-5872
Click to enlarge
NASA AS11-40-5872

In a previous post, I explained how all the manned moon landings were made with the sun low in the sky behind the Lunar Module, so that long shadows accentuated terrain features, making it easier to locate a safe place to land. But this meant that the LM landed facing into its own shadow, so that the astronauts descended the ladder to the surface in the shade of their own vehicle. It seems as if they should have been fumbling around in the dark, to a large extent, because there is no air on the moon to scatter light into shadowed areas. But, as you can see from the Apollo 11 photograph above, although the shadows on the ground appear very dark, the shadowed face of the LM is quite well illuminated. That light is being reflected from the lunar surface, but it’s being reflected in a peculiar and interesting way, which is what I want to talk about here.

Take a look at this photograph of the Boon Companion’s shadow, projected on to an area of grassy parkland:

Heiligenschein on dry grass
Click to enlarge
© 2019 The Boon Companion

The area around her head appears strangely bright compared to the rest of the view. In fact, that patch of brightness is centred on the antisolar point of her camera—it’s directly opposite the sun.

What’s happening is called shadow hiding. The parkland is full of shadows cast by the blades of grass, so in most directions we can see a mixture of sunlight and shade. But when we look directly down-sun, we see only the illuminated surfaces of the individual blades of grass—they hide their own shadows from our view. So the region around the antisolar point appears bright, compared to the rest of the field where shadows are visible.

A high vantage point above a field of vegetation is a good way to see this effect. A person looking down into the field will see the bright patch concentrated around the shadow of their head, and that gives the phenomenon another name—it’s called heiligenschein, German for “holy light”, because the bright patch resembles the depiction of a halo around a saint’s head. In particular, the shadow-hiding version of the effect is called dry heiligenschein—there’s also a “wet” version that occurs when drops of water (for instance, beads of dew) act as retro-reflectors of the kind I discussed in my post about signalling mirrors.

So what’s the relevance to the moon? The moon is largely covered in a layer of compacted rocks and dust called regolith, the product of billions of years of meteor impacts. This surface has never been weathered by the action of air and water, and so is jagged, on the small scale, beyond anything we commonly encounter on Earth. So it produces exactly the same sort of shadow-hiding heiligenschein as a field of grass on Earth.

We knew about this long before we went to the moon, for two reasons. The first is that the full moon is so very much brighter than the half-phase moon—ten times brighter, rather than just the factor of two you might expect. The second is that the full moon looks like a uniformly illuminated flat circle, rather than a sphere—this is because the edges of the full moon appear just as bright as the centre of its disc.

Full moon
Photo via Good Free Photos

We’re used to surfaces that are inclined to our line of sight (like those around the edge of the moon) reflecting less light than transverse surfaces (like the middle of the lunar disc), but the moon’s surface doesn’t seem to obey that rule. Both of these effects are explicable in terms of dry heiligenschein—the whole full moon is behaving like that bright patch of shadow-hiding grassland.

The way in which the moon’s surface brightens dramatically when it is opposite the sun in the sky is called the opposition surge. It’s also sometimes called the Seeliger effect, after Hugo von Seeliger, an astronomer who used a similar opposition surge in the brightness of Saturn’s rings to deduce that the rings consisted of multiple self-shadowing particles.

One thing we didn’t know, until the Apollo missions reached the moon, was how bright the exact antisolar point on the moon’s surface would look. Here on Earth, we can never see the brightly illuminated full moon exactly opposite the sun, because in that position it is eclipsed by the Earth’s shadow. But when Apollo 8 went into orbit around the moon, the astronauts were able to look down on the illuminated moon’s surface with the sun precisely behind them. A publication in the Astrophysical Journal soon followed*, showing a distinctive sharp peak in reflectance close to the antisolar point.

Pohn et al. Astrophysical Journal (1969) 157: L195

So when the Apollo 11 astronauts landed on the moon, they were already expecting to see bright dry heiligenschein on the regolith around them—in fact, some time was set aside in their busy schedule for them to record their observations of this “zero phase angle” effect. Here’s a view of the checklist attached to Aldrin’s spacesuit glove, reminding him to check and describe the reflectance of the lunar surface “UP/DOWN/CROSS SUN” during his time on the lunar surface:

Detail from S69-38937 (Aldrin's gloves)
Click to enlarge
NASA: Detail from S69-38937

And the effect was immediately obvious. Here’s a view of the lunar surface and the shadow of the Lunar Module, taken from the right-hand window of the LM shortly after landing:

AS11-37-5454
Click to enlarge
NASA AS11-37-5454

That bright reflection coming from around the shadow zone is bouncing straight back, like a spotlight, to illuminate the shadowed face of the LM. And as the astronauts moved around the surface, they continually observed a patch of “holy light” around the shadow of their helmets. We can actually see what this heiligenschein halo looked like, if we zoom in on Armstrong’s famous portrait of Aldrin:

AS11-40-5903
Click to enlarge
NASA AS11-40-5903
Detail from AS11-40-5903
Click to enlarge

Examining the reflection in Aldrin’s helmet visor, we can see (among other interesting things) his long shadow stretching off into the distance, and the patch of heiligenschein he was able to observe around his own head. Here’s what he reported at Mission Elapsed Time 110 hours, 28 minutes:

As I look around the area, the contrast, in general, is … comes about completely by virtue of the shadows. Almost [garbled] looking down-Sun at zero-phase very light-colored gray, light gray color [garbled] a halo around my own shadow, around the shadow of my helmet.
Then, as I look off cross-Sun, the contrast becomes strongest in that the surrounding color is still fairly light. As you look down into the Sun [garbled] a larger amount of [garbled] shadowed area is looking toward us. The general color of the [garbled] surrounding [garbled] darker than cross-Sun. The contrast is not as great.

(Aldrin, you’ll notice, had recurrent problems with his comms during lunar surface activities.)

But the moon was really just our first extraterrestrial encounter with this effect. We now know that heiligenschein is common on the airless, meteor-battered worlds of the solar system, most of which are surfaced with regolith like the moon’s.

Here, for instance, is a photograph the Japanese Hayabusa2 spacecraft took of its own shadow during its recent close encounter with the asteroid Ryugu:

JAXA Hayabusa2 shadow on Ryugu
JAXA

It’s an absolutely perfect illustration of the sort of self-shadowing surface that produces heiligenschein.


* Pohn, HA, Radin, HW, Wildey, RL. The Moon’s photometric function near zero phase angle from Apollo 8 photography. Astrophysical Journal (1969) 157: L193-L195

If the image looks a little unfamiliar, that’s because it’s the rather poorly composed picture direct from Armstrong’s Hasselblad camera. The story of how that original image AS11-40-5903 was processed to produce the more familiar (and now thoroughly iconic) Aldrin portrait is told here.

M*A*S*H And The Moon Landings

Still from M*A*S*H

I’ve got into the habit of checking what the Internet Movie Database has to say about films after I’ve watched them. After rewatching Robert Altman’s 1970 classic M*A*S*H, I happened on something odd in the film’s “Trivia” section at IMDb:

The loudspeaker shots and announcements were added after editing had begun, and the filmmakers realized that they needed more transitions. Some of the loudspeaker shots have the Moon visible and were shot while the Apollo 11 astronauts were on the Moon.

Well, that’s not right. Like me, many people of a certain age have a pretty vivid recollection of what the moon looked like during the Apollo 11 landing, and it didn’t look as it appears in the film’s nocturnal loudspeaker shot, at the head of this post. Here’s a close-up:

Moon phase from M*A*S*H

That’s a waxing gibbous moon, a day or two past its First Quarter.

The Apollo 11 Lunar Module touched down at 20:17:39 GMT on 20 July 1969. It took off less than a day later, at 17:54 on 21 July. Here’s what the moon looked like at landing and take-off (I’ve marked the Apollo 11 landing site, for reasons I’ll come back to):

Moon phase during Apollo 11 landing
Click to enlarge
Prepared using Celestia
Moon phase during Apollo 11 LM takeoff
Click to enlarge
Prepared using Celestia

The moon was a fattish crescent throughout the first moon landing. If the image of the moon at the head of the post was taken during July 1969, it was probably on the night of the 24-25 July, by which time the astronauts were safely back on Earth.

So where does the story come from? I think it’s from Enlisted: The Story Of M*A*S*H (2002). In that documentary Robert Altman describes how, during the editing process, he realized that he needed more transitional shots to insert into what was essentially a very episodic story. He came up with the idea of the now-iconic public address announcements by the hapless Sergeant-Major Vollmer. The film’s editor Danford Greene then goes on to explain:

I thought that we needed more speakers—more inserts of speakers. So Bob [Altman] said, “Fine, go shoot them.” But one wonderful thing that I don’t think anyone knows about is that our astronauts were on the moon. They had just hit the moon, like the day before, and I’ve got a couple of those in the shots of the speakers with our astronauts on the moon in the background.

So Greene doesn’t actually specify Apollo 11. But given the film’s release date in 1970, he can only be referring to Apollo 11 or 12, since the remaining moon landings occurred in 1971-2.

And the Apollo 12 landing is a much better match for the lunar phase in the film:

Moon phase during Apollo 12 landing
Click to enlarge
Prepared using Celestia

So at first I thought I’d solved the puzzle. But Apollo 12 landed on the moon on 19 November 1969, and the movie was first released in the USA on 25 January 1970. That seems like a pretty tight schedule. Another M*A*S*H documentary confirmed my suspicion—the AMC TV series Backstory discussed the making of M*A*S*H in an episode broadcast in 2000, and stated that filming ended in June 1969, and the edited movie was shown to a (rapturous) test audience in September. That puts Apollo 11 precisely in the frame, and excludes Apollo 12. So either there were other loudspeaker shots, taken during the night of 20-21 July, which didn’t make it into the final version of the movie, or Danford Greene just misremembered the exact date—he had other things on his mind at the time, I’m sure.

But notice how both Apollo 11 and Apollo 12 landed close to the edge of the illuminated part of the moon, in a region where the sun had only recently risen. That’s no coincidence—here’s Apollo 14:

Moon phase during Apollo 14 landing
Click to enlarge
Prepared using Celestia

Maybe you’ll take my word for it that the other three landings took place under similar circumstances.

The Apollo spacecraft orbited the moon in a clockwise direction when viewed from the north, so crossed the moon’s face from right to left in the views I’ve presented, which have north at the top. So the Lunar Module descended towards the landing site in the same direction, from daylight towards darkness. The timing of the landing was chosen specifically to be a couple of days after sunrise at the landing site, so the astronauts in the LM descended with the sun at their backs, avoiding glare, while long shadows accentuated the shape of the terrain ahead, making it easier to pick out a level landing area.

What was useful on the descent had the potential to be a hazard on the ground, because the Lunar Module landed facing down-sun, into its own long shadow—and so the astronauts descended to the lunar surface in the shadow of the LM. With a black sky above shedding no scattered light into the shadow zone, that seems like it should have been a recipe for a fall and a broken ankle (at best).

But they benefited from a rather remarkable optical effect produced by lunar dust—and I’ll write about that in another post soon.