Category Archives: Phenomena

The Coordinate Axes Of Apollo-Saturn: Part 2

In my previous post on this topic, I described how flight engineers working on the Apollo programme assigned XYZ coordinate axes to the Saturn V launch vehicle and to the two Apollo spacecraft, the Command/Service Module (CSM) and the Lunar Module (LM). This time, I’m going to talk about how these axes came into play when the launch vehicle and spacecraft were in motion. At various times during an Apollo mission, they would need to orientate themselves with an axis pointing in a specific direction, or rotate around an axis so as to point in a new direction. These axial rotations were designated roll, pitch and yaw, and the names were assigned in a way that would be familiar to the astronauts from their pilot training. To pitch an aircraft, you move the nose up or down; to yaw, you move the nose to the left or right; and to roll, you rotate around the long axis of the vehicle.

These concepts translated most easily to the axes of the CSM (note the windows on the upper right surface of the conical Command Module, which indicate the orientation of the astronauts while “flying” the spacecraft):

XYZ axes of CSM
Apollo 15, AS15-88-11961

With the astronauts’ feet pointing in the +Z direction as they looked out of the windows on the -Z side of the spacecraft, they could pitch the craft by rotating it around the Y axis, yaw around the Z axis, and roll around the X axis.

The rotation axes of the LM were similarly defined by the position of the astronauts:

XYZ axes of LM
Apollo 9, AS09-21-3212

Looking out of the windows in the +Z direction, with their heads pointing towards +X, they yawed the LM around the X axis, pitched it around the Y axis, and rolled it around the Z axis.

For the Saturn V, the roll axis was obviously along the length of the vehicle, its X axis.

But how do you decide which is pitch and which is yaw, in a vehicle that is superficially rotationally symmetrical? It turns out that the Saturn V was designed with a side that was intended to point down—its downrange side, marked by the +Z axis, which pointed due east when the vehicle was on the launch pad. This is the direction in which the space vehicle would travel after launch, in order to push the Apollo spacecraft into orbit—and to do that it needed to tilt over as it ascended, until its engines were pointing west and accelerating it eastwards. So the +Z side gradually became the down side of the vehicle, and various telemetry antennae were positioned on that side so that they could communicate with the ground. You’ll therefore sometimes see this side referred to as the “belly” of the space vehicle. And with +Z marking the belly, we can now tell that the vehicle will pitch around the Y axis, and yaw around the Z axis.

XYZ axes of Saturn V launch vehicle
Apollo 8, S68-55416

If you have read my previous post on this topic, you’ll know that the astronauts lay on their couches on the launch pad with their heads pointing east.* So as the space vehicle “pitched over” around its Y axis, turning its belly towards the ground, the astronauts ended up with their heads pointing downwards, all the way to orbit. This was done deliberately, so that they could have a view of the horizon during this crucial period.

But the first thing the Saturn V did, within a second of starting to rise from the launch pad, was yaw. It pivoted through a degree or so around its Z axis, tilting southwards and away from the Launch Umbilical Tower on its north side. Here you can see the Apollo 13 space vehicle in the middle of its yaw manoeuvre:

Apollo 13 yaw manoeuvre
Apollo 13, KSC-70PC-107

This was carried out so as to nudge the vehicle clear of any umbilical arms on the tower that had failed to retract.

Then, once clear of the tower, the vehicle rolled, turning on its vertical X axis. This manoeuvre was carried out because, although the belly of the Saturn V pointed east, the launch azimuth could actually be anything from 72º to 108º, depending on the timing of the launch within the launch window. (See my post on How Apollo Got To The Moon for more about that.) Here’s an aerial view of the two pads at Launch Complex 39, from which the Apollo missions departed, showing the relevant directions:

Launch Complex 39
Based on NASA image by Robert Simmon, using Advanced Land Imager data distributed by the USGS Global Visualization Viewer.

An Apollo launch which departed at the start of the launch window would be directed along an azimuth close to 72º, and so needed to roll anticlockwise (seen from above) through 18º to bring its +Z axis into alignment with the correct azimuth, before starting to pitch over and accelerate out over the Atlantic.
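Expressed as arithmetic, the roll is just the difference between the pad orientation (belly pointing due east, azimuth 90º) and the required launch azimuth. Here's a trivial Python illustration, using my own sign convention; it has nothing to do with the real guidance software:

```python
def roll_angle(launch_azimuth_deg, pad_azimuth_deg=90.0):
    """Roll needed to swing the Saturn V's +Z ("belly") axis from its
    pad orientation (due east) to the launch azimuth.
    Negative values are anticlockwise seen from above (compass azimuth
    decreasing); positive values are clockwise."""
    return launch_azimuth_deg - pad_azimuth_deg

print(roll_angle(72.0))   # -18.0 (start of the launch window)
print(roll_angle(108.0))  # 18.0 (end of the launch window)
```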

Once in orbit, the S-IVB stage continued to orientate with its belly towards the Earth, so that the astronauts could see the Earth and horizon from their capsule windows. This orientation was maintained right through to Trans-Lunar Injection (TLI), which sent the spacecraft on their way to the moon.

During the two hours after TLI, the CSM performed a complicated Transposition, Docking and Extraction manoeuvre, in which it turned around, docked nose to “roof” with the LM, and pulled the LM away from the S-IVB.

Transposition, Docking & Extraction
Apollo 11 Press Kit

This meant that the X axes of CSM and LM were now aligned but opposed—their +X axes pointing towards each other. But they were also oddly rotated relative to each other. Here’s a picture from Apollo 9, taken by Rusty Schweickart, who was outside the LM hatch looking towards the CSM, where David Scott was standing up in the open Command Module hatch.

Principal axes of docked CSM & LM
Apollo 9, AS09-20-3064

The Z axes of the two spacecraft are not aligned, nor are they at right angles to each other. In fact, the angle between the CSM’s -Z axis and the LM’s +Z axis is 60º. This odd relative rotation meant that, during docking, the Command Module Pilot, sitting in the left-hand seat of the Command Module and looking out of the left-hand docking window, had a direct line of sight to the docking target on the LM’s “roof”, directly to the left of the LM’s docking port.

Once the spacecraft were safely docked, roll thrusters on the CSM were fired to make them start rotating around their shared X axis. This was called the “barbecue roll” (formally, Passive Thermal Control), because it distributed solar heating evenly by preventing the sun shining continuously on one side of the spacecraft.

Once in lunar orbit, the LM separated from the CSM and began its powered descent to the lunar surface. This was essentially the reverse of the process by which the Saturn V pushed the Apollo stack into Earth orbit. Initially, the LM had to fire its descent engine in the direction in which it was orbiting, so as to cancel its orbital velocity and begin its descent. So its -X axis had to be pointed ahead and horizontally. During this phase the Apollo 11 astronauts chose to point their +Z axis towards the lunar surface, so that they could observe landmarks through their windows—they were flying feet-first and face-down. Later in the descent, as its forward velocity decreased, the LM needed to rotate to assume an ever more upright position (-X axis down) until it came to a hover and descended vertically to the lunar surface. So later in the powered descent, Armstrong and Aldrin had to roll the LM around its X axis into a “windows up” position, facing the sky. Then, as the LM gradually pitched into the vertical position, with its -X axis down, the +Z axis rotated to face forward, giving the astronauts the necessary view ahead towards their landing zone.

Lunar Module powered descent
The LM pitches towards the vertical as it descends (NASA TM X-58040)

Finally, at the end of the mission, the XYZ axes turn out to be important for the re-entry of the Command Module (CM) into the Earth’s atmosphere. The CM hit the atmosphere blunt-end first, descending at an angle of about 6º to the horizontal. But it was also tilted slightly relative to the local airflow, with the +Z edge of its basal heat-shield a little ahead of the -Z edge. This tilt occurred because the centre of mass of the CM was deliberately offset very slightly in the +Z direction, so that the airflow pushed the CM into a slightly tilted position. This tilt, in turn, generated a bit of lift in the +Z direction—which made the Command Module steerable. It entered the atmosphere with its +Z axis pointing upwards (and the astronauts head-down, again, with a view of the horizon through their windows). The upward-directed lift prevented the CM diving into thicker atmosphere too early, and reduced the rate of heating from atmospheric compression.

Apollo Command Module generating lift

Later in re-entry, the astronauts could use their roll thrusters to rotate the spacecraft around its X axis, using lift to steer the spacecraft right or left, or even rolling it through 180º so as to direct lift downwards, steepening their descent if they were in danger of overshooting their landing zone.


* As described in my previous post on this topic, the coordinate axes of the CSM were rotated 180º relative to those of the Saturn V—the astronauts’ heads pointed in the -Z direction of the CSM, but the +Z direction of the Saturn V.

I’m missing out a couple of steps here, in an effort to be succinct. (I know, I know … that’s not like me. Take a look at NASA Technical Memorandum X-58040 if you want to know all the details.)

The Coordinate Axes Of Apollo-Saturn: Part 1

As a matter arising from my long, slow build of a Saturn V model, I became absorbed in the confusing multiplicity of coordinate systems and axes applied to the Apollo launch vehicle and spacecraft. So I thought I’d provide a guide to what I’ve learned, before I forget it all again. (Note, I won’t be talking about all the other coordinate systems used by Apollo, relating to orbital planes, the Earth and the Moon—just the ones connected to the machinery itself. And I’m going to talk only about the Saturn V launch vehicle, though much of what I write can be transferred to the Saturn IB, which launched several uncrewed Apollo missions, as well as Apollo 7.)

First up, some terminology. The Saturn V that sent Apollo on its way to the Moon is called the launch vehicle, consisting of three booster stages, with an Instrument Unit on top, responsible for controlling what the rest of the launch vehicle does. Sitting on top of the launch vehicle, mated to the Instrument Unit, is the spacecraft—all the specifically Apollo-related hardware that the launch vehicle launches. This bit is sometimes also called the Apollo stack, since it will eventually split up into two independent spacecraft—the Lunar Module (LM) and the Command/Service Module (CSM). The combination of launch vehicle and spacecraft (that is, the whole caboodle as it sat on the launch pad) is called the space vehicle.

Components of Apollo-Saturn
From NASA Technical Note D-5399

The easiest coordinate markings to see and understand were the position numbers and fin letters, labelled in large characters on the base of the Saturn V’s first stage, the S-IC. You can see them here, in my own model of the S-IC:

Position and fin labels, Saturn V

In this view you can see fins labelled C and D, and the marker for Position IIII, midway between them.

The numbering and lettering ran anticlockwise around the launch vehicle when looking down from above, creating an eight-point coordinate system of lettered quadrants (A to D) with numbered positions (I to IIII) between them, which applied to the whole launch vehicle. They marked out the distribution of black and white stripes—each stripe occupied the span between a letter and a number, with white stripes always to the left of the position numbers, and black stripes to the right. The five engines of the S-IC and S-II stages were each numbered according to the lettered quadrant in which they lay, with Engine 5 in the centre, Engine 1 in the A quadrant, Engine 2 in the B quadrant, and so on. The curious chequer pattern of the S-IVB aft interstage (the “shoulder” where the launch vehicle narrows down between the second and third stages) is distributed in the lettered quadrants, with A all black, B black high and white low, C white high and black low, and D all white.*

S-IVB Aft Interstage axes and paint

Umbilicals & Hatches, Saturn V Pos. II
Umbilical connections (red) and personnel hatches (blue), Apollo-Saturn Pos. II

Position II of the launch vehicle was the side facing the Launch Umbilical Tower (LUT), so that side of the Saturn V was dotted with umbilical connections and personnel access hatches, as well as a prominent vertical dashed line painted on the second stage, called the vertical motion target, which made it easy for cameras to detect the first upward movement as the space vehicle left the launch pad. You don’t often get a clear view of the real thing from the Position II side, so I’ve marked up the appropriate view of my model instead, at left.

The two Cape Kennedy launch pads used for Apollo (39A and 39B) were oriented on a north-south axis, with the LUT positioned on the north side of the Saturn V, so Position II faced north. Position IIII, on the opposite side, faced south, looking back down the crawler-way along which the Saturn V had been transported on its Mobile Launcher Platform. Position IIII was also the side that faced the Mobile Service Structure, which was rolled up to service the Saturn V in its launch position, and then rolled away again before launch. And so Position I faced east, which was the direction in which the space vehicle had to travel in order to push the Apollo stack into orbit.

These letters and numbers seem to have been largely a reference for the contractors and engineers responsible for assembling and mating the different launch vehicle stages. Superimposed on them were the reference axes the flight engineers used to describe the orientation and movements of the launch vehicle and the two Apollo spacecraft. These axes were labelled X, Y and Z.

For the launch vehicle, LM and CSM, the positive X axis was defined as pointing in the direction of thrust of the rocket engines. So the end with the engines was always -X, and the other end was +X. The +Z direction was defined as “the preferred down range direction for each vehicle, when operating independently”. For the launch vehicle, that’s straightforward—downrange is to the east as it sits on the pad (the direction in which it will travel after launch), so +Z corresponds to Position I, and -Z to Position III. The Y axis was always chosen to make a “right-handed” coordinate system, so +Y points south through Position IIII.

In the image below, we’re looking north. Once the Saturn V has launched it will tip over and head eastwards (to the right) to inject the Apollo stack into orbit.

XYZ axes of Saturn V launch vehicle
Apollo 8, S68-55416

These axes were actually labelled on the outside of the Instrument Unit (IU), at the very top of the launch vehicle. Here’s one in preparation, with the +Z label flanked by the casings of two chunky directional antennae—a useful landmark I’ll come back to later.

Saturn V instrument unit
Source

So here’s a summary of all the axes of the Saturn V:

Saturn V principal axes

Moving on to the Lunar Module, its downrange direction is the direction in which it travels during landing, when it is orientated with its two main windows facing forward—so +Z points in that direction, out the front. The right-hand coordinate system then puts +Y to the astronauts’ right as they stand looking out the windows.

XYZ axes of LM
Apollo 9, AS09-21-3212

The landing legs were designated according to their coordinate axis locations. In the descent stage, between the legs, were storage areas called quads—they were numbered from 1 to 4 anticlockwise (looking down), starting with Quad 1 between the +Z and -Y legs. The ascent stage, sitting on top of the descent stage, had four clusters of Reaction Control System (RCS) thrusters, which were situated between the principal axes and numbered with the same scheme as the descent-stage quads.

Lunar Module principal axes

But it’s not clear that there is a natural downrange direction for the CSM—the +Z direction is defined (fairly randomly, I think) as pointing towards the astronauts’ feet, with -Z therefore corresponding to the position of the Command Module hatch. That places +Y to the astronauts’ right side as they lie in their couches.

XYZ axes of CSM
Apollo 15, AS15-88-11961

The Command Module was fairly symmetrical around its Z axis, and its RCS thrusters were neatly placed on the Z and Y axes. Not so the Service Module, which was curiously skewed. Its RCS thrusters, arranged in groups of four called quads, were offset from the principal axes by 7º15′ in a clockwise direction when viewed from ahead (that is, looking towards the pointed end of the CSM). The RCS quad next to the -Z axis was designated Quad A; Quad B was near the +Y axis, and the lettering continued in an anticlockwise direction through C and D. I’ve yet to find out why the RCS system was offset in this way, since it would necessarily produce translations and rotations that were offset from the “natural” orientation of the crew compartment, and from the translations and rotations produced by the RCS system of the Command Module.

The Service Module also contained six internal compartments, called sectors, numbered from 1 to 6. These were symmetrically placed relative to the RCS system, rather than the spacecraft’s principal axes. Finally, the prominent external umbilical tunnel connecting the Service Module to the Command Module wasn’t quite on the +Z axis, but offset by 2º20′ in the same sense as the RCS offset.

Command/Service Module principal axes

So those are the axes for the launch vehicle and spacecraft. But how did they line up when the Saturn V and Apollo stack were assembled? Badly, as it turns out.

First, the good news—all the X axes align, because the spacecraft and launch vehicle are all positioned engines-down for launch, for structural support reasons, if nothing else.

With regard to Y and Z, it’s easy to see the CSM’s orientation on the launch pad. Here’s a view from the Launch Umbilical Tower, which we’ve established (see above) is on the -Y side of the launch vehicle. The tunnel allowing access to the crew hatch of the Command Module (-Z) is on the left, and the umbilical tunnel connecting the Service Module to the Command Module is on the right (+Z), so the CSM +Y axis is pointing towards us.

YZ axes of CSM on launch pad
Apollo 11, 69-HC-718

Oops. The CSM YZ axes are rotated 180º relative to those of the Saturn V launch vehicle.

It’s more difficult to find out the orientation of the Lunar Module within the Apollo stack, since it’s concealed inside the shroud of the Spacecraft/Lunar Module Adapter. Various diagrams depict it as facing in any number of directions relative to the CSM, but David Weeks’s authoritative drawings show it turned so that its +Z and +Y axes align with those of the CSM—facing to the right in the picture above, then, with its YZ axes rotated 180º relative to those of the Saturn V launch vehicle below. We can check that this is actually the case by looking at photographs of the LM when it’s exposed on top of the S-IVB and Instrument Unit, during the transposition and docking manoeuvre. The viewing angles are never very favourable, but the big pair of directional antennae flanking the +Z direction on the IU are useful landmarks (see above).

XY axes of LM and IU
Apollo 9, AS09-19-2925

We can see that the front of the Lunar Module (+Z) is indeed pointing in the opposite direction to the directional antennae marking the +Z axis of the IU and the rest of the launch vehicle. Weeks’s drawings are correct.

So, sitting on the launch pad, the axes of the launch vehicle are pointing in the opposite direction to those of the spacecraft. NASA rationalized this situation by stating that:

A Structural Body Axes coordinate system can be defined for each multi-vehicle stack. The Standard Relationship defining this coordinate system requires that it be identical with the Structural Body Axes system of the primary or thrusting vehicle.

NASA, Project Apollo Coordinate System Standards (June 1965)

So the whole space vehicle used the coordinate system of the Saturn V launch vehicle, and the independent coordinates of the LM and CSM didn’t apply until they were manoeuvring under their own power.

So, beware—there’s real potential for confusion here, when modelling the Apollo-Saturn space vehicle, because different sources use different coordinates; and many diagrams, even those prepared by NASA, do not reflect the final reality.

In Part 2, I write about what happens to all those XYZ axes once the vehicles start moving around.


* I suspect I’m not the first person to notice that the S-IVB aft interstage chequer can be interpreted as sequential two-digit binary numbers, with black signifying zero and white representing one. Reading the least significant digit in the “low” positions, we have 00 in the A quadrant, 01 in the B quadrant, 10 in C and 11 in D—corresponding to 0, 1, 2, 3 in decimal. (I doubt if it actually means anything, but it’s a useful aide-memoire. Well, if you have a particular kind of memory, I suppose.)
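For the terminally curious, the binary reading is easy to check with a few lines of Python. This is just my own encoding of the paint scheme described above, with black as 0, white as 1, and the high patch as the more significant digit:

```python
# Paint scheme of the S-IVB aft interstage, by lettered quadrant:
# (high patch, low patch)
pattern = {"A": ("black", "black"),
           "B": ("black", "white"),
           "C": ("white", "black"),
           "D": ("white", "white")}

bit = {"black": 0, "white": 1}

def quadrant_value(quadrant):
    """Read a quadrant's two patches as a two-digit binary number."""
    high, low = pattern[quadrant]
    return 2 * bit[high] + bit[low]

print([quadrant_value(q) for q in "ABCD"])  # [0, 1, 2, 3]
```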
See the Apollo Systems Engineering Manual: Service Module And Adapter Structure.

Relativistic Ringworlds

Cover of Xeelee Redemption by Stephen Baxter

No matter how many times he considered it, Jophiel shivered with awe. It was obviously an artefact, a made thing two light years in diameter. A ring around a supermassive black hole.

Stephen Baxter, Xeelee: Redemption (2018)

I’ve written about rotating space habitats in the past, and I’ve written about relativistic starships, so I guess it was almost inevitable I’d end up writing about the effect of relativity on space habitats that rotate really, really rapidly.

What inspired this post was my recent reading of Stephen Baxter’s novel Xeelee: Redemption. I’ve written about Baxter before—he specializes in huge vistas of space and time, exotic physics, and giant mysterious alien artefacts. This novel is part of his increasingly complicated Xeelee sequence, which I won’t even attempt to summarize for you. What intrigued me on this occasion was Baxter’s invocation of a relativistic ringworld, briefly described in the quotation above.

Ringworlds are science fiction’s big rotating space habitats, originally proposed by Larry Niven in his novel Ringworld (1970). Instead of spinning a structure a few tens of metres in diameter to produce centrifugal gravity, like the space station in the film 2001: A Space Odyssey, Niven imagined one that circled a star, with a radius comparable to Earth’s distance from the sun. Spin one of those so that it rotates once every nine days or so, and you have Earthlike centrifugal gravity on its inner, sun-facing surface.

If we stipulate that we want one Earth gravity (henceforth, 1g), then there are simple scaling laws to these things—the bigger they are, the longer it takes for them to rotate, but the faster the structure moves. The 11-metre diameter centrifuge in 2001: A Space Odyssey would have needed to rotate 13 times a minute, with a rim speed of 7 m/s, to generate 1g.

Estimates vary for the “real” size of the space station in the same movie, but if we take the diameter of “300 yards” from Arthur C. Clarke’s novel, it would need to rotate once every 23.5 seconds, with a rim speed of 37 m/s.

Space Station V from 2001 A Space Odyssey


Niven’s Ringworld takes nine days to revolve, but has a rim speed of over 1,000 kilometres per second.

You get the picture. For any given level of centrifugal gravity, the rotation period and the rotation speed both vary with the square root of the radius.
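That scaling is easy to check with a little Python. The radii below are my own round-number versions of the examples above:

```python
from math import pi, sqrt

g = 9.81        # m/s^2, one Earth gravity
AU = 1.496e11   # metres in an astronomical unit
LY = 9.4607e15  # metres in a light-year

def spin_for_1g(radius_m):
    """Rim speed and rotation period giving 1g of centrifugal gravity.
    From a = v^2/r: v = sqrt(g*r) and T = 2*pi*sqrt(r/g), so both
    scale with the square root of the radius."""
    v = sqrt(g * radius_m)
    T = 2 * pi * sqrt(radius_m / g)
    return v, T

for name, radius in [("2001 centrifuge", 5.5),
                     ("Space Station V", 137.0),
                     ("Niven's Ringworld", AU),
                     ("One-light-year ring", LY)]:
    v, T = spin_for_1g(radius)
    print(f"{name}: rim speed {v:.3g} m/s, period {T:.3g} s")
```

The first three lines reproduce the figures quoted above, and the rim speed of the one-light-year ring comes out within a couple of per cent of the speed of light.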

So what Baxter noticed is that if you make a ringworld with a radius of one light-year, and rotate it with a rim speed equal to the speed of light, it will produce a radial acceleration of 1g.* In a sense, he pushed the ringworld concept to its extreme conclusion, since nothing can move faster than light. Indeed, nothing can move at the speed of light—so Baxter’s ring is just a hair slower. By my estimate, from figures given in the novel, the lowest “deck” of his complicated ringworld is moving at 99.999999999998% of light speed (that’s thirteen nines).

And this truly fabulous velocity is to a large extent the point. Clocks moving at close to the speed of light run slow, when checked by a stationary observer. This effect becomes more extreme with increasing velocity. The usual symbol for velocity when given as a fraction of the speed of light is β (beta), and from beta we can calculate the time dilation factor γ (gamma):

\gamma =\frac{1}{\sqrt{1-\beta ^2}}

Here’s a graph of how gamma behaves with increasing beta—it hangs about very close to one for a long time, and then starts to rocket towards infinity as velocity approaches lightspeed (beta approaches one).

Relationship between relativistic beta and gamma

Plugging the mad velocity I derived above into this equation, we find that anyone inhabiting the lowest deck of Baxter’s giant alien ringworld would experience time dilation by a factor of five million—for every year spent in this extreme habitat, five million years would elapse in the outside world. This ability to “time travel into the far future” is a key plot element.
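You can reproduce that number by plugging the velocity into the formula for gamma. A quick Python check, using the thirteen-nines beta that is my own estimate from the novel's figures:

```python
from math import sqrt

def gamma(beta):
    """Lorentz factor for a velocity given as a fraction of lightspeed."""
    return 1.0 / sqrt(1.0 - beta * beta)

beta = 0.99999999999998   # thirteen nines
print(f"{gamma(beta):,.0f}")  # about five million
```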

But there’s a problem. Quite a big one, actually.

The quantity gamma has wide relevance to relativistic transformations (even though I managed to write four posts about relativistic optics without mentioning it). As I’ve already said, it appears in the context of time dilation, but it is also the conversion factor for that other well-known relativistic transformation, length contraction. Objects moving at close to the speed of light are shortened (in the direction of travel) when measured by an observer at rest. A moving metre stick, aligned with its direction of flight, will measure only 1/γ metres to a stationary observer. Baxter also incorporates this into his story, telling us that the inhabitants of his relativistic ringworld measure its circumference to be much greater than what’s apparent to an outside observer.

So far so good. But acceleration is also affected by gamma, for fairly obvious reasons. It’s measured in metres per second squared, and those metres and seconds are subject to length contraction and time dilation. An acceleration in the line of flight (for instance, a relativistic rocket boosting to even higher velocity) will take place using shorter metres and longer seconds, according to an unaccelerated observer nearby. So there is a transformation involving gamma cubed, between the moving and stationary reference frames, with the stationary observer always measuring lower acceleration than the moving observer. A rocket accelerating at a steady 1g (according to those aboard) will accelerate less and less as it approaches lightspeed, according to outside observers. The acceleration in the stationary reference frame decays steadily towards zero, the faster the rocket moves—which is why you can’t ever reach the speed of light simply by using a big rocket for a long time.

That’s not relevant to Baxter’s ringworld, which is spinning at constant speed. But the centripetal acceleration, experienced by those aboard the ringworld as “centrifugal gravity”, also undergoes a conversion between the moving and stationary reference frames. Because this acceleration is always transverse to the direction of movement of the ringworld “floor” at any given moment, it’s unaffected by length contraction, which only happens in the direction of movement. But things that occur in one second of external time will occupy less than a second of time-dilated ringworld time—and since acceleration involves the square of time, the ringworld inhabitants will experience an acceleration greater than that observed from outside, by a factor of gamma squared.

So the 1g centripetal acceleration required in order to keep something moving in a circle at close to lightspeed would be crushingly greater for anyone actually moving around that circle. In Baxter’s extreme case, with a gamma of five million, his “1g” habitat would experience 25 trillion gravities. Which is quite a lot.

To get the time-travel advantage of γ=5,000,000 without being catastrophically crushed to a monomolecular layer of goo, we need to make the relativistic ringworld a lot bigger. For a 1g internal environment, it needs to rotate to generate only one 25-trillionth of a gravity as measured by a stationary external observer. Keeping the floor velocity the same (to keep gamma the same), that means it has to be 25 trillion times bigger. Which is a radius of 25 trillion light-years, or 500 times the size of the observable Universe.
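Here's that arithmetic in a few lines of Python, using round numbers for the constants and my estimated gamma of five million:

```python
g = 9.81          # m/s^2, one Earth gravity
c = 2.998e8       # m/s, speed of light
LY = 9.4607e15    # metres in a light-year
gamma = 5.0e6     # time-dilation factor on the lowest deck

# Onboard ("proper") centripetal acceleration is gamma squared times
# the acceleration measured by an external observer.
a_onboard = gamma**2 * g
print(f"{a_onboard / g:.3g} gravities onboard")  # about 25 trillion

# For a 1g onboard environment the externally measured acceleration
# must be g/gamma^2; with the rim speed pinned at (nearly) c,
# a = v^2/r then fixes the radius.
radius = c**2 / (g / gamma**2)
print(f"{radius / LY:.3g} light-years")          # about 25 trillion
```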

Even by Baxter’s standards, that would be … ambitious.


* This neat correspondence between light-years, light speed and one Earth gravity is a remarkable coincidence, born of the fact that a year is approximately 30,000,000 seconds, light moves at approximately 300,000,000 metres per second, and the acceleration due to Earth’s gravity is about 10 metres per second squared. Divide light-speed by the length of Earth’s year, and you have Earth’s gravity; the units match. This correspondence was a significant plot element in T.J. Bass’s excellent novel Half Past Human (1971).

Baxter’s novel is full of plot homages to Niven’s original Ringworld, including a giant mountain with a surprise at the top.

As Baxter also notes, this mismatch between the radius and circumference of a rapidly rotating object generates a fruitful problem in relativity called the Ehrenfest Paradox.

How Apollo Got To The Moon

Apollo 11 launch
NASA image S69-39961

I’m posting this at 13:32 GMT on 16th July 2019—exactly fifty years after the launch of Apollo 11. It’s the last part of a loose trilogy of posts about Apollo—the first two being M*A*S*H And The Moon Landings and The Strange Shadows Of Apollo. This one’s about the rather complicated sequence of events required to get the Apollo spacecraft safely to the moon.

To get from the Earth to the moon, Apollo needed to be accelerated into a long elliptical orbit. The low point of this orbit was close to the Earth’s surface (for Apollo 11, the 190-kilometre altitude of its initial parking orbit); the high point of the ellipse had to reach out to the moon’s distance (380,000 kilometres), or even farther.

Extremely diagrammatically, it looked like this:

Diagrammatic Apollo translunar trajectory

To be maximally fuel-efficient, the acceleration necessary to convert the low, circular parking orbit into the long, elliptical transfer orbit needs to be imparted at the lowest point of the ellipse—that is, on exactly the opposite side of the Earth from the planned destination. Since the moon is moving continuously in its orbit, the translunar trajectory actually has to “lead” the moon, and aim for where it will be when the spacecraft arrives at lunar orbit, about three days after leaving Earth.
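To get a feel for the size of the burn, we can use the textbook vis-viva equation for the transfer ellipse. This is a deliberately simplified two-body sketch with round numbers, ignoring the moon's gravity and the deliberate overshoot of the real trajectory:

```python
from math import sqrt

mu = 3.986e14       # m^3/s^2, Earth's gravitational parameter
R_earth = 6.371e6   # m, mean Earth radius

r_perigee = R_earth + 190e3   # 190 km parking orbit
r_apogee = 3.8e8              # roughly the moon's distance

# Speed in the circular parking orbit
v_park = sqrt(mu / r_perigee)

# Vis-viva: v^2 = mu * (2/r - 1/a), evaluated at perigee of the ellipse
a = (r_perigee + r_apogee) / 2   # semi-major axis of the transfer orbit
v_tli = sqrt(mu * (2 / r_perigee - 1 / a))

print(f"parking orbit speed: {v_park:.0f} m/s")
print(f"speed after TLI:     {v_tli:.0f} m/s")
print(f"delta-v of the burn: {v_tli - v_park:.0f} m/s")
```

The burn comes out at a little over three kilometres per second, which is the right ballpark for the real S-IVB translunar injection burn.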

Here’s the real elliptical transfer orbit followed by Apollo 11, drawn with the moon in the position it occupied at the time of launch:

Apollo 11 orbit calculated from TLI, and Moon position at time of TLI
Prepared using Celestia

(For reasons I’ll come back to, NASA gave the Apollo spacecraft a little extra acceleration, lengthening its translunar transfer ellipse so that it would peak well beyond the moon’s orbit.)

And here’s the situation three days later, with Apollo 11 arriving at the moon’s orbit just as the moon arrives in the right place for a rendezvous:

Apollo 11 orbit calculated from TLI, and Moon position at time of Lunar Orbit Insertion
Click to enlarge
Prepared using Celestia

Because of the moon's proximity at this point, lunar gravity in fact pulled the Apollo spacecraft away from the simple ellipse I've charted, warping its trajectory to wrap around the moon—something else I'll come back to.

In the meantime, let’s go back to the fact that NASA needed to manoeuvre the Apollo spacecraft to a very exact position, on the opposite side of the Earth from the position the moon would occupy in three days’ time, and then accelerate it into the long elliptical orbit you can see in my diagrams. The process of accelerating from parking orbit to transfer orbit is called translunar injection, or TLI.

The point on the Earth’s surface opposite the moon at any given time is called the lunar antipode. (This is a horrible word, born of a misunderstanding of the word antipodes—I’ve written more about that topic in a previous post about words.) But, given that I don’t want to keep repeating the phrase “on the opposite side of the Earth from where the moon will be in three days’ time”, from now on I’ll use the word antipode with that meaning.

So TLI had to happen at this antipode, and NASA therefore needed to launch the Apollo lunar spacecraft into an Earth orbit that at some point passed through the antipode. Not only that, but they needed to do so using a minimum of fuel, and needed to get the spacecraft to the antipode reasonably quickly, so as to economize on consumables like air and food, thereby keeping the spacecraft’s launch weight as low as possible.

Now, the moon orbits the Earth in roughly the plane of the Earth’s orbit around the sun—the ecliptic plane. But the moon can stray 5.1º above or below the ecliptic. And the ecliptic is inclined at about 23.4º to the plane of the Earth’s equator. So the moon’s orbital plane can be inclined to the Earth’s equator at anything from 18.3º to 28.5º. This means the moon can never be overhead in the sky anywhere outside of a band between 28.5º north and south of the equator, and therefore its antipode is confined in the same way—always drifting around the Earth somewhere within, or just outside, the tropics.

The Cape Kennedy launch complex (now Cape Canaveral) lies at 28.6ºN. The most energy-efficient way to get a spacecraft into Earth orbit is to launch it due east, taking advantage of the Earth's rotation to boost its speed. Such a trajectory puts the spacecraft into an orbit inclined at 28.6º to the equator. So a launch from Kennedy put a spacecraft into an orbit inclined relative to the plane of the moon's orbit. The relative inclination might be a fraction of a degree, if the moon's orbital plane happened to lie favourably close to Kennedy's latitude; but generally it would be significantly larger than that, with the spacecraft's orbit passing through the plane of the moon's orbit at just two points.

As it happens, the situation at the time of the Apollo 11 mission shows all these angles between equator, ecliptic, moon’s orbit and Apollo parking orbit quite clearly, because all the tilts were roughly aligned with each other. Here’s a view from above the east Pacific at the time of Apollo 11’s launch: 13:32 GMT, 16 July 1969:

Relevant orbital planes at time of Apollo 11 launch
Click to enlarge
Prepared using Celestia

The red line is the ecliptic, the plane of Earth’s orbit around the sun. From the latitude and longitude grid I’ve laid on to the Earth, you can see how the Earth’s northern hemisphere is tilted towards the sun, enjoying northern summer. The plane of the moon’s orbit (in cyan) is carrying the moon above the ecliptic plane on the illuminated side of the Earth, so that the angle between the Apollo 11 parking orbit and the moon’s orbital plane is relatively small.

It wasn't always like that, though. Here's the situation four months later, at Apollo 12's launch: 16:22 GMT, 14 November 1969. Now the tilts of Apollo's orbit and the moon's orbit are almost opposed to each other, making the angle between the two orbits impressively large.

Inclination of Apollo 12's orbit to the Moon's
Click to enlarge
Prepared using Celestia

Whatever the crossing angle, NASA needed to launch the Apollo moon missions so that the spacecraft’s orbit took it through the moon’s orbital plane at the same moment the antipode drifted through that crossing point. And in order to economize on consumables, that needed to happen within the time it took to make two or three spacecraft orbits, each lasting an hour and a half. This requirement dictated that there was always a launch window for each lunar mission—any launch that didn’t take place within a very specific time frame had no chance of bringing the spacecraft and the antipode together to allow a successful TLI.

At first sight, it seems like the launch window should be vanishingly narrow, given that the parking orbit intersects the moon’s orbital plane at only two points, only one of which can be suitable for a TLI at any given time. In fact, by varying the direction in which the Saturn V launched, NASA was able to hit a fairly broad sector of the lunar orbital plane. Launching in any direction except due east was less energy-efficient, but with additional fuel the Apollo spacecraft could still be placed in orbit using launch directions 18º either side of due east. The technical name for the launch direction, as measured in the horizontal plane, is the launch azimuth. So Apollo could be launched on azimuths anywhere between 72º and 108º east of north.

You can see this range of orbital options drawn out in sinusoids on Apollo 11’s Earth Orbit Chart:

Apollo 11 Earth Orbit chart
Click to enlarge

Cape Kennedy is at the extreme left edge of the chart, and all the options for launch azimuths between 72º and 108º are marked. Here’s a detail from that edge:

Detail of Apollo 11 Earth Orbit chart
Click to enlarge

Notice how launches directed either north or south of east take the spacecraft to a higher latitude than Cape Kennedy’s, and therefore into a more inclined orbit—at the extremes, Apollo orbits were inclined at close to 33º.
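That 33º figure is easy to check with a little spherical trigonometry. The sketch below is my own simplification—it ignores the small correction from the Earth's rotation adding an eastward component to the spacecraft's velocity—but it reproduces the numbers in the chart well:

```python
from math import acos, cos, degrees, radians, sin

def orbit_inclination(latitude_deg, azimuth_deg):
    """Inclination of the orbit produced by launching from a given
    latitude along a given azimuth: cos(i) = cos(lat) * sin(azimuth)."""
    return degrees(acos(cos(radians(latitude_deg)) * sin(radians(azimuth_deg))))

lat = 28.6  # Cape Kennedy's latitude
for az in (72, 90, 108):
    # Due east (90º) gives an inclination equal to the launch latitude;
    # the extreme azimuths of 72º and 108º both give close to 33.4º
    print(az, round(orbit_inclination(lat, az), 1))
```

Note the symmetry: azimuths equally spaced north and south of due east produce the same inclination, which is why both edges of the launch window give those 33º orbits.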

So NASA could take aim at the antipode by adjusting the launch direction. By launching north of east, they could hit a more easterly antipode; by launching south of east, they could hit a more westerly antipode. This range of options allowed a launch window spanning about four hours. A launch early in the launch window would involve an azimuth close to 72º, as the launch vehicle was aimed at the antipode in its most extreme accessible eastern position. During the four-hour window, as the moon moved across the sky from east to west, the antipode would track across the Earth’s surface in the same direction, and the required launch azimuth would gradually increase, until the launch window closed when an azimuth of 108º was reached. NASA planned to have their launch vehicle ready to go just as the launch window opened, to give themselves maximum margin for delays. Apollo 11 launched on time, and so departed along an azimuth very close to 72º.

Here’s the Apollo 11 launch trajectory:

Apollo 11 launch trajectory
Click to enlarge
Prepared using Celestia

The huge S-IC stage (the first stage of the Saturn V) shut down and dropped away with its fuel exhausted after just 2½ minutes, falling into the western Atlantic (where one of its engines was recently retrieved from 4.3 kilometres underwater). The S-II second stage then burned for 6½ minutes before falling away in turn, dropping in a long trajectory that ended in mid-Atlantic. Meanwhile, the S-IVB third stage fired for another two minutes, shoving the Apollo spacecraft into Earth orbit before shutting down at a moment NASA calls Earth Orbit Insertion (EOI). The astronauts then had about two-and-a-half hours in orbit (completing about one-and-three-quarter revolutions around the Earth) before their scheduled rendezvous with the lunar antipode over the Pacific. This gave them time to check out the spacecraft systems and make sure everything was working properly before committing to the long translunar trajectory.

At two hours and forty-four minutes into the mission, the S-IVB engine was fired up again, and worked continuously for six minutes as Apollo 11 arced across the night-time Pacific. Here’s that trajectory with the S-IVB ignition and cutoff (TLI proper) marked, as well as the plane of the moon’s orbit and the position(s) of the antipode(s). On this occasion I’ve marked the true lunar antipode as “Antipode”, and the antipode of the moon’s position in 3 days’ time as “Antipode+3”.

Apollo 11 TLI in relation to lunar antipodes
Click to enlarge
Prepared using Celestia

See how Apollo 11 accelerated continuously through the lunar orbital plane, clipping neatly past the three-day antipode. The velocity change in those six minutes took the spacecraft from 7.8 kilometres per second (the orbital speed of the parking orbit) to the 10.8 kilometres per second necessary for the planned translunar trajectory.
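Those two speeds can be sanity-checked with the vis-viva equation. The cutoff altitude and apogee distance below are my own ballpark figures for illustration, not NASA's exact numbers:

```python
from math import sqrt

GM = 398600.4  # Earth's gravitational parameter, km^3/s^2

def vis_viva(r_km, a_km):
    """Orbital speed at radius r, for an orbit of semi-major axis a."""
    return sqrt(GM * (2.0 / r_km - 1.0 / a_km))

# Circular parking orbit at 190 km altitude (a = r for a circle)
r_park = 6371 + 190
v_park = vis_viva(r_park, r_park)

# TLI cutoff: assuming cutoff at ~330 km altitude and a transfer
# ellipse peaking ~500,000 km from the Earth's centre (my guesses)
r_tli = 6371 + 330
a_transfer = (r_tli + 500_000) / 2
v_tli = vis_viva(r_tli, a_transfer)

print(round(v_park, 1), round(v_tli, 1))  # roughly 7.8 and 10.8 km/s
```

The result is reassuringly close to the figures quoted above, and it shows how sensitive the apogee distance is to that last kilometre per second of velocity.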

I promised I'd come back to the reason NASA used extra energy to propel the spacecraft into an orbit that would take it well past the moon, if it were not captured by the moon's gravity. In part, this was because it speeded the journey—Apollo took three days to reach its destination, rather than five. But the main reason was to put Apollo on to a free-return trajectory. The spacecraft shaved past the eastern limb of the moon and then (held by the moon's gravity) looped around behind it. If it had not fired its engine at that point, to slow down into lunar orbit, it would have re-emerged from behind the western limb of the moon and come straight back to Earth. This built in a safety feature, in case the astronauts encountered a problem with the main engine of their spacecraft—any other arrival speed would have resulted in a free-return orbit that missed the Earth.

Another safety feature of the Apollo 11 orbit was its inclination of around 30º to the equator, which was maintained as the spacecraft entered its transfer orbit. This meant that it avoided most of the dangerous radiation trapped in Earth’s Van Allen Belts.

The Van Allen belts are trapped in the Earth’s magnetic field, which is tilted at about 10º relative to Earth’s rotation axis—and the tilt is almost directly towards Cape Kennedy, with the north geomagnetic pole sitting just east of Ellesmere Island in the Canadian Arctic.

Location of Cape Kennedy relative to VAB
Source (modified)

This means that a spacecraft launched from Cape Kennedy, with an orbital inclination of 30º to Earth’s equator, has an inclination of about 40º to the geomagnetic equator. A departure orbit beginning in the North Pacific with that inclination rises up and over the Van Allen belts, passing through their fringes rather than through the middle. Of course, since the Earth rotates while the spacecraft’s orbital plane remains more or less fixed in space, it needs to depart within a few hours, otherwise it will lose the advantageous tilt of the radiation belts—but Apollo already had good reason to get going so as not to waste precious consumables.

To finish, here are a couple of diagrams I've prepared with Celestia, using an add-on created by user Cham. The add-on shows the Earth's magnetic field lines, and the calculated trajectory of a few charged particles trapped in the radiation belt. I've used a subset of Cham's particle tracks, so I can show the position of the inner Van Allen Belt clearly—it's the one that contains the high-energy protons which were of most danger to the astronauts.

Here’s the plane of Apollo 11’s departure orbit (red line) seen from above the Pacific; the plane of the moon’s orbit is also shown, in cyan. The plot is for the time of translunar injection.

Apollo 11's departure orbit relative to Van Allen Belts (1)
Click to enlarge
Prepared using Celestia

And some more views, to give a full three-dimensional impression:

Apollo 11 Translunar trajectory relative to Van Allen Belts 3
Click to enlarge
Prepared using Celestia
Apollo 11 Translunar trajectory relative to Van Allen Belts 2
Click to enlarge
Prepared using Celestia
Apollo 11 Translunar trajectory relative to Van Allen Belts 1
Click to enlarge
Prepared using Celestia

So that’s how Apollo got to the moon.

The Strange Shadows Of Apollo

AS11-40-5872
Click to enlarge
NASA AS11-40-5872

In a previous post, I explained how all the manned moon landings were made with the sun low in the sky behind the Lunar Module, so that long shadows accentuated terrain features, making it easier to locate a safe place to land. But this meant that the LM landed facing into its own shadow, so that the astronauts descended the ladder to the surface in the shade of their own vehicle. It seems as if they should have been fumbling around in the dark, to a large extent, because there is no air on the moon to scatter light into shadowed areas. But, as you can see from the Apollo 11 photograph above, although the shadows on the ground appear very dark, the shadowed face of the LM is quite well illuminated. That light is being reflected from the lunar surface, but it’s being reflected in a peculiar and interesting way, which is what I want to talk about here.

Take a look at this photograph of the Boon Companion’s shadow, projected on to an area of grassy parkland:

Heiligenschein on dry grass
Click to enlarge
© 2019 The Boon Companion

The area around her head appears strangely bright compared to the rest of the view. In fact, that patch of brightness is centred on the antisolar point of her camera—it’s directly opposite the sun.

What’s happening is called shadow hiding. The parkland is full of shadows cast by the blades of grass, so in most directions we can see a mixture of sunlight and shade. But when we look directly down-sun, we see only the illuminated surfaces of the individual blades of grass—they hide their own shadows from our view. So the region around the antisolar point appears bright, compared to the rest of the field where shadows are visible.

A high vantage point above a field of vegetation is a good way to see this effect. A person looking down into the field will see the bright patch concentrated around the shadow of their head, and that gives the phenomenon another name—it’s called heiligenschein, German for “holy light”, because the bright patch resembles the depiction of a halo around a saint’s head. In particular, the shadow-hiding version of the effect is called dry heiligenschein—there’s also a “wet” version that occurs when drops of water (for instance, beads of dew) act as retro-reflectors of the kind I discussed in my post about signalling mirrors.

So what’s the relevance to the moon? The moon is largely covered in a layer of compacted rocks and dust called regolith, the product of billions of years of meteor impacts. This surface has never been weathered by the action of air and water, and so is jagged, on the small scale, beyond anything we commonly encounter on Earth. So it produces exactly the same sort of shadow-hiding heiligenschein as a field of grass on Earth.

We knew about this long before we went to the moon, for two reasons. The first is that the full moon is so very much brighter than the half-phase moon—ten times brighter, rather than just the factor of two you might expect. The second is that the full moon looks like a uniformly illuminated flat circle, rather than a sphere—this is because the edges of the full moon appear just as bright as the centre of its disc.

Full moon
Photo via Good Free Photos

We’re used to surfaces that are inclined to our line of sight (like those around the edge of the moon) reflecting less light than transverse surfaces (like the middle of the lunar disc), but the moon’s surface doesn’t seem to obey that rule. Both of these effects are explicable in terms of dry heiligenschein—the whole full moon is behaving like that bright patch of shadow-hiding grassland.

The way in which the moon’s surface brightens dramatically when it is opposite the sun in the sky is called the opposition surge. It’s also sometimes called the Seeliger effect, after Hugo von Seeliger, an astronomer who used a similar opposition surge in the brightness of Saturn’s rings to deduce that the rings consisted of multiple self-shadowing particles.

One thing we didn't know, until the Apollo missions reached the moon, was how bright the exact antisolar point on the moon's surface would look. Here on Earth, we can never see the brightly illuminated full moon exactly opposite the sun, because in that position it is eclipsed by the Earth's shadow. But when Apollo 8 went into orbit around the moon, the astronauts were able to look down on the illuminated moon's surface with the sun precisely behind them. A publication in the Astrophysical Journal soon followed*, showing a distinctive sharp peak in reflectance close to the antisolar point.

Pohn et al. Astrophysical Journal (1969) 157: L195

So when the Apollo 11 astronauts landed on the moon, they were already expecting to see bright dry heiligenschein on the regolith around them—in fact, some time was set aside in their busy schedule for them to record their observations of this "zero phase angle" effect. Here's a view of the checklist attached to Aldrin's spacesuit glove, reminding him to check and describe the reflectance of the lunar surface "UP/DOWN/CROSS SUN" during his time on the lunar surface:

Detail from S69-38937 (Aldrin's gloves)
Click to enlarge
NASA: Detail from S69-38937

And the effect was immediately obvious. Here’s a view of the lunar surface and the shadow of the Lunar Module, taken from the right-hand window of the LM shortly after landing:

AS11-37-5454
Click to enlarge
NASA AS11-37-5454

That bright reflection coming from around the shadow zone is bouncing straight back, like a spotlight, to illuminate the shadowed face of the LM. And as the astronauts moved around the surface, they continually observed a patch of “holy light” around the shadow of their helmets. We can actually see what this heiligenschein halo looked like, if we zoom in on Armstrong’s famous portrait of Aldrin:

AS11-40-5903
Click to enlarge
NASA AS11-40-5903
Detail from AS11-40-5903
Click to enlarge

Examining the reflection in Aldrin’s helmet visor, we can see (among other interesting things) his long shadow stretching off into the distance, and the patch of heiligenschein he was able to observe around his own head. Here’s what he reported at Mission Elapsed Time 110 hours, 28 minutes:

As I look around the area, the contrast, in general, is … comes about completely by virtue of the shadows. Almost [garbled] looking down-Sun at zero-phase very light-colored gray, light gray color [garbled] a halo around my own shadow, around the shadow of my helmet.
Then, as I look off cross-Sun, the contrast becomes strongest in that the surrounding color is still fairly light. As you look down into the Sun [garbled] a larger amount of [garbled] shadowed area is looking toward us. The general color of the [garbled] surrounding [garbled] darker than cross-Sun. The contrast is not as great.

(Aldrin, you’ll notice, had recurrent problems with his comms during lunar surface activities.)

But the moon was really just our first extraterrestrial encounter with this effect. We now know that heiligenschein is common on the airless, meteor-battered worlds of the solar system, most of which are surfaced with regolith like the moon’s.

Here, for instance, is a photograph the Japanese Hayabusa2 spacecraft took of its own shadow during its recent close encounter with the asteroid Ryugu:

JAXA Hayabusa2 shadow on Ryugu
JAXA

It’s an absolutely perfect illustration of the sort of self-shadowing surface that produces heiligenschein.


* Pohn, HA, Radin, HW, Wildey, RL. The Moon’s photometric function near zero phase angle from Apollo 8 photography. Astrophysical Journal (1969) 157: L193-L195

If the image looks a little unfamiliar, that’s because it’s the rather poorly composed picture direct from Armstrong’s Hasselblad camera. The story of how that original image AS11-40-5903 was processed to produce the more familiar (and now thoroughly iconic) Aldrin portrait is told here.

M*A*S*H And The Moon Landings

Still from M*A*S*H

I've got into the habit of checking what the Internet Movie Database has to say about films after I've watched them. After rewatching Robert Altman's 1970 classic M*A*S*H, I happened on something odd in the film's "Trivia" section at IMDb:

The loudspeaker shots and announcements were added after editing had begun, and the filmmakers realized that they needed more transitions. Some of the loudspeaker shots have the Moon visible and were shot while the Apollo 11 astronauts were on the Moon.

Well, that’s not right. Like me, many people of a certain age have a pretty vivid recollection of what the moon looked like during the Apollo 11 landing, and it didn’t look as it appears in the film’s nocturnal loudspeaker shot, at the head of this post. Here’s a close-up:

Moon phase from M*A*S*H

That's a waxing gibbous moon, a day or two past its First Quarter.

The Apollo 11 Lunar Module touched down at 20:17:39 GMT on 20 July 1969. It took off less than a day later, at 17:54 on 21 July. Here’s what the moon looked like at landing and take-off (I’ve marked the Apollo 11 landing site, for reasons I’ll come back to):

Moon phase during Apollo 11 landing
Click to enlarge
Prepared using Celestia
Moon phase during Apollo 11 LM takeoff
Click to enlarge
Prepared using Celestia

The moon was a fattish crescent throughout the first moon landing. If the image of the moon at the head of the post was taken during July 1969, it was probably taken on the night of 24-25 July, by which time the astronauts were safely back on Earth.

So where does the story come from? I think it’s from Enlisted: The Story Of M*A*S*H (2002). In that documentary Robert Altman describes how, during the editing process, he realized that he needed more transitional shots to insert into what was essentially a very episodic story. He came up with the idea of the now-iconic public address announcements by the hapless Sergeant-Major Vollmer. The film’s editor Danford Greene then goes on to explain:

I thought that we needed more speakers—more inserts of speakers. So Bob [Altman] said, “Fine, go shoot them.” But one wonderful thing that I don’t think anyone knows about is that our astronauts were on the moon. They had just hit the moon, like the day before, and I’ve got a couple of those in the shots of the speakers with our astronauts on the moon in the background.

So Greene doesn’t actually specify Apollo 11. But given the film’s release date in 1970, he can only be referring to Apollo 11 or 12, since the remaining moon landings occurred in 1971-2.

And the Apollo 12 landing is a much better match for the lunar phase in the film:

Moon phase during Apollo 12 landing
Click to enlarge
Prepared using Celestia

So at first I thought I’d solved the puzzle. But Apollo 12 landed on the moon on 19 November 1969, and the movie was first released in the USA on 25 January 1970. That seems like a pretty tight schedule. Another M*A*S*H documentary confirmed my suspicion—the AMC TV series Backstory discussed the making of M*A*S*H in an episode broadcast in 2000, and stated that filming ended in June 1969, and the edited movie was shown to a (rapturous) test audience in September. That puts Apollo 11 precisely in the frame, and excludes Apollo 12. So either there were other loudspeaker shots, taken during the night of 20-21 July, which didn’t make it into the final version of the movie, or Danford Greene just misremembered the exact date—he had other things on his mind at the time, I’m sure.

But notice how both Apollo 11 and Apollo 12 landed close to the edge of the illuminated part of the moon, in a region where the sun had only recently risen. That’s no coincidence—here’s Apollo 14:

Moon phase during Apollo 14 landing
Click to enlarge
Prepared using Celestia

Maybe you’ll take my word for it that the other three landings took place under similar circumstances.

The Apollo spacecraft orbited the moon in a clockwise direction when viewed from the north, so crossed the moon’s face from right to left in the views I’ve presented, which have north at the top. So the Lunar Module descended towards the landing site in the same direction, from daylight towards darkness. The timing of the landing was chosen specifically to be a couple of days after sunrise at the landing site, so the astronauts in the LM descended with the sun at their backs, avoiding glare, while long shadows accentuated the shape of the terrain ahead, making it easier to pick out a level landing area.

What was useful on the descent had the potential to be a hazard on the ground, because the Lunar Module landed facing down-sun, into its own long shadow—and so the astronauts descended to the lunar surface in the shadow of the LM. With a black sky above shedding no scattered light into the shadow zone, that seems like it should have been a recipe for a fall and a broken ankle (at best).

But they benefited from a rather remarkable optical effect produced by lunar dust—and I’ll write about that in another post soon.

Equinox

March equinox, 2019
Click to enlarge
Prepared using Celestia

I’m posting this on March 20, the date of the first equinox of the year. In the northern hemisphere, we call it the spring or vernal equinox, because it marks the start of astronomical spring in northern latitudes. (The meteorological seasons follow the calendar months, so meteorological spring started on March 1.) Of course, for people who live in the southern hemisphere the same moment marks the onset of astronomical autumn—so it’s becoming more customary to refer to this equinox as the March or northward equinox, according to the month in which it occurs and the direction in which the sun is moving in the sky, thereby avoiding the awkward association with a specific season. Correspondingly, the other equinox is designated the September or southward equinox.

At the equinoxes, the sun stands directly above the Earth’s equator. Three months later, it reaches its most northerly or southerly excursion in the Earth’s sky, and begins to move towards the equator again, until another equinox occurs, six months after the previous one.

With the sun over the equator, the division between day and night runs through both poles. So every line of latitude is (almost) evenly divided between day and night, and (pretty much) everyone on Earth can expect to experience (something pretty close to) 12 hours of daylight and 12 hours of darkness around the time of the equinox. Hence the name, which is derived from Latin æquus, “equal”, and nox, “night”.

The previous paragraph is thick with disclaimers because an exact division into equal periods of day and night applies only to a strictly geometric ideal, in which a point-like sun illuminates an Earth with no atmospheric refraction. In the real world, the sun is about half a degree across, so it continues to shed daylight even when the centre of its disc is below the horizon. And atmospheric refraction serves to lift the solar disc into view even when it is, geometrically speaking, below the horizon. (I've written about these effects in more detail in my post about the shape of the low sun, and my calculation of which place on Earth gets the most daylight.) Both these effects serve to extend the period of daylight. At the equator, their combined effect means that the equinoctial day is almost a quarter of an hour longer than the equinoctial night. And the effect increases the farther from the equator you travel, because the sun rises and sets on a more diagonal trajectory relative to the horizon. At the extreme, we find that the equinoctial sun is visible above the horizon at the north and south poles simultaneously, skimming along just above, and almost parallel to, the horizon. This year, the sun will rise at the north pole in the evening of March 18; it won't set at the south pole until the very early morning of March 23. (Both according to Greenwich Mean Time.) So, counterintuitively, both poles are experiencing 24-hour daylight at the time of the equinox.

Now let’s consider the timing of the equinox. The image at the head of this post is the sun’s view of the Earth on 20 March 2019, at 21:59:34 GMT. You can see from the reflected highlight in the Pacific Ocean that the sun is shining directly down on the equator, somewhere in the Pacific to the east of the Date Line.

West of the Date Line, a new day has already begun, so for anyone in a time zone more than two hours ahead of Greenwich, this equinox is occurring on March 21. But here in the UK, which keeps GMT in March, we haven’t had an equinox on March 21 since 2007, and we won’t have another until 2102. In fact, all our March equinoxes will occur on March 20 until 2044, when we’ll start seeing them fall on March 19, one year in four.

What’s happening to make the dates shift like that?

The problem is that the average length of a tropical year (the time between one equinox and its equivalent the following year) is 365.2422 days. So in a sequence of 365-day years, the seasons will come around 0.2422 days (5 hours 49 minutes) later each year. The date of the equinoxes would very quickly run ahead through the calendar, if it weren’t for leap years. During a 366-day year, the equinox arrives 18 hours 11 minutes earlier than it did the previous year, because the extra day of February 29 has shoved the calendar date ahead by 24 hours, outstripping the movement of the equinox.

So the GMT timing of the March equinox looks like this, for the forty years spanning 2000 *:

March equinox times, 1980-2020
Click to enlarge

A sawtooth pattern, made up of three steps forward, totalling 17 hours 26 minutes, followed by one jump back of 18 hours 11 minutes. So the equinoxes actually drift back through the calendar year, at a rate of about 45 minutes every four years. Hence the fact we haven’t seen a March 21 equinox in the UK for more than a decade, and will start seeing March 19 equinoxes in a couple more decades.
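The arithmetic behind the sawtooth is simple enough to set out in a few lines of code (my own illustration of the figures above):

```python
TROPICAL_YEAR = 365.2422  # days between successive March equinoxes

step_fwd = (TROPICAL_YEAR - 365) * 24  # hours later, each 365-day year
leap_back = 24 - step_fwd              # hours earlier, after a leap day

def hm(hours):
    """Format a decimal number of hours as 'Xh Ym'."""
    h = int(hours)
    return f"{h}h {round((hours - h) * 60)}m"

print(hm(step_fwd))                  # forward step in a common year: 5h 49m
print(hm(3 * step_fwd))              # three steps forward: 17h 26m
print(hm(leap_back))                 # jump back in a leap year: 18h 11m
print(hm(leap_back - 3 * step_fwd))  # net backward drift per 4 years: 0h 45m
```

Each four-year cycle thus slips the equinox about three-quarters of an hour earlier—the slow backward creep visible in the chart.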

And that was the problem with the old Julian calendar, and its regular repeating pattern of leap years. The seasons drifted steadily earlier in the calendar. The problem was addressed with the introduction of the Gregorian calendar in 1582, which drops three leap years in four centuries. Centuries not divisible by 400 are not leap years—so we dropped a leap year in 1700, 1800 and 1900, but had one in 2000. And we’ll drop leap years in 2100, 2200 and 2300. (I wrote more about the Gregorian calendar reform in my post concerning February 30.)
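The Gregorian rule is compact enough to express in a couple of lines (a sketch of my own, equivalent to the standard calendar rule):

```python
def is_leap(year):
    """Gregorian rule: every 4th year is a leap year, except
    century years, unless the century is divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1700, 1800 and 1900 dropped; 2000 kept; 2100 will be dropped
print([y for y in (1700, 1800, 1900, 2000, 2100) if is_leap(y)])  # [2000]

# 97 leap days per 400 years gives the Gregorian average year:
avg = 365 + 97 / 400
print(avg)  # 365.2425 days
```

That average of 365.2425 days is the figure to keep in mind for the residual mismatch discussed below.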

The interruption to the sawtooth regression of the equinox relative to the calendar will look like this in 2100:

March equinox times, 2090-2110
Click to enlarge

That extra forward drift is cumulative over the three centuries of dropped leap years. Here’s what the equinox timing looks like, on leap years between 1600 and 2400:

Leap-year March equinoxes, 1600-2400
Click to enlarge

Here we see the steady backwards drift during each century, as shown in my first chart. Then a jump forward at the turn of the century when the leap day is omitted, as was shown in my second chart. If we omitted the leap day every century, it’s evident that the trend would carry the time of the equinox steadily forward relative to the calendar. But by observing a leap day in 2000 we allowed the backward drift of the equinox to continue uninterrupted from the 1900s into the 2000s, undoing the forward drift incurred by the three missed leap days.

It's neat, isn't it? But there's still a very slight mismatch. The average length of a Gregorian calendar year is 365.2425 days, a little longer than the tropical year of 365.2422. So there's a very slow backward drift of the equinox relative to the calendar. Compare, for instance, the peak immediately after 1900 to the peak after 2300. The 1904 leap year equinox fell on March 21, whereas the one in 2304 will occur late on March 20. There were twelve March 21 equinoxes in a row (1900 to 1911, inclusive) at the start of the twentieth century. There will be just four (2300 to 2303, inclusive) at the start of the twenty-fourth.

Still, not to be sniffed at. That’s the longest run of March 21 equinoxes that will ever happen in the future, at least until the Gregorian calendar is revised in favour of something more accurate. Mark it on your calendar.


* All my figures for equinox timings come from Jean Meeus’s incomparable Astronomical Tables Of The Sun, Moon And Planets.

The Myth Of The Starbow

Cover of Starburst, by Frederik Pohl

Thus, with all Einstein numbers of flight [velocity as a proportion of the speed of light] greater than 0.37 a major dark spot will surround the take-off star, and a minor dark spot the target star. Between the two limiting circles of these spots, all stars visible in the sky are coloured in all the hues of the rainbow, in circles concentric to the flight direction, starting in front with violet, and continuing over blue, green, yellow and orange to red at the other end.

E. Sänger “Some Optical And Kinematical Effects In Interstellar Astronautics” Journal of the British Interplanetary Society (1962) 18(7): 273-7

 

Above is one of the earliest descriptions of the appearance of the sky as seen from a spacecraft travelling at close to the speed of light, written more than half a century ago. It predicts something remarkable—that the sky would be dark both ahead of and behind the spaceship, and between these two extensive discs of darkness a rainbow would appear. One of the best illustrations of this phenomenon that I’ve found appears on the cover of Frederik Pohl’s 1982 science fiction novel, Starburst, shown at the head of this post. (This is both unexpected and ironic, for reasons I’ll reveal later.)

Now, I’ve recently invested four posts in systematically piecing together the appearance of the sky from a spacecraft moving at close to the speed of light. If you’re interested, the series begins here, builds mathematical detail over the second and third posts, and draws it all together, with illustrations, in the final one. Using the equations of special relativity for aberration and Doppler shift, and applying them to black-body approximations of stellar spectra, I was able to come up with some pictures using the space simulator software Celestia.

Here’s a wide-angle view of the sky ahead seen when moving at half the speed of light:

Sky view ahead at 0.5c
Click to enlarge

And a tighter view at 0.95 times light speed:

Sky view ahead at 0.95c
Click to enlarge

And at 0.999 times light speed:

Sky view ahead at 0.999c
Click to enlarge

No sign of Sänger’s “minor dark spot” ahead, and no real indication of a rainbow. The stars appear hot and blue ahead, in a patch that becomes more concentrated with increasing speed, and that central area is surrounded by a scattered rim of red-shifted stars, shading off into darkness all around. At very high velocity, the blue patch begins to fade. (For a detailed step-by-step explanation of all this, see my previous posts, referenced above.)

What’s going on? Well, Sänger made an embarrassing mistake:

For simplicity’s sake we may assume that the stars in the sky, as seen from the space vehicle when at rest, are all of a medium yellow colour of perhaps λ0 = 5900Å.

He modelled all the stars in the sky as if they emitted light at a single wavelength, like a laser! Unsurprisingly, when these monochromatic stars were Doppler-shifted, they passed through all the colours of the rainbow before disappearing into ultraviolet wavelengths (ahead) or infrared (behind). Hence the dark patches fore and aft of Sänger’s speeding spacecraft, and the rainbow ring between.

But of course real stars emit light over a range of wavelengths, with peak emissions that vary according to their temperatures. As I explained in previous posts, when real stars are Doppler-shifted they change their apparent temperature, so the stars ahead of our spacecraft appear to get hotter, while those behind appear cooler. Hot stars may look white or blue, but never violet. Cool stars may be yellow or orange or red, or faded to invisibility, but there is no temperature at which they will appear green. And the fact that stars of different temperatures are scattered all across the sky means that Doppler shift can’t ever produce the concentric circles of colour that Sänger imagined. Sänger’s rainbow is a myth, based on a fatally erroneous assumption (“for simplicity’s sake”) that really should have been picked up by reviewers at the British Interplanetary Society.
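Sänger’s error is easy to reproduce. This sketch (my own illustration of his flawed model, not his original calculation) applies the standard relativistic Doppler factor η = 1/(γ(1 − β·cos θ′)) to a single 590nm wavelength, and shows it sweeping through the whole visible band—and out of both ends of it—as the viewing angle changes:

```python
import math

def doppler_factor(beta, theta_prime_deg):
    """Relativistic Doppler factor for a star seen at angle theta' from dead ahead."""
    gamma = 1 / math.sqrt(1 - beta**2)
    return 1 / (gamma * (1 - beta * math.cos(math.radians(theta_prime_deg))))

beta = 0.5
for angle in (0, 45, 90, 135, 180):
    shifted = 590 / doppler_factor(beta, angle)   # Saenger's monochromatic 5900 A "star"
    band = "visible" if 380 <= shifted <= 700 else ("UV" if shifted < 380 else "IR")
    print(f"theta'={angle:3d}  lambda'={shifted:6.1f} nm  ({band})")
```

At β=0.5 the dead-ahead “star” lands in the ultraviolet and the dead-astern one in the infrared, with the visible colours strung around the sky between them—exactly the rainbow ring Sänger described, and exactly what real broadband stars don’t do.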

Sänger’s idea would have vanished into appropriate obscurity, were it not for the fact that science fiction writer Frederik Pohl was a member of the British Interplanetary Society, and received its monthly journals. Writing about it later, Pohl mistakenly recalled reading Sänger’s article in another BIS publication, Spaceflight. (BIS members received one publication as part of their membership, and could pay to receive the other, too—it seems likely Pohl subscribed to both.) He later described his encounter with Sänger’s article like this:

Before I had even finished it I sat up in bed, crying “Eureka!” It was a great article.

“Looking For The Starbow” Destinies (1980) 2(1): 8-17

Pohl loved this image of a rainbow ring, and called it a “starbow”. He went on to feature the starbow in an award-winning novella, “The Gold At The Starbow’s End” (1972):

The first thing was that there was a sort of round black spot ahead of us where we couldn’t see anything at all […] Then we lost the Sun behind us, and a little later we saw the blackout spread to a growing circle of stars there.
[…]
Even the stars off to one side are showing relativistic colour shifts. It’s almost like a rainbow, one of those full-circle rainbows that you see on the clouds beneath you from an aeroplane sometimes. Only this circle is all around us. Nearest the black hole* in front the stars have frequency-shifted to a dull reddish colour. They go through orange and yellow and a sort of leaf green to the band nearest the black hole* in back, which are bright blue shading to purple.

If you’re on the alert, you’ll notice that Pohl got the colours the wrong way around—Sänger’s prediction placed red behind and violet ahead (not Pohl’s “purple”, which is a mixture of red and blue).

When Pohl’s novella was published as part of a collection, its striking title was used as the book title, and Pohl’s description (including the reversed colours) leaked into the cover art of one edition:

Cover of The Gold At The Starbow’s End, by Frederik Pohl

Pohl was a skilled and popular writer, and he cemented the erroneous “starbow” into the consciousness of science fiction readers.

Also in 1972 the space artist Don Davis, cooperating with the starship designer and alleged translator of Ice-Age languages (among many other things) Robert Duncan-Enzmann, produced a distinctly weird New Age image of the starbow, which you can find here. I have scant idea what that’s about.

But then, in 1979, along came John M. McKinley and Paul Doherty, of the Department of Physics at Oakland University, Michigan. They had a computer, and they were unconvinced by Sänger’s identical monochromatic stars. They instead modelled the real distribution of stars in Earth’s sky, approximating each one as a blackbody radiator of the appropriate temperature, and applying the necessary relativistic transformations:

One prediction for the appearance of the starfield from a moving reference frame has been circulated widely, despite physically objectionable features. We re-examine the physical basis for this effect. […] We conclude with a sequence of computer-generated figures to show the appearance of Earth’s starfield at various velocities. A “starbow” does not appear.

“In search of the ‘starbow’: The appearance of the starfield from a relativistic spaceship” American Journal of Physics (1979) 47(4): 309-15

The physicist (and science fiction writer) Robert L. Forward mischievously forwarded a preprint of McKinley and Doherty’s article to Pohl. And Pohl, tongue firmly in cheek, described this experience in the Destinies article I quoted above:

… “there is no starbow,” they conclude. True, they then go on to say, “we regret its demise. We have nothing so poetic to offer as its replacement, only better physics”—but what’s the good of that?

Only slightly chastened, Pohl later went on to expand the novella “The Gold At The Starbow’s End” into a frankly-not-very-good novel, Starburst, the cover of which appears at the head of this post, resplendent with a starbow. I find it difficult to imagine the confusion that might have led to that cover, given that Pohl had removed the starbow from his narrative, while managing to give McKinley and Doherty a very slight (but distinctly ungracious) kicking in the rewrite:

Right now we’re seeing more in front than I expected to and less behind. Behind, mostly just blackness. It started out like, I don’t know what you’d call it, sort of a burnt-out fuzziness, and it’s been spreading over the last few weeks. Actually in front it seems to be getting a little brighter. I don’t know if you all remember, but there was some argument about whether we’d see the starbow at all, because some old guys ran computer simulations and said it wouldn’t happen. Well, something is happening! It’s like Kneffie always says, theory is one thing, evidence is better, so there! (Ha-ha.)

As the cover of Starburst suggests, the starbow was just too good an image to die easily, and few science fiction readers (or writers) read the American Journal of Physics. Undead, the starbow continued to trudge forward—a zombie idea. In September 1988, Robert J. Sawyer had a short story published in Amazing Stories, entitled “Golden Fleece”. It scored the coveted cover illustration for that month:

Amazing Stories, Golden Fleece 1988

It’s a slightly confusing image, illustrating a key event in the story. The vehicle in the foreground is a shuttle-craft, which is escaping from the large spacecraft in the background, a relativistic Bussard interstellar ramjet travelling from right to left. And there’s a starbow! And it’s the wrong way round again, with red at the front! I haven’t read Sawyer’s original short story, but I have read the 1990 novel of the same name, in the form of its 1999 revised edition:

The view of the starbow was magnificent. At our near-light speed, stars ahead had blue-shifted beyond normal visibility. Likewise, those behind had red-shifted into darkness. But encircling us was a thin prismatic band of glowing points, a glorious rainbow of stars—violet, indigo, blue, green, yellow, orange and red.

I don’t mean to single Sawyer out, because lots of authors were still invoking the starbow in their writing, but his 1999 novel is the most recent persisting version of the starbow I’ve turned up so far, particularly notable because it recycled the Amazing Stories cover art:

Cover of Golden Fleece by Robert J. Sawyer

Twenty years after McKinley and Doherty wrote “We have nothing so poetic to offer as its replacement, only better physics”, the starbow lived on.

And you can still find it—order a starbow painting by Bill Wright on-line, here.


Note: I happened upon Stephen R. Wilk’s How The Ray Gun Got Its Zap, which I’ve previously reviewed, while searching for references to the starbow. Wilk’s chapter “The Rise And Fall And Rise Of The Starbow” overlaps with some of what I’ve written here, but also discusses starbow-like manifestations in film.

* Pohl’s “black holes” are the patches of sky devoid of visible stars ahead of and behind the narrator’s spaceship, as predicted by Sänger, not the astronomical objects of the same name.

The Celestial View From A Relativistic Starship: Part 4

Bussard Interstellar Ramjet
Bussard Interstellar Ramjet (source)

This series of posts is about what the sky would look like to an observer travelling at close to the speed of light. In Part 1, I described the effects of light aberration on the apparent position of the stars; in Part 2, I introduced the effects of Doppler shift on the frequency of the starlight; and in Part 3 I described the effect that Doppler shift would have on the appearance of real stars.

In this post, I’m planning to pull all that together and show you some sky views I’ve generated using the 3-D space simulator Celestia. To do this, I had to write some code to rewrite Celestia‘s stars and constellation-boundaries databases, using the various aberration and Doppler equations I’ve previously presented. The result was a set of Celestia databases that reproduce the appearance of the sky for an observer moving at high velocity—allowing me to exploit all Celestia‘s rendering capabilities to produce my final graphics.

The final hurdle on the way to producing my sky views was to decide how to convert the Doppler-shifted energy spectrum of a star into the corresponding visual appearance. The human eye is not equally sensitive to all visible wavelengths, and that has to be taken into account when converting power (in watts) to luminous flux (in lumens). But the eye’s sensitivity also changes in response to differing light levels—the retinal cone cells which give us our photopic (daytime, colour) vision have a different sensitivity profile from the rod cells that give us scotopic (nighttime, black-and-white) vision.

There are two classic papers dealing with the sky view from a relativistic spacecraft. McKinley and Doherty (1979)* make the visual conversion using a model of scotopic vision, with peak sensitivity at a wavelength of 500nm, whereas Stimets and Sheldon (1981) make the conversion using an approximation to photopic visual sensitivity, with a peak at 555.6nm. You might imagine that McKinley and Doherty have the right idea, applying scotopic vision to a problem involving the visibility of the stars. Unfortunately, the stellar visual magnitude scale is calibrated by neither photopic nor scotopic vision, but by an instrument called a photometer, counting the number of photons that pass through a filter that approximates the sensitivity of the human eye. The old visual standard was provided by the Johnson V-band filter, but newer star surveys (like Hipparcos and Tycho) have used filters with slightly different passbands. The resulting differences are tiny compared to the variable sensitivity of the human eye, however.

Here are standard curves for scotopic and photopic sensitivity, compared to the V-band filter curve:

Luminous efficiency curves compared to V-band filter
Click to enlarge

Although the V-band peak lies intermediate between scotopic and photopic, the bulk of the curve lies within the photopic range, and well away from scotopic. This is confirmed when I generate black-body bolometric corrections (the difference between bolometric magnitude and visual magnitude) using the three different curves above:

Bolometric corrections generated from photopic, scotopic and V-band
Click to enlarge

Photopic vision turns out to be a very good match for the V-band. Scotopic vision, with its increased blue sensitivity, peaks at higher temperatures, and is actually inconsistent with the standard visual magnitude scale. So McKinley and Doherty’s results are unfortunately skewed away from the conventional visual magnitude scale, assigning blue stars inappropriately bright visual magnitudes, and red stars inappropriately dim magnitudes. This has consequences for anyone using their formulae—for instance John O’Hanley’s excellent Special Relativity site, which in its “Optics and Signals” section does something very similar to what I’m doing later in this post, but with results that are skewed by use of McKinley and Doherty’s formula to convert bolometric magnitude to visual magnitude.

To generate my Celestia views, I used the V-band profile.

First up, a series of views ahead of our speeding spacecraft, which is travelling directly out of the plane of the solar system, towards the constellation Draco. Throughout these images, you can orientate yourself using the superimposed grid. It’s Celestia‘s built-in ecliptic grid, but I’ve modified the latitude markings to show the angle θ′ instead—the angle measured between a sky feature and the dead-ahead direction. So θ′=0° directly ahead, and 180º directly astern, with the 90º position at right angles to the line of flight. The outermost ring in the views that follow is at θ′=30°. I’ve set Celestia to show star brightness with scaled discs, and to display colours according to black-body temperature. (In reality, the colours of dimmer stars would not be evident—they would appear white.) Stars are displayed down to a magnitude limit of 6.5, which is in the vicinity of the commonly quoted cut-offs for naked-eye visibility (although under ideal conditions some people can do much better).

Here’s the stationary view, for orientation (I’m afraid you’ll need to click to enlarge most of these images to appreciate what they show):

View ahead, Beta=0
Click to enlarge

Draco occupies much of the view, with the Pole Star, Polaris, visible to the right.

Now here’s the view for a spacecraft in the same position, travelling at 0.5 of the speed of light (hereafter, I’ll quote all velocities in this form, using the symbol β):

View ahead, Beta=0.5
Click to enlarge

Even unenlarged, you can see that many more stars are visible in the same visual area as the previous image. The constellations have shrunk under the influence of aberration, and many invisibly dim stars have been brightened by Doppler shift so as to become visible. Celestia‘s automatic labelling system has stepped in to add names and catalogue numbers to the brightest. The fact that the light from stars ahead is being shifted towards the blue end of the spectrum is highlighted by the brightening of μ Cephei, the “Garnet Star”, which is a red giant deserving of its nickname, but which now appears blue-shifted to an apparent temperature of 5800K, making it appear yellow-white. The brightening effect of blue-shift is most marked for cool stars—for instance, the cool carbon star Hip 95154 has become a very marked presence in Draco, having brightened through 4.4 magnitudes.
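The apparent temperature change is simple to sketch: a star’s black-body temperature is multiplied by the Doppler factor η. Here’s the dead-ahead case at β=0.5 in Python (the 3,750K starting temperature is an assumed round figure for a cool red giant; a star like μ Cephei sits off-axis, which is why its quoted apparent temperature of 5800K is smaller than this maximum):

```python
import math

def eta_ahead(beta):
    """Doppler factor for the dead-ahead direction (theta' = 0)."""
    return math.sqrt((1 + beta) / (1 - beta))

T_rest = 3750                        # assumed red-giant temperature, kelvin
T_apparent = eta_ahead(0.5) * T_rest
print(round(T_apparent))             # → 6495: a red star made to look yellow-white
```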

Here’s the view at β=0.8:

View ahead, Beta=0.8
Click to enlarge

The sky is now so densely populated with bright blue stars that I’ve turned off Celestia‘s naming function to prevent clutter. Here and there are bright orange-yellow stars—these are in fact cool red stars like Hip 95154, which initially brighten dramatically, through several magnitudes, when blue-shifted.

Now, β=0.95:

View ahead, Beta=0.95
Click to enlarge

I’ve had to turn off constellation names now, but a look to the right of frame reveals Orion just coming into view, although the red star Betelgeuse in one of Orion’s shoulders is now blue in colour. A little inwards from Orion is Taurus, with its orange giant star Aldebaran similarly blue-shifted. Taurus gives you the marker for the zodiac constellations, which are arrayed in a circle just 20º away from the centre of the view.

And finally, β=0.999:

View ahead, Beta=0.999
Click to enlarge

The blue-shifted region has now shrunk to a diameter of 34º, although it contains most of the stars in the sky. The bright yellowish star on the right, lying just beyond the θ′=20º circle as a very noticeable outlier within the red-shifted zone, is Canopus, a star familiar to those in the southern hemisphere. In the rest frame, Canopus lies only about 14º from the dead-astern position of our spacecraft.

At this velocity, the total number of visible stars has begun to decline, as has the overall brightness of the blue-shifted patch—most stars are now so strongly blue-shifted that their visible light is fading away, as described in Part 3.

Here’s a graph of the number of visible stars (visual magnitude<6.5) in the whole sky as velocity increases. The dashed line marks the number of visible stars in the blue-shifted region ahead:

Overall count of stars with V less than 6.5, and stars in the blue-shifted region, against beta
Click to enlarge

Unsurprisingly, stars in the blue-shifted region dominate the star count as velocity increases. In this dataset (the Tycho-2 star catalogue as prepared for Celestia by Pascal Hartmann, which contains more than two million stars), the star count peaks at β=0.97. At this velocity 99% of the visible stars are in the blue-shifted region, which occupies just 10% of the sky.

Here’s the curve for the integrated star magnitude of the whole sky, and for the blue-shifted region:

Integrated sky magnitude for V less than 6.5, and magnitude of blue-shifted region, against beta
Click to enlarge

The overall brightness of the sky, and of the blue-shifted region, peaks at a slightly higher velocity than the star count—in this dataset, at β=0.98. The difference is because of cool stars joining the edge of the blue-shifted region and undergoing marked brightening, which temporarily offsets the gradual fading out of strongly blue-shifted stars in the middle of the forward view.

Now, a look to the side of the spacecraft, to illustrate how the stars in this view thin out and fade away. Firstly, the view when β=0. The direction of travel is towards the top of the image.

Side view, Beta=0
Click to enlarge

Orion is visible at bottom of frame, with the zodiac constellations of Gemini and Taurus occupying the θ=90º position.

Now, β=0.5:

View to the side, Beta=0.5
Click to enlarge

Aberration has carried Orion towards ecliptic north, where it straddles the transition from blue-shift to red-shift, at θ′=74º.

And β=0.8:

Side view, Beta=0.8
Click to enlarge

The whole view is now red-shifted, with the blue/red transition out of sight at θ′=60º. The southern constellation of Columba has now crept into view, considerably enlarged. Sirius, at top of frame, remains bright as it edges towards the blue-shift region, although moderate red shift has dropped its apparent temperature from 9200K to 7800K.

Finally, for this series, β=0.95:

Side view, Beta=0.95
Click to enlarge

Very few stars are visible—either aberration has carried them into the blue-shifted region ahead, or they are strongly red-shifted to invisibility. The constellation boundaries visible in this view separate Columba (at top) from Puppis and Carina (left) and Pictor (right).

As a final exercise, I’m going to follow the progress of a single constellation as it undergoes aberration and Doppler shift. I’m going to use the southern hemisphere constellation of Crux, the Southern Cross, which in the rest frame lies well in the rear view from our spacecraft, at θ=140º. Its four prominent stars are: two hot blue giants, Mimosa (β Crucis) and δ Crucis; one hot blue subgiant, Acrux (α Crucis); and one cool red giant, Gacrux (γ Crucis). All the blue stars have temperatures over 20000K, which places them above the 16350K threshold discussed in Part 3, meaning that they will initially brighten with red shift. The red giant has a temperature of 3400K, so it can be expected to brighten dramatically when blue-shifted, and to dim equally dramatically when red-shifted.

Here’s the rest-frame view, with the stars labelled:

Crux, Beta=0
Click to enlarge

Now, here’s the same constellation at β=0.5. I’ve kept the size of the field of view the same, and merely shifted it to follow the movement of the constellation under aberration:

Crux, Beta=0.5
Click to enlarge

The constellation is larger, and now positioned at θ′=115º. All the blue stars have grown brighter by about half a magnitude under the influence of red shift, whereas the red giant Gacrux has fallen in brightness by 0.8 magnitudes.

Here’s β=0.8, with the width of the field of view set to the same as previously:

Crux, Beta=0.8
Click to enlarge

The constellation is at θ′=85º, placing it in the side view from the spacecraft, at its maximum magnification by aberration, and its maximum red shift. Acrux and Mimosa are very slightly brighter, their apparent temperatures still above the 16350K threshold; δ Crucis is very slightly dimmer, its apparent temperature having dropped to 13000K. And Gacrux has dropped in brightness by another half magnitude. But at higher velocities the constellation will move into regions of lower red-shift, so the trends will now reverse.

Here’s β=0.95:

Crux, Beta=0.95
Click to enlarge

The constellation is crossing θ′=50º, and is just about to enter the blue-shifted region at θ′=44º. It’s now almost as close to the dead-ahead direction in the moving frame as it was to the dead-astern direction in the rest frame. Its Doppler shift is therefore close to 1, and its appearance in terms of size, colour and brightness is returning to approximately what it was in the rest frame.

Finally, here’s β=0.999 (I’ve had to zoom in fivefold compared to the previous images, to pick Crux out of the blue-shifted clutter):

Crux, Beta=0.999
Click to enlarge

The constellation is now only 7º away from the forward direction, and is strongly blue-shifted. Gacrux is now an extremely bright blue star, whereas the blue giants are blue-shifted to such high temperatures that their visible output has declined dramatically.

So that’s about it for the appearance of the stars. At velocities greater than 0.999, the blue-shifted area becomes more compact, and the number of visible stars gradually diminishes. At β=0.99999, only about 5000 visible stars are packed into an area 9º across, with an integrated visual magnitude of -5. The rest of the sky is dark.

But beyond β=0.99999, something interesting happens. As the number of visible stars continues to fall, a fleck of red appears, just a few arc-minutes across. It becomes white, and then blue, and brightens to an astonishing visual magnitude of -26—as bright as the sun seen from Earth. It’s the Cosmic Microwave Background (CMB), blue-shifted into the visible spectrum, and it won’t begin to fade until the velocity is over 0.9999999 (seven 9’s!).
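You can estimate where this happens from the CMB’s temperature of about 2.725K. A quick Python sketch (the 6,000K target—a sun-like apparent temperature—is my assumption for “bright and white-hot”):

```python
import math

T_CMB = 2.725      # kelvin
T_target = 6000.0  # assumed apparent temperature for a bright, sun-like patch

# Dead ahead, eta = sqrt((1+beta)/(1-beta)); solve for the beta that gives
# eta = T_target / T_CMB.
eta = T_target / T_CMB
beta = (eta**2 - 1) / (eta**2 + 1)
print(f"eta = {eta:.0f}, beta = {beta:.9f}")  # beta ~ 0.9999996, between five and seven 9's
```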

The appearance of the CMB reminds us that there are many things in the sky that are not stars—I haven’t simulated the appearance of galaxies (including our own Milky Way), or of the cold clouds of dust and gas between the stars, which will be blue-shifted to visible wavelengths before the CMB.

Maybe another time …


* Stimets RW, Sheldon E. The celestial view from a relativistic starship. Journal of the British Interplanetary Society 1981; 34: 83-99.
McKinley JM, Doherty P. In search of the “starbow”: The appearance of the starfield from a relativistic spaceship. American Journal of Physics 1979; 47(4): 309-16.

The Celestial View From A Relativistic Starship: Part 3

Bussard Interstellar Ramjet
Bussard Interstellar Ramjet (source)

This is the third of a series of posts about what the sky would look like for the passengers aboard an interstellar spacecraft moving at a significant fraction of the speed of light, like the Bussard interstellar ramjet above.

In the first post, I wrote about light aberration, which will cause the apparent direction of the stars to be shifted towards the direction of the spacecraft’s line of flight. In the second post, I discussed the relativistic Doppler shift, which will cause the stars concentrated ahead to undergo a spectral shift towards shorter, bluer wavelengths, while the stars astern become red-shifted. I introduced the parameter η (eta), the relativistic Doppler factor, which I promised would have extended relevance to this section, in which I’m going to discuss the effects of aberration and Doppler on the appearance of the stars.

If you’ve read the previous posts, you’ll recall the terminology, but here’s a quick recap. We call the measurements made by an observer more or less at rest relative to the distant stars the “rest frame” (for our purposes, the Earth is pretty much in the rest frame). The observations we’re interested in are those from a spacecraft which has a large velocity relative to the rest frame. That’s the moving frame, and its velocity is customarily given as a fraction of the speed of light, and symbolized by β (beta). Of particular interest is the angle between the line of the spacecraft’s flight and the position of a given star. In the rest frame, that measurement is symbolized by θ (theta). Aberration transforms that measurement to a smaller angle in the moving frame, which we symbolize by θ′. It’s a convention in Special Relativity to mark variables in this way—the simple symbols used in the rest frame are marked with a prime mark (′) when they’re transformed into the moving frame.

One graph and one diagram can summarize much of what happened in the preceding posts:

Effect of beta on aberration and Doppler
Click  to enlarge

In the graph we see how increasing values of β cause the stars to shift progressively farther towards the spacecraft’s dead-ahead position, θ′=0°. In doing so, stars to the rear of the spacecraft in the rest frame are displaced into the forward view, and eventually pass from a region of red shift into one of blue shift.

Aberration and Doppler: apparent shift at 0.5c and 0.85c
Click to enlarge

In the diagram, we see how aberration shifts the apparent position of a sphere of stars in the rest frame to an ellipsoid in the moving frame, with the ellipsoid becoming more elongated at higher velocities. Not only does the angular position of the stars change, but their apparent distance does, too. And the change in distance is proportional to the Doppler parameter η—a star with a blue shift that doubles the frequency of its light will also be moved to double the apparent distance by the effects of aberration.
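Both effects in the diagram can be checked numerically. This sketch (my own, using the standard special-relativity formulae for aberration and Doppler shift) transforms a star’s rest-frame angle θ into θ′, and computes η, which also scales the apparent distance. At β=0.5 a star at θ=140°—roughly where Crux sits in the rear view—moves forward to about θ′=115°:

```python
import math

def aberrate(beta, theta_deg):
    """Rest-frame position angle theta -> moving-frame angle theta', in degrees."""
    c = math.cos(math.radians(theta_deg))
    return math.degrees(math.acos((c + beta) / (1 + beta * c)))

def doppler_eta(beta, theta_deg):
    """Doppler factor eta, expressed via the rest-frame position angle theta."""
    gamma = 1 / math.sqrt(1 - beta**2)
    return gamma * (1 + beta * math.cos(math.radians(theta_deg)))

theta = 140  # degrees in the rest frame
for beta in (0.5, 0.8, 0.95, 0.999):
    tp = aberrate(beta, theta)
    eta = doppler_eta(beta, theta)
    print(f"beta={beta}: theta'={tp:5.1f} deg, eta={eta:.2f} (apparent distance x{eta:.2f})")
```

The eta values also track the Doppler story: red shift (η<1) at β=0.5 and 0.8, back near 1 at β=0.95, and strong blue shift by β=0.999.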

After that summary, I now want to discuss how the stars will actually look, when their visual appearance has been transformed by Doppler and aberration as described above. This involves a digression on the subject of the black body radiation spectrum—it’s a good first approximation to the electromagnetic radiation profile emitted by stars, and has the advantage that it can be easily treated mathematically, which is not true of the rather lumpy radiation distribution of real stars. However, if all you want is the executive summary, you can reasonably skip ahead to THE APPEARANCE OF THE STARS.

BLACK BODY RADIATION

I’m not going to describe black body radiation in detail. There’s a superficial treatment here, and a more mathematical description here. Suffice it to say that there’s a mathematical formula which describes the amount and spectral distribution of energy a black body radiates at any given temperature, and that stars are (to a first approximation) black body radiators. So we can use the black body formulae to look at what happens when the light from the stars is transformed by a Doppler shift.

Here are some typical black body radiation curves, plotted against wavelength, using intensity units that don’t matter for our purposes. The violet and red vertical lines mark out the range of the spectrum of visible light. To the left of violet, ultraviolet and X-rays at short wavelengths; to the right of red, infrared and radio waves at long wavelengths.

Black body spectra
Click to enlarge

As we heat an object, two things happen to its black body radiation curve—the area under the curve (the total energy) gets larger, in proportion to the fourth power of the temperature; and the peak in the curve shifts to shorter wavelengths, in inverse proportion to the temperature (which means the frequency goes up, in direct proportion to the temperature). So there’s a pretty simple relationship between temperature and radiant energy.

But we’re more interested in what happens in the visible band. You can see that the amount of energy in that range goes up with increasing temperature. So our radiation source gets visibly brighter as its temperature rises.
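The two relationships in the preceding paragraph are the Stefan-Boltzmann law and Wien’s displacement law, easy to sketch in Python:

```python
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.897771955e-3   # Wien displacement constant, m K

def peak_wavelength_nm(T):
    """Wavelength of peak emission (Wien's law), in nanometres."""
    return WIEN_B / T * 1e9

def total_emission(T):
    """Power radiated per unit area (Stefan-Boltzmann law), W/m^2."""
    return SIGMA * T**4

print(round(peak_wavelength_nm(4000)))  # → 724 nm, at the red end of the visible band
print(round(peak_wavelength_nm(7000)))  # → 414 nm, at the violet end
print(total_emission(8000) / total_emission(4000))  # doubling T gives 16x the energy
```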

And you can see that the shape of the curve crossing the visible band changes with temperature—at 4000K, red wavelengths predominate; at 7000K, blue predominates. So a black body (and therefore, to a first approximation, a star) follows a characteristic  trajectory through colour space (called the “Planckian locus”) as it changes temperature:

Planckian locus
Click to enlarge (source)

So we have the familiar sequence of red, orange, yellow, white and blue-hot, which is reflected in the colours of the stars.

Finally, notice that black bodies are not very efficient producers of light—you can see that the 4000K and 5500K sources are putting out more energy in the infrared than in the visible. At temperatures above 7000K, the radiation output is dominated by a huge spike in the short wavelengths. In fact, 7000K is close to the temperature at which a black body is most efficient at producing visible light. The human eye’s sensitivity varies at different wavelengths and in different lighting conditions, but here’s the “luminous efficacy” curve for black body radiation in photopic vision—the sensitivity of an average human eye in daylight:

Luminous efficacy curve of black body radiation
Click to enlarge

You can see that black bodies with temperatures below 2000K are pretty rubbish at producing visible light. The curve then rises rapidly to a maximum near 7000K, before declining steadily at higher temperatures. While an increase in temperature will always produce an increase in visible light emission, it does so with less and less efficiency at high temperatures.
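We can sanity-check the position of that peak numerically. In the sketch below I stand in a Gaussian centred on 555nm for the photopic sensitivity curve (a rough approximation to the real tabulated data, so the answer is approximate too), and compute what fraction of a Planck spectrum's output the eye responds to at each temperature:

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

lam = np.linspace(100e-9, 50e-6, 20_000)  # wavelength grid, 100 nm to 50 µm

def planck(lam, T):
    """Black body spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def photopic(lam):
    """Gaussian stand-in for the eye's daylight sensitivity (peak 555 nm)."""
    return np.exp(-0.5 * ((lam - 555e-9) / 42e-9) ** 2)

def luminous_fraction(T):
    """Fraction of total radiated power the eye responds to."""
    spectrum = planck(lam, T)
    # Same wavelength grid top and bottom, so the grid spacing cancels:
    return np.sum(spectrum * photopic(lam)) / np.sum(spectrum)

temps = np.arange(2000, 20001, 100)
best = temps[np.argmax([luminous_fraction(T) for T in temps])]
print(best)   # peaks in the high 6000s K, close to the 7000K quoted above
```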

This variation in luminous efficacy shows up when we look at the magnitude scale used to measure the brightness of stars. The total energy output is measured by the star’s bolometric magnitude; its visible brightness by the visual magnitude. By convention, the visual and bolometric magnitudes of a star are about equal for temperatures in the vicinity of 7000K, near the peak of black body luminous efficacy. But above and below that temperature, the visual and bolometric magnitudes diverge, the difference between the two being called the bolometric correction.

For historical reasons, stellar magnitudes are measured on a logarithmic scale, with a change of 5 magnitudes reflecting a 100-fold change in brightness. And for really annoying historical reasons a decrease in magnitude reflects an increase in brightness—a star of magnitude -1 is brighter than a star of magnitude 0, which is in turn brighter than a star of magnitude 1. So visual magnitudes are always greater than bolometric magnitudes, either side of the 7000K peak.
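The magnitude arithmetic, as a minimal sketch:

```python
import math

def brightness_ratio(delta_mag):
    """How many times brighter a source is, for a given decrease in magnitude."""
    return 100 ** (delta_mag / 5)   # 5 magnitudes is exactly 100x in brightness

def magnitude_difference(ratio):
    """Magnitude change for a given brightness ratio (negative = brighter)."""
    return -2.5 * math.log10(ratio)

print(brightness_ratio(5))             # 100.0 (five magnitudes: a hundredfold)
print(round(brightness_ratio(1), 3))   # 2.512 (one magnitude: about 2.5x)
print(magnitude_difference(100))       # -5.0 (a hundredfold brightening)
```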

If we take a big chunk of something that behaves as a black body radiator, and heat it up so it progressively brightens, this is what happens to its bolometric and visual magnitudes (I’ve flipped the vertical axis to match intuition—magnitude values decrease towards the top, reflecting increasing brightness):

Bolometric versus Visual surface brightness for increasing temperature
Click to enlarge

The red bolometric line rises steadily on the logarithmic plot, at a slope of -10 magnitudes/decad (that is, it brightens through ten magnitudes for each tenfold increase in temperature). But the orange visual magnitude line starts low on the chart (reflecting the poor luminous efficacy of 3000K black bodies), rises to kiss the bolometric line at about 7000K, and then settles into a less steep rise. At high temperatures it becomes effectively straight, brightening at just -2.5 magnitudes/decad.
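Both slopes drop straight out of the physics: total output scales as T⁴, while deep in the long-wavelength tail of the spectrum the visible-band output grows only in proportion to T. A minimal sketch of the arithmetic:

```python
import math

def bolometric_mag_change(T1, T2):
    """Magnitude change when total output scales as T^4 (Stefan-Boltzmann)."""
    return -2.5 * math.log10((T2 / T1) ** 4)

def visual_mag_change_hot(T1, T2):
    """High-temperature limit: visible-band flux grows only as T."""
    return -2.5 * math.log10(T2 / T1)

# A tenfold rise in temperature:
print(bolometric_mag_change(3000, 30000))     # -10.0 magnitudes per decad
print(visual_mag_change_hot(30000, 300000))   # -2.5 magnitudes per decad
```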

THE APPEARANCE OF THE STARS

So in what follows, I’m going to treat the stars as if they are perfect black body radiators, which is a reasonable approximation that allows simple mathematical treatment.

We know that the frequency of all electromagnetic radiation emitted by a star will be Doppler-shifted by a factor of η for an observer aboard our spacecraft. That implies that the energy of each photon will be changed by a factor of η. The number of photons received in a given time period will also be changed by a factor of η, meaning that the energy received in a given time period will vary with η².

And we know that the apparent distance to the star will be changed by a factor of η, which means its apparent angular diameter will change in proportion to 1/η, and its angular area in proportion to 1/η². So, compared to the rest frame, the spacecraft observer receives η² times the energy from 1/η² times the area—implying that the radiance of the star (its surface energy output) varies as η⁴.

The frequency shift of η combined with the radiance change of η⁴ means that a given black body spectrum in the rest frame is Doppler-shifted to another black body spectrum in the moving frame—one that has a temperature of T′=ηT. The overall effect of the Doppler shift is simply to change the apparent temperature of the star!
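Written in frequency terms, this is the standard relativistic invariance of spectral radiance divided by frequency cubed: the boosted spectrum satisfies η³B(ν/η, T) = B(ν, ηT), which is itself a Planck curve at temperature ηT. A quick numerical check of that identity, with arbitrary sample values:

```python
import math

H, K, C = 6.626e-34, 1.381e-23, 2.998e8   # Planck, Boltzmann, speed of light

def planck_nu(nu, T):
    """Black body spectral radiance per unit frequency."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))

eta, T = 1.7, 6000.0            # arbitrary Doppler factor and rest temperature
for nu in (1e14, 5e14, 1e15):   # a few sample observed frequencies
    boosted = eta**3 * planck_nu(nu / eta, T)   # transformed rest-frame spectrum
    direct = planck_nu(nu, eta * T)             # Planck curve at apparent T' = ηT
    assert math.isclose(boosted, direct, rel_tol=1e-9)
print("Boosted Planck spectrum is a Planck spectrum at T' = ηT")
```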

So now we know that the Doppler-shifted colour of a star will still lie on that Planckian locus of black-body colours, simply shifting up or down the curve according to the value of η.

Planckian locus
Click to enlarge (source)

But the changing apparent size of the star, with total energy received by the moving observer varying as η² rather than η⁴, means I have to redraw my curves of bolometric and visual magnitude:

Apparent bolometric and visual magnitudes under Doppler + aberration
Click to enlarge

We’re heating the same black body radiator as in the previous example, so that its surface brightness increases, but varying its distance from us in proportion to the temperature—just as happens when Doppler effect and aberration work together on a star.

Now the bolometric (total energy) line has half its previous gradient, rising at just -5 magnitudes per decad (a change of five magnitudes for each 10-fold increase in temperature). The bolometric correction remains the same at every temperature, so the visual magnitude curve stays the same distance below the bolometric curve as it did in the previous graph, but that now means it takes a down-turn at higher temperatures, eventually dimming at 2.5 magnitudes per decad.
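That halved slope is just the η² energy scaling replotted against apparent temperature: apparent temperature is proportional to η, so a decad in apparent temperature is a tenfold step in η. A minimal sketch:

```python
import math

def bolometric_change_vs_eta(eta):
    """Received flux scales as η², so Δm = -2.5·log10(η²) = -5·log10(η)."""
    return -2.5 * math.log10(eta ** 2)

# A tenfold step in η (one decad of apparent temperature):
print(bolometric_change_vs_eta(10))   # -5.0 magnitudes per decad
```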

So a sufficiently large decrease in the Doppler factor η will reduce a star’s apparent temperature enough to red-shift it into invisibility; but a sufficiently large increase in η will increase a star’s apparent distance enough to blue-shift it into invisibility (despite its increasing surface brightness). How quickly these effects happen depends on the real temperature of the star, as measured in the rest frame. Here are plots of the change in visual magnitude against η for black body radiators of various temperatures. I’ve placed the spectral class of a corresponding star in brackets:

Visual magnitude change under Doppler + aberration
Click to enlarge

A cool 3000K M star will brighten dramatically, by five magnitudes, under blue shift; but it will dim equally dramatically under red shift. K, G, F (not shown) and A stars will brighten less on blue shift, but are slightly more resistant to the dimming effect of red shift. A star with a temperature of 16350K (around spectral class B4) is a special case—it will grow dimmer with either blue shift or red shift. Stars hotter than B4, like the 40000K O-class star shown here, will initially grow brighter under red shift, but only by half a magnitude or so.

So it works like this:

  • Any star will reach its maximum brightness when it has been Doppler-shifted to an apparent temperature of 16350K.
  • The slope on the red-shift side of 16350K is very steep—stars will change visual magnitude dramatically for relatively small changes of η in that region.
  • The slope on the blue-shift side of 16350K is relatively gentle—large changes in η result in fairly modest changes in visual magnitude.

So that’s it. Over three posts I’ve got to the point where we can predict the apparent position, colour and visual brightness of the stars as seen from a rapidly moving spacecraft.

My next post on this topic will exploit the 3-D space simulator Celestia to generate some views of the real sky, showing how it all fits together.