Category Archives: Phenomena

Keplerian Orbital Elements

1. All planets move in elliptical orbits, with the sun at one focus.
2. A line that connects a planet to the sun sweeps out equal areas in equal times.
3. The square of the period of any planet is proportional to the cube of the semimajor axis of its orbit.

Kepler’s Laws of Planetary Motion (formulated 1609-1619)

Okay, this is probably a bit niche, even by my standards, but it’s part of a longer project. I eventually want to write some more about the Apollo spacecraft, and the orbits they followed on their way to, and return from, the Moon. And the problem with that is that (for various good reasons) NASA didn’t document these orbits with a list of “orbital elements” that would allow the spacecraft trajectories in the vicinity of the Earth to be plotted easily. Instead, the flight documentation includes long tables of “state vectors”, listing the position and velocity of the spacecraft at various times—these are more accurate, but unwieldy to deal with. So in a future post I’m going to write about how to extract orbital elements from a few important state vectors. But first I need to describe the nature and purpose of the orbital elements themselves. Which is what I’m going to do in this post, hopefully enlivened by explanations of how the various orbital elements came by their rather odd names.

But first, the “Keplerian” bit. Johannes Kepler was the person who figured out that the planets move around the sun in elliptical orbits, and who codified the details of that elliptical motion into the three laws which appear at the head of this post. In doing that, he contributed to a progressive improvement in our understanding, which began with the old Greek geocentric model, which placed the Earth at the centre of the solar system with the planets, sun and moon moving in circles around it. This was replaced by Nicolaus Copernicus’s heliocentric model, which placed the sun at the centre, but retained the circular orbits. Kepler’s insight that the orbits are elliptical advanced things farther. (Next up was Isaac Newton, who provided the Theory of Universal Gravitation which explained why the orbits are ellipses.)

So Keplerian orbits are simple elliptical orbits.* They’re the sort of orbits objects would follow if subject to gravity from a single point source. In that sense, they’re entirely theoretical constructs, because real orbits are disturbed away from the Keplerian ideal by all sorts of other influences. But if we look at orbits that occur under the influence of one dominant source of gravity, and look at them for a suitably short period of time, then simple Keplerian ellipses serve us well enough and make the maths nice and simple. (And that’s what I’ll be doing with my Apollo orbits in later posts.)

Before going on, I’ll introduce a bit of necessary jargon. Henceforth, I’ll refer to the thing doing the orbiting as the satellite, and the thing around which it orbits as the primary. In Kepler’s original model of the solar system, the “satellites” are the planets, and the primary is the Sun; for my Apollo orbits, the satellites will be the spacecraft, and the primary is the Earth. Kepler’s First Law tells us that the primary sits at one focus of the satellite’s elliptical orbit. Geometrically, an ellipse has two foci, placed on its long axis at equal distances either side of the centre; only one of these is important for orbital mechanics. Pleasingly, focus is the Latin word for “fireplace” or “hearth”, so it seems curiously appropriate that the first such orbital focus ever identified was the Sun. Kepler’s Second Law tells us, in geometrical terms, that the satellite moves fastest when it’s at its closest to the primary, and slowest when it’s at its farthest. I’ll come to the Third Law a little later.

The Keplerian orbital elements are a set of standard numbers that fully define the size, shape and orientation of such an orbit. The name element comes from Latin elementum, which is of obscure etymology, but was used as a label for some fundamental component of a larger whole. We’re most familiar with the word today because of the chemical elements, which are the fundamental atomic building blocks that underlie the whole of chemistry.


The first pair of orbital elements define the size and shape of the elliptical orbit. (They’re called the metric elements, from Greek metron, “measure”.)

For size, the standard measure is the semimajor axis. An ellipse has a long axis and a short axis, at right angles to each other, and they’re called the major and minor axes. As its name suggests, the semimajor axis is just half the length of the major axis—the distance from the centre of the ellipse to one of its “ends”. It’s commonly symbolized by the letter a. The corresponding semiminor axis is b.

To put a number on shape, we need a measure of how flattened (or otherwise) our ellipse is—so some way of comparing a with b. For mathematical reasons, the measure used in orbital mechanics is the eccentricity, symbolized by the letter e. This has a rather complicated definition:

e=\sqrt{1-\frac{b^{2}}{a^{2}}}

But once we’ve got e, we can easily understand why it’s called eccentricity, because the distance from the centre of the ellipse to one of its foci turns out to be just a times e. Our word eccentricity comes from Greek ek-, “out of”, and kentron, “centre”. So it’s a measure of how “off-centre” something is. And multiplying the semimajor axis by the eccentricity does exactly that—tells us how far the primary lies from the geometric centre of the ellipse.
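If you’d like to check the sums yourself, here’s a minimal sketch in Python (the numbers are illustrative values of my own choosing):

```python
import math

def eccentricity(a, b):
    """Eccentricity of an ellipse with semimajor axis a and semiminor axis b."""
    return math.sqrt(1 - (b * b) / (a * a))

# An ellipse with a = 5 and b = 3 (arbitrary units):
a, b = 5.0, 3.0
e = eccentricity(a, b)   # 0.8
focus_offset = a * e     # 4.0: distance from the centre to each focus
print(e, focus_offset)
```

Note the degenerate case: when a equals b the ellipse is a circle, the eccentricity is zero, and the two foci merge at the centre.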

The metric orbital elements, a and e
Click to enlarge

For elliptical orbits, eccentricity can vary from zero, for a perfect circle, to just short of one, for very long, thin ellipses. (At e=1 the ellipse becomes an open-ended parabola, and at e>1 a hyperbola.)

Before I move on from the two metric elements, I should mention another concept that’ll be important later. The line of the major axis, which runs through the centre of the ellipse and the foci (marked in my diagram above), has another name specific to astronomy and orbital mechanics. It’s called the line of the apsides. Apsides is the plural of Greek apsis, which was the name of the curved sections of wood that were joined together to make the rim of a wheel. The elliptical orbit is deemed to have two apsides of special interest—the parts of the orbit closest to the primary (the periapsis) and farthest from the primary (the apoapsis), and these are joined by the line of the apsides.


Then there are three angular elements, which specify the orbit’s orientation in space. They’re specified relative to a reference plane and a reference longitude. A good analogy for this is how we measure latitude and longitude on Earth. To specify a unique position, we measure latitude north or south of the equatorial plane, and longitude relative to the prime meridian at Greenwich. For orbits around the Earth, like my Apollo orbits, the reference plane is the celestial equator, which is just the extension of the Earth’s equator into space. The reference longitude is called the First Point of Aries, for reasons I won’t go into here—it’s the point on the celestial equator where the sun appears to cross the equator from south to north at the time of the March equinox, and I wrote about it in more detail in my post about the Harvest Moon.

The first angular element is the inclination, symbolized by the letter i, which is the angle between the orbital plane and the reference plane. The meaning of its name is blessedly obvious, because it’s the same as in standard English.

Following its tilted orbit, the satellite will pass through the reference plane twice as it goes through one complete revolution—once heading north, and once heading south. These points are called the nodes of the orbit, from Latin nodus, meaning “knot” or “lump”. The northbound node is called the ascending node, and the southbound node is (you guessed) the descending node—names that reflect the “north = upwards” convention of our maps. The angle between the reference longitude and the ascending node of the orbit, measured in the reference plane, is called the longitude of the ascending node, symbolized by a capital letter omega (Ω), and it’s our second angular element.

Those two elements tell us the orientation of the orbital plane in space—how it’s tilted (inclination) and which direction it’s tilted in (longitude of the ascending node). Finally, we need to know how the orbit is positioned within its orbital plane—in which direction the line of the apsides is pointing, in other words. To do that job, we have our third and final angular element, the argument of the periapsis, which is the angle, measured in the orbital plane, between the ascending node and the periapsis, symbolized by a lower-case Greek omega (ω). The meaning of argument, here, goes back to the original sense of Latin arguere, “to make clear”, “to show”. That sense of argument found its way into mathematical usage, to designate what we’d now think of in computing terms as an “input variable”—a number that you need to know in order to solve an equation and get a numerical answer.

The angular orbital elements
Click to enlarge

Those five elements exactly define the size, shape and orientation of the orbit, and are collectively called the constant elements. In addition to those five, we need a sixth, time-dependent element, which specifies the satellite’s position in orbit at some given time. (The specified time, symbolized by t or t0, is called the epoch, from Greek epoche, “fixed point in time”.) There are actually a number of different time-dependent elements in common use, but the standard Keplerian version is the true anomaly, which is the angle (measured at the primary) between the satellite and the periapsis. Different texts use different symbols for this angle, most commonly a Greek nu (ν) or theta (θ).
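As a concrete summary of how the six elements pin down a position in space, here’s a minimal sketch in Python (the function name and argument names are my own): the polar equation of the ellipse gives the satellite’s distance from the primary at a given true anomaly, and the three angular elements then rotate that point from the orbital plane into the reference frame.

```python
import math

def position_from_elements(a, e, inc, raan, argp, nu):
    """Position (x, y, z) of a satellite relative to its primary, given the
    six Keplerian elements (angles in radians, distances in units of a)."""
    # Distance from the primary (sitting at one focus) at true anomaly nu:
    r = a * (1 - e * e) / (1 + e * math.cos(nu))
    # Position in the orbital plane, with the x-axis towards periapsis:
    x_orb = r * math.cos(nu)
    y_orb = r * math.sin(nu)
    # Rotate by argument of periapsis (argp), inclination (inc),
    # and longitude of the ascending node (raan):
    cO, sO = math.cos(raan), math.sin(raan)
    ci, si = math.cos(inc), math.sin(inc)
    cw, sw = math.cos(argp), math.sin(argp)
    x = (cO * cw - sO * sw * ci) * x_orb + (-cO * sw - sO * cw * ci) * y_orb
    y = (sO * cw + cO * sw * ci) * x_orb + (-sO * sw + cO * cw * ci) * y_orb
    z = (sw * si) * x_orb + (cw * si) * y_orb
    return x, y, z
```

Set e to zero and the distance r is always just a, whatever the true anomaly, which is the circular case; set nu to zero and r is a(1−e), the periapsis distance.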

To understand why it’s called an “anomaly”, we need to go back to the original geocentric model of the solar system. Astronomers knew very well that the planets didn’t move across the sky at the constant rate that would be expected if they were adhering to some hypothetical sphere rotating around the Earth. Sometimes Mars, Jupiter and Saturn even turned around and moved backwards in the sky! These irregularities in motion were therefore called anomalies, from the Greek anomalos, “not regular”. And there were two sorts of anomaly. The First or Zodiacal Anomaly was a subtle variation in the speed of movement of a planet according to its position among the background stars. The Second or Solar Anomaly was a variation that depended on the planet’s position relative to the Sun. Copernicus explained the Second Anomaly by placing the Sun at the centre of the solar system, because he realized that much of the apparent irregularity of planetary motion was due to the shifting perspective created by the Earth’s motion around the Sun. The First Anomaly persisted, however, until Kepler’s Second Law showed how it was due to a real acceleration as a planet moved through periapsis, followed by a deceleration towards apoapsis. Because this “anomaly” was a real effect linked to orbital position, the word anomaly became attached to the angular position of the orbiting body. And if you’re wondering why it’s called the “true” anomaly, that’s because there are a couple of other time-dependent quantities in use, which are computationally convenient and which are also called “anomalies”. But the true anomaly is the one that measures the satellite’s real position in space.


And those are the six standard orbital elements, together with their odd names. However, we generally need to know one more thing. Kepler’s Third Law applies to all orbits—the larger the semimajor axis, the longer it takes for the satellite to make one complete revolution, with a cube-square relationship. But for a given orbital size, the time for one revolution also depends on the mass of the primary. A satellite must move more quickly to stay in orbit around a more massive primary. So we need to specify the orbital period of revolution (variously symbolized with P or T) if we are to completely model our satellite’s behaviour. The word comes from Greek peri-, “around”, and hodos, “way”.
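For a primary of known mass, Kepler’s Third Law takes a tidy closed form: T = 2π√(a³/μ), where μ is the primary’s gravitational parameter (G times its mass). Here’s a quick sanity check in Python, using the standard value of μ for the Earth and a semimajor axis roughly that of a low parking orbit:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2: Earth's gravitational parameter (GM)

def orbital_period(a, mu=MU_EARTH):
    """Kepler's Third Law: period of an orbit with semimajor axis a (metres)."""
    return 2 * math.pi * math.sqrt(a ** 3 / mu)

# A low Earth orbit about 185 km up (a = Earth radius + altitude):
a = 6_371_000 + 185_000
print(orbital_period(a) / 60)   # roughly 88 minutes
```

The cube-square relationship falls straight out of this: quadruple the semimajor axis and the period grows by a factor of 4^(3/2), which is eight.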

So—six elements and a period. That’s what I’ll be aiming to extract from the Apollo documentation when I return to this topic next time.


* Parabolic and hyperbolic “orbits” are, strictly speaking, trajectories, since they don’t follow closed loops. The word orbit comes from the Latin orbis, “wheel”—so something that is round and goes round.
Periapsis and apoapsis are general terms that apply to all orbits. Curiously, they can have other specific names, according to the primary around which the satellite orbits. Most commonly you’ll see perigee and apogee for orbits around the Earth, and perihelion and aphelion for orbits around the Sun. See my post about the word perihelion for more detail.

Fata Morgana

Fata Morgana in Kolyuchin Inlet, Russian Far East
Click to enlarge

As the weary traveller sees
In desert or prairie vast,
Blue lakes, overhung with trees,
That a pleasant shadow cast;

Fair towns with turrets high,
And shining roofs of gold,
That vanish as he draws nigh,
Like mists together rolled,—

Henry Longfellow, “Fata Morgana” (1873)

I took the photograph above in Kolyuchin Inlet, in the Russian Far East, one evening in September 2016. The curious “objects” on the horizon are not clouds, as you might at first guess, but are an example of a kind of mirage called the Fata Morgana. Here’s a time-lapse film of the same phenomenon at Lake Michigan:

In both cases, we’re seeing distant land distorted by a complex mirage effect. The ice floes in the foreground of the video give a clue to the atmospheric conditions required for the Fata Morgana—there must be a layer of air at the surface that is considerably colder than the air aloft. This situation, the reverse of the usual condition in which the air becomes progressively cooler as one climbs higher, is called a temperature inversion. In Kolyuchin Inlet and Lake Michigan, the surface air was close to freezing point, but that’s not necessary for a Fata Morgana to appear. The important thing is the change in temperature with altitude, not the absolute temperature. Indeed, the Fata Morgana got its name in the Strait of Messina, the narrow channel that separates the island of Sicily from the “toe” of the Italian mainland. It’s not an area known for its ice-floes, but it is prone to temperature inversions. The proximity of two warm landmasses separated by a narrow channel of relatively cool water means that Sicilians can often observe this mirage effect distorting the mainland hills, while the residents of Reggio Calabria can watch the same thing happen to the Sicilian coastline.

La Fata Morgana is the Italian name of Morgan le Fay, a mythical adversary of King Arthur. From her origin in the Arthurian legends of Britain and France, she arrived in southern Italy along with the Normans, who established a short-lived kingdom in the region during the twelfth century. In her new Mediterranean home, Morgan was said to inhabit a castle that floated in the air. And so her name became attached to an optical phenomenon that occasionally produces the appearance of towers and battlements where none exist.

Here’s an example from Antarctica that shows how the Fata Morgana can convert the appearance of rounded hills into flat-topped towers:

Fata Morgana, Black Island
Click to enlarge
Photo credit: United States Antarctic Program

Everyone probably knows that mirages, of which the Fata Morgana is a particularly complex example, occur because light rays are following unusually curved paths through the atmosphere. It’s perhaps less well known that light generally follows a curved path through the atmosphere, induced by the drop in atmospheric pressure with height, which produces a corresponding change in the refractive index of air. This normal curvature of light rays is concave downwards, so it routinely brings into view objects that are actually below the level of the geometrical horizon. In particular, if we were to wait until the setting sun appeared to be resting exactly on the sea-level horizon, and then removed all the intervening air, we’d discover that the sun was already entirely below the “real” horizon. I’ve written a lot more about that in my post on the Shape Of The Low Sun. The curving rays also make visible distant landscape features that would be invisible below the horizon in the absence of the atmosphere.

A temperature inversion, in its simplest form, simply accentuates this natural concave curvature of light rays, as increasingly warmer air aloft further reduces the atmospheric pressure and the refractive index of the air. Such temperature inversions further delay the setting of the sun, and lift even more distant geographical features into view—for instance, the towers of Chicago occasionally pop into view from the opposite shore of Lake Michigan, a good 60 miles away. It’s been calculated that a rise in temperature of 0.11°C for every metre of altitude induces horizontal light rays to follow a path with a curvature equal to the curvature of the Earth. Under such conditions, the surface of the Earth appears flat, and an observer can see as far as atmospheric haze and intervening topography allow. There have long been speculations that this sort of “Arctic mirage” informed the early voyages of discovery in the North Atlantic—allowing the first inhabitants of the Faroes to glimpse Iceland, and the early Icelanders to occasionally discern Greenland.

But temperature gradients much steeper than the critical 0.11°C/m exist, locally and over a few tens of metres altitude, at times when the Fata Morgana is visible. Under these circumstances, light rays emitted upwards by an object on the surface of the Earth can follow an arching course that brings them down to meet the eye of an observer some kilometres away. Like this:

Simple superior mirage
Click to enlarge

The vertical scale of my diagram is typically a few tens of metres; the horizontal a few tens of kilometres—so you must imagine these trajectories stretched out a thousandfold from side to side.

If the temperature increases in a roughly linear way with altitude, the trajectories of the emitted light rays are all roughly the same shape. As a result, only one ray (the red one in my diagram) connects the object to the observer’s eye. So this sort of mirage merely lifts the image of the distant object so that it appears to sit higher in the sky than usual. At the extreme, sailors traversing a cold sea with warm air aloft can get the visual impression that their ship is sitting at the bottom of a bowl. In the parlance of English-speaking mariners, this effect was aptly named looming. And it was also known to those archetypal cold-water sailors, the Vikings, who called it hillingar. (Cleasby and Vigfusson’s dictionary of Old Icelandic translates this word as “an upheaving”.) So the general phenomenon is sometimes referred to as the hillingar effect.

The Fata Morgana requires a rather more dramatic temperature gradient, however—one in which the temperature changes abruptly over a short change of altitude, at a junction called a thermocline. Like this:

Relevance of thermocline to Fata Morgana formation
Click to enlarge

The thermocline provides such a range of temperature gradients in a relatively short span of altitudes that it can deflect a correspondingly wide range of ascending light-rays downwards towards a ground-level observer, acting like a sort of mirror in the sky. Like this:

Fata Morgana formation (1)
Click to enlarge

So now our observer sees two images of the same object. The lower ray (let’s call it the direct ray) may or may not produce a noticeable hillingar effect, but the upper ray (call it the reflected ray) certainly produces an out-of-place image floating above the first.
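This “mirror in the sky” behaviour can be sketched numerically. In a horizontally stratified atmosphere, a near-horizontal ray obeys a Snell-type invariant: n(h)·cos(θ) stays constant along the ray, so a ray launched upwards at elevation θ turns back down wherever the refractive index has fallen to its launch value of n·cos(θ). Here’s a toy model in Python (all the numbers are my own illustrative choices, not measured values), with a thermocline that drops the refractive index by a few parts in ten million between 40 and 60 metres:

```python
import math

def n_of_h(h):
    """Toy refractive-index profile: constant below a thermocline at 40-60 m,
    across which n falls by 3e-7, and constant again above it."""
    n_surface = 1.000290
    drop = 3e-7
    if h <= 40:
        return n_surface
    if h >= 60:
        return n_surface - drop
    return n_surface - drop * (h - 40) / 20  # linear ramp through the thermocline

def turning_height(elev_deg, h_max=100.0, step=0.01):
    """Height at which an upward ray launched from the surface at elevation
    elev_deg turns over, or None if it escapes the top of the model."""
    invariant = n_of_h(0.0) * math.cos(math.radians(elev_deg))
    h = 0.0
    while h < h_max:
        if n_of_h(h) <= invariant:   # the ray has reached its turning level
            return h
        h += step
    return None

print(turning_height(0.03))   # shallow ray: turned back within the thermocline
print(turning_height(0.3))    # steeper ray: escapes upwards (None)
```

Shallow rays are turned back down towards a distant observer; steeper rays punch through the thermocline and are lost, which is why the effect only operates close to the horizon.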

And the thermocline really does act like a mirror. Here’s a diagrammatic plot adding the trajectories of light rays originating from a higher position on our distant object (marked in red):

Fata Morgana formation (3)
Click to enlarge

The direct rays maintain their relative positions on their way to the observer’s eye, so that the higher parts of a distant object appear above its lower parts, in the usual way. But the reflected rays cross each other, so that the image of the higher parts arrives at the observer’s eye from a direction lower than the image of its lower parts. So we see an inverted image hovering disconcertingly above an upright image, like this:

Diagram of Fata Morgana mirage (1)

But there’s more. For objects at a specific distance and height above the horizon, the range of temperature gradients associated with the thermocline can deflect multiple rays from the same source into the observer’s eye. Like this:

Fata Morgana formation (2)
Click to enlarge

That means that part of the direct image can appear to be smeared vertically:

Diagram of Fata Morgana mirage (3)

This phenomenon has another name that comes to us from the Vikings—the hafgerdingar effect. The original Norse word is haf-gerðingar, variously translated as “sea-fences” or “sea-hedges*”, which certainly fits the mirage appearance. But the Vikings may not actually have been referring to the mirage that now has this name; it’s possible they were talking about some sort of real, dangerous oceanic wave. The phenomenon is only ever discussed in an Old Norse text called Konungs skuggsjá (“The King’s Mirror”), which is far from clear:

Now there is still another marvel in the seas of Greenland, the facts of which I do not know precisely. It is called “sea hedges,” and it has the appearance as if all the waves and tempests of the ocean have been collected into three heaps, out of which three billows are formed. These hedge in the entire sea, so that no opening can be seen anywhere; they are higher than lofty mountains and resemble steep, overhanging cliffs. In a few cases only have the men been known to escape who were upon the seas when such a thing occurred. But the stories of these happenings must have arisen from the fact that God has always preserved some of those who have been placed in these perils, and their accounts have afterwards spread abroad, passing from man to man. It may be that the tales are told as the first ones related them, or the stories may have grown larger or shrunk somewhat. Consequently, we have to speak cautiously about this matter, for of late we have met but very few who have escaped this peril and are able to give us tidings about it.

But whatever the original haf-gerðingar was, it’s the merging of the hafgerdingar effect with the inverted reflected image that produces the full Fata Morgana appearance, like this:

Diagram of Fata Morgana mirage (2)

Added to this appearance of distant, fluted towers and battlements, there’s a degree of animation to the Fata Morgana, because the thermocline is never entirely still. If you watch the time-lapse video near the head of this post carefully, you’ll be able to see the occasional wave running along the top of the mirage, produced by real wind-driven waves in the thermocline itself. These produce a gentle billowing effect at the upper margin of the miraged image, which on occasion can look like banners wafting gently in the wind.

So that’s an “edited highlights” explanation of the appearance of the Fata Morgana. Proper detailed treatments are hard to come by, I find, and much of the early mathematical analysis was published in German. So my primary reference has been one of the documents published in English by the Norwegian Polar Institute in 1964, reporting the scientific findings of the Norwegian-British-Swedish Antarctic Expedition of 1949-1952. It’s entitled Refraction Phenomena In The Polar Atmosphere (Maudheim 71° 03´S, 10° 56´W), and written by G.H. Liljequist.

* I’m somewhat surprised that there’s an Old Norse word for “hedge”. It’s like discovering that the Vikings had words for “herbaceous border” and “ornamental water feature”.


More About Converging Rainbows

Reflected-light rainbows over Morecambe Bay, by Mick Shaw
Click to enlarge
Photo credit: Mick Shaw

A couple of months ago I received this lovely picture from Mick Shaw, which I use with his permission. The sun is reflecting off a thin layer of sea-water covering the sand-flats of Morecambe Bay, and producing a pair of reflected-light rainbows in tandem with the usual primary and secondary arcs.

Reflected-light rainbows were the subject of one of my first posts on this blog, back in 2015, when I posted my own photograph of the phenomenon:

Double rainbow from reflected sun
Click to enlarge

In that photograph, you can appreciate how still the waters of the Tay estuary were—an essential condition for this sort of rainbow to form, because the water surface has to be flat enough to reflect a near-perfect image of the sun.

Once the water surface reflects a clear image, falling drops of rain are illuminated by two suns—one above the horizon, and one below. So two rainbows can form, each centred on an antisolar point—the point directly opposite the sun. Rainbows formed by direct sunlight are centred on a point as far below the horizon as the sun is above it—this angular distance between sun and horizon is called the solar altitude, in astronomical jargon, and that’s what I’ll call it from now on. But the reflected-light rainbow arcs are centred on an antisolar point that is at the same height in the sky as the sun. Like this:

How double rainbows form

Rainbows form a circle around the antisolar point, but the sections of their arc that fall below the horizon are generally invisible, unless the observer is in a high place looking down on clouds. The reflected-light rainbow forms an arc that is identical in shape and size to the portion of the normal rainbow that lies below the horizon, except it is flipped vertically to sit above the horizon. So the two arcs, from direct and reflected sunlight, converge exactly at the horizon, forming a prominent lopsided “V” shape, evident in the photographs above.

A little bit of spherical trigonometry lets me plot the divergence between our two intersecting rainbows—that is, the angle at the base of the “V”. Here it is for the primary rainbows:

Divergence of direct and reflected-light rainbows at horizon, by solar altitude

With a solar altitude of zero, there is no divergence. The sun is sitting on the horizon, and the direct and reflected antisolar points have merged on the opposite horizon, so that the direct and reflected-light rainbows are precisely superimposed, forming a single semicircular arc. With the sun farther above the horizon, the “V” between the two rainbows opens up steadily, until suddenly shooting up towards 180 degrees as the sun approaches an altitude of 42 degrees. This critical angle of 42 degrees corresponds to the radius of the primary rainbow. With the antisolar points 42 degrees above and below the horizon, the two rainbows are circles that touch each other at the horizon, with the primary rainbow invisible below the horizon, and the reflected-light rainbow forming a complete circle in the sky, sitting on the horizon. Like this:

Direct and reflected light primary rainbows at solar altitude of 42 degrees
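For those who want to reproduce the divergence graph, here’s my reconstruction of the spherical trigonometry in Python. Place the direct antisolar point at altitude −h and its reflected counterpart at +h, find where the 42-degree rainbow circle crosses the horizon, and note that, by mirror symmetry, the two bows’ tangents at the crossing point make equal angles with the vertical on opposite sides, so the angle at the base of the “V” is twice that angle:

```python
import math

def divergence(solar_alt_deg, radius_deg=42.0):
    """Angle (degrees) at the base of the "V" between the direct and
    reflected-light rainbows, for solar altitudes below the bow radius."""
    h = math.radians(solar_alt_deg)
    r = math.radians(radius_deg)
    # Direct antisolar point, at altitude -h (azimuth taken as zero):
    c = (math.cos(h), 0.0, -math.sin(h))
    # The rainbow circle meets the horizon (z = 0) where p . c = cos(r):
    x = math.cos(r) / math.cos(h)
    y = math.sqrt(max(0.0, 1.0 - x * x))
    p = (x, y, 0.0)
    # Tangent to the rainbow circle at p is the cross product c x p:
    t = (c[1] * p[2] - c[2] * p[1],
         c[2] * p[0] - c[0] * p[2],
         c[0] * p[1] - c[1] * p[0])
    mag = math.sqrt(sum(v * v for v in t))
    # Each bow's tangent makes the same angle with the vertical, on opposite
    # sides, so the divergence is twice that angle:
    return math.degrees(2 * math.acos(min(1.0, abs(t[2]) / mag)))

print(divergence(0.0))    # superimposed bows: 0 degrees
print(divergence(30.0))
print(divergence(41.9))   # approaching 180 degrees
```

The limiting behaviour checks out: at zero altitude the two bows coincide and the divergence vanishes, and as the altitude approaches the 42-degree bow radius the divergence shoots up towards 180 degrees.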

Have you ever seen such a thing? Me neither. So I began to wonder why that doesn’t seem to happen.

There are a couple of practical considerations that strongly limit our opportunities to see a circular reflected-light rainbow entirely above the horizon. One is that rainbows are huge. A circular primary rainbow would stretch from the horizon almost all the way to the zenith. We’re used to the upper part of even a normal rainbow fading out, as it extends from a region of sky near the horizon where we have long sight-lines through falling rain, producing bright rainbow “legs”, into a region where our sight-lines are blocked by the rain-clouds themselves, so that the upper part of the arc is faint or invisible, like this:

Rainbow at sunset, Kylerhea
Click to enlarge
Rainbow at sunset, Kylerhea, © 2021, The Oikofuge

Compared to the sunset bow above, a full circular rainbow would extend twice as far upwards into the sky, taking it into regions that normally provide few raindrops along our line of sight.

There’s also the issue that a reflected-light rainbow needs a sizeable reflective surface. Areas simultaneously large enough and calm enough to produce a complete circular rainbow are probably fairly rare.

But there’s a more fundamental reason that precludes our seeing a perfect circular reflected-light rainbow, hanging in the sky. It has to do with the reflective properties of water. The amount of light a flat water surface reflects depends on the angle at which the incoming light hits the water surface. Here’s a graph of the amount of reflection of (unpolarized) sunlight according to solar altitude:

Proportion of sunlight reflected by water surface, by solar altitude

You can see that, by the time the sun is thirty degrees above the horizon, its reflected image is only a fraction of the brightness of the real sun—about 6%, if we do the sums. So the reflected-light rainbow will be comparably reduced in brightness.
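That graph can be reproduced from the standard Fresnel equations, treating sea-water as having a refractive index of about 1.33. Here’s a sketch in Python:

```python
import math

def water_reflectance(solar_alt_deg, n=1.33):
    """Fraction of unpolarized light a flat water surface reflects,
    for light arriving from a source at the given altitude (degrees)."""
    theta_i = math.radians(90.0 - solar_alt_deg)   # angle from the vertical
    theta_t = math.asin(math.sin(theta_i) / n)     # Snell's law
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    r_s = ((ci - n * ct) / (ci + n * ct)) ** 2     # s-polarization
    r_p = ((ct - n * ci) / (ct + n * ci)) ** 2     # p-polarization
    return 0.5 * (r_s + r_p)                       # unpolarized average

print(water_reflectance(30))   # about 0.06, the "6%" quoted above
print(water_reflectance(0))    # grazing incidence: essentially total reflection
```

The same two components, r_s and r_p, are what drive the polarization effect discussed below: near Brewster’s angle the p-component all but vanishes, so what little light is reflected is strongly polarized.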

Another, more minor, influence on the brightness of the reflected-light rainbow is that the light reflected from a water surface is usually quite strongly polarized:

Degree of polarization of sunlight reflected from water

Now, as I described in my post about rainbow rays, the repeated process of reflection and refraction inside a raindrop means that the rainbow itself is polarized, in a fashion that follows the arc of the rainbow. The horizontal top of the rainbow arc is polarized in the same sense as the reflected light from the water surface; the vertical sides of the rainbow are polarized transversely relative to the reflected light. So the upper part of a reflected-light rainbow should be a little brighter than its “legs”, if all other things are equal (which they’re generally not).

Here’s what happens when I feed the reflected light from a flat water surface into the light-path of the primary “rainbow rays”:

Brightness of reflected-light rainbow arcs, by solar altitude

The centre of the graph corresponds to the sunlight that reaches us after reflecting off the water surface and then passing through the horizontal top of the rainbow arc. The sides of the graph show the same information for the vertical “legs” on either side of the rainbow.

At the top of the graph is the line corresponding to a solar altitude of zero. With the sun on the horizon, the water surface reflects all the light that falls on it, so there’s no polarization—illumination from the reflected sun is the same as that from the real sun, and our reflected rainbow is as bright as a normal rainbow (though that means only about 4.5% of light rays striking the reflecting surface find their way back to our eyes in the rainbow).

But the rapid fall in reflection from the water surface with increasing solar altitude means that our reflected-light rainbow fades out quickly. The onset of polarization also means that the “legs” of the rainbow grow fainter faster than does the top of its curve. With the sun just 10 degrees above the horizon, the reflected-light rainbow is already fainter than a normal secondary rainbow, which we know is frequently faint or invisible.

So our best chance of seeing a reflected-light rainbow occurs when the sun is close to the horizon, because these rainbows fade into invisibility as the sun gets higher. We generally see them as they appear in the two photographs at the head of this post—as a fainter rainbow that echoes the “leg” of a normal rainbow, while converging with it at the horizon. The supposedly brighter top of the reflected-light arc is often invisible, for the same reason that the top of a conventional rainbow is often invisible—because it extends higher than the region in which long sight-lines extend under the rain-clouds. So I’ve never seen the upper arc of a reflected bow, but I live in hope.

Same Sun, Other Skies

Cover of Russian edition of Asimov's "Lucky Starr and the Big Sun of Mercury"
Russian edition of Isaac Asimov’s “Lucky Starr And The Big Sun Of Mercury”
(Chosen because, of all the editions of this novel, it does the best job of delivering exactly what the title says.)

A section of the horizon was etched sharply against a pearly region of the sky. Every pointed irregularity of that part of the horizon was in keen focus. Above it, the sky was in a soft glow (fading with height) a third of the way to the zenith. The glow consisted of bright, curving streamers of pale light.
“That’s the corona, Mr. Jones,” said Mindes.
Even in his astonishment Bigman was not forgetful of his own conception of proprieties. He growled, “Call me Bigman.” Then he said, “You mean the corona around the Sun? I didn’t think it was that big.”
“It’s a million miles deep or more,” said Mindes, “and we’re on Mercury, the planet closest to the Sun. We’re only thirty million miles from the Sun right now. You’re from Mars, aren’t you?”
“Born and bred,” said Bigman.
“Well, if you could see the Sun right now, you’d find it was thirty-six times as big as it is when seen from Mars, and so’s the corona. And thirty-six times as bright too.”
Lucky nodded. Sun and corona would be nine times as large as seen from Earth.

Isaac Asimov, Lucky Starr And The Big Sun Of Mercury (1956)

That’s Isaac Asimov (writing under the pseudonym “Paul French”), being very Asimov about things, in one of his “Lucky Starr” science fiction juveniles. In Asimov stories, characters explain things to each other quite often; in his “Lucky Starr” stories, doubly so. This particular passage introduces my theme for this post—the Sun as seen from other planets of the Solar System.

First, there’s some basic geometry to deal with. The farther a planet is from the Sun, the smaller the Sun will appear in the planet’s sky—there’s a simple inverse relationship between planetary distance and the apparent width of the solar disc as seen from that planet. But it’s the apparent area of the solar disc that determines how much light and heat the planet receives from the Sun—that bears an inverse-square relationship to the distance between planet and Sun.

We can distil that down into a simple table if we list the average distance at which each planet orbits the Sun, giving that figure in Astronomical Units, one AU being the Earth’s orbital radius. In the table below, the second column of numbers, indicating the apparent width of the solar disc, is simply the inverse of the first column; the third column, showing the apparent area of the disc, is the square of the second. (I’ve rounded the numbers, so the relationship between the tabulated figures isn’t exact.)

Planet Sun dist. (x Earth) Sun width (x Earth) Sun area (x Earth)
Mercury 0.3871 2.58 6.67
Venus 0.7233 1.38 1.91
Earth 1.0000 1.00 1.00
Mars 1.5237 0.66 0.43
Jupiter 5.2029 0.19 0.037
Saturn 9.5367 0.10 0.011
Uranus 19.1892 0.052 0.0027
Neptune 30.0699 0.033 0.0011
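For anyone who wants to check or extend the table, the arithmetic is easily scripted. Here’s a minimal Python sketch, using the mean orbital radii from the table above:

```python
# Apparent width and area of the solar disc, relative to Earth's view,
# from each planet's mean distance to the Sun (in Astronomical Units).
# Width scales as 1/distance; area (and hence light and heat) as 1/distance**2.
mean_distance_au = {
    "Mercury": 0.3871, "Venus": 0.7233, "Earth": 1.0000, "Mars": 1.5237,
    "Jupiter": 5.2029, "Saturn": 9.5367, "Uranus": 19.1892, "Neptune": 30.0699,
}

for planet, d in mean_distance_au.items():
    width = 1.0 / d    # apparent width of the solar disc, x Earth
    area = width ** 2  # apparent area of the solar disc, x Earth
    print(f"{planet:8s} {d:8.4f} {width:6.3f} {area:8.4f}")
```

The printed figures reproduce the table (give or take rounding in the last digit).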

That’s not the full story, however, because the planets have elliptical, rather than circular, orbits. So the apparent width of the solar disc will vary somewhat around the mean values listed above, getting larger and smaller during the course of each planet’s year. For most planets, the change is very slight. From the Earth, for instance, the solar disc has an average apparent width of 32 minutes of arc, which increases by about half a minute of arc in January, when the Earth makes its closest approach to the Sun (its perihelion), and correspondingly decreases by about half a minute in July, when the Earth is at the farthest point in its orbit (the aphelion). It’s not a particularly noticeable change. But two planets, Mercury and Mars, have significantly elliptical orbits, and I can improve my table by listing maximum and minimum values for their solar discs.

Planet Sun dist. (x Earth) Sun width (x Earth) Sun area (x Earth)
Mercury 0.3871 2.14 – 3.25 4.59 – 10.58
Venus 0.7233 1.38 1.91
Earth 1.0000 1.00 1.00
Mars 1.5237 0.60 – 0.72 0.36 – 0.52
Jupiter 5.2029 0.19 0.037
Saturn 9.5367 0.10 0.011
Uranus 19.1892 0.052 0.0027
Neptune 30.0699 0.033 0.0011
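The minimum and maximum figures for Mercury and Mars can be reproduced from their orbital eccentricities. The eccentricity values below (Mercury e ≈ 0.2056, Mars e ≈ 0.0934) aren’t stated in the post, so treat this as a sketch using standard figures: distance runs from a(1−e) at perihelion to a(1+e) at aphelion, and apparent width goes as the inverse of distance.

```python
def disc_extremes(a_au, e):
    """Return (min_width, max_width) of the solar disc, relative to Earth's view,
    for a planet with semimajor axis a_au (in AU) and orbital eccentricity e."""
    width_at_aphelion = 1.0 / (a_au * (1 + e))    # smallest disc, farthest point
    width_at_perihelion = 1.0 / (a_au * (1 - e))  # largest disc, closest point
    return width_at_aphelion, width_at_perihelion

# Standard eccentricities (assumed, not from the post):
for name, a, e in [("Mercury", 0.3871, 0.2056), ("Mars", 1.5237, 0.0934)]:
    w_min, w_max = disc_extremes(a, e)
    print(f"{name}: width {w_min:.2f}-{w_max:.2f}, area {w_min**2:.2f}-{w_max**2:.2f}")
```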

The area of the solar disc (and therefore the light and heat from the Sun) varies more than two-fold during the course of a Mercurian year! Mars undergoes a more modest change, but the area of its solar disc still grows by more than 40% as the planet moves from the farthest to the closest point in its orbit.

Another thing we can tell from my table above is that Asimov got his numbers wrong. (Now, there’s a phrase I thought I’d never write.) We can defend his characters’ claim that the solar disc appears nine times larger on Mercury than on Earth, if we assume they’re talking about its apparent area at a time when Mercury was close to perihelion. But there are no circumstances under which the Mercurian solar disc can appear even thirty times larger than that seen on Mars, let alone thirty-six times.

Before I go through the list of planets in more detail, I need to give a couple of definitions. The apparent surface brightness of the solar disc is called its luminance, and (perhaps counterintuitively) it doesn’t change with distance within the planetary system. The reason the solar disc sheds less light on more distant planets is that it is smaller, as detailed in my table above, not because it’s dimmer, area for area. The amount of light a planet receives from the Sun is called illuminance, and it’s what gives us the sense of whether our surroundings are dimly or brightly lit, and what determines the settings our cameras need to use to get a properly exposed picture. The SI unit of illuminance is the lux (say “looks”, not “lucks”), and sunlight on a clear day on Earth provides about 100,000 lux.
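The round illuminance figures quoted for the outer planets below can be reproduced by inverse-square scaling from the Sun’s illuminance above Earth’s atmosphere. The 130,000 lux figure at 1 AU is a commonly quoted value that the post doesn’t state, so treat this as my assumption:

```python
# Illuminance from the Sun in space (before any atmosphere), assuming
# ~130,000 lux at 1 AU (exoatmospheric value; an assumption, not from the post)
# and inverse-square falloff with distance.
SOLAR_ILLUMINANCE_1AU = 130_000  # lux

def solar_lux(distance_au):
    return SOLAR_ILLUMINANCE_1AU / distance_au ** 2

for planet, d in [("Jupiter", 5.2029), ("Saturn", 9.5367),
                  ("Uranus", 19.1892), ("Neptune", 30.0699)]:
    print(f"{planet}: {solar_lux(d):,.0f} lux")
```

This yields roughly 5,000 lux at Jupiter, 1,400 at Saturn, 350 at Uranus and 140 at Neptune, matching the figures in the sections below.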

Now, let’s go through my list of planets, one at a time:

Mercury

Sunlight on Mercury is going to be brighter than anything we experience on Earth—five to ten times brighter. But, although science-fiction illustrators tend to depict the Mercurian sun as huge in the sky, it wouldn’t actually be that large. You can easily cover the solar disc, seen from Earth, with just the tip of your little finger. The Mercurian sun could be obscured with a couple of finger-tips, even at its largest.

Mercury’s slow rotation has an interesting effect on its daylight. It’s locked into what’s called a spin-orbit resonance. One Mercurian “year” lasts 87.97 Earth days; one Mercurian rotation takes 58.65 Earth days, meaning that the planet rotates exactly three times on its axis for every two orbits around the Sun. This means that, for any point in Mercury’s orbit, the planet returns to that point one orbit later with an extra half-rotation. So if some point on the surface experiences noon at a particular orbital location, it’ll experience midnight when it returns to that orbital location after one Mercurian year, and then noon again the next year, and so on. If it’s noon at a particular location when Mercury is passing through perihelion, it will be noon during perihelion every two Mercurian years (and midnight on the alternate years). So there are two points on the Mercurian equator, on opposite sides of the planet, that experience the brunt of the solar heating during Mercury’s closest approaches to the Sun. One of these hot points, on the equator at 180 degrees longitude, lies some distance to the southeast of a huge Mercurian impact basin, which has accordingly been named Caloris Planitia, or “Plain of Heat”.
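The two-Mercurian-year cycle from noon to noon follows directly from those two periods. The sun’s apparent drift across Mercury’s sky is governed by the difference between the rotation rate and the orbital rate, which a quick sketch confirms:

```python
# Length of the Mercurian solar day (noon to noon). With a sidereal
# rotation of 58.65 Earth days (prograde) and a year of 87.97 Earth days,
# the sun's apparent motion is the difference of the two angular rates.
rotation = 58.65  # Earth days, sidereal rotation period
year = 87.97      # Earth days, orbital period

solar_day = 1.0 / (1.0 / rotation - 1.0 / year)
print(f"Mercurian solar day: {solar_day:.1f} Earth days")
print(f"...which is {solar_day / year:.3f} Mercurian years")
```

The solar day comes out at almost exactly two Mercurian years, which is why noon at a given spot recurs at the same orbital position only on alternate orbits.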

There’s more fun to be had with Mercury’s spin-orbit coupling and eccentric orbit, but it strays away from the chosen topic of this post—I’ll come back to it another time, perhaps.

Venus

Venus, at about three-quarters of Earth’s distance from the Sun, sees the solar disc about a third larger than it appears from Earth. So light levels in orbit around Venus are close to double what we’d experience while in orbit around the Earth, and about two-and-a-half times the illuminance at the surface of the Earth on a clear summer’s day. That difference occurs because our atmosphere, even on a clear day, absorbs and reflects some sunlight before it arrives at the surface—but not nearly as much as the atmosphere of Venus, the surface of which lies under a perpetual dense overcast.

The light level at the surface of Venus, under all that cloud, was measured by the Venera series of Soviet-era landers. Venera-9, which made the first successful landing in a location with the Sun high in the Venusian sky, reported an illuminance of 14,000 lux, which is about what you receive if you stand in the shade under a bright, blue sky on Earth. Results from Venera-13 and Venera-14 were a bit lower, if the numbers quoted in a slightly batty paper entitled “Hypothetic Flora of Venus” can be considered reliable. Again with the Sun fairly high in the sky, the Venusian light level reached a value of 3,500 lux, about a thirtieth of a sunny day on Earth, and representing just a seventieth of what arrives at Venus’s cloud-tops. Even that low figure can be considered bright, by Earthly standards. 3,500 lux is the equivalent of the illuminance provided by the lights positioned above surgical operating tables, sufficient to carry out extremely fine work. (Our eyes adapt readily to a range of lighting conditions, and most of us barely notice that the normal level of illuminance indoors is usually at least a hundred times lower than that outdoors.)

Not that we’d be doing much fine work on the surface of Venus, given that the massive greenhouse effect from its dense atmosphere pushes the surface temperature up over 450ºC. (It’s traditional at this point to say “hot enough to melt lead”. Consider it said.) But if we were standing on the surface, the illumination would be equivalent to that of a diffusely lit but extremely bright room. A day-night cycle would last about 117 Earth days (Venus rotates very slowly), and the sun would rise in the west and set in the east (Venus has retrograde rotation). But we wouldn’t be able to determine the exact moment of sunrise or sunset, since the location of the Sun would be no more than a brighter patch in the sky—“a smear of light”, says NASA, though I haven’t been able to track down a simulation of Venus’s atmosphere that might provide more detail on that.

Mars

Because of Mars’s elliptical orbit, the solar disc varies in apparent size with the seasons, ranging between about 60% and 70% of its width seen from Earth. This means the illuminance at the surface of Mars on a clear day can vary between 47,000 and 68,000 lux—to the adaptable human eye, indistinguishable from daylight on Earth. The solar disc appears largest when Mars is at perihelion during its southern hemisphere summer, and smallest during southern hemisphere winter. The southern hemisphere therefore experiences more extreme seasonal temperature changes than the north.

The Martian day is not much different in duration from an Earth day—about 40 minutes longer—and planetary scientists who study Mars refer to the Martian day as a “sol” (from the Latin for “sun”) to avoid confusion. There’s no confusing a Martian sunset with an Earth sunset, however:

The dominant colours are reversed, with a blue sun in a red sky. The effect is due to the particular size of fine dust particles suspended in Mars’s atmosphere, which produces a diffraction pattern that preferentially reinforces the forward-scattering of blue light. (A similar effect is sometimes produced by smoke aerosols on Earth.) You can find the detailed optical explanation here.

Jupiter

The next planet out from the Sun is Jupiter. While there’s nowhere for an observer to stand on a gas giant planet, it has a retinue of moons that might provide convenient locations from which to observe the Sun.

Such hypothetical observers would see a solar disc shrunk to about a fifth of its width as seen from Earth, providing only about a twenty-fifth of the light. That is, however, still about 5,000 lux—a not-too-overcast day on Earth, and easily sufficient illumination for the finest of work. And still comparable to the surface of Venus.

Saturn

By the time we reach Saturn, the solar disc has a tenth of its Earthly width, and sheds about a hundredth of the light—1,400 lux, which is the equivalent of a solidly overcast day on Earth, or the sort of lamp one uses for fine work indoors. Still not particularly dim, then.

Again, our hypothetical observer would need to station themselves on one of Saturn’s moons in order to have a clear view of the Sun. Any moon would do, with the exception of Titan, which is swathed in a thick atmosphere. And we have some pretty detailed calculations of what the Sun would look like from the surface of Titan. I quote from the linked paper:

At visible wavelengths, the sky appears as nearly featureless orange soup most of the time, with little if any increased brightness toward the Sun’s azimuth.

My linked paper also provides some helpful graphs, suggesting that the overall reduction in illuminance when the Sun is overhead on Titan is somewhere between five-fold and ten-fold—so down to around 150-300 lux, in round numbers. That’s what we get under the absolute densest of massive cumulonimbus storm clouds on Earth, and it matches the range of lighting used in corridors and stairwells indoors. So some of us would need to reach for our reading glasses on even the brightest day on Titan. The illuminance is cut a hundred-fold by the time the Sun has reached the horizon on Titan (say 15 lux, getting down to the limit for reading newsprint), and the disc of the setting sun itself would be absolutely invisible. But Titan’s atmosphere scatters light so well that the sky would continue to illuminate the landscape with full-moon brightness (about 0.2 lux) even when the sun was thirty degrees below the horizon.

Uranus

At Uranus, the solar disc has a twentieth of its width on Earth, and provides only around 350 lux—a little brighter than the surface of Titan, but still a profoundly overcast day on Earth.

Neptune

By the time we reach the outermost planet of the Solar System, the solar disc is down to a thirtieth of the width we see on Earth, and provides just 140 lux. Still, that’s the equivalent of 700 full moons, and you’d have no more difficulty finding your way about in the vicinity of Neptune than you’d have finding your way down a stairwell on Earth.

We reach a significant threshold at Neptune, however. The solar disc is now just one minute of arc across, which is the limit of resolution of the human eye. Looking at the sky from one of Neptune’s moons, the Sun would appear as an eye-wateringly intense point of light, rather than a clear disc. But it would have the same surface brightness as the Sun seen from Earth—you could still damage your retina by staring at it with the naked eye (though the naked eye would of course not be an option out at the edge of the Solar System).
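The one-arcminute threshold is easy to verify, taking the solar disc as 32 minutes of arc wide seen from Earth, as quoted earlier:

```python
# Apparent angular width of the solar disc from a given distance,
# taking the disc as 32 arc-minutes wide from Earth (1 AU).
EARTH_DISC_ARCMIN = 32.0

def disc_arcmin(distance_au):
    return EARTH_DISC_ARCMIN / distance_au

print(f"Saturn:  {disc_arcmin(9.5367):.1f} arcmin")
print(f"Neptune: {disc_arcmin(30.0699):.2f} arcmin")
```

At Neptune’s distance the disc comes out at just over one minute of arc, right at the resolution limit of the human eye.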

If we move farther from the Sun than Neptune, the solar disc can never appear any smaller to our eyes—it will always appear as a little point of light, smeared by the physical limitations of our eyes into a tiny spot one minute of arc across. At first a searingly bright star, and then progressively dimmer as we move farther and farther away.

But that’s a topic for another post.

Comparison of sizes of solar disc seen from various planets
Click to enlarge

Why Does The Illuminated Side Of The Moon Sometimes Not Point At The Sun?

Illuminated part of moon apparently not pointing at sun
Click to enlarge

I took the above panoramic view, spanning something like 120 degrees, in a local park towards the end of last year. The sun was almost on the horizon to the southwest, at right of frame. The moon was well risen in the southeast, framed by the little red box in the image above. After taking the panorama, I zoomed in for the enlarged view of the moon shown in the inset, to demonstrate the apparent problem. The moon is higher in the sky than the sun, but its illuminated side is pointing slightly upwards, rather than being orientated, as one might expect, with a slight downward tilt to face the low sun.

This appearance is quite common, whenever the moon is in gibbous phase (between the half and the full), and therefore separated by more than 90 degrees from the sun in the sky. Every now and then someone notices the effect, and decides that they have to overthrow the whole of physics to explain it. I could offer you a link to a relevant page, but I won’t—firstly, I don’t like to send traffic to these sites; secondly, you might be driven mad by the experience and I’d feel responsible.

Actually, the illuminated part of the moon is pointing directly towards the sun; it just doesn’t look as if it is. So (as with my previous post “Why Do Mirrors Reverse Left And Right But Not Up And Down?”) the title of this post is an ill-posed question—it assumes something that isn’t actually so.

Here’s a diagram showing the arrangement of Earth, moon and sun in the situation photographed above:

Gibbous moon at sunset
Click to enlarge

The Earth-bound observer is looking towards the setting sun. Behind and above him is the moon, its Earth-facing side more than half-illuminated. The sun is so far away that its rays are very nearly parallel across the width of the moon’s orbit. In particular the light rays bringing the image of the setting sun to the observer’s eyes are effectively parallel to those shining on the moon—the divergence is only about a sixth of a degree.
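That sixth-of-a-degree figure is just the small angle that the Earth-moon distance subtends at the sun. A quick check, using mean distances (assumed round values, not from the post):

```python
import math

# Maximum divergence between the sunlight reaching the observer and the
# sunlight falling on the moon: the angle subtended at the Sun by the
# Earth-moon separation. Mean distances assumed.
earth_moon_km = 384_400
earth_sun_km = 149_600_000

# Small-angle approximation: angle (radians) = baseline / distance.
divergence_deg = math.degrees(earth_moon_km / earth_sun_km)
print(f"{divergence_deg:.2f} degrees")  # roughly a sixth of a degree
```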

But we know that parallel lines are affected by perspective. They appear to converge at a vanishing point. The most familiar example is that of railway lines, like these:

Railway line perspective, Carnoustie
Click to enlarge

But there’s a problem with this sort of perspective. To illustrate it, I took some photographs of the top of the very low wall that surrounds the park featured in my first photograph:

Vanishing points in two directions
Click to enlarge

The views look north and south towards two opposite vanishing points. The surface of the wall is marked with the remains of the old park railings, which were sawn off and removed during the Second World War. These provide a couple of reference points, which I’ve marked with numbers. The parallel sides of the wall appear to diverge as they approach the camera towards Point 1; and they appear to converge as they recede from the camera beyond Point 2. But what happens between 1 and 2?

I used my phone camera again to produce this rather scrappy and unconventional panorama, looking down on the top of the wall and spanning about ninety degrees:

Perspective between two vanishing points
Click to enlarge

The diverging perspective at Point 1 curves around to join the converging perspective at Point 2. It’s mathematically inevitable that this should happen—what’s surprising is that we’re generally unaware of it. In part, that’s because our normal vision spans a smaller angle than we can produce in a panoramic photograph; but it’s also because our brains are very good at interpreting the raw data from our eyes so that we see what we need to see. In this case, as we scan our eyes along the length of this wall, we have the strong impression that its sides are always parallel, despite the fact that its projection on our retinas is more like a tapered sausage with a bulge in the middle.

So: our brains are good at suppressing this “curve in the middle” feature of parallel lines in perspective, at least for simple local examples like railway lines and walls.

Now let’s go back to those parallel light rays coming from the sun and illuminating the moon. Like railway tracks, they’re affected by perspective. In the photograph below, the setting sun is projecting rays from behind a low cloud:

Crepuscular Rays © 2016 Marion McMurdo
Click to enlarge
© 2016 The Boon Companion

Although the rays are in fact parallel, perspective makes them seem to radiate outwards in a fan centred on the sun. I’ve written about these crepuscular rays in a previous post, and at that time suggested that whenever you see them you should turn around and look for anticrepuscular rays, too:

Anticrepuscular rays
Source
Click to enlarge

These converge towards the antisolar point—the point in the sky directly opposite the sun—and they’re produced by exactly the same perspective effect. Which means solar rays have to do the same “diverge, curve, converge” trick as the sides of my park wall. Unfortunately, crepuscular rays tend to fade into invisibility a relatively short distance from the sun, and to reappear as anticrepuscular rays only a relatively short distance from the antisolar point. So we can’t visually track their grand curves across the sky.

But we can see the effect of that perspective curvature when the low sun illuminates a gibbous moon. Here’s a diagram of a sheaf of parallel solar rays, as they would appear when projected on to the dome of the sky:

Sun rays in perspective
Click to enlarge

Perspective makes the sun’s rays diverge when the observer looks towards the sun, but converge when the observer turns and looks at the antisolar point. Because the sun is sitting on the horizon, all the rays in my diagram above are not only parallel to each other, but also to the horizon. And because the gibbous moon is more than ninety degrees away from the sun, it’s illuminated by rays that are apparently converging towards the antisolar point on the horizon, rather than spreading outwards from the sun.

So the impression that the moon’s illuminated portion doesn’t point towards the sun is a very strong one. This is because the scale of the moon-sun perspective is very much larger than the examples for which our brains have learned to compensate. The moon is the only illuminated object we see which is further away than a few kilometres, and our brains otherwise never have to deal with grand, horizon-spanning perspectives in illumination. So our intuitions tell us that the light rays illuminating the moon in the diagram above can’t possibly have come from the sun, since they’re apparently descending towards the antisolar point.

Standing in the open, observing the illusion, I find it impossible to mentally sketch the curve from sun to moon and see that it’s a straight line. Nothing that rises from one horizon and descends to the other horizon can possibly be a straight line, my brain insists, despite its cheerful acceptance that the straight, parallel sides of my park wall can appear to diverge and then converge in exactly the same way.

In the old days the approved way of demonstrating that there really was a straight line connecting the sun to the centre of the illuminated portion of the moon was with a long bit of string held taut between two hands at arm’s length. Placing one end of the string over the sun, and then fiddling with the other end until it intersected the moon, one could eventually produce a momentary impression that the straight line of the taut string really did align with the illuminated side of the moon. But it was all a bit unsatisfactory.

But now we have panorama apps on our phones. The one I use stitches together multiple images, and provides an on-screen guide to keep each successive image aligned with the one before—it forces the user to stay aligned in a single plane as they shift the viewing direction between successive frames. Usually, the object of the exercise is to scan along the horizon to obtain a wide-angle view of the scenery. But (as my odd little downward-looking panorama of the park wall demonstrated) it isn’t necessary to start the panorama with a vertically orientated camera aimed at the horizon.

So, back in the park and shortly after I took the image at the head of this post, I aimed my phone camera at the moon, and tilted it sideways so that it aligned with the tilted orientation of the moon’s illuminated portion. Then I triggered my panorama exposures and followed the on-screen guides—which led me across the sky in a rising and falling arc until I arrived at the setting sun!

Here’s the result:

Illuminated part of moon actually does point at sun
Click to enlarge

So now perspective makes the horizon appear to curve implausibly, while the illuminated portion of the moon quite obviously faces directly towards the sun.

We Are Stardust (Supplement)

The cosmic origins of the chemical elements that make up the human body
Click to enlarge

I published my original “We Are Stardust” post some time ago, introducing the infographic above, which shows the cosmic origins of the chemical elements that make up our bodies, according to mass. At that time I concluded that Joni Mitchell should actually have sung “We are 90% stardust,” because that’s the proportion of our body weight made up of atoms that originated in the nuclear fusion processes within stars. The remaining 10% is almost entirely hydrogen, which is left over from the Big Bang.

The original post got quite a lot of traffic, largely courtesy of the Damn Interesting website. But it also prompted one correspondent to ask, “But what proportion of our atoms comes from stars?” Which is an interesting question, with an answer that requires a whole new infographic.

If you want to know more about the background to all this—how various stellar processes produce the various chemical elements, and the function of those elements in the human body—I refer you back to my original post.

This time around, I’m just going to take the various weights by element I used in my last post, and divide them by the atomic weight of each element. There’s a wide range of atomic weights among the 54 elements on my list of those present in our bodies in more than 100-microgram quantities. The heaviest atoms in that group, like mercury and lead, are more than 200 times heavier than the lightest, hydrogen. So each microgram of hydrogen contains 200 times more atoms than a microgram of mercury or lead. And that skews the atomic make-up of the human body strongly towards the lighter elements, and particularly to those lighter elements that are common components of our tissues.

Most of our weight is water, which consists of hydrogen and oxygen, making these two elements the most common atoms in our bodies. The carbohydrates and fats in our tissues also contain hydrogen and oxygen, along with a lot of carbon, which is our third most common atom. Proteins contain the same three elements, along with nitrogen and a little sulphur. And in fact the four elements hydrogen, oxygen, carbon and nitrogen, all relatively light and all relatively common, account for almost all the atoms in our bodies. The seven kilograms of hydrogen in a 70-kilogram person accounts for 62% of all that person’s atoms. Oxygen accounts for 24%, carbon 12%, and nitrogen 1%. That leaves just 1% for the fifty other elements on my list.
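Those atom-count percentages can be sketched with a quick calculation. The mass fractions below are generic textbook values for the four commonest elements, not the author’s full 54-element list, so the output differs slightly from the figures quoted above:

```python
# Relative atom counts for the four commonest elements in a 70 kg body.
# Mass fractions are illustrative textbook values (an assumption, not the
# post's own 54-element dataset). Dividing mass by atomic weight gives
# relative numbers of atoms.
body_kg = 70.0
mass_fraction = {"O": 0.65, "C": 0.185, "H": 0.095, "N": 0.032}
atomic_weight = {"O": 15.999, "C": 12.011, "H": 1.008, "N": 14.007}

moles = {el: body_kg * f / atomic_weight[el] for el, f in mass_fraction.items()}
total = sum(moles.values())

for el in ("H", "O", "C", "N"):
    print(f"{el}: {100 * moles[el] / total:.0f}% of atoms")
# Everything that isn't primordial hydrogen is a candidate for "stardust":
print(f"Non-hydrogen atoms: {100 * (1 - moles['H'] / total):.0f}%")  # roughly 38%
```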

The calcium and phosphorus in our bones and dissolved in our tissues account for a further 0.5%. The only other elements present at levels greater than 0.01% are the sulphur in our proteins, and the sodium, magnesium, chlorine and potassium which are dissolved as important ions in our body fluids. Everything else—the iron in our haemoglobin, the cobalt in Vitamin B12, the iodine in our thyroid glands—accounts for just 0.003% of our atoms.

Hydrogen is the major element left over from the Big Bang, so our atoms are dominated by that primordial element. Oxygen comes almost entirely from core-collapse supernovae, and so is the main representative of that stellar process in our bodies, along with significant amounts of carbon and nitrogen. But most of our carbon and nitrogen was blown off by red giant stars, and those two elements account for most of our atoms from that source. In fact those three sources—the Big Bang, core-collapse supernovae and red giant stars—provided almost all our atoms:

The cosmic origins of the atoms that make up the human body
Click to enlarge

If you compare the graphic above to the one at the head of this post, you can see how the balance has shifted strongly towards hydrogen (with its very light atoms) and away from oxygen (with atoms sixteen times heavier than hydrogen). And the even heavier atoms from Type Ia supernovae are so rare I can’t now add them visibly to my graphic.

So perhaps Joni Mitchell should have sung, “We are 38% stardust.”

Labyrinth

ˈlæbɪrɪnθ

Labyrinth: 1) A structure consisting of a number of intercommunicating passages arranged in bewildering complexity, through which it is difficult or impossible to find one’s way without guidance. 2) A structure consisting of a single passageway winding compactly through a tortuous route between an entrance and a central point.

Maggie's Centre, Dundee, with labyrinth
Click to enlarge

When Minos reached Cretan soil he paid his dues to Jove, with the sacrifice of a hundred bulls, and hung up his war trophies to adorn the palace. The scandal concerning his family grew, and the queen’s unnatural adultery was evident from the birth of a strange hybrid monster. Minos resolved to remove this shame, the Minotaur, from his house, and hide it away in a labyrinth with blind passageways. Dædalus, celebrated for his skill in architecture, laid out the design, and confused the clues to direction, and led the eye into a tortuous maze, by the windings of alternating paths. No differently from the way in which the watery Mæander deludes the sight, flowing backwards and forwards in its changeable course, through the meadows of Phrygia, facing the running waves advancing to meet it, now directing its uncertain waters towards its source, now towards the open sea: so Dædalus made the endless pathways of the maze, and was scarcely able to recover the entrance himself: the building was as deceptive as that.

Ovid Metamorphoses Book VIII (A.S. Kline translation)

The connection between the legendary labyrinth of the Minotaur and our local Maggie’s Centre, in the picture above, is perhaps not immediately evident. But all will become clear.

The labyrinth in which the Minotaur was confined, constructed by the architect Dædalus on the instruction of King Minos of Crete, was clearly imagined to be an exceedingly complicated maze of some kind—in the words of my first definition above, “consisting of a number of intercommunicating passages arranged in bewildering complexity”. So complex, in fact, that Ovid describes how the designer himself was hard-pressed to find his way out.

But during the Hellenic Age in Crete (long after the fall of the Bronze Age civilization associated with Minos and Dædalus), a representation of the labyrinth started to turn up on Cretan coinage; and it was quite obviously not a maze. It looked like this:

Classical Labyrinth

There’s no difficulty or confusion about finding your way in or out of a structure like this, because there is only one continuous route from its entrance to the single dead-end at the centre. So it corresponds to the second, more technical definition of labyrinth given above. In the jargon, the branching maze in which the Minotaur was confined is multicursal (“multiple paths”), whereas the labyrinth pattern on the coinage is unicursal (“single path”).

The multicursal maze went on to become an entertainment, as a feature of grand ornamental gardens during the 17th century—complex branching pathways bounded by hedges, intended to confuse and divert. The oldest surviving example in the UK is Hampton Court Maze, which looks like this:

Hampton Court Maze

These hedge mazes are nowadays often called “puzzle mazes”, but in their heyday were sometimes referred to as wildernesses. That seems like an odd word to use for something so manicured, but it derives from the old verb wilder, “to cause to lose one’s way” (as you might do in a wild or unknown place). And of course to bewilder is to put someone in such a state. In a striking parallel, our noun maze derives from the obsolete verb maze, meaning “to confuse, to drive mad”, and to amaze is to put someone in a state of confused astonishment.

In complete contrast to the alleged entertainment value of mazes, the unicursal labyrinth turned into an object of Christian devotion, laid out in stone on the floors of the great gothic cathedrals, such as the one at Chartres. Their exact purpose is unclear, but walking the winding route of the labyrinth seems to have been part of a ceremony of penitence, or perhaps a substitute for pilgrimage. Similar devotional labyrinths were also laid out in the open air using turf—they’re often referred to as mizmazes, and a few mediaeval examples still exist, like the one at Breamore, in Hampshire.

These mediaeval labyrinths had a more complex pattern than the classical version—instead of travelling in long arcs back and forth, the mediaeval labyrinth-walker encounters more frequent turning points. Here’s the pattern used at Chartres, which is very common elsewhere too.

Which brings us back to the local Maggie’s Centre. The centre itself is designed by Frank Gehry, who has given us many astonishing and beautiful buildings, including the Guggenheim Museum in Bilbao. And the sculpted landscape in front of it was designed by Arabella Lennox-Boyd. As you can see in my photograph, it contains a copy, in cobblestones set in turf, of the Chartres labyrinth—a modern mizmaze, in fact.

If you want to draw a classical labyrinth for yourself, you need to start with a “seed”—a cross, four right-angles and four dots. The black figure in the diagram below is the seed. Start by connecting the top of the cross to the top of one of the right angles (red line below), then the top of a right angle to the dot nested in the opposite right angle (orange path). The sequence thereafter should be clear, working your way through each successive pair of anchor points, joining them by wider and wider loops, indicated by the successive colours of the rainbow below.

Drawing a classical labyrinth

The result is the classical labyrinth I presented earlier, which is a seven-circuit, right-handed labyrinth—there are seven loops around the dead-end centre, and the first turn after the entrance is to the right. The left-hand version is simply the mirror image of this one. You can add more circuits by nesting four more right angles into your seed, interposed between each dot and each existing right angle—that adds four more circuits. And you can keep adding multiples of four in this way for as long as your time, patience and drawing materials last.


Labyrinth is a bit of an etymological loner, coming to us pretty much straight from the Greek, and forming a little cluster of words in English directly related to its meaning. Of its adjectival forms, only labyrinthine (“like a labyrinth”) has survived in common use, leaving labyrinthal, labyrinthial, labyrinthian, labyrinthic and labyrinthical in the dustbin of disuse. Labyrinthiform is still in technical use, designating anatomical structures that form convoluted tunnels.

The loops and curls forming the hearing and balance organs of our inner ear sit in a cavity within the temporal bone of the skull, called the bony labyrinth. The delicate winding and branching tubes themselves are collectively called the membranous labyrinth. The derivation of the name should be obvious from the diagram below:

Labyrinthitis is an inflammation of these organs, which causes disabling dizziness and unpleasant tinnitus.

The legendary King Minos, who commissioned the original labyrinth, has given us two words. The first is Minotaur (“Minos bull”), the name of the half-human creature confined within the Cretan labyrinth. This seems a little unfair on poor Minos, since the Minotaur was the product of his wife Pasiphaë’s lust for a bull. (Though admittedly she had been cursed, and it may have been Minos who offended the gods and caused the curse. So it goes.) The second word Minos has given us is Minoan, the designation for the Bronze-Age civilization that flourished on Crete between 2000 and 1500 BCE. It was named in Minos’ honour by the archaeologist Arthur Evans, who excavated the palace at Knossos in 1900.

Dædalus, the architect of the labyrinth (who had also reprehensibly aided Pasiphaë in her assignation with the infamous bull) has a slightly larger footprint in the English language than his king. We know him best by the Latinized version of his name—the original Greek was Daidalos, “the cunning one”, which is why a skilled artificer was once referred to as a Dædal. And something skilfully fashioned could be described with the adjectives dædal, Dædalean or Dædalian. To dædalize is to make things unnecessarily complicated, and something pan-dædalian has been wrought by curious and intricate workmanship.

Finally a logodædalus or logodædalist is a person who is cunning with words; an example of such cunning is called logodædaly. I hope this post has given you material for some logodædaly of your own.

We Are Stardust

We are stardust
We are golden
And we’ve got to get ourselves
Back to the garden

Joni Mitchell “Woodstock” (1970)

A few months ago I ran into the periodic table above, detailing the cosmological origins of the chemical elements. And it occurred to me that I could quantify Joni Mitchell’s claim that “we are stardust”. How much of the human body is actually produced by the stars? But before I get to that, I should probably explain a little about the various categories indicated by the colours in the chart above.

As the Universe expanded after the Big Bang, protons and neutrons were formed from quarks, and there was a brief period (a few minutes) when the whole Universe was hot and dense enough for nuclear fusion to take place—just enough time to build up the nuclei of a couple of very light elements, but not enough to produce anything heavier in any significant quantity.* So what came out of the Big Bang was hydrogen, helium and a little lithium—the first three elements of the periodic table. The rest of the chemical elements that make up the solar system, the Earth, and our bodies were produced by fusion reactions inside stars.

The first stars formed about 100 million years after the Big Bang, and were composed entirely of hydrogen, helium, and a little lithium. Conditions at that time favoured the formation of large stars, which burned through their nuclear fuel quickly. A series of fusion reactions in such massive stars generates energy, producing progressively heavier atomic nuclei until the “iron peak” is reached—beyond that point, the creation of heavier nuclei requires an input of energy. When the star exhausts its energy reserves in this way, it collapses and then explodes as a core-collapse supernova (an “exploding massive star” in the table above). This propels the elements synthesized by the star out into space, and also drives a final burst of additional fusion as shock waves sweep outwards through the body of the star, producing a few elements that lie beyond the iron peak, as far as rubidium. The remnant of the supernova’s core collapses into a neutron star, or perhaps even a black hole.

These elements produced by the first supernovae seed the gas clouds which later condense into subsequent generations of stars. The lower-mass stars which appeared later in the Universe’s lifetime (such as the sun) are unable to drive internal fusion as far as the iron peak, and stall their fusion processes after producing only the relatively light nuclei of carbon and oxygen. They evolve into red giant stars (“dying low-mass stars”, above), puff off their outer layers, and then subside to become white dwarfs. But during this process they are able to breed higher-mass elements from the metal contaminants they inherited from the first supernovae. Neutrons from the star’s fusion reactions are absorbed by these heavy nuclei, gradually building them up into ever-heavier elements with higher atomic numbers, as some of the neutrons convert to protons by emitting an electron (beta decay). This building process finally “sticks” when it gets to bismuth: the bismuth-210 formed by one more neutron capture decays, via polonium-210, by emitting an alpha particle (two neutrons and two protons), dropping back down the periodic table rather than building further. So, perhaps counter-intuitively, the gentle wind from low-mass stars in their red-giant phase enriches the interstellar medium with heavier atoms than does the spectacular explosion of a supernova.

But once a low-mass star has finished its red-giant phase and settled to become a white dwarf, it may (rarely) manage to turn into a supernova. For this to happen, it must have a companion star orbiting close by, which expands and spills material from its own outer envelope on to the white dwarf’s surface. Once a critical mass of material accumulates, a runaway fusion reaction takes place and blows the white dwarf apart in what’s called a Type Ia supernova. The nature of the fusion reaction is different from what occurs in the core-collapse supernovae, so Type Ia supernovae (“exploding white dwarfs”, above) eject a slightly different spectrum of elements into space—in particular, they don’t create any elements beyond the iron peak.

The processes described so far account for almost all the elements up to bismuth. To produce heavier elements requires a massive bombardment of neutrons, building up nuclei faster than they can decay and thereby pushing beyond the “bismuth barrier” I described earlier. Such torrents of neutrons occur when neutron stars collide. As their name suggests, neutron stars contain a lot of neutrons, and if two of these supernova remnants are formed in close orbit around each other they may eventually collide. This unleashes a massive blast of neutrons, which bombard the conventional matter on the surface of the neutron stars, building up heavy radioactive elements before they have a chance to decay, and ejecting these products into space.

Finally, a few of the lightest elements are formed by cosmic rays (particles generated during supernova explosions). When these rapidly moving particles strike carbon or oxygen nuclei in space, they can break them into lighter fragments. This process accounts for almost all the beryllium and boron in the Universe, and some of the lithium.

Here on Earth, we have a few more processes that contribute to the mix of chemical elements around us. There’s natural radioactive decay, which is slowly converting some chemical elements into slightly lighter ones. And there are artificial radioactive elements, which we produce in our bombs and nuclear reactors. But these are essentially minor processes in the scheme of things, and I feel safe to ignore them here.


It should come as no surprise that our bodies are made up primarily of the most common chemical elements in the Universe—that is, hydrogen from the Big Bang, and those elements from early in the periodic table which are most frequently spewed out by dying or exploding stars. Indeed, apart from the noble gases helium, neon and argon, an adult human body contains significant traces (more than 100 micrograms) of every element from hydrogen to the “iron peak” elements that represent the limits of equilibrium stellar fusion processes. And a surprising number of these elements have biological roles.

The water that makes up the bulk of our bodies is composed of hydrogen and oxygen. The fats and carbohydrates of our tissues consist of hydrogen, oxygen and carbon, and our proteins add nitrogen and a little sulphur to that mix. Calcium and phosphate are the major structural components of our bones, and sodium, potassium, magnesium and chlorine are present as dissolved ions in our body water, regulating the activity of our cells.

In smaller quantities still, there are the elemental micronutrients that we must have in our diet to stay healthy. Iron, an oxygen-carrying component of haemoglobin and myoglobin, is the major such micronutrient. Iodine is required for thyroid function, cobalt is a component of Vitamin B12, and chromium is present in a hormone called Glucose Tolerance Factor. To these we can add manganese, copper, zinc, selenium and molybdenum, all of which are required for the function of various enzymes. People eating anything approximating a normal diet obtain all these latter elements in adequate quantities, but they must be carefully provided for patients reliant on intravenous feeding in Intensive Care Units.

A few elements seem to produce deficiency syndromes in experimental animals that are fed very carefully controlled diets. Silicon, vanadium, nickel and tin fall into this group, but their biological role, and relevance to humans, is unknown. And then there are the elements which are known to be present in the human body, but appear to have no function—they’re probably just in our tissues because they’re in our food. Some are industrial contaminants, like mercury and lead; but some, like lithium and boron, are probably just part of the natural environment.

Cover of "Nature's Building Blocks" by John Emsley

Estimates of the elemental make-up of the human body vary. I’ve used the figures quoted by John Emsley in his marvellous book Nature’s Building Blocks: An A-Z Guide to the Elements, and found 54 elements that are present in a 70-kilogram human in quantities that exceed 100 micrograms.

Summing the proportions of all these elements that come from different cosmological sources, I was able to produce this infographic:

The cosmic origins of the chemical elements that make up the human body

(There are no prizes for identifying the original source of the human outlines used.)

Hydrogen, although the most common element in the human body, is also the lightest. So it accounts for just 10% of our weight, and that is, to a good approximation, the only component of our bodies that originated in the Big Bang, because we contain helium and lithium in only tiny quantities.

So the rest was produced, directly or indirectly, in stars. The oxygen in our body water and in our tissues accounts for the large majority of that, originating from core-collapse supernovae. Carbon and nitrogen in our tissues make up much of the remaining mass, mainly coming from the stellar wind produced by low-mass stars in their red-giant phase. And the rare white-dwarf explosions, Type Ia supernovae, account for just 1% of our weight, producing a significant fraction of the calcium and phosphorus in our bones, and some of the important ions dissolved in our body water. Neutron star mergers, even rarer than exploding white dwarfs, are responsible for a few trace elements, most notably almost all the iodine in our thyroid glands. And cosmic rays from supernovae account for the production of the (apparently biologically inactive) boron and beryllium in our bodies, as well as a little of the lithium.
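The sum behind the infographic can be sketched in a few lines of Python. The mass fractions below are rough illustrative figures of my own, chosen to add up sensibly, not Emsley’s exact numbers:

```python
# Rough mass fractions of a human body, grouped by the cosmological
# source of each element (illustrative figures only, not Emsley's data)
sources = {
    "Big Bang (hydrogen)":                     0.10,
    "Exploding massive stars (mostly oxygen)": 0.62,
    "Dying low-mass stars (carbon, nitrogen)": 0.27,
    "Exploding white dwarfs (some Ca and P)":  0.01,
    "Neutron-star mergers (iodine, traces)":   1e-6,
    "Cosmic rays (boron, beryllium)":          1e-6,
}

# Everything that isn't Big Bang hydrogen came from stars, one way or another
stardust = 1.0 - sources["Big Bang (hydrogen)"]
print(f"Stardust fraction: {stardust:.0%}")  # prints "Stardust fraction: 90%"
```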

So to be strictly accurate, Joni Mitchell should have written “We are 90% stardust”.


* One of the earliest publications on this topic was a Letter to the Editor entitled “The Origin Of Chemical Elements” (Physical Review 1948 73: 803-4). The authors were Ralph Alpher, Hans Bethe and George Gamow. Alpher was a PhD student at the time, and Gamow was his supervisor. Alpher’s dissertation was on the topic of what’s now called Big Bang nucleosynthesis—the process of nuclear fusion during the first few minutes of the Big Bang. Bethe was a physicist working in the field of nuclear fusion in stars, but had made no contribution at all to Alpher’s work. He was only included as an author to allow Gamow to make a pun on the Greek alphabet—Alpher, Bethe, Gamow; alpha, beta, gamma. Gamow must have been delighted when their letter was published in the April 1 edition of Physical Review.

Old Lady-Day

At length it was the eve of Old Lady-Day, and the agricultural world was in a fever of mobility such as only occurs at that particular date of the year. It is a day of fulfilment; agreements for outdoor service during the ensuing year, entered into at Candlemas, are to be now carried out. The labourers—or “work-folk”, as they used to call themselves immemorially till the other word was introduced from without—who wish to remain no longer in old places are removing to the new farms.

Thomas Hardy Tess Of The d’Urbervilles (1891)

Yesterday (as this post goes live) was Old Lady-Day, once a significant day in the English agricultural calendar, as Thomas Hardy describes above. And today (April 6th), a new tax year begins in the UK. These dates are not unrelated to each other, and are also linked to the Christian Feast of the Annunciation, which commemorates the Biblical event depicted in the Leonardo painting at the head of this post—the arrival of the Angel Gabriel to inform the Virgin Mary that she was to conceive a miraculous child. As the King James Version of the Bible tells the story:

And the angel came in unto her, and said, Hail, thou that art highly favoured, the Lord is with thee: blessed art thou among women.
And when she saw him, she was troubled at his saying, and cast in her mind what manner of salutation this should be.
And the angel said unto her, Fear not, Mary: for thou hast found favour with God.
And, behold, thou shalt conceive in thy womb, and bring forth a son, and shalt call his name JESUS.

Luke 1:28-31

This event was called “The Annunciation To Mary”, or “The Annunciation” for short, annunciation being the act of announcing something. When it came to nailing these events to the Christian calendar, it made sense for the Feast of the Annunciation to fall exactly nine months before the Feast of the Nativity, which celebrates the birth of Jesus. Since that festival, Christmas Day, falls on December 25th in the Western Christian tradition, the Feast of the Annunciation occurs on March 25th. In Britain, that day is commonly called “Lady Day”, a reference to “Our Lady”, the Virgin Mary.

As well as being a religious feast-day, Lady Day was a significant secular date, too. As one of the four English quarter days*, it was a time when payments fell due and contracts were honoured. Farm labourers were usually indentured to work for a year at a time, and if they wanted to change jobs they all did so on Lady Day.

In fact, Lady Day was such an important date in the calendar that it marked the start of the New Year in English-speaking parts of the world for almost 600 years. While it may seem very strange to us now, under what was called “The Custom of the English Church” the year number would increment on March 25th each year, rather than on January 1st.

Scotland switched to using January 1st for New Year’s Day in 1600, but England didn’t make the change until it adopted the Gregorian calendar reform in 1752. I’ve written before about how eleven days were dropped from September that year, to bring the calendar back into alignment with the seasons.

Gregorian Calendar 1752 (Great Britain)

So 1752 was famously a short year in the English-speaking world. But it’s probably less well-known that 1751 was an even shorter year in England, since it began on March 25th, but ended on December 31st.

The missing eleven days in 1752 created a problem for all the legal stuff relating to contracts and debts that fell due on Lady Day. All the famous protests about “Give Us Back Our Eleven Days” came not from ignorant people who thought their lives had been shortened, but from people who were paid daily wages yet settled their debts monthly, quarterly or yearly.

One solution to this problem was to shift the dates of contracts and payments to compensate—so instead of changing jobs on Lady Day 1753, farm labourers worked until eleven days later, April 5th, and continued to renew their contracts on that day in subsequent years. April 5th therefore became known as “Old Lady-Day”, an expression that was still in use in 1891, when Hardy wrote about it in the quotation at the head of this post.

Similarly, the date of the new tax year moved from March 25th to April 5th—so the workers were given back their eleven days, at least by the tax authorities, if not by their landlords and other creditors.

But wait. I told you that the tax year in the UK begins on April 6th, not on Old Lady-Day. Why has it shifted by another day? Because under the old Julian calendar, the year 1800 was due to be a leap year, but under the new Gregorian calendar, it was not. So workers were being done out of a day’s wages in 1800 because of the calendar reform, and the tax authorities duly shifted the date of the new tax year by a day, to compensate. By 1900, which also dropped a leap day, these calendrical niceties seem to have been forgotten, and no shift in the tax year occurred, so the date has remained the same ever since.

So there you are. People in Britain used to start a new tax year at the start of a new calendar year, back when Lady Day was also New Year’s Day. Now, thanks to a centuries-old calendar reform and a surprising impulse of fairness from the tax authorities of the time, we calculate our taxes from what seems (to the uninitiated) like a random date in April.


* The other quarter days were Midsummer Day (June 24th), Michaelmas (September 29th) and Christmas (December 25th).
Oddly enough, the same date was also used in Florence, which earned that system of year reckoning its alternative name, Calculus Florentinus.

Falling Through The Earth

Falling through the Earth
Earth image prepared using Celestia

If the alien cyborgs have constructed this miraculous planet-coring device with the precision I would expect of them, I predict we shall plunge entirely through the center and out to the other side.

Gregory Benford Tides Of Light (1989)

Cover of Tides of Light by Gregory Benford

There’s an old puzzle in physics, to work out how long it would take a person to fall right through the centre of the Earth to the other side, along a tunnel constructed through its core. Gregory Benford is the only science fiction writer I’ve ever seen attempt to incorporate that scenario into a story—a particularly striking bit of audacity that made it on to the cover of some editions of his novel.

For simplicity, the puzzle stipulates that the tunnel contains no air to slow the faller, and usually also specifies that it has a frictionless lining—any trajectory that doesn’t follow the rotation axis will result in the faller being pressed against the side of the tunnel by the Coriolis effect. (Benford had his protagonist Killeen fall through an evacuated tunnel from pole to pole, thereby avoiding Coriolis.)

So our unfortunate faller drops through a hole in the surface of the Earth, accelerates all the way to the centre, and then decelerates as she rises towards the antipodal point, coming to a momentary halt just as she pops out of the hole on the far side of the planet—hopefully not too flustered to remember to grab a hold of something at that point, so as to avoid embarking on the return journey.

We can set a lower bound to the duration of that journey by working out how long it would take to fall from the surface of the Earth to the centre, assuming all the Earth’s mass is concentrated at that point. Because the journey is one that involves symmetrical acceleration and deceleration, doubling this number will give us the duration of the complete traverse of the tunnel.

This means our faller moves under the influence of an inverse-square gravitational field throughout her fall. The acceleration at any given distance from the centre of such a field is given by:

a=\frac{GM}{r^2}

where a is the acceleration, G is the Gravitational Constant, M is the central mass and r the radial distance. That’s an important equation and I’ll invoke it again, but for the moment it’s more useful to know that the potential energy of an object in such a field varies inversely with r. The gravitational potential energy per unit mass is given by:

U_{m}=-\frac{GM}{r}

If we drop an object starting from rest at some distance R, we can figure out its kinetic energy per unit mass at some distance r by subtracting one potential energy from the other. And from that, we can figure out the velocity at any given point during the fall:

v=\sqrt{2GM\left (\frac{1}{r}-\frac{1}{R}\right)}

Finally, integrating* inverse velocity against distance gives us the time it takes to fall from a stationary starting point at R to some lesser distance r. For the special case of r=0, this comes out to be

t=\frac{\pi }{2\sqrt{2}}\sqrt{\frac{R^3}{GM}}

This calculation, incidentally and remarkably, is a major plot element of Arthur C. Clarke’s 1953 short science-fiction story “Jupiter V”.

Plugging in values for the mass and mean radius of the Earth, we find that t turns out to be 895 seconds. Doubling that value gives a total time to fall from one side of the Earth to the other of 29.8 minutes. That’s our lower bound for the journey time, since it assumes the faller is subject to the gravitational effect of Earth’s entire mass throughout the journey. (We draw a veil over what would actually happen at the centre point, where the faller would encounter a point of infinite density and infinite gravity.)
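For anyone who wants to check the arithmetic, here is that point-mass calculation as a short Python sketch (the values for G and the Earth’s mass and mean radius are the standard ones):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth, m

# Time to fall from rest at distance R to a central point mass M
t = (math.pi / (2 * math.sqrt(2))) * math.sqrt(R**3 / (G * M))

print(f"Fall to the centre: {t:.0f} s")            # about 895 s
print(f"Full traverse: {2 * t / 60:.1f} minutes")  # about 29.8 minutes
```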

We can also put an upper bound on the journey time by assuming the Earth is of uniform density throughout. Under those circumstances, instead of the gravitational acceleration getting ever higher as our faller approaches the centre of the Earth, the acceleration gets steadily lower, and reaches zero at the centre. This is because of something called Newton’s Shell Theorem, which shows that the gravitational force experienced by a particle inside a uniform spherical shell is zero. (Which rather undermines the premise of Edgar Rice Burroughs’s “Hollow Earth” novels.)

So as our faller descends into the Earth, she is accelerated only by the gravity of the spherical region of the Earth that is closer to the centre than she is. For any given radial distance r, the mass m of this interior sphere will be

m=\rho \frac {4}{3} \pi r^3

where ρ is the density.

Plugging that into our equation for the acceleration due to gravity (the first in the post) we get:

a=\frac {4} {3} \pi \frac {G \rho r^3} {r^2}=\frac {4} {3} \pi G \rho r

So the acceleration is directly proportional to the distance from the centre. This is the defining property of a simple harmonic oscillator, like a pendulum or spring, for which the restoring force increases steadily the farther from the neutral position we displace the oscillating mass.

Which is handy, because there’s a little toolbox of equations that apply to simple harmonic motion (they’re at the other end of my link), and with a bit of fiddling we can derive our journey time. The basic time parameter for oscillators is the period of oscillation, which I’ll call P. But that would be the time taken to fall from one side of the Earth to the other and back again. So the time we’re interested in is just half of that:

\frac {P} {2}=\pi \sqrt {\frac {3} {4 \pi G \rho}}

And plugging in the value for the mean density of the Earth, that shakes down to 42.2 minutes.
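Again, this is easy to verify. A minimal Python sketch, using the Earth’s mean density of about 5514 kg/m³:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 5514.0    # mean density of the Earth, kg m^-3

# Half the period of simple harmonic motion through a uniform sphere
half_period = math.pi * math.sqrt(3 / (4 * math.pi * G * rho))

print(f"Fall through a uniform-density Earth: {half_period / 60:.1f} minutes")  # about 42.2
```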

Notice how this length of time depends only on the density—it actually doesn’t matter how big our uniform sphere is, the time to fall from one side to the other remains the same so long as the density is the same. This is analogous to the fact that the period of oscillation of a pendulum doesn’t depend on how far we displace it—which is why we have pendulum clocks.

And because the acceleration due to gravity, g, at the surface of a spherical mass of radius R is given by

g=\frac {GM} {R^2}=\frac {4 \pi G \rho R^3} {3R^2}=\frac {4 \pi G \rho R} {3}

we can also derive our half-period P/2 as

\frac {P} {2}=\pi \sqrt{\frac{R}{g}}

A nice compact formula depending on radius and surface gravity, which is often quoted in this context.

And this is the gateway to another interesting result for spheres of uniform density, if you’ll permit me a brief digression. Suppose we dig a straight (evacuated, frictionless) tunnel between any two points on the surface of the sphere, and allow an object to slide along the tunnel under the influence of gravity alone—a concept called a gravity train. How long will such an object take to make its journey from one point on the surface to the other? It turns out that this time is exactly the same as the special case of a fall through the centre. We can see why by constructing the following diagram:

Geometry of gravity train tunnel

By constructing similar triangles, we see that the ratio of R (the distance to the centre of the Earth) to d (the distance to the centre of the tunnel) is always locally the same as the ratio of g (the local gravitational acceleration) to a (the component of g that accelerates the gravity train along the tunnel). So for any straight tunnel at all, d/a is always equal to R/g, which (from the equation above) we know determines the period of oscillation through a central tunnel.

Remarkably then, we can take a sphere of uniform density of any size, and drill a straight hole between any two points on its surface, and the time it takes to fall from one surface point to the other will be exactly the same, and determined entirely by the density of the sphere.
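If you don’t quite trust the geometry, a quick numerical check is easy to set up. The sketch below (assuming a uniform-density Earth, and using a simple semi-implicit Euler integration of the along-chord acceleration) times the slide along chords whose midpoints lie at various depths:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 5514.0    # mean density of the Earth, kg m^-3
R = 6.371e6     # radius of the uniform sphere, m
g = 4 * math.pi * G * rho * R / 3   # surface gravity of the sphere

def traverse_time(d, dt=0.05):
    """Time to slide along a frictionless chord whose midpoint lies a
    perpendicular distance d from the centre of a uniform-density sphere."""
    half_chord = math.sqrt(R**2 - d**2)
    x, v, t = half_chord, 0.0, 0.0   # start at rest at one end of the chord
    while x > -half_chord:
        a = -(g / R) * x             # along-chord component of gravity
        v += a * dt                  # semi-implicit Euler step
        x += v * dt
        t += dt
    return t

for d in (0.0, 0.5 * R, 0.9 * R):
    print(f"Chord depth {d / R:.1f} R: {traverse_time(d) / 60:.1f} minutes")
```

Every chord comes out at the same 42.2 minutes.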

But back to the original problem. I’ve determined that the fall time is somewhere between 29.8 and 42.2 minutes. Here are plots of the velocity and time profiles for the two scenarios I’ve discussed so far:

Falling to the centre of the Earth, two limits

Can I be more precise? I can indeed, by using the Preliminary Reference Earth Model (PREM), which uses seismological data to estimate how the density of the Earth varies with distance from its centre.

Taking those figures and Newton’s Shell Theorem, I can chart how the acceleration due to gravity will vary as our faller descends into the Earth. Here’s the result, with density in blue plotted against the left axis, and gravity in red against the right axis:

PREM density and gravity chart

As our faller descends through the low-density crust and mantle of the Earth, and approaches the high-density core, she actually finds herself descending into regions of higher gravity, reaching a maximum about 9% higher than the gravity at the Earth’s surface when she reaches the boundary between the mantle and the core.

If I take the data from the PREM as representing a succession of shells, across which acceleration rises or falls linearly within each shell, I can integrate my way through each shell in turn, deriving velocity and elapsed time. For each shell of thickness s, with initial velocity v0, initial acceleration a0 and final acceleration a1, the final velocity is given by

v=\sqrt{v_{0}^{2}+(a_{0}+a_{1})s}

and the time taken to traverse the shell is

t=\int_{0}^{s}\frac{dx}{\sqrt{v_{0}^{2}+2a_{0}x+\frac{a_{1}-a_{0}}{s}x^{2}}}

Handing off v as v0 to the next shell and summing all the t’s once I reach the centre of the Earth will give me the answer I want.
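Here’s a sketch of that shell-by-shell bookkeeping in Python. Rather than transcribe the closed-form solutions from the footnote, it uses a simple midpoint-rule quadrature for each shell’s time integral; and it uses a crude two-layer toy model in place of the real PREM figures (the thicknesses and gravity values below are illustrative assumptions of mine, not the PREM data):

```python
import math

def shell_time(v0, a0, a1, s, n=20000):
    """Time to traverse a shell of thickness s, entered at speed v0, with
    acceleration varying linearly from a0 at the top to a1 at the bottom.
    Midpoint-rule quadrature of dt = dx / v(x)."""
    dx = s / n
    t = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        v = math.sqrt(v0**2 + 2 * a0 * x + (a1 - a0) / s * x**2)
        t += dx / v
    return t

def exit_speed(v0, a0, a1, s):
    """Speed at the bottom of the shell, from the energy equation."""
    return math.sqrt(v0**2 + (a0 + a1) * s)

# A crude two-layer stand-in for the PREM data: (thickness in m,
# gravity at the top, gravity at the bottom). Illustrative figures only.
shells = [(2.886e6, 9.81, 10.7),   # "mantle": gravity rises with depth
          (3.485e6, 10.7, 0.0)]    # "core": gravity falls to zero at the centre

v = t = 0.0
for s, a0, a1 in shells:
    t += shell_time(v, a0, a1, s)
    v = exit_speed(v, a0, a1, s)

print(f"Time to the centre: {t / 60:.1f} minutes, arriving at {v / 1000:.1f} km/s")
```

Even with these made-up figures, the descent time lands close to half the full-traverse answer below; but that’s a property of the toy numbers, not a substitute for integrating through the real PREM shells.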

The solution to the time integral is a bit messy, coming out as an arcsin equation when a0 > a1, and a natural log when a1 > a0.

But it’s soluble, and with steely nerves and a large spreadsheet, the graphs of the solution fall neatly between the two extremes I figured out earlier:

Falling to the centre of the Earth, PREM curves

And the summed times for the full journey come out to be 38.2 minutes. And that’s my best estimate of the answer to the question posed at the head of this post.


* Integrating this expression turned out to be a little tricky, at least for me.

t=\sqrt{\frac{R}{2GM}}\int_{r}^{R}\sqrt{\frac{r}{R-r}} dr

After mauling it around and substituting sin²θ for r/R, then mauling it around some more, I ended up with this eye-watering equality as the general solution:

t=\sqrt{\frac{R^3}{2GM}}\cdot \left [ \frac{\pi }{2}-asin\left ( \sqrt{\frac{r}{R}} \right )+ \sqrt{\frac{r}{R}}\cdot \sqrt{1-\frac{r}{R}} \right ]

Exact solutions for this integral look like this:
For a0>a1:

t=\sqrt{\frac{s}{a_{0}-a_{1}}}\cdot \left [ asin\left ( \frac{a_{0}}{\sqrt{a_{0}^2+v_{0}^2\left ( \frac{a_{0}-a_{1}}{s}\right )}} \right )-asin\left ( \frac{a_{1}}{\sqrt{a_{0}^2+v_{0}^2\left ( \frac{a_{0}-a_{1}}{s}\right )}} \right ) \right ]

For a1>a0:

t=\sqrt{\frac{s}{a_{1}-a_{0}}}\cdot ln\left [ \frac{\sqrt{\frac{a_{1}-a_{0}}{s}\left ( v_{0}^2+sa_{0}+sa_{1} \right )}+a_{1}}{\sqrt{\frac{a_{1}-a_{0}}{s} v_{0}^2}+a_{0}} \right ]

(If you can simplify any of these, be sure to let me know.)