# Why Does The Illuminated Side Of The Moon Sometimes Not Point At The Sun?

I took the above panoramic view, spanning something like 120 degrees, in a local park towards the end of last year. The sun was almost on the horizon to the southwest, at right of frame. The moon was well risen in the southeast, framed by the little red box in the image above. After taking the panorama, I zoomed in for the enlarged view of the moon shown in the inset, to demonstrate the apparent problem. The moon is higher in the sky than the sun, but its illuminated side is pointing slightly upwards, rather than being orientated, as one might expect, with a slight downward tilt to face the low sun.

This appearance is quite common, whenever the moon is in gibbous phase (between the half and the full), and therefore separated by more than 90 degrees from the sun in the sky. Every now and then someone notices the effect, and decides that they have to overthrow the whole of physics to explain it. I could offer you a link to a relevant page, but I won’t—firstly, I don’t like to send traffic to these sites; secondly, you might be driven mad by the experience and I’d feel responsible.

Actually, the illuminated part of the moon is pointing directly towards the sun; it just doesn’t look as if it is. So (as with my previous post “Why Do Mirrors Reverse Left And Right But Not Up And Down?”) the title of this post is an ill-posed question—it assumes something that isn’t actually so.

Here’s a diagram showing the arrangement of Earth, moon and sun in the situation photographed above:

The Earth-bound observer is looking towards the setting sun. Behind and above him is the moon, its Earth-facing side more than half-illuminated. The sun is so far away that its rays are very nearly parallel across the width of the moon’s orbit. In particular the light rays bringing the image of the setting sun to the observer’s eyes are effectively parallel to those shining on the moon—the divergence is only about a sixth of a degree.
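
That sixth-of-a-degree figure is easy to check. Here's a quick sketch in Python, using the mean Earth-moon and Earth-sun distances (standard textbook values, assumed here rather than quoted from anything above):

```python
import math

# Mean distances in kilometres (standard textbook values)
EARTH_MOON_KM = 384_400
EARTH_SUN_KM = 149_600_000

# The divergence between a sun-to-observer ray and a sun-to-moon ray is
# at most the angle the Earth-moon separation subtends at the sun.
divergence = math.degrees(math.atan(EARTH_MOON_KM / EARTH_SUN_KM))
print(f"{divergence:.3f} degrees")  # about 0.15 degrees
```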

But we know that parallel lines are affected by perspective. They appear to converge at a vanishing point. The most familiar example is that of railway lines, like these:

But there’s a problem with this sort of perspective. To illustrate it, I took some photographs of the top of the very low wall that surrounds the park featured in my first photograph:

The views look north and south towards two opposite vanishing points. The surface of the wall is marked with the remains of the old park railings, which were sawn off and removed during the Second World War. These provide a couple of reference points, which I’ve marked with numbers. The parallel sides of the wall appear to diverge as they approach the camera towards Point 1; and they appear to converge as they recede from the camera beyond Point 2. But what happens between 1 and 2?

I used my phone camera again to produce this rather scrappy and unconventional panorama, looking down on the top of the wall and spanning about ninety degrees:

The diverging perspective at Point 1 curves around to join the converging perspective at Point 2. It’s mathematically inevitable that this should happen—what’s surprising is that we’re generally unaware of it. In part, that’s because our normal vision spans a smaller angle than we can produce in a panoramic photograph; but it’s also because our brains are very good at interpreting the raw data from our eyes so that we see what we need to see. In this case, as we scan our eyes along the length of this wall, we have the strong impression that its sides are always parallel, despite the fact that its projection on our retinas is more like a tapered sausage with a bulge in the middle.

So: our brains are good at suppressing this “curve in the middle” feature of parallel lines in perspective, at least for simple local examples like railway lines and walls.

Now let’s go back to those parallel light rays coming from the sun and illuminating the moon. Like railway tracks, they’re affected by perspective. In the photograph below, the setting sun is projecting rays from behind a low cloud:

Although the rays are in fact parallel, perspective makes them seem to radiate outwards in a fan centred on the sun. I’ve written about these crepuscular rays in a previous post, and at that time suggested that whenever you see them you should turn around and look for anticrepuscular rays, too:

These converge towards the antisolar point—the point in the sky directly opposite the sun—and they’re produced by exactly the same perspective effect. Which means solar rays have to do the same “diverge, curve, converge” trick as the sides of my park wall. Unfortunately, crepuscular rays tend to fade into invisibility a relatively short distance from the sun, and to reappear as anticrepuscular rays only a relatively short distance from the antisolar point. So we can’t visually track their grand curves across the sky.

But we can see the effect of that perspective curvature when the low sun illuminates a gibbous moon. Here’s a diagram of a sheaf of parallel solar rays, as they would appear when projected on to the dome of the sky:

Perspective makes the sun’s rays diverge when the observer looks towards the sun, but converge when the observer turns and looks at the antisolar point. Because the sun is sitting on the horizon, all the rays in my diagram above are not only parallel to each other, but also to the horizon. And because the gibbous moon is more than ninety degrees away from the sun, it’s illuminated by rays that are apparently converging towards the antisolar point on the horizon, rather than spreading outwards from the sun.
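
The claim that the lit side can tilt upwards while still pointing straight at the sun can be checked numerically. Here's a minimal sketch: project the sun's direction into the tangent plane of the sky dome at the moon's position, and measure its elevation above the local horizontal. The altitude and azimuth figures at the end are illustrative guesses, not measurements from my photographs.

```python
import math

def unit_from_alt_az(alt_deg, az_deg):
    """Unit vector (east, north, up) for a sky direction."""
    alt, az = math.radians(alt_deg), math.radians(az_deg)
    return (math.cos(alt) * math.sin(az),
            math.cos(alt) * math.cos(az),
            math.sin(alt))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lit_limb_tilt(sun_alt, sun_az, moon_alt, moon_az):
    """Elevation (degrees) of the moon's illuminated side above the
    local horizontal, as seen at the moon's position on the sky dome."""
    s = unit_from_alt_az(sun_alt, sun_az)
    m = unit_from_alt_az(moon_alt, moon_az)
    # Component of the sun direction perpendicular to the line of sight
    # to the moon: this is where the lit limb appears to point.
    p = [si - dot(s, m) * mi for si, mi in zip(s, m)]
    # Local "up" at the moon's position, in the same tangent plane
    u = [zi - m[2] * mi for zi, mi in zip((0.0, 0.0, 1.0), m)]
    return math.degrees(math.asin(dot(p, u) /
                                  math.sqrt(dot(p, p) * dot(u, u))))

# Sun setting in the south-west, gibbous moon 30 degrees up in the
# east-south-east (made-up but plausible numbers)
print(f"{lit_limb_tilt(0, 225, 30, 120):+.1f} degrees")
```

The result comes out positive: even with the sun on the horizon and the moon 30 degrees up, the illuminated limb points slightly above the horizontal, which is exactly the illusion in the photograph. Bring the moon closer than 90 degrees to the sun and the sign flips, with the lit side tilting down towards the sun as intuition expects.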

So the impression that the moon’s illuminated portion doesn’t point towards the sun is a very strong one. This is because the scale of the moon-sun perspective is very much larger than the examples for which our brains have learned to compensate. The moon is the only illuminated object we see which is further away than a few kilometres, and our brains otherwise never have to deal with grand, horizon-spanning perspectives in illumination. So our intuitions tell us that the light rays illuminating the moon in the diagram above can’t possibly have come from the sun, since they’re apparently descending towards the antisolar point.

Standing in the open, observing the illusion, I find it impossible to mentally sketch the curve from sun to moon and see that it’s a straight line. Nothing that rises from one horizon and descends to the other horizon can possibly be a straight line, my brain insists, despite its cheerful acceptance that the straight, parallel sides of my park wall can appear to diverge and then converge in exactly the same way.

In the old days the approved way of demonstrating that there really was a straight line connecting the sun to the centre of the illuminated portion of the moon was with a long bit of string held taut between two hands at arm’s length. Placing one end of the string over the sun, and then fiddling with the other end until it intersected the moon, one could eventually produce a momentary impression that the straight line of the taut string really did align with the illuminated side of the moon. But it was all a bit unsatisfactory.

But now we have panorama apps on our phones. The one I use stitches together multiple images, and provides an on-screen guide that keeps each successive image aligned with the one before it, vertical edges parallel, so the user is forced to keep the viewing direction within a single plane as it shifts between successive frames. Usually, the object of the exercise is to scan along the horizon to obtain a wide-angle view of the scenery. But (as my odd little downward-looking panorama of the park wall demonstrated) it isn’t necessary to start the panorama with a vertically orientated camera aimed at the horizon.

So, back in the park and shortly after I took the image at the head of this post, I aimed my phone camera at the moon, and tilted it sideways so that it aligned with the tilted orientation of the moon’s illuminated portion. Then I triggered my panorama exposures and followed the on-screen guides—which led me across the sky in a rising and falling arc until I arrived at the setting sun!

Here’s the result:

So now perspective makes the horizon appear to curve implausibly, while the illuminated portion of the moon quite obviously faces directly towards the sun.

# We Are Stardust (Supplement)

I published my original “We Are Stardust” post some time ago, introducing the infographic above, which shows the cosmic origins of the chemical elements that make up our bodies, by mass. At that time I concluded that Joni Mitchell should actually have sung “We are 90% stardust,” because that’s the proportion of our body weight made up of atoms that originated in the nuclear fusion processes within stars. The remaining 10% is almost entirely hydrogen, which is left over from the Big Bang.

The original post got quite a lot of traffic, largely courtesy of the Damn Interesting website. But it also prompted one correspondent to ask, “But what proportion of our atoms comes from stars?” Which is an interesting question, with an answer that requires a whole new infographic.

If you want to know more about the background to all this—how various stellar processes produce the various chemical elements, and the function of those elements in the human body—I refer you back to my original post.

This time around, I’m just going to take the various weights by element I used in my last post, and divide them by the atomic weight of each element. There’s a wide range of atomic weights among the 54 elements on my list of those present in our bodies in more than 100-microgram quantities. The heaviest atoms in that group, like mercury and lead, are more than 200 times heavier than the lightest, hydrogen. So each microgram of hydrogen contains 200 times more atoms than a microgram of mercury or lead. And that skews the atomic make-up of the human body strongly towards the lighter elements, and particularly to those lighter elements that are common components of our tissues.

Most of our weight is water, which consists of hydrogen and oxygen, making these two elements the most common atoms in our bodies. The carbohydrates and fats in our tissues also contain hydrogen and oxygen, along with a lot of carbon, which is our third most common atom. Proteins contain the same three elements, along with nitrogen and a little sulphur. And in fact the four elements hydrogen, oxygen, carbon and nitrogen, all relatively light and all relatively common, account for almost all the atoms in our bodies. The seven kilograms of hydrogen in a 70-kilogram person accounts for 62% of all that person’s atoms. Oxygen accounts for 24%, carbon 12%, and nitrogen 1%. That leaves just 1% for the fifty other elements on my list.
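
The division sum is easy to reproduce. In this sketch the seven-kilogram hydrogen figure comes from the text above, while the oxygen, carbon and nitrogen masses are typical textbook values I've assumed for illustration:

```python
# Approximate elemental masses in a 70 kg person, in grams
# (hydrogen from the text; O, C and N are assumed textbook figures)
masses_g = {"H": 7_000, "O": 43_000, "C": 16_000, "N": 1_800}
atomic_weights = {"H": 1.008, "O": 16.00, "C": 12.01, "N": 14.01}

# Relative numbers of atoms: mass divided by atomic weight
moles = {el: masses_g[el] / atomic_weights[el] for el in masses_g}
total = sum(moles.values())

for el in moles:
    print(f"{el}: {100 * moles[el] / total:.0f}% of atoms")
```

This four-element approximation lands within a percentage point of the 62/24/12/1 split quoted above; the small remainder belongs to the fifty other elements on the list.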

The calcium and phosphorus in our bones and dissolved in our tissues account for a further 0.5%. The only other elements present at levels greater than 0.01% are the sulphur in our proteins, and the sodium, magnesium, chlorine and potassium which are dissolved as important ions in our body fluids. Everything else—the iron in our haemoglobin, the cobalt in Vitamin B12, the iodine in our thyroid glands—accounts for just 0.003% of our atoms.

Hydrogen is the major element left over from the Big Bang, so our atoms are dominated by that primordial element. Oxygen comes almost entirely from core-collapse supernovae, and so is the main representative of that stellar process in our bodies, along with significant amounts of carbon and nitrogen. But most of our carbon and nitrogen was blown off by red giant stars, and those two elements account for most of our atoms from that source. In fact those three sources—the Big Bang, core-collapse supernovae and red giant stars—provided almost all our atoms:

If you compare the graphic above to the one at the head of this post, you can see how the balance has shifted strongly towards hydrogen (with its very light atoms) and away from oxygen (with atoms sixteen times heavier than hydrogen). And the even heavier atoms from Type Ia supernovae are so rare I can’t now add them visibly to my graphic.

So perhaps Joni Mitchell should have sung, “We are 38% stardust.”

# Labyrinth

## ˈlæbɪrɪnθ

Labyrinth: 1) A structure consisting of a number of intercommunicating passages arranged in bewildering complexity, through which it is difficult or impossible to find one’s way without guidance. 2) A structure consisting of a single passageway winding compactly through a tortuous route between an entrance and a central point.

When Minos reached Cretan soil he paid his dues to Jove, with the sacrifice of a hundred bulls, and hung up his war trophies to adorn the palace. The scandal concerning his family grew, and the queen’s unnatural adultery was evident from the birth of a strange hybrid monster. Minos resolved to remove this shame, the Minotaur, from his house, and hide it away in a labyrinth with blind passageways. Dædalus, celebrated for his skill in architecture, laid out the design, and confused the clues to direction, and led the eye into a tortuous maze, by the windings of alternating paths. No differently from the way in which the watery Mæander deludes the sight, flowing backwards and forwards in its changeable course, through the meadows of Phrygia, facing the running waves advancing to meet it, now directing its uncertain waters towards its source, now towards the open sea: so Dædalus made the endless pathways of the maze, and was scarcely able to recover the entrance himself: the building was as deceptive as that.

Ovid Metamorphoses Book VIII (A.S. Kline translation)

The connection between the legendary labyrinth of the Minotaur and our local Maggie’s Centre, in the picture above, is perhaps not immediately evident. But all will become clear.

The labyrinth in which the Minotaur was confined, constructed by the architect Dædalus on the instruction of King Minos of Crete, was clearly imagined to be an exceedingly complicated maze of some kind—in the words of my first definition above, “consisting of a number of intercommunicating passages arranged in bewildering complexity”. So complex, in fact, that Ovid describes how the designer himself was hard-pressed to find his way out.

But during the Hellenic Age in Crete (long after the fall of the Bronze Age civilization associated with Minos and Dædalus), a representation of the labyrinth started to turn up on Cretan coinage; and it was quite obviously not a maze. It looked like this:

There’s no difficulty or confusion about finding your way in or out of a structure like this, because there is only one continuous route from its entrance to the single dead-end at the centre. So it corresponds to the second, more technical definition of labyrinth given above. In the jargon, the branching maze in which the Minotaur was confined is multicursal (“multiple paths”), whereas the labyrinth pattern on the coinage is unicursal (“single path”).

The multicursal maze went on to become an entertainment, as a feature of grand ornamental gardens during the 17th century—complex branching pathways bounded by hedges, intended to confuse and divert. The oldest surviving example in the UK is Hampton Court Maze, which looks like this:

These hedge mazes are nowadays often called “puzzle mazes”, but in their heyday were sometimes referred to as wildernesses. That seems like an odd word to use for something so manicured, but it derives from the old verb wilder, “to cause to lose one’s way” (as you might do in a wild or unknown place). And of course to bewilder is to put someone in such a state. In a striking parallel, our noun maze derives from the obsolete verb maze, meaning “to confuse, to drive mad”, and to amaze is to put someone in a state of confused astonishment.

In complete contrast to the alleged entertainment value of mazes, the unicursal labyrinth turned into an object of Christian devotion, laid out in stone on the floors of the great gothic cathedrals, such as the one at Chartres. Their exact purpose is unclear, but walking the winding route of the labyrinth seems to have been part of a ceremony of penitence, or perhaps a substitute for pilgrimage. Similar devotional labyrinths were also laid out in the open air using turf—they’re often referred to as mizmazes, and a few mediaeval examples still exist, like the one at Breamore, in Hampshire.

These mediaeval labyrinths had a more complex pattern than the classical version—instead of travelling in long arcs back and forth, the mediaeval labyrinth-walker encounters more frequent turning points. Here’s the pattern used at Chartres, which is very common elsewhere too.

Which brings us back to the local Maggie’s Centre. The centre itself is designed by Frank Gehry, who has given us many astonishing and beautiful buildings, including the Guggenheim Museum in Bilbao. And the sculpted landscape in front of it was designed by Arabella Lennox-Boyd. As you can see in my photograph, it contains a copy, in cobblestones set in turf, of the Chartres labyrinth—a modern mizmaze, in fact.

If you want to draw a classical labyrinth for yourself, you need to start with a “seed”—a cross, four right-angles and four dots. The black figure in the diagram below is the seed. Start by connecting the top of the cross to the top of one of the right angles (red line below), then the top of a right angle to the dot nested in the opposite right angle (orange path). The sequence thereafter should be clear, working your way through each successive pair of anchor points, joining them by wider and wider loops, indicated by the successive colours of the rainbow below.

The result is the classical labyrinth I presented earlier, which is a seven-circuit, right-handed labyrinth—there are seven loops around the dead-end centre, and the first turn after the entrance is to the right. The left-hand version is simply the mirror image of this one. You can add more circuits by nesting four more right angles into your seed, interposed between each dot and each existing right angle—that adds four more circuits. And you can keep adding multiples of four in this way for as long as your time, patience and drawing materials last.
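
The circuit count therefore follows a simple rule: seven for the basic seed, plus four for each extra set of nested right angles. As a trivial sketch (`nestings` is just my own name for the number of times you enlarge the seed):

```python
def circuits(nestings):
    """Circuits in a classical labyrinth after nesting four extra
    right angles into the seed the given number of times."""
    return 7 + 4 * nestings

print([circuits(n) for n in range(4)])  # [7, 11, 15, 19]
```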

Labyrinth is a bit of an etymological loner, coming to us pretty much straight from the Greek, and forming a little cluster of words in English directly related to its meaning. Of its adjectival forms, only labyrinthine (“like a labyrinth”) has survived in common use, leaving labyrinthal, labyrinthial, labyrinthian, labyrinthic and labyrinthical in the dustbin of disuse. Labyrinthiform is still in technical use, designating anatomical structures that form convoluted tunnels.

The loops and curls forming the hearing and balance organs of our inner ear sit in a cavity within the temporal bone of the skull, called the bony labyrinth. The delicate winding and branching tubes themselves are collectively called the membranous labyrinth. The derivation of the name should be obvious from the diagram below:

Labyrinthitis is an inflammation of these organs, which causes disabling dizziness and unpleasant tinnitus.

The legendary King Minos, who commissioned the original labyrinth, has given us two words. The first is Minotaur (“Minos bull”), the name of the half-human creature confined within the Cretan labyrinth. This seems a little unfair on poor Minos, since the Minotaur was the product of his wife Pasiphaë’s lust for a bull. (Though admittedly she had been cursed, and it may have been Minos who offended the gods and caused the curse. So it goes.) The second word Minos has given us is Minoan, the designation for the Bronze-Age civilization that flourished on Crete between 2000 and 1500 BCE. It was named in Minos’ honour by the archaeologist Arthur Evans, who excavated the palace at Knossos in 1900.

Dædalus, the architect of the labyrinth (who had also reprehensibly aided Pasiphaë in her assignation with the infamous bull) has a slightly larger footprint in the English language than his king. We know him best by the Latinized version of his name—the original Greek was Daidalos, “the cunning one”, which is why a skilled artificer was once referred to as a Dædal. And something skilfully fashioned could be described with the adjectives dædal, Dædaleous or Dædalian. To dædalize is to make things unnecessarily complicated, and something pan-dædalian has been wrought by curious and intricate workmanship.

Finally a logodædalus or logodædalist is a person who is cunning with words; an example of such cunning is called logodædaly. I hope this post has given you material for some logodædaly of your own.

# We Are Stardust

We are stardust
We are golden
And we’ve got to get ourselves
Back to the garden

Joni Mitchell “Woodstock” (1970)

A few months ago I ran into the periodic table above, detailing the cosmological origins of the chemical elements. And it occurred to me that I could quantify Joni Mitchell’s claim that “we are stardust”. How much of the human body is actually produced by the stars? But before I get to that, I should probably explain a little about the various categories indicated by the colours in the chart above.

As the Universe expanded after the Big Bang, protons and neutrons were formed from quarks, and there was a brief period (a few minutes) when the whole Universe was hot and dense enough for nuclear fusion to take place—just enough time to build up the nuclei of a couple of very light elements, but not enough to produce anything heavier in any significant quantity.* So what came out of the Big Bang was hydrogen, helium and a little lithium—the first three elements of the periodic table. The rest of the chemical elements that make up the solar system, the Earth, and our bodies were produced by fusion reactions inside stars.

The first stars formed about 100 million years after the Big Bang, and were composed entirely of hydrogen, helium, and a little lithium. Conditions at that time favoured the formation of large stars, which burned through their nuclear fuel quickly. A series of fusion reactions in such massive stars generates energy, producing progressively heavier atomic nuclei until the “iron peak” is reached—beyond that point, the creation of heavier nuclei requires an input of energy. When the star exhausts its energy reserves in this way, it collapses and then explodes as a core-collapse supernova (an “exploding massive star” in the table above). This propels the elements synthesized by the star out into space, and also drives a final burst of additional fusion as shock waves sweep outwards through the body of the star, producing a few elements that lie beyond the iron peak, as far as rubidium. The remnant of the supernova’s core collapses into a neutron star, or perhaps even a black hole.

These elements produced by the first supernovae seed the gas clouds which later condense into subsequent generations of stars. The lower-mass stars which appeared later in the Universe’s lifetime (such as the sun) are unable to drive internal fusion as far as the iron peak, and stall their fusion processes after producing only the relatively light nuclei of carbon and oxygen. They evolve into red giant stars (“dying low-mass stars”, above), puff off their outer layers, and then subside to become white dwarfs. But during this process they are able to breed higher-mass elements from the metal contaminants they inherited from the first supernovae. Neutrons from the star’s fusion reactions are absorbed by these heavy nuclei, gradually building them up into ever-heavier elements with higher atomic numbers, as some of the neutrons convert to protons by emitting an electron (beta decay). This building process finally “sticks” when it gets to bismuth-210, which beta-decays to polonium-210; that nucleus in turn decays by emitting an alpha particle (two neutrons and two protons), rather than an electron, and the chain falls back to lead. So, perhaps counter-intuitively, the gentle wind from low-mass stars in their red-giant phase enriches the interstellar medium with heavier atoms than does the spectacular explosion of a supernova.

But once a low-mass star has finished its red-giant phase and settled to become a white dwarf, it may (rarely) manage to turn into a supernova. For this to happen, it must have a companion star orbiting close by, which expands and spills material from its own outer envelope on to the white dwarf’s surface. Once a critical mass of material accumulates, a runaway fusion reaction takes place and blows the white dwarf apart in what’s called a Type Ia supernova. The nature of the fusion reaction is different from what occurs in the core-collapse supernovae, so Type Ia supernovae (“exploding white dwarfs”, above) eject a slightly different spectrum of elements into space—in particular, they don’t create any elements beyond the iron peak.

The processes described so far account for almost all the elements up to bismuth. To produce heavier elements requires a massive bombardment of neutrons, building up nuclei faster than they can decay and thereby pushing beyond the “bismuth barrier” I described earlier. Such torrents of neutrons occur when neutron stars collide. As their name suggests, neutron stars contain a lot of neutrons, and if two of these supernova remnants are formed in close orbit around each other they may eventually collide. This unleashes a massive blast of neutrons, which bombard the conventional matter on the surface of the neutron stars, building up heavy radioactive elements before they have a chance to decay, and ejecting these products into space.

Finally, a few of the lightest elements are formed by cosmic rays (particles generated during supernova explosions). When these rapidly moving particles strike carbon or oxygen nuclei in space, they can break them into lighter fragments. This process accounts for almost all the beryllium and boron in the Universe, and some of the lithium.

Here on Earth, we have a few more processes that contribute to the mix of chemical elements around us. There’s natural radioactive decay, which is slowly converting some chemical elements into slightly lighter ones. And there are artificial radioactive elements, which we produce in our bombs and nuclear reactors. But these are essentially minor processes in the scheme of things, and I feel safe to ignore them here.

It should come as no surprise that our bodies are made up primarily of the most common chemical elements in the Universe—that is, hydrogen from the Big Bang, and those elements from early in the periodic table which are most frequently spewed out by dying or exploding stars. Indeed, apart from the noble gases helium, neon and argon, an adult human body contains significant traces (more than 100 micrograms) of every element from hydrogen to the “iron peak” elements that represent the limits of equilibrium stellar fusion processes. And a surprising number of these elements have biological roles.

The water that makes up the bulk of our bodies is composed of hydrogen and oxygen. The fats and carbohydrates of our tissues consist of hydrogen, oxygen and carbon, and our proteins add nitrogen and a little sulphur to that mix. Calcium and phosphate are the major structural components of our bones, and sodium, potassium, magnesium and chlorine are present as dissolved ions in our body water, regulating the activity of our cells.

In smaller quantities, there are elemental micronutrients that we must have in our diet to stay healthy. Iron, an oxygen-carrying component of haemoglobin and myoglobin, is the major such micronutrient. Iodine is required for thyroid function, cobalt is a component of Vitamin B12, and chromium is present in a hormone called Glucose Tolerance Factor. To these we can add manganese, copper, zinc, selenium and molybdenum, all of which are required for the function of various enzymes. People eating anything approximating a normal diet obtain all these latter elements in adequate quantities, but they must be carefully provided for patients reliant on intravenous feeding in Intensive Care Units.

A few elements seem to produce deficiency syndromes in experimental animals that are fed very carefully controlled diets. Silicon, vanadium, nickel and tin fall into this group, but their biological role, and relevance to humans, is unknown. And then there are the elements which are known to be present in the human body, but appear to have no function—they’re probably just in our tissues because they’re in our food. Some are industrial contaminants, like mercury and lead; but some, like lithium and boron, are probably just part of the natural environment.

Estimates of the elemental make-up of the human body vary. I’ve used the figures quoted by John Emsley in his marvellous book Nature’s Building Blocks: An A-Z Guide to the Elements, and found 54 elements that are present in a 70-kilogram human in quantities that exceed 100 micrograms.

Summing the proportions of all these elements that come from different cosmological sources, I was able to produce this infographic:

(There are no prizes for identifying the original source of the human outlines used.)

Hydrogen, although the most common element in the human body, is also the lightest. So it accounts for just 10% of our weight, and that is, to a good approximation, the only component of our bodies that originated in the Big Bang, because we contain helium and lithium in only tiny quantities.

So the rest was produced, directly or indirectly, in stars. The oxygen in our body water and in our tissues accounts for the large majority of that, originating from core-collapse supernovae. Carbon and nitrogen in our tissues make up much of the remaining mass, mainly coming from the stellar wind produced by low-mass stars in their red-giant phase. And the rare white-dwarf explosions, Type Ia supernovae, account for just 1% of our weight, producing a significant fraction of the calcium and phosphorus in our bones, and some of the important ions dissolved in our body water. Neutron star mergers, even rarer than exploding white dwarfs, are responsible for a few trace elements, most notably almost all the iodine in our thyroid glands. And cosmic rays from supernovae account for the production of the (apparently biologically inactive) boron and beryllium in our bodies, as well as a little of the lithium.

So to be strictly accurate, Joni Mitchell should have written “We are 90% stardust”.

* One of the earliest publications on this topic was a Letter to the Editor entitled “The Origin Of Chemical Elements” (Physical Review 1948 73: 803-4). The authors were Ralph Alpher, Hans Bethe and George Gamow. Alpher was a PhD student at the time, and Gamow was his supervisor. Alpher’s dissertation was on the topic of what’s now called Big Bang nucleosynthesis—the process of nuclear fusion during the first few minutes of the Big Bang. Bethe was a physicist working in the field of nuclear fusion in stars, but had made no contribution at all to Alpher’s work. He was only included as an author to allow Gamow to make a pun on the Greek alphabet—Alpher, Bethe, Gamow; alpha, beta, gamma. Gamow must have been delighted when their letter was published in the April 1 edition of Physical Review.

At length it was the eve of Old Lady-Day, and the agricultural world was in a fever of mobility such as only occurs at that particular date of the year. It is a day of fulfilment; agreements for outdoor service during the ensuing year, entered into at Candlemas, are to be now carried out. The labourers—or “work-folk”, as they used to call themselves immemorially till the other word was introduced from without—who wish to remain no longer in old places are removing to the new farms.

Thomas Hardy Tess of the d’Urbervilles

Yesterday (as this post goes live) was Old Lady-Day, once a significant day in the English agricultural calendar, as Thomas Hardy describes above. And today (April 6th), a new tax year begins in the UK. These dates are not unrelated to each other, and are also linked to the Christian Feast of the Annunciation, which commemorates the Biblical event depicted in the Leonardo painting at the head of this post—the arrival of the Angel Gabriel to inform the Virgin Mary that she was to conceive a miraculous child. As the King James Version of the Bible tells the story:

And the angel came in unto her, and said, Hail, thou that art highly favoured, the Lord is with thee: blessed art thou among women.
And when she saw him, she was troubled at his saying, and cast in her mind what manner of salutation this should be.
And the angel said unto her, Fear not, Mary: for thou hast found favour with God.
And, behold, thou shalt conceive in thy womb, and bring forth a son, and shalt call his name JESUS.

Luke 1:28-31

This event was called “The Annunciation To Mary”, or “The Annunciation” for short, annunciation being the act of announcing something. When it came to nailing these events to the Christian calendar, it made sense for the Feast of the Annunciation to fall exactly nine months before the Feast of the Nativity, which celebrates the birth of Jesus. Since that festival, Christmas Day, falls on December 25th in the Western Christian tradition, the Feast of the Annunciation occurs on March 25th. In Britain, that day is commonly called “Lady Day”, a reference to “Our Lady”, the Virgin Mary.

As well as being a religious feast-day, Lady Day was a significant secular date, too. As one of the four English quarter days*, it was a time when payments fell due and contracts were honoured. Farm labourers were usually indentured to work for a year at a time, and if they wanted to change jobs they all did so on Lady Day.

In fact, Lady Day was such an important date in the calendar that it marked the start of the New Year in English-speaking parts of the world for almost 600 years. While it may seem very strange to us now, under what was called “The Custom of the English Church” the year number would increment on March 25th each year, rather than on January 1st.

Scotland switched to using January 1st for New Year’s Day in 1600, but England didn’t make the change until it adopted the Gregorian calendar reform in 1752. I’ve written before about how eleven days were dropped from September that year, to bring the calendar back into alignment with the seasons.

So 1752 was famously a short year in the English-speaking world. But it’s probably less well-known that 1751 was an even shorter year in England, since it began on March 25th, but ended on December 31st.

The missing eleven days in 1752 created a problem for all the legal stuff relating to contracts and debts that fell due on Lady Day. All the famous protests about “Give Us Back Our Eleven Days” were not from ignorant people who thought their lives had been shortened, but from people who were being paid daily wages, but were settling their debts monthly or quarterly or yearly.

One solution to this problem was to shift the dates of contracts and payments to compensate—so instead of changing jobs on Lady Day 1753, farm labourers worked until eleven days later, April 5th, and continued to renew their contracts on that day in subsequent years. April 5th therefore became known as “Old Lady-Day”, an expression that was still in use in 1891, when Hardy wrote about it in the quotation at the head of this post.

Similarly, the date of the new tax year moved from March 25th to April 5th—so the workers were given back their eleven days, at least by the tax authorities, if not by their landlords and other creditors.

But wait. I told you that the tax year in the UK begins on April 6th, not on Old Lady-Day. Why has it shifted by another day? Because under the old Julian calendar, the year 1800 was due to be a leap year, but under the new Gregorian calendar, it was not. So workers were being done out of a day’s wages in 1800 because of the calendar reform, and the tax authorities duly shifted the date of the new tax year by a day, to compensate. By 1900, which also dropped a leap day, these calendrical niceties seem to have been forgotten, and no shift in the tax year occurred, so the date has remained the same ever since.

So there you are. People in Britain used to start a new tax year at the start of a new calendar year, back when Lady Day was also New Year’s Day. Now, thanks to a centuries-old calendar reform and a surprising impulse of fairness from the tax authorities of the time, we calculate our taxes from what seems (to the uninitiated) like a random date in April.

* The other quarter days were Midsummer Day (June 24th), Michaelmas (September 29th) and Christmas (December 25th).
Oddly enough, the same date was also used in Florence, which earned that system of year reckoning its alternative name, Calculus Florentinus.

# Falling Through The Earth

If the alien cyborgs have constructed this miraculous planet-coring device with the precision I would expect of them, I predict we shall plunge entirely through the center and out to the other side.

There’s an old puzzle in physics, to work out how long it would take a person to fall right through the centre of the Earth to the other side, along a tunnel constructed through its core. Gregory Benford is the only science fiction writer I’ve ever seen attempt to incorporate that scenario into a story—a particularly striking bit of audacity that made it on to the cover of some editions of his novel.

For simplicity, the puzzle stipulates that the tunnel contains no air to slow the faller, and usually also specifies that it has a frictionless lining—any trajectory that doesn’t follow the rotation axis will result in the faller being pressed against the side of the tunnel by the Coriolis effect. (Benford had his protagonist Killeen fall through an evacuated tunnel from pole to pole, thereby avoiding Coriolis.)

So our unfortunate faller drops through a hole in the surface of the Earth, accelerates all the way to the centre, and then decelerates as she rises towards the antipodal point, coming to a momentary halt just as she pops out of the hole on the far side of the planet—hopefully not too flustered to remember to grab a hold of something at that point, so as to avoid embarking on the return journey.

We can set a lower bound to the duration of that journey by working out how long it would take to fall from the surface of the Earth to the centre, assuming all the Earth’s mass is concentrated at that point. Because the journey is one that involves symmetrical acceleration and deceleration, doubling this number will give us the duration of the complete traverse of the tunnel.

This means our faller moves under the influence of an inverse-square gravitational field throughout her fall. The acceleration at any given distance from the centre of such a field is given by:

\large a=\frac{GM}{r^2}

where a is the acceleration, G is the Gravitational Constant, M is the central mass and r the radial distance. That’s an important equation and I’ll invoke it again, but for the moment it’s more useful to know that the potential energy of an object in such a field varies inversely with r. The gravitational potential energy per unit mass is given by:

\large U_{m}=-\frac{GM}{r}

If we drop an object starting from rest at some distance R, we can figure out its kinetic energy per unit mass at some distance r by subtracting one potential energy from the other. And from that, we can figure out the velocity at any given point during the fall:

\large v=\sqrt{2GM\left (\frac{1}{r}-\frac{1}{R}\right)}

Finally, integrating* inverse velocity against distance gives us the time it takes to fall from a stationary starting point at R to some lesser distance r. For the special case of r=0, this comes out to be

\large t=\frac{\pi }{2\sqrt{2}}\sqrt{\frac{R^3}{GM}}

This calculation, incidentally and remarkably, is a major plot element of Arthur C. Clarke’s 1953 short science-fiction story “Jupiter V”.

Plugging in values for the mass and mean radius of the Earth, we find that t turns out to be 895 seconds. Doubling that value gives a total time to fall from one side of the Earth to the other of 29.8 minutes. That’s our lower bound for the journey time, since it assumes the faller is subject to the gravitational effect of Earth’s entire mass throughout the journey. (We draw a veil over what would actually happen at the centre point, where the faller would encounter a point of infinite density and infinite gravity.)
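If you'd like to check that arithmetic, here's a quick Python version, using standard values for G and for the Earth's mass and mean radius:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth, m

# Time to fall from rest at R to the centre of a point mass M:
# t = (pi / (2*sqrt(2))) * sqrt(R^3 / GM)
t = (math.pi / (2 * math.sqrt(2))) * math.sqrt(R**3 / (G * M))

print(round(t), "seconds")              # ~895 s
print(round(2 * t / 60, 1), "minutes")  # ~29.8 min for the full traverse
```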

We can also put an upper bound on the journey time by assuming the Earth is of uniform density throughout. Under those circumstances, instead of the gravitational acceleration getting ever higher as our faller approaches the centre of the Earth, the acceleration gets steadily lower, and reaches zero at the centre. This is because of something called Newton’s Shell Theorem, which shows that the gravitational force experienced by a particle inside a uniform spherical shell is zero. (Which rather undermines the premise of Edgar Rice Burroughs’s “Hollow Earth” novels.)

So as our faller descends into the Earth, she is accelerated only by the gravity of the spherical region of the Earth that is closer to the centre than she is. For any given radial distance r, the mass m of this interior sphere will be

\large m=\rho \frac {4}{3} \pi r^3

where ρ is the density.

Plugging that into our equation for the acceleration due to gravity (the first in the post) we get:

\large a=\frac {4} {3} \pi \frac {G \rho r^3} {r^2}=\frac {4} {3} \pi G \rho r

So the acceleration is directly proportional to the distance from the centre. This is the defining property of a simple harmonic oscillator, like a pendulum or spring, for which the restoring force increases steadily the farther from the neutral position we displace the oscillating mass.

Which is handy, because there’s a little toolbox of equations that apply to simple harmonic motion (they’re at the other end of my link), and with a bit of fiddling we can derive our journey time. The basic time parameter for oscillators is the period of oscillation, which I’ll call P. But that would be the time taken to fall from one side of the Earth to the other and back again. So the time we’re interested in is just half of that:

\large \frac {P} {2}=\pi \sqrt {\frac {3} {4 \pi G \rho}}

And plugging in the value for the mean density of the Earth, that shakes down to 42.2 minutes.
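Again, easily checked in a couple of lines of Python, with the standard value of 5514 kg/m³ for the Earth's mean density:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 5514      # mean density of the Earth, kg/m^3

# Half-period of the simple harmonic oscillation: P/2 = pi * sqrt(3 / (4 pi G rho))
half_period = math.pi * math.sqrt(3 / (4 * math.pi * G * rho))
print(round(half_period / 60, 1), "minutes")  # ~42.2
```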

Notice how this length of time depends only on the density—it actually doesn’t matter how big our uniform sphere is, the time to fall from one side to the other remains the same so long as the density is the same. This is analogous to the fact that the period of oscillation of a pendulum doesn’t depend on how far we displace it—which is why we have pendulum clocks.

And because the acceleration due to gravity, g, at the surface of a spherical mass of radius R is given by

\large g=\frac {GM} {R^2}=\frac {4 \pi G \rho R^3} {3R^2}=\frac {4 \pi G \rho R} {3}

we can also derive our half-period P/2 as

\large \frac {P} {2}=\pi \sqrt{\frac{R}{g}}

A nice compact formula depending on radius and surface gravity, which is often quoted in this context.

And this is the gateway to another interesting result for spheres of uniform density, if you’ll permit me a brief digression. Suppose we dig a straight (evacuated, frictionless) tunnel between any two points on the surface of the sphere, and allow an object to slide along the tunnel under the influence of gravity alone—a concept called a gravity train. How long will such an object take to make its journey from one point on the surface to the other? It turns out that this time is exactly the same as the special case of a fall through the centre. We can see why by constructing the following diagram:

By constructing similar triangles, we see that the ratio of R (the distance to the centre of the Earth) to d (the distance to the centre of the tunnel) is always locally the same as the ratio of g (the local gravitational acceleration) to a (the component of g that accelerates the gravity train along the tunnel). So for any straight tunnel at all, d/a is always equal to R/g, which (from the equation above) we know determines the period of oscillation through a central tunnel.

Remarkably then, we can take a sphere of uniform density of any size, and drill a straight hole between any two points on its surface, and the time it takes to fall from one surface point to the other will be exactly the same, and determined entirely by the density of the sphere.
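A quick numerical experiment bears this out. The sketch below (my own construction, assuming a perfectly uniform sphere of the Earth's mean density) integrates the motion along three chords of different depths; since only the along-tunnel component of gravity, which works out to −ω²u, accelerates the train, the traverse time comes out the same every time:

```python
import math

G, rho, R = 6.674e-11, 5514, 6.371e6
omega2 = 4 * math.pi * G * rho / 3   # a = -omega^2 * u along any chord

def chord_time(d, dt=0.05):
    """Minutes to slide along a straight frictionless chord whose midpoint
    lies at perpendicular distance d from the centre of the sphere."""
    u0 = math.sqrt(R**2 - d**2)   # half-length of the chord
    u, v, t = u0, 0.0, 0.0
    while True:
        # leapfrog (kick-drift-kick) integration of du/dt = v, dv/dt = -omega^2 u
        v -= 0.5 * dt * omega2 * u
        u += dt * v
        v -= 0.5 * dt * omega2 * u
        t += dt
        if v >= 0:                # the train has reached the far end
            return t / 60

for d in (0.0, 0.5 * R, 0.9 * R):   # a central tunnel and two shallower chords
    print(round(chord_time(d), 1))  # ~42.2 minutes for all three
```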

But back to the original problem. I’ve determined that the fall time is somewhere between 29.8 and 42.2 minutes. Here are plots of the velocity and time profiles for the two scenarios I’ve discussed so far:

Can I be more precise? I can indeed, by using the Preliminary Reference Earth Model (PREM), which uses seismological data to estimate how the density of the Earth varies with distance from its centre.

Taking those figures and Newton’s Shell Theorem, I can chart how the acceleration due to gravity will vary as our faller descends into the Earth. Here’s the result, with density in blue plotted against the left axis, and gravity in red against the right axis:

As our faller descends through the low-density crust and mantle of the Earth, and approaches the high-density core, she actually finds herself descending into regions of higher gravity, reaching a maximum about 9% higher than the gravity at the Earth’s surface when she reaches the boundary between the mantle and the core.

If I take the data from the PREM as representing a succession of shells, across which acceleration rises or falls linearly within each shell, I can integrate my way through each shell in turn, deriving velocity and elapsed time. For each shell of thickness s, with initial velocity v0, initial acceleration a0 and final acceleration a1, the final velocity is given by

\large v=\sqrt{v_{0}^{2}+(a_{0}+a_{1})s}

and the time taken to traverse the shell is

\large t=\int_{0}^{s}\frac{dx}{\sqrt{v_{0}^{2}+2a_{0}x+\frac{a_{1}-a_{0}}{s}x^{2}}}

Handing off v as v0 to the next shell, and summing all the t’s once I reach the centre of the Earth, will give me the answer I want.

The solution to the time integral is a bit messy, coming out as an arcsin equation when a0 > a1, and a natural log when a1 > a0.

But it’s soluble, and with steely nerves and a large spreadsheet, the graphs of the solution fall neatly between the two extremes I figured out earlier:

The summed times for the full journey come out to 38.2 minutes. And that’s my best estimate of the answer to the question posed at the head of this post.
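If you'd like to play with the method without transcribing the full PREM tables, here's a sketch of the same machinery applied to a crude two-layer Earth. The core and mantle densities below are round illustrative numbers of my own choosing (not PREM values), picked only so that the model's mean density comes out close to the real Earth's:

```python
import math

G = 6.674e-11
R = 6.371e6          # surface radius, m
RC = 3.48e6          # core-mantle boundary radius, m (roughly the PREM value)
RHO_CORE = 11000.0   # illustrative round numbers, not PREM data
RHO_MANTLE = 4400.0

def mass_within(r):
    """Mass interior to radius r, by the shell theorem, for the two-layer model."""
    if r <= RC:
        return (4 / 3) * math.pi * RHO_CORE * r**3
    core = (4 / 3) * math.pi * RHO_CORE * RC**3
    return core + (4 / 3) * math.pi * RHO_MANTLE * (r**3 - RC**3)

def fall_time_minutes(n=200_000):
    """March inward from the surface in thin shells, using energy conservation
    for the velocity, and accumulate dt = dr / v across each shell."""
    dr = R / n
    v2, t = 0.0, 0.0   # velocity squared, elapsed time
    for i in range(n):
        r_mid = R - (i + 0.5) * dr
        a = G * mass_within(r_mid) / r_mid**2
        v2_mid = v2 + a * dr        # v^2 at the middle of this shell
        t += dr / math.sqrt(v2_mid)
        v2 += 2 * a * dr            # v^2 at the bottom of this shell
    return 2 * t / 60               # double it for the full traverse

print(round(fall_time_minutes(), 1))  # lands between the 29.8 and 42.2 bounds
```

Swapping in the real PREM density profile in place of the two-layer toy is what produces the 38.2-minute figure.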

* Integrating this expression turned out to be a little tricky, at least for me.

\small t=\sqrt{\frac{R}{2GM}}\int_{r}^{R}\sqrt{\frac{r}{R-r}} dr

After mauling it around and substituting sin²θ for r/R, then mauling it around some more, I ended up with this eye-watering equality as the general solution:

\small t=\sqrt{\frac{R^3}{2GM}}\cdot \left [ \frac{\pi }{2}-\arcsin\left ( \sqrt{\frac{r}{R}} \right )+ \sqrt{\frac{r}{R}}\cdot \sqrt{1-\frac{r}{R}} \right ]

Exact solutions for this integral look like this:
For a0>a1:

\tiny t=\sqrt{\frac{s}{a_{0}-a_{1}}}\cdot \left [ \arcsin\left ( \frac{a_{0}}{\sqrt{a_{0}^2+v_{0}^2\left ( \frac{a_{0}-a_{1}}{s}\right )}} \right )-\arcsin\left ( \frac{a_{1}}{\sqrt{a_{0}^2+v_{0}^2\left ( \frac{a_{0}-a_{1}}{s}\right )}} \right ) \right ]

For a1>a0:

\small t=\sqrt{\frac{s}{a_{1}-a_{0}}}\cdot \ln\left [ \frac{\sqrt{\frac{a_{1}-a_{0}}{s}\left ( v_{0}^2+sa_{0}+sa_{1} \right )}+a_{1}}{\sqrt{\frac{a_{1}-a_{0}}{s} v_{0}^2}+a_{0}} \right ]

(If you can simplify any of these, be sure to let me know.)
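If you'd rather trust a computer than my spreadsheet, both branches can be checked against brute-force quadrature of the time integral. The function names here are mine; the formulas are the two above, and they assume a0 and a1 differ:

```python
import math

def t_closed(v0, a0, a1, s):
    """Closed-form time to cross a shell of thickness s, entered at speed v0,
    with acceleration varying linearly from a0 to a1 (assumes a0 != a1)."""
    if a0 > a1:   # arcsin branch
        k = (a0 - a1) / s
        q = math.sqrt(a0**2 + v0**2 * k)
        return math.sqrt(s / (a0 - a1)) * (math.asin(a0 / q) - math.asin(a1 / q))
    k = (a1 - a0) / s   # log branch
    num = math.sqrt(k * (v0**2 + s * a0 + s * a1)) + a1
    den = math.sqrt(k * v0**2) + a0
    return math.sqrt(s / (a1 - a0)) * math.log(num / den)

def t_numeric(v0, a0, a1, s, n=100_000):
    """Brute-force midpoint quadrature of the time integral."""
    h = s / n
    return sum(
        h / math.sqrt(v0**2 + 2 * a0 * x + (a1 - a0) * x**2 / s)
        for x in ((i + 0.5) * h for i in range(n))
    )

# a hypothetical shell: 100 km thick, entered at 1 km/s
for a0, a1 in ((9.8, 5.0), (5.0, 9.8)):
    assert abs(t_closed(1000.0, a0, a1, 1.0e5) - t_numeric(1000.0, a0, a1, 1.0e5)) < 1e-3
```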

# Long-Exposure Bicycle Spokes

The Boon Companion has been experimenting with long exposure times and intentional camera movement, of late. She was just about to discard the motion-blurred cyclist above as a failed experiment when something about the image caught my eye.

In the thirtieth-of-a-second exposure, the bicycle wheel has rolled a short distance. But why do the spokes look curved? Why don’t the curves point towards the centre of the wheel? And why is the effect only visible in the lower half of the wheel?

So I sat down to figure out the trajectory of a bicycle spoke as the wheel rolls along the ground. As you do.

Any point on a wheel rolling across a flat surface without slipping follows a curve called a trochoid (from Greek trochos, “wheel”). I won’t pester you with the relevant equations (they’re at the other end of the link above). Here’s what the trochoid curves look like for the two ends of a radial spoke, spanning the distance between a wheel hub and a thick wheel rim:

The shape of the wheel is plotted in grey dashes, with a single vertical spoke marked. As the wheel rolls left or right, the ends of the spoke follow the curved trochoid trajectories, with successive positions marked at 20º intervals.
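For anyone who wants to reproduce these curves, the trochoid equations are compact enough to sketch in a few lines of Python. The rim and hub radii here are assumed values, purely for illustration:

```python
import math

R_RIM = 0.31   # radius to the wheel rim, m (assumed)
R_HUB = 0.03   # radius of the hub, m (assumed)

def trochoid(r, theta_deg, wheel_radius=R_RIM):
    """Position of a point at radius r on a wheel of radius wheel_radius,
    after the wheel has rolled through theta degrees (to the right, without
    slipping). At theta = 0 the point sits directly below the axle."""
    th = math.radians(theta_deg)
    x = wheel_radius * th - r * math.sin(th)
    y = wheel_radius - r * math.cos(th)
    return x, y

# successive positions of the two ends of a radial spoke, at 20-degree steps
rim_path = [trochoid(R_RIM, th) for th in range(0, 361, 20)]
hub_path = [trochoid(R_HUB, th) for th in range(0, 361, 20)]
```

Plotting `rim_path` and `hub_path` reproduces the pair of curves shown above; the rim end touches the ground once per revolution, while the hub end never comes close to it.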

But bicycle wheels don’t (usually) have radial spokes, and I felt obliged, going into the problem, to look at the position of real bicycle spokes. Here’s a very common pattern:

Thirty-six spokes, laid out in what’s called a “three cross” pattern. Fundamentally, there are only two different spoke alignments in this pattern—leading and trailing.

For a wheel rolling from right to left, the spoke I’ve highlighted in red is leading, and the blue spoke is trailing. They’re simply mirror images of each other. One side of the wheel has nine leading spokes, spaced 40º apart, and nine trailing spokes in the same pattern. The other side of the wheel is laid out exactly the same way, but with the pattern rotated by 20º. The final result is called a “three cross” pattern because each trailing spoke crosses three leading spokes on its side of the wheel (and vice versa).

In this pattern, the anchor point for the spoke at the hub is offset 60º relative to its attachment at the rim. So to see the trajectory of a representative bicycle spoke, I need to slide the trochoid curve for the hub 60º out of alignment with the rim curve. Here it is, with the spoke drawn in at 20º intervals:

This is the trajectory of a trailing spoke for a wheel rolling right to left, and a leading spoke for a wheel rolling left to right.

We can already get a hint of why some sort of spoke pattern shows up in the lower half of the wheel, but not the upper. In the upper half, the spoke is moving rapidly sideways, as it pivots across the top of the wheel; in the lower half it performs a sort of dipping motion, arcing downwards towards the point at which the wheel contacts the road, and then arcing back up again.

Now, I figure a cyclist moving at a reasonable speed for a shared-use path will rotate the wheels through about 30º during a thirtieth-of-a-second exposure. Here’s a more detailed trajectory for a trailing spoke (for a bicycle moving right-to-left) during a 30º rotation:

You can see that the spoke slides along itself, to some extent—different parts of the spoke occupy the same spatial position at different times. These are the only parts of the spoke that will show up as a dense “shadow” during a prolonged exposure that blurs the other parts. In the final photograph we’ll therefore see something like the arc I’ve sketched in red, while the rest of the spoke is smeared into a blur:

Something similar happens for a leading spoke as it passes through the same position:

So, actually, while the leading/trailing distinction slightly changes the details, both kinds of spoke produce a curved shadow during a prolonged exposure.

If we catch a spoke that’s higher in its trajectory, we get another arc:

Now we can put these images together, showing the curved shadows of several spokes at once. Each of them will lie on a different set of trochoid arcs, shifted laterally according to how far the spoke lies from the lowest point of its trajectory. Like this:

I’ve marked the visual centre of the wheel, for reference. Notice how the partial shadow arcs formed by the lower spokes seem to point below the centre, while the arcs of the higher spokes point above the centre. It happens because none of the spokes are radial, and because the centre of the wheel is never stationary, but shifts horizontally as the spokes sketch out their curved arcs.

So. Although I was baffled by the photograph when I first saw it, I began to get an inkling of what was going on as I made the first trochoid sketches, and was then pleasantly surprised by how things began to fall neatly into place as I added more detail. I’m hoping you’re as pleasantly surprised as I am.

# Why Do Mirrors Reverse Left And Right But Not Up And Down?

Reflection: A transformation under which each point in a shape appears at an equal distance on the opposite side of a given line—the line of reflection.

It’s not often I have occasion to shout at the television, but a recent episode of the BBC’s long-running television series QI precipitated just such an outburst. The cause of my vexation was their answer to the question that forms the title of this post. The offending episode was the R Series: Reflections, and the explanation was an excellent approximation to gibberish, involving as it did some business about “The mirror doesn’t flip things around; we flip things around,” intoned by Sandi Toksvig as she stood in front of a mirror and fiddled with a bit of card with the word BOSS written on it (see above). To be fair to Toksvig, it probably wasn’t her idea, and she did manage to deliver the entire farrago while wearing the sort of anxious expression people wear when they’re not entirely convinced by their own argument.

The answer to the question is really that it is ill-posed. Mirrors actually don’t reverse left and right, for the simple reason that mirrors have no way of telling left from right. They have no left-right asymmetry, in other words. The only asymmetry they do possess is in the plane of reflection—stuff in front of the mirror is the real world; stuff “behind” the mirror is the reflected world.

That’s what’s being described in the definition at the head of this post, which refers to a two-dimensional reflection, like this:

Here we have a “line of reflection”, corresponding to the mirror; a letter “B” in front of the mirror, representing the real world; and a reflected letter “B” behind the mirror. Every point on the reflected “B” is the same distance from the mirror as the corresponding point on the original “B”. So because the spine of the “B” is the farthest part from the mirror, its reflection also lies farthest from the mirror. Conversely, the curved parts of the letter lie closer to the mirror on both sides. And it’s that preservation of “near” and “far” on either side of the mirror plane which causes the reflection to be a reversed image of the original. If we travel from one side of the mirror to the other, we encounter in turn spine-curves-mirror-curves-spine. So, actually (and pace Toksvig), the mirror very much does “flip things around”. Indeed, many introductory geometry texts gloss the word “reflection” as “a flip”.
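In code, the transformation in that definition is nothing more than a sign flip in the coordinate perpendicular to the mirror line. A minimal sketch, with a crude letter "B" standing in for the one in the diagram:

```python
def reflect(points, mirror_x=0.0):
    """Reflect each (x, y) point across the vertical line x = mirror_x."""
    return [(2 * mirror_x - x, y) for (x, y) in points]

# A crude letter "B" to the left of a mirror at x = 0:
# spine at x = -2 (far from the mirror), curves at x = -1 (near)
letter_b = [(-2, 0), (-2, 1), (-2, 2), (-1, 0.5), (-1, 1.5)]
image = reflect(letter_b)

# Near and far are preserved: each image point sits at the same distance
# from the mirror as its original, on the opposite side; heights are untouched.
for (x, y), (xi, yi) in zip(letter_b, image):
    assert xi == -x and yi == y
```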

The same thing happens when a three-dimensional person stands in front of a real mirror:

The reflected image’s left and right (and head and feet) are pointing in the same direction as the real person’s. What has been flipped in the mirror image is the direction in which the nose and toes are pointing. So the mirror has reversed front and back, not left and right. If you lie down with your feet pointing at the mirror, the reflection will also have its feet pointing at the mirror—so on this occasion the mirror leaves your front and back, and left and right, in the same positions, but reverses you top to bottom. Only if you stand sideways on to the mirror does it truly reverse your left and right—but that’s because you’ve chosen to place one side of your body close to the mirror and the other far from it, not because the mirror has some magical ability to tell left from right.

So why do we always think the mirror image has reversed left and right, no matter how we orientate ourselves before the mirror? The answer, I think, lies in the single plane of symmetry in our own bodies. Our fronts are very different from our backs, our heads are very different from our feet, but our left side is very similar to our right side. So it’s very difficult for us to see the mirror reflection as having reversed front and back—instead, we see it as another person who has turned around to face us. In which case, their right hand now moves when we move our left hand, and vice versa. No matter what the orientation of the reflection, we always interpret it as a left-right reversed person, because a head-foot reversed person or a front-back reversed person is harder to conceptualize.

Remove the left-right symmetry, and we stop talking about mirrors reversing left and right.

This barber’s pole lacks a clear left-right distinction. So instead, we find ourselves saying that it “spirals the opposite way”. We’re still, apparently, unable to discern that what has been reversed is the front and back, but now we can’t blame a left-right switch either.

So if anyone ever asks you the title question, permit yourself the slightest of headshakes and the faintest of smiles, and say: “But mirrors don’t reverse left and right; they preserve near and far.”

# Strange Moon

A couple of weeks ago, I reviewed three books about the activities of 161 (Special Duties) Squadron, RAF, during the Second World War. For this post, I want to talk specifically about the cover of Hugh Verity’s memoir and personal history of 161 Squadron, We Landed By Moonlight (Revised Edition), published in 2000 by Crécy. It’s a marvellous book, as a source of both anecdote and historical record, and we should all be grateful to Crécy for keeping it in print—but it’s an odd cover.

The first thing that struck me about it is that the Westland Lysander on the cover is sporting South-East Asia Command roundels and flashes, putting it a very long way from 161 Squadron’s base in the south of England. The image in fact comes from this photograph of Lysander V9289, of 357 (Special Duties) Squadron, RAF, operating in Burma. Right kind of aircraft, right kind of duties, wrong continent. But that’s fair enough, given that 161 Sq. Lysanders flew almost entirely at night on secret missions, so tended to go unphotographed unless they actually crashed.

The Lysander image has been composited with a fine full moon, to produce an atmospheric cover image. (In fact, there are two versions of this cover from Crécy, both using the same Lysander and moon images—you can find the other in my link from the book title, above.) And it’s that moon image that really got me puzzling, and inspired this post. Here it is, in a larger and more contrasty version:

It definitely doesn’t look like our own familiar full moon:

But it doesn’t look like a random painting, either. And I found it naggingly familiar. At first, I wondered if it was a photograph of some other moon in our solar system, but then I began to recognize significant features. The curve of Mare Nectaris below the three linked blotches of Mare Serenitatis, Mare Tranquillitatis and Mare Fecunditatis settled it—it is a photograph of our Moon.

But there are three things wrong with it.

One is that it is mirror-reversed. The real Moon looks like this:

The second is that it is pretty much lying on its side. By my estimate, the north-south axis is tilted at about twenty degrees to the horizontal:

Now, you can see the Moon in this orientation, if you catch it just after moonrise in the tropics, when it’s moving almost vertically towards the zenith. But the farther north or south you go, the more tilted is the Moon’s trajectory as it rises, and the closer to vertical is its axis. Even when the Moon is as far into the northern sky as it ever gets, it can never be seen in that orientation anywhere in France, which was the 161 Sq. stamping ground.

But that’s a nitpick, really, because the striking thing about this view is that you can never see it from Earth. The paired dark blotches about halfway towards the upper rim of the Moon, above, are Mare Marginis and Mare Smythii, which (as the former name implies) sit right on the edge of the Moon’s disc when seen from Earth. The photograph used on Crécy’s cover has actually been taken by a spacecraft, somewhere over about 60ºE lunar longitude.

Once I’d figured all this out, I realized why the image looked naggingly familiar. This view is a classic, because it’s what successive Apollo astronauts saw as they departed the Moon towards Earth. At the conclusion of their lunar mission, they fired their main engine as they orbited over the far side of the Moon, and then came looping around the eastern hemisphere, pulling away in a long orbit back towards Earth. The same view was photographed several times, by several relieved astronauts, but I think the Moon on Crécy’s cover is this one:

It’s one of a series of departure photographs taken by Apollo 11 on 22 July 1969. If Verity landed his Lysander by that moonlight, he was a very long way from home!

Note: Crécy’s current edition of this classic book features the Lysander from the Shuttleworth Collection, which is painted in early 161 Sq. markings, landing in a field of poppies.

# Harvest Moon

In the northern hemisphere, the Harvest Moon falls on 1 October in 2020, which is what provokes this post. The Harvest Moon is defined as the full moon that occurs closest to the autumnal equinox, which fell on 22 September (in the northern hemisphere, in 2020). You can find many lists of “names of the full moons” on-line (there’s a rather marvellous compilation of lists here), but the Harvest Moon is the only one that’s defined by the date of the equinox, rather than the month in which it falls—about three times in four it occurs in September, but the rest of the time (as on this occasion) it drifts into October.

The other thing about the Harvest Moon is that it has real astronomical and historical significance. Like many other full moon names, it obviously derives from what’s going on in the seasonal cycle at the time it appears—but there’s a deeper significance, which is what I want to write about here.

To understand what’s special about the full moon around the autumnal equinox, and its relevance to harvesting crops, we need to talk a bit about the orbit of the moon.

As is well known, the Earth’s rotation axis is inclined to the plane of its orbit around the sun, by about 23½º. So the Earth’s rotation and its orbit define two planes, tilted relative to each other—the celestial equator, which is the extension of the Earth’s equatorial plane; and the ecliptic, which is the plane of the Earth’s orbit. So from the vantage point of the Earth, the sun moves around the sky along the ecliptic plane, from west to east, completing one revolution per year. Like this:

The two points at which the celestial equator and the ecliptic intersect have names with complicated astrological origins. The point at which the sun crosses the celestial equator heading north is called the First Point of Aries. The point opposite that is the First Point of Libra. Both are symbolized by the zodiacal symbols for their corresponding constellations. These are the locations of the sun at the times of the equinoxes—it crosses the First Point of Aries at the March equinox, spends six months bringing summer to the northern hemisphere, and then crosses the First Point of Libra at the September equinox, on its way south for the southern hemisphere summer.

The moon orbits close to the ecliptic plane. For the purposes of this discussion, we can treat it as travelling in the ecliptic plane, and come back to the slight deviation later. So the moon moves (roughly) along the ecliptic from west to east, taking a month to make a full revolution. It also passes through the First Points of Aries and Libra, spending two weeks over the northern hemisphere, and two weeks over the south.

The moon makes one complete circuit of the celestial sphere every 27.3 days. If it moved at a constant rate along the celestial equator, it would therefore be about 13º farther east every day. The Earth would need to rotate correspondingly farther between successive moonrises and moonsets, making each moonrise and moonset occur about fifty minutes later than its predecessor. And that’s true on average for the real moon. But the fact that the moon’s orbit follows the ecliptic, and not the equator, introduces a subtle variation.
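That average figure is easy to check from the two angular rates involved; a few lines of Python, using standard values for the sidereal day and the sidereal month:

```python
# Average delay between successive moonrises, from the two angular rates
sidereal_day_min = 1436.07   # Earth's rotation period relative to the stars, minutes
sidereal_month_day = 27.32   # moon's orbital period relative to the stars, days

earth_rate = 360 / sidereal_day_min             # degrees per minute
moon_rate = 360 / (sidereal_month_day * 1440)   # degrees per minute

# Time for the rotating Earth to "lap" the eastward-moving moon:
lunar_day = 360 / (earth_rate - moon_rate)   # minutes
delay = lunar_day - 1440                     # minutes beyond 24 hours
print(round(delay))   # ~50
```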

Here’s what happens to successive moonrises at 50°N latitude, when the moon is passing through the First Point of Aries.

Its 13° displacement along the ecliptic has a northward component in this part of its orbit, which means that it sits less far below the horizon on successive nights than it would do if it were moving parallel to the equator. So the Earth has to rotate less far between successive moonrises, and the moon rises only slightly later each night. (The effect becomes more pronounced at higher latitudes, and less so at lower latitudes.)

But at moonset, the northward movement at the First Point of Aries serves to lift the moon farther above the horizon than it would otherwise be. So successive moonsets show longer delays than the average 50 minutes when the moon is in this part of its orbit.

The situation is reversed two weeks later, as the moon passes through the First Point of Libra. Now each successive moonrise at northern latitudes is delayed more than 50 minutes, like this:

And it should come as no surprise that the delay between successive moonsets is correspondingly shortened at this point in the moon’s orbit.
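This geometry can be captured in a few lines of standard spherical astronomy. The sketch below is my own toy model, not the calculation behind the charts: it puts a simplified moon exactly on the ecliptic, moving a uniform 13.2° per day, converts its ecliptic longitude to right ascension and declination, and works out how much extra the sky has to turn between one rising and the next:

```python
from math import radians, degrees, sin, cos, tan, asin, acos, atan2

OBLIQUITY = radians(23.44)   # angle between ecliptic and celestial equator
LATITUDE = radians(50.0)     # observer at 50 degrees north
STEP = 13.2                  # moon's assumed daily motion along the ecliptic, degrees
SIDEREAL_RATE = 360.9856     # degrees the sky turns per 24-hour solar day

def rising_angle(lon_deg):
    """Sidereal 'clock angle' (right ascension minus hour angle of rising)
    at which a body on the ecliptic at longitude lon_deg crosses the
    eastern horizon."""
    lam = radians(lon_deg)
    dec = asin(sin(OBLIQUITY) * sin(lam))                     # declination
    ra = degrees(atan2(cos(OBLIQUITY) * sin(lam), cos(lam)))  # right ascension
    ha = degrees(acos(-tan(LATITUDE) * tan(dec)))             # hour angle at rising
    return ra - ha

def moonrise_delay(lon_deg):
    """Minutes by which moonrise slips later than the previous night's,
    for a moon centred on ecliptic longitude lon_deg."""
    dtheta = (rising_angle(lon_deg + STEP / 2)
              - rising_angle(lon_deg - STEP / 2)) % 360
    extra_turn = dtheta - (SIDEREAL_RATE - 360)   # turning beyond one ordinary day
    return extra_turn / SIDEREAL_RATE * 24 * 60

print(f"delay near the First Point of Aries: {moonrise_delay(0):.0f} min")
print(f"delay near the First Point of Libra: {moonrise_delay(180):.0f} min")
```

At 50°N this simplified model gives a nightly delay of roughly twenty minutes near the First Point of Aries, and over an hour near the First Point of Libra—the same asymmetry as the charts, without any of the real moon’s complications. Increasing LATITUDE widens the spread, as noted above.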

So although it all averages out over the course of a month, there’s a regular variation in the delay between successive moonrises (and moonsets) during that period. Here’s a chart of the delays for a representative period (September and October 2018) at 50°N; I’ve marked the passages through the First Point of Aries:

So this happens every month. Why is it particularly relevant only once a year, on the Harvest Moon? Because full moons occur only when the sun is on the opposite side of the sky from the moon. Which means the only time we see a full moon passing through the First Point of Aries is when the sun is in the vicinity of the First Point of Libra—which, you’ll recall from the top of this post, happens during the September equinox. So in the northern hemisphere, at the time of the autumnal equinox, the full moon rises at almost the same time for several successive evenings, but sets more than an hour later each morning. And if you’re bringing in the harvest (as you do in temperate latitudes in the autumn), and you don’t have access to artificial outdoor illumination, then that’s hugely advantageous. For several nights, the full moon rises before twilight fades, and sets only when the morning sky is already bright. You can work around the clock, day and night, to get the crops in, in other words. Which is what’s going on in Mason’s painting at the head of this post.

Does the southern hemisphere have a Harvest Moon? It surely does. All the geometry flips over, so the First Point of Libra assumes the role that the First Point of Aries does in the northern hemisphere. Like this:

Full moons occur at this point when the sun is passing through the First Point of Aries—the March equinox, which is the autumnal equinox for the southern hemisphere. Isn’t that neat? (Well, I think it’s neat.)

There are a couple of subtleties, which mean not every Harvest Moon is the same. The first complicating factor is that the moon’s orbit does not lie perfectly in the ecliptic, but is inclined to it at about 5°. The inclined orbit of the moon twists continuously in the ecliptic plane, completing one rotation every 18.6 years, under the influence of the sun’s gravity. This means that the tilt of the moon’s orbit sometimes subtracts from the angle between the moon’s path and the celestial equator, and sometimes adds to it, like this:

So we have “seasons” when the Harvest Moon is delayed even less than average on successive nights, and seasons (nine years later) when it is delayed more than average.
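To put rough numbers on that (the 5.14° inclination figure is an assumed standard value, not from the text above): at the extremes of the 18.6-year cycle, the moon’s path meets the celestial equator at either the sum or the difference of the two tilts:

```python
ECLIPTIC_TILT = 23.44   # angle between ecliptic and celestial equator, degrees
MOON_TILT = 5.14        # inclination of the moon's orbit to the ecliptic, degrees

# Over the 18.6-year nodal cycle, the angle between the moon's path and the
# celestial equator swings between these two extremes:
shallow = ECLIPTIC_TILT - MOON_TILT   # ~18.3 degrees: weaker Harvest Moon effect
steep = ECLIPTIC_TILT + MOON_TILT     # ~28.6 degrees: stronger Harvest Moon effect

print(f"moon's path meets the equator at between {shallow:.1f} and {steep:.1f} degrees")
```

The steeper the crossing, the bigger the northward component of the moon’s nightly displacement at the First Point of Aries, and so the shorter the Harvest Moon delays.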

The other complicating factor is that the moon doesn’t orbit in a perfect circle—it moves in an ellipse, crossing the sky more slowly when it’s farthest from the Earth (at apogee), and more quickly when it’s closest (at perigee). This ellipse twists around in the plane of the moon’s orbit with a rotation period of about 8.8 years. When the apogee aligns with the First Point of Aries, the delay between successive Harvest Moon moonrises is shortened even further. Conversely, a few years later, the perigee will prolong the delay between successive moonrises.
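Kepler’s second law gives a feel for the size of that speed difference. At the two ends of the ellipse the moon’s angular speed varies as the inverse square of its distance, so (using typical apogee and perigee distances, which are my assumption rather than figures from the text):

```python
APOGEE_KM = 405_500    # typical Earth-moon distance at apogee
PERIGEE_KM = 363_300   # typical Earth-moon distance at perigee

# Angular momentum is conserved, and at the two apsides the moon's motion is
# perpendicular to the Earth-moon line, so angular speed goes as 1/r^2:
speed_ratio = (APOGEE_KM / PERIGEE_KM) ** 2

print(f"the moon crosses the sky about {speed_ratio:.2f}x faster at perigee")
```

That’s a swing of around 25 per cent in the moon’s daily motion across the sky between the two extremes.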

It so happens that apogee is passing through the First Point of Aries in 2020, and we can see a noticeable effect on moonrises and moonsets. Here’s the delay graph for September and October 2020, again at 50°N.

The slow movement of the moon at the First Point of Aries shortens the delay between successive moonrises, and between successive moonsets. Conversely, the fast movement at the First Point of Libra lengthens these same time periods. So the delay graphs are dented downwards at Aries, and shoved upwards at Libra—you can see this most clearly in the flattened tops on the moonset curve.

There are few places left in the world where any of this is relevant to the life of farmers, of course. But it’s a fine astronomical curiosity, I hope you’ll agree.

Note: Moonrise and moonset times used in the graphs were taken from the calculator at timeanddate.com.