Most people know why the sun looks orange-yellow when it’s rising or setting. Air preferentially scatters shorter (bluer) wavelengths of light—so the more air there is between your eye and the sun, the more short wavelengths are scattered out of the line of sight, leaving yellow/orange/red as the predominant colours reaching your eye. There’s about 38 times more air between your eye and the sun when it’s at the horizon, compared to the zenith, so it’s not surprising it looks progressively yellower the lower it is in the sky.
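That "38 times more air" figure is easy to check numerically. The sketch below uses the Kasten and Young (1989) empirical airmass formula, which is my choice of approximation rather than anything from the post; the naive 1/cos(z) estimate blows up at the horizon, but this fit stays finite there:

```python
import math

def airmass_kasten_young(zenith_angle_deg):
    """Relative airmass along the line of sight, as a function of the
    sun's zenith angle in degrees, using the Kasten & Young (1989)
    empirical fit (well-behaved all the way down to the horizon)."""
    z = zenith_angle_deg
    return 1.0 / (math.cos(math.radians(z))
                  + 0.50572 * (96.07995 - z) ** -1.6364)

print(airmass_kasten_young(0))   # sun overhead: ~1 airmass
print(airmass_kasten_young(90))  # sun on the horizon: ~38 airmasses
```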
But what makes it change shape? The setting sun in the photograph above is only 88% as high as it is wide, and ratios down to 80% are commonly seen.
The first thing to know about the sun in that photograph is that it isn’t really there. In fact, whenever you look at the sun on the horizon, it isn’t really there. The sun is actually below the horizon, and what you’re seeing is essentially a mirage, generated by light rays curving towards your eye from beyond the geometric horizon.
Light travels slightly more slowly in denser air. The atmosphere is denser near the ground than it is at altitude. So light passing through the atmosphere is refracted—it follows a slightly curved path, concave to the denser layers of air. That means it curves around the convexity of the Earth, bringing objects into view that would be hidden below the horizon in the absence of air.
This trick of the light means that, in order to calculate the distance to the visible horizon, you need to pretend that the Earth is a little bigger than it actually is. Calculation and observation suggest that multiplying the Earth’s actual radius by 7/6 gives the apparent radius produced by refraction. It also means that when objects in the sky are just beyond the curve of the geometric horizon, they are still visible, lifted above the apparent horizon by atmospheric refraction. This applies to the moon (and the stars) as well as the sun, and the tables of rising and setting times you find in newspapers and online contain several minutes’ allowance for the effects of refraction.
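The effect of that 7/6 factor on the visible horizon can be sketched with the standard small-height approximation d ≈ √(2Rh); the 1.7 m eye height below is an assumed figure for illustration, not something from the post:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def horizon_distance_m(observer_height_m, radius_m=EARTH_RADIUS_M):
    """Distance to the horizon for a small observer height h,
    using the approximation d = sqrt(2 * R * h)."""
    return math.sqrt(2 * radius_m * observer_height_m)

h = 1.7  # assumed eye height of a standing observer, metres
geometric = horizon_distance_m(h)
refracted = horizon_distance_m(h, radius_m=EARTH_RADIUS_M * 7 / 6)
print(f"geometric horizon: {geometric / 1000:.1f} km")  # ~4.7 km
print(f"refracted horizon: {refracted / 1000:.1f} km")  # ~5.0 km
```

Refraction pushes the horizon out by the fixed factor √(7/6), about 8%, whatever the observer's height.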
So, going back to the setting sun in the photo, the (approximate) real position of the sun is marked with the white circle below:
Atmospheric refraction is lifting the lower rim of the sun into view above the horizon by an angular distance that is typically around 7/10ths of a degree. But light from the upper rim of the sun takes a trajectory with a slightly different slope through the atmosphere, and its image is lifted a little less—in my sketch, by about a twentieth of a degree less. But that slight angular difference amounts to a tenth of the apparent angular diameter of the sun, and creates the apparent vertical flattening of its disc.
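The arithmetic behind the flattening is simple enough to check. The numbers below are the ones in the sketch (lower rim lifted about 0.70°, upper rim about a twentieth of a degree less), together with the sun's roughly half-degree apparent diameter:

```python
SUN_DIAMETER_DEG = 0.5   # apparent angular diameter of the sun
lift_lower_rim = 0.70    # refractive lift at the horizon, degrees (typical)
lift_upper_rim = 0.65    # half a degree higher, the lift is a little less

# the lower rim is raised more than the upper rim, squashing the disc
apparent_height = SUN_DIAMETER_DEG - (lift_lower_rim - lift_upper_rim)
ratio = apparent_height / SUN_DIAMETER_DEG
print(f"height/width ratio: {ratio:.0%}")  # 90%, close to the photo's 88%
```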
If you watch a sunset or moonset from the International Space Station, you’ll see even more dramatic flattening, because the light rays from the top and bottom edges of the disc have dipped into the atmosphere, producing the amount of flattening we’re used to seeing on the Earth’s surface, and then they’ve come back out again, experiencing a second episode of flattening. Here’s the setting moon photographed from orbit by astronaut Don Pettit:
That’s a lot of flattening. There’s actually more than just a double dose of standard atmospheric refraction required to account for that very asymmetrical appearance.
If we go back to the original photo at the head of this post, the horizon is about 10 kilometres away. That means that light rays from the top and bottom edges of the solar disc are crossing the horizon only about 100 metres apart, vertically—although their trajectories are slightly different, they’re sampling very similar parts of the atmosphere. But in the ISS photo, the horizon (out of frame just below that flattened lunar disc) is about 2000 kilometres away. Light rays from the top and bottom edges of the lunar disc are crossing the horizon ten or twenty kilometres apart, vertically. So they’re sampling completely different parts of the atmosphere, with very different density gradients, and it’s no wonder the resulting image of the moon is very strongly distorted.
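Those vertical separations follow from simple geometry: two rays arriving half a degree apart (the sun's or moon's angular diameter) diverge by roughly d·tan(0.5°) over a horizon distance d. A quick sketch of both cases:

```python
import math

def vertical_separation_m(horizon_distance_m, angle_deg=0.5):
    """Vertical spread, at the horizon, of two rays arriving
    angle_deg apart (the solar/lunar angular diameter)."""
    return horizon_distance_m * math.tan(math.radians(angle_deg))

print(vertical_separation_m(10_000))     # ~87 m for a ground-level observer
print(vertical_separation_m(2_000_000))  # ~17 km for an observer on the ISS
```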
You don’t need to go into space to get that sort of dramatic distortion, however. If the lower few hundred metres of atmosphere contains layers with non-uniform density gradients (alternating bands of hotter and cooler air), then the trajectories of light rays coming from the sun will be deflected in a non-uniform way, and the neat ellipse of the sunset photo above can turn into something like this:
Notice that a single sunspot appears three times in the centre of that solar disc. Light from the sunspot is finding three different curved routes through the atmosphere that all end at the observer’s eye, coming from very slightly different directions.
There’s a critical density gradient (on Earth, corresponding to a temperature inversion of 0.11ºC/m) at which the refracted curvature of a horizontal light ray exactly matches the curvature of the Earth. In an atmosphere with this density structure, light could travel endlessly around the planet. For a while, back in the 1970s, it was thought that the planet Venus might have that sort of atmosphere. In 1975 the science fiction author John Varley wrote a short story, “In The Bowl”, set on Venus, and did a good job of describing what that might look like. (To make sense of the following, you also need to know that Venus rotates in the opposite direction to Earth, and very slowly):
I don’t like standing at the bottom of a bowl a thousand kilometers wide. That’s what you see. No matter how high you climb or how far you go, you’re still standing in the bottom of that bowl. […]
Then there’s the sun. When I was there it was nighttime, which means that the sun was a squashed ellipse hanging just above the horizon in the east, where it had set weeks and weeks ago. Don’t ask me to explain it. All I know is that the sun never sets on Venus. Never, no matter where you are. It just gets flatter and flatter and wider and wider until it oozes around to the north or south, depending on where you are, becoming a flat, bright line of light until it begins pulling itself back together in the west, where it’s going to rise in a few weeks.
But let’s go back to Earth again, and that critical value of refraction at 0.11ºC/m—this implies that, if the temperature gradient is even steeper, rising light rays can be so strongly curved by refraction that they’ll come back down again. This happens readily enough in polar regions, where cold air in contact with ice is overlain by warmer air aloft—at some critical altitude, the temperature can jump by several degrees Celsius in the space of just a few metres, creating an abrupt fall in air density.
So a polar temperature inversion can create a sort of reflective roof, allowing light to reach the observer’s eye from objects well beyond the geometric horizon. In fact, if the temperature inversion is widespread, and the terrain is flat, light rays can “bounce” several times around the curvature of the Earth on their way to the observer, bringing in distorted images from hundreds of kilometres away. If the sun moves into alignment with this “light pipe” then it can become visible despite being four degrees or more below the geometric horizon.
This is called the Novaya Zemlya Effect, because it was first recorded while members of Willem Barentsz’s 1596/7 expedition were overwintering on the island of Novaya Zemlya in the Russian Arctic. Towards the end of the long polar night, on 24 January 1597, Gerrit de Veer and two other men saw a distorted image of the sun appear on the horizon two weeks before the polar night was due to end, at a time when the sun was still, geometrically, about five degrees below the horizon. This is still one of the most extreme examples of the effect ever observed. Calculations by van der Werf et al. (1.1 MB pdf) in 2003 suggest that it could have involved no fewer than five successive bounces from an inversion layer at about 80 metres altitude, over a distance of 400 km.
We have a drawing of what the sun looks like under these circumstances, courtesy of Fridtjof Nansen, who saw an example of the Novaya Zemlya Effect during his Fram expedition to the Arctic:
Those multiple bounces along the “light pipe” completely scramble the image of the solar disc, until what remains is a stack of bright horizontal bands, all of roughly equal width.
You can get a nice impression of what it must have looked like from this beautiful video of a miraged sun, filmed for four minutes after the expected sunset time, in California:
The blue appearance of veins under unpigmented skin is a commonplace observation, to the extent that it has become standard coding in anatomy diagrams to colour arteries red and veins blue:
But that pale blue colour is actually a bit of a puzzle.
Blood gets its colour from the haemoglobin contained in the red blood cells. The haemoglobin changes colour depending on whether it’s bound to oxygen or not. So first of all we want to know what colour venous blood is.
I’m indebted to Scott Prahl, of the Oregon Medical Laser Center, for permission to reproduce this graph of the absorption spectra of haemoglobin bound to oxygen (HbO2) and haemoglobin without attached oxygen (Hb):
The lower the graph dips, the less the absorption, and so the more light at that wavelength the haemoglobin will reflect. And notice there’s a logarithmic vertical scale—those dips mean a big proportional change in light absorption. Below, I’ve edited the graph to show just the visible wavelengths, stretching from 400nm (violet) on the left to 700nm (red) on the right:
HbO2 is absorbing between a hundred and a thousand times more short-wavelength blue than long-wavelength red light. So it’s going to be strongly red in colour—hence the lurid colour of the oxygenated blood that comes out of a cut artery. Without oxygen, Hb absorbs more red light than HbO2 by a factor of ten or so. But it still absorbs ten to a hundred times more light at the blue end of the spectrum than the red—so venous blood, which contains a mix of oxygenated and deoxygenated haemoglobin, is also going to be red in colour, albeit a darker and less saturated red than arterial blood. And we confirm that every time we take a venous blood sample—what flows into the syringe is anything from dark red to deep purple-red; certainly not blue.
So it’s not venous blood that causes the blue colour. Is it the vein wall?
Veins are thin-walled structures. If removed from the body and inflated with saline, they appear translucent pink. When they’re filled with blood, the blood inside shows through, rendered a little paler and pinker by the wall of the vein. Here is a view of a major artery and vein, exposed during surgery (SMV = superior mesenteric vein; SMA = superior mesenteric artery):
A vein full of blood evidently doesn’t look blue, either.
So it must be something to do with the overlying skin and subcutaneous tissues. Kienle et al. (116 KB pdf) investigated this in the journal Applied Optics, back in 1996. To slightly simplify their argument, what they found was as follows.
Unpigmented skin and tissues scatter both blue and red wavelengths back to the eye, with a preponderance of red that makes flesh look pink. But, crucially, red light travels farther into the tissue before bouncing back out again. The blue wavelengths show us the very superficial layers; the red wavelengths probe deeper structures.
And we know that veins, with their translucent walls, bounce back a small amount of light from the dark blood inside, but with red still predominant.

But what happens if we put a vein inside the tissues, positioning it deeper than the blue wavelengths typically penetrate, but within range of the red? The superficial tissues scatter away the blue light before it reaches the vein. The vein absorbs more red light than the tissues would have done. Blue predominates!

And that’s why most visible veins look blue. If they lie any deeper in the tissues, neither red nor blue light reaches them, and they are invisible. And it’s unusual for them to be so superficially placed that blue wavelengths can reach them—that requires the sort of delicate, thin skin we sometimes find in premature babies and the very old. Those of us whose job involves inserting IV cannulae into veins know from experience that a red-looking vein is thin-walled and lying very close to the surface.
So those blue veins on the back of your hand represent a remarkable conspiracy between optics and anatomy.
Last week, the Boon Companion and I were sipping sundowner cocktails in the British Virgin Islands, leaning back in our chairs, and cloud-watching. Long streets of fair-weather cumulus had been strung out over the Caribbean since midday. Now, half an hour before sunset, the cumulus was growing—boiling upwards to form towering cumulus congestus, which were draping little dark banners of rain here and there across the sunlit sea. At the end of a warm, still day, there was clearly some strong convection building out there on the water.
Quite suddenly, a little finger of grey cloud poked out of the dark base of one of the clouds. I watched it for a minute, as it grew thinner and longer, eventually stretching far enough to catch the rays of the setting sun. It was a funnel cloud. Which meant … Yes, on the sea horizon below the cloud there was a little swirling dark smudge—a spray ring. We were looking at a waterspout, the first I’ve ever seen.
As we watched, the funnel cloud sent down a thin dark streamer that speared into the centre of the spray ring—the waterspout was fully evolved. It stayed that way for about fifteen minutes, idling around gently beneath its parent cloud and slowly moving along the horizon, until it suddenly decayed away—the funnel withdrew into the cloud, and the spray ring collapsed. Show over.
There are actually two different kinds of waterspout. If a tornado moves out over water, it creates a tornadic waterspout. These are big fierce beasts, with all the destructive power of a tornado. They evolve in the violent updrafts of supercell thunderstorms, and they form by dropping a thick funnel cloud downwards to the surface.
But what we saw in the British Virgin Islands was a fair weather waterspout. They’re generated by air rising into evolving cumulus clouds, and they progress from the surface upwards.
In their early stages, a swirl of wind forms around warm air rising from the water surface, just like a dust-devil spinning through a hot car-park. The wind swirl creates a characteristic disc of rough water with a calm centre—this is the “dark spot phase”, which is best seen from the air. A few spiral lanes of disturbed water then converge on the dark spot, and spray begins to rise into the air as the surface winds intensify. Above this visible spray ring, a column of spiralling, rising winds connects to the base of the cloud overhead. As the spiral tightens and intensifies, the pressure at its core drops by several millibars, and water vapour starts to condense within it—the cloud starts to extend downwards in a visible funnel cloud. In a fully developed waterspout, the funnel reaches all the way down to the centre of the spray ring, by which time it has usually developed a hollow centre—water droplets formed in the low-pressure core are flung outwards to orbit in the zone of highest winds, a few metres from the core.
Ocean Today have put together a nice two-and-a-half-minute video compilation that illustrates the evolution of a fair weather waterspout. It’s full of impressive images, and I recommend it. Here’s a link to the embedding version of the video, which seems reluctant to embed on this website:
Fair weather waterspouts are typically short-lived, persisting for about twenty minutes; and they often occur in groups of two or more when conditions are favourable. (Indeed, we spotted a couple of other tentative funnels appearing and disappearing at the cloud base while we watched our waterspout noodling around the horizon.) The Florida Keys see more than a hundred a month between May and September, when the surface water temperature is over 25ºC.
They can be relatively benign—people have driven speedboats through them. But with wind speeds of 65 m/s (130 knots), they can also be very dangerous—large vessels have been capsized or dismasted; small vessels have been swamped by the spray ring; and in 1993 a windsurfer was drowned on the Chicago waterfront by a Lake Michigan waterspout.
The updraft is fierce—a waterspout that made landfall on Matecumbe Key, Florida, reportedly lifted a two-ton Cadillac a few feet into the air before setting it down again. So it seems at least possible that they can lift a significant quantity of water from the sea surface, and there are anecdotes to support this. In his book It’s Raining Frogs And Fishes, Jerry Dennis gives a report of a downpour of salty rain on the island of Martha’s Vineyard on August 19, 1896, a few hours after a waterspout had been seen close to shore—the salt water had presumably been incorporated briefly into the parent cumulus cloud. And in the Annals and Magazine of Natural History (January 1929), E.W. Gudger reported that a waterspout had dissipated near an open boat in the Gulf of Mexico, which was then promptly swamped under a torrent of seawater and fish.
Gudger’s article is entitled “More Rains Of Fishes”—he was primarily interested in sporadic reports of fish falling from the sky, and he felt that waterspouts offered a reasonable route by which aquatic creatures might get up there in the first place.
Surprisingly, falls of fish (and frogs) are quite well documented, both before and after Gudger’s time. For instance, on October 23, 1947, a Canadian fisheries biologist, A.D. Bajkov, was fortuitously on hand to see fish falling from the sky in Marksville, Louisiana:
In the morning of that day, between seven and eight o’clock, fish ranging from two to nine inches in length fell on the trees and in yards […] There were spots on Main Street, in the vicinity of the bank (a half block from the restaurant) averaging one fish per square yard. Automobiles and trucks were running over them. Fish also fell on the roofs of houses.
They were freshwater fish native to local waters, and belonging to the following species: large-mouth black bass (Micropterus salmoides), goggle-eye (Chaenobryttus coronarius), two species of sunfish (Lepomis), several species of minnow and hickory shad (Pomolobus mediocris). The latter species were the most common.
Following Gudger’s example, waterspouts (and their terrestrial cousins, whirlwinds) are nowadays the usual explanation trotted out for such events—the fish or frogs are presumed to have been hoovered up from surface water, to bounce improbably around in the clouds for a while before falling on some unsuspecting town.
However, these anomalous falls tend to be oddly pure samplings, as in Bajkov’s description above, which seemed to involve nothing but fish. I suppose we can imagine a waterspout picking a shoal of surface fish out of the ocean quite cleanly; but what about the freshwater fish in Bajkov’s example? Or the frogs reported elsewhere? Why are such falls apparently never accompanied by mud, gravel, weeds and battered waterfowl, scooped up incidentally? I’ll give the last word on that topic to Charles Fort:
[…] a pond going up would be quite as interesting as a frog coming down. Whirlwinds we read over and over—but where and what whirlwind? It seems to me that anyone who had lost a pond would be heard from.
Here’s the problem: the tropical year, the time it takes the Earth to go through a complete cycle of seasons, is 365.2422 days long (to four-decimal accuracy).
If every calendar year were 365 days long, then the missing 0.2422 days would add up from year to year, each year starting a little earlier relative to the changing seasons. It would take only about 120 years for the calendar to be a month adrift from the seasons.
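That drift rate is easy to check with a couple of lines: at 0.2422 days per year, a full month's slip accumulates in a little over 120 years.

```python
TROPICAL_YEAR = 365.2422
CALENDAR_YEAR = 365

drift_per_year = TROPICAL_YEAR - CALENDAR_YEAR  # 0.2422 days per year
years_for_one_month = 30 / drift_per_year
print(f"{years_for_one_month:.0f} years")       # ~124 years to drift a full month
```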
The Roman Republican calendar had a standard year of just 355 days. Every few years an additional month was added, bringing the year up to 377 or 378 days. With the right frequency (about 11 long years to every 13 standard years), that system could have kept the calendar year aligned with the tropical year on average, though with pretty large excursions from year to year. Trouble was, the decision whether or not to add additional months was driven as much by politics and superstition as it was by astronomical accuracy, and on occasion the Republican calendar drifted as much as four months away from seasonal alignment.
Julius Caesar came to power when the Roman calendar was awry by more than two months. After taking advice from the Greek astronomer Sosigenes of Alexandria, he came up with a plan to get the calendar back into alignment with the seasons, and to keep it that way. First, he decreed that 46 BC should be 445 days long. He then established the familiar pattern of leap years we know today, by creating a regular year of 365 days, and adding an extra day to February quarto quoque anno, “every four years”. That made the average calendar year 365.25 days long, tolerably close to the desired 365.2422.
Caesar was assassinated in 44 BC, and confusion immediately reigned. Roman counting was commonly inclusive—for instance, they used what we would call an eight-day week, from one market day to the next, but they called it a nundinum, derived from nonus, “ninth”. They counted nine days because they included the market day at the beginning and at the end of the week as being part of the same week. So Caesar’s quarto quoque anno was, by the same inclusive logic, implemented as a three-year cycle until 9 BC. At that point his successor, Augustus (having had the necessary arithmetic drawn to his attention), started the correct four-year cycle, and tried to get things back the way Julius had wanted them by skipping the leap years in 5 BC, 1 BC and AD 4 in order to get rid of the effect of the excessive leap years that had accumulated so far.
Julius Caesar’s calendar years of 365.25 days (called Julian years, in his honour) then ticked away steadily from AD 8 until 1582. And the difference of 0.0078 days between the Julian year and the tropical year mounted up steadily, so that by 1582 the calendar had moved more than 12 days out of register with its original seasonal position.
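That 12-day figure comes from accumulating the 0.0078-day annual error over the sixteen centuries or so between Caesar's reform in 45 BC and 1582 (treating the whole span as uniformly Julian, which is close enough, since Augustus had cancelled out the early miscounting):

```python
JULIAN_YEAR = 365.25
TROPICAL_YEAR = 365.2422

error_per_year = JULIAN_YEAR - TROPICAL_YEAR  # 0.0078 days too long
years_elapsed = 1582 + 45 - 1                 # 45 BC to AD 1582 (no year zero)
drift_days = error_per_year * years_elapsed
print(f"{drift_days:.1f} days")               # ~12.7 days of seasonal drift
```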
This was a problem for the Christian Church. The date of Easter was tied to the seasons—specifically the northern spring equinox—but for the purpose of calculating the specific date of Easter each year, the spring equinox was represented by a date, March 21. This date had been correct at the time of the Council of Nicaea in AD 325, when the standard computation of Easter was agreed, but by the sixteenth century the calendar had drifted so that the vernal equinox was occurring on March 11. Martin Luther pointed out that, in 1538, Easter should have been celebrated on March 17, according to the timing of the vernal equinox, but had been pushed to April 21 because of the slippage in the Julian calendar.
Pope Gregory XIII, advised by astronomers Aloysius Lilius and Christopher Clavius, came up with a solution.* Like Caesar’s calendrical intervention previously, there were two parts to the fix—one to get the calendar back into alignment with the seasons, and the other to prevent it drifting again. The details were promulgated in the papal bull Inter gravissimas. To realign the seasons (specifically, to get the vernal equinox back to March 21) ten days were to be omitted from the month of October—October 4, 1582 was to be followed by October 15.
To tighten the approximation of the calendar year to the tropical year, the rule for leap years was subtly tweaked, by dropping three leap years every four centuries. According to the Julian calendar, every century year was a leap year; according to the new Gregorian calendar, only century years exactly divisible by 400 were to be leap years. So 1600 was a leap year, 1700, 1800 and 1900 were not, and 2000 was (you may remember) a leap year. Having just 97 leap years in every four centuries brings the length of a mean calendar year down to 365.2425 days—just 0.0003 days longer than the tropical year.
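The Gregorian rule is compact in code, and both the 97-in-400 count and the mean year length fall straight out of it:

```python
def is_gregorian_leap_year(year):
    """Gregorian rule: leap if divisible by 4, except century years,
    which are leap only when divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# count leap years across one full 400-year cycle
leap_count = sum(is_gregorian_leap_year(y) for y in range(1601, 2001))
mean_year = 365 + leap_count / 400
print(leap_count)  # 97 leap years per 400-year cycle
print(mean_year)   # 365.2425 days
```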
Catholic countries all made the change as instructed, though some lagged a little behind the dates set out in Inter gravissimas. Rulers and governments in Protestant and Orthodox countries were keen not to be seen as toeing the papal line, and so in some places the improvement took a long time to be adopted. The two calendars therefore ran in parallel for several centuries, with writers having to be careful to mark their dates “O.S.” (for “Old Style”) or “N.S.” (for “New Style”).
Great Britain and its colonies eventually made the change in the eighteenth century, by which time eleven days† had to be dropped—the Julian calendar had drifted another day ahead of the Gregorian calendar by observing a leap year in 1700. In Britain, September 2, 1752 was followed by September 14. (This eventually led to the renaming of a butterfly. At that time the April Fritillary was so called because of its early hatching; but the change in the calendar shifted its peak hatching period into May. It’s nowadays called the Pearl-bordered Fritillary.)
Russia held out until 1918, by which time the Julian leap years in 1800 and 1900 meant they had to drop a total of 13 days, which they did between January 31 and February 14. This had the unfortunate consequence that the anniversary of the October Revolution had to be celebrated in November.
Sweden tried a different approach, with a plan to drop all leap years between 1700 and 1740, thereby making the necessary eleven-day shift gradually. Unfortunately, after missing the leap year in 1700, they observed leap years in 1704 and 1708, getting stuck a day ahead of the Julian calendar and ten days behind the Gregorian. At this point they seem to have thrown their hands in the air and declared the whole thing to be a bad idea. They shifted back into synchrony with the Julian calendar by having both a February 29 and a February 30 in 1712.
(Sweden eventually made the Gregorian shift in the conventional manner, by dropping eleven days in February 1753.)
* It’s of course depressingly predictable that the calendars have been called Julian and Gregorian, after the powerful men who legislated the changes, rather than Sosigenean and Lilian, after the clever men who worked out the details.

† There seems no truth to the story that people rioted in Britain because they believed the eleven days were being removed from their lives. There were riots in the election year of 1754, and the recent calendar reform was one of the political hot potatoes of the time; there was also an issue that some people were paying tax and rent for a full quarter, while being denied wages for the missing eleven days.
In my first post on this topic, I discussed some physics and physiology, in an effort to predict and explain the likely consequences for a person exposed to the vacuum of space.
In this part, I’m going to look at the evidence from animal experiments and human accidents.
ANIMAL DATA
The animal decompression experiments were carried out in the 1960s. The original reports used to be available on-line, but they seem now to have vanished. I suspect the organizations involved have concerns about being associated with these experiments, even though they took place half a century ago. They are, however, summarized in the second edition of the Bioastronautics Data Book, and in a 1968 NASA report entitled Rapid (Explosive) Decompression Emergencies in Pressure-Suited Subjects (5.7 MB pdf). The NASA report (NASA CR-1223) provides more physiological detail, and that’s what I’m using for the information below unless otherwise stated.
Boiling and swelling
Dogs were decompressed to ambient pressures of 2 mmHg—effectively a vacuum, and well below the 47 mmHg pressure threshold that marks the Armstrong Limit, the point at which water boils at body temperature. This decompression was accompanied by
… violent evolution of water vapor with swelling of the whole body of dogs.
The Bioastronautics Data Book adds that the body swelled to “perhaps twice as much as its normal volume”.* How fast did that happen? Experiments in which the gaseous composition of the subcutaneous bubbles was monitored showed:
At first there appears to be a rapid conversion of liquid water to the vapor phase which reaches a peak at one minute and continues at a slower rate for several minutes. There is an initial rush of carbon dioxide, nitrogen, and oxygen into the pocket, but carbon dioxide and the nitrogen soon become predominant.
So these whole-body decompression experiments produced different results from the isolated hand experiments I described at the end of Part 1. When the whole body is decompressed there’s a prompt, extensive formation of water vapour bubbles in the subcutaneous tissue—no sign of the delayed onset of swelling (by up to ten minutes) that was found in the hand experiments. Either there’s something unusual about human hands, or something about being connected to an undecompressed body delayed the onset of swelling in the hand experiments.
As soon as the water vapour bubbles form, gases dissolved in the tissues diffuse into them. These experiments were carried out breathing air, so nitrogen is the major gas to enter the bubbles, with carbon dioxide a more minor component. Oxygen levels in the tissues are falling rapidly, so the oxygen content of the bubble gas will initially rise, and then fall off dramatically.
Did the blood boil? The technical term for this is ebullism (Latin “out-boiling”), and some of the animals were monitored for the formation of bubbles in the circulation. (I don’t have access to the original publication, but I presume this was done fluoroscopically—by continuous X-ray screening—given that this was all happening in the 1960s, before ultrasound became a diagnostic tool.)
Yes, the blood did indeed boil:
Almost immediately after decompression to an ambient atmospheric pressure at which ebullism can occur, vapor bubbles form at the entrance of the great veins into the heart, then rapidly progress in a retrograde fashion through the venous system to the capillary level.
This backward propagation of the bubbles may be because there is a slight pressure gradient along the length of the veins, from the capillaries to the heart, but it might also be because small bubbles are forming at the periphery, washing centrally, and growing or grouping into larger bubbles as they go.
Venous return is blocked by this “vascular vapor lock.” This leads to a precipitous fall in cardiac output, a simultaneous reduction of the systemic arterial pressure, and the development of vapor bubbles in the arterial system and in the heart itself, including the coronary arteries.
As soon as the right side of the heart contains a significant volume of gas, it can’t work as a liquid pump any more—it just compresses and recompresses the gas volume it contains. So flow to the lungs and the left side of the heart is prevented, the left side of the heart has nothing to pump, and the arterial blood pressure begins to fall. As soon as it falls below the critical 47 mmHg, the arterial circulation starts to fill with gas, too.
Systemic arterial and venous pressures then approach equilibrium in dogs at 70 mm Hg. At ebullism altitudes, one can expect vapor lock of the heart to result in complete cardiac standstill after 10-15 seconds, with increasing lethality for exposures lasting over 90 seconds. Vapor pockets have been seen in the heart of animals as soon as 1 second after decompression to 3 mm Hg.
That figure of 70 mmHg for the pressure in the blood vessels at cardiac arrest is interesting. Since it’s higher than the vapour pressure of water at body temperature, it presumably reflects the additional presence of nitrogen in the bubbles.
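That vapour pressure of water at body temperature, the 47 mmHg of the Armstrong Limit, can be sketched with the Antoine equation; the coefficients below are the standard ones for liquid water over roughly 1 to 100 ºC, and are my addition, not something from the NASA report:

```python
def water_vapour_pressure_mmhg(temp_c):
    """Saturation vapour pressure of water via the Antoine equation,
    using standard coefficients for liquid water (T in degrees C,
    result in mmHg)."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + temp_c))

print(water_vapour_pressure_mmhg(37))   # ~47 mmHg: the Armstrong Limit
print(water_vapour_pressure_mmhg(100))  # ~760 mmHg: boiling at sea level
```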
While the nitrogen in the tissue bubbles isn’t a serious problem, its presence in the circulation is. Recompression quickly causes the big water vapour bubbles to collapse back to the liquid phase. If the heart is still beating, the disappearance of these large bubbles means it can start working as a pump again. But nitrogen takes some time to redissolve, so it persists as small bubbles that are then showered into the circulation:
Upon recompression, the water vapor returns immediately to liquid form but the gas components remain in the bubble form. When circulation is resumed, these bubbles are ejected as emboli to the lungs and periphery. Cardiac arrhythmias often occur as do focal lesions in the nervous system. These are probably a result of infarct by inert gas bubbles.
In effect, these animals suffered a case of the bends (nitrogen bubble emboli) as they recovered from cardiac arrest.
Survival
Cardiac arrest evidently occurs in two stages. At first, the heart is still beating, but full of gas and therefore ineffective as a pump. This is called Pulseless Electrical Activity. Recompression at this stage will get rid of most of the bubbles in the circulation, and allow the heart to start working properly again. But in the absence of recompression, rapidly falling oxygen levels and bubbles in the coronary arteries will soon cause the heart to stop beating. The Bioastronautics Data Book reports: “Once heart action ceased, death was inevitable, despite attempts at resuscitation.”
In the dog experiments, after 90 seconds of exposure to near-vacuum the animals' hearts were still beating, but extremely slowly—about 10 beats per minute. If recompressed at this stage, all the animals survived, but often with transient neurological deficits, presumably from a shower of nitrogen-bubble emboli as the heart started pumping again. Beyond 120 seconds of vacuum exposure, deaths occurred frequently. Squirrel monkeys showed a similar pattern of survival but, interestingly, chimpanzees survived longer, with some making a delayed return to “baseline function” after 3.5 minutes of vacuum exposure.
Consciousness and brain damage
It’s difficult to judge “useful consciousness” from animal experiments. One chimpanzee with EEG monitoring is reported as having useful consciousness of 11 seconds, which I presume means that EEG activity looked normal for that period. The cortex had shut down by 45 seconds, and the whole brain was electrically silent by 75 seconds.
Both squirrel monkeys and chimpanzees showed a range of deficits afterwards, in the form of changes in their behaviour and performance. This sort of thing clearly isn’t good for your brain.
Lung injury
After an episode of decompression, the lungs are affected by bruising and areas of overdistension, presumably caused by gas trapping. The longer the decompression, and the faster its onset, the greater the lung injury. Another problem is atelectasis—regions in which the lung is completely collapsed. During a decompression episode, the airways fill with water vapour. When recompressed, this water vapour collapses back to the liquid phase, and the lung tissue tends to collapse with it.
HUMAN DATA
Someone’s got to have an accident for us to obtain the sort of data we’re interested in here—even in the high days of test-pilot derring-do, no-one was volunteering to be decompressed to vacuum.
We’ve seen from the animal studies that chimpanzees seem to survive vacuum exposure better than dogs. So are humans more like chimps or dogs? We’ve got some survival data that help answer that question.
Survival
In 1966, Jim LeBlanc was the victim of a depressurization accident in a vacuum chamber while carrying out spacesuit tests. A hose disconnection caused his suit to rapidly (but not explosively) depressurize. He lost consciousness after about 15 seconds. The last thing he remembers is the saliva on his tongue starting to boil. The chamber was completely repressurized within a minute, he recovered consciousness during the repressurization, and was able to stand up almost immediately. Apart from sore ears, he suffered no ill-effects. There is video of the incident:
In 1971, the Soyuz 11 capsule returned to Earth with all three crewmen dead inside: Georgi Dobrovolski, Vladislav Volkov and Viktor Patsayev. A damaged air vent had opened shortly after the separation of the orbital and descent modules. The crew had been exposed to vacuum for 11 minutes, and could not be resuscitated.
From a 2013 article in Space Safety Magazine, we know that the dead men appeared normal apart from facial bruising and evidence of bleeding from the nose and ears. Dobrovolski and Patsayev had apparently tried to unstrap in order to deal with the emergency. Telemetry recorded the subsequent course of events:
At the instant of separation of the orbital and instrument modules, the cosmonauts’ pulse rates varied broadly: from 78-85 in Dobrovolski’s case to 92-106 for Patsayev and 120 for Volkov. A few seconds later, when they first became aware of the leak, their pulse rates shot up dramatically—Dobrovolski’s to 114, Volkov’s to 180—and thereafter the end had been swift. Fifty seconds after the separation of the two modules, Patsayev’s pulse had dropped to 42, indicative of someone suffering oxygen starvation, and by 110 seconds all three men’s hearts had stopped.
In 1982, Kolesari & Kindwall reported a case in which a technician was accidentally decompressed over several minutes to a pressure less than 30 mmHg, and held there for a minute before recompressing. Total time at pressures less than the Armstrong Limit was estimated at between one and three minutes. His heart did not stop, but on removal from the chamber he was bleeding from his lungs, unconscious and showing a type of abnormal posturing associated with brain injury. He remained unconscious for five and a half hours, at which point he was treated in a hyperbaric chamber. He woke up within twenty-four hours and eventually made a complete recovery without neurological problems. In the first two days he showed a spike in a biochemical marker called creatine phosphokinase, which is an indicator of tissue damage—presumably due to a combination of the initial hypoxia and bubble emboli.
From these very sparse data it appears that humans may perform closer to dogs than to chimpanzees, with cardiac arrest intervening somewhere around or just after the two-minute mark. The technician in the 1982 accident made a full neurological recovery, but probably only with the aid of a hyperbaric chamber to improve his tissue oxygenation in the aftermath of the accident.
Breath-holding
The NASA CR-1223 report lists a number of episodes of explosive or rapid decompression (one fatal). In particular, there are notes on three men who were decompressed by about 250 mmHg (from the equivalent of 8,000 ft to 22,000 ft) over two seconds—a relatively mild change by the standards we’re discussing here, which would probably have been relatively uneventful without the attempt at breath-holding.
The first subject was a 42-year-old pilot who inadvertently held his breath at the instant of decompression. He immediately experienced an upper abdominal pain of moderate severity and then lost consciousness. His respirations were noted to be irregular and in the nature of short gasps. Consciousness was regained on reaching ground level about one-half minute after the decompression.
This looks like a fainting episode induced by trying to hold pressure in the lungs that was higher than the pressures in his heart.
A twenty-three-year-old altitude chamber technician is believed to have held his breath at the time of decompression. Almost immediately he noted generalized chest pain and collapsed about twenty seconds later. There were no voluntary respiratory movements. Artificial respiration was begun at once. His skin was cyanotic, cold and clammy. Blood pressure was 126/80 and the pulse was regular at 90 per minute. Voluntary respiration began about two minutes after the rapid decompression but he remained unconscious for about five minutes. On recovering consciousness, he noted weakness of the right arm, numbness of the face, headache and blurred vision. He was nauseated and vomited. The paresis and numbness disappeared rapidly but the clinical picture of shock, an ashen pallor with cold wet skin, persisted for a half hour. His blurred vision cleared about five hours post decompression, the nausea and vomiting lasted six hours and the headache subsided in about eight hours. An x-ray of the chest was normal.
That looks like a near-fatal shower of air emboli that were squeezed into the lung blood vessels by the high pressure in the lungs.
In the third case, a thirty-three-year-old pilot was near the peak of inspiration when decompression started. Initially, he noted the expulsion of air from his nose and mouth. This was followed by a severe left parasternal pain. Within a few seconds he felt weak and giddy and shortly thereafter became unresponsive. His respirations were irregular, shallow and associated with a hacking cough. During the descent to ground level he exhibited several uncoordinated twitching movements of the upper extremities. The pulse was 45 per minute about two minutes after the decompression. He was in shock and had an ashen pallor and cold, clammy skin. The patient was unconscious for about ten minutes. In the meantime the blood pressure and pulse stabilized at 130/76 and 80 per minute, respectively. The patient had a complete quadriplegia, as well as the loss of tactile sensation for the initial twenty minutes following the decompression.[…]
Chest x-rays taken about one hour after the incident showed a pneumomediastinum, a small pneumothorax of the left apex and air in the soft tissues of the neck.
There seems to have been temporary damage from emboli, as in the previous case, but accompanied by air squeezing into the chest cavity, into the tissues around the heart, and then up into the neck.
All of the above is a pretty powerful argument for not attempting to hold your breath when decompressing.
And finally … Joseph Kittinger’s hand
Project Excelsior was a series of three simulated high-altitude bailouts that took place during 1959 and 1960. Captain Joseph Kittinger ascended in an open gondola suspended from a helium balloon, and then returned to the ground by parachute. All his jumps were from altitudes well above the Armstrong Limit, so he wore a pressure suit. He described the three jumps in an article for the December 1960 edition of National Geographic.
On his third ascent, Kittinger noted at an altitude of about 43,000 ft that his right hand didn’t feel normal, and realized that his suit glove had failed to pressurize. As he wrote later:
The prospect of exposing the hand to the near-vacuum of peak altitude causes me some concern. From my previous experiences, I know that the hand will swell, lose most of its circulation, and cause extreme pain. I also know, however, that I can still operate the gondola, since all the controls can be manipulated by the flick of a switch or a nudge of the hand.
I am acutely aware of all the faith, sweat, and work that are riding with me on this mission. I decide to continue the ascent, without notifying ground control of my difficulty.
He and his depressurized hand then rode up to 102,800 ft and spent 12 minutes at altitude, after which he bailed out of his gondola and took 14 minutes to descend to the ground.
[Dr. Dick Chubb] looks at the swollen hand with concern. Three hours later the swelling will have disappeared with no ill effect.
Since the ascent took an hour and a half, it’s likely that Kittinger’s hand spent more than an hour above the Armstrong Limit of 63,000 ft. And the idea that Kittinger’s hand spent an hour in near-vacuum without suffering lasting damage or causing his death seems to be a powerful internet meme, brought up whenever the topic of human vacuum exposure is discussed.
But what happened to Kittinger’s hand is quite complicated, and probably not very informative about vacuum exposure generally.
It’s difficult to say exactly what pressure Kittinger’s body was at. He was wearing an MC-3A partial pressure suit—an outfit that relied on tight lacing and multiple inflating bladders to apply approximately uniform pressure to the body at altitude. Its main function was to allow pressure breathing—the wearer could breathe oxygen from a source at higher-than-ambient pressure, and the suit would compress his body to prevent dangerous pressure gradients being created within his lungs and circulatory system. Some residual pressure gradient commonly occurred—breathing from a source with a pressure 30 mmHg above the suit pressure was not dangerous, and could be performed for some time. But even breathing oxygen, Kittinger’s suit would need to compress his body by at least 100 mmHg to allow him to breathe at his maximum altitude.
The glove that had failed was made of leather and nylon, with lacing at the back to ensure a snug fit, and an internal bladder across the back of the hand, which should have inflated to generate the necessary counterpressure to balance that in the rest of the suit. It was similar in construction to the pair below:
So as Kittinger’s altitude increased and his suit inflated, his body pressure (and in particular, the pressure in his blood vessels) would ramp steadily higher relative to the tissues in his hand. Critically, the venous pressure in his arm would rise until it was ∼100 mmHg higher than normal venous pressure in his hand. Valves in the veins would prevent blood squeezing backwards down this pressure gradient, but arterial blood was still entering his hand, at an abnormally high relative pressure. The only thing that could happen was for the veins and capillaries to fill with blood until their pressure rose to match the suit pressure in Kittinger’s arm, at which point a trickle of blood flow out of his hand would resume.
So intravascular pressures everywhere in his hand would have very quickly risen to exceed 47 mmHg—there would be no gas formation in the blood vessels of his hand.
And, at those grossly abnormal pressures, his capillaries would have started to leak fluid into the surrounding tissues—he would develop oedema in his hand. The combination of oedema and (above the Armstrong Limit) water vapour bubbles would quickly expand the tissues of his hand until it completely filled the snugly fitted glove. Tissue pressure would then rise further as more oedema squeezed out of the capillaries, and eventually the water vapour bubbles would collapse back into the liquid phase as his tissues pressures exceeded 47 mmHg.
To what extent that process completed during Kittinger’s time above the Armstrong Limit we don’t know—but the fact that his hand was still swollen for a couple of hours after he returned to the ground implies that something other than water vapour was present in the tissues. Although in his National Geographic article Kittinger says he was “breathing oxygen”, Dennis R. Jenkins, in Dressing For Altitude (17.8 MB pdf) writes that the breathing-gas mix for Project Manhigh, the precursor to Project Excelsior, was 60% oxygen, 20% nitrogen and 20% helium, because of concerns about fire hazard. So it may be that Kittinger had a mixture of residual inert gas bubbles and oedema in the swollen hand Dr Chubb examined with concern.
SUMMARY
Tissues do swell with gas, promptly, and up to approximately double their normal size. Evidence of blood boiling can appear as early as one second after depressurization, and gas bubbles will get big enough within 10-15 seconds to prevent the heart pumping blood. Prompt repressurization will immediately fix this pump problem but, if you’ve been breathing nitrogen, you may then endure a shower of residual nitrogen emboli into your circulation, causing transient neurological problems. After a couple of minutes depressurized your heart will stop, and it will then become much more difficult to resuscitate you. At about the same time, the neurological insult and tissue damage from hypoxia become so severe that it’s likely you’ll need advanced medical facilities and access to hyperbaric medicine to recover unimpaired. Transient attempts at breathholding (or even being caught at the top of a deep breath in) are likely to have nasty consequences due to lung injury and air leaks into the blood vessels and tissues.
It seems likely that the actual Time of Useful Consciousness will be established by a race between falling oxygen concentrations in the blood and the onset of cardiac arrest because of gas bubbles filling the heart—there’s not much hope of anything longer than ten seconds, and (as I described in Part 1) reason to believe it might be significantly shorter than that.
For me, it’s interesting that there are two opposing schools of speculation about vacuum exposure out there, neither of which is accurate. In one, people are imagined to explode or freeze within seconds of exposure to space—certainly untrue. In the other, which seems to be almost a reaction to the excesses of the first, there’s the idea that the skin and blood vessels are somehow tight enough to stop widespread and immediate gas formation in the blood and tissues—again, as demonstrated by experiment, also untrue.
The truth, as ever, lies somewhere in the middle.
* This prompts the question of whether all that internal evaporation of water might not cause a significant fall in body temperature.
If we take a doubling of body volume as a ball-park figure for the volume of water vapour evolved, we could put it at about 80 litres. Steam tables tell us that, at an absolute pressure of 47 mmHg (that’s -713 mmHg on the “gauge” scale used in steam tables), a gram of water generates about 22.8 litres of vapour—so doubling an adult human’s size under these conditions requires the evaporation of only about 3.5 g of water. Our trusty steam table also tells us that, at 37ºC, this will require about 8.5 kJ of energy. But the specific heat capacity of water is 4.2 kJ/kg/ºC, making the heat capacity of an 80-kg person about 336 kJ/ºC. So all that internal evaporation is only enough to cool a person by a fortieth of a degree Celsius.
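The arithmetic in that footnote can be sketched in a few lines. The steam-table figures below (litres of vapour per gram, latent heat of vaporisation) are standard reference values rather than anything from the accident reports:

```python
# Back-of-envelope check of the cooling produced by internal evaporation,
# using the figures quoted in the footnote above.

VAPOUR_VOLUME_L = 80.0       # doubling an ~80-litre body means ~80 L of vapour
LITRES_PER_GRAM = 22.8       # steam-table value at 47 mmHg absolute pressure
LATENT_HEAT_KJ_PER_G = 2.41  # latent heat of vaporisation of water near 37 °C
BODY_MASS_KG = 80.0
SPECIFIC_HEAT = 4.2          # kJ per kg per °C, for water

water_evaporated_g = VAPOUR_VOLUME_L / LITRES_PER_GRAM       # ≈ 3.5 g
energy_kj = water_evaporated_g * LATENT_HEAT_KJ_PER_G        # ≈ 8.5 kJ
heat_capacity = BODY_MASS_KG * SPECIFIC_HEAT                 # ≈ 336 kJ/°C
temperature_drop = energy_kj / heat_capacity                 # ≈ 0.025 °C

print(f"{water_evaporated_g:.1f} g evaporated, cooling {temperature_drop:.3f} °C")
```

Which confirms the "fortieth of a degree" figure: a surprisingly small amount of water needs to evaporate to double a body's volume at such low pressures.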
The topic of explosive decompression generates a lot of nonsense, particularly in science fiction films and television series, but also scattered across the internet generally. We actually know quite a lot about what would happen if a human being were exposed to the vacuum of space—and it turns out not to be like the movies.
For this first part, I’m going to write a bit about basic physics and physiology, and discuss what that can tell us about the accuracy (or otherwise) of the common SF tropes we see in the movies. In the second part, I’ll move on to the evidence we have from actual vacuum exposures and explosive decompressions.
EXPLODING
Will people explode when exposed to vacuum?
Absolutely not.
Liquid pressures aren’t an issue—when decompressed, the liquids in our tissues will expand only very, very slightly before their ambient pressure drops to zero. Gas pressures are the problem. Our bodies contain various gas cavities which are at the same pressure as the surrounding atmosphere. Most spacecraft and spacesuits aren’t pressurized to one atmosphere, but we can take that as the worst-case scenario for explosive decompression. So at the moment tissue pressures fall to zero, these gas cavities will press outwards against the surrounding tissue with one atmosphere of pressure—that’s 100 kilopascals, which is 100,000 newtons per square metre.
But skin and soft tissue is strong. Here’s film from Arthur C. Clarke’s World of Strange Powers (1985). The scientists are reproducing a traditional “hook-hanging” rite carried out at Kataragama, Sri Lanka:
The volunteer weighs 55 kilograms, and hangs from six slim hooks. With a generous allowance of 30 square centimetres for the total suspension area, that comes out to pressures of 180,000 newtons per square metre on the soft tissues the hooks support. That’s almost twice our worst-case limit, and the skin doesn’t even stretch very far.
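The hook-hanging comparison is simple enough to verify directly, using the mass and suspension area quoted above:

```python
# Comparing the pressure on the hook-hanging volunteer's soft tissues with
# the one-atmosphere worst case for explosive decompression.

G = 9.81                  # m/s², standard gravity
mass_kg = 55.0            # the volunteer's weight, from the film
area_m2 = 30e-4           # generous 30 cm² total suspension area, in m²

weight_n = mass_kg * G                # ≈ 540 N
pressure_pa = weight_n / area_m2      # ≈ 180,000 N/m²
atmosphere_pa = 100_000               # worst-case decompression load

print(f"{pressure_pa:.0f} Pa on the skin, "
      f"or {pressure_pa / atmosphere_pa:.1f} atmospheres")
```

So the hooks load the skin at nearly twice the pressure a one-atmosphere decompression would, and it holds.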
So no exploding.
FREEZING
Will people freeze solid as soon as they are exposed to space?
Absolutely not.
Vacuum is a good insulator. At cool ambient temperatures, our bodies lose heat mainly by conduction and convection, which is why air temperature and wind speed are so important to the way we dress outdoors. In the absence of air, our skin will cool by radiation—the loss of energy at infrared wavelengths emitted by our warm bodies. Depending on skin temperature and clothing, we radiate at anything from 100 to several hundred watts. So that’s how fast we’ll lose heat.
Now, we’re made mainly of water, and water has a high specific heat capacity, around 4000 J/kg/ºC—which means a kilogram of water needs to lose 4000 joules to fall in temperature by one degree Celsius. So an 80-kilogram bag of water (that’s approximately me) is going to need to lose over 300,000 joules of energy before its temperature falls by one degree. (That’s neglecting the continuing metabolic production of energy in the meantime.) If I’m radiating at a generous 500 watts, and producing no internal energy, and receiving no energy from sunlight, that’s ten minutes of vacuum exposure before my temperature falls by just one degree Celsius.
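The cooling-rate estimate works out like this, using the same deliberately generous 500-watt radiation figure:

```python
# How long a vacuum-exposed body takes to radiate away one degree's worth
# of heat, ignoring metabolic heat production and incoming sunlight.

BODY_MASS_KG = 80.0
SPECIFIC_HEAT_J = 4000.0     # J per kg per °C, treating the body as water
RADIATED_POWER_W = 500.0     # generous upper estimate of radiative loss

energy_per_degree_j = BODY_MASS_KG * SPECIFIC_HEAT_J     # 320,000 J
seconds_per_degree = energy_per_degree_j / RADIATED_POWER_W

print(f"{seconds_per_degree / 60:.1f} minutes to cool by one °C")
```

About ten minutes per degree, even with every assumption tilted towards faster cooling—hence no instant freezing.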
There may be some local difficulties, though. We also cool by evaporation, which becomes significant when it’s hot enough to cause sweating. Water will evaporate from any moist surfaces exposed to vacuum, and it will take energy with it as it does so, driving down the temperature of the tissue it evaporates from. So on exposure to vacuum the eyes, nasal cavity, and probably mouth and respiratory tract are going to start cooling by evaporation. How cold they get will depend on how quickly water moves out of the tissues to replace what is lost from the surface.
But no-one is going to turn to ice crystals and shatter.
BREATH-HOLDING
Should people hold their breath if about to be exposed to vacuum?
Not a good idea.
With the tissues equilibrated to an ambient zero pressure, the cardiovascular system will continue to work as usual—all its pressures are relative pressures (what engineers call “gauge” pressures). A blood pressure of 120/80 is telling you that the systolic pressure is 120 millimetres of mercury (mmHg) above ambient, and the diastolic pressure 80 mmHg above ambient. That’s true at sea level with an ambient pressure of one atmosphere, at twenty metres down in the ocean with an ambient pressure of three atmospheres, on top of Everest with an ambient pressure of a third of an atmosphere, or in vacuum at zero atmospheres.
Trouble is, respiratory gas exchange works on absolute pressures. No matter what your tissue ambient pressure is, you still need to breathe a partial pressure of 160 mmHg of oxygen (21% of an atmosphere) to get a normal concentration of oxygen into your blood. This is fine when you’re breathing at one atmosphere of ambient pressure. It’s even fine on top of Everest—if you mix supplementary oxygen into a third of an atmosphere of breathing gas, you can easily get normal oxygenation while still balancing the pressure inside your lungs against the ambient pressure outside your body and in your tissues. There’s no resulting pressure gradient, and nothing gets squashed or stretched.
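The Everest example can be made concrete with a little partial-pressure arithmetic. The "third of an atmosphere" summit pressure is the rounded figure from the text; the rest follows from it:

```python
# The partial-pressure arithmetic behind "mix supplementary oxygen into a
# third of an atmosphere of breathing gas". All pressures in mmHg.

SEA_LEVEL_MMHG = 760.0
O2_FRACTION_OF_AIR = 0.21

needed_po2 = SEA_LEVEL_MMHG * O2_FRACTION_OF_AIR   # ≈ 160 mmHg of oxygen

everest_ambient = SEA_LEVEL_MMHG / 3               # ≈ 253 mmHg, as in the text
o2_fraction_needed = needed_po2 / everest_ambient  # fraction of breathing gas

print(f"Need {needed_po2:.0f} mmHg of O2; at Everest's summit pressure "
      f"that's a {o2_fraction_needed:.0%} oxygen mix")
```

A roughly 60% oxygen mix at ambient pressure gives normal oxygenation with no pressure gradient across the chest—which is exactly what's impossible in vacuum.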
But in a vacuum, the pressure in your lungs (necessary for gas exchange) is not balanced by any external pressure. Holding air in your chest is going to cause pressure outwards, stretching the lungs; and inwards, compressing the heart and large blood vessels in the middle of your chest. And notice that even a standard 160 mmHg pressure of oxygen is a large pressure, exceeding the normal pressures of arterial blood. It’s enough pressure to squash your heart, which is not going to have a good effect on its ability to pump. This is why people can make themselves faint while trying too hard to blow up a balloon—the high pressure inside their chest interferes with the flow of blood through the heart. So trying to hold on to a lungful of oxygen in vacuum will make your blood pressure crash, and you’ll almost certainly pass out.
And that 160 mmHg is the smallest plausible pressure someone might find themselves trying to hold when suddenly exposed to vacuum. It’s the minimum operating pressure for spacesuits—most operate at around 240 mmHg. The Space Shuttle maintained an internal atmosphere at 530 mmHg during missions. These are pretty lethal pressures to try to hold in the lungs.
It’s not just the cardiovascular system that will suffer. The lungs themselves are not designed to support that sort of pressure differential. In the fifth edition of Diving and Subaquatic Medicine, Chapter 6, Edmonds et al. report that a person’s lungs will leak air into the surrounding tissue when subjected to a pressure gradient of 110 mmHg, even if the chest is prevented from expanding using a binder. If the chest is allowed to expand in response to the imposed pressure, the lungs start to leak at just 70 mmHg. (Admittedly, this is from a cadaver study, but it’s not the sort of test you can find volunteers for.) What’s going on here is that stretch is bad for your lungs, too—if your chest is blown up like a balloon, the lungs will burst at a lower pressure. This is bad news for anyone tempted to take a deep breath before entering vacuum—the extra stretch moves their lungs closer to the burst point. But that’s probably academic, since these experimental burst pressures are lower than the lowest spacesuit operating pressures.
So if you try to hold a lungful of air on decompression, not only will you squash your heart and cause your blood pressure to fall catastrophically, your lungs will leak—they’ll squeeze air into the lung blood vessels, sending showers of bubbles into your circulation; they’ll squeeze air into the tissues around your heart and then up into your neck; and they’ll squeeze air into the pleural cavities lining your chest, causing your lungs to deflate.
As an added extra, the air held in your middle ears will burst your eardrums.
So breath-holding on exposure to vacuum is a good way to incapacitate yourself. Is there another option?
EXHALING
Should people exhale on exposure to vacuum?
Better … but still not great.
If breath-holding will make you lose consciousness and pop your lungs, not breath-holding seems like the only viable alternative. If you can arrange to yawn just as the pressure drops, to open your Eustachian tubes and let the air out of your middle ears, then you’ll also prevent your eardrums bursting.
Exhaling gets rid of that abnormal pressure gradient in the chest, so there’s no interference with blood pressure, and no popped lungs. However, it takes time for the lungs to empty, so if decompression is very fast, lung injury could still occur while the pressure in the lungs remains transiently higher than the pressure in the surrounding tissues. Here’s a theoretical plot from the second edition of the Bioastronautics Data Book, showing that a 250 mmHg decompression (from 350 to 100 mmHg) over 0.3 seconds will produce a brief pressure gradient across the chest wall that reaches what we know to be potentially lung-popping levels:
Having exhaled to vacuum, there’s now no oxygen at all in the lungs. From the point of view of keeping oxygen circulating in the blood, this is a disaster. Venous blood, returning from the tissues, still contains a considerable residue of oxygen. This is normally topped up by oxygen diffusing from the lungs into the blood. Even if not much oxygen is added (for instance, if you’re holding your breath), at least some oxygen goes back to the tissues. But if there’s no oxygen in the lungs, the normal diffusion gradient is reversed—oxygen leaves the venous blood and diffuses into the space inside the lungs. So very little then gets sent back to the tissues. This means that tissue oxygenation fails abruptly and catastrophically—much faster than it does with simple breath-holding.
We’ve got some experience of how quickly things go badly wrong under this sort of hypoxic insult—some of it comes from pilots depressurizing at very high altitude, and some of it comes from people who have accidentally breathed gas containing no oxygen, usually in an industrial accident.
The USN Aerospace Physiologist’s Manual makes a prediction about how long a person might remain conscious and orientated enough to carry out simple tasks, once exposed to near-vacuum. This is based on the apparent convergence of a number of graphs generated by decompressing various hapless volunteers to various altitude-equivalents during the 1960s:
Somewhere around an altitude of 65,000 ft (with an air pressure of 43 mmHg, and a partial pressure of oxygen of just 9 mmHg) the period during which volunteers remain conscious enough to perform simple tasks converges on 12 seconds.
So a Time of Useful Consciousness (TUC) of 12 seconds following exposure to vacuum is often quoted, but there are two caveats to it, neither of them encouraging. Firstly, this derived time applies to individuals at rest—that is, not trying to do urgent things in order to stay alive. Secondly, it derives from experiments involving non-abrupt decompression. As Paul W. Fisher reports in the USAF Flight Surgeon’s Guide:
These TUCs are for an individual at rest. Any exercise will reduce the time considerably. For example, usually upon exposure to hypoxia at FL 250 [an altitude of 25,000 ft], an average individual has a TUC of 3 to 5 minutes. The same individual, after performing 10 deep knee bends, will have a TUC in the range of 1 to 1.5 minutes.
[…]
A rapid decompression can reduce the TUC by up to 50 percent caused by the forced exhalation of the lungs during decompression …
Exercise not only increases the consumption of oxygen, it also increases the speed at which blood returns to the lungs and is depleted of its oxygen content. So (extrapolating the extrapolations!) if you’re explosively decompressed while exercising vigorously, it looks like you might end up with less than six seconds of useful consciousness. Which would be disappointing.
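Stacking those penalties together gives the "extrapolation of extrapolations". The factor-of-three exercise penalty below is my own reading of the knee-bends example (3–5 minutes falling to 1–1.5 minutes), so treat it as an assumption:

```python
# Stacking the TUC penalties from the Flight Surgeon's Guide onto the
# 12-second resting Time of Useful Consciousness.

TUC_AT_REST_S = 12.0
RAPID_DECOMPRESSION_FACTOR = 0.5  # "up to 50 percent" reduction
EXERCISE_FACTOR = 1.0 / 3.0       # assumed, from the knee-bends example

tuc_rapid_s = TUC_AT_REST_S * RAPID_DECOMPRESSION_FACTOR  # 6 s
tuc_exercising_s = tuc_rapid_s * EXERCISE_FACTOR          # ~2 s

print(f"Rapid decompression at rest: {tuc_rapid_s:.0f} s; "
      f"while exercising: {tuc_exercising_s:.0f} s")
```

Six seconds at rest, and plausibly far less for anyone exerting themselves—which, as the text says, would be disappointing.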
BOILING
Will a person’s blood boil on exposure to vacuum?
Yes.
At body temperature, water has a saturated vapour pressure of 47 mmHg. Which means that if the ambient pressure falls below 47 mmHg, the water will evaporate into the gas phase throughout its bulk. Which is the definition of boiling—bubbles forming and expanding within the liquid. The altitude at which the atmospheric pressure drops below 47 mmHg (63,000 ft), and the water in human tissues is in danger of boiling, is called the Armstrong Limit—named for Harry Armstrong, one of the pioneers of aviation medicine.
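That 47 mmHg figure is easy to check with the Antoine equation for water's vapour pressure. The coefficients below are standard textbook values (valid roughly 1–100 ºC, pressure in mmHg), not anything from the sources discussed here:

```python
# Checking the 47 mmHg Armstrong-Limit figure with the Antoine equation.
# Coefficients are standard values for water, valid roughly 1-100 °C.

A, B, C = 8.07131, 1730.63, 233.426

def water_svp_mmhg(temp_c: float) -> float:
    """Saturated vapour pressure of water, in mmHg, at temp_c Celsius."""
    return 10 ** (A - B / (C + temp_c))

print(f"SVP at body temperature (37 °C): {water_svp_mmhg(37.0):.1f} mmHg")
```

Any ambient pressure below that value, and the water in the tissues can boil at body temperature.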
You’ll find some places on the internet that claim a person’s blood won’t boil, because normal blood pressure (120 mmHg systolic, 80 mmHg diastolic, remember) is higher than that 47 mmHg critical value. In support of this claim, many cite physicist Geoffrey Landis’s otherwise excellent exposition on explosive decompression:
Your blood is at a higher pressure than the outside environment. A typical blood pressure might be 75/120. The “75” part of this means that between heartbeats, the blood is at a pressure of 75 Torr (equal to about 100 mbar) above the external pressure. If the external pressure drops to zero, at a blood pressure of 75 Torr the boiling point of water is 46 degrees Celsius (115 F). This is well above body temperature of 37 C (98.6 F). Blood won’t boil, because the elastic pressure of the blood vessels keeps it at a pressure high enough that the body temperature is below the boiling point …
Now, the fact that Landis gets the notation for arterial blood pressure reversed (his numbers should be written 120/75) is probably a hint that he’s not entirely at ease with the physiology of blood circulation. What he has forgotten about is the blood that’s not in the arteries, which at any given time amounts to about 90% of the blood volume—flowing through the capillaries, veins and lung blood vessels, all of which normally have pressures well below the critical value. The pressure in the central veins, in particular, is usually only a few millimetres of mercury above ambient. So although Landis writes “No” in answer to the rhetorical question “Would your blood boil?” what he really means is that ten percent of your blood wouldn’t boil, but the rest would. Which certainly seems more like a “Yes” to me.
It’s also sometimes claimed that, while the veins do run at low pressures normally, they will tightly contain any rise in pressure, preventing gas bubbles from forming. But veins are so-called capacitance vessels—they adjust their volume according to the volume of the circulating blood. They will reach their elastic limit if overfilled, but in healthy adults they’re continuously adjusting their volume at low pressure. It’s possible to infuse a litre or more of fluid into the veins of a healthy adult without the venous pressure shifting much over 15 mmHg. So there’s likewise room for a litre or more of gas to form in the venous side of the circulation without causing any major pressure rise in the system. This is a problem, because the amount of gas necessary to cause cardiac arrest in a human is estimated at 3-5 ml/kg—if a few hundred millilitres of gas gets into the heart chambers, it forms a compressible volume that stops the heart propelling liquid when it pumps. The heart continues beating, but it moves no blood.
Another variation on this optimistic theme is that the skin and subcutaneous tissues are tight enough to prevent gas expanding. (Presumably the people who make this claim have never thought seriously about the level of tissue stretchiness implied by a yawn or a clenched fist.) There’s a condition called subcutaneous emphysema, in which gas (usually air from a leaking lung) becomes trapped under the skin. We know that even the relatively low pressures associated with a mechanical ventilator (around 20 mmHg) can squeeze gas out of an already injured lung and into the tissues—which shows that the tissues are unable to immediately generate the necessary counterpressure. Like the veins, the tissues eventually reach an elastic limit and oppose the entry of any further gas, but there is considerable, obvious distension before that happens. So, on exposure to vacuum, the surrounding tissues are not going to be able to prevent the veins expanding.
What happens within the tissues themselves is an interesting question. Dense tissues like tendon and ligament may well be able to contain any tendency for gas bubbles to form within their substance. The loose, soft tissue that lies under the skin (the region affected by subcutaneous emphysema) obviously doesn’t have the structural strength to prevent the spread of gas bubbles once they start forming, but there’s some evidence that it is tightly woven enough to suppress initial bubble formation, for a while.
The second edition of the US Naval Flight Surgeon’s Manual discusses a study in which the hands of volunteers were decompressed to very low pressures. Subcutaneous gas bubbles didn’t form at the Armstrong Limit; they didn’t even form at pressures equivalent to the boiling point of water at skin temperature, which is a little lower than the body’s core temperature. The highest pressure at which gas formation occurred was 20 mmHg; three people were decompressed to 5 mmHg, and all showed gas formation. But the onset of visible gas was delayed—it occurred “suddenly and manifested itself by marked swelling”, but after a lapse of between thirty seconds and over ten minutes of decompression time. Once swelling occurred, it could be rapidly abolished by recompression. But, strikingly, gas was generated much more readily when the hand was decompressed again. So this experiment suggests that it might be initially difficult for gas bubbles to form spontaneously in the tissues, but once they do they can spread rapidly. Other body gases (nitrogen, oxygen, carbon dioxide) will diffuse into these gas bubbles once they form, and will persist as tiny bubbles for some time after recompression has caused the water vapour to collapse back into the liquid phase. These tiny bubbles then form nuclei that will quickly re-expand with water vapour if the tissues are decompressed again.
Unfortunately, the Naval Flight Surgeon’s Manual is a little short on detail. It’s not clear if these volunteers’ hands were still being perfused with blood, or if they’d been isolated by tourniquet, for instance—which would make a significant difference to the tissue pressures. The study was reported in the first edition of NASA’s Bioastronautics Data Book (1964), but was dropped from the second edition of 1973, apparently superseded by more recent experiments which I’ll describe in my next post on this topic. So the pressures and timings in this report are interesting, but may not reflect what happens with total-body decompression.
OTHER THINGS
As if all the foregoing wasn’t enough, a few other problems may occur. If the atmosphere you’re breathing immediately before decompression contains nitrogen, some of that nitrogen will bubble out of solution, in the tissues and blood, as the ambient pressure falls. While this can be a major problem for aviators at altitude, it’s probably a relatively minor problem for those exposed to vacuum, given that water vapour bubbles will be forming. And the second edition of the Bioastronautics Data Book notes that, “The symptoms of decompression sickness are rarely observed during the first few minutes of exposure to low pressure.”
I’ve already mentioned the problem of air trapped in the middle ear. Other areas where air can be trapped, causing pain when the ambient pressure falls, are in the sinuses and under dental fillings. The USN Aerospace Physiologist’s Manual puts the experimental incidence of sinus problems on decompression at 1%, and of dental pain at 0.1%.
Finally, there’s the volume of gas that’s sitting in everyone’s stomach and intestines. This will expand as the ambient pressure falls. On slow decompression it generally causes cramping pain, burping and flatulence. But both the third edition of the USN Flight Surgeon’s Manual and the second edition of the Bioastronautics Data Book note that more rapid expansion of gut gas has the potential to intensely stimulate the vagus nerve, causing a profound fall in heart rate and blood pressure, leading to unconsciousness.
SUMMARY
So:
You won’t explode, but you may well swell up. You won’t freeze instantly, but your eyes, nose, mouth and airways will experience evaporative cooling. You shouldn’t try to hold your breath. You should breathe out and yawn as the pressure drops, but if decompression is explosive that may not protect your lungs from pressure injury. Your venous blood will boil, and there’s room in your venous circulation to generate enough gas to stop your heart moving blood. The gas in your gut may expand so rapidly it leads to a reflex slowing of your heart and rapid fall in blood pressure. You will have a maximum of 12 seconds of useful consciousness, but if you’re exerting yourself and/or the decompression has been explosive, your period of consciousness may well be considerably shorter.
For Part 2 of this topic, I’m going to look at the data we have from actual vacuum exposure, in humans and animals.
Note: Almost all pressures are quoted in an antique unit of measurement, the millimetre of mercury (mmHg). This is because the most familiar physiological pressure for most people is blood pressure, which is always quoted in millimetres of mercury, and also because a lot of the relevant literature dates back to a time when atmospheric gas pressures were quoted in millimetres of mercury.
If you want to convert, the following are equivalents, in round numbers: 1 atmosphere, 1000 millibars, 760 millimetres of mercury, 100 kilopascals, 15 pounds per square inch.
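For anyone who does want to convert precisely, here's a minimal sketch using the exact definitions of one standard atmosphere, rather than the round numbers above:

```python
# Exact values of one standard atmosphere in various units.
ATM_MMHG = 760.0
ATM_MBAR = 1013.25
ATM_KPA  = 101.325
ATM_PSI  = 14.696

def mmHg_to_kpa(p):
    """Convert a pressure in millimetres of mercury to kilopascals."""
    return p * ATM_KPA / ATM_MMHG

def mmHg_to_psi(p):
    """Convert a pressure in millimetres of mercury to pounds per square inch."""
    return p * ATM_PSI / ATM_MMHG

# The 47 mmHg Armstrong Limit in other units:
print(round(mmHg_to_kpa(47), 1))   # about 6.3 kPa
print(round(mmHg_to_psi(47), 2))   # about 0.91 psi
```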
nacreous: pertaining to or resembling mother-of-pearl
Nacreous clouds are in the UK news at present, with multiple sightings in Scotland. There was an interesting divide in the BBC news coverage of the phenomenon this evening, with national newsreader George Alagiah intoning some twaddle about “forming at sunset” and “caused by refraction” in a sing-song voice, as if delivering a boring bedtime story. Whereas the BBC Scotland weather presenter, Gillian Smart, got the story right and had some nice pictures, too.
Nacreous clouds form in the low stratosphere, which is pretty high for a cloud. They’re present at all times of the day and night, but are more visible before sunrise and after sunset, when they are the first things to catch the sunlight, and the last to lose it, by virtue of their altitude. Their colours are due to diffraction, not refraction.
But this is a post about words, not natural phenomena.
Nacreous means “pertaining to nacre”. Nacre is the iridescent substance that lines many varieties of sea-shells, most notably those of pearl-forming oysters—it’s therefore commonly known as mother-of-pearl. The word comes to us from the Romance languages—it has analogues in French, Portuguese, Spanish and Italian—but its early origins remain obscure.
Nacre also provides the characteristic sheen on the surface of a pearl. And it’s interesting that a simple little word like pearl should also be a puzzle to etymologists. There are tentative links to Latin perula, a diminutive of perum, “pear”; or to a hypothesized diminutive pernula of perna, “leg of mutton” (from the shape of a mussel shell); or to pilula, “globule”. Take your pick.
In Latin, a pearl is margarita, and in Greek, margarites. Just as Pearl is a woman’s name in English, so Margarita is in Spanish. Margaret and Margery are its English-language equivalents. Margarita is also the Spanish word for “daisy”, though the connection between the pearl and the flower is obscure. The connection between the flower and the various cocktails called “daisies” is also obscure—at one time there was a Whiskey Daisy, a Gin Daisy and a Brandy Daisy, but the Tequila Daisy was the one that became most popular, and took the name margarita for itself.
Margarita also gave us the name for margaric acid, a mixture of fatty acids with a pearl-like lustre. Margarins were chemical derivatives of margaric acid, and margarine is a butter-like substance that took its name from the margarins, although chemically unrelated.
Something that looks pearly is margaritaceous, and something that produces pearls is margaritiferous.
Oyster comes from Latin ostrea and Greek ostreon. Something that resembles an oyster is ostracine, ostraceous or ostreaceous. The farming of oysters is ostreiculture.
An ostrakon (plural ostraka) is an archaeological find—a shard of pottery that has been used to jot down a note, something that was common practice in Ancient Greece. The Greeks called these pottery shards ostraka because of their curving resemblance to oyster shells. Votes were cast using ostraka, in particular when citizens voted for the banishment of one of their number. Such banishment was called ostrakismos—which gives us our word ostracism, meaning “exclusion”.
Crepuscular rays are rays that occur during the crepuscule, which is a fine old word for “twilight”. They’re the rays of brightness and shadow that seem to fan outwards and upwards from the setting or rising sun when it is masked by cloud. What’s happening is that the shadow of the clouds is being projected across the sky above your head. Dust and moisture in the air up there is being illuminated, or cast into shadow, and we see bright and dark streaks across the sky as a result. So we only get crepuscular rays if there’s something in the air to be illuminated—on a dry day with little dust or smog, there’s no hope of a spectacular display of rays like the one above.
One striking thing about the image above is that there is a noticeable dark shadow framing the cumulus cloud. That implies that there is an illuminated surface somewhere above the visible cloud. There’s actually a thin layer of higher stratus cloud, on to which the shadow of the cumulus is being projected. The illuminated stratus also accounts for the beautiful golden yellow hue of the sky. Just a couple of minutes later, the lengthening shadow on the stratus is more evident:
In the daytime, a place to look for clouds casting shadows on other clouds is around the tops of towering cumulonimbus. Sometimes the rising tops of these clouds push upwards through a layer of cirrus, and the sun will project the shadow of the crown of the cumulonimbus downwards on to the thin layer of cirrus.
Crepuscular rays become less evident as they fan out from the sun. My diagram shows what’s going on:
The individual bright rays are delineated by cloud shadow. When you look towards the sun, you’re looking diagonally through each shadow zone. The long sightline within the shadow makes it appear noticeably darker than the background sky. But as your gaze sweeps upwards, away from the sun, you begin to look through the shadows at right angles. The shorter sightline makes them progressively less evident, and it’s unusual to see crepuscular rays extend right overhead. But look what happens behind you in the diagram—your sightline is diagonal again and, although the shadow will have become more diffuse as light scatters into it from the surrounding air, there’s a possibility it may return to visibility behind you. So whenever you see crepuscular rays, you should turn around and check the sky in the opposite direction.
You may also see clouds in the sky behind you generating their own visible shadows. Just as perspective makes the crepuscular rays seem to radiate outwards from the sun in front of you, it makes these anticrepuscular rays behind you appear to converge on a point directly opposite the sun:
The same phenomenon that produces crepuscular rays is also responsible for the appearance of sunbeams shining downwards through the clouds:
Perspective again makes these beams appear to radiate from a central point, centred on the sun above the clouds.
Since these don’t happen at twilight, they shouldn’t really be called crepuscular rays, though they often are. The common word sunbeam seems as good as any; solar rays is a catch-all term that includes crepuscular rays; computer graphics artists call them god rays; and Marcel Minnaert, in his marvellous book Light and Color in the Outdoors, introduced me to the old expression “the sun drawing water”, from the old belief that water evaporated along the sunbeams.
Christmas Day’s full moon made me decide to make my first post of the New Year about a resolution—specifically, the resolution of the human eye. (See what I did, there?)
We’re so used to images of the full moon like the one above, it’s difficult to remember that, until the invention of the telescope in the 17th century, people had a very limited idea of what it actually looked like.
Here’s a 16th-century sketch of the moon by Leonardo da Vinci:
A very careful observer and excellent draftsman, using the naked eye, was apparently able to record very little surface detail.
Lest you think Leonardo was having a bad day, or perhaps just wasn’t that interested in the detail, here’s the best effort of the astronomer William Gilbert:
Pretty rubbish, eh? Although Leonardo and Gilbert both captured some of the larger dark shapes on the lunar disc, neither was able to produce much in the way of detail.
What was the problem? The size of the moon was the problem. Although it can occasionally seem huge in the sky, especially when rising or setting, it’s actually surprisingly small, in angular terms. It averages about 31 minutes of arc in diameter—just over half a degree. For comparison, your thumb at the end of your outstretched arm covers about a degree of the sky. So it’s easy to blot out the whole lunar disc with a finger at arm’s length.
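That just-over-half-a-degree figure is easy to verify from the moon's mean diameter and distance (the values below are the standard ones):

```python
from math import atan, degrees

MOON_DIAMETER_KM = 3474.8   # mean lunar diameter
MOON_DISTANCE_KM = 384400   # mean Earth-Moon distance

# Angular diameter: twice the angle subtended by the lunar radius.
angle_deg = degrees(2 * atan(MOON_DIAMETER_KM / 2 / MOON_DISTANCE_KM))
arcmin = angle_deg * 60
print(round(arcmin, 1))   # about 31 minutes of arc
```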
Now, the average human eye can resolve detail down to one minute of arc. The row of letters on your optician’s Snellen chart that corresponds to normal 6/6 vision (or 20/20 if you’re in the USA) is five minutes of arc high, with the black lines and narrow white spaces subtending one minute at your eye.
Part of that resolution limit is due to something called diffraction limitation—when your pupil is small, light rays are scattered by the edge of the iris and end up converging to form a small disc, rather than a point, on the retina. When your pupil is large, diffraction limitation is less of an issue, but imperfections in the optics of your eye, especially around the edge of the lens, become a problem. So most people end up with one minute of arc being their best resolution.
Even if the optics of your eye were perfect, you’d hit another resolution problem, which is the density of photoreceptor cells in the retina. Even at their densest, in the central fovea, there are only a couple of hundred thousand per square millimetre, packed so tight that each is just two microns across—translating to a resolution of about 0.4 minutes of arc. So that’s as good a resolution as you’re going to get even with an excellent human eye.
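The 0.4-minute figure follows from simple trigonometry, if we assume a typical eye with its optical centre about 17 millimetres from the retina (that focal length is my assumption, not stated above):

```python
from math import atan, degrees

CONE_SPACING_M = 2e-6    # foveal cone diameter, about two microns
EYE_LENGTH_M   = 0.017   # assumed distance from the eye's optical centre to the retina

# Angle subtended at the optical centre by one cone's width, in minutes of arc.
arcmin = degrees(atan(CONE_SPACING_M / EYE_LENGTH_M)) * 60
print(round(arcmin, 2))   # about 0.4 minutes of arc
```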
(And that is why, although that Ultra HD 4K television screen may look jaw-droppingly marvellous when you’re peering at it from a metre away in the shop, it’s probably going to be a disappointment when you get it home—for most sizes of TV, at the usual viewing distances, those 4K pixels are smaller than your ability to resolve. If your eyes are already at their resolution limit with HD, Ultra HD is going to look exactly the same. See if you can get a salesperson to admit that.)
Anyway, back to the moon. In terms of visual resolution, it’s just 31 pixels across, like some rubbish little 32×32 icon from a prehistoric version of Windows. That’s why Leonardo and Gilbert produced the surprisingly poor sketch maps they did. What we actually see of the moon with the naked eye is nothing like the image at the top of this post, but more like this *:
To be visible to the naked eye, at one-minute resolution, a lunar feature has to be about 110 km across. So Leonardo and Gilbert were easily able to pick out the distribution of lunar “seas” (dark lava plains) that give the “Man in the Moon” his face, but neither of them was able to record a single lunar crater. However, now that we know where to look, we can often pick out the bright patches of ejecta surrounding the craters Kepler, Aristarchus and Copernicus, superimposed as they are on dark lava plains. Tycho produces a bright splash in the south, discernible even against the paler rocks of the lunar highlands. The 110-kilometre crater Plato makes a dark-floored contrast with the surrounding pale highland terrain, but it’s right at the dubious edge of visibility for most people.
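That 110 km figure is just the moon's distance multiplied by the tangent of one minute of arc:

```python
from math import radians, tan

MOON_DISTANCE_KM = 384400        # mean Earth-Moon distance
ONE_ARCMIN = radians(1 / 60)     # one minute of arc, in radians

# Smallest lunar feature resolvable at one-arcminute visual acuity.
feature_km = MOON_DISTANCE_KM * tan(ONE_ARCMIN)
print(round(feature_km))         # about 110 km
```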
Now here’s the moon at half-minute resolution, right down at the limit imposed by the density of photoreceptors in our eyes:
There’s a great deal more detail—the dark-floored notch of Plato is now pretty evident, and the bright patch around Tycho now contains a central, circular crater.
Are there people who see this well? There are. When the planet Venus is at its closest to Earth it shows as a tiny crescent, one minute of arc across, which can easily be discerned with a small telescope but which most of us see as a simple point of light. Some people claim to be able to discern the crescent shape, however, and many of them can make a sketch of its orientation which convincingly matches the telescopic view.
* I took the original image, and downsized it so that the moon was 31 pixels across. Then I enlarged it, to produce an image of the correct resolution, but it was full of blocky artefacts around the edge of the moon. So I took the original again, and applied Gaussian blur until it smoothly degraded the resolution to match my blocky 31-pixel version.
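The downsizing step in that footnote can be sketched in a few lines. This is a deliberately crude block-averaging stand-in for an image editor's resize filter, run on a synthetic grid of pixel values rather than the actual moon photograph:

```python
def downsample(grid, factor):
    """Shrink a 2-D list of pixel values by averaging square blocks.
    A crude stand-in for an image editor's resize filter."""
    rows = len(grid) // factor
    cols = len(grid[0]) // factor
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [grid[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 124-pixel-wide synthetic "moon" reduced to 31 pixels across:
big = [[(x + y) % 256 for x in range(124)] for y in range(124)]
small = downsample(big, 4)
print(len(small), len(small[0]))   # 31 31
```

Re-enlarging a grid like this is what produces the blocky artefacts mentioned above, which is why a Gaussian blur of the original makes a smoother-looking substitute.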
A familiar pair of primary and secondary rainbows is always concentric, and the outer rainbow has its colours in the reverse order from the primary. But these two have their colours in the same order, and are converging to meet on the horizon. What’s going on there?
I was walking home from work a couple of months ago when I saw this pair of rainbows sticking up above the roofs of the houses like some sort of cosmic V-sign. I lurched to a halt, stared for a few seconds, and then broke into a jog—the sun was going to be setting soon, and I wanted to get a proper view of the pair. Once chez Oikofuge, I body-checked my way past the Boon Companion’s customary greeting (sorry, my love), grabbed a camera, and ran up the stairs to take this photograph looking out over our local river estuary. The photo provides a hint as to what’s causing the unusual rainbow.
Here’s a reminder of how a standard primary rainbow forms:
It’s actually a bit of a Just-So story. Quite why a spherical raindrop chooses to turn an incoming ray of red light back on itself at an angle of 42½° (and a violet ray at 40½°) is rather complicated, and I’m working on a little programming project to try to explain it clearly—watch this space. But for now, we just accept that if you look directly away from the sun, towards what is called the antisolar point (handily marked by the shadow of your head), then every raindrop that happens to be at 42½° from your line of sight will be directing red light towards your eyes (and every raindrop at 40½° will be directing violet light towards you). In principle, then, a primary rainbow should form a complete circle in the sky, centred on the shadow of your head, and 42½° in angular radius. In practice, the parts below the horizon become progressively more difficult to see, because there are fewer and fewer raindrops along your line of sight as you shift your gaze downwards.
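Pending that fuller explanation, here's a minimal sketch of the classical Descartes minimum-deviation calculation, using assumed refractive indices of 1.330 for red light and 1.344 for violet (any small mismatch with the angles quoted above just reflects that choice of indices):

```python
from math import acos, asin, sin, sqrt, degrees

def rainbow_angle(n):
    """Angular radius of the primary bow for water of refractive index n.
    Descartes' argument: the deviation D = 180 + 2i - 4r of a ray that
    reflects once inside the drop is minimised where
    cos(i) = sqrt((n**2 - 1) / 3), and light piles up at that minimum."""
    i = acos(sqrt((n ** 2 - 1) / 3))   # incidence angle at minimum deviation (radians)
    r = asin(sin(i) / n)               # internal refraction angle, from Snell's law
    deviation = 180 + 2 * degrees(i) - 4 * degrees(r)
    return 180 - deviation             # angular distance from the antisolar point

print(round(rainbow_angle(1.330), 1))   # red light: about 42.5 degrees
print(round(rainbow_angle(1.344), 1))   # violet light: about 40.6 degrees
```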
Notice that the antisolar point is always below the horizon, because the sun can only illuminate raindrops when it’s above the horizon. (D’oh!)
Now, look at how still the water is in my photo. (That’s unusual, hereabouts.) The rainbow-forming raindrops on the far side of the estuary are not just being exposed to direct sunlight, they’re also being illuminated by light coming from the image of the sun reflected in the still water. Although I can’t see that extra “sun”, it’s nevertheless providing me with another antisolar point and an associated rainbow. This reflected antisolar point is precisely as far above the horizon as the real antisolar point is below it. So the two rainbows have to meet exactly at the horizon, as in my diagram:
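The geometry can be sketched numerically. Taking 42½° as the bow radius, the normal bow is centred the sun's altitude below the horizon and the reflection bow the same distance above it, so their tops sit at different heights while both arcs cross exactly at the horizon:

```python
RAINBOW_RADIUS = 42.5   # angular radius of the primary bow, in degrees

def bow_top_altitudes(sun_altitude_deg):
    """Altitude of the top of each bow above the horizon.
    The normal bow is centred sun_altitude_deg below the horizon,
    the reflection bow the same distance above it."""
    normal = RAINBOW_RADIUS - sun_altitude_deg
    reflected = RAINBOW_RADIUS + sun_altitude_deg
    return normal, reflected

# With the sun 5 degrees up, the reflection bow tops out 10 degrees
# higher than the normal bow:
print(bow_top_altitudes(5))   # (37.5, 47.5)
```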
What you’re seeing in my photo is the little V formed by those two rainbows coming together just above the horizon. (The V is noticeably narrower in the photograph than in the diagram, because the sun is lower and the antisolar points are closer together—but when I tried to reproduce the real situation in the diagram, it got less clear and harder to label properly.)
I stood and watched the display for a while. In theory, the angle of the V should get progressively narrower as the sun gets lower in the sky and the two antisolar points approach each other. And as the light of the setting sun gets redder, the associated rainbow should lose its bluer shades.
What actually happened was that the sun dropped below a bank of cloud, and the two rainbows winked out of existence.
I’ve now written some more on the topic of converging rainbows—you can find that post here.