# How To Model Rotating Propeller Discs

A shorter version of this post was published on 11 October 2017.

There are some things I hate about “in-flight” models of piston-engine aircraft. One is when the aircraft appear to be flying without a pilot; the other is a stationary propeller.

Modellers have a couple of ways of dealing with this second problem. One is to simply remove the propeller blades, leaving only the filled and smoothed spinner visible—it’s a well-recognized technique which many feel produces the most realistic appearance. But it always makes me think, Where’s the propeller? I find the complete absence of anything in the space where the propeller should be is a little distracting. I’m also not very keen on the photo-etched “prop-blur” option, which aims to produce a blurred sector for each prop blade, reproducing what we see in photos and movies, but not what we see with the naked eye.

So what I want to see is a transparent disc of the correct propeller colour(s), with the colour density at each radius matching the relative amount of prop blade and empty space at that radius. Like this:

Years ago, I posted a short tutorial about this on WW2Aircraft.net; and shortly after I started this blog, I put up a slightly revised version. But it was light on detail—so now this is the extensively revised, expanded and updated version.

I appreciate that some people can’t be bothered doing all the measuring and editing I’ll describe below, which is intended to produce a prop disc that matches the specific measurements of the propeller. If you’re such a person, I encourage you to skip past the measurement and mathematics and read the start of the section about creating a GIMP gradient. Once you’ve seen how I create a gradient of the correct colour, skip ahead to the section on using that gradient to draw the prop disc, and follow on from there. (To skip the first dose of maths, follow this link.)

The first thing I do is to measure the radius of my kit propeller, and then divide it into eight equal sections, from boss to tip, marking off the divisions with a felt-tip pen. Here, I’m marking up two different kinds of propeller for a Blohm & Voss 138 seaplane:

Then I measure the propeller blade width at each of my marked locations. To calculate what proportion of the prop disc is occupied by prop blades at each of my eight points along the radius of the propeller, I first work out the radius at each marked distance, which is just the radius of the propeller multiplied by the number of eighths—the first mark on my 24mm propeller is at 3mm, the next at 6mm, and so on. Then I multiply each of those distances by 2π to derive the local circumference. To find out how much of that circumference is occupied by propeller blades, I take the measured width of a propeller blade at that distance, and multiply by the number of blades. Dividing this length by the total circumference tells me what proportion of the prop disc is occupied by propeller blades.

I use a little Excel spreadsheet to do the calculations for me:

The first column is the number of eighths, measured from the hub of the propeller. The second column converts this to a radial distance. In the third column I’ve entered my measured blade widths, and the fourth column does the maths—multiplying the local blade width by the number of blades, and dividing by 2π times the local radius. You’ll see these numbers turn out to be relatively small, and I find that discs printed with opacities to match are fairly unimpressive, visually—primarily because you can see through even a “zero transparency” layer of ink. So in the fifth column I take the highest number in the fourth column, and express all my numbers as a proportion of that. These are the numbers I’m going to feed to my printer, with 100% representing maximum opacity printing, and 0% full transparency.
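For anyone who'd rather not build a spreadsheet, the same arithmetic can be sketched in a few lines of Python. Note that the blade widths below are purely illustrative, not measurements from my BV 138 props:

```python
import math

def opacity_profile(prop_radius_mm, blade_widths_mm, n_blades):
    """Return normalised print opacities (0-100) at each measured station.

    blade_widths_mm: measured blade width at each of the equally spaced
    stations from boss to tip (here, eighths of the radius).
    """
    n = len(blade_widths_mm)
    proportions = []
    for i, width in enumerate(blade_widths_mm, start=1):
        local_radius = prop_radius_mm * i / n
        circumference = 2 * math.pi * local_radius
        # Fraction of the local circumference occupied by blades
        proportions.append(n_blades * width / circumference)
    # Rescale so the densest station prints at full opacity
    peak = max(proportions)
    return [round(100 * p / peak) for p in proportions]

# Illustrative widths (mm) for a 24 mm, three-blade propeller
print(opacity_profile(24, [3.5, 4.0, 3.6, 3.2, 2.8, 2.4, 2.0, 1.4], 3))
```

Because the blade proportion goes as width divided by radius, the peak almost always falls near the broad base of the blades, and the tip prints nearly transparent.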

I also need to specify a colour for my prop disc. The BV 138 aircraft I’m modelling would have had propellers painted in Luftwaffe Schwarzgrün, which was coded RLM70. I can use the handy digital colour charts created by William Marshall, available here, to discover that RLM70 translates into RGB values of 56/62/50.

So now I’m ready to create my prop disc in a graphics program. I’m going to use GIMP, which has all the tools I need. You can download it here, for Linux, Apple or Windows. (My screenshots and menu sequences below are from version 2.10.)

After opening GIMP, I go through Windows/Dockable Dialogs/Gradients, which brings up a list of GIMP’s built-in colour gradients. I right-click on the list, and choose New Gradient. This gives me a simple colour gradient, from black to white, ready to be edited. Here, I’ve renamed it “BV 138 three-blade prop”, and I’m ready to edit:

First, I tell GIMP my chosen colour, by right-clicking on the gradient and selecting Left Endpoint’s Color…, which brings up a colour dialogue box. All I’m interested in setting are the R, G and B values for my colour, and the A value (for opacity). But first I need to ensure I’m using the right scale—the figures given by Marshall are based on a range of zero to 255, so I need to click on the 0..255 button before I start entering numbers. Then I set R equal to 56, G to 62 and B to 50, leaving A with its default value of 255 (that is, completely opaque). Here’s what that looks like:

Then I go through the same process with Right Endpoint’s Color…, except setting A to zero, for full transparency. So now I have a colour gradient that is RLM70 throughout, fading from complete opacity to complete transparency. That would actually produce a pretty convincing prop disc, just as it stands, but I can make it more physically accurate by adding the opacity profile I derived from my little spreadsheet. (To skip that detail and go straight to instructions on how to draw the prop disc using a gradient, follow this link.)

To give the gradient a more realistic opacity profile, I need to split my colour gradient into eight equal parts, to match the eight measurements I made above. That’s easy to do. I right-click on the gradient again, and click Split Segment at Midpoint—now I have three little black triangles at the base of my gradient, splitting it into two segments, and two white triangles marking the midpoints of these segments. (These triangles can be dragged around to modify the gradient, but they can stay right where they are at present.) Right-clicking again and choosing Split Segments at Midpoints creates four segments; once more, and I have the eight segments I need. Clicking between two black triangles in the bar at the bottom of the gradient selects one segment to be edited—it appears bright while the others go dark. Like this:

Now I can edit the colour and opacity within that segment. The RLM70 colour I set at the start has been inherited by all the segments I’ve created, so all I need to do is tweak the opacity. For the first (leftmost) segment, I leave the left endpoint unedited, but bring up Right Endpoint’s Color…. Now, because my calculated opacities from the spreadsheet run from 0 to 100, I click on the 0..100 button, and then set A to 80, the figure for the first eighth of my 3-blade prop.

(Notice how the RGB numbers seem to have changed from the original 56/62/50—that’s just the effect of changing the scale from 0..255 to 0..100, and the underlying colour has stayed the same.)
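The rescaling is just a proportional conversion, which a couple of lines of Python can confirm (GIMP's own rounding of the displayed values may differ slightly):

```python
def rescale(value, old_max=255, new_max=100):
    """Convert a colour channel value between the 0..255 and 0..100 scales."""
    return round(value * new_max / old_max)

# RLM70 (56/62/50 on the 0..255 scale) expressed on the 0..100 scale
print([rescale(v) for v in (56, 62, 50)])  # → [22, 24, 20]
```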

Then it’s just a matter of selecting each segment in turn, and editing the opacity. For the other segments, I need to make sure the left end of the segment inherits its opacity from the right end of the previous segment—I do that by using the sequence Load Left Color From…/Left Neighbor’s Right Endpoint from the right-click menu. Then I repeat the process above to set the opacity for the right endpoint. And so on. Here’s the final result, with a distinctive dark band a quarter of the way out, created by the broad bases of the paddle blades on this propeller.

There’s one last thing to do, which will come in handy later. I select the leftmost segment, and then right-click to Split Segment at Midpoint. I select the leftmost of these new segments, and set its left and right endpoints to black (RGB 0/0/0) with maximum opacity. Then I select the new segment immediately to its right, and set the opacity of its left endpoint to maximum. That ends up looking like this:

The narrow black segment at left is going to end up as a black dot marking the centre of the propeller disc. But it’s a little too broad at present. I can fix that by dragging the leftmost white triangle all the way to the left, and then dragging the neighbouring black triangle so as to close up the black segment while expanding the segment to its right.

Now I’m ready to use my gradient to create a prop disc.

I start with a new blank image using File/New… from the menu bar. Since I measured my propeller radius at 24mm, I make my blank 60mm across, to give plenty of space. And I note that the default resolution is 300 pixels per inch, which matches my printer settings.
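If you ever need to translate between millimetres and pixels at that resolution, the conversion is straightforward. Here's a quick sketch (300 ppi is an assumption; check it against your own printer settings):

```python
MM_PER_INCH = 25.4

def mm_to_px(mm, ppi=300):
    """Convert a physical size in millimetres to pixels at a given resolution."""
    return round(mm * ppi / MM_PER_INCH)

print(mm_to_px(60))  # 60 mm canvas width → 709 px
print(mm_to_px(24))  # 24 mm prop radius → 283 px
```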

Then I go through Tools/Paint Tools/Gradient to bring up the gradient tool at the left of the work area. I set Shape to “Radial”, and check that the Gradient is “BV 138 3-blade prop”. (If it isn’t, clicking on the little square to the left of Gradient brings up a list of available gradients.) I’ve captured all that in the screen-shot below:

Once all that’s set up, I plonk my cursor in the middle of my new blank image, hold down the left mouse button, and drag it outwards towards the edge of the image. And my propeller disc appears.

To get the size right, I need to glance at the bottom left of the screen, where GIMP gives me a readout of the radius of the pattern I’m creating—I need to tune it to 24mm, give or take a hundredth of a millimetre.

And then I’m done. I go through File/Export As… from the menu bar, and click on Select File Type (By Extension) so that I can save a *.jpg image to some directory where I can find it again.

Having gone through the same process for my 4-bladed propeller, I can print a test sheet to check that my prop discs are the right size:

Then I can print the discs on to a transparent sheet. Here, I’ve used an ink-jet compatible overhead transparency. My compass cutter is set to 24mm, the measured prop radius, and I can use the black dot I created in my colour gradient in GIMP to ensure that my cut disc is properly centred.

I don’t use laser transparencies, because the laser printer tends to bake a curve into the transparent sheet which is very difficult to remove. I’ve had success in the past with printing on to transparent decal paper, and then laying the decals on to discs cut from transparent plastic sheet—you can produce a thicker and more rigid final product in that way, at the expense of several more production steps. And be sure not to use plastic so thick the compass cutters won’t go through it!

Preparing the propeller bosses and spinners is very much kit-dependent. Obviously, the propeller blades need to be removed from the boss, and the boss split so as to accommodate the plastic prop disc. For the BV 138 I trimmed a millimetre or so off the back of the spinners, replacing it with appropriately sized styrene tube, and filled the holes in the side of the spinner, producing a two-part assembly in which I could sandwich the prop disc.

I then used a hole punch to make appropriately sized holes in the middle of my discs, so that I could slide them on to the propeller spindle.

On other occasions I’ve simply had to glue the discs in place within the split boss and spinner.

Here’s the final result, with the spinners painted up:

And here they are on my BV 138 mine-sweeper:

There are a couple of limitations to this approach. It’s visually unsatisfying once you get up to larger scales (but then again, the alternatives are, too). And if your propeller has white markings on it, you’re out of luck unless you have an expensive printer that uses white ink.

I’ve also had a request for a tutorial on how to add yellow safety tips to propellers, for instance of the kind used by the RAF and USAAF during the Second World War. It’s easy enough to do, and all the necessary knowledge is actually present in this post, but I’ll add a supplement at some time in the future that shows how I produced the yellow tipped propellers on my Supermarine Walrus:

# GPS Navigation With Historical Maps

One of my projects to maintain interest during lockdown walks has been to follow the route of the old Dundee-Newtyle railway. My main reference for that trip was a Six-Inch Ordnance Survey map dating from 1903, which I consulted on the National Library of Scotland’s excellent “georeferenced maps” webpage. If you follow this link, you should be able to see the set-up I used. There’s a little blue slider at the bottom of the control panel at top left, which will allow you to fade between the 1903 map and a modern street map from OpenStreetMap.

The good people at the National Library of Scotland have gone to the trouble of georeferencing a large collection of out-of-copyright historical maps of Scotland (and some of the wider UK), and this is a fabulous resource for anyone who wants to explore their local history and geography. And it got me hankering for the ability to load such detailed maps into a portable GPS-enabled device.

Now, my go-to service for georeferenced electronic Ordnance Survey maps is usually Anquet. Mainly, I use them on my PC or laptop, but I also keep a few local topographic maps on my mobile phone, and use them for the occasional bit of GPS navigation. Anquet also used to sell a variety of historical Ordnance Survey maps, but they were fairly pricey, and in any case the service now seems to have been discontinued.

So I began to wonder if I could parasitize the work of the National Library of Scotland, and get a copy of their georeferenced map on to my phone. And it turns out I could. Here’s what I did.

I dusted off and updated my old copy of the venerable OziExplorer software on my PC. OziExplorer has been around for decades, dating back to a time when it was expensive or impossible to get good quality maps into a hand-held navigation device. The unique feature it offers is the ability to import map images (in those days, from scanned paper maps) and “calibrate” them with latitude and longitude information. I bought my own copy of the program years ago. It’s nowadays fairly expensive, and probably not something you’d purchase for a one-off project. However, I’m pretty sure the trial version will let you do everything I’m describing here, if you’re prepared to put up with restarting it every hour.

My next step was to take a screenshot of the Ordnance Survey map from the NLS website. I use Greenshot for these tasks, but there are many options.

I fed this image to OziExplorer, using the “Load And Calibrate Map Image” option from the File menu.

OziExplorer is extremely versatile in how it calibrates map images. If the map gridlines run parallel to the edges of the image (as they do in the NLS maps), it only requires three calibration points, preferably close to three corners of the image. For skewed maps, or maps with curved gridlines, more points are needed.

But first I need to tell OziExplorer what map projection was used, in the Setup tab of the calibration window at top right.

From the drop-down menus, I choose “Ord Srvy Grt Britn” for my Map Datum, and “[BNG] British National Grid” for Map Projection. The next three tabs in this window are the set-up for the three calibration points.

So now it’s back to the NLS map, with a notepad and pencil, to write down coordinates for three points. I just place my cursor over a suitable point, and then read off the coordinates at the bottom right of the screen. When I started doing this, I spent some time casting around for suitable natural features or buildings on the map, before I had the blinding revelation that the text on the map would work just as well for this purpose. So here I am with the cursor on the dot of the first “i” of Menzieshill.

And here are the associated coordinates for that point:

What I want to feed to OziExplorer are the letters and numbers in bold in the top line—these are the Ordnance Survey grid square letters, and the easting and northing values. It’s important not to use the latitude and longitude provided by the NLS, since this will create a position error on the order of a hundred metres if transferred to OziExplorer. The NLS is providing the global standard WGS84 coordinates, which is what your GPS receiver tells you. But once you’ve stipulated to OziExplorer that you’re using the British National Grid, it then assumes (I think) that any latitude and longitude you enter pertain to coordinates on the specific ellipsoid on which the BNG is based, which is not the same shape and orientation as the WGS84 ellipsoid.

The underlying reason for the mismatch in latitude and longitude doesn’t really matter for practical purposes, though—just be sure to use the grid letters and numbers offered by the National Library of Scotland as your input to OziExplorer.

Going back to OziExplorer armed with my three calibration points, I enter the first set of coordinates by opening the “Point 1” tab in the calibration window at top right. This changes the cursor to a set of cross-hairs that I use to select the same points I copied off the NLS map:

Positioning the cross-hairs accurately is aided by the little magnified square that appears on the screen at top left—you can see it to the left of my screenshot above.

Once I have the position right, I click to set my calibration point:

And then I enter the grid reference for Point 1:

Then it’s just a matter of repeating the process for Point 2 and Point 3, and hitting Save. OziExplorer saves a little file with the same name as the map image file, but with the suffix *.map, and the map image is now calibrated.

In a minute I’ll go on to explain how I moved a calibrated map to my phone, but there’s one other thing that’s worth dealing with at this point. Even with a UHD monitor, you may want to capture more than one screenful to get complete coverage of an area of interest. This is where OziExplorer’s free “Map Merge” utility comes in. It will combine any overlapping array of calibrated OziExplorer maps into a single large image.

So for my little project relating to Dundee’s abandoned railway lines, I captured a series of screenshots of the 1903 Ordnance Survey map from NLS, and calibrated them in OziExplorer as described above. This involves jotting down quite a lot of calibration coordinates, but not as many as you might expect—because the screenshot edges must overlap to produce a single large map, and the calibration points lie near the corners of each image, many calibration points can and should be reused, ensuring that the images are perfectly aligned in the final map.

Then I open Map Merge, and point it at the folder on my hard drive containing all the map images and their associated *.map files. When these are imported, Map Merge tiles them together to display the coverage of the final map:

When I’m happy with the coverage, I tell Map Merge to create a map from the selected maps:

I also need to tell it what projection and scale to use:

And then I just sit back and wait for Map Merge to zip all the individual files together into one calibrated map, which is saved to the hard drive as two files—an image file with extension *.ozfx3, and a *.map calibration file for that image. I can load these back into OziExplorer to make sure everything is aligned as it should be.

To get this final map on to my phone, I needed to download and install the OziExplorer Android app. There’s nothing for Apple users, unfortunately, but there is a version for PocketPC handheld devices, which is a bit of a legacy market these days. You can find details on the OziExplorer website. Again, the full version of the Android app is distinctly pricey, but the trial version will do what I describe here, if you don’t mind a prominent watermark on your map display, and having to restart the app every fifteen minutes.

With the app installed on my phone, I plugged it into my PC via a USB cable, and used Windows Explorer to navigate my way to the phone’s OziExplorer\Maps folder. Then I copied across the *.ozfx3 and *.map files created by Map Merge.

And that was that. When I opened the OziExplorer app on my phone, I was able to call up my Victorian OS map and follow the line of my disappeared railway using the phone’s GPS. So here I am on the Perth Road, just about to set off cross-country:

That’s neat, isn’t it?

# Ordnance Survey OpenData In QGIS 3: Part 4

At the end of my previous post on this topic, I left you with this map of the area around the mountain of Blaven (Gaelic Bla Bheinn) on the Isle of Skye:

That concluded a three-part tutorial on using Ordnance Survey OpenData products in QGIS mapping software. (To go to the start of the series, click here.) This post, as promised last time, will deal with adding data from other sources. It’s a bit of a grab-bag of ideas—I’ll mention a few useful data sources, and various ways of importing those data into QGIS, and also describe how to import or create your own map symbols.

The major deficiency with the OS’s OpenData, from the point of view of a hill-walker, is that it lacks any portrayal of mountain paths and tracks. Fortunately, there’s another open data source available which goes some way towards remedying that—the OpenStreetMap project. Their data are free to use under the Open Database License, which requires that they be suitably credited.

To get some path data for the map above, I go to the OpenStreetMap website, and then drag and zoom to reach the area around Blaven. Then I click on the Export button at top left of the web page, which brings up a dialogue box at the left side of the screen featuring a prominent blue button marked “Export”. Above that, you can see a grey box marked up with the latitude and longitude limits of the map view you’re looking at, and the option to “Manually Select A Different Area”:

I click on the “Manual Select” option, adjust the box to select only the area around Blaven that I’m interested in, and click Export. (Selecting too large an area will generate an error message.)

My data are downloadable in the form of a file named map.osm, which I can save under a more memorable name (like blaven.osm) to somewhere in my QGIS data folders. Then I load it as a new layer using Layer/Add Layer/Add Vector Layer…. When I’m asked which vector layer I want to add, I select “lines”, which will contain the path data I’m looking for (as well as some other stuff).

We can take a look at the content of this layer by double-clicking on its name to bring up the Layer Properties dialogue and looking at “Source Fields”:

It looks like “highway” is going to be the field we want to process. Now I move to “Symbology” and set up some Rule-based filters to associate markers with only the “highway” values I’m interested in—which turn out to be ‘track’, ‘path’ and ‘footway’. Like this:

I’ve set up my OpenStreetMap tracks to match my definition for Ordnance Survey tracks, and selected a grey dashed line for paths. (For a detailed tutorial on how to set up rule-based filters, take a look at Part 3 of this series, where I used them to set up different label styles for different kinds of named place.)

Here’s the final result (note that I have now added the necessary credit to the OSM data compilers):

It’s actually a better portrayal of the paths on Blaven than appears on Ordnance Survey maps. That’s sometimes the case—OSM path data is extremely variable from place to place, depending as it does on the work of volunteers either walking the routes or plotting them from public domain aerial photographs.

Now I’m going to add some symbols, but first I want to slightly tweak the position of feature names on the map. Firstly, I want the mountain names to be offset from the peaks they label (to make room for symbols to be inserted later). I double-click on the “NamedPlaces” layer to bring up its Layer Properties dialogue box, select “Labels” and then double-click the “Landform” filter to open Edit Rule. In that dialogue I select “Placement”, and then change the label placement to “Around Point” with an offset of two typographical points. (In fact, I could produce a complicated rule applying different offsets for different sizes of text, in the same way I created different sizes of text in the first place, as described in Part 3—but this simple adjustment will do as an example.)

I’d also like to get rid of that giant “Strathaird” label on the map, which is just a distraction, given that it’s not clear what feature it is intended to label. I can do this by selecting the “NamedPlaces” layer, and activating editing by clicking on the little picture of a pencil among the array of icons at the top of the screen. Then I also click on the icon for “Select Features by area or single click”.

Here they are, circled in this screen capture:

Now I can just draw a box round the offending “Strathaird” (at which point the labelled location appears as a little red cross in a yellow square), and hit the Delete key to remove it. Then I can toggle off the little pencil icon, at which point I’m asked if I want to save the changes I’ve made. (Use this facility sparingly—you don’t want to remove labels that you might need in future.) Finally, a click on the little hand icon (just above and left of the pencil in my screen-grab) restores the usual function of the mouse cursor.

Here’s the final result:

The mountain names are all moved above and to the right of the peaks they label. An unwanted consequence is a shift in the labels naming the two corries—there are multiple ways to fix that, either by introducing new placement rules, or by using the layer editing facility to actually drag the labels around to where they’re wanted. But it’s not a big deal in this case, and I don’t want to get too bogged down in additional detail at this point.

So let’s just proceed to adding some symbols from an external dataset. I’ve downloaded the complete dataset of Ordnance Survey triangulation pillars in GPX format from haroldstreet.org.uk. QGIS will recognize the *.gpx file format, so we can add the data as a new layer using Layer/Add Layer/Add Vector Layer….

Once the layer is added, I want to produce a suitable symbol for the triangulation points it marks. I double-click on the layer name listed in the Layers window so as to open its Layer Properties dialogue, go to “Symbology”, and change the Simple Marker from the default circle to a triangle. I set the size to 15 points, making it roughly the same size as my text, and colour it blue and white to produce a match for the Ordnance Survey symbol for a trig point.

The OS symbol has a dot in the middle, and I can reproduce this by adding another layer to my symbol, using the green plus sign that appears on the left above the settings menu, and adding a blue dot of appropriate size on top of the triangle. Here’s the final result—a triangulation pillar on the summit of Blaven:

The website haroldstreet.org.uk provides a whole load of other useful data, including a large selection of hill summits from various lists. It also provides a dataset of mountain bothies. If you find it useful you should consider giving a donation for its up-keep—the option is offered each time you download a file.

POIgraves also offers a range of interesting data, including youth hostel locations.

Because QGIS understands the *.gpx format used by GPS receivers, we can also import routes, tracks and waypoints from GPS devices. Below, I’ve added some colour-coded summit markers from various hill lists, and superimposed the route recorded on my GPS when I ascended Blaven:

Now it would be nice to mark the car-park at the foot of Blaven, where the walk started and finished. There are various ways of doing this. The easiest, if you have a GPS receiver and are at the location, is to record a waypoint and then import the relevant file into QGIS.

Another possibility is to find the location on Google Earth, and mark it with a “placemark”—a little coloured map pin, generated using the map-pin icon at the top of the Google Earth screen. You can then export this placemark in the form of a *.kml file, by right-clicking on the location in the “Places” list at left of screen and choosing Save Place As….

The file produced uses KML (Keyhole Markup Language) which is another file format that QGIS can import as a vector layer. The terms of service for Google Earth certainly appear to give permission to do exactly this, in section 1b. But the point at which a few coordinates turn into a “derived dataset” (to which Google might object legally) is not clear to me, so I’m not going to use that approach here.

Instead, I’m going to use the old fashioned method of just looking at a map to get a set of coordinates. Checking the “Coordinates” panel at the bottom right of the QGIS display, while moving the cursor over the map location of my car park, tells me it’s located at 156064,821604. These values are given in the coordinate system for this QGIS project—which is, in fact, the standard Ordnance Survey system of eastings and northings, though probably not in an immediately familiar form. The values are given in metres, and use full numerical coordinates, rather than the familiar two-letter designator for each 100-kilometre square.

You can see the relationship between the two systems using a chart that shows the distance of each 100-kilometre square from the origin of the OS grid. So the NG square, which contains Blaven, is 100 kilometres east and 800 kilometres north. To specify a location within NG to the nearest metre, we therefore need a six-digit easting followed by a six-digit northing.

This full set of digits appears at all the corners of Ordnance Survey maps, though they go largely unnoticed. That means I can read coordinates suitable for QGIS directly from a paper map.

Taking a look at the relevant OS sheet for Blaven, I see that the car park is at NG 560216 (to the nearest 100m). So that is 1560,8216 in full numerical style (to the nearest 100m), or 156000,821600 if we add trailing zeroes to give a figure correct to the metre. Comparing this to the figure I pulled directly off QGIS (156064,821604) shows that everything is internally consistent. So I can take coordinates from a paper map and convert them to something QGIS understands. Or I can just read coordinates directly from QGIS itself.
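The letters-to-numbers conversion described above can be automated. Here's a minimal Python sketch of the standard OS two-letter scheme; it does no validation, and assumes a well-formed grid reference with an even number of digits:

```python
def osgb_to_numeric(gridref):
    """Convert an OS grid reference like 'NG 560216' to full numeric
    (easting, northing) in metres."""
    letters, digits = gridref.split()

    def idx(c):
        # The OS letter grid skips 'I', so shift letters after it
        i = ord(c) - ord("A")
        return i - 1 if c > "I" else i

    l1, l2 = idx(letters[0]), idx(letters[1])
    # First letter selects a 500 km square, second a 100 km square within it
    e100k = ((l1 - 2) % 5) * 5 + (l2 % 5)
    n100k = (19 - (l1 // 5) * 5) - (l2 // 5)
    # Split the digits into easting and northing, then pad to metres
    half = len(digits) // 2
    scale = 10 ** (5 - half)
    easting = e100k * 100000 + int(digits[:half]) * scale
    northing = n100k * 100000 + int(digits[half:]) * scale
    return easting, northing

print(osgb_to_numeric("NG 560216"))  # → (156000, 821600)
```

Feeding it the full ten-digit reference for the car park, "NG 5606421604", returns (156064, 821604), matching the QGIS readout.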

But how do I get those figures into QGIS? I’m going to write a simple little text file of Comma Separated Values. Here it is, giving the data for the car park:

ID,Nature,X,Y,Orientation,Name
1,Carpark,156064,821604,,Blaven Car Park

The first line gives the names for each field in the dataset. ID is a unique identifier that I probably don’t really need in a tiny file like this, Nature contains information about the kind of feature I’m describing, X and Y give the coordinates of the feature, Orientation lets me specify a rotation for any label applied, and Name is … well, the name. All fields are separated by commas. The next line is the entry for my car park, using coordinates I’ve read off the QGIS map. Since I’m not interested in specifying an orientation I can leave that field blank—one comma follows immediately after another in that location.

Now I’ll add a couple more items to my list:

ID,Nature,X,Y,Orientation,Name
1,Carpark,156076,821610,,Car Park
2,Feature,153792,822358,30,Choire a’ Caise
3,Building,151379,819984,,Boat House

I save this as a text file, but with the suffix *.csv to indicate its nature. Then I can load it into QGIS using Layer/Add Layer/Add Delimited Text Layer…, selecting Project CRS for the “Geometry CRS” option, and ticking “First record has field names”. You can see the little database that produces at the bottom of the Delimited Text dialogue box:
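The same file can, of course, be generated programmatically, which becomes worthwhile once the feature list grows. Here's a sketch using Python's csv module (the filename features.csv is just an example):

```python
import csv

# Feature list: (ID, Nature, X, Y, Orientation, Name)
rows = [
    (1, "Carpark", 156076, 821610, "", "Car Park"),
    (2, "Feature", 153792, 822358, 30, "Choire a’ Caise"),
    (3, "Building", 151379, 819984, "", "Boat House"),
]

with open("features.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Nature", "X", "Y", "Orientation", "Name"])
    writer.writerows(rows)
```

The csv module also takes care of quoting, which matters if a Name ever contains a comma.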

With the layer loaded, I can now set up filters and rules based on the content of the Nature field. Here, for instance, is the “Symbology” entry for the Layer Properties, showing how I’ve set up “Nature” filters. (I gave a detailed description of using this sort of rule-based labelling system in Part 3 of this series.)

I gave the car park its own symbol, I formatted Choire a’ Caise so that its text matched the other corries, and the Boat House so that it matched other buildings. Here’s the result, with the new features circled:

QGIS provides a good selection of different symbols, but I designed the car park symbol myself, to roughly match British road signs:

If you don’t fancy drawing your own symbols, you can usually find suitable Public Domain images, like this one on Wikimedia Commons.

Symbols need to be in *.svg (Scalable Vector Graphics) format. The Wikimedia symbol I linked to above already is, but if you’re faced with a *.jpg or *.png symbol (like the one I produced), then there are many free and easy-to-use conversion utilities on-line—I used this one. Once you’ve produced your *.svg file, copy it into a sub-folder of the QGIS program directory on your hard drive. For QGIS 3, the sub-folder is /apps/qgis/svg/, which contains a number of themed sub-folders. For lack of a better idea, I dropped my carpark.svg into the /symbols sub-folder. Once there, it became available to me when editing “Symbology”—by changing the “Symbol Layer Type” to SVG Marker, I was able to scroll down and find my new symbol amid the pre-existing selection.

Finally, I confess that an Ordnance Survey map always looks naked to me without a superimposed one-kilometre grid, which is also an aid to judging scale. Charles Roper has produced a Public Domain set of ESRI shape files for the Ordnance Survey grid. The main trick to using these grid files is to select “Transparent Fill” for the fill colour—otherwise you’ll just end up with an opaque tiling that obscures everything else! (I dealt in detail with managing shape files in Part 1 and Part 2.)

So here’s the final map. There are still things that could be improved—for instance, the ability to edit layers in QGIS goes far beyond simply being able to delete unwanted labels, as I did above. But I hope I’ve shown you how easy it is to produce useful and attractive UK maps using only open data sources.

# Ordnance Survey OpenData In QGIS 3: Part 3

I finished my last post about using Ordnance Survey OpenData in QGIS having produced this map of the area around Blaven, on the Isle of Skye:

It’s tinted for height, shaded and marked up with contours to emphasize landforms, and has features such as surface water, coastline, roads and buildings added.

Now it needs some labels. If you’ve been following along with the previous posts, you’ve already downloaded the relevant ESRI vector data for OS grid square NG. The relevant shape file is NG_NamedPlace.shp, which is sitting in the directory OS OpenMap Local (ESRI Shape File) NG/data. Add it to the map by going through the top menu bar in QGIS, Layer/Add Layer/Add Vector Layer….

When the shape file is first loaded, you’ll just see an array of dots marking named locations, but with no names, which is a little disappointing. The Ordnance Survey actually provides a NamedPlace.qml stylesheet which will do some basic formatting for you (I discussed the use of OS stylesheets in my previous post), but I think it’s more useful to build the formatting a step at a time on this occasion, to introduce some of the more complicated things you can do with QGIS.

So before we do anything else, we need to turn on the names. To see how to do this, double-click on the NamedPlace layer in the Layers window to bring up its Layer Properties dialogue window. At left, choose “Source Fields” to see a list of all the attributes associated with NamedPlace entities. It looks like this:

Checking the OS OpenMap – Local Product Guide (1.7MB pdf)*, we can find out what these attribute names mean:

- ID is a unique identifier we don’t need to worry about
- DISTNAME is the distinctive name of the place
- HTMLNAME repeats the distinctive name, but using html control characters to deliver any accented letters—you might need to use this to display Welsh place names properly
- CLASSIFICA is the classification of the named place—the rather limited options are Populated Place, Landform, Woodland Or Forest, Hydrography or Landcover
- FONTHEIGHT gives a recommendation (Small, Medium or Large) for the size of label to be used, according to the size of the named feature
- ORIENTATIO is the orientation of the label, measured in degrees from the horizontal, to align it with a linear feature
- FEATCODE is a feature code number—in the case of named places, it simply provides one of five numerical codes according to the feature’s classification, so adds no new information

So we can use either DISTNAME or HTMLNAME to label our features, and CLASSIFICA, FONTHEIGHT and ORIENTATIO to determine what the label looks like.

So now click on “Labels” at the left of the Layer Properties dialogue window. Choose Single Labels from the drop-down box, and select DISTNAME in the “Label with” dialogue that now appears. The “Text Sample” box shows you the size at which your labels will appear on the map. Provided that is a reasonable size (it should be; the default is 10-point lettering), leave everything else alone at present, and click Apply at lower right to see the result.

My display comes up looking like this, with lots of question marks where special characters should be:

I can fix this by selecting “Source” at the left of the Layer Properties dialogue window, and then changing “Data source encoding” to System.

Now it would be nice to label the five different classifications in five different ways. To do this, go to the “Labels” section of the Layer Properties dialogue, and switch the drop-down menu at top to Rule-based Labeling. It comes up with a single unnamed default rule, which is to label everything with DISTNAME. Double-click on this rule to bring up the Edit Rule dialogue.

I want to label hydrographic features in blue. I type Hydrography into the “Description” box, to give my rule a name, and then set the “Filter” rule like this:

```
"CLASSIFICA"='Hydrography'
```

It’s important to type the double quotes around the attribute name, the single quotes around the value you want to filter on, and get all the spelling and capitalization correct. The Test button to the right is useful, because it lets you check that your filter is actually working—I pick up 1916 hydrographic features with this filter, so all is well. I need to type DISTNAME into the “Label with” field, and then the whole lower half of the dialogue lets me configure my text, with multiple additional options available by clicking the various headings at left (“Text”, “Formatting”, “Buffer”, and so on). It’s informative to just play with these settings to see what happens, but at present I’ve just changed the “Color” option to blue.

I click on OK at bottom right when I’ve finished setting the text formatting, and that returns me to the rule-based labelling, with the Hydrography rule in place. Now I can add another rule by clicking the green plus sign at bottom left, and filling in a new filter and new text formatting, always remembering to fill the “Label with” field, too. (When my labels don’t appear, it’s usually because I’ve become so preoccupied with setting the rules and formatting, I’ve forgotten to specify the attribute to use for the label.)

Working my way through all five classifications, I end up with a set of rules that look like this:

Within those rules, I’ve set woodland labels to green, landcover to brown, water to blue, and used a different typeface to distinguish populated places and landforms. Only the last three of those actually show up in the area of my Blaven map:

You can do exactly the same thing under “Symbology”, by selecting Rule-based from the drop-down menu at the top of the window, and then stipulating different markers for different classifications. But for now, I’m just going to make all the markers disappear, by moving the “Opacity” slider to 0% in “Symbology”.

Now I’d like to use FONTHEIGHT to specify larger labels on larger features. Here’s the Edit Rule window for Landforms:

Although it seems to allow only one size of typeface for each rule, the slightly mysterious icons arrayed down the right-hand side of this dialogue window allow me to define more attribute rules to determine each font setting. I click on the icon next to “Size” (ringed in red above), select Edit…, and I’m presented with the Expression String Builder dialogue window.

It looks a bit daunting, but if I click on “Fields and Values”, I’m presented with a familiar list of the names of all the attributes associated with NamedPlace. Clicking on FONTHEIGHT brings up a window that can be used to display the values of that attribute. A click on the “all unique” button shows me all the different values that FONTHEIGHT contains:

I’d like to enter a set of conditions using the CASE … WHEN … THEN format. If I know the syntax, I can just type the code into the left-hand window. But if I need guidance, I can click on the “Conditional” option, select “CASE”, and see the syntax spelled out for me in the right-hand window.

Here’s what the window looks like after I’ve entered the necessary code to associate ‘Large’, ‘Medium’ and ‘Small’ features with 30, 20 and 10-point labels, respectively. The text below the left window will let you know if you’ve used the wrong syntax.
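For reference, that code will be something like this (my reconstruction, in QGIS expression syntax, from the point sizes given above):

```
CASE
WHEN "FONTHEIGHT" = 'Large' THEN 30
WHEN "FONTHEIGHT" = 'Medium' THEN 20
ELSE 10
END
```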

(Again, be sure to use double quotes on the attribute names and single quotes on the values, and to get all the capitalization correct.)

When I click on OK, QGIS lets me know that I’ve set up rules for font size by placing a little yellow icon containing an ε next to the “Size” text box. Any value in that box will now be overruled by the conditions I’ve programmed. While I’m in the Edit Rule window, I also set up some conditions for “Style”, by entering the following code in the Expression String Builder:

```
CASE
WHEN "FONTHEIGHT" = 'Large' THEN 'Bold'
ELSE 'Regular'
END
```

This should make my largest Landform labels appear in bold.

Finally, I centre the label over the Landform by calling up “Placement” in Edit Rule, clicking on the “Offset from point” radio button, and selecting the central placement from the grid of options presented. Like this:

And here’s the result when I apply that set of rules to my map labels:

It’s a bit shouty and needs some fine tuning, but you get the idea. Then it’s just a matter of going through the other four classifications, and tweaking them to achieve the effect you want—I’ve used some very simple rules, but a little experimentation will show you the wealth of configuration options you have at your fingertips.

Here’s what it looks like with a little more formatting:

For Hydrography I’ve used the “Formatting” options to set “Wrap on character” to a space, so that those long Gaelic names are stacked one word above the other.

Finally, we need to use the ORIENTATIO attribute to align the text labelling linear features. Going back to the Edit Rule window, we can enter a rule for “Rotation” (down near the bottom of the “Placement” window). In the Expression String Builder I just type “ORIENTATIO” to set the text rotation to equal the Ordnance Survey’s orientation attribute. It seems to have no effect on any feature except Landforms, so that’s the only rule worth changing. But when I make the change, I get a disappointing result:

You can see that the names of the two corries below Blaven are orientated almost crosswise to the features they label. The problem is that QGIS changed the expected direction of its rotation angles in the transition from QGIS 2 to QGIS 3, and the OS’s dataset hasn’t changed to match that, at time of writing. But it’s easily fixed, once you understand the problem. Just put a minus sign in front of “ORIENTATIO” in the Expression String Builder for “Rotation”, and order is restored:

That’s it for this time. In my next post on this topic, I’ll add in some data from sources outside the Ordnance Survey.

* Pages 52-57 provide information specific to the NamedPlace shape files.

# Ordnance Survey OpenData In QGIS 3: Part 2

So, by the end of my previous post on this topic, I’d used Ordnance Survey OpenData products in QGIS to produce a nice smooth depiction of the topography of Ordnance Survey grid square NG, tinted to show height and shaded to show relief. It looked like this:

A detail, showing the region around the mountain Blaven, on the Isle of Skye, looked like this:

The next step is to add some contour lines, to give an extra impression of the relief. Last time, I described how I’d downloaded the OS Terrain 50 vector dataset, in the form of ESRI files, and unzipped its data folder inside a folder on my hard drive named OS Terrain 50 (ESRI), so as to keep it separate from the Terrain 50 Grid dataset I used last time. This data folder contains a host of sub-folders with two-letter names, each of them corresponding to one of the OS’s 100km grid squares (shown at left), and each sub-folder containing a set of anything up to a hundred zip files of its own, each containing contour and spot-height data for a 10-kilometre square of terrain. Once these are unzipped, you find the contour data in files with “line” in their names, and the spot-height data in files with names containing the word “point”. There are multiple different files for each 10-kilometre square, but the important ones have the extension .shp—shape files. The rest contain supporting data that QGIS will handle automatically.

To get contours for the whole NG square, I loaded all the “line” shape files from the ng sub-folder. I find the easiest way to do this is to use the Browser window in QGIS to navigate to the appropriate folder, and then to enter “*line.shp” into the filter tool (brought up by clicking on the little picture of a filter funnel in the toolbar of the Browser window). This brings up a list of only the necessary files. I select all of them, right-click, and choose Add Selected Layer(s) to Canvas.

The result looks something like this:

It resembles some sort of mad quilting project, because QGIS has coloured the contours from each 10-kilometre square differently. We need to combine these into a single file, so that they can be consistently coloured. From the main toolbar, select Vector/Data Management Tools/Merge Vector Layers…. In “Input Layers” select the names of all the *line.shp files. For the “Destination CRS” choose the option that includes “OSGB36”, and under “Merged” provide a memorable name for the output file.

Once you have the new merged layer loaded, you can discard all the individual tiles from which it was derived. Now we just need to apply the default style from the OS’s cartographic stylesheets. Double-click on the name of the merged contour layer in the Layers window to bring up its Layer Properties dialogue box. Click on Style/Load Style at lower left of this dialogue. Navigate to the correct style definition file in the Terrain 50 stylesheets folder (I described how to create this last time). There are a lot of nested folders, but the file we want is OS-Terrain-50-stylesheets-master/ESRI Shapefile contour stylesheets/QGIS Stylesheets (QML)/line.qml. Once applied, the final result looks like this:

It’s a bit cluttered at this scale, because the Ordnance Survey style for contours is to plot them at a constant width, irrespective of scale, which makes sense. Zooming in on Blaven, we find things look more reasonable:

The OS contours include three classes—meanHighWater and meanLowWater, which appear along the coast and are tinted blue; and ordinary, which marks land elevations and which defaults to a rather garish russet. I toned this last one down a little by modifying its properties in the Layer Properties dialogue box, under “Symbology”, like this:

I’ve set it to a browner shade, and wound its opacity down to 50%. In general, I’ve tweaked most of the OS’s default styles to suit my own purposes and tastes. (By the way, you can label the contours with their heights by going to “Labels” in the Layers Property dialogue, replacing No Labels with Single Labels and then setting “Label With” to PROP_VALUE, which is the variable in which the dataset stores the height of each contour. I prefer to reduce the clutter and keep labels turned off—for the maps I’m making I just want the contours to indicate the shape of the landform.)

Now to add some more geographical features. The meanLowWater contour looks a little odd at present, sitting in the sea, but in the OpenMap – Local dataset (which I described downloading last time), the OS provides a shape file called *Foreshore.shp, which fills the space between high and low water. In this case the necessary file is NG_Foreshore.shp, which is stored in a folder called OS OpenMap Local (ESRI Shape File) NG/data, along with all the other shape files I’m going to be using in this post. You can add it as a layer using the main toolbar Layer/Add Layer/Add Vector Layer…, and then navigating to the right folder.

While we’re there, we can add surface water features, in the form of NG_SurfaceWater_Line.shp and NG_SurfaceWater_Area.shp. NG_Woodland.shp adds areas of forest. All of these will load with random colours assigned by QGIS, and need to have styles applied from the OS OpenMap – Local stylesheets, which I described downloading last time. The styles we want are installed to your hard drive in a folder called OS-OpenMap-Local-stylesheets-master/ESRI Shapefile stylesheets/QGIS stylesheets (QML)/Full colour style. They have descriptive names that match the shape files—Foreshore.qml and so on.

Here’s what Blaven looks like with those geographical features added, and after a little tweaking to the default OS styles:

It’s becoming important to have all these layers in the right order—you can see their stacking order in the Layers window, and you can drag them up and down the sequence to achieve the right effect. I have Foreshore sitting immediately on top of TidalWater, and just below the contour layer. SurfaceWater features are on top of the contour layer—I think rivers look better without contours crossing them. Woodland has had special treatment: I’ve turned its opacity down to 30%, so the topographic tints can show through slightly, and slipped it below the Hillshade layer, so the forest areas look like they’re following the slope of the land.

And here’s how it all looks after I’ve slightly tweaked the OS styles:

There are other features in the OpenMap – Local shape dataset, but none of them are relevant to the image I’m working on here. You can add roundabouts (two styles are provided, one for the dark outline and one for a coloured fill, so the layer needs to be duplicated). There are railway tracks, railway stations, and road and rail tunnels, all of which are necessary when producing maps of other areas. (The depicted railway tracks are broken where they pass under bridges, so you should put the railway layer on top of your road layers.) A number of other more specialized features are also available—take a look at the Ordnance Survey’s OpenMap – Local product guide (1.7MB pdf) for more detail.

Next time, I’m going to write about how to add and format place names.

# Ordnance Survey OpenData In QGIS 3: Part 1

Recently, I’ve been preparing my UK walking maps using the Ordnance Survey’s free OpenData products, which I’ve rendered into maps using a free, open-source Geographical Information System, QGIS. I thought I’d write a little bit about that, now that I’ve got my maps looking more or less as I’d like them. For this first part, I’m going to write about using one of the OS’s free topographic datasets. Later posts will deal with adding features like contours, rivers, lakes and roads, and with adding custom data.

First, of course, you need to install QGIS. I’ve been using version 3, which at time of writing is the latest. With that done, you then need some OS data. I downloaded both versions of their Terrain 50 dataset, which provides elevation data at 50m horizontal resolution. The “Grid” format provides raster elevation data—lots of cells in a square grid, each containing a number representing the elevation in metres above sea level for that location. The “Contours” format provides a vector dataset, full of “shape files”—properly interpreted by QGIS, they’ll draw contour lines and spot heights for you. I’ll use only the Grid data for this post, though I’ll be using the ESRI version of Contours later.

These downloads need to be unzipped, at which point they each produce a folder called data, full of subfolders lettered according to the OS’s 100-km grid system. To keep the Grid and Contour data separate, I unzipped them to separate folders named OS Terrain 50 (Grid) and OS Terrain 50 (ESRI). Annoyingly, each lettered grid folder contains a multitude of zipped files—anything up to a hundred, each representing a 10-km square tile of topographic data. QGIS can actually peer inside these zipped files and load tile data directly, but I find it easier to deal with multiple tiles if I do a mass unzipping for each lettered grid square that interests me, and then clear away the zip files.
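That mass unzipping is easy to script, if you'd rather not do it by hand. Here's a minimal Python sketch (the function name is mine, and the folder path in the usage comment is just an example) using only the standard library:

```python
import zipfile
from pathlib import Path

def unzip_tiles(folder):
    """Extract every zipped 10-km tile in a grid-square folder,
    then clear away the zip files."""
    folder = Path(folder)
    for zp in sorted(folder.glob("*.zip")):
        with zipfile.ZipFile(zp) as zf:
            zf.extractall(folder)
        zp.unlink()  # delete the zip once its contents are out

# e.g. unzip_tiles("OS Terrain 50 (Grid)/data/ng")
```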

Features like rivers, lakes, woodlands and buildings are added using more datasets. I’ve been using the Open Map – Local ESRI shape files. These are downloaded in packages that cover a single grid square. For my examples here I’m going to use data from the NG square. Downloaded and unzipped, that produces a folder called OS Open Map Local (ESRI Shape File) NG. (Round about this point, I started stuffing all my data folders inside one big folder called OS Data, to make them easier to find.)

The OS also provides a variety of stylesheets—instructions that tell your GIS software how to display your data. These used to be readily accessible on the OS’s own website, but now (as reported in the Comments section below) they’re rather obscurely located on the GitHub developers’ site. Here are direct links to the two stylesheets required: Open Map – Local and Terrain50. Clicking on these links will download a couple of zipped files, and unzipping those will generate another couple of folders on your hard drive called OS-Terrain-50-stylesheets-master and OS-OpenMap-Local-stylesheets-master. Into the OS Data folder they go.

With all that in place, we can open QGIS 3 and start producing a map. QGIS opens with a load of menu items and toolbars across the top, a “Browser” window at upper left, which allows you to navigate to and load data files, and a “Layers” window at lower left, which keeps track of the various layers you’re putting together to create your map. The rest of the screen will hold the map itself. Navigating to the ng subfolder of my Terrain 50 Grid data (having first unzipped all its files), I find an eyewatering array of files with different names and extensions. I want to load all the files with extension .asc, and I can call them up using the filter tool (shaped like a little funnel) in the Browser window toolbar, and typing *.asc as my filter criterion. Then I select all the .asc files, right-click and select Add Selected Layer(s) to Canvas.

This is what appears in the map window:

All the 10-km tiles containing elevation data have been loaded, and colour-coded by height, from black at the lowest to white at the highest point of each tile. The white squares are areas of no data, containing only sea. If you know your Scottish islands, it’s possible to make out the distinctive shape of the Isle of Skye, but it’s all a bit of a mess because each tile has been tinted individually.

The first thing to do is to merge all the tiles into one big square, so they can be tinted consistently. We can do that from the top menu, Raster/Miscellaneous/Merge…. For “Input Layers”, select all the tiles. Under “Merged”, save the merged data as a file with a descriptive name you can find later. I also nominate a specific “no data” value for the output, setting it to -100, so QGIS knows which areas of the merged file to make transparent.
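If you prefer the command line, the QGIS Merge tool is a wrapper around GDAL, so the same job can be done directly with the gdal_merge utility (assuming the GDAL tools bundled with QGIS are on your path; the file names here are just placeholders):

```
gdal_merge.py -o ng_merged.tif -a_nodata -100 ng/*.asc
```

The -a_nodata flag assigns the same "no data" value of -100 to the output.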

Here’s what the merged version looks like:

Now it could do with a bit of colour. Double-click on the name of the layer in the “Layers” window, to open the Layer Properties dialogue box. (Choose the “Symbology” view if QGIS doesn’t take you there by default.) Switch the “Render type” to Singleband pseudocolor, and QGIS will offer you a colour ramp—a series of colours that can be used to code the height data on the map. QGIS has a wide variety of built-in ramps, but they’re hidden away. Drop down the menu for “Color ramp”, choose Create New Color Ramp… and drop down the menu in the dialogue box that follows. Catalog: cpt-city contains all sorts of useful stuff.

I made my own ramp, based on the height tints of the classic OS tourist maps of the 1970s, and saved it (using the Style button at lower left) so I could reuse it. Here’s the set-up for the ramp:

And here’s what the map looks like with my colour ramp applied:

No sea colour, but that’s deliberate. Zero height in the topographic data doesn’t correspond precisely to the coastline, and the OS provides a set of tiles in Open Map – Local that will lay a precise coastline on to my topo map. From the QGIS menu, Layer/Add Layer…/Add Vector Layer… brings up a dialogue box that lets you browse to the directory in which the ESRI shape files are stored—in my case, the folder OS Open Map Local (ESRI Shape File) NG/data is the one I’m after. Adding that as a layer on top of the topographic data produces this:

Lovely coastline, shame about the colour, which has been assigned at random by QGIS. Time to use an OS stylesheet. Double-click on the new TidalWater layer to open its Layer Properties dialogue, go to “Symbology”, and use the button at lower left to open Style/Load Style. Navigate to the Open Map – Local stylesheets—in my case exhaustingly named OS-OpenMap-Local-stylesheets-master/ESRI Shapefile stylesheets/QGIS stylesheets (QML)/Full colour style. Load TidalWater.qml. Presto! Now the sea is sea-coloured, and all those tile margins have disappeared:

The awkward white boxes can be eliminated by setting the background colour of the project to the same shade as the OS’s tidal water. Project/Properties… takes you to the dialogue box. Drop down the menu for “Background colour”—Pick Color will give you a little paint-dropper pointer that you can use to copy the tidal water colour off the map by clicking on it. The result looks like this:

Starting to look pretty good, isn’t it?*

Adding a little shading will produce a more three-dimensional effect. Choose Raster/Analysis/Hillshade…. In the dialogue box, choose your topographic layer as the “Input layer”, and assign a filename to save the Hillshade result. The default settings work fine for everything else. Here’s what the hillshade layer looks like when it’s loaded on top of the topographic layer, but below the TidalWater tiles:

Very pretty, but it would be nice to see the topo colours, too. Double-click on the hillshade layer to bring up its Layer Properties dialogue, choose “Transparency”, and fiddle with the “Global opacity” slider. I find an opacity of 20% looks nice:

There’s a problem, though. If you want to use these maps at large scale, you come up against the limited horizontal resolution of 50m imposed by the Ordnance Survey on its free products. Here’s what the mountain Blaven, on the Isle of Skye, looks like if we zoom in on the map above:

Click or tap to enlarge the above view, and it’s glaringly obvious that the 50m squares are showing up intrusively in the map tint and shading.

Since I’m not particularly concerned with the accuracy of the height data, only in using them to generate a nice tint and shade to demonstrate the general topography, I’m happy to run an interpolation to increase the horizontal resolution and produce a smoother effect. I do that from Raster/Projections/Warp (Reproject)…. Here’s the dialogue box completed and ready to start rendering:

I’m not interested in changing the Coordinate Reference System (CRS) of my map—it stays the same (OSGB36). All I’m doing is changing the horizontal resolution to 10m. A bit of experimentation has shown me that using the Cubic spline resampling method produces the most pleasant-looking results. Here’s what I end up with, after producing the 10m-resolution layer and repeating the production of the hillshade layer, as described above:
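Like the Merge tool, Warp (Reproject) is a front end for a GDAL utility, gdalwarp, so the same interpolation can be run from the command line (again assuming GDAL is on your path, with placeholder file names):

```
gdalwarp -tr 10 10 -r cubicspline ng_merged.tif ng_merged_10m.tif
```

Here -tr sets the target resolution in map units (10m in both axes) and -r selects the Cubic spline resampling method.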

That’s better! In my next post on this topic, I’ll add some contours and other features.

* The sea colour cover isn’t perfect—depending on your graphics card, you may see occasional seams around tile margins when the view is zoomed out. I have some solutions to this, which I may write about another time.

# PeakFinder

Back in 1995, a little packet of laminated cardboard diagrams fell through my letterbox. Dave Hewitt, editor of The Angry Corrie, wanted me to write a review of these items. Which I did—it appeared in TAC25, Nov ’95-Jan ’96.

They were called ViewFinder Panoramas, they’d been created by Jonathan de Ferranti, and in my opinion they were things of exquisite, minimalist beauty. Each laminated strip showed the view from the summit of some named hill, colour-coded and annotated to allow the easy identification of other hills. They had been produced from Ordnance Survey Digital Elevation Models, rendered so as to depict the curvature of the Earth and the effects of atmospheric refraction, and then carefully annotated with the names and distances of individual peaks. On occasion, magnified sections were inset to provide additional detail. And features near the horizon were subtly stretched vertically so as to bring out the detail without giving the impression of distortion.

Here’s a comparison of the view eastwards from the summit of Ben Hope, compared to the corresponding ViewFinder diagram:

From the colour-coded distances, to the sector indicator at bottom left, to the bearings along the top of frame, it was a beautifully designed product. ViewFinders retailed for £1, or £1.50 for the larger, more complex products. You could order a bespoke view from the summit of your favourite hill for £16. Nowadays the entire catalogue is freely available on-line, covering worldwide views.

In 1999, Jonathan de Ferranti and I wrote an article together for the Scottish Mountaineering Club Journal, investigating whether it was possible to see any part of the Cuillin ridge in Skye from the Cairngorm plateau.* In this, we used Jonathan’s ViewFinder technology to revisit a question first raised by Guy Barlow in the same journal in 1956. Barlow had constructed a wood and paper model, and concluded that Sgurr a’Ghreadaidh, on the Cuillin ridge, would be visible from the summit of Cairn Toul in the Cairngorms, because of a fortuitous sightline down the length of Glen Shiel. Jonathan produced a rendering of the same view and discovered that, although Barlow had the alignments exactly correct, he had neglected to allow for the position of Bla Bheinn, which sits east of the main Cuillin ridge, and which neatly blocked the view of Sgurr a’Ghreadaidh. The glimpse of Bla Bheinn, 90 miles away, was minute, occupying more or less a single pixel of the ViewFinder panorama, and in reality would need a telescope, strong refraction and perfect seeing conditions to appreciate. (In the scan below, Bla Bheinn takes its Anglicized spelling, Blaven.)
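The bare curvature-and-refraction geometry of that 90-mile sightline can be checked in a few lines of Python. The summit heights (in metres) and the standard 7/6 effective-Earth-radius rule for atmospheric refraction are my assumptions here, not figures from the original articles:

```python
from math import sqrt

# Effective Earth radius under standard refraction (7/6 rule), metres
R_EFFECTIVE = 6371e3 * 7 / 6

def horizon_distance(height_m):
    """Distance to the sea-level horizon from a given height, metres."""
    return sqrt(2 * R_EFFECTIVE * height_m)

cairn_toul = 1291  # m, approximate summit height
bla_bheinn = 928   # m, approximate summit height

# Ignoring intervening terrain, two summits can see each other if the
# sum of their horizon distances exceeds their separation (~145 km here).
max_range_km = (horizon_distance(cairn_toul)
                + horizon_distance(bla_bheinn)) / 1000
```

This comes out at roughly 250km, comfortably more than the 90-mile separation, so the view is geometrically possible; whether anything actually blocks it is exactly the question the ViewFinder rendering settles.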

All of this is a roundabout introduction to Fabio Soldati’s excellent PeakFinder app, which is the natural successor to ViewFinder—indeed Jonathan de Ferranti is credited with providing some of the Digital Elevation Model data used by PeakFinder. With the huge leaps in processing speed and storage capacity that have occurred during the last two decades, it’s now possible to perform the necessary rendering tasks on the fly, producing annotated panoramas of pretty much anywhere in the world, on demand. Apps are available for Android and Apple phones at a cost of a few pounds, and there’s also a rather lovely on-line version. Here’s the view from Ben Hope again, compared with PeakFinder‘s on-line rendered view, and the version displayed by my rather primitive Android phone.

Apart from using your phone’s current location, you can select a different location by choosing from PeakFinder‘s extensive names database, tapping on Google Maps (you need a data connection for that), or by entering latitude and longitude coordinates.

Tapping on any of the named peaks in the displayed panorama brings up some information about that feature, and you can also flit across to look at the view from its summit. The names of all the visible peaks in the panorama can be displayed, searched, and sorted by elevation, distance or heading. Tap on one of these names, and you’re returned to the display with a handy marker pointing out that summit’s location. So with a couple of quick taps I was able to establish that the most distant feature visible from the summit of Dundee Law is Windlestraw Law, east of Peebles and a remarkable 88km away. Here’s the PeakFinder display showing me where to look for it:
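That 88 km sight-line passes a quick geometric sanity check. The usual rule is that two summits can see each other when their separation is less than the sum of their distances to the sea-level horizon, d = √(2Rh). Here's a minimal Python sketch, using rough summit heights of 174 m for Dundee Law and 659 m for Windlestraw Law (my round figures, not PeakFinder's), and ignoring refraction and intervening terrain:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def horizon_distance_km(height_m):
    """Distance to the geometric sea-level horizon from a given
    height, ignoring refraction: d = sqrt(2*R*h)."""
    return math.sqrt(2 * EARTH_RADIUS_M * height_m) / 1000

def max_sight_line_km(h1_m, h2_m):
    """Two summits can (geometrically) see each other if their
    separation is less than the sum of their horizon distances."""
    return horizon_distance_km(h1_m) + horizon_distance_km(h2_m)

# Dundee Law (~174 m) to Windlestraw Law (~659 m), 88 km apart
limit = max_sight_line_km(174, 659)
print(f"Geometric limit: {limit:.0f} km")  # comfortably more than 88 km
```

Atmospheric refraction typically stretches the real limit a little beyond this geometric figure, so the view has distance to spare.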

The display units are configurable, and you can pop up depictions of the tracks of sun and moon across the sky, for a given date. Here’s the sun rising over the Orkneys from Ben Hope:

Is it accurate? It certainly seems to be, when compared to summit photographs from my collection. For me, Jonathan de Ferranti’s ViewFinder panoramas are the gold standard, so I challenged PeakFinder to show me the extremely marginal view of Bla Bheinn from Cairn Toul that Jonathan identified twenty years ago. Here it is again:

Here’s the view from the web-based version of PeakFinder, with Bla Bheinn again occupying pretty much a single pixel:

And here’s the view on my phone, zoomed in to its maximum extent:

No Bla Bheinn. I suspect the difference comes from either the screen resolution or the processing limitations of my dumb phone.

Setting aside ridiculously exacting tests like the one above, this is an extremely impressive bit of kit, even more so if you have a phone that will allow you to use the app’s photographic annotation mode. If you’ve ever sat by a cairn and indulged in an endless, fruitless debate about the identity of some little notch on the horizon (and which of us has not?), then you’ll certainly want to spring a few quid on this lovely little app.

* “On Seeing The Cuillin From The Cairngorms—Again” SMCJ 1999, Vol. 37 No. 190 pp 42-8.
“On The Possibility Of Seeing The Cuillin From The Cairngorms” SMCJ 1956, Vol. 26 No. 147 pp 16-24.

# Which Place Gets The Most Daylight?

So this puzzle isn’t about sunshine (the amount of time the sun shines from a clear sky), or even about the intensity of sunlight (which decreases with increasing latitude), but about cumulative daylight—the length of time between sunrise and sunset in a given place, added up over the course of a year.*

It’s a surprisingly complicated little problem. I addressed it using an antique solar calculator I wrote many years ago, using Peter Duffett-Smith’s excellent books as my primary references:

It runs in Visual Basic 6, which means I had to open up my VirtualBox virtual XP machine to get it running again. The original program calculates the position of the sun by date and time for any given set of coordinates, and also works out the times of sunrise and sunset.

You’ll see it gives sunrise and sunset times to one-second precision, which is entirely spurious—the refractive state of the atmosphere is so variable that there’s no real point in quoting these times to anything beyond the nearest minute. I just couldn’t bring myself to hide the extra column of figures.

Anyway, it was a fairly quick job to write a little routine that cycled this calculator through a full year of daylight, adding up the total and spitting out the results so that I could begin exploring the problem.
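For anyone who wants to play along without resurrecting Visual Basic 6, here's a minimal Python sketch of the same idea: a point-like sun, no refraction, and a simple cosine approximation for the solar declination (which bakes in circular-orbit symmetry, so it won't show the subtler effects discussed below):

```python
import math

def declination_deg(day_of_year):
    """Crude solar declination for a given day of the year, using a
    simple cosine approximation pinned to the December solstice."""
    return -23.44 * math.cos(math.radians(360 / 365 * (day_of_year + 10)))

def day_length_hours(lat_deg, day_of_year):
    """Day length from the sunrise hour-angle equation
    cos H0 = -tan(lat) tan(dec), clamped for midnight sun and polar night."""
    lat = math.radians(lat_deg)
    dec = math.radians(declination_deg(day_of_year))
    x = -math.tan(lat) * math.tan(dec)
    if x <= -1:   # sun never sets
        return 24.0
    if x >= 1:    # sun never rises
        return 0.0
    return 2 * math.degrees(math.acos(x)) / 15  # 15 degrees per hour

def annual_daylight_hours(lat_deg):
    """Total hours of daylight over a 365-day year."""
    return sum(day_length_hours(lat_deg, n) for n in range(1, 366))

print(round(annual_daylight_hours(0)))     # equator: 4380
print(round(annual_daylight_hours(56.5)))  # Dundee's latitude
```

With this symmetrical model, every latitude comes out at close to half a day's daylight per day, which is exactly the baseline the next few paragraphs start to chip away at.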

At first glance, it seems like there shouldn’t be any particular place that wins out. As the Earth moves around the sun, its north pole is alternately tilted towards the sun and away from it, at an angle of about 23.5º. If we look at a diagram of these two solstice points (which occur in June and December every year), there’s an obvious symmetry between the illuminated and unilluminated parts of the globe:

Between the solstices, the latitude at which the sun is overhead varies continuously from 23.5ºN (in June) to 23.5ºS (in December), and then back again:

So for every long summer day, there should be an equal and opposite long winter night. The short and long days should average out, during the course of a year, to half a day’s daylight per day—equivalent to 4380 hours in a 365-day calendar year.

And that would be true if the Earth’s orbit around the sun was precisely circular—but it isn’t. As I described in my first post about the word perihelion, the Earth is at its closest to the sun in January, and its farthest in July. Since it moves along its orbit more quickly when it’s closer to the sun, it passes through the December solstice faster than through the June solstice. This has the effect of shortening the southern summer and the northern winter. The effect isn’t immediately obvious in my diagram of solar latitudes, above, but it’s there—the sun spends just 179 days in the southern sky, but 186 days north of the equator.
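You can check that 179/186 split with a few lines of Kepler's equation. This Python sketch assumes an orbital eccentricity of 0.0167 and takes the sun's ecliptic longitude at perihelion as roughly 283° (early January); both are round figures rather than precise ephemeris values:

```python
import math

ECC = 0.0167            # Earth's orbital eccentricity
PERIHELION_LON = 283.0  # sun's ecliptic longitude at perihelion (deg)
YEAR_DAYS = 365.25

def mean_anomaly(sun_lon_deg):
    """Mean anomaly (radians) for a given solar ecliptic longitude,
    via the true -> eccentric -> mean anomaly chain."""
    nu = math.radians(sun_lon_deg - PERIHELION_LON)  # true anomaly
    ecc_anom = 2 * math.atan(math.sqrt((1 - ECC) / (1 + ECC)) * math.tan(nu / 2))
    return ecc_anom - ECC * math.sin(ecc_anom)       # Kepler's equation

# Sun south of the equator: September equinox (lon 180) to March equinox (lon 360)
southern = (mean_anomaly(360) - mean_anomaly(180)) / (2 * math.pi) * YEAR_DAYS
print(f"Sun in southern sky: {southern:.1f} days")              # about 179
print(f"Sun in northern sky: {YEAR_DAYS - southern:.1f} days")  # about 186
```

The faster passage through perihelion shaves about a week off the sun's stay in the southern sky, just as the day counts above suggest.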

This means that the total number of hours of daylight is biased towards the northern hemisphere. In the diagram below, I plot the hypothetical flat distribution of daylight hours associated with a circular orbit in purple, and compare it to the effect of Earth’s real elliptical orbit in green:

So far, I’ve been treating the sun as if it were a point source of light, rising and setting in an instant of time. But the real sun has a visible disc, about half a degree across in the sky. This means that when the centre of the sun drops below the horizon, it’s only halfway through setting. Sunrise starts when the upper edge of the sun first appears; sunset finishes when the upper edge of the sun disappears. So the extent of the solar disc slightly prolongs daylight hours, and slightly shortens the night.

At the equator the sun rises and sets vertically, and the upper half of the solar disc takes about a minute to appear or disappear. An extra minute of daylight in the morning, an extra minute of daylight in the evening—that’s more than twelve hours extra daylight during the course of a year, just because the sun is a disc and not a point.
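The equatorial arithmetic is easy to verify. At the equator the rising sun climbs vertically at 360° per 24 hours, or 0.25° per minute, and the solar semi-diameter is roughly 16 arcminutes (an assumed round figure):

```python
SEMI_DIAMETER_DEG = 16 / 60               # solar semi-diameter, ~16 arcminutes
CLIMB_RATE_DEG_PER_MIN = 360 / (24 * 60)  # vertical sunrise at the equator

# Time for the upper half of the disc to clear the horizon,
# at each end of the day
minutes_per_event = SEMI_DIAMETER_DEG / CLIMB_RATE_DEG_PER_MIN
extra_hours_per_year = 2 * minutes_per_event * 365 / 60

print(f"{minutes_per_event:.2f} min per sunrise or sunset")  # ~1.07
print(f"{extra_hours_per_year:.1f} extra hours per year")    # ~13
```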

And if we move north or south of the equator, the sun rises and sets at an angle relative to the horizon, and so takes longer to appear and disappear—adding more hours to the total daylight experienced at higher latitudes. There’s a limit to this effect, however. When we get to the polar circles, we run into the paired phenomena of the midnight sun and the polar night. There are days in summer when the sun never sets, and days in winter when the sun never rises.  The extent of the solar disc can make no difference to the length of daylight if the sun is permanently above the horizon, and it can add only a few hours to the total as the sun skims below the horizon at the start and end of polar night.  And as we move towards the poles, the midnight sun and polar night start to dominate the calendar, with only short periods around the equinoxes that have a normal day/night cycle. So although the sunrises and sunsets within the polar circles are notably prolonged, there are fewer of them.

So the prolongation of daylight caused by the rising and setting of the solar disc increases steadily with latitude until it peaks at the polar circles (around 66.5ºN and 66.5ºS), after which it declines again. Here’s a diagram of daylight hours predicted for a point-like sun (my previous green curve) with the effect of the solar disc added in red:

And there’s another effect to factor in at this point—atmospheric refraction. As I described in my post discussing the shape of the low sun, light from the rising and setting sun follows a slightly curved path through the atmosphere, lifting the image of the sun by a little over half a degree above its real position. This means that when we see the sun on the horizon, its real position is actually below the horizon. This effect hastens the sunrise and delays the sunset, and it does so in a way that is identical to simply making the solar disc wider—instead of just an extra couple of minutes’ daylight at the equator, more than six minutes are added when refraction is factored in, with proportional increases at other latitudes. So here’s a graph showing the original green curve of a point-like sun, the red curve showing the effect of the solar disc, and a blue curve added to show the effect of refraction, too:
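Adding the conventional 34 arcminutes of horizontal refraction to the 16-arcminute semi-diameter gives an effective depression of 50 arcminutes, and repeating the equatorial arithmetic shows where the "more than six minutes" comes from (both figures are standard round values, not measurements):

```python
SEMI_DIAMETER_ARCMIN = 16      # solar semi-diameter
REFRACTION_ARCMIN = 34         # typical refraction at the horizon
CLIMB_RATE_DEG_PER_MIN = 0.25  # sun's vertical speed at an equatorial horizon

# The sun's upper edge appears when its centre is still 50' below the horizon
depression_deg = (SEMI_DIAMETER_ARCMIN + REFRACTION_ARCMIN) / 60
minutes_per_day = 2 * depression_deg / CLIMB_RATE_DEG_PER_MIN

print(f"{minutes_per_day:.1f} extra minutes per day")         # ~6.7
print(f"{minutes_per_day * 365 / 60:.0f} extra hours per year")
```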

The longest cumulative daylight is at the Arctic Circle, with latitude 66.7ºN experiencing 4649 hours of daylight in the year 2017. The shortest period is at the south pole, with just 4388 hours. That’s almost eleven days of a difference!

So is the answer to my original question just “the Arctic Circle”? Well, no. I have one more influence on the duration of daylight to deploy, and this time it’s a local one—altitude. The higher you go, the lower the horizon gets, making the sun rise earlier and set later. This only works if you have a clear view of a sea-level (or approximately sea-level) horizon—from an aircraft or the top of a mountain. Being on a high plateau doesn’t work, because your horizon is determined by the local terrain, rather than the distant curvature of the Earth. So although the south pole has an altitude of 2700m, it’s sitting in the middle of the vast polar plateau, and I think there will be a minimal effect from altitude on the duration of its daylight.
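The size of the altitude effect can be estimated from the geometric dip of the horizon, which for an observer at height h above a sea-level horizon is roughly √(2h/R) radians. A quick sketch, ignoring refraction and using Mount Forel's 3383 m as the example height:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def horizon_dip_deg(height_m):
    """Geometric dip of a sea-level horizon below the astronomical
    horizontal, for an observer at the given height (no refraction)."""
    return math.degrees(math.sqrt(2 * height_m / EARTH_RADIUS_M))

# From Mount Forel's summit the horizon sits almost two degrees low,
# so the sun rises earlier and sets later than at sea level
print(f"{horizon_dip_deg(3383):.2f} degrees")
```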

So we need to look for high mountains close to the Arctic Circle. A glance at the map suggests four mountainous regions that need to be investigated—the Cherskiy Range, in eastern Siberia; the Scandinavian Mountains; Greenland; and the region in Alaska where the Arctic Circle threads between the Brooks Range to the north and the Alaska Range to the south.

The highest point in the Cherskiy Range is Gora Pobeda (“Victory Peak”). At 65º11′N and 3003m, its summit gets 5002 hours of daylight—almost an hour a day of extra sunlight, on average.

But Pobeda is nudged out of pole position in the Cherskiy Range by an unnamed 2547m summit on the Chemalginskiy ridge, which lies almost exactly on the Arctic Circle, giving it a calculated 5006 hours of daylight.

There’s nothing over 2000m near the Arctic Circle in the Scandinavian Mountains, so we can skip past them to 3383m Mount Forel, in Greenland, at 66º56′N, which beats the Siberian mountains with 5052 hours of daylight.

Finally, the Arctic Circle passes north of Canada’s Mackenzie Mountains, and between the Brooks and Alaska Ranges. Mount Isto, the highest point in the Brooks Range, is 2736m high at 69º12′N, and comes in just behind Pobeda, with 4993 hours of daylight. Mount Igikpak, lower but nearer the Circle (2523m, 67º25′N), pushes past all the Siberian summits to hit 5010 hours. And in the Alaska Range is Denali, the highest mountain in North America. It is 6190m high, and sits at 63º04′N. It could have been a serious contender if it had been just a little farther north—but as it is it merely equals Igikpak, and falls short of Forel’s total.

So the answer to my question appears to be that the summit of Mount Forel, Greenland, sees the most daylight of any place on the planet. I confess I didn’t see that one coming when I started thinking about this.

* “A year” is a slightly slippery concept in this setting. The sun doesn’t return to exactly the same position in the sky at the end of each calendar year, and leap years obviously contain an extra day’s daylight compared to ordinary years. Ideally I should have added up my hours of daylight over a few millennia—but I’m really just interested in the proportions, and they’re not strongly influenced by the choice of calendar year. So for simplicity I ran my program to generate data for 2017 only.

What I wrote at the start of this piece, about spurious precision in rising and setting times, goes doubly for the calculations concerning altitude. These results are exquisitely sensitive to the effects of variable refraction, and my post about the shape of the low sun gives a lot more detail about how the polar regions are home to some surprising mirages that prolong sunrises and sunsets. I can’t hope to account for local miraging, or even to correctly reproduce the temperature gradient in the atmosphere from day to day. I think the best that can really be said is that some of the contenders I list will experience more daylight than anywhere else on the planet, most years, and that Mount Forel will be in with a good shot of taking the record for any given year.

# Running Windows XP Under VirtualBox

As I write, it’s only another month until Microsoft’s free upgrade offer on Windows 10 expires (on 29 July 2016). I am so looking forward to that day, in the hope that it’ll mean an end to Microsoft’s intrusive little pop-up messages in the lower right corner of my monitor, and their increasingly devious attempts to trick me into accidentally upgrading. The experience is a little like having a weeping software engineer repeatedly grip your lapels, shake you gently, and sob: “But it’s cool. It’s free. Why don’t you want it? For pity’s sake, why? Why?”

I don’t want it because it offers nothing I need or am even curious to see, while promising inconvenience and hassle during the upgrade process. When Microsoft are willing to pay me for the time I’ll spend on their “free” upgrade, then maybe we can talk.

But at present, I’m much cheered to be running Windows XP again, which marked the last occasion I ever felt a new Microsoft operating system actually constituted an “upgrade”. The fact that you can run a virtual XP machine in a window under Windows 7, 8 or 10 is not as well known as it should be, and it has let me continue using some old software from that era that has resisted running in “compatibility mode” under later operating systems.

I downloaded WindowsXPMode_en-us.exe from my Microsoft link above. Since I wanted to run XP under Windows 8.1, I didn’t run the executable—I tucked it away in my Downloads folder. Embedded inside two layers of compression is the virtual hard drive I needed.

I used 7-Zip to find and extract the necessary file. 7-Zip is a handy, open-source archive manipulation program, which adds a couple of options to the Windows menu you see when you right-click on a file. So I navigated to where I’d stored WindowsXPMode_en-us.exe, right-clicked on the file, selected “7-Zip” and then “Open archive”. 7-Zip then gave me a view of the contents of the archive file, which includes a folder named sources. In that folder, there’s a file called xpm. This is also compressed, so I right-clicked it and opened it in 7-Zip. In the archive listing for xpm, there’s a file called VirtualXPVHD. That’s the virtual hard drive containing the XP installation. I right-clicked it, and then selected “Copy to …”, telling 7-Zip to extract and decompress VirtualXPVHD. I put it in a new folder called XP.

So that Windows could recognize VirtualXPVHD as a virtual hard drive, I now edited the file-name by adding a .vhd extension to it.

For an environment in which to run my new VHD, I downloaded VirtualBox, and installed it with the default options. Then I ran the program, told it I wanted to set up a new virtual XP machine using an existing virtual hard disk, and pointed it to the location of my VirtualXPVHD.vhd file.

And that was that. Now I could launch an XP machine in a window on my Windows 8.1 desktop.

VirtualBox does a lot of handy things. There are a couple worth knowing about if you’re setting up your own virtual machine:
1) You can scale the window in which the XP machine runs, using the menu option View/Scale Factor. This is useful if you find yourself peering at a tiny window in the middle of your high resolution monitor.
2) At first, you’re going to need to let the XP window “capture” your mouse pointer. When you click inside the window, a dialogue box appears offering to capture the pointer. When you accept, you find your mouse is trapped inside the XP window. You can release it again by tapping a “hot key”, which defaults to the right control key on your keyboard. It’s worth checking that you actually have a right control key before you fire up XP for the first time, otherwise your mouse will be permanently trapped. If you don’t have one, you can change to a new hot key in the VirtualBox menu, File/Preferences/Input/Virtual Machine/Host Key Combination. Select the “Shortcut” box and press whichever key you want to designate as the new hot key.

I found the virtual machine was a bit crashy when XP was going through its initial set-up on my laptop (but not on my desktop). A couple of times I had to press the hot key to free up my mouse, and send a “power off” signal to the virtual machine using the VirtualBox interface. On each occasion it rebooted happily and let me proceed further with the set-up.

The first priority once you have the XP desktop on display is to add some additional function. Free up your mouse pointer so that you can use the VirtualBox menu at the top of the window, click on “Devices”, and then “Insert Guest Additions CD Image …” This makes the XP virtual machine think you’ve inserted a software CD, and it opens a set-up dialogue to let you install the new software. Accept this (and choose “Install Anyway” each time XP objects to the certification of the software that’s being installed).

Now you don’t need to go through the business of capturing and releasing the mouse pointer! The XP window is integrated into your desktop and behaves like any other windowed software. (On occasion, you may find that the mouse pointer behaves a little oddly in some programs—flickering, or scanning too quickly. On those occasions, capturing the mouse is useful. You can turn mouse capture on and off using Input/Mouse Integration in the VirtualBox menu at the top of the window.) I also found that installing the Guest Additions CD eliminated the crashes I’d been having during setup.

One hitch in all this was that XP soon announced that it needed activation, and demanded a Product Key. Now, back in the day, computers used to come with a copy of their operating system on a disc, and I still had the old XP installation disc for a long-defunct computer tucked away in a cupboard. I offered the Product Key from that disc, and XP was happy. In fact, I now have three virtual XP machines all happily registered with the same Product Key. Which is not unreasonable, given that Microsoft haven’t actually been supporting this operating system for a couple of years now.

A couple of the programs I installed under XP are so old they’ll only run if the parent disc is in a drive for them to access. That’s not exactly convenient, so I copied the necessary discs to .iso image files using DVD to ISO, and put the .iso files into a directory on my virtual XP machine.

Then I installed the Virtual CD-ROM Control Panel from Microsoft, which lets XP mount .iso files as if they were physical discs. (Note added, 2020: Sadly, this is no longer available to download from Microsoft. You can still find it on various external sites, but I can’t vouch for the cleanliness of the downloaded files, so I’m not posting any links here.) It’s slightly finicky to install, but the readme file talks you through the process. You need to move the .sys driver file to your Windows XP drivers directory, and then run VCdControlTool.exe to install the driver. Once the driver is installed, running VCdControlTool again lets you create an unused drive letter and then mount an .iso file to that drive. I set my .iso files up as “persistent mounts”, so they’re always available.

Finally, although I’m no great fan of cloud storage (sure, I’ll let a multinational corporation store my personal documents and photographs at some random location on the internet; what could possibly go wrong with that idea?) I do like to share a few program and configuration files between my various devices. The small storage capacity offered by the free version of DropBox has been more than enough for this—and anyone who cares to hack into my DropBox account isn’t going to find much that’s comprehensible to them, let alone useful. But DropBox discontinued support for XP recently, so I needed a way to transfer files from my XP virtual machine to a directory on its parent machine, where DropBox could then take over.

VirtualBox lets you set up folders that are shared between the XP virtual machine and the parent machine (go through Devices/Shared Folders) but some of my XP software is so ancient it refused to recognize the network drive on which these shared folders resided. So I tried using the free version of Tonido. Tonido synchronizes files through their server without ever storing them—I installed the server software on the parent machine, and the client software on the virtual XP machine. Presto! My shared XP files were transferred to the parent machine, where they could be DropBoxed or backed up as required. This had the advantage of being easy to set up, but the rather bizarre and deeply unsatisfying consequence of a computer transferring files to itself over the internet. It also has to be said that the Tonido file synchronization was often slow, and on occasion delayed for hours.

Anyway, once I’d got that straightened out, I used the Devices menu in VirtualBox to make the DropBox folder on the parent machine a shared folder, ticking the boxes to make it “auto-mount” and “permanent”. It then appeared as a network drive the next time I booted the virtual XP machine. Using Link Shell Extension in XP, I could pick folders inside the DropBox network drive, and drop copies of them as symbolic links on the XP c:\ drive, where my ancient programs could see them. When the programs modify those files, the modification is then reflected in the DropBox folder on the parent machine, and that cascades off to my other devices, and their virtual XP machines. Joy!

So now, every time Microsoft offers me a free upgrade, I can sit back and enjoy my free downgrade instead.

# Pennycook et al.: On the reception and detection of pseudo-profound bullshit

This from the November 2015 issue of Judgment And Decision Making. Here are links to the original paper (pdf) and its supplementary tables (pdf).

The authors seek to find a preliminary answer to the questions, “Are people able to detect blatant bullshit? Who is most likely to fall prey to bullshit and why?” Their study is therefore of the characteristics of the bullshittee, rather than the bullshitter, or of bullshit itself.

They suggest that bullshit occupies a sort of halfway house between lie and truth. Bullshit is “something that is designed to impress but […] constructed absent direct concern for the truth.” (That is, the author of bullshit doesn’t care whether it’s true or not, in contrast to the liar, who is deliberately subverting the truth.) And “bullshit, in contrast to mere nonsense, is something that implies but does not contain adequate meaning or truth.”

I’m indebted to them for providing links to two sources of pseudo-profound bullshit, used in their study.

One, Wisdom of Chopra, uses random words taken from the Twitter feed of Deepak Chopra to construct novel sentences. Here’s an example of its output:

The unexplainable arises and subsides in the doorway to energy

The other, Seb Pearce‘s New-Age Bullshit Generator, generates an entire, beautiful page of random bullshit. Here’s one headline:

You and I are entities of the quantum matrix. By evolving, we believe

So that’s all pseudo-profound bullshit.
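If you're curious how little machinery such a generator needs, here's a toy sketch in Python. The template and vocabulary are my own inventions, not the actual word lists used by either site, but the principle (random portentous words slotted into a grammatical frame) is the same:

```python
import random

# Hypothetical vocabulary, loosely in the style of the generators above
NOUNS = ["consciousness", "the cosmos", "potentiality", "stillness", "energy"]
VERBS = ["transcends", "illuminates", "unfolds into", "arises from", "embraces"]
ADJECTIVES = ["infinite", "quantum", "timeless", "unbounded", "hidden"]

def pseudo_profound():
    """Assemble a grammatical but meaning-free sentence at random."""
    return (f"{random.choice(NOUNS).capitalize()} "
            f"{random.choice(VERBS)} "
            f"{random.choice(ADJECTIVES)} "
            f"{random.choice(NOUNS)}.")

print(pseudo_profound())
```

A few dozen words and one template are enough to produce thousands of distinct "insights", which rather underlines the paper's point.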

According to Pennycook et al., reasons you might mistake that for actual profundity include:

• A deficiency of analytic thinking
• Ontological confusion (confusing different categories of existence, such as the mental and the physical)
• Epistemically suspect beliefs (such as paranormal or supernatural ideas)

Four studies are reported in the paper. They all look for correlations between the particular cognitive biases listed above and a “Bullshit Receptivity” scale—a measure of an individual’s tendency to rate randomly generated bullshit as “profound” on a five-point scale ranging from “not at all profound” to “very profound”.
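As described, the scale boils down to averaging each participant's profundity ratings across the bullshit items. A minimal sketch, with made-up ratings:

```python
def bullshit_receptivity(ratings):
    """Mean profundity rating across bullshit items, on the paper's
    1 ('not at all profound') to 5 ('very profound') scale."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be on the 1-5 scale")
    return sum(ratings) / len(ratings)

# Hypothetical ratings for ten randomly generated items
print(bullshit_receptivity([3, 2, 4, 3, 3, 2, 5, 3, 4, 2]))  # 3.1
```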

I haven’t even counted the number of separate correlation measures to which the authors assign significance values; I’ll leave that as an exercise for the Interested Reader.

But what we seem to see is that:

• Participants tended to score random nonsense as moderately profound.
• Participants scored selected real Deepak Chopra Tweets as a little more profound than random nonsense, but less profound than some motivational quotations.
• Some participants scored even mundane statements like “Most people enjoy some sort of music” as having some level of profundity. These participants tended to give high profundity scores across the board.
• To quote the authors: “Those more receptive to bullshit are less reflective, lower in cognitive ability (ie. verbal and fluid intelligence, numeracy), are more prone to ontological confusions and conspiratorial ideation, are more likely to hold religious and paranormal beliefs, and are more likely to endorse complementary and alternative medicine.”
• Waterloo University undergraduates (or at least, those who sign up for this sort of study) are catastrophically gullible, assigning various levels of profundity to some quite astonishing twaddle (Table 1). Snake-oil salesmen are presumably converging on the campus even as I type.

So it’s good to have all that sorted out.
