Where did the term “full-frame” originate?

Why are digital cameras with sensors the same size as 35mm SLR film, i.e. 36×24mm, called full-frame cameras? This is a somewhat strange concept because, unlike film, where 35mm dominated the SLR genre, digital cameras did not originate with 35mm-equivalent sized sensors. In fact for many years, until the release of the first full-frame digital SLRs, camera sensors were of the sub-35mm or “crop-sensor” type. It was not until spring 2002 that the first full-frame digital SLR appeared, the 6MP Contax N Digital. It was followed shortly after by the 11.1MP Canon EOS-1Ds. It wouldn’t be until 2007 that Nikon offered its first full-frame camera, the D3. In all likelihood, the appearance of a sensor equivalent in size to 35mm film came about in part because the industry wished to maintain the existing standard, allowing the use of existing 35mm lenses and preserving the established 35mm hierarchy.

One of the first occurrences of the term “full-frame” as it relates to digital may have been in the advertising literature for Canon’s EOS-1Ds.

“A full-frame CMOS sensor – manufactured by Canon – with an imaging area of 24 x 36mm, the same dimensions used by full-frame 35mm SLRs. It has 11.1 million effective pixels with a maximum resolution of 4,064 x 2,704 pixels.”

Canon EOS-1Ds User Manual, 2002

By the mid-2000s digital cameras using “crop-sensors” like APS-C had become standard, but the rise of 35mm-sized DSLRs may have triggered a need to realign the marketplace towards the legacy of 35mm film. As most early digital cameras used sensors smaller than 36×24mm, the term “full-frame” was likely adopted to differentiate these sensors from their smaller counterparts. But the term has other connotations.

  • It is used in the context of fish-eye lenses to denote a lens whose image covers the full 35mm film frame, as opposed to circular fish-eye lenses, whose image manifests as a circle within the frame.
  • It is used to denote the use of the entire film frame. For example when APS film appeared in 1996, the cameras were able to shoot a number of differing formats: C, H, and P. H is considered the “full-frame” format with a 16:9 aspect ratio, while P is the panoramic format (3:1), and C the classic 35mm aspect ratio (3:2).

In any case, the term “full-frame” is intrinsically linked to the format of 35mm film cameras. The question is whether the term is even relevant anymore.

The whole full-frame “equivalence” thing

There is a lot of talk on the internet about the “equivalency” of crop-sensors relative to full-frame sensors – often in an attempt to somehow rationalize things in the context of the ubiquitous 35mm film frame size (36×24mm). Usually equivalence involves the use of the cringe-worthy “crop-factor”, which is just a numeric value comparing the dimensions of one sensor against those of another. For example a camera with an APS-C sensor, e.g. Fuji-X, has a sensor size of 23.5×15.6mm, which when compared with a full-frame (FF) sensor gives a crop-factor of approximately 1.5. The crop-factor is calculated by dividing the diagonal of the FF sensor by that of the crop-sensor, in this case 43.27/28.21 ≈ 1.53.
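
To make the arithmetic concrete, here is a minimal Python sketch of the crop-factor calculation described above; the sensor dimensions are the commonly published values used in this example.

```python
# A minimal sketch: computing a crop factor from sensor dimensions.
import math

def diagonal(width_mm: float, height_mm: float) -> float:
    """Return the diagonal of a sensor in millimetres."""
    return math.hypot(width_mm, height_mm)

def crop_factor(sensor_w: float, sensor_h: float,
                ref_w: float = 36.0, ref_h: float = 24.0) -> float:
    """Crop factor = reference (full-frame) diagonal / sensor diagonal."""
    return diagonal(ref_w, ref_h) / diagonal(sensor_w, sensor_h)

print(round(crop_factor(23.5, 15.6), 2))  # APS-C (Fuji-X)    -> 1.53
print(round(crop_factor(17.3, 13.0), 2))  # Micro Four Thirds -> 2.0
```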

Fig.1: Relative sensor sizes and associated crop-factors.

Easy right? But this really only matters if you want to know what the full-frame equivalent of a crop-sensor lens is. For example a 35mm lens on an APS-C camera has an angle of view of roughly 37° (horizontal). If you want to find the full-frame lens with the same view, you multiply the focal length by the crop-factor for APS-C sensors, so 35×1.5≈52.5mm. So an APS-C 35mm lens has a full-frame equivalency of 52.5mm, which can be rounded to 50mm, the closest full-frame equivalent lens. Another reason equivalency might be important is if you want to take similar looking photographs with two different cameras, i.e. two cameras with differing sensor sizes.

But these are the only real contexts where it is important – regardless of the sensor size, if you are not interested in comparing the sensor to that of a full-frame camera, equivalencies don’t matter. But what does equivalence mean? Well it has a number of contexts. Firstly there is the most common one – focal-length equivalence. This is used to relate how a lens attached to a crop-sensor camera behaves in terms of a full-frame sensor. It can be derived using the following equation:

Equivalent-FL = focal-length × crop-factor

The crop-factor in any case is more of a differential factor which can be used to compare lenses on different sized sensors. Figure 2 illustrates two different systems with different sensor sizes, and two lenses that have an identical angle of view. To achieve the same angle of view on differently sized sensors, a different focal length is needed. A 25mm lens on a MFT sensor, with a crop-factor of 2.0, gives the equivalent angle of view to a 50mm lens on a full-frame sensor.
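
As a small illustration (not tied to any particular camera), the angle of view across one sensor dimension can be computed as 2·atan(d/2f). The sketch below reproduces the ~37° horizontal figure mentioned earlier, and shows that a 25mm lens on MFT and a 50mm lens on full-frame give essentially the same diagonal view.

```python
# A minimal sketch of focal-length equivalence via angle of view (AOV).
import math

def aov_deg(sensor_dim_mm: float, focal_mm: float) -> float:
    """Angle of view (degrees) across one sensor dimension: 2*atan(d / 2f)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

# Horizontal AOV of a 35mm lens on APS-C (23.5mm wide) -> ~37°, as above
print(round(aov_deg(23.5, 35), 1))   # 37.1

# Diagonal AOV: 25mm on MFT (21.6mm diagonal) vs 50mm on full frame (43.3mm)
print(round(aov_deg(21.6, 25), 1))   # ~46.7 – effectively the same view
print(round(aov_deg(43.3, 50), 1))   # ~46.8
```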

Fig.2: Focal-length equivalence (AOV) between a Micro-Four-Thirds, and a full-frame sensor.

Focal length equivalency really just describes how a lens will behave on different sized sensors with respect to angle-of-view (AOV). For example the image below illustrates the view obtained when using a 24mm lens on three different sensors. A 24mm lens used on an APS-C sensor produces an image equivalent to a full-frame 35mm lens, and the same lens used on a MFT sensor produces an image equivalent to a full-frame 50mm lens.

Fig.3: The view of a 24mm lens on three different sensors.

When comparing a crop-sensor camera directly against a FF camera, in the context of reproducing a particular photograph, two other equivalencies come into play. The first is aperture equivalence. An aperture is just the size of the hole in the lens diaphragm that allows light to pass through. For example an aperture of f/1.4 on a 50mm lens means a maximum aperture diameter of 50mm/1.4 = 35.7mm. A 25mm f/1.8 MFT lens will not be equivalent to a 50mm f/1.8 FF lens because the hole in the FF lens would be larger. To make the lenses equivalent from the perspective of aperture requires multiplying the f-number by the crop factor:

Equivalent-Aperture = f-number × crop-factor

Figure 4 illustrates this – a 25mm lens used at f/1.4 on a MFT camera would be equivalent to using a 50mm with an aperture of f/2.8 on a full-frame camera.

Fig.4: Aperture equivalence between a 25mm MFT lens, and a 50mm full-frame lens.

The second is ISO equivalence, with a slightly more complicated equation:

Equivalent-ISO = ISO × crop-factor²

Therefore a 35mm APS-C lens at f/5.6 and 800 ISO would be equivalent to a 50mm full frame lens at f/8 and 1800 ISO. Below is a sample set of equivalencies:

           Focal Length / F-stop = Aperture ∅ (ISO)
       MFT (×2.0): 25mm / f/2.8 = 8.9mm (200)
     APS-C (×1.5): 35mm / f/3.9 = 8.9mm (355)
Full-frame (×1.0): 50mm / f/5.6 = 8.9mm (800)
      6×6 (×0.55): 90mm / f/10.0 = 9.0mm (2600)
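
For what it’s worth, the table above can be reproduced in a few lines of Python. The focal lengths are taken from the table, and the f-numbers and ISOs are derived from the full-frame reference (50mm, f/5.6, ISO 800), so the output matches the table apart from small rounding differences.

```python
# A sketch reproducing the equivalence table: each row keeps the same aperture
# diameter as the full-frame reference (50mm / f/5.6 = 8.9mm), and scales ISO
# by the crop factor squared.
FF_DIAMETER = 50 / 5.6   # reference aperture diameter in mm
FF_ISO = 800

rows = [("MFT", 2.0, 25), ("APS-C", 1.5, 35), ("Full-frame", 1.0, 50), ("6x6", 0.55, 90)]
for name, crop, focal in rows:
    fnum = focal / FF_DIAMETER        # same entrance-pupil diameter
    iso = FF_ISO / crop**2            # same total light gathered
    print(f"{name:>10} (x{crop}): {focal}mm / f/{fnum:.1f} = {FF_DIAMETER:.1f}mm ({iso:.0f})")
```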

Confused? Yes, and so are many people. None of this is really that important, except to understand that how a lens behaves will differ depending on the size of the sensor in the camera it is used on. Sometimes, focal-length equivalence isn’t even possible. There are full-frame lenses that just don’t have a cropped equivalent. For example a Sigma 14mm f/1.8 would need an APS-C equivalent of 9mm f/1.2, or a MFT equivalent of 7mm f/0.9. The bottom line is that if you only photograph using a camera with an APS-C sensor, then how a 50mm lens behaves on that camera should be all that matters.

Shooting photos from an aircraft

Taking photos from a train is not that hard. Taking photos from the window of a plane is trickier for a number of reasons. Firstly, you can’t really wander the aisles of a plane looking for the best vantage point, and secondly, there are very few good photos to be had at 35,000 feet.

There are of course some technical issues, the biggest one being aircraft windows. Plane windows are technically made up of three panes: (i) an outer pane flush with the outside fuselage, (ii) an inner pane (which has a little hole in it), and (iii) a thinner, non-structural plastic pane called a scratch pane. The scratch pane is the part passengers can touch, and inadvertently scratch. And the windows are not made of glass, but rather a type of plexiglass known as “stretched acrylic” (the flight deck windshields are made with glass-faced acrylic). These windows are not ideal to look through, because they are never perfectly clear.

An aerial view of Laval on approach to Pierre-Elliott Trudeau Int. Airport in Montreal. Taken from a De Havilland Canada Dash 8 aircraft which was banking. iPhone 5 (4.12mm; f/2.4; 1/531).

Second is the aircraft itself. In smaller planes windows are often located closer to the centre-line of the plane, so views of the ground are better. The larger the plane, the higher up the windows are on the aircraft’s curved fuselage (largely due to the cargo space below).

Example vertical angles of view for a 35mm lens on an APS-C camera on different aircraft.

Here are some tips for shooting photographs from a plane:

⦿ Plan ahead − This means studying the route – what scenic sites will you be passing over? For example the Icelandair flight from Toronto to Keflavik (ICE604) typically flies over southern Greenland, around 5am (local time) – which from May to August is around sunrise. Sunrise and sunset are great times to try and take a shot – shots of cloud and sky by themselves aren’t exactly inspiring. It might also be good to check weather conditions along the route.

⦿ Choose a seat − With the route and time of day in mind, decide on where you want your window seat. Sitting on the wrong side of the plane at the wrong time of day might result in you shooting into the glare of the sun. Use an airline seat map to help guide your choice, noting that the type of plane will make a difference in where you want to sit. A seat in the fore or aft of the plane is preferable, avoiding over-the-wing seats. However in a turboprop aircraft the wings are less of an issue because they sit above the windows. In some planes, seats aft of the wing can be problematic because jet exhaust can blur parts of the image.

⦿ Select an appropriate camera/lens − Smaller is often better when it comes to cameras. A compact camera, or even a smartphone, is a good choice because both are accessible and unobtrusive, and frankly using a DSLR is likely gross overkill. A wide lens is typically best – the longer the lens the more susceptible it is to vibration and turbulence, even with good IBIS. You can experiment with UV and ND filters, but avoid polarizing filters. The plexiglass window panel in combination with a polarizing filter produces an effect called birefringence, which creates a rainbow pattern in the image.

⦿ Make sure the window is clean − Always make sure to clean the scratch pane before take-off – the fewer smudges you have to shoot through the better. The scratch panes may never be perfect, because they tend to take a lot of abuse.

A little bit of art, flying into Montreal. iPhone 5 (4.12mm; f/2.4; 1/343). The lens on the iPhone 5 is roughly equivalent to a 30mm on a full-frame.

⦿ Hold the camera close − You can reduce the effect of scratches etc. by placing the lens as close to the window as possible (but not directly on the window, unless you use a rubber lens hood).

⦿ Choose settings − Faster is better when it comes to shutter speeds, e.g. 1/600 to 1/2000. The further away the object being photographed, the more lenient you can be with shutter speed. A mid-range aperture like f/8 is also quite appropriate – sharpness is all relative when shooting through three panels of plexiglass. If using a smartphone camera, it will handle all the settings.

⦿ Use manual focus − Sometimes the window can be a bit hazy, and this can interfere with auto-focus. Switching to manual focus usually works quite well, making sure to focus at infinity.

The best “aerial” photographs come at landing time, or when a plane is close enough to the ground to provide an aerial view. I’ve taken some great photographs of Montreal from a smaller plane, and even on the approach to Keflavik (Iceland). What to photograph? That really depends on whether you want to take some artisanal/experimental shots, or aerial shots of landscapes. Some people like to take pictures of the wings, and that makes a lot of sense given that it helps put some shots into context. The image shown below wouldn’t be that interesting if it weren’t for the plane’s curved wingtip. Clouds are also interesting, particularly if seen from above, as are human incursions on the landscape e.g. farms, and natural wonders like rivers.

Approaching Keflavik, Iceland. iPhone 6s (4.12mm; f/2.2; 1/950).

Aerial shots can be plagued by aerial haze, which imparts a gray layer on the image. During the day, the shorter wavelengths of light (blues and violets) are scattered by the gases in the atmosphere. Light is also reflected by particulates in the atmosphere, which results in hazy skies. This can be reduced by using a UV filter, or in post-processing. Reflections can also be an issue, especially if it is dark outside – lights within the cabin will reflect back towards the camera from the three sheets of plexiglass. And no flight is smooth – engine vibration and air turbulence will make it difficult to achieve long exposures.
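
For the post-processing route, one very simple option is a per-channel contrast stretch, sketched below with numpy and Pillow. The file names are hypothetical, and this is only one of many ways to cut haze – not the exact processing applied to the example images here.

```python
# A minimal sketch of reducing haze with a percentile-based contrast stretch.
import numpy as np
from PIL import Image

def contrast_stretch(img: np.ndarray, lo_pct=2, hi_pct=98) -> np.ndarray:
    """Stretch each channel so the lo/hi percentiles map to 0 and 255."""
    out = np.empty_like(img, dtype=np.float32)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], (lo_pct, hi_pct))
        out[..., c] = (img[..., c] - lo) / max(hi - lo, 1e-6) * 255
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.asarray(Image.open("hazy_aerial.jpg").convert("RGB"))  # hypothetical file
Image.fromarray(contrast_stretch(img)).save("dehazed_aerial.jpg")
```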

The original aerial shot of Iceland with a nice layer of gray haze. (iPhone 6s; 4.12mm; f/2.2; 1/999).
The image modified with some contrast stretching, and enhancement of the blue colour channel.

At the end of the day, there is no guarantee of good photographs when shooting through a window. There is every chance that some images will be soft, especially around the edges, or that condensation or ice may build up on the window, thwarting any notion of taking “good” pictures. It might be that the plane is shrouded in clouds for the whole journey. The best advice is to take lots of photos, and experiment.

Above the clouds (iPhone 6s; 4.12mm; f/2.2; 1/746).

If you are interested in taking photographs from a small plane, such as the tours offered by Sea to Sky Air in Squamish BC, then you will need a few more tips, and I have provided some resources below. For anyone wanting to visit Iceland, check out my post Visiting Iceland? – Beware of the glaciers.

Further reading:

Shooting photos from a train

You might be in a situation one day where you need to take photographs through a window. For example, travelling on one of the world’s great rail journeys, which often provide scenery that is impossible to see otherwise. Rail trips that are specifically touted as “scenic journeys” will often have an observation car with large windows, panoramic windows that take in a view of the sky as well, or an open-air carriage, like that found on the Northern Explorer from Auckland to Wellington (in New Zealand). The problem is that not all trains offer a glass-free interface between you and the scenery.

The biggest problem with photographing through windows is that the glass (or perspex) is usually not that clean, often plagued by dust and dirt, things about which you can do little or nothing (well, you can clean the inside, but not the outside). Unless it is a filmy layer of dirt or a streak, there is likely very little to worry about. Since you will be focusing on distant objects when shooting from a moving train, nearby dirt specks will barely show in a photograph. This becomes more problematic in direct sunlight, which can emphasize dirt, streaky panes, and dust smears. Obviously, the best thing to do is to try and find a piece of glass that is reasonably clear to shoot through. There may also be a chance that some windows can be opened.

A shot of a river along the Bergen Line west of Myrdal (Olympus E-M5 Mark II, 12mm, f/2.8, 1/1000)

Two other issues when shooting through glass are reflections and glare, but they can be alleviated by placing the lens hood right up against the glass (but don’t press the lens itself against the glass, because that can transfer vibrations from the train to your camera). Select a reasonably sized aperture, which will help blur details on the glass (e.g. dirt), but not one so large that it compromises depth-of-field in the scene. Note that the best results will be achieved using manual focus. When shooting through glass (or even wire mesh), the auto-focus can be misled by the surface and may not focus beyond it. Autofocus can also take a while to lock on, which can lead to you missing the shot. Trains generally move fast, so if you hesitate you lose the shot.

Glare due to the sun peeking out from behind the clouds directly at the window. Bergen Line (Olympus E-M5 Mark II, 12mm, f/3.5, 1/1000)
Whoops, pushed the shutter at the wrong moment – a nice photo if it weren’t for the pole. Bergen Line (Olympus E-M5 Mark II, 12mm, f/6.3, 1/400)

Here are some general tips:

  • Use continuous shooting mode – taking many photos at once increases the odds that a few turn out really well.
  • Use a polarizing filter to cut some of the reflections.
  • Use fast shutter speeds (and shutter-priority) to compensate for the train’s movement and vibration. Start with 1/500 for distant subjects, and 1/1000 to 1/2000 for nearby ones. Direction matters as well, so moving towards or away from a subject (rather than crossing laterally in front of it) usually allows for a lower shutter speed.
  • Use a wide-angle lens, since the short focal length helps to minimize movement.
  • An overcast sky is better than sunshine or rain. Too much sun will produce shadows and reflections, and rain will end up creating an artistic distortion effect when you shoot through the window.
  • Do research before the train trip to find notable sights, especially places where the train curves tightly enough to bring itself into view.
  • There will always be some form of blur in the image. The closer an object is to the horizon, the less blur there is, because distant objects appear to move more slowly relative to the train (i.e. motion parallax).
Running rapids alongside the Flåm Railway (Olympus E-M5 Mark II, 12mm, f/2.8, 1/400)

Train speed also plays a factor, both in the shutter speed settings and in timing shots. The Norwegian Flåm Railway, which travels between Myrdal and Flåm, is an extremely scenic journey (if you can ignore the hordes of tourists). The train takes about 60 minutes, travelling at a leisurely 40kph along the 20.2 kilometres. Conversely, on the Bergen Line, all 493km from Oslo to Bergen, the train averages around 70kph.

View of a train on a slight curve, Flåm Railway (Olympus E-M5 Mark II, 12mm, f/2.8, 1/800)
Windows that open on the Flåmsbana, Flåm Railway (iPhone 6s, 4.15mm, f/2.2, 1/192)

It is possible to successfully take pictures through glass on a moving vehicle. The caveat is of course that there have to be good scenes to take photos of. For most of the VIA Rail trip from Toronto to Montreal, there isn’t a lot to see because the railway line sits level with the surrounding area, and passes through somewhat monotonous scenery (the train travels at 100kph). Some of the best photographs can actually be taken approaching Montreal, when the train slows down. Conversely, train trips like those in Norway offer a richness of photographic scenery. Just don’t forget those who ride the train as well.

Don’t forget the human story side to a train journey (Olympus E-M5 Mark II, 12mm, f/2.8, 1/800)

Choosing a camera for travel

Many people buy a camera for taking photographs when travelling. Yeah sure, you could use a smartphone, but it won’t provide you with the flexibility of a real camera. Really. Smartphones are restricted to having small sensors (with tiny photosites), a low-power flash, and uber-poor battery life. While they have improved in recent years, offering quite incredible technology inside their limited form factor, they will never replace dedicated cameras. Conversely, you don’t have to carry around a huge DSLR sporting a cumbersome 28-400 zoom lens.

There are so many posts out there with titles like “best travel camera 2022” that it’s almost overwhelming. Many of the cameras reviewed in these posts have never really been tested in any sort of real setting (if at all). So below I’m going to outline some of the more important things to consider when choosing a travel camera. Note that this is a list of things to think about, not a definitive and in-depth interpretation of requirements for travel cameras. Note also that this discussion relates to digital – choosing a good analog camera for travel is another thing altogether.

What will you be snapping? − buildings? people? close-up shots of flowers?

Budget − Of course how much you want to spend is a real issue. Good cameras aren’t cheap, but spending a reasonable amount on a camera means that it should last you years. You want a good balance of the items described below. If your budget is limited, go for a compact camera of some sort.

Compactness − The first choice from the camera perspective may be whether you want something that will fit in a pocket, a small bag (e.g. mirrorless), or a complete camera backpack (e.g. full-frame, which I would avoid). For a compact, you could go with one that has a zoom, but honestly a fixed focal length works extremely well. Good examples include the Ricoh GRIII (24.3MP, 18.3mm (28mm equiv.) f/2 lens) and Fujifilm X100V (26.1MP, 23mm (35mm equiv.) f/2 lens, 4K video). Because of their size, compacts sometimes have to sacrifice one feature for another. You also don’t want a compact that has too many dials – their real benefit is being able to point-and-shoot.

Mirrorless cameras are smaller than DSLRs because they don’t need to fit a mirror inside – they use an electronic viewfinder instead of an optical one. They have a compact size, and provide good image quality. The downside is that many of them have smaller sensors, like APS-C and MFT. I normally opt for both a compact pocket camera and a mirrorless. Each is better suited to some situations, e.g. compact cameras are much less conspicuous in indoor environs, and places like subways – that’s why they are so good for street photography. More compactness = enhanced portability.

Resilience − When you travel, there is often very little time to worry about whether or not a camera is going to get banged up. Cameras made of metal are obviously somewhat heavier, but offer much better survivability if a camera is accidentally dropped, or banged against something. A camera body made of magnesium alloy is both durable and lightweight; it is corrosion resistant and can handle temperature extremes well. A magnesium alloy body also has less chance of cracking than a polycarbonate body.

Weather resistance − You can never predict weather, anywhere. Some places are rainy or drizzly, other environs are dry and may have particles of stuff blowing in the air. Obviously you’re not going to take photos in pouring rain, but dust and dirt are often a bigger concern. My Ricoh GRIII is not weather sealed, which seems somewhat crazy when you consider it is a street camera, but there are always tradeoffs to be considered. In the case of the GRIII, adding weather sealing would have meant less flexibility in lens barrel construction, button/dial layout, and heat dissipation. My Fuji X-H1 on the other hand is weather resistant. Of course you should also choose lenses which are weather resistant. If weather resistance is important, be sure to read up on the specifics for a camera. For example the Fuji X100V is deemed to be weather-sealed, but the lens is not – to achieve full sealing you have to add an adapter and a filter.

Weight − How much are you willing to lug about on a daily basis when travelling? You don’t want to choose a camera that is going to give you back or shoulder pain. Larger format cameras like full-frame are heavier, and have heavier, larger lenses. If choosing a camera with interchangeable lenses, you also have to consider their weight, plus the weight of batteries and anything else you want to carry. There are even differences between compact cameras, e.g. the GRIII is 257g versus the X100V at 478g – about 85% more.

Lenses − If you choose an interchangeable lens camera, then the next thing to do is choose some lenses… a topic which deserves numerous posts on its own. The question is what will you be photographing? In general it is easy to narrow the scope of lenses which are good for travelling because some just aren’t practical. Telephoto for example – there are few cases where one will need a telephoto when travelling, unless the scope of the travelling involves nature photography. Same with macro lenses, and fisheye lenses (which really aren’t practical at the best of times). In an ideal world the most practical lenses are in the 24-35mm (full-frame equivalent) range. I think prime lenses are best, but short-range zooms work quite well too. I would avoid long-range zooms, because you will always use the smaller focal lengths, and long-range zooms are heavy.

Batteries − Camera batteries should have a reasonably good use-time. Using camera features, and taking lots of photos, will generally have an impact on battery life. For example using image stabilization a lot, being connected to wi-fi, or turning the camera on and off frequently will run down the battery. There are other things to consider as well. For example most batteries run down quicker in colder environs. Full-frame cameras are bigger, and therefore tend to have longer battery life than cropped-sensor cameras. Also determine whether the camera only supports charging in-camera; if so, you will likely need to buy an external charger. Some battery chargers are also slow. Ideally, always carry extra batteries no matter what the manufacturer claims.

Use − What is the camera’s main use during travelling? Street-photography? Vlogging? Landscapes (for poster-sized prints)? Or perhaps just simple travel snapshots. If the latter, then a compact will work superbly. If you want to have the flexibility of different lenses, then a mirrorless camera makes the most sense.

Video − Do you plan to take videos on the trip? If yes, then what sort of capabilities are you looking for? Most cameras produce video in HD1080p, and some have 4K capability. Some cameras limit the length of a video. If you plan to use the camera mostly for video, choose one specced out for that purpose.

Stabilization − Many cameras now offer some form of image stabilization, which basically means the camera compensates for the rudimentary camera shake that comes from hand-holding it, helping keep shots steady in low-light situations. This is more important for travel photography because it is cumbersome to lug around a tripod, and many places, like the Arc de Triomphe, won’t allow the use of tripods. Some compacts like the Ricoh GRIII do have stabilization, whereas others like the Fuji X100V do not.

The best way of choosing a camera is to first make a list of all the things you want from the camera. Then try and find some cameras which match those specifications. Then see how those cameras stack up against the considerations outlined above. Narrow down the list. When you have about three candidates, start looking at reviews.

I tend to stay away from the generic “big-box” style reviews of cameras, especially those that use the term “best of YEAR” in the title. I instead pivot towards bloggers who write gear reviews – they often own, have rented, or have been loaned the cameras, offer exceptional insight into a camera’s pros and cons, and provide actual photographs. Usually you can find bloggers who specialize in specific types of photography, e.g. street, travel, video. For example, for the Ricoh GRIII, here are some blog reviews worth considering (if anything they provide insight into what to look for in a review):

Lastly, don’t worry about what professional photographers carry when travelling. Chances are they are on assignment, and carry an array of cameras and related equipment.

Pixel peeping and why you should avoid it

In recent years there has been a lot of hoopla about this idea of pixel peeping. But what is it? Pixel peeping is essentially magnifying an image until individual pixels are perceivable by the viewer. The concept has been around for many years, but was really restricted to those who post-process their images. In the pre-digital era, the closest photographers came to pixel peeping was using a loupe to view negatives and slides in greater detail. It is the evolution of digital cameras that spurred the widespread use of pixel peeping.

Fig.1: Peeping at the pixels

For some people, pixel-peeping just offers a vehicle for finding flaws, particularly in lenses. But here’s the thing: there is no such thing as a perfect lens. There will always be flaws. A zoomed-in picture will contain noise, grain, and unsharp or unfocused regions. But sometimes these are only a problem because they are being viewed at 800%. Yes, image quality is important, but if you spend all your time worrying about every single pixel, you will miss the broader context – photography is supposed to be fun.

Pixel-peeping is also limited by the resolution of the sensor, or put another way, some objects won’t look good when viewed at 1:1 at 16MP. They might look better at 24MP, and very good at 96MP, but a picture is the sum of all its pixels. My Ricoh GR III allows 16× zooming when viewing an image. Sometimes I use it just to find out if the detail has enough sharpness in close-up or macro shots. Beyond that I find little use for it. The reality is that in the field, there usually isn’t the time to deep-dive into the pixel content of a 24MP image.

Of course apps allow diving down to the level of the individual pixels. There are some circumstances where it is appropriate to look this deep. For example viewing the subtle effects of changing settings such as noise reduction, or sharpening. Or perhaps viewing the effect of using a vintage lens on a digital camera, to check the validity of manual focusing. There are legitimate reasons. Pixel peeping on the whole is really only helpful for people who are developing or finetuning image processing algorithms.

Fig.2: Pixel peeping = meaningless detail

One of the problems with looking at pixels 1:1 is that a 24MP image was never meant to be viewed at the granularity of a pixel. Given the size of the image, and the distance it should be viewed from, micro-issues are all but trivial. The 16MP picture in Figure 2 shows pixel-peeping of one of the ducks under the steam engine. The entire picture has a lot of detail in it, but dig closer, and the detail goes away. That makes complete sense, because there are not enough pixels to represent everything in complete detail. Pixel-peeping shows the duck’s eye – but it’s not exactly easy to decipher what it is.

People who pixel-peep are too obsessed with looking at small details, when they should be more concerned with the picture as a whole.

Demystifying Colour (ix) : CIE chromaticity diagram

Colour can be divided up into luminosity and chromaticity. The CIE XYZ colour space was designed such that Y is a measure of the luminance of a colour. Consider the plane described by X+Y+Z=1, as shown in Figure 1. A colour point A=(Xa,Ya,Za) is then projected by intersecting the line SA (where S is the origin, X=Y=Z=0) with this plane within the CIE XYZ colour volume. As it is difficult to perceive 3D spaces, most chromaticity diagrams discard luminance and show the maximum extent of the chromaticity of a particular 2D colour space. This is achieved by normalizing by X+Y+Z, so that x = X/(X+Y+Z) and y = Y/(X+Y+Z), dropping the Z component, and projecting onto the xy plane.
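
In code this projection is just a normalisation. The short sketch below computes (x, y) from XYZ, using the standard D65 white point values as a check.

```python
# A minimal sketch: chromaticity coordinates are X and Y normalised by X+Y+Z,
# which discards the luminance information.
def xy_chromaticity(X: float, Y: float, Z: float) -> tuple[float, float]:
    s = X + Y + Z
    return X / s, Y / s

# D65 white point in CIE XYZ (normalised so Y = 1)
print(xy_chromaticity(0.9505, 1.0000, 1.0890))  # ~ (0.3127, 0.3290)
```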

Fig.1: CIE XYZ chromaticity diagram derived from CIE XYZ open cone.
Fig.2: RGB colour space mapped onto the chromaticity diagram

This diagram shows all the hues perceivable by the standard observer for various (x, y) pairs, and indicates the spectral wavelengths of the dominant single-frequency colours. When y is plotted against x for the spectrum colours, it forms a horseshoe, or shark-fin, shaped outline commonly referred to as the CIE chromaticity diagram, where any (x,y) point defines the hue and saturation of a particular colour.

Fig.3: The CIE Chromaticity Diagram for CIE XYZ

The xy values along the curved boundary of the horseshoe correspond to the “spectrally pure”, fully saturated colours, with wavelengths ranging from 360nm (violet) to 780nm (red). The area within this region contains all the colours that can be generated from the spectral colours on the boundary. The closer a colour is to the boundary, the more saturated it is, with saturation reducing towards the “neutral point” near the centre of the diagram. The two extremes, violet (360nm) and red (780nm), are connected by an imaginary line. This represents the purple hues (combinations of red and blue) that do not correspond to spectral colours. The “neutral point” near the centre of the horseshoe has zero saturation; it is typically marked as D65 (x≈0.3127, y≈0.3290), corresponding to a colour temperature of roughly 6500K.

Fig.4: Some characteristics of the CIE Chromaticity Diagram

Spectre – Does it work?

Over a year ago I installed Spectre (for iOS). The thought of having a piece of software that could remove moving objects from photographs seemed like a really cool idea. It is essentially a long-exposure app which uses multiple images to create two kinds of effects: (i) an image sans moving objects, and (ii) images with light (or movement) trails. It is touted as using AI and computational photography to produce these long exposures. The machine learning algorithms provide scene recognition, exposure compensation, and “AI stabilization”, supposedly allowing for up to a 9-second handheld exposure without the need for a tripod.

It seems as though the effects are provided by means of a computational photography technique known as “image stacking”. Image stacking just involves taking multiple images, and post-processing the series to produce a single image. For removing objects, the images are averaged: static features are retained, while moving features disappear through the averaging process – which is why a stable camera is important. For light trails it works similarly to a long exposure on a digital camera, where moving objects in the image become blurred; this is usually achieved by superimposing the moving features from each frame onto the starting frame.
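
To illustrate the general idea (this is a generic image-stacking sketch, not Spectre’s actual algorithm), averaging a stack of frames suppresses anything that moves, while a per-pixel maximum (“lighten” blend) is one common way of accumulating bright moving features into trails. The frame file names below are hypothetical.

```python
# A sketch of two basic stacking modes: mean for object removal, max for trails.
import numpy as np
from PIL import Image

def load_frames(paths):
    return np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float32)
                     for p in paths])

frames = load_frames([f"frame_{i:02d}.jpg" for i in range(10)])  # hypothetical files

remove_moving = frames.mean(axis=0)   # static scene, movers averaged away
light_trails = frames.max(axis=0)     # bright moving features accumulate

Image.fromarray(remove_moving.astype(np.uint8)).save("static.jpg")
Image.fromarray(light_trails.astype(np.uint8)).save("trails.jpg")
```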

Fig.1: The Spectre main screen.

The app is very easy to use. Below the viewing window are a series of basic controls: camera flip, camera stabilization, and settings. The stabilization control, when activated, provides a small visual indicator that shows when the iPhone is STABLE. As Spectre can perform a maximum of 9 seconds worth of processing, stabilization is an important attribute. The length of exposure is controlled by a dial in the lower-right corner of the app – you can choose between 3, 5, and 9 seconds. The Settings really only allow the “images” to be saved as Live Photos. The button at the top-middle turns light trails ON, OFF, or AUTO. The button in the top-right allows for exposure compensation, which can be adjusted using a slider. The viewing window can also be tapped to set the focus point for the shot.

Fig.2: The use of Spectre to create a motion trail (9 sec). The length of the train, and the slow speed at which it was moving, created the perception of slow motion.

Using this app allows one of two types of processing. As mentioned, one of these modes is the creation of trails – during the day these are motion trails, and at night light trails. Motion trails are added by turning “light trails” to the “ON” position (Fig.4). The second mode, with “light trails” in the “OFF” position, basically removes moving objects from the scene (Fig.3).

Fig.3: Light trails off with moving objects removed.
Fig.4: Light trails on with motion trails shown during daylight.

It is a very simple app, for which I do congratulate the app designers. Too many photo-type app designers try and cram 1001 features into an app, often overwhelming the user.

Here are some caveats/suggestions:

  • Sometimes motion trails occur because the moving object is too long to fundamentally change the content of the image stack. A good example is a slow-moving train – the train never leaves the scene during a 9-second exposure, and hence gets averaged into a motion trail. This is an example of a long-exposure image, as aptly shown in Figure 2. It’s still cool from an aesthetics point of view.
  • Objects must move in and out of frame during the exposure time. So it’s not great for trying to remove people from tourist spots, because there may be too many of them, and they may not move quickly enough.
  • Long exposures tend to suffer from camera shake. Although Spectre offers an indication of stability, it is best to rest the camera on at least one stable surface, otherwise there is a risk of subtle motion artifacts being introduced.
  • Objects moving too slowly might be blurred, and still leave some residual movement in a scene where moving objects are to be removed.

Does this app work? The answer is both yes and no. During the day the ideal situation for this app is a crowded scene, but the objects/people have to be moving at a good rate. Getting rid of parked cars and slow people is not going to happen. Views from above are ideal, or scenes where the objects to be removed are genuinely moving. For example, doing light trails of moving cars at night produces cool images, but only if they are taken from a vantage point – photos taken at the same level as the cars only produce a band of bright light.

It would actually be cool if they could extend this app to allow for times above nine seconds, specifically for removing people from crowded scenes. Or perhaps allowing the user to specify a frame count and delay. For example, 30 frames with a 3 second delay between each frame. It’s a fun app to play around with, and well worth the $2.99 (although how long it will be maintained is another question, the last update was 11 months ago).

More myths about travel photography

Below are some more myths associated with travel.

MYTH 13: Landscape photographs need good light.

REALITY: In reality there is no such thing as bad light, or bad weather, unless it is pouring. You can never guarantee what the weather will be like anywhere, and if you are travelling to places like Scotland, Iceland, or Norway, the weather can change on the flip of a coin. There can be a lot of drizzle, or fog. You have to learn to make the most of the situation, exploiting any kind of light.

MYTH 14: Manual exposure produces the best images.

REALITY: Many photographers use aperture-priority, or the oft-lauded P-mode. If you think something will be over- or under-exposed, then use exposure-bracketing. Modern cameras have a lot of technology to deal with taking optimal photographs, so don’t feel bad about using it.

MYTH 15: The fancy camera features are cool.

REALITY: No, they aren’t. Sure, try the built-in filters. They may be fun for a bit, but filters can always be added later. If you want to add filters, try posting to Instagram. For example, high-resolution mode is somewhat fun to play with, but it will eat battery life.

MYTH 16: One camera is enough.

REALITY: I never travel with fewer than two cameras: a primary, and a secondary, smaller camera that fits easily inside a jacket pocket (in my case a Ricoh GR III). There is always a risk that somewhere on vacation your main camera stops working for some reason. A backup is always great to have, whether for breakdowns, dead batteries, or just for shooting in places where you don’t want to drag a bigger camera along, or would prefer a more inconspicuous photographic experience, e.g. museums, art galleries.

MYTH 17: More megapixels are better.

REALITY: I think optimally, anything from 16-26 megapixels is good. You don’t need 50MP unless you are going to print large posters, and 12MP likely is not enough these days.

MYTH 18: Shooting in RAW is the best.

REALITY: Probably, but here’s the thing: for the amateur, do you want to spend a lot of time post-processing photos? Maybe not. Setting the camera to JPEG+RAW is the best of both worlds. There is also the issue of JPEG editing being destructive while RAW editing is not.

MYTH 19: Backpacks offer the best way of carrying equipment.

REALITY: This may be true for getting equipment from A to B, but schlepping a backpack loaded with equipment around every day during the summer can be brutal. No matter the type, backpacks + hot weather = a sweaty back. They also make you stand out, just as much as a FF camera with a 300mm lens. Opt instead for a camera sling, such as one from Peak Design. It has a much smaller form factor, and with a non-FF camera offers enough space for the camera, an extra lens, and a few batteries and memory cards. I’m usually able to shove in the secondary camera as well. A sling also makes you seem much more incognito.

MYTH 20: Carrying a film-camera is cumbersome.

REALITY: Film has made a resurgence, and although I might not carry one of my Exakta cameras, I might throw a half-frame camera in my pack. On a 36-exposure roll of film, this gives me 72 shots. The film camera allows me to experiment a little, but not at the expense of missing a shot.

MYTH 21: Travel photos will be as good as those in photo books.

REALITY: Sadly not. You might be able to get some good shots, but the reality is those shots in coffee-table photo books, and on TV shows are done with much more time than the average person has on location, and with the use of specialized equipment like drones. You can get some awesome imagery with drones, especially for video, because they can get perspectives that a person on the ground just can’t. If you spend an hour at a place you will have to deal with the weather that exists – someone who spends 2-3 days can wait for optimal conditions.

MYTH 22: If you wait long enough, it will be less busy.

REALITY: Some places are always busy, especially if it is a popular landmark. The reality is that, short of getting up at the crack of dawn, it may be impossible to get a perfect picture. A good example is Piazza San Marco in Venice… some people get a lucky shot in after a torrential downpour, or some similar event that clears the streets, but the best time is just after sunrise, otherwise it is swamped with tourists. Try taking pictures of lesser-known things instead of waiting for the perfect moment.

MYTH 23: Unwanted objects can be removed in post-processing.

REALITY: Sometimes popular places are full of tourists… like they are everywhere. In the past it was impossible to remove unwanted objects; you just had to come back at a quieter time. Now there are numerous forms of post-processing software, like Cleanup-pictures, that will remove things from a picture. A word of warning though: this type of software may not always work perfectly.

MYTH 24: Drones are great for photography.

REALITY: It’s true, drones make for some exceptional photographs and video footage. You can produce aerial photos of scenes like the best professional photographers, from likely the best vantage points. However there are a number of caveats. Firstly, travel drones have to be a reasonable size to actually be lugged about from place to place, which may limit the size of the sensor in the camera, and also the size of the battery. Is the drone able to hover perfectly still? If not, you could end up with somewhat blurry images. Flight time on drones is usually 20-30 minutes, so extra batteries are a requirement for travel. The biggest caveat of course is where you can fly drones. For example in the UK, non-commercial drone use is permitted, but there are no-fly zones, and permission is needed to fly over World Heritage sites such as Stonehenge. In Italy a license isn’t required, but drones can’t be used over beaches, towns, or near airports.

A review of SKRWT – keystone correction for iOS

For a few years now, I have been using SKRWT, an app that does perspective correction on iOS.

The goal was to have some way of quickly fixing issues with perspective, and distortions, in photographs. The most common form of this is the keystone effect (see previous post), which occurs when the image plane is not parallel to the lines that need to remain parallel in the photograph. This usually happens when taking photographs of buildings, where we tilt the camera backwards in order to include the whole scene, so the building appears to be “falling away” from the camera. Fig.1 shows a photograph of a church in Montreal – notice the skew as the building seems to tilt backwards.

The process of correcting distortions with SKRWT is easy. Pick an image, and a series of options is provided in the icon bar below the imported picture. The option that best approximates the type of perspective distortion is selected, and a new window opens with a grid overlaid on the image. A slider below the image is used to select the magnitude of the distortion correction, with the image transformed as the slider is moved. When the image looks geometrically correct, pressing the tick stores the newly corrected image.

Using the SKRWT app, the perspective distortion can be fixed, but at a price. Correcting for perspective distortion requires warping the image, which means it will likely extend beyond the original frame and will need to be cropped (otherwise the image will contain black background regions).
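
For those curious about what such a correction involves under the hood, here is a minimal sketch using OpenCV (not SKRWT’s actual implementation): four corner points of the tilted building are mapped onto a rectangle with a perspective warp. The image path and corner coordinates are hypothetical.

```python
# A minimal sketch of keystone correction via a perspective warp.
import cv2
import numpy as np

img = cv2.imread("church.jpg")               # hypothetical input image
h, w = img.shape[:2]

# Four points in the source image (the building's corners, top edge narrower
# because the camera was tilted back), and where they should end up.
src = np.float32([[250, 100], [830, 100], [80, 950], [1000, 950]])
dst = np.float32([[80, 100], [1000, 100], [80, 950], [1000, 950]])

M = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite("church_corrected.jpg", corrected)
```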

Here is a third example, of Toronto’s flatiron building, with the building surrounded by enough “picture” to allow for corrective changes that don’t cut off any of the main object.

Overall the app is well designed and easy to use. In fact it will remove quite complex distortions, although there is some loss of content in the processed images. To use this, or any similar perspective correction software, properly, you really have to frame the building with enough background to allow for corrections – so you aren’t left with half a building.

The sad thing about this app is something that plagues a lot of apps – it has become a zombie app. The developer was supposed to release version 1.5 in December 2020, but alas nothing has appeared, and the website has had no updates. Zombie apps work while the system they are on works, but upgrade the phone, or the OS, and there is every likelihood they will no longer work.