The simplicity of achromatic photographs

We live in a world where colour surrounds us, so why would anyone want to take an achromatic, black-and-white photograph? What draws us to a B&W photograph? Many modern colour images are brightened to add a sense of the exotic, in much the same way that B&W evokes an air of nostalgia. B&W does not exaggerate the truth the way colour does. It does sometimes veil the truth, but in many ways it is an equalizer. Colours and the emotions they represent are stripped away, leaving nothing but raw structure, so we are less likely to read emotion into the interpretation of achromatic photographs. There is a certain rawness to B&W photographs that cannot be captured in colour.

Every colour image is of course built upon an achromatic image. The tonal attributes provide the structure; the chrominance provides the aesthetic elements that help us interpret what we see. Black-and-white photographs offer simplicity. When colour is removed from a photograph, it forces a different perspective of the world. To create a pure achromatic photograph, the photographer has to look beyond the story posed by the chromatic elements of the scene. It forces one to focus on the image itself: there is no hue, no saturation to distract. The composition of the scene suddenly becomes more important, and both light and the darkness of shadows become more pronounced. The photographic framework of a world without colour forces one to see things differently. Instead of highlighting colour, it highlights shape, texture, form and pattern.

Sometimes even converting a colour image to B&W using a filter can make the image content seem more meaningful. Colour casts or odd-ball lighting can often be vanquished by the conversion. Noise that would appear distracting in a colour image adds to a B&W image as “grain”. B&W images capture the truth of a subject's structure, whereas colours are always open to interpretation because of the way individuals perceive colour.

Above is a colour photograph of a bronze sculpture taken at The Vigeland Park in Oslo, a sculpture park displaying the works of Gustav Vigeland. The colour image is interesting, but the viewer is somewhat distracted by the blue sky, and even the patina on the statue. A more interesting take is the achromatic image, obtained via the Instagram Inkwell filter. The loss of colour has helped improve the contrast between the sculpture and its background.

What is a crop factor?

The crop factor of a sensor is the ratio of one camera’s sensor size in relation to another camera’s sensor of a different size. The term is most commonly used to represent the ratio between a 35mm full-frame sensor and a crop sensor. The term was coined to help photographers understand how existing lenses would perform on new digital cameras which had sensors smaller than the 35mm film format.

How to calculate crop factors?

It is easy to calculate a crop factor from the size of a crop sensor in relation to a full-frame sensor. It is usually determined by comparing diagonals, i.e. full-frame sensor diagonal ÷ cropped sensor diagonal. The diagonals can be calculated using the Pythagorean theorem: calculate the diagonal of the crop sensor, and divide it into the diagonal of a full-frame sensor, which is 43.27mm.

Here is an example of deriving the crop factor for a MFT sensor (17.3×13mm):

  1. The diagonal of a full-frame sensor is √(36²+24²) = 43.27mm
  2. The diagonal of the MFT sensor is √(17.3²+13²) = 21.64mm
  3. The crop factor is 43.27/21.64 = 2.0

This means a scene photographed with a MFT sensor will be smaller by a factor of 2 than with a FF sensor, i.e. the sensor captures an area whose linear dimensions are half those of full frame.
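The steps above can be sketched in a few lines of Python (the 43.27mm full-frame diagonal follows from the 36×24mm frame):

```python
import math

def sensor_diagonal(width_mm, height_mm):
    """Diagonal of a sensor via the Pythagorean theorem."""
    return math.hypot(width_mm, height_mm)

def crop_factor(width_mm, height_mm, ff_diagonal=43.27):
    """Crop factor relative to a 35mm full-frame sensor (36x24mm)."""
    return ff_diagonal / sensor_diagonal(width_mm, height_mm)

# MFT sensor, 17.3 x 13 mm
print(round(crop_factor(17.3, 13.0), 1))  # → 2.0
```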

Common crop factors

Type                                          Crop factor
1/2.3″                                        5.6
1″                                            2.7
MFT                                           2.0
APS-C (Canon)                                 1.6
APS-C (Fujifilm, Nikon, Ricoh, Sony, Pentax)  1.5
APS-H (defunct)                               1.35
35mm full frame                               1.0
Medium format (Fuji GFX)                      0.8

Below is a visual depiction of these crop sensors compared to the 1× of the full-frame sensor.

The various crop-factors per crop-sensor.

How are crop factors used?

The term crop factor is often called the focal length multiplier, because it is often used to calculate the “full-frame equivalent” focal length of a lens on a camera with a cropped sensor. For example, a MFT sensor has a crop factor of 2.0, so taking a MFT 25mm lens and multiplying its focal length by 2.0 gives 50mm. This means that a 25mm lens on a MFT camera behaves more like a 50mm lens on a FF camera in terms of AOV and FOV. If a 50mm lens mounted on a full-frame camera were placed next to a 25mm lens mounted on a MFT camera, and both cameras were the same distance from the subject, they would yield photographs with similar FOVs. They would not be identical of course, because the lenses have different focal lengths, which modifies characteristics such as perspective and depth-of-field.
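As a sketch, the “full-frame equivalent” calculation is just a multiplication:

```python
def full_frame_equivalent(focal_length_mm, crop_factor):
    """Full-frame equivalent focal length of a lens on a crop-sensor camera."""
    return focal_length_mm * crop_factor

print(full_frame_equivalent(25, 2.0))  # MFT 25mm → 50.0
print(full_frame_equivalent(50, 1.5))  # APS-C 50mm → 75.0
```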

Things to remember

  • The crop-factor is a value which relates the size of a crop-sensor to a full-frame sensor.
  • The crop-factor does not affect the focal length of a lens.
  • The crop-factor does not affect the aperture of a lens.

The low-down on crop sensors

Before the advent of digital cameras, the standard reference format for photography was 35mm film, with frames 36×24mm in size. Everything in analog photography had the same frame of reference (well, except for medium format, but let’s ignore that). In the early development of digital sensors, there were cost and technological barriers to building a sensor the same size as a 35mm film frame. The first commercially available dSLR, the Nikon QV-1000C, released in 1988, had a ⅔″ sensor with a crop factor of 4. The first full-frame dSLR would not appear until 2002: the Contax N Digital, sporting 6 megapixels.

Using a camera with a smaller sensor presented one significant problem – the field of view of images captured using these sensors was narrower than the reference 35mm standard. When camera manufacturers started creating sensors smaller than 36×24mm, they needed a term to describe them in relation to a 35mm film frame (full frame). For that reason the term crop sensor is used to describe a sensor that is some percentage smaller than a full-frame sensor (sometimes the term cropped is used interchangeably). The picture a crop sensor creates is “cropped” in relation to the picture created with a full-frame sensor (using a lens with the same focal length). The sensor does not actually cut anything away; parts of the image are simply ignored. To illustrate what happens in a full-frame versus a cropped sensor, consider Fig.1.

Fig.1: A visual depiction of full-frame versus crop sensor in relation to the 35mm image circle.

Lenses project a circular image, the “image circle”, but a sensor only records a rectangular portion of the scene. A full-frame sensor, like the one in the Leica SL2, captures a large portion of the 35mm lens circle, whereas the Micro-Four-Thirds cropped sensor of the Olympus OM-D E-M1 only captures the central portion – the rest of the image falls outside the scope of the sensor (the FF sensor is shown as a dashed box). While crop-sensor lenses are smaller than those of full-frame cameras, there are limits to reducing their size from the perspective of optics and light capture. Fig.2 shows another perspective on crop sensors based on a real scene, comparing a full-frame sensor to an APS-C sensor (assuming lenses of the same focal length, say 50mm).

Fig.2: Viewing full-frame versus crop (APS-C)

The benefits of crop-sensors

  • Crop-sensors are smaller than full-frame sensors, so the cameras that house them are generally smaller in dimensions and weigh less.
  • The cost of crop-sensor cameras, and the cost of their lenses is generally lower than FF.
  • A smaller size of lens is required. For example, a MFT camera only requires a 150mm lens to achieve the equivalent of a 300mm FF lens, in terms of field-of-view.

The limitations of crop-sensors

  • Lenses on a crop-sensor camera with the same focal length as those on a full-frame camera will generally have a smaller AOV. For example, a FF 50mm lens has an AOV of 39.6°, while a 50mm lens on APS-C has an AOV of 26.6°. To get a similar AOV on the cropped APS-C sensor, a 33mm lens would have to be used.
  • A cropped sensor captures less of the lens image circle than a full-frame.
  • A cropped sensor captures less light than a full-frame (which has larger photosites which are more sensitive to light).
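For the curious, the AOV figures above can be reproduced with the standard horizontal angle-of-view formula, AOV = 2·atan(w/2f), assuming a sensor width of 36mm for full frame and roughly 23.6mm for APS-C:

```python
import math

def angle_of_view(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees: 2 * atan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(angle_of_view(36.0, 50), 1))  # full frame, 50mm → 39.6
print(round(angle_of_view(23.6, 50), 1))  # APS-C, 50mm → 26.6
print(round(angle_of_view(23.6, 33), 1))  # APS-C, 33mm ≈ the FF 50mm AOV
```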

Common crop-sensors

A list of the most common crop-sensor sizes currently used in digital cameras, along with average sensor dimensions (sensors from different manufacturers can differ by as much as 0.5mm in size) and example cameras, is summarized in Table 1. A complete list of sensor sizes can be found here. Smartphones are in a league of their own, and usually have small sensors of the type 1/n″. For example, the Apple iPhone 12 Pro Max has four cameras – the tele camera uses a 1/3.4″ (4.23×3.17mm) sensor, and another camera a 1/3.6″ (4×3mm) sensor.

Type               Sensor size     Example Cameras
1/2.3″             6.16×4.62mm     Sony HX99, Panasonic Lumix DC-ZS80, Nikon Coolpix P950
1″                 13.2×8.8mm      Canon Powershot G7X M3, Sony RX100 VII
MFT / m43          17.3×13mm       Panasonic Lumix DC-G95, Olympus OM-D E-M1 Mark III
APS-C (Canon)      23.3×14.9mm     Canon EOS M50 Mark II
APS-C              23.5×15.6mm     Ricoh GRIII, Fuji X-E3, Sony α6600, Sigma sd Quattro
35mm full frame    36×24mm         Sigma fpL, Canon EOS R5, Sony α, Leica SL2-S, Nikon Z6II
Medium format      44×33mm         Fuji GFX 100
Table 1: Crop sensor sizes.

Figure 3 shows the relative sizes of three of the more common crop sensors: APS-C (Advanced Photo System type-C), MFT (Micro-Four-Thirds), and 1″, as compared to a full-frame sensor. The APS-C sensor size is modelled on the Advantix film developed by Kodak, where the Classic image format had a size of 25.1×16.7mm.

Fig.3: Examples of crop-sensors versus a full-frame sensor.

Defunct crop-sensors

Below is a list of sensor formats which are essentially defunct, in that they are no longer used in any new cameras.

Type            Sensor size      Example Cameras
1/1.7″          7.53×5.64mm      Nikon Coolpix P340 (2014), Olympus Stylus 1 (2013), Leica C (2013)
2/3″            8.8×6.6mm        Fujifilm FinePix X10 (2011)
APS-C Foveon    20.7×13.8mm      Sigma DP series (2006-2011)
APS-H Foveon    26.6×17.9mm      Sigma sd Quattro H (2016)
APS-H           27×18mm          Leica M8 (2006), Canon EOS 1D Mark IV (2009)
Table 2: Defunct crop sensor sizes.

Are black-and-white photographs really black and white?

Black-and-white photography is somewhat of a strange term, because it implies that the photograph contains only black AND white. Interpreted literally, a black-and-white photograph would be an image containing nothing but black and white (in digital imaging terms, a binary image). Alternatively, such photographs are sometimes called monochromatic, but that too is a broad term, literally meaning “all colours of a single hue”. By that definition, cyanotype and sepia-tone prints are also monochromatic, and a colour image that contains predominantly bright and dark variants of the same hue could be considered monochromatic as well.

Using the term black-and-white is therefore somewhat of a misnomer; the more correct term might be grayscale, or gray-tone, photographs. Prior to the introduction of colour films, B&W film had no designation – it was just called film. With the introduction of colour film, a new term had to be created to differentiate the types of film. Many companies opted to use terms like panchromatic, which is an oddity because the term means “sensitive to all visible colors of the spectrum”. In the context of black-and-white films, however, it implies a B&W photographic emulsion that is sensitive to all wavelengths of visible light. Agfa produced IsoPan and AgfaPan, and Kodak Panatomic. Colour films, by contrast, usually had the term “chrome” in their names.

Fig.1: A black-and-white image of a postcard

All these terms have one thing in common: they represent shades of gray across the full spectrum from light to dark. In the digital realm, an 8-bit grayscale image has 256 “shades” of gray, from 0 (black) to 255 (white); a 10-bit grayscale image has 1024 shades, from 0 to 1023. The black-and-white image shown in Fig.1 aptly illustrates an 8-bit grayscale image. But grays are colours as well, albeit without chroma, so they would be better termed achromatic colours. It’s tricky because a colour is “a visible light with a specific wavelength”, and neither black nor white has a specific wavelength: white contains all wavelengths of visible light, and black is the absence of visible light. Ironically, true blacks and true whites are rare in photographs. For example, the image shown in Fig.1 only contains grayscale values ranging from 24 to 222, with few if any true blacks or whites. We perceive it as a black-and-white photograph only because of our association with that term.
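The relationship between bit depth and the number of gray shades, and the absence of true blacks and whites, can be shown with a trivial Python sketch (the pixel values here are hypothetical, standing in for the histogram of an image like Fig.1):

```python
def gray_levels(bit_depth):
    """Number of gray shades for a given bit depth: 0 .. 2**n - 1."""
    return 2 ** bit_depth

print(gray_levels(8))   # → 256
print(gray_levels(10))  # → 1024

# Hypothetical 8-bit pixel values for an image like Fig.1:
pixels = [24, 98, 150, 201, 222]
print(min(pixels), max(pixels))  # → 24 222: no true black (0) or white (255)
```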

From photosites to pixels (i) – the process

We have talked briefly about how digital camera sensors work from the perspective of photosites, and about digital ISO, but what happens after light photons are absorbed by the photosites on the sensor? How are image pixels created? This series of posts will try to demystify some of the inner workings of a digital camera, in a way that is understandable.

A camera sensor is typically made up of millions of cavities called photosites (not pixels – they are not pixels until the analog values are transformed into digital ones). A 24MP sensor has 24 million photosites, typically arranged as a matrix 6000 photosites wide by 4000 high. Each photosite has a single photodiode which records a luminance value. Light photons enter the lens and pass through the lens aperture before a portion of the light is allowed through to the camera sensor when the shutter is activated at the start of the exposure. Once the photons hit the sensor surface they pass through a micro-lens attached to the receiving surface of each photosite, which helps direct the photons into the photosite, and then through a colour filter (e.g. Bayer), used to help determine the colour of a pixel in the image. A red filter allows red light to be captured, a green filter green light, and a blue filter blue light.

Every photosite can hold a specific number of photons (sometimes called the well depth). When the exposure is complete, the shutter closes, and the photodiode converts the gathered photons into an electrical charge, i.e. electrons. The strength of the electrical signal is based on how many photons were captured by the photosite. This signal then passes through the ISO amplifier, which adjusts the signal based on the ISO setting. The amplifier uses a conversion factor, “M” (multiplier), to scale the tally of electrons: for a higher ISO, M will be higher, so fewer electrons are required to produce the same output level.

Photosite to pixel

The analog signal then passes to the ADC, the chip that performs analog-to-digital conversion. The ADC converts the analog signals into discrete digital values (basically pixels): it takes the analog signals as input and classifies each into a brightness level, measured in analog-to-digital units (ADU). The darker regions of a photographed scene correspond to a low count of electrons, and consequently a low ADU value, while brighter regions correspond to higher ADU values. At this point the image can follow one (or both) of two paths. If the camera is set to RAW, then information about the image, e.g. camera settings (the metadata), is added and the image is saved in RAW format to the memory card. If the setting is RAW+JPEG, or JPEG, then further processing is performed by the DIP system.

The “pixels” then pass to the DIP (Digital Image Processing) system. Here demosaicing is applied, which converts the matrix of filtered values into an RGB image. Other image processing techniques can also be applied based on particular camera settings, e.g. image sharpening, noise reduction, etc. The colour space specified in the camera is applied, before the image and its associated metadata are converted to JPEG format and saved on the memory card.

Summary: the photons absorbed by a photosite during the exposure create a number of electrons; these form a charge that is converted by a capacitor to a voltage, which is then amplified and digitized, resulting in a digital grayscale value. Three layers of these grayscale values form the red, green, and blue components of a colour image.
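The whole chain can be caricatured in a few lines of Python. All of the constants here (quantum efficiency, well depth, bit depth) are illustrative assumptions, not values from any particular sensor:

```python
def photosite_to_adu(photons, quantum_efficiency=0.5, full_well=50_000,
                     iso_gain=1.0, bit_depth=12):
    """Sketch of the photosite-to-pixel chain: photons -> electrons ->
    amplified signal -> digital value (ADU). Constants are illustrative."""
    electrons = min(photons * quantum_efficiency, full_well)  # clipped at well depth
    signal = electrons * iso_gain                             # ISO amplification
    max_adu = 2 ** bit_depth - 1
    # quantize: map the full-well signal range onto the ADC's output range
    return min(round(signal / full_well * max_adu), max_adu)

print(photosite_to_adu(0))                     # dark photosite → 0
print(photosite_to_adu(100_000))               # saturated photosite → 4095
print(photosite_to_adu(25_000, iso_gain=2.0))  # higher ISO → brighter value
```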

Demystifying Colour (iii) : colour models and spaces

Colour is a challenging concept in digital photography and image processing, partially because it is not a physical property, but rather a perceptual entity. Light is made up of many wavelengths, and colour is a sensation caused when our brain interprets those wavelengths. In the digital world, colour is represented using global colour models and more specific colour spaces.

Colour models

A colour model is a means of mapping wavelengths of light to colours, based on some particular scientific process and a mathematical model, i.e. a way to convert colour into numbers. A colour model on its own is abstract, with no specific association to how the colours are perceived. The components of colour models have a number of distinguishing features. The core feature is the component type (e.g. RGB primaries, hue) and its associated units (e.g. degrees, percent). Other features include scale type (e.g. linear/non-linear) and the geometric shape of the model (e.g. cube, cone, etc.).

Colour models can be expressed in many different ways, each with their own benefits and limitations. Colour models can be described based on how they are constructed:

  • Colorimetric – These are colour models based on physical measurements of spectral reflectance. One of the CIE chromaticity diagrams is usually the basis for these models.
  • Psychological – These colour models are based on the human perception of colour. They are either designed on subjective observation criteria, and some sort of comparative references, (e.g. Munsell), or are designed through experimentation to comply with the human perception of colour, e.g. HSV, HSL.
  • Physiological – These colour models are based on the three primary colours associated with the three types of cones in the human retina, e.g. RGB.
  • Opponent – Based on perception experiments using pairwise opponent primary colours, e.g. Y-B, R-G.

Sometimes colour models are distinguished based on how colour components are combined. There are two methods of colour mixing – additive and subtractive. Additive colour models use light to display colours, while subtractive models use printing inks. Colours perceived in additive models such as RGB are the result of transmitted light, whereas those perceived in subtractive models such as CMYK are the result of reflected light. An example of an image showing its colours as represented using the RGB colour model is shown in Fig.1.

Fig.1: A colour image and its associated RGB colour model
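The additive/subtractive relationship can be sketched very simply: in a normalized 0..1 range, the subtractive CMY values are just the complement of the additive RGB values (ignoring real-world ink behaviour and the separate K channel):

```python
def rgb_to_cmy(r, g, b):
    """Convert additive RGB (0..1) to subtractive CMY: each ink subtracts
    its complementary light from white."""
    return (1 - r, 1 - g, 1 - b)

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red → (0.0, 1.0, 1.0), i.e. magenta + yellow inks
```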

Colour models can be described using a geometric representation of colours in a three-dimensional space, such as a cube, sphere or cone. The geometric shape describes what the map for navigating a colour space looks like. For example, RGB is shaped like a cube, HSV can be represented by a cylindrical or conical object, and YIQ is a convex polyhedron (a somewhat skewed rectangular box). The geometric representations of the image in Figure 1, shown using three different colour models, appear in Figure 2.

Fig.2: Three different colour models with differing geometric representations
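As an illustration of moving between two of these geometric representations, Python's standard colorsys module converts coordinates in the RGB cube into HSV's hue/saturation/value cylinder:

```python
import colorsys

# Map a pure yellow from the RGB cube into the HSV cylinder.
h, s, v = colorsys.rgb_to_hsv(1.0, 1.0, 0.0)
print(round(h * 360), s, v)  # → 60 1.0 1.0 (hue 60°, fully saturated, full value)
```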

Colour spaces

A colour space is a specific implementation of a colour model, and usually defines a subset of that model; different colour spaces can exist within one colour model. With a colour model we are able to determine a certain colour relative to other colours in the model, but it is not possible to conclude how a certain colour will be perceived. A colour space is therefore defined by mapping a colour model to a real-world colour reference standard. The most common reference standard is CIE XYZ, which was developed in 1931. It defines the range of colours the human eye can distinguish in relation to wavelengths of light.

In the context of photographs, a colour space is the specific range of colours that can be represented. For example, the RGB colour model has several different colour spaces, e.g. sRGB and Adobe RGB. sRGB is the most common colour space and is the standard for many cameras and TVs. Adobe RGB was designed (by Adobe) to compete with sRGB, and is meant to offer a broader colour gamut (some 35% more). So a photograph taken using sRGB may have more subtle tonal gradations than one taken using Adobe RGB. CIELab and CIELuv are colour spaces within the CIE colour model.

That being said, the terms colour model and colour space are often used interchangeably, for example RGB is considered both a colour model and a colour space.

Demystifying Colour (ii) : the basics of colour perception

How humans perceive colour is interesting, because the technology of how digital cameras capture light is adapted from the human visual system. When light enters our eye it is focused by the cornea and lens onto the “sensor” portion of the eye – the retina. The retina is composed of a number of different layers. One of these layers contains two types of photosensitive cells (photoreceptors), rods and cones, which interpret the light and convert it into a neural signal. The neural signals are collected and further processed by other layers in the retina before being sent to the brain via the optic nerve. It is in the brain that some form of colour association is made. For example, a lemon is perceived as yellow, and any deviation from this makes us question what we are looking at (like maybe a pink lemon?).

Fig.1: An example of the structure and arrangement of rods and cones

The rods, which are long and thin, interpret light (white) and darkness (black). Rods work mostly at night, as only a few photons of light are needed to activate a rod. Rods don’t help with colour perception, which is why at night we see everything in shades of gray. The human eye is supposed to have over 100 million rods.

Cones have a tapered shape, and process the three ranges of wavelengths which our brains interpret as colour. There are three types of cones – short-wavelength (S), medium-wavelength (M), and long-wavelength (L). Each cone absorbs light over a broad range of wavelengths, peaking at roughly: L ∼ 570nm, M ∼ 545nm, and S ∼ 440nm. The cones are often called R, G, and B for L, M, and S respectively, though the labels have nothing to do with the colours of the cones themselves – just the wavelengths our brain interprets as colours. There are roughly 6-7 million cones in the human eye, divided into 64% “red” cones, 32% “green” cones, and 2% “blue” cones. Most of these are packed into the fovea. Figure 2 shows how rods and cones are arranged in the retina: rods are located mainly in the peripheral regions, and are absent from the middle of the fovea; cones are located throughout the retina, but concentrated at the very centre.

Fig.2: Rods and cones in the retina.

Since there are only three types of cones, how are other colours formed? The ability to see millions of colours comes from a combination of the overlap of the cones’ responses, and how the brain interprets the information. Figure 3 shows roughly how the red, green, and blue sensitive cones interpret different wavelengths as colour. As different wavelengths stimulate the colour-sensitive cones in differing proportions, the brain interprets the signals as differing colours. For example, the colour yellow results from the red and green cones being stimulated while the blue cones are not.

Fig.3: Response of the human visual system to light

Below is a list of approximately how the cones produce the primary and secondary colours. All other colours are composed of varying strengths of light activating the red, green and blue cones. When the light is turned off, black is perceived.

  • The colour violet activates the blue cone, and partially activates the red cone.
  • The colour blue activates the blue cone.
  • The colour cyan activates the blue cone, and the green cone.
  • The colour green activates the green cone, and partially activates the red and blue cones.
  • The colour yellow activates the green cone and the red cone.
  • The colour orange activates the red cone, and partially activates the green cone.
  • The colour red activates the red cone.
  • The colour magenta activates the red cone and the blue cone.
  • The colour white activates the red, green and blue cones.

So what about post-processing once the cones have done their thing? The retina’s “sensor array” receives the colours, and encodes the information in the bipolar and ganglion cells before it is passed to the brain. There are three types of encoding.

  1. The luminance (brightness) is encoded as the sum of the signals coming from the red, green and blue cones and the rods. These help provide the fine detail of the image in black and white. This is similar to a grayscale version of a colour image.
  2. The second encoding separates blue from yellow.
  3. The third encoding separates red and green.

Fig.4: The encoding of colour information after the cones do their thing.

In the fovea there are no rods, only cones, so the luminance ganglion cell only receives a signal from one cone cell of each colour. A rough approximation of the process is shown in Figure 4.
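A very rough sketch of this three-channel encoding, with the usual caveat that real retinal signals are not simple arithmetic (the formulas below are illustrative, not physiological):

```python
def encode_opponent(r, g, b):
    """Rough sketch of retinal opponent encoding: a luminance channel plus
    blue-yellow and red-green difference channels (illustrative only)."""
    luminance = r + g + b            # summed signal: fine black-and-white detail
    blue_yellow = b - (r + g) / 2    # yellow ~ red + green combined
    red_green = r - g
    return luminance, blue_yellow, red_green

print(encode_opponent(1.0, 1.0, 0.0))  # yellow: bright, strongly "yellow", neutral R-G
```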

Now, you don’t really need to know that much about the inner workings of the eye, except that colour theory is based a great deal on how the human eye perceives colour, hence the use of RGB in digital cameras.

Demystifying Colour (i) : visible colour

Colour is the basis of human vision. Everything appears coloured. Humans see in colour – or rather, the cones in our eyes interpret wavelengths of red, green and blue light entering the eye in varying proportions, enabling us to see a full gamut of colours. The miracle of the human eye aside, how does colour exist? Are trees really green? Bananas yellow? Colour is not really inherent in objects; rather, the surface of an object reflects some wavelengths and absorbs others, and the human eye only perceives the reflected ones. The clementine in the figure below reflects certain wavelengths, which we perceive as orange. Without light there is no colour.

Reflected wavelengths = perceived colours

Yet even for the simplest of colour theory related things, like the visible spectrum, it is hard to find an exact definition. Light is a form of electromagnetic radiation. Its physical property is described in terms of wavelength (λ) in units of nanometres (nm, i.e. 10⁻⁹ metres). Human eyes can perceive the colours associated with the visible light portion of the electromagnetic spectrum. It was Isaac Newton who in 1666 described the spectrum of white light as being divided into seven distinct colours – red, orange, yellow, green, blue, indigo and violet. Yet in many renditions, indigo has been replaced by blue, and blue by cyan. In some renditions there are only six colours (like Pink Floyd’s album cover for The Dark Side of the Moon); others have eight. It turns out indigo likely doesn’t need to be there (because it’s hard to tell indigo apart from blue and violet). Another issue is the varied ranges given for the visible spectrum in nanometres. Some sources define it as broadly as 380-800nm, while others narrow it to 420-680nm. Confusing, right? Well, the CIE suggests that there are no precise limits for the spectral range of visible radiation – the lower limit is 360-400nm and the upper limit 760-830nm.

The visible spectrum of light (segmented into eight colours)

Thankfully, for the purposes of photography we don’t have to delve that deeply into the specific wavelengths of light. In fact we don’t even have to think too much about the exact wavelength of colours like red, because frankly the colour “red” is just a cultural association with a particular range of wavelengths. Basically colours are named for the sake of communication, and so we can differentiate thousands of different paint chips. The reality is that while the human visual system can see millions of distinct colours, we only have names for a small set of them. Many of the world’s languages have only around five basic colour terms. For example, the Berinmo tribe of Papua New Guinea has a term for light, dark, red, yellow, and one that denotes both blue and green [1]. Maybe we have overcomplicated things somewhat when it comes to colour.

But this does highlight some of the issues with colour theory – the overabundance of information. There are various terms which seem to lack a clear definition, or overlap with other terms. Who said colour wasn’t messy? It is. What is the difference between a colour model and a colour space? Why do we use RGB? Why do we care about HSV colour space? This series will look at some colour things as it relates to photography, explained as simply as possible.

  1. Davidoff, J., Davies, I., Roberson, D., “Colour categories in a stone-age tribe”, Nature, 398, pp.203-204 (1999)

Myths about travel photography

Travel snaps have been around since the dawn of photography. Their film heyday was likely the 1950s-1970s when photographs taken using slide film were extremely popular. Of course in the days of film it was hard to know what your holiday snaps would look like until they were processed. The benefit of analog was of course that most cameras offered similar functionality, with the aesthetic provided by the type of film used. While there were many differing lenses available, most cameras came with a stock 50mm lens, and most people travelled with a 50mm lens, possibly a wider lens for landscapes, and later zoom lenses.

With digital photography things got easier, but only in the sense of being able to see immediately what you have photographed. Modern photography is a two-edged sword: on one side there are many more choices in both cameras and lenses; on the other, digital cameras have many more dependencies, e.g. memory cards, batteries, etc., and aesthetic considerations, e.g. colour rendition. Below are some of the myths associated with travel photography, in no particular order, taken from my own experiences travelling as an amateur photographer. I generally travel with one main camera, either an Olympus MFT or a Fuji X-series APS-C, and a secondary camera, which is now a Ricoh GR III.

The photographs above illustrate three of the issues with travel photography – haze, hard shadows, and shooting photographs from a moving train.

MYTH 1: Sunny days are the best for taking photographs.

REALITY: A sunny or partially cloudy day is not always congenial to good outdoor photographs. It can produce a lot of glare, and scenes with hard shadows. On hot sunny days landscape shots can also suffer from haze. Direct sunlight in the middle of the day often produces the harshest of light. This can mean that shadows become extremely dark, and highlights become washed out. In reality you have to make the most of whatever lighting conditions you have available. There are a bunch of things to try when faced with midday light, such as using the “Sunny 16” rule, and using a neutral density (ND) filter.
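As a quick illustration, the “Sunny 16” rule mentioned above is simple enough to express in one line: at f/16 in direct sunlight, set the shutter speed to roughly the reciprocal of the ISO:

```python
def sunny_16_shutter(iso):
    """Sunny 16 rule of thumb: at f/16 in direct sunlight,
    shutter speed ≈ 1/ISO seconds."""
    return 1.0 / iso

print(sunny_16_shutter(100))  # → 0.01, i.e. 1/100 s at f/16, ISO 100
```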

MYTH 2: Full-frame cameras are the best for taking travel photography

REALITY: Whenever I travel I always see people with full-frame (FF) cameras sporting *huge* lenses. I wonder if they are wildlife or sports photographers? In reality it’s not necessary to travel with a FF camera. They are much larger and much heavier than APS-C or MFT systems. Although they produce exceptional photographs, I can’t imagine lugging a FF camera and accessories around for days at a time.

MYTH 3: It’s best to travel with a bunch of differing lenses.

REALITY: No. Pick the one or two lenses you know you are going to use. I have travelled a couple of times with an extra super-wide or telephoto lens in the pack, but the reality is that they were never used. Figure out what you plan to photograph, and pack accordingly. A quality zoom lens is always good because it provides the variability of differing focal lengths in one lens; however, fixed focal length lenses often produce a better photograph. A 50mm equivalent is a good place to start (25mm MFT, 35mm APS-C).

MYTH 4: The AUTO setting produces the best photographs.

REALITY: The AUTO setting does not guarantee a good photograph, and neither does M (manual). Shooting in P (program) mode probably offers the best balance of automation and flexibility. But there is nothing wrong with using AUTO, or even preset settings for particular circumstances.

MYTH 5: Train journeys are a great place to shoot photographs.

REALITY: Shooting photographs from a moving object, e.g. a train, requires the use of S (shutter priority). You may not get good results from a mobile device, because they are not designed for that. Even with the right settings, photographs from a train may not seem that great unless the scenery allows for a perspective shot, e.g. looking down into valleys, rather than just a linear shot out of the window. There are also issues like glare and dirty windows to contend with.

MYTH 6: A flash is a necessary piece of equipment.

REALITY: Not really, for travelling. There are situations where you could use one, like indoors, but most indoor photos are taken in places like art galleries and museums, which don’t take kindly to flash photography, and frankly it isn’t needed. With some basic knowledge it is easy to take indoor photographs with the available light. Even better, this is where mobile devices tend to shine, as they often have exceptional low-light capabilities. Using a flash for landscapes is useless… but I have seen people do it.

MYTH 7: Mobile devices are the best for travel photography.

REALITY: While they are certainly compact and do produce some exceptional photographs, they are not always the best for travelling. Mobile devices with high-end optics excel at certain things, like taking inconspicuous photographs, or shooting in low light indoors. For landscapes, however, a camera will always do a better job, mainly because it is easier to change settings, and the optics are clearly better.

MYTH 8: Shooting 1000 photographs a day is the best approach.

REALITY: Memory is cheap, so yes, you could shoot 1000 frames a day, but is it the best approach? You may as well strap a GoPro to your head and videotape everything. At the end of a 10-day vacation you could have 10,000 photos, which is crazy. Try instead to limit yourself to 100-150 photos a day, which is about 3-4 rolls of 36-exposure film. Some people suggest fewer, but then you might later regret not taking a photo. There is something to be said for limiting the number of photos you take and concentrating instead on taking creative shots.

MYTH 9: A tripod is essential.

REALITY: No, it’s not. Tripods are cumbersome, and sometimes heavy, and in some places, e.g. atop the Arc de Triomphe, you can’t use one at all. Try walking around a city like Zurich for a whole day in summer, lugging a bunch of camera gear *and* a tripod. For a good compromise, consider packing a pocket tripod such as the Manfrotto PIXI. In reality, cameras have such good stabilization these days that in most situations you don’t need a tripod.

MYTH 10: A better camera will take better pictures.

REALITY: Unlikely. I would love to have a Leica DSLR. Would it produce better photographs? Maybe, but the reality is that taking photographs is as much about the skill of the photographer as the quality of the camera. Contemporary cameras have so much technology in them; learn to understand it, and improve your skills, before thinking about upgrading. There will always be new cameras, but it’s hard to warrant buying one.

MYTH 11: A single battery is fine.

REALITY: Never travel with fewer than two batteries. Cameras use a lot of juice, because features like image stabilization and auto-focus aren’t free. I travel with at least three batteries for whatever camera I take. Mark them A, B, and C, and use them in sequence. If the battery in the camera is C, then you know A and B need to be recharged, which can be done at night. There is nothing worse than running out of batteries half-way through the day.

MYTH 12: Post-processing will fix any photos.

REALITY: Not so. Ever heard of the expression garbage in, garbage out? Some photographs are hard to fix because not enough care was taken when shooting them. If you take a photograph of a landscape with a hazy sky, no amount of post-processing may be able to save it.

The art of over-processing

“No matter what lens you use, no matter what the speed of the film is, no matter how you develop it, no matter how you print it, you cannot say more than you see”.

Thoreau quoted by Paul Strand, The Snapshot Aperture, 19(1), p.49 (1974)