Colour (photography) is all about the light

Photography in the 21st century generates a lot of fuss about megapixels and sharp glass, but none of the tools of photography matter unless you have an intuitive understanding of light. For it is light that makes a picture. Without light the camera is blind, capable of producing only dark, unrecognizable images. Sure, artificial light can be used, but photography is mostly about natural light. Light provides colour, shapes contrast, determines brightness and darkness, and sets tone, mood, and atmosphere. Yet in our everyday lives, light is often taken somewhat for granted.

One of the most important facets of light is colour. Colour begins and ends with light; without light, i.e. in darkness, there is no colour. Light belongs to a vast family of electromagnetic waves that begins with wavelengths of several thousand kilometres, spanning radio waves, heat radiation, infrared and ultraviolet waves, and X-rays, and ends with the gamma radiation of radium and cosmic rays, whose wavelengths are so short they must be measured in fractions of a millionth of a millimetre. Visible light is of course the part of the spectrum to which the human eye is sensitive, ca. 400-700nm. For example, the wavelengths we perceive as green lie in the range 500-570nm.

The visible light spectrum

It is this visible light that builds the colour picture in our minds, or indeed the one we take with a camera. An object is perceived as a certain colour because it absorbs some wavelengths and reflects others; the reflected wavelengths are the ones we see. For example, the dandelion in the image below looks yellow because its petals absorb all wavelengths except yellow, which is the only colour reflected. If only pure red light were shone onto the dandelion, it would appear black: the red would be absorbed and there would be no yellow light to reflect. Remember, light is simply a wave with a specific wavelength, or a mixture of wavelengths; it has no colour in and of itself. So technically there is really no such thing as yellow light, rather there is light with a wavelength of about 590nm that appears yellow. Similarly, the grass in the image reflects green light.

The colours we see are reflected wavelengths that are interpreted by our visual system.
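To make this concrete, here is a minimal sketch (in Python) of the idea that a colour name is just a label we attach to a range of wavelengths. The band boundaries below are approximate assumptions; different sources draw them slightly differently.

```python
# A minimal sketch mapping a wavelength (in nm) to the colour band a human
# observer would typically name. The band boundaries are approximate and
# differ between sources; the point is that "colour" is our interpretation
# of a wavelength, not a property of the light itself.

def wavelength_to_colour_name(wavelength_nm: float) -> str:
    bands = [
        (400, 450, "violet"),
        (450, 500, "blue"),
        (500, 570, "green"),    # matches the 500-570nm range above
        (570, 600, "yellow"),
        (600, 620, "orange"),
        (620, 700, "red"),
    ]
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible spectrum"

print(wavelength_to_colour_name(590))   # light at ~590nm appears yellow
print(wavelength_to_colour_name(530))   # reflected by grass -> green
```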

The colour we perceive will also differ based on the time of day, the lighting, and many other factors. Another thing to consider with light is its colour temperature. Colour temperature uses a numerical value in kelvin (K) to characterize a light source on a spectrum ranging from warm (orange) colours to cool (blue) colours. For example, natural daylight has a colour temperature of about 5000K, whereas sunrise and sunset sit around 3200K. Light bulbs, on the other hand, can range anywhere from 2700K to 6500K. A 2700K light source is considered “warm” and emits relatively more red wavelengths, whereas a 6500K light is said to be “cool white” since it emits relatively more blue wavelengths.

We see many colours as one, building up a picture.
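Returning to colour temperature, the sketch below classifies a light source using the ballpark values mentioned above; the thresholds are illustrative assumptions rather than any standard.

```python
# A minimal sketch classifying a light source by its colour temperature,
# using the ballpark values mentioned above. The thresholds here are
# illustrative assumptions, not a standard.

def describe_colour_temperature(kelvin: float) -> str:
    if kelvin < 3500:
        return "warm (relatively more red/orange wavelengths)"
    elif kelvin < 5500:
        return "neutral (close to natural daylight)"
    else:
        return "cool white (relatively more blue wavelengths)"

for source, k in [("household bulb", 2700), ("sunrise/sunset", 3200),
                  ("natural daylight", 5000), ("cool white bulb", 6500)]:
    print(f"{source} at {k}K: {describe_colour_temperature(k)}")
```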

Q: How many colours exist in the visible spectrum?
A: Technically, none. The visible spectrum is light with a wavelength (or frequency); it is not colour per se. Colour is a subjective, conscious experience that exists in our minds. There may well be an infinite number of wavelengths of light, but humans are limited in the number of colours they can distinguish.

Q: Why is the visible spectrum described in terms of 7 colours?
A: We tend to break the visible spectrum down into seven colours: red, orange, yellow, green, blue, indigo, and violet. Passing a ray of white light through a glass prism splits it into its constituent colours, but the divisions are somewhat arbitrary because light forms a continuum, with smooth transitions between colours (it was Isaac Newton who first divided the spectrum into six, then seven, named colours). There are now several different interpretations of how the spectral colours should be categorized; some modern ones drop indigo, or replace it with cyan.

Q: How is reflected light interpreted as colour?
A: Camera sensors, film, and the human eye all interpret reflected light by filtering it into the three primary colours: red, green, and blue (see: The basics of colour perception).

Demystifying Colour (i): visible colour

Colour is the basis of human vision. Everything appears coloured. Humans see in colour, or rather the cones in our eyes respond in varying proportions to the wavelengths we associate with red, green, and blue, enabling us to see a full gamut of colours. The miracle of the human eye aside, how does colour exist? Are trees really green? Bananas yellow? Colour is not really inherent in objects; rather, the surface of an object reflects some wavelengths and absorbs others, so the human eye only perceives reflected colour. The clementine in the figure below reflects certain wavelengths, which we perceive as orange. Without light there is no colour.

Reflected wavelengths = perceived colours

Yet even for the simplest concepts in colour theory, like the visible spectrum, it is hard to find an exact definition. Light is a form of electromagnetic radiation, and its physical property is described in terms of wavelength (λ) in units of nanometres (nm, i.e. 10⁻⁹ metres). Human eyes can perceive the colours associated with the visible portion of the electromagnetic spectrum. It was Isaac Newton who in 1666 described the spectrum of white light as being divided into seven distinct colours: red, orange, yellow, green, blue, indigo and violet. Yet in many renditions indigo has been replaced by blue, and blue by cyan. Some renditions have only six colours (like Pink Floyd’s album cover for Dark Side of the Moon), others have eight. It turns out indigo likely doesn’t need to be there, because it’s hard to tell indigo apart from blue and violet. Another issue is the varied ranges given for the visible spectrum in nanometres: some sources define it as broadly as 380-800nm, while others narrow it to 420-680nm. Confusing, right? Well, the CIE suggests that there are no precise limits for the spectral range of visible radiation: the lower limit lies somewhere between 360 and 400nm and the upper limit between 760 and 830nm.

The visible spectrum of light (segmented into eight colours)

Thankfully, for the purposes of photography we don’t have to delve that deeply into the specific wavelengths of light. In fact we don’t even have to think too much about the exact wavelength of colours like red, because frankly the colour “red” is just a cultural association with a particular wavelength. Basically, colours are named for the sake of communication, and so we can differentiate thousands of different paint chips. The reality is that while the human visual system can see millions of distinct colours, we only really have names for a small set of them. Most of the world’s languages have only a handful of basic terms for colour. For example, the Berinmo tribe of Papua New Guinea has terms for light, dark, red, and yellow, plus one that denotes both blue and green [1]. Maybe we have overcomplicated things somewhat when it comes to colour.

But this does highlight one of the issues with colour theory – the overabundance of information. There are various terms which seem to lack a clear definition, or which overlap with other terms. Who said colour wasn’t messy? It is. What is the difference between a colour model and a colour space? Why do we use RGB? Why do we care about the HSV colour space? This series will look at colour as it relates to photography, explained as simply as possible.

  1. Davidoff, J., Davies, I., Roberson, D., “Colour categories in a stone-age tribe”, Nature, 398, pp.203-204 (1999)

Photosite size and noise

Photosites have a certain amount of noise that occurs when the sensor is read out (electronic/readout noise), and a certain amount of noise per exposure (photon/shot noise). Collecting more light in a photosite allows for a higher signal-to-noise ratio (SNR): more signal relative to noise. This has to do with the accuracy of the measurement – a photosite that collects 10 photons gives a less accurate measurement than one that collects 50 photons. Consider the figure below. The larger photosite on the left is able to collect four times as many photons as the smaller photosite on the right. However, the photon “shot” noise acquired by the larger photosite is not four times that of the smaller photosite (it is only about twice as large), and as a consequence the larger photosite has a much better SNR.

Large versus small photosites

A larger photosite delivers a cleaner signal fundamentally because the accuracy of the measurement improves with the amount of light collected. Photon or shot noise can be approximately described as the square root of the signal (in photons). So as the number of photons collected by a photosite (the signal) increases, the shot noise grows more slowly than the signal, and the SNR improves as the square root of the signal.
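A quick sketch of this relationship, using the 10 versus 50 photon example from earlier:

```python
# A quick illustration of why collecting more photons improves accuracy:
# photon ("shot") noise grows roughly as the square root of the signal,
# so the SNR (signal / noise) also grows as the square root of the signal.
from math import sqrt

for photons in (10, 50):
    shot_noise = sqrt(photons)
    snr = photons / shot_noise           # equivalent to sqrt(photons)
    print(f"{photons} photons: noise ≈ {shot_noise:.1f}, SNR ≈ {snr:.1f}:1")

# 10 photons: noise ≈ 3.2, SNR ≈ 3.2:1
# 50 photons: noise ≈ 7.1, SNR ≈ 7.1:1
```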

Two different photosite sizes from differing sensors

Consider the following example, using two different photosite sizes from two different sensors. The first is from a Sony A7 III, a full frame (FF) sensor, with a photosite area of 34.9μm²; the second is from an Olympus E-M1(II) Micro-Four-Thirds (MFT) sensor, with a photosite area of 11.02μm². Let’s assume that, for the signal, one photon strikes every square micron of the photosite (a single exposure at 1/250s), and that the photon noise is √signal. Then the Olympus photosite will receive 11 photons for roughly every 3 electrons of noise, an SNR of 11:3, while the Sony will receive 35 photons for roughly every 6 electrons of noise, an SNR of 35:6. If both are normalized, we get ratios of 3.7:1 versus 5.8:1, so the Sony has the better SNR (for photon noise).

Photon (signal) versus noise
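Here is the same arithmetic as a small sketch. It assumes one photon per square micron per exposure, as above, and rounds the noise to whole electrons to match the 11:3 and 35:6 ratios.

```python
# A sketch of the arithmetic above: assume one photon arrives per square
# micron of photosite area during the exposure, and model the photon
# ("shot") noise as the square root of the signal. The noise is rounded
# to whole electrons, as in the ratios quoted above (11:3 and 35:6).
from math import sqrt

photosites = {
    "Sony A7 III (full frame)": 34.9,    # photosite area in µm²
    "Olympus E-M1(II) (MFT)":   11.02,
}

for name, area_um2 in photosites.items():
    signal = round(area_um2 * 1.0)       # photons collected (1 photon/µm²)
    noise = round(sqrt(signal))          # shot noise, in whole electrons
    print(f"{name}: {signal} photons : {noise} electrons of noise "
          f"≈ SNR {signal / noise:.1f}:1")

# Sony A7 III (full frame): 35 photons : 6 electrons of noise ≈ SNR 5.8:1
# Olympus E-M1(II) (MFT):   11 photons : 3 electrons of noise ≈ SNR 3.7:1
```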

If the amount of light is reduced, by stopping down the aperture or decreasing the exposure time, larger photosites will still receive more photons than smaller ones. For example, stopping down the aperture from f/2 to f/2.8 halves the amount of light passing through the lens. Larger photosites are also often better suited to long exposures, for example low-light scenes such as astrophotography. If we were to lengthen the exposure time from 1/250s to 1/125s, the number of photons collected by each photosite would double. The shot-noise SNR of the Sony would increase from 5.8:1 to roughly 8.8:1, while that of the Olympus would only increase from 3.7:1 to roughly 4.4:1.
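Continuing the sketch with a doubled exposure time:

```python
# Continuing the sketch: lengthening the exposure from 1/250s to 1/125s
# doubles the photon count, but the shot noise only grows as the square
# root, so the SNR improves by roughly √2 (about 1.4×), not 2×.
from math import sqrt

for name, area_um2 in [("Sony A7 III", 34.9), ("Olympus E-M1(II)", 11.02)]:
    for exposure_factor in (1, 2):       # 1 -> 1/250s, 2 -> 1/125s
        signal = round(area_um2 * exposure_factor)
        noise = round(sqrt(signal))      # whole electrons, as above
        print(f"{name} at {exposure_factor}x exposure: "
              f"SNR ≈ {signal / noise:.1f}:1")

# Sony A7 III: 5.8:1 -> 8.8:1; Olympus E-M1(II): 3.7:1 -> 4.4:1
```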

Photosite size and light

It doesn’t really matter what the overall size of a sensor is; it is the size of the photosites that matters. The area of a photosite affects how much light can be gathered: the larger the area, the more light that can be collected, resulting in a greater dynamic range and potentially better signal quality. Conversely, smaller photosites can provide more detail for a given sensor size. Let’s compare a series of sensors: a smartphone (Apple XR), an MFT sensor (Olympus E-M1(II)), an APS-C sensor (Ricoh GRII) and a full frame sensor (Sony A7 III).

A comparison of different photosite sizes (both photosite pitch and area are shown)

The surface area of a photosite on the Sony sensor is 34.93µm², meaning roughly 3× as many photons hit the full-frame photosite as hit the MFT photosite (11.02µm²), and nearly 18× as many as hit the photosite on the smartphone.
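As a rough sketch of these ratios (the smartphone photosite area is an assumed value chosen to match the “nearly 18×” figure, and the APS-C area is not quoted above, so it is omitted):

```python
# Relative light gathering, using photosite area as a proxy. The
# full-frame and MFT areas are quoted above; the smartphone area is an
# assumed value (~1.96µm², i.e. a pixel pitch of about 1.4µm) chosen to
# reproduce the "nearly 18x" figure.

photosite_areas_um2 = {
    "Sony A7 III (full frame)": 34.93,
    "Olympus E-M1(II) (MFT)":   11.02,
    "Apple XR (smartphone)":     1.96,   # assumption
}

baseline = photosite_areas_um2["Apple XR (smartphone)"]
for sensor, area in photosite_areas_um2.items():
    print(f"{sensor}: {area:.2f}µm², about {area / baseline:.1f}× "
          f"the light of the smartphone photosite")

# Sony A7 III: ~17.8×, Olympus E-M1(II): ~5.6×, Apple XR: 1.0×
```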

So how does this affect the images created? The size of a photosite relates directly to the amount of light that can be captured. Large photosites perform well in low-light situations, whereas small photosites struggle to capture light, leading to an increase in noise. Capturing more light means a higher signal output from a photosite, so it requires less amplification (a lower ISO) than a sensor with smaller photosites; it collects more light in the same exposure time and therefore responds with higher sensitivity. An exaggerated example is shown in the figure below.

Small vs. large photosites, normal vs. low light

Larger photosites are usually associated with larger sensors, which is why many full-frame cameras are good in low-light situations. Photosites do not exist in isolation, and other factors contribute to their light-gathering abilities, e.g. the microlenses that help gather more light for each photosite, and the small non-functional gaps between photosites.