Demystifying Colour (i): visible colour

Colour is the basis of human vision. Everything appears coloured. Humans see in colour, or rather the cones in our eyes interpret wavelengths of red, green and blue when they enter the eye in varying proportions, enabling us to see a full gamut of colours. The miracle of the human eye aside, how does colour exist? Are trees really green? Bananas yellow? Colour is not really inherent in objects; rather, the surface of an object reflects some wavelengths and absorbs others, so the human eye only perceives the reflected ones. The clementine in the figure below reflects certain wavelengths, which we perceive as orange. Without light there is no colour.

Reflected wavelengths = perceived colours

Yet even for the simplest elements of colour theory, like the visible spectrum, it is hard to find an exact definition. Light is a form of electromagnetic radiation. Its physical property is described in terms of wavelength (λ) in units of nanometres (nm, or 10⁻⁹ metres). Human eyes can perceive the colours associated with the visible light portion of the electromagnetic spectrum. It was Isaac Newton who in 1666 described the spectrum of white light as being divided into seven distinct colours: red, orange, yellow, green, blue, indigo and violet. Yet in many renditions, indigo has been replaced by blue, and blue by cyan. Some renditions have only six colours (like Pink Floyd's album cover for The Dark Side of the Moon), others have eight. It turns out indigo likely doesn't need to be there, because it's hard to tell indigo apart from blue and violet. Another issue is the varied ranges given for the visible spectrum in nanometres. Some sources define it as broadly as 380-800nm, while others narrow it to 420-680nm. Confusing, right? Well, the CIE suggests that there are no precise limits for the spectral range of visible radiation: the lower limit lies somewhere between 360 and 400nm, and the upper limit between 760 and 830nm.

The visible spectrum of light (segmented into eight colours)
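As a rough illustration (and it is only that), here is a short Python sketch that segments the conventional visible range into seven named bands. The boundary values are common approximations chosen for this example, not definitive limits; as the CIE's position above makes clear, no such limits exist.

import bisect

# Approximate wavelength bands (nm) for the visible spectrum.
# Boundaries are conventional rather than exact -- sources disagree,
# so treat these values as illustrative assumptions.
BANDS = [
    (380, "violet"), (450, "blue"), (485, "cyan"), (500, "green"),
    (565, "yellow"), (590, "orange"), (625, "red"), (750, None),
]

def spectral_band(wavelength_nm):
    """Return an approximate colour name for a visible wavelength."""
    for (lo, name), (hi, _) in zip(BANDS, BANDS[1:]):
        if lo <= wavelength_nm < hi:
            return name
    return "outside the (conventional) visible range"

print(spectral_band(600))   # orange -- roughly the clementine above
print(spectral_band(900))   # outside the (conventional) visible range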

Thankfully for the purposes of photography we don't have to delve that deeply into specific wavelengths of light. In fact we don't even have to think too much about the exact wavelength of colours like red, because frankly the colour "red" is just a cultural association with a particular wavelength. Basically colours are named for the sake of communication, and so that we can differentiate thousands of different paint chips. The reality is that while the human visual system can see millions of distinct colours, we only really have names for a small set of them. Many of the world's languages have only around five basic colour terms. For example, the Berinmo tribe of Papua New Guinea has a term for light, dark, red, yellow, and one that denotes both blue and green [1]. Maybe we have overcomplicated things somewhat when it comes to colour.

But this does highlight one of the issues with colour theory: the overabundance of information. There are various terms which seem to lack a clear definition, or which overlap with other terms. Who said colour wasn't messy? It is. What is the difference between a colour model and a colour space? Why do we use RGB? Why do we care about the HSV colour space? This series will look at aspects of colour as they relate to photography, explained as simply as possible.

  1. Davidoff, J., Davies, I., Roberson, D., “Colour categories in a stone-age tribe”, Nature, 398, pp.203-204 (1999)

Photosite size and noise

Photosites have a certain amount of noise that occurs when the sensor is read out (electronic/readout noise), and a certain amount of noise per exposure (photon/shot noise). Collecting more light in photosites allows for a higher signal-to-noise ratio (SNR): more signal, less noise. The improvement has to do with the accuracy with which the light is measured – a photosite that collects 10 photons gives a relatively less accurate measurement than one that collects 50 photons. Consider the figure below. The larger photosite on the left is able to collect four times as many light photons as the smaller photosite on the right. However, the photon "shot" noise acquired by the larger photosite is not four times that of the smaller photosite, and as a consequence the larger photosite has a much better SNR.

Large versus small photosites

A larger photosite has less noise fundamentally because the accuracy of a measurement from a sensor is proportional to the amount of light it collects. Photon (shot) noise can be approximated as the square root of the signal (the number of photons collected). So as the signal increases, the shot noise increases more slowly, as its square root – which means the SNR, signal divided by noise, also grows as the square root of the signal (SNR = S/√S = √S).
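To see this square-root behaviour concretely, here is a minimal Python sketch that simulates photon arrival as a Poisson process. The photon counts (10 and 50, echoing the example above) are purely illustrative, not taken from any real sensor.

import numpy as np

rng = np.random.default_rng(42)

# Photon arrival follows a Poisson distribution, whose standard
# deviation (the shot noise) equals the square root of its mean.
for mean_photons in (10, 50):
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"{mean_photons:>2} photons: simulated SNR = {snr:.1f}, "
          f"predicted sqrt({mean_photons}) = {mean_photons ** 0.5:.1f}")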

Two different photosite sizes from differing sensors

Consider the following example, using two differently sized photosites from different sensors. The first is from a Sony A7 III, a full frame (FF) sensor, with a photosite area of 34.9μm²; the second is from an Olympus E-M1(II) Micro Four Thirds (MFT) sensor with a photosite area of 11.02μm². Let's assume that for the signal, one photon strikes every square micron of the photosite (a single exposure at 1/250s), and that the photon noise is √signal. Then the Olympus photosite will receive 11 photons against √11 ≈ 3.3 electrons of noise, an SNR of about 3.3:1. The Sony will receive 35 photons against √35 ≈ 5.9 electrons of noise, an SNR of about 5.9:1. So the Sony has the better SNR (for photon noise).

Photon (signal) versus noise

If the amount of light is reduced, by stopping down the aperture or decreasing the exposure time, then larger photosites will still receive more photons than smaller ones. For example, stopping down the aperture from f/2 to f/2.8 halves the amount of light passing through the lens. Larger photosites are also often better suited to long exposures, for example low-light scenes such as astrophotography. If we lengthen the exposure from 1/250s to 1/125s, the number of photons collected by a photosite doubles. The shot-noise SNR of the Sony would increase from 5.9:1 to 8.4:1, while that of the Olympus would only increase from 3.3:1 to 4.7:1.
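Here is a minimal Python sketch of the arithmetic above, under the same assumption of one photon per square micron per 1/250s exposure, with the photosite areas as quoted in the text:

import math

# Photosite areas in square microns, as quoted above.
sensors = {"Sony A7 III (FF)": 34.9, "Olympus E-M1(II) (MFT)": 11.02}

for name, area in sensors.items():
    for exposure, scale in (("1/250s", 1), ("1/125s", 2)):
        signal = area * scale        # photons: 1 per square micron, doubled at 1/125s
        noise = math.sqrt(signal)    # shot noise = square root of the signal
        print(f"{name} at {exposure}: {signal:.0f} photons, "
              f"noise {noise:.1f}, SNR {signal / noise:.1f}:1")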

Photosite size and light

It doesn't really matter what the overall size of a sensor is; it is the size of the photosites that matters. The area of a photosite affects how much light can be gathered: the larger the area, the more light can be collected, resulting in a greater dynamic range and potentially better signal quality. Conversely, smaller photosites can provide more detail for a given sensor size. Let's compare a series of sensors: a smartphone (Apple XR), an MFT sensor (Olympus E-M1(II)), an APS-C sensor (Ricoh GR II) and a full frame sensor (Sony A7 III).

A comparison of different photosite sizes (both photosite pitch and area are shown)

The surface area of a photosite on the Sony sensor is 34.93µm², meaning roughly 3× more photons hit the full-frame photosite than the MFT photosite (11.02µm²), and nearly 18× more than the photosite on the smartphone. So how does this affect the images created?
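The ratios are easy to verify with a few lines of Python. Note the smartphone area below (~1.96µm², i.e. a pixel pitch of roughly 1.4µm) is inferred from the stated 18× figure and should be treated as an assumption:

# Photosite areas in square microns; the Sony and Olympus values are
# from the text, the Apple XR value assumes a ~1.4 micron pixel pitch.
areas = {
    "Sony A7 III (FF)": 34.93,
    "Olympus E-M1(II) (MFT)": 11.02,
    "Apple XR (smartphone)": 1.96,
}

full_frame = areas["Sony A7 III (FF)"]
for name, area in areas.items():
    print(f"{name}: {area:5.2f} square microns, collects "
          f"{full_frame / area:.1f}x less light than the full frame photosite")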

The size of a photosite relates directly to the amount of light that can be captured. Large photosites perform well in low-light situations, whereas small photosites struggle to capture light, leading to an increase in noise. Being able to capture more light means a higher signal output from a photosite, which in turn requires less amplification (a lower ISO) than a sensor with smaller photosites. A larger photosite collects more light in the same exposure time and therefore responds with higher sensitivity. An exaggerated example is shown in the figure below.

Small vs. large photosites, normal vs. low light

Larger photosites are usually associated with larger sensors, which is why many full-frame cameras perform well in low-light situations. Photosites do not exist in isolation, and other factors contribute to their light-gathering ability, e.g. the microlenses that help gather more light for each photosite, and the small non-functional gaps between photosites.