Light from the sun appears to have no hue or colour of its own; it is “white” light. But it actually contains all colours, and if it is projected through a prism it separates into a band of colours like a rainbow. A coloured object, for example a flower, has colour because when light strikes it, the petals reflect the wavelengths corresponding to their hue while absorbing the others. In the example below the flower reflects the ‘magenta’ components, and the human eye, being sensitive to these reflected wavelengths, sees them as magenta. Dyes, such as those found in paints and colour prints, act just like the flower, selectively absorbing and reflecting certain wavelengths of light and thereby producing colour.
When taking an image on a digital camera, we are often provided with two kinds of histogram: the luminance histogram and the RGB histogram. The latter is depicted in various forms: as a single histogram showing all three channels of the RGB image, or as three separate histograms, one each for R, G, and B. So how useful is the RGB histogram on a camera? In the context of improving image quality, RGB histograms provide very little value. Some people might disagree, but adjusting a picture based on the individual colour channels on a camera is not realistic, and arguments to the contrary usually stem from a misunderstanding of how colour spaces work.
Consider the image example shown in Figure 1. This 3024×3024 pixel image has 9,144,576 pixels. On the left are the three individual RGB histograms, while on the right is the integrated RGB histogram with the R, G, and B histograms overlapped. As I have mentioned before, there is very little information to be gleaned from these two-dimensional RGB histograms – they do not really indicate how much red (R), green (G), or blue (B) there is in an image, because the three components only produce useful information when taken together. This is because RGB is a coupled colour space, where luminance and chrominance are intertwined. The combined RGB histogram is especially poor from an interpretation perspective, because it just muddles the information.
Fig.1: The types of RGB histograms found in-camera.
But to understand this better, we need to look at what information is contained in a colour image. An RGB colour image can be conceptualized as being composed of three layers: a red layer, a green layer, and a blue layer. Figure 2 shows the three layers of the image in Figure 1. Each layer represents the values associated with red, green, and blue. Each pixel in a colour image is therefore a triplet of values: a red, a green, and a blue, or (R,G,B), which together form a colour. Each of the R, G, and B components is essentially an 8-bit grayscale image, and can be viewed in the form of a histogram (also shown in Figure 2, and nearly always false-coloured with the appropriate red, green, or blue).
Fig.2: The R, G, B components of RGB.
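For readers who like to experiment, the per-channel histograms above are easy to reproduce. Below is a minimal sketch using Pillow and NumPy (the filename is a placeholder), building a 256-bin histogram for each of the R, G, and B components.

```python
import numpy as np
from PIL import Image

# Load the image as an (H, W, 3) array ("flower.jpg" is a placeholder).
img = np.asarray(Image.open("flower.jpg").convert("RGB"))

# One 256-bin histogram per channel: counts of each value 0..255.
for i, name in enumerate(("red", "green", "blue")):
    counts, _ = np.histogram(img[..., i], bins=256, range=(0, 256))
    print(f"{name}: peak at value {counts.argmax()} with {counts.max()} pixels")
```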
To understand a colour image further, we have to look at the RGB colour model, the method used in most image formats, e.g. JPEG. The RGB model can be visualized as a cube formed from the R, G, and B data. Each pixel in an image has an (R,G,B) value which provides a coordinate in the 3D space of the cube (which contains 256³, or 16,777,216 colours). Figure 3 shows two different ways of viewing image colour in 3D. The first is an all-colours view, which simply shows every colour contained in the image, without frequency information, giving an overall indication of how the colours are distributed. In the case of the example image there are 526,613 distinct colours. The second cube is a frequency-based 3D histogram, grouping like data together in “bins”; in this example the 3D histogram has 8³, or 512, bins (honestly easier to digest than 16 million-odd individual colours). Also shown is a single pixel with the RGB value (211,75,95), and its location in both 3D views.
Fig.3: How to really view the colours in RGB
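Both 3D views are straightforward to compute. The sketch below (again assuming NumPy and Pillow, with a placeholder filename) counts the distinct colours and builds the 8×8×8-bin frequency histogram; it also shows which bin the example pixel (211,75,95) falls into.

```python
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("flower.jpg").convert("RGB")).reshape(-1, 3)

# All-colours view: the set of distinct (R,G,B) triplets, without frequencies.
print("distinct colours:", len(np.unique(rgb, axis=0)))

# Frequency view: divide each channel into 8 levels, giving 8^3 = 512 bins.
bins = rgb // 32                          # value 0..255 -> bin 0..7
idx = bins[:, 0] * 64 + bins[:, 1] * 8 + bins[:, 2]
hist = np.bincount(idx, minlength=512)    # pixel count per 3D bin
print("non-empty bins:", np.count_nonzero(hist))

# The example pixel (211,75,95) lands in bin (6,2,2).
print(np.array([211, 75, 95]) // 32)
```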
In either case, you can visually see the distribution of colours. The same cannot be said of the 2D representations. Let’s look at how the colour information pans out in 2D form. The example pixel in Figure 3, at location (2540,2228), has the RGB value (211,75,95). Viewed in the context of the red, green, and blue histograms, it falls into a different bin in each (Figure 4). These 2D histograms provide no real context on the distribution of colours. All they show is the distribution of red, green, and blue values, from 0 to 255. What the red histogram tells us is that there are 49,972 pixels in the image whose first value (R) is 211. It may also suggest that the red contribution is concentrated towards the upper and lower bounds of the histogram (as shown by the two peaks). But the R value alone does not determine the colour: there is only one pure red, (255,0,0), and changing the example value from (211,75,95) to (211,75,195) yields a purple colour even though R is unchanged.
Fig.4: A single RGB pixel shown in the context of the separate histograms.
The information in the three histograms is essentially decoupled, and does not provide a cohesive interpretation of the colours in an image; for that you need a 3D view of some sort. Modifying one or more of the individual histograms will just lead to a colour shift in the image, which is fine if that is what is to be achieved. Should you view the colour histograms on a camera viewscreen? I honestly wouldn’t bother. They are more useful in an image manipulation app, but not in the confines of a small screen – stick to the luminance histogram.
Like our senses of taste and smell, colour helps us perceive and understand the world around us. It enriches our lives, helps us comprehend the aesthetic quality of art, and lets us differentiate between the many things around us. Yet colour is not everything. Pablo Picasso said that “Colors are only symbols. Reality is to be found in luminance alone.” But is this a valid reality?
There is a biological basis for the fact that colour and luminance (what most people think of as B&W) play distinct roles in our perception of art, or of real life – colour and luminance are analyzed by different portions of our visual system, and as such they are responsible for different aspects of visual perception. The parts of our brain that process information about colour are located several cm away from the parts that analyze luminance – as anatomically distinct as vision is from hearing. The part that processes colour information is found in the temporal lobe, whereas luminance information is processed in the parietal lobe.
Below is a comparison of Vincent van Gogh’s Green Wheat Field with Cypress (1889), with a version containing only luminance. Our ability to recognize the various regions of vegetation and to perceive their three-dimensional shape and the spatial organization of this scene depends almost entirely on the luminance of the paints used, and not their colours.
Green Wheat Field with Cypress (1889)
Yet a world without colour is one that forfeits crucial elements. While luminance provides the structure of a scene, colour allows us to see the scene more precisely. In the colour image above, it lets us better differentiate the various greens of the grasses and the blues of the sky. In the B&W image, the grasses are less distinct, the vibrancy of the green trees and bushes is absent, and there is very little differentiation between the sides and roof of the cottage. Of course we must always remember that colour is almost never seen exactly as it physically is: all colour perception is relative. The images below compare the luminance and chrominance information for the image above – the chrominance is extracted from the HSB colour space, and incorporates the hue and saturation components. Note how it lacks the “structural” information bestowed by light and dark.
Luminance
Chrominance (i.e. colour information)
Is luminance more important than colour? In some ways yes, because of the way our eyes have evolved. Our eyes perceive light and dark as well as colour through rods and cones. Rods are very sensitive to light and dark (and give us good vision in low light), whereas cones are responsible for colour information. Rods are also far more plentiful: across the retina as a whole there are roughly 20 times more rods (≈100-120 million) than cones (≈5-6 million). In reality, the detail in what we perceive in a scene is carried mostly by the information about light and dark. So reality can be found in luminance alone, in the sense that even without colour we can still perceive what is in a scene – people with achromatopsia, a complete lack of colour vision, do exactly that.
But for most humans colour is an integral part of vision; we cannot switch it off at will the way we engage a B&W mode on a camera. It allowed our early ancestors to spot colourful ripe fruit more easily against a background of mostly green forest, and it allows us to appreciate the world around us.
Further reading:
Margaret Livingstone, Light Vision, Harvard Medical Alumni Bulletin, pp.15-23 (Autumn, 2003)
One of the more interesting aspects of photographing outdoors is the colour of the sky. You know the situation – you’re out photographing and the sky just isn’t as vivid as you wished it was. This happens a lot in the warmer months of the year.
The sky isn’t blue of course. We interpret it as blue because of light and its interaction with the atmosphere. The shade of blue also changes; the difference is most noticeable in fall and winter, when the sky appears a more vivid blue than it does throughout the summer months (which is why you can never expect a really vivid blue sky when travelling in summer).
The blue of the sky is more saturated further from the sun. Note that in this image taken in Toronto in May, the right side, furthest from the sun appears more saturated.
Firstly, the blue colour of the sky is due to the scattering of sunlight off molecules in the atmosphere that are smaller than the wavelength of light (approximately 1/10th the wavelength). The atmosphere is made up of gases, e.g. nitrogen, oxygen, and argon, mixed with particles such as dust, pollen, and pollution. The scattering is known as Rayleigh scattering, and it is most effective at the short-wavelength (400nm) end of the visible spectrum. Therefore the light scattered down to earth at a large angle with respect to the direction of the sun’s light is predominantly at the blue end of the spectrum. Because of this wavelength-selective scattering, more blue light diffuses throughout the atmosphere than other colours, producing the familiar blue sky.
An illustration of how Rayleigh Scattering works in the atmosphere.
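The wavelength dependence is strong: Rayleigh scattering varies as 1/λ⁴. A quick back-of-the-envelope check (using illustrative wavelengths of 450nm for blue and 650nm for red) shows why the scattered light is dominated by blue:

```python
# Rayleigh scattering strength goes as 1/lambda^4.
blue, red = 450e-9, 650e-9   # illustrative wavelengths, in metres
print((red / blue) ** 4)     # ~4.4: blue scatters ~4.4x more strongly than red
```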
During the summer months, when the sun is higher in the sky, light does not have to travel as far through the atmosphere to reach our eyes, so there is less Rayleigh scattering. Summer skies also often appear somewhat hazy, veiled by a thin white sheen: when light encounters large particles suspended in the air, like dust or water droplets, all wavelengths are scattered roughly equally. This process is known as Mie scattering, and it produces white light, e.g. making clouds appear white. In summer in particular, increased humidity increases Mie scattering, and as a result the sky tends to be relatively muted, or pale blue.
The visibility of clouds can be attributed to Mie scattering, which is not very wavelength dependent.
In the fall and winter the Northern Hemisphere is tilted away from the sun and the sun’s angle is lower, which increases the amount of Rayleigh scattering (light has to travel further through the atmosphere, so the scattering of shorter wavelengths is more complete). The cooler air during this period also holds less moisture, diminishing Mie scattering. Taken together, these two factors can produce skies that are vividly blue.
Angle of the sun, summer versus winter.
Q: If the wavelength of violet is only 380nm, why don’t we see more purple skies? A: Purple skies are rare because the sun emits a higher concentration of blue light than violet. Furthermore, our eyes are more sensitive to blue than to violet, so to us the sky appears blue.
Q: What particles cause Rayleigh scattering? A: Mostly the nitrogen and oxygen molecules of the air itself, along with very small specks of dust.
RGB is used to store colour images in image file formats and to view images on a screen, but it’s honestly not very useful for day-to-day image processing. This is because in an RGB image the luminance and chrominance information is coupled together; when any of the R, G, or B components is modified, the colours within the image change. For many operations we need another kind of colour space – one where chrominance and luminance can be separated. One of the most common is HSV, or Hue-Saturation-Value.
HSV as an upside-down, six-sided pyramid
HSV/HSB as a cylindrical space
HSV is derived from the RGB model, is sometimes known as HSB (Hue, Saturation, Brightness), and characterizes colour in terms of hue and saturation. It was created by Alvy Ray Smith in 1978 [1]. The space is traditionally represented as an upside-down hexcone, or six-sided pyramid – however mathematically the space is conceptualized as a cylinder. HSV/HSB is a perceptual colour space, i.e. it decomposes colour based on how it is perceived, rather than how it is physically sensed, as is the case with RGB. This makes HSV (and its associated colour spaces) more aligned with an intuitive understanding of colour.
The top of the HSB colour space cylinder showing hue and saturation (left), and a slice through the cylinder showing brightness versus saturation (right).
A point within the HSB space is defined by hue, saturation, and brightness.
Hue represents the chromaticity, or pure colour. It is specified as an angle from 0° to 360° – red corresponds to 0°, green to 120°, and blue to 240°.
Saturation is the vibrancy, vividness, or purity of a colour. It is defined as a percentage, measured from the central vertical axis (0%) to the exterior shell of the cylinder (100%). A colour with 100% saturation is the purest colour possible, while 0% saturation yields grayscale, i.e. completely desaturated.
Value/Brightness is a measure of the lightness or darkness of a colour. It is specified along the central axis of the cylinder, ranging from black at the bottom (0%, no intensity) to white at the top (100%, full intensity).
The values are very much interdependent: if the value of a colour is set to zero, then hue and saturation do not matter, as the colour will be black. Similarly, if the saturation is set to zero, then hue does not matter, as there will be no colour.
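This interdependence is easy to verify with Python’s built-in colorsys module (a minimal sketch; colorsys expresses hue as a fraction of 360°, and all components in the range 0-1):

```python
import colorsys

print(colorsys.hsv_to_rgb(0.0, 1.0, 1.0))  # hue 0deg, full S and V -> pure red (1, 0, 0)
print(colorsys.hsv_to_rgb(0.0, 1.0, 0.0))  # value 0 -> black, hue and saturation irrelevant
print(colorsys.hsv_to_rgb(0.5, 0.0, 0.5))  # saturation 0 -> mid grey, hue irrelevant
```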
An example of a colour image, and two views of its respective HSB colour space.
Manipulating images in HSB is much more intuitive. To lighten a colour in HSB, it is as simple as increasing the brightness value, while the same change in an RGB image requires scaling each of the R, G, and B components proportionally. Increasing saturation, i.e. making an image more vivid, is just as easily achieved in this colour space.
Note that converting an image from RGB to HSB involves a nonlinear transformation.
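As a concrete sketch of the “lighten via brightness” idea (assuming Pillow, NumPy, and Matplotlib are available, with placeholder filenames), the round trip through HSV looks like this:

```python
import numpy as np
from PIL import Image
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

rgb = np.asarray(Image.open("photo.jpg").convert("RGB")) / 255.0
hsv = rgb_to_hsv(rgb)                            # nonlinear RGB -> HSV
hsv[..., 2] = np.clip(hsv[..., 2] * 1.2, 0, 1)   # raise brightness (V) by 20%
Image.fromarray((hsv_to_rgb(hsv) * 255).astype(np.uint8)).save("photo_brighter.jpg")
```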
A standard colour image is 8-bit per channel (24 bits per pixel), containing 256³ = 16,777,216 possible colours. That seems like a lot, right? But can that many colours even be distinguished by the human visual system? The quick answer is no – or rather, we don’t know for certain. Research into the number of actually discernible colours is a bit of a rabbit hole.
A 1998 paper [1] suggests that the number of discernible colours may be around 2.28 million – the authors determined this by calculating the number of colours within the boundary of the MacAdam limits in the CIELAB uniform colour space [2] (for those who are interested). However even the authors suggested this 2.28M figure may be somewhat of an overestimate. A larger figure of 10 million colours (from 1975) is often cited [3], but there is no information on the origin of this figure. A similar figure of 2.5 million colours was cited in a 2012 article [4], while a more recent article [5] gives a conservative estimate of 40 million distinguishable object colour stimuli. Is it even possible to realistically prove such large numbers? Somewhat unlikely – it may be impossible to quantify, ever. Estimates based on existing colour spaces may be as good as it gets, and frankly even 1-2 million colours is a lot.
Of course the actual number of colours someone sees also depends on the number and distribution of cones in the eye. For example, dichromats have only two types of cones able to perceive colour; this colour deficiency manifests differently depending on which cone type is missing. The majority of the population are trichromats, i.e. they have three types of cones. Lastly there are very rare individuals, tetrachromats, who have four different cone types. Supposedly tetrachromats can see 100 million colours, but the condition is thought to exist only in women, and in reality nobody knows how many people are potentially tetrachromatic [6] (the only definitive way of finding out whether you have tetrachromacy is a genetic test).
The reality is that few if any real pictures contain 16 million colours. Here are some examples (all images contain 9 million pixels), shown together with the hue distribution from the HSB colour space. The first example is a picture of a wall of graffiti art in Toronto. This is an atypical image because it contains a lot of varied colour; most images do not. Even so, it has only 740,314 distinct colours – just 4.4% of the potential colours available.
The next example is a more natural picture, of two buildings in Nova Scotia. It is quite representative of images such as landscapes, which are skewed towards a narrow band of colours. It contains only 217,751 distinct colours, or 1.3% of the 16.77 million.
Finally we have a foody-type image that doesn’t seem to have a lot of differing colours, but in reality it does: 635,026 distinct colours (3.8%). What these examples show is that most images contain fewer than a million distinct colours. So while an image has the potential to contain 16,777,216 colours, in all likelihood it won’t.
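The distinct-colour percentages quoted above can be reproduced with a few lines of NumPy (a sketch; the filename is a placeholder). Packing each triplet into a single 24-bit integer makes the unique count cheap:

```python
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("graffiti.jpg").convert("RGB")).reshape(-1, 3).astype(np.uint32)
packed = (rgb[:, 0] << 16) | (rgb[:, 1] << 8) | rgb[:, 2]   # one 24-bit int per pixel
n = len(np.unique(packed))
print(f"{n} distinct colours = {100 * n / 2**24:.1f}% of 16,777,216")
```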
What about 10-bit colour? Then we’re talking about 1024³, or 1,073,741,824, colours – which is frankly kind of ridiculous.
Further reading:
Pointer, M.R., Attridge, G.G., “The number of discernible colours”, Color Research and Application, 23(1), pp.52-54 (1998)
MacAdam, D.L., “Maximum visual efficiency of colored materials”, Journal of the Optical Society of America, 25, pp.361-367 (1935)
Judd, D.B., Wyszecki, G., Color in Business, Science and Industry, Wiley, p.388 (1975)
Flinkman, M., Laamanen, H., Vahimaa, P., Hauta-Kasari, M., “Number of colors generated by smooth nonfluorescent reflectance spectra”, Journal of the Optical Society of America A, 29(12), pp.2566-2575 (2012)
Kuehni, R.G., “How Many Object Colors Can We Distinguish?”, Color Research and Application, 41(5), pp.439-444 (2016)
Jordan, G., Mollon, J., “Tetrachromacy: the mysterious case of extra-ordinary color vision”, Current Opinion in Behavioral Sciences, 30, pp.130-134 (2019)
Most colour images are stored using a colour model, and RGB is the most commonly used one. Digital cameras typically offer a specific RGB colour space such as sRGB. RGB is commonly used because it mirrors the way colour is sensed, and has a good amount of theory underpinning it: a camera sensor detects the light reflected from an object and differentiates it into the primary colours red, green, and blue.
An RGB image is represented by M×N colour pixels (M = width, N = height). When viewed on a screen, each pixel is displayed as a specific colour; deconstructed, however, an RGB image is actually composed of three layers. These layers, or component images, are each M×N pixels in size, and represent the values associated with red, green, and blue. An example of an RGB image decoupled into its R-G-B component images is shown in Figure 1. None of the component images contains any colour – they are grayscale. An RGB image may thus be viewed as a stack of three grayscale images, where corresponding pixels in the R, G, and B images together form the colour that is seen when the image is visualized.
Fig.1: A “deconstructed” RGB image
The component images typically have pixel values in the range 0 to 2^B − 1, where B is the number of bits per channel. If B=8, the values in each component image range from 0 to 255. The number of bits used to represent the component pixel values determines the bit depth of the RGB image: if each component image is 8-bit, the corresponding RGB image is a 24-bit RGB image (generally the standard). The number of possible colours in an RGB image is then (2^B)³, so for B=8 there are 16,777,216 possible colours.
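The arithmetic is simple enough to check directly:

```python
# Levels per channel and total possible colours for common bit depths B.
for B in (8, 10, 12):
    print(f"B={B}: {2**B} levels/channel, {(2**B)**3:,} possible colours")
```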
Coupled together, each RGB pixel is described by a triplet of values, each in the range 0 to 255. It is this triplet that the output system interprets to produce the colour perceived by the human visual system. An example of an RGB pixel’s triplet value and the associated R-G-B component values is shown in Figure 2: the lime-green colour shown is composed of the triplet (193, 201, 64), i.e. Red=193, Green=201, and Blue=64.
Fig.2: Component values of an RGB pixel
One way of visualizing the R, G, B components of an image is by means of a 3D colour cube. An example is shown in Figure 3. The RGB image shown is 310×510, or 158,100 pixels. Next to it is a colour cube whose three axes, R, G, and B, each range over 0-255, producing a cube with 16,777,216 possible elements. Each of the image’s 122,113 unique colours is represented as a point in the cube (only 0.7% of the available colours).
Fig.3: Example of colours in an RGB 3D cube
The caveat of the RGB colour model is that it is not a perceptual one, i.e. chrominance and luminance are not separated from one another but coupled together. Note that some colour models/spaces are decoupled, i.e. they separate luminance information from chrominance information; a good example is HSV (Hue, Saturation, Value).
Colour can be divided into luminosity and chromaticity. The CIE XYZ colour space was designed such that Y is a measure of the luminance of a colour. Consider the 3D plane described by X+Y+Z=1, as shown in Figure 1. A colour point A=(Xa,Ya,Za) is then found by intersecting the line SA (where S is the origin, X=Y=Z=0) with this plane within the CIE XYZ colour volume. As it is difficult to visualize 3D spaces, most chromaticity diagrams discard luminance and show the maximum extent of the chromaticity of a particular colour space in 2D. This is achieved by dropping the z component and projecting onto the xy plane.
Fig.1: CIE XYZ chromaticity diagram derived from CIE XYZ open cone.
Fig.2: RGB colour space mapped onto the chromaticity diagram
This diagram shows all the hues perceivable by the standard observer for the various (x,y) pairs, and indicates the spectral wavelengths of the dominant single-frequency colours. When y is plotted against x for the spectrum colours, it forms the horseshoe- (or shark-fin-) shaped outline commonly referred to as the CIE chromaticity diagram, in which any (x,y) point defines the hue and saturation of a particular colour.
Fig.3: The CIE Chromaticity Diagram for CIE XYZ
The xy values along the curved boundary of the horseshoe correspond to the “spectrally pure”, fully saturated colours, with wavelengths ranging from 360nm (violet) to 780nm (red). The area within this boundary contains all the colours that can be generated from the spectral colours on the boundary. The closer a colour is to the boundary the more saturated it is, with saturation falling towards the “neutral point” in the centre of the diagram. The two extremes, violet (360nm) and red (780nm), are connected by an imaginary line representing the purples (combinations of red and blue) that do not correspond to spectral colours. The “neutral point” at the centre of the horseshoe (x=y≈0.33) has zero saturation, and is typically marked D65, corresponding to a colour temperature of 6500K.
Fig.4: Some characteristics of the CIE Chromaticity Diagram
The Commission Internationale de l’Eclairage (French for International Commission on Illumination), or CIE, is an organization formed in 1913 to create international standards related to light and colour. In 1931 the CIE introduced CIE 1931, or CIEXYZ, a colorimetric colour space created to map out all the colours that can be perceived by the human eye. CIEXYZ was based on statistics derived from extensive measurements of human visual perception under controlled conditions.
In the 1920s, colour-matching experiments were performed independently by the physicists W. David Wright and John Guild, both in England [2]. The experiments were carried out with 7 (Guild) and 10 (Wright) observers. Each experiment involved a subject looking through a hole which allowed a 2° field of view. On one side was a reference colour projected by a light source; on the other were three adjustable light sources (the primaries were set to R=700.0nm, G=546.1nm, and B=435.8nm). The observer adjusted the intensities of the three primaries until their mixture was indistinguishable from the reference light, and this was repeated for every visible wavelength. The result of the colour-matching experiments was a table of RGB triplets for each wavelength. These experiments were not about describing colours with qualities like hue and saturation; rather, they attempted to capture which combinations of light appear to be the same colour to most people.
Fig.1: An example of the experimental setup of Guild/Wright
In 1931 the CIE amalgamated Wright and Guild’s data and proposed two sets of colour-matching functions: CIE RGB and CIE XYZ. Based on the responses in the experiments, values were plotted to reflect how the average human eye senses the colours of the spectrum, producing three intensity curves, one per primary, describing how to mix all colours of the visible spectrum (Figure 2). Some of the values for red were negative, and the CIE decided it would be more convenient to work in a colour space where the coefficients were always positive – the XYZ colour-matching functions (Figure 3). The new matching functions had certain required characteristics: (i) they must always be greater than or equal to zero; (ii) the y function would describe only the luminosity; and (iii) the white point would be at x=y=z=1/3. This produced the CIE XYZ colour space, also known as CIE 1931.
Fig.2: CIE RGB colour matching functions
Fig.3: CIE XYZ colour matching functions
The CIE XYZ colour space defines a quantitative link between distributions of wavelengths in the visible electromagnetic spectrum and physiologically perceived colours in human colour vision. The space is based on three imaginary primaries, X, Y, and Z, where the Y component corresponds to the luminance (a measure of perceived brightness) of a colour. All the visible colours reside inside an open cone-shaped region, as shown in Figure 4. CIE XYZ is thus a mathematical generalization of the colour portion of the HVS (human visual system), which allows us to define colours.
Fig.4: CIE XYZ colour space (G denotes the axis of neutral gray).
Fig.5: RGB mapped to CIE XYZ space
The luminance in XYZ space increases along the Y axis, starting at the black point (X=Y=Z=0). The colour hue is independent of the luminance, and hence independent of Y. The CIE also defined a means of describing hue and saturation via three normalized coordinates x, y, and z (where x+y+z=1):
x = X / (X+Y+Z)
y = Y / (X+Y+Z)
z = Z / (X+Y+Z) = 1 − x − y
The x and y components can then be taken as the chromaticity coordinates, determining colour for a given luminance. This system is called CIE xyY, because a colour is defined by the chromaticity coordinates x and y together with the luminance coordinate Y. More on this in the next post on chromaticity diagrams.
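As a minimal sketch in Python, the XYZ → xyY reduction is just the normalization above:

```python
def xyz_to_xyY(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s, Y   # (x, y) chromaticity plus luminance Y; z = 1 - x - y

# D65 white point -> x ~ 0.313, y ~ 0.329
print(xyz_to_xyY(0.9505, 1.0, 1.089))
```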
The RGB colour space is related to XYZ space by a linear coordinate transformation, and is embedded in the XYZ space as a distorted cube (see Figure 5). CIE RGB can be mapped onto XYZ using the following set of equations:
X = 0.49000R + 0.31000G + 0.20000B
Y = 0.17697R + 0.81240G + 0.01063B (luminance)
Z = 0.00000R + 0.01000G + 0.99000B
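In matrix form (a sketch with NumPy, using the coefficients above), equal-energy white R=G=B=1 maps to X=Y=Z=1, i.e. x=y=1/3:

```python
import numpy as np

# CIE RGB -> XYZ, as a single matrix product.
M = np.array([[0.49000, 0.31000, 0.20000],
              [0.17697, 0.81240, 0.01063],
              [0.00000, 0.01000, 0.99000]])

print(M @ np.array([1.0, 1.0, 1.0]))   # -> [1. 1. 1.]
```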
CIEXYZ is non-uniform with respect to human visual perception, i.e. a fixed distance in XYZ space is not perceived as a uniform colour change throughout the space. CIE XYZ is therefore often used as an intermediary in deriving a perceptually uniform space such as CIELAB (Lab) or CIELUV (Luv).
CIE 1976 L*u*v*, or CIELUV, is an easy-to-calculate transformation of CIE XYZ which is more perceptually uniform. Luv was created to correct the distortion of CIEXYZ by distributing colours approximately proportionally to their perceived differences.
CIE 1976 L*a*b*, or CIELAB, provides perceptually more uniform colour differences, and its L* lightness parameter correlates better with perceived brightness. Lab remaps the visible colours so that they extend equally on two axes: the two colour components a* and b* specify hue and saturation along the green-red and blue-yellow axes respectively.
In 1964 another set of experiments was carried out allowing a 10° field of view, known as the CIE 1964 supplementary standard colorimetric observer. CIE XYZ is still the most commonly used reference colour space, although it is slowly being pushed aside by the CIE 1976 spaces. There is a lot of information on CIE XYZ and its derivative spaces; the reader interested in how CIE 1931 came about is referred to [1,4]. CIELAB is the most commonly used CIE colour space in imaging and the printing industry.
Further Reading
Fairman, H.S., Brill, M.H., Hemmendinger, H., “How the CIE 1931 color-matching functions were derived from Wright-Guild data”, Color Research and Application, 22(1), pp.11-23, 259 (1997)
So we have colour models, colour spaces, gamuts, etc. How do these things relate to digital photography and the acquisition of images? While a 24-bit RGB image can technically provide up to 16.7 million colours, not all of these colours are actually used.
Two of the most commonly used RGB colour spaces are sRGB and Adobe RGB. They are important in digital photography because they are usually the two choices provided by digital cameras. For example, in the Ricoh GR III the “Image Capture Settings” allow the “Color Space” to be set to either sRGB or Adobe RGB. These choices relate to the JPG files created, not to the RAW files (although they may be used in the embedded JPEG thumbnails). All these colour spaces do is set the range of colours available to the camera.
It should be noted that choosing sRGB or Adobe RGB for storing a JPEG makes no difference to the number of colours which can be stored; the difference is in the range of colours that can be represented. sRGB represents the same number of colours as Adobe RGB, but the range it covers is narrower (as seen when the two are compared in a chromaticity diagram). Adobe RGB has a wider range of possible colours, but the step between individual colours is bigger than in sRGB.
sRGB
Short for “standard” RGB, it was literally described by its authors as the “Standard Default Color Space for the Internet”. sRGB was developed jointly by HP and Microsoft in 1996 with the goal of creating a precisely specified colour space based on standardized mappings to the CIEXYZ model.
sRGB is now the most common colour space found in modern electronic devices, e.g. digital cameras, the web, etc. sRGB exhibits a relatively small gamut, covering just 33.3% of visible colours; however, it includes most colours which can be reproduced by display devices. EXIF (JPEG) and PNG are based on sRGB colour data, making it the de facto standard for digital cameras and other imaging devices. Shown on the CIE chromaticity diagram, sRGB occupies the same gamut as Rec.709, the standard colour space for HDTV.
Adobe RGB
This colour space was defined by Adobe Systems in 1998. It is optimized for printing and is the de facto standard in professional colour-imaging environments. Adobe RGB covers 38.8% of visible colours, 17% more than sRGB, extending into richer cyans and greens. Converting from Adobe RGB to sRGB results in the loss of highly saturated colour data and of tonal subtleties. Adobe RGB is typically used in professional photography and for picture-archive applications.
Adobe RGB and sRGB shown in CIELab space
sRGB or Adobe RGB?
For general use, the best option may be sRGB, because it is the standard colour space. It doesn’t have the largest gamut, and may not be ideal for high-quality imaging, but nearly every device can handle an image embedded with the sRGB colour space.
sRGB is suitable for non-professional printing.
Adobe RGB is suited to professional printing, especially good for saturated colours.
A typical computer monitor can display most of the sRGB range but only about 75% of the range found in Adobe RGB.
Adobe RGB can be converted down to sRGB, but converting sRGB to Adobe RGB cannot recover the wider-gamut colours.
An Adobe RGB image displayed on a device with a sRGB profile will appear dull and desaturated.
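In practice the conversion between the two is done through ICC profiles. Below is a hedged sketch using Pillow’s ImageCms module; note that Pillow ships only a built-in sRGB profile, so the Adobe RGB profile path ("AdobeRGB1998.icc") is a placeholder you must supply yourself, as is the image filename.

```python
from PIL import Image, ImageCms

img = Image.open("photo_adobergb.jpg")               # placeholder filename
adobe = ImageCms.getOpenProfile("AdobeRGB1998.icc")  # profile file supplied by you
srgb = ImageCms.createProfile("sRGB")                # Pillow's built-in sRGB profile
ImageCms.profileToProfile(img, adobe, srgb).save("photo_srgb.jpg")
```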