What is image resolution?

Sometimes a technical term gets used without much thought to its meaning, and before you know it, it becomes an industry standard. This is the case with the term “image resolution”, which has become the standard means of describing how much detail is portrayed in an image. The problem is that the term resolution can mean different things in photography. In one context it is used to describe the pixel density of devices (in DPI or PPI). For example a screen may have a resolution of 218 ppi (pixels-per-inch), and a smartphone might have a resolution of 460 ppi. There is also sensor resolution, which is concerned with photosite density, based on sensor size. You can see how this can get confusing.

Fig.1: Image resolution is about detail in the image.

The term image resolution really just refers to the number of pixels in an image, i.e. the pixel count. It is usually expressed as two numbers, the number of pixel columns and rows in an image – this is the spatial resolution. For example the Ricoh GR III has an APS-C sensor with a sensor resolution of 6051×4007, or about 24.2 million photosites on the physical sensor. The effective number of pixels in an image derived from the sensor is 6000×4000, or a pixel count of 24 million pixels – this is considered the image resolution. Image resolution can be used to describe a camera in broad terms, e.g. the Sony A1 has 50 megapixels, or in terms of dimensions, “8640×5760”. It is often used when comparing images, e.g. the Sony A1 with 50MP has a higher resolution than the Sony ZV-1 with 20MP. The image resolution of two images is shown in Figure 1 – the high resolution image has more detail than the image with lower resolution.

Fig.2: Spatial resolution and sensor size for 24MP image resolution.

Technically when talking about the sensor we are talking about photosites, but image resolution is not about the sensor, it is about the image produced from the sensor. This is because it is difficult to compare cameras based on photosites, as they all have differing properties, e.g. area. Once the data from the sensor has been transformed into an image, the photosite data becomes pixels, which are dimensionless entities. Note that the two dimensions representing the spatial resolution will change depending on the aspect ratio of the sensor. So while a 24MP image from a 3:2 sensor (e.g. APS-C or full-frame) will have dimensions of 6000×4000, a 4:3 sensor (e.g. Micro Four Thirds) with the same pixel count will have dimensions of roughly 5657×4243.

Fig.3: Changes in spatial resolution within different sensors.

Increasing image resolution does not increase detail by the same amount. For example a 16MP image from a 3:2 sensor has a resolution of 4899×3266. A 24MP image from the same type of sensor increases the pixel count by 50%, however the horizontal and vertical dimensions only increase by about 22% – a much smaller change in linear resolution. To obtain double the linear resolution would require an increase to a 64MP image.
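
These numbers are easy to verify. Below is a minimal sketch in Python (the helper name is mine, purely illustrative) that derives spatial resolution from a pixel count and aspect ratio:

import math

def dimensions(megapixels, aspect=(3, 2)):
    # Width and height (in pixels) for a given pixel count and aspect ratio.
    w_ratio, h_ratio = aspect
    width = math.sqrt(megapixels * 1_000_000 * w_ratio / h_ratio)
    return round(width), round(width * h_ratio / w_ratio)

print(dimensions(16))          # (4899, 3266) for a 3:2 sensor
print(dimensions(24))          # (6000, 4000)
print(dimensions(24, (4, 3)))  # (5657, 4243) for a 4:3 sensor
print(math.sqrt(24 / 16))      # ~1.22, i.e. 24MP is only ~22% more linear resolution than 16MP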

Is image resolution the same as sharpness? Not really (this is where the definition of the word resolution starts to betray itself). Sharpness concerns how clearly defined details within an image appear, and is somewhat subjective. It’s possible to have a high resolution image that is not sharp, just like it’s possible to have a low resolution image with a good amount of acuity. It also depends on how the image is viewed, i.e. back to device pixel density.

Photosites – Quantum efficiency

Not every photon that makes it through the lens ends up contributing to the signal in a photosite. The efficiency with which a photosite gathers incoming photons is called its quantum efficiency (QE). The ability to gather light is determined by many factors including the micro lenses, sensor structure, and photosite size. The QE of a sensor is a fixed value that depends largely on the chip technology of the sensor manufacturer. It is averaged out over the entire sensor, and is expressed as the chance that a photon will be captured and converted to an electron.

Quantum efficiency (P = photons per μm², e = electrons)

Since the QE is fixed by the manufacturer’s design choices, there is no way to affect it after the fact, i.e. you can’t change it by changing the ISO. A sensor with an 85% QE would produce 85 electrons of signal if it were exposed to 100 photons.

For front-illuminated sensors the QE is typically 30-55%, meaning 30-55% of the photons that fall on any given photosite are converted to electrons. In back-illuminated sensors, like those typically found in smartphones, the QE is approximately 85%. The website Photons to Photos has a list of sensor characteristics for a good number of cameras. For example the sensor in my Olympus OM-D E-M5 Mark II has a reported QE of 60%. Trying to calculate the QE of a sensor yourself is non-trivial.
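
As a back-of-the-envelope illustration (a sketch only – it ignores noise sources, fill factor and colour filter losses):

def electrons_from_photons(photons, qe):
    # Expected signal, in electrons, for a photosite with quantum efficiency qe (0-1).
    return photons * qe

print(electrons_from_photons(100, 0.85))  # back-illuminated sensor: ~85 electrons
print(electrons_from_photons(100, 0.45))  # typical front-illuminated sensor: ~45 electrons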

The early days of image processing: To Mars and beyond

After Ranger 7, NASA moved on to Mars, deploying Mariner 4 in November 1964. It was the first probe to send signals back to Earth in digital form, necessitated by the fact that the signals had to travel 216 million km back to Earth. The spacecraft could send and receive data via the low- and high-gain antennas at 8⅓ or 33⅓ bits-per-second – so at the low end, roughly one 8-bit pixel per second. All images were transmitted twice to ensure no data were missing or corrupt. In 1965, JPL established the Image Processing Laboratory (IPL).

The next series of lunar probes, Surveyor, were also analog (due to construction being too advanced to make changes), providing some 87,000 images for processing by IPL. The Mariner images also contained noise artifacts that made them look as if they were printed on “herringbone tweed”. It was Thomas Rindfleisch of IPL who applied nonlinear algebra, creating a program called Despike – it performed a 2D Fourier transform to create a frequency spectrum with spikes representing the noise elements, which could then be isolated, removed and the data transformed back into an image.

Below is an example of this process applied to an image from Mariner 9 taken in 1971 (PIA02999), containing a herringbone type artifact (Figure 1). The image is processed using a Fast Fourier Transform (FFT – see examples FFT1, FFT2, FFT3) in ImageJ.

Fig.1: Image before (left) and after (right) FFT processing

Applying an FFT to the original image, we obtain a power spectrum (PS), which shows the frequency components of the image. By enhancing the power spectrum (Figure 2) we are able to look for peaks pertaining to the feature of interest. In this case the vertical herringbone artifacts will appear as peaks in the horizontal dimension of the PS. In ImageJ these peaks can be removed from the power spectrum (by setting them to black), effectively filtering out those frequencies (Figure 3). By applying the inverse FFT to the modified power spectrum, we obtain an image with the herringbone artifacts removed (Figure 1, right).

Fig.2: Power spectrum (enhanced to show peaks)
Fig.3: Power spectrum with frequencies to be filtered out marked in black.
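
The same despiking idea can be sketched outside ImageJ. Below is a minimal, illustrative Python/NumPy version of frequency-domain filtering; the filename and peak coordinates are placeholders, since in practice you would locate the actual noise peaks in the power spectrum by inspection:

import numpy as np
from PIL import Image

# Load the image as a 2D float array (hypothetical filename).
img = np.asarray(Image.open("mariner9.png").convert("L"), dtype=float)

F = np.fft.fftshift(np.fft.fft2(img))   # centred 2D frequency spectrum
mask = np.ones_like(F)

# Zero out small regions around the noise peaks found in the power spectrum
# (np.abs(F)**2). Peaks come in symmetric pairs about the centre; the
# coordinates below are placeholders, not values for the real image.
for (r, c) in [(200, 300), (312, 212)]:
    mask[r-2:r+3, c-2:c+3] = 0

filtered = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
Image.fromarray(np.clip(filtered, 0, 255).astype(np.uint8)).save("mariner9_filtered.png")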

Research then moved to applying the image enhancement techniques developed at IPL to biomedical problems. Robert Selzer processed chest and skull x-rays, resulting in improved visibility of blood vessels. It was the National Institutes of Health (NIH) that ended up funding ongoing work in biomedical image processing. Many fields were not using image processing because of the vast amounts of data involved – the limitations were posed not by the algorithms, but by hardware bottlenecks.

The early days of image processing: the 1960s lunar probes

Some people probably think image processing was designed for digital cameras (or to add filters to selfies), but in reality many of the basic algorithms we take for granted today (e.g. improving the sharpness of images) evolved in the 1960s with the NASA space program. The space age began in earnest in 1957 with the USSR’s launch of Sputnik I, the first man-made satellite to successfully orbit Earth. A string of Soviet successes led to Luna III, which in 1959 transmitted back to Earth the first images ever seen of the far side of the moon. The probe was equipped with an imaging system comprising a 35mm dual-lens camera, an automatic film processing unit, and a scanner. The camera sported a 200mm f/5.6 and a 500mm f/9.5 lens, and carried temperature- and radiation-resistant 35mm isochrome film. Luna III took 29 photographs over a 40-minute period, covering 70% of the far side; however only 17 of the images were transmitted back to Earth. The images were low resolution and noisy.

The first image obtained from the Soviet Luna III probe on October 7, 1959 (29 photos were taken of the far side of the moon).

In response to the Soviet advances, NASA’s Jet Propulsion Lab (JPL) developed the Ranger series of probes, designed to return photographs and data from the moon. Many of the early probes were a disaster. Two failed to leave Earth orbit, one crashed onto the moon, and two left Earth orbit but missed the moon. Ranger 6 got to the moon, but its television cameras failed to turn on, so not a single image could be transmitted back to Earth. Ranger 7 was the last hope for the program. On July 31, 1964 Ranger 7 neared its lunar destination, and in the 17 minutes before it impacted the lunar surface it relayed the first detailed images of the moon, 4,316 of them, back to JPL.

Image processing was not really considered in the planning for the early space missions, and had to gain acceptance. The development of the early stages of image processing was led by Robert Nathan. Nathan received a PhD in crystallography in 1952, and by 1955 found himself running CalTech’s computer centre. In 1959 he moved to JPL to help develop equipment to map the moon. When he viewed pictures from the Luna III probe he remarked “I was certain we could do much better”, and “It was quite clear that extraneous noise had distorted their pictures and severely handicapped analysis” [1].

The cameras† used on the Rangers were Vidicon television cameras produced by RCA. The pictures were transmitted from space in analog form, but enhancing them would be difficult if they remained analog. It was Nathan who suggested digitizing the analog video signals, and adapting 1D signal processing techniques to process the 2D images. Frederick Billingsley and Roger Brandt of JPL devised a Video Film Converter (VFC) that was used to transform the analog video signals into digital data (6-bit, i.e. 64 gray levels).

The images had a number of issues. First there was geometric distortion. The beam that swept electrons across the face of the tube in the spacecraft’s camera moved at nonuniform rates that differed from those of the beam on the playback tube reproducing the image on Earth, resulting in images that were stretched or distorted. A second problem was photometric nonlinearity. The cameras had a tendency to display brightness in the centre and darkness around the edges, caused by the nonuniform response of the phosphor on the tube’s surface. Thirdly, there was an oscillation in the electronics of the camera which was “bleeding” into the video signal, causing a visible periodic noise pattern. Lastly there was scan-line noise, the nonuniform response of the camera with respect to successive scan lines (the noise is generated at right angles to the scan). Nathan and the JPL team designed a series of algorithms to correct for the limitations of the camera. The image processing algorithms [2] were programmed on JPL’s IBM 7094, likely in Fortran.

  • The geometric distortion was corrected using a “rubber sheeting” algorithm that stretched the images to match a pre-flight calibration.
  • The photometric nonlinearity was calculated before flight, and filtered from the images.
  • The oscillation noise was removed by isolating the noise in a featureless portion of the image, creating a filter, and subtracting the pattern from the rest of the image (loosely sketched in the code after this list).
  • The scan-line noise was removed using a form of mean filtering.
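
The oscillation-noise correction can be loosely illustrated in modern terms. This is only a sketch of the general idea (estimate the repeating pattern from a featureless region and subtract it), not the original JPL algorithm, and the region and period used here are assumptions:

import numpy as np

def remove_periodic_noise(img, flat_rows=slice(0, 32), period=16):
    # img: 2D float array. flat_rows: a featureless region used to estimate the
    # pattern. period: assumed horizontal period of the noise, in pixels.
    flat = img[flat_rows]
    cols = flat.shape[1] - (flat.shape[1] % period)
    # Average the flat region down to a single cycle of the repeating pattern.
    cycle = flat[:, :cols].reshape(flat.shape[0], -1, period).mean(axis=(0, 1))
    cycle -= cycle.mean()                 # keep overall brightness unchanged
    # Tile the cycle across the image width and subtract it from every row.
    pattern = np.tile(cycle, img.shape[1] // period + 1)[:img.shape[1]]
    return img - pattern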

Ranger 7 was followed by the successful missions of Rangers 8 and 9. The image processing algorithms were used to successfully process 17,259 images of the moon from Rangers 7, 8, and 9 (the link includes the images and documentation from the Ranger missions). Nathan and his team also developed other algorithms which dealt with random-noise removal and sine-wave correction.

Refs:
[1] NASA Release 1966-0402
[2] Nathan, R., “Digital Video-Data Handling”, NASA Technical Report No.32-877 (1966)
[3] Computers in Spaceflight: The NASA Experience, Making New Reality: Computers in Simulations and Image Processing.

† The Ranger missions used six cameras, two wide-angle and four narrow angle.

  • Camera A was a 25mm f/1 with a FOV of 25×25° and a Vidicon target area of 11×11mm.
  • Camera B was a 76mm f/2 with a FOV of 8.4×8.4° and a Vidicon target area of 11×11mm.
  • Camera P used two type A and two type B cameras with a Vidicon target area of 2.8×2.8mm.

Japanese Are-Bure-Boke style photography

Artistic movements don’t arise out of a void. There were many factors which contributed to the changes in Japanese society. Following World War II, Japan was occupied by the United States, leading to the introduction of Western popular culture and consumerism, aptly termed Americanization. The blend of modernity and tradition was bound to make waves, magnified by the turbulent changes occurring in Western society in the late 1960s, e.g. the demonstrations against the Vietnam War. In the late 1960s Japan’s rapid economic growth began to falter, exposing a fundamental opposition to Japan’s postwar political, economic and cultural structure, which led to a storm of protests by the likes of students and farmers.

This turmoil had a long-term effect on photography, forcing a rethink of how it was perceived. In November 1968 a small magazine called Provoke was published, conceived by art critic Koji Taki (1928-2011) and photographer Takuma Nakahira, with poet Takahiko Okada (1939-1997) and photographer Yutaka Takanashi as dojin members. Daido Moriyama joined for the second and third issues, bringing with him his early influences of Cartier-Bresson. The subtitle of the magazine was “Provocative Materials for Thought”, and each issue was composed of photographs, essays and poems. The magazine had a lifespan of three issues, the Provoke members disbanding due to a lack of cohesion in their ideals.

The ambitious mission of Provoke to create a new photographic language that could transcend the limitations of the written word was declared with the launch of the magazine’s first issue. The year was 1968 and Japan, like America, was undergoing sweeping changes in its social structure.

Russet Lederman, 2012

The aim of Provoke was to rethink the relationship between word and image, in essence to create a new language. It was to challenge the traditional view of the beauty of photographs, and their function as narrative, pictorial entities. The photographs the magazine published were fragmented images that rejected the established aesthetic of photography – a collection of “coarse, blurred and out-of-focus” images characterized by the phrase Are‑Bure‑Boke (pronounced ah-reh bu-reh bo-keh). It roughly translates to “rough, blurred and out-of-focus”, i.e. grainy (Are), blurry (Bure) and out-of-focus (Boke).

An example of Daido Moriyama’s work.

They tried random triggering, they shot into the light, they prized miss-shots and even no-finder shots (in which no reference is made to the viewfinder). This represented not just a new attitude towards the medium, but a fundamentally new outlook toward reality itself. That is not to say that every photograph had the same characteristics, because there are many different ways of taking a picture. The unifying characteristic is the push beyond the static boundaries of traditional photographic aesthetics. Provoke provided an alternative to an understanding of the post-war years that had traditionally been quite Western-centric.


Fixing the “crop-factor” issue

We use the term “cropped sensor” only due to the desire to describe a sensor in terms of the 35mm standard. It is a relative term which compares two different types of sensor, but it isn’t really that meaningful. Knowing that a 24mm MFT lens “behaves” like a 48mm full-frame lens is pointless if you don’t understand how a 48mm lens behaves on a full-frame camera. All sensors could be considered “full-frame” in the context of their environment, i.e. a MFT camera has a full-frame sensor as it relates to the MFT standard.

As mentioned in a previous post, “35mm equivalence” is used to relate a crop-factor lens to its full-frame equivalent. The biggest problem with this is the amount of confusion it creates for novice photographers, especially as the focal length marked on a lens never changes, yet the angle-of-view changes according to the sensor. However there is a solution to the problem: stop using the focal length to define a lens, and instead use the angle-of-view (AOV). This would allow people to pick a lens based on its angle-of-view, both in degrees and descriptively. For example, a wide-angle lens in full-frame is 28mm – its equivalent in APS-C is 18mm, and in MFT 14mm. It would be easier just to label these by the AOV as “wide-74°”.

It would be easy to categorize lenses into six core groups based on horizontal AOV (diagonal AOV in []):

  • Ultra-wide angle: 73-104° [84-114°]
  • Wide-angle: 54-73° [63-84°]
  • Normal (standard): 28-54° [34-63°]
  • Medium telephoto: 20-28° [24-34°]
  • Telephoto: 6-20° [8-24°]
  • Super-telephoto: 3-6° [4-8°]
Lenses could be advertised using a graphic to illustrate the AOV (horizontal) of the lens. This effectively removes the need to talk about focal length.

These groups are still loosely based on how AOV relates to 35mm focal lengths. For example 63° (diagonal) relates to the AOV of a 35mm lens, however it no longer relates to the focal length directly. A “normal-40°” lens would be 40° no matter the sensor size, even though the focal lengths would differ (see tables below). The only lenses left out of this are fish-eye lenses, which in reality are not that common, and could be put into a specialty lens category, along with tilt-shift lenses etc.

Instead of brochures containing focal lengths, they could contain the AOVs.

I know most lens manufacturers describe AOV using diagonal AOV, but this is actually more challenging for people to perceive, likely because looking through a camera we generally look at a scene from side-to-side, not corner-to-corner.
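
For reference, the AOV values in the tables below can be approximated from the focal length and the sensor dimensions using a simple pinhole/thin-lens model (the sensor sizes are approximate, and real lenses deviate due to distortion and focus breathing):

import math

SENSORS = {                    # approximate sensor width x height, in mm
    "FF":    (36.0, 24.0),
    "APS-C": (23.6, 15.7),
    "MFT":   (17.3, 13.0),
}

def aov(focal_mm, sensor="FF"):
    # Horizontal and diagonal angle-of-view, in degrees.
    w, h = SENSORS[sensor]
    d = math.hypot(w, h)
    horizontal = 2 * math.degrees(math.atan(w / (2 * focal_mm)))
    diagonal = 2 * math.degrees(math.atan(d / (2 * focal_mm)))
    return round(horizontal), round(diagonal)

print(aov(28, "FF"))   # ~(65, 75): the 65° column in the tables below
print(aov(14, "MFT"))  # ~(63, 75): roughly the same field of view on MFT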

AOV     98°     84°     65°
MFT     8mm     10mm    14mm
APS-C   10mm    14mm    20mm
FF      16mm    20mm    28mm
Wide/ultra-wide angle lenses

AOV     54°     49°     40°
MFT     17mm    20mm    25mm
APS-C   24mm    28mm    35mm
FF      35mm    40mm    50mm
Normal lenses

AOV     28°     15°     10°
MFT     35mm    70mm    100mm
APS-C   45mm    90mm    135mm
FF      70mm    135mm   200mm
Telephoto lenses

Vintage cameras and lenses – where to buy?

I have been buying vintage analog cameras and lenses for a few years now, so this article offers a few tips on where to buy them, based on my experiences. When you’re dealing with vintage camera equipment, you quickly realize that there is a lot of inventory around the world. This isn’t so surprising considering how the photographic industry blossomed with the expanding consumer market from 1950 onward. Analog equipment is old, mostly dating pre-1980s; some of it is quite common, some quite rare. I say pre-1980s because that decade heralded cameras and lenses that were bulky, ugly, made of plastic, and had clumsy auto-focus mechanisms. I will cover what to look for in vintage lenses and cameras at a later date.

Bricks-and-mortar stores

If you are new to buying vintage photographic equipment, then the obvious place to start is a store that focuses on vintage gear, but honestly they are few and far between, which may be the nature of dealing with analog. Photographic retailers who sell modern camera equipment may deal with some “used” gear, but you often won’t find a really good range, as they tend to deal more with used digital gear. Some people will of course comment that specialized stores tend to have higher prices, but we are talking about vintage equipment here, which may be anywhere from 40-70 years old, so if you are serious about lenses it is worth paying for the expertise to properly assess them.

In Toronto a good place to start is F-Stop Photo Accessories, which has a good amount of online information on their inventory (but does not ship). You will find a good assortment of Japanese gear, with some German and Soviet-era gear as well. The store is tiny, so best to check out the website and email to make sure the items you’re interested in are in stock, then drop by to examine them. In places like the UK, Europe and even Japan there are likely more bricks-and-mortar stores that deal predominantly with vintage. For example Tokyo abounds with used camera stores, some of which have huge inventories.

Fairs / Camera shows

If you are fortunate enough to live somewhere that has a photographic club, they may also have swap-meets or auctions. In Toronto there is the Photographic Historical Society of Canada, which typically has two fairs a year, and these are a good place to pick up vintage gear. The first time I went, in 2019, I managed to find an 8-element Takumar 50mm f/1.4 (C$250), a Helios 58mm Version 4 ($20), a Takumar 35mm f/3.5 ($60), and a Carl Zeiss Tessar and Biotar 58mm f/2 for $140. The benefit is always that you get to examine the lens/camera and check its functionality. There is generally a huge selection of lenses and cameras, some quite inexpensive for the person wanting to get started in analog photography.

Online stores

What about purchasing from an online reseller? This is somewhat tricky, because you are buying a physical device. I typically don’t buy vintage electronic things off the internet because you can never be 100% certain. Thankfully the type of vintage we are looking at here, especially as it pertains to lenses, rarely involves any electronics. However it still involves moving parts, i.e. the focusing ring and the aperture, both of which have to move freely and are obviously hard to test online. There are a number of options for buying online: (i) physical stores which have an online presence, (ii) online retailers with a dedicated website, and (iii) resellers on platforms such as Etsy and eBay.

I have had a number of good experiences when shopping at online stores. The first was with the Vintage & Classic Camera Co., on Hayling Island near Portsmouth (UK). I bought an Exakta Varex IIa, and the experience was extremely good. Listings are well described, with ample photographs and a condition report (as a percentage). The second was a recent experience with West Yorkshire Cameras, arguably one of the premium retailers for vintage camera gear. I have also bought lenses from a number of resellers on Etsy and eBay. Etsy provides access to resellers from all over the globe, and vintage products have to be a minimum of 20 years old. I have bought some Russian lenses from Aerarium (Ukraine), and cameras from Coach Haus Vintage (Toronto, Canada) and Film Culture (Hamilton, Canada). If you are looking for Japanese vintage cameras, I can also recommend Japan Vintage Camera, based in Tokyo, who have an Etsy store as well.

What makes a good store?

A good vintage camera reseller will be one who lives and breathes vintage cameras. Typically they might have an Instagram account, offer weekly updates of new inventory, and service/inspect the equipment before even advertising it. If there is something wrong with an item when you receive it, the reseller should make it right (things do get missed). A good online store will have listings which describe the lens/camera in detail while noting any defects, provide a good series of photographs showing the item from different angles, and use some sort of grading criteria. Ideally the store should also provide some basic information on shipping costs.

Regardless of the store, always be sure to Google them and check online reviews. Don’t be swayed by a cool website; if there is a lack of customer service you won’t want to shop there. Sometimes the company has Google reviews, or perhaps reviews on Trustpilot. If there are enough negative reviews, then it is safe to say there is likely some truth to them. For example a company with 70% bad reviews is one to avoid, regardless of the amount of inventory on its site, how quickly it is updated, or how aesthetically pleasing the website looks. I had an extremely poor experience with a British online reseller that has an extremely good website with weekly updates of inventory. I purchased a series of vintage lenses in November 2020. After one month they had not shipped; after two, still nothing. I conversed with the owner twice during that period and each time the items were going to be “shipped tomorrow”. To no avail – after five months I finally submitted a refund request with PayPal, which was duly processed. I have since written a review, which wasn’t favourable, but then neither were 90% of the reviews for that particular reseller.

The website Light Box has a whole list of places to buy film cameras and lenses in the UK, including a section named “Caution advised”, outlining those to avoid. I have created a listing of various stores in the Vintage Lenses etc. page.

Stores by region

Geography plays a role in where to purchase vintage camera equipment. During the early decades of the post-war camera boom there were two core epicentres of camera design and manufacture: Europe (both East and West) and Japan. So if you are interested in cameras/lenses from these regions, then stores within those geographical locales might offer a better selection. For example there are quite a few vintage camera resellers on Etsy from Ukraine and Russia, which makes sense considering cameras like the FED were made in factories in Kharkov, Ukraine. If you are interested in Pentax or any number of Japanese vintage lenses, then resellers from Japan make sense. There are also a lot of good camera stores in places with few links to manufacturing but a good consumer base, e.g. the UK and the Netherlands. The trick of course is being able to navigate the sites. Many Japanese stores have an online presence, but very few provide an English-language portal.

Demystifying colour (vii) : sRGB vs. Adobe RGB

So we have colour models, colour spaces, gamuts, etc. How do these things relate to digital photography and the acquisition of images? While a 24-bit RGB image can technically provide up to 16.7 million colours, not all of these colours are actually used.

Two of the most commonly used RGB colour spaces are sRGB and Adobe RGB. They are important in digital photography because they are usually the two choices provided in digital cameras. For example in the Ricoh GR III, the “Image Capture Settings” allow the “Color Space” to be changed to either sRGB or Adobe RGB. These choices relate to the JPG files created and not the RAW files (although they may be used in the embedded JPEG thumbnails). All these colour spaces do is set the range of colours available to the camera.

It should be noted that choosing sRGB or Adobe RGB for storing a JPEG makes no difference to the number of colours which can be stored. The difference is in the range of colours that can be represented. So, sRGB represents the same number of colours as Adobe RGB, but the range of colours it represents is narrower (as seen when the two are compared in a chromaticity diagram). Adobe RGB has a wider range of possible colours, but the difference between individual colours is bigger than in sRGB.

sRGB

Short for “standard” RGB, it was literally described as the “Standard Default Color Space for the Internet” by its authors. sRGB was developed jointly by HP and Microsoft in 1996 with the goal of creating a precisely specified colour space based on standardized mappings with respect to the CIEXYZ model.

sRGB is now the most common colour space found in modern electronic devices, e.g. digital cameras, the web, etc. sRGB exhibits a relatively small gamut, covering just 33.3% of visible colours – however it includes most colours which can be reproduced by display devices. EXIF (JPEG) and PNG are based on sRGB colour data, making it the de facto standard for digital cameras and other imaging devices. Shown on the CIE chromaticity diagram, sRGB shares the same primaries and white point as Rec.709, the standard colour space for HDTV.

Adobe RGB

The Adobe RGB colour space was defined by Adobe Systems in 1998. It is optimized for printing and is the de facto standard in professional colour imaging environments. Adobe RGB covers 38.8% of visible colours, roughly 17% more than sRGB, extending into richer cyans and greens. Converting from Adobe RGB to sRGB results in the loss of highly saturated colour data and of tonal subtleties. Adobe RGB is typically used in professional photography, and for picture archive applications.
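
A rough sketch of what that conversion involves is shown below. The matrices are the published D65 RGB-to-XYZ matrices for Adobe RGB (1998) and sRGB, but the gamma handling is simplified (a pure 2.2 curve, ignoring sRGB’s linear toe), so treat it as illustrative rather than colour-management grade:

import numpy as np

ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2973, 0.6274, 0.0753],
                         [0.0270, 0.0707, 0.9911]])

XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def adobe_rgb_to_srgb(rgb):
    linear = np.power(rgb, 2.2)                    # decode the Adobe RGB ~2.2 gamma
    srgb_linear = XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ linear)
    out_of_gamut = bool(np.any((srgb_linear < 0) | (srgb_linear > 1)))
    clipped = np.clip(srgb_linear, 0, 1)           # saturated colours get clipped here
    return np.power(clipped, 1 / 2.2), out_of_gamut

# A fully saturated Adobe RGB green has no exact sRGB equivalent, so it is flagged and clipped.
print(adobe_rgb_to_srgb(np.array([0.0, 1.0, 0.0])))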

Adobe RGB and sRGB shown in CIELab space

sRGB or Adobe RGB?

For general use, the best option may be sRGB, because it is the standard colour space. It doesn’t have the largest gamut, and may not be ideal for high-quality imaging, but nearly every device is capable of handling an image embedded with the sRGB colour space.

  • sRGB is suitable for non-professional printing.
  • Adobe RGB is suited to professional printing, especially good for saturated colours.
  • A typical computer monitor can display most of the sRGB range but only about 75% of the range found in Adobe RGB.
  • Adobe RGB can be converted to sRGB (with some clipping of saturated colours), but converting sRGB to Adobe RGB cannot recover the wider gamut.
  • An Adobe RGB image displayed on a device with an sRGB profile will appear dull and desaturated.

Photography and colour deficiency

How often do we stop and think about how colour blind people perceive the world around us? For many people there is a reduced ability to perceive colours in the way that the average person perceives them. Colour blindness, also known as colour vision deficiency, affects roughly 8% of males and 0.5% of females. Colour blindness means that a person has difficulty seeing red, green, or blue, or certain hues of these colours. In extremely rare cases, a person may be unable to see any colour at all. And one term does not fit all, as there are many differing forms of colour deficiency.

The most common form is red/green colour deficiency, split into two groups:

  • Deuteranomaly – 3 cones with a reduced sensitivity to green wavelengths. People with deuteranomaly may commonly confuse reds with greens, bright greens with yellows, pale pinks with light grey, and light blues with lilac.
  • Protanomaly – The opposite of deuteranomaly, a reduced sensitivity to red wavelengths. People with protanomaly may confuse black with shades of red, some blues with reds or purples, dark brown with dark green, and green with orange.

Then there is also blue/yellow colour deficiency. Tritanomaly is a rare colour vision deficiency affecting the sensitivity of the blue cones. People with tritanomaly most commonly confuse blues with greens, and yellows with purple or violet.

Simulated views: standard vision, deuteranomaly, protanomaly, tritanomaly.

Deuteranopia, protanopia, and tritanopia are the dichromatic forms, where the associated cones (green, red, or blue respectively) are missing completely. Lastly, monochromacy (achromatopsia, or total colour blindness) is the condition of having mostly defective or non-existent cones, causing a complete inability to distinguish colours.

Simulated views: standard vision, deuteranopia, protanopia, tritanopia.

How does this affect photography? Obviously photographs will be the same, but photographers who have a colour deficiency will perceive a scene differently. For those interested, there are some fine articles on how photographers deal with colourblindness.

  • Check here for an exceptional article on how photographer Cameron Bushong approaches colour deficiency.
  • Photographer David Wilder offers some insights into working on location and some tips for editing.
  • David Wilder describes taking photographs in Iceland using special glasses which facilitate the perception of colour.
  • Some examples of what the world looks like when you’re colour-blind.

Below is a rendition of the standard colour spectrum as it relates to differing types of colour deficiency.

Simulated colour deficiencies applied to the colour spectrum.

In reality people who are colourblind may be better at discerning some things. A 2005 article [1] suggests people with deuteranomaly may actually have an expanded colour space in certain circumstances, making it possible for them to, for example, discern subtle shades of khaki.

Note: The colour deficiencies shown above were simulated using ImageJ’s (Fiji) “Simulate Color Blindness” function. A good online simulator is Coblis, the Color Blindness Simulator.
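
The dichromatic and anomalous simulations above require cone-space (LMS) transforms, which I won’t reproduce here, but the simplest case – monochromacy – reduces to luminance. A minimal sketch using the Rec.709 luminance weights, applied directly to the encoded pixel values for simplicity:

import numpy as np
from PIL import Image

# Hypothetical filename; any RGB image will do.
img = np.asarray(Image.open("spectrum.png").convert("RGB"), dtype=float) / 255.0
luminance = img @ np.array([0.2126, 0.7152, 0.0722])    # Rec.709 luma weights
gray = np.stack([luminance] * 3, axis=-1)               # replicate into an RGB image
Image.fromarray((gray * 255).astype(np.uint8)).save("spectrum_monochromacy.png")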

  1. Bosten, J.M., Robinson, J.D., Jordan, G., Mollon, J.D., “Multidimensional scaling reveals a color dimension unique to ‘color-deficient’ observers“, Current Biology, 15(23), pp.R950-R952 (2005)

Demystifying Colour (vi) : Additive vs. subtractive

In the real world there are two ways to produce colours – additive and subtractive – and both are relevant because they deal with opposite ends of the photographic process. Additive colours are formed by combining coloured light, whereas subtractive colours are formed by combining coloured pigments.

Additive colours are so-called because colours are built by combining wavelengths of light. As more colours are added, the overall colour becomes whiter. Add green and blue together and what you get is a washed-out cyan. RGB is an example of an additive colour model: mixing various amounts of red, green, and blue produces the secondary colours yellow, cyan, and magenta. Additive colour models are the norm for most systems of photographic acquisition or viewing.

Additive colour
Subtractive colour

Subtractive colour works the other way, by removing light. When we look at a clementine, what we see is the orange light not absorbed by the clementine, i.e. all other wavelengths are absorbed, except for orange. Or rather, the clementine subtracts the other wavelengths from the visible light, meaning only orange is left to reflect. CMYK and RYB (Red-Yellow-Blue) are good examples of subtractive colour models. Subtractive models are the norm for most systems for printed material.

Different colour inks absorb and reflect specific wavelengths. CMYK (0,0,0,0) looks like white (no ink is laid down, so no light is absorbed), whereas (0,0,0,100) looks like black (maximum black is laid down meaning all colours are absorbed). CMYK values range from 0-100%. Below are some examples.

Ink colour          absorbs        reflects       appears
cyan                red            green, blue    cyan
magenta             green          red, blue      magenta
yellow              blue           green, red     yellow
magenta + yellow    blue, green    red            red
cyan + yellow       red, blue      green          green
cyan + magenta      red, green     blue           blue
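
The rows of the table can be approximated with the naive relationship between CMY ink coverage and the RGB light reflected back. This is a sketch only – real inks and ICC-based CMYK conversions behave quite differently:

def cmy_to_rgb(c, m, y):
    # Each ink subtracts its complementary primary from white light (values 0-1, K ignored).
    return (1 - c, 1 - m, 1 - y)

print(cmy_to_rgb(1, 0, 0))  # cyan ink absorbs red       -> (0, 1, 1), appears cyan
print(cmy_to_rgb(0, 1, 1))  # magenta + yellow           -> (1, 0, 0), appears red
print(cmy_to_rgb(0, 0, 0))  # no ink, nothing absorbed   -> (1, 1, 1), white paper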