Rangefinder or reflex?

35mm photography grew up around rangefinder cameras. In the early pre-prism days, photographers using “minicams” had a simple choice: Leica or Contax. Post-WW2, other Leica “knock-offs” would appear, mostly from Japan, but also from countries like Italy and the USSR. So why did rangefinders languish? To answer that we will look back at two 1956 articles in Popular Photography under the banner: “Which 35 – Reflex or Rangefinder?” [1,2].

Rangefinder versus reflex?

Bob Schwalberg, an advocate for rangefinder cameras, described two of their limitations [1]: long and short views. Rangefinder couplings, it seemed, were limited to a 135mm focal length at the long end, and to 3½ feet in close-up (without accessories like a mirror reflex housing). In fact Schwalberg commented that “Rangefinders just aren’t worth a speck of dust on your negative for focusing lenses longer than 135mm”. After this he focused on their strengths:

  • Speed in focusing – “With a rangefinder camera you move straight into focus instead of having to twist the lens back and forth several times…”.
  • Ease of focusing – Rangefinder cameras can be “focused under light levels so dim as to make photography unfeasible.”
  • Accuracy of focusing – “Rangefinder focusing is inherently more accurate than ground-glass focusing because the rangefinder mechanism can distinguish much more critically than the human eye.”
  • Time lag #1 (from focusing to stop down) – does not apply because the rangefinder is stopped down before focusing begins.
  • Time lag #2 (from pressing the release button to exposing the film) – rangefinders have no mirror; in a reflex camera the mirror adds about 1/50 sec of lag.

Schwalberg actually considered mirror lag to be the single most serious disadvantage of the SLR, inasmuch as “You never see the picture you make with a single-lens reflex until you develop the film. It all happened while you weren’t looking.” (this was before the instant-return mirror). He goes on to say that “The prism reflex is a useful tool which brings many advantages to a number of specific, and I think special, photographic applications.”

Barrett Gallagher meanwhile made the case for the single-lens reflex [2]. His choice of the SLR was because, in his words, “I couldn’t see clearly through the viewfinders on the rangefinder cameras.” Or in other words “… any separate rangefinder-viewfinder system requires you to shift your eye from one peephole to another at the crucial moment, and with a moving target, you’re dead.” Rangefinder accuracy also falls off with long telephoto lenses, requiring of all things the addition of a clumsy reflex housing.

  • Close-up – it is possible to focus down to 3.5” with no parallax problems. Reflex cameras focus down to 2.5 feet, versus 3.5 feet for rangefinders.
  • Ease of focusing – rangefinders are easier to focus, however in dim light the reflex lens can open wide enough to allow focusing.
  • DOF – the SLR allows the photographer to see the DOF a lens offers at different f-stops.
  • Viewfinders – SLR’s have one viewfinder for all lenses. Rangefinders require supplementary rangefinders for lenses outside 50mm.

Gallagher summed up by saying that “The single-lens reflex is the versatile camera with no parallax, no viewfinders, no mechanical rangefinder limits. It lets you see full size with any lens exactly what you get – including actual depth of field.”

Further reading:

  1. Bob Schwalberg, “Which 35 – Reflex or Rangefinder? – The coupled rangefinder is for me”, Popular Photography, 39(2), pp. 38,108,110 (1956)
  2. Barrett Gallagher, “Which 35 – Reflex or Rangefinder? – I like a single-lens reflex best”, Popular Photography, 39(2), pp. 39,112 (1956)

Feininger on B&W versus colour

“Black-and-white photography is essentially an abstract medium, while color photography is primarily realistic. Furthermore, in black-and-white a photographer is limited to two dimensions – perspective and contrast – whereas in color a photographer works with three: perspective, contrast, and color. In order to be able to exploit the abstract qualities of his medium, a photographer who works in black-and-white deliberately trains himself to disregard color; instead, he evaluates color in terms of black-and-white, shades of gray, and contrast of light and dark. A color photographer’s approach is the exact reverse: not only is he very much aware of color as ‘color’, but he decidedly tries to develop a ‘color eye’ – a sensitivity to the slightest shifts in hue, saturation, and brightness of color.”

Andreas Feininger, Successful Color Photography (1966)

The HSV−HSB colour space

RGB is used to store colour images in file formats and to display them on screen, but it is not very useful for day-to-day image processing. This is because in an RGB image the luminance and chrominance information are coupled together, so when any one of the R, G, or B components is modified, the colours within the image change. For many operations we need another form of colour space – one where chrominance and luminance can be separated. One of the most common of these is HSV, or Hue−Saturation−Value.

HSV represented as an upside-down, six-sided pyramid (left), and HSV/HSB as a cylindrical space (right).

HSV is derived from the RGB model, is sometimes known as HSB (Hue, Saturation, Brightness), and characterizes colour in terms of hue and saturation. It was created by Alvy Ray Smith in 1978 [1]. The space is traditionally represented as an upside-down hexcone, or six-sided pyramid – however mathematically the space is actually conceptualized as a cylinder. HSV/HSB is a perceptual colour space, i.e. it decomposes colour based on how it is perceived rather than how it is physically sensed, as is the case with RGB. This means HSV (and its associated colour spaces) is more closely aligned with an intuitive understanding of colour.

The top of the HSB colour space cylinder showing hue and saturation (left), and a slice through the cylinder showing brightness versus saturation (right).

A point within the HSB space is defined by hue, saturation, and brightness.

  1. Hue represents the chromaticity or pure colour. It is specified by an angular measure from 0 to 360° – red corresponds to 0°, green to 120°, and blue to 240°.
  2. Saturation is the vibrancy, vividness/colourfulness, or purity of a colour. It is defined as a percentage measure from the central vertical axis (0%) to the exterior shell of the cylinder (100%). A colour with 100% saturation will be the purest color possible, while 0% saturation yields grayscale, i.e. completely desaturated.
  3. Value/Brightness is a measure of the lightness or darkness of a colour. It is specified by the central axis of the cylinder, and ranges from white at the top to black at the bottom. Here 0% indicates no intensity (pure blackness), and 100% indicates full intensity (white).

The values are very much interdependent, i.e. if the value of a colour is set to zero, then the amount of hue and saturation will not matter, as the colour will be black. Similarly, if the saturation is set to zero, then hue will not matter, as there will be no colour.

An example of a colour image, and two views of its respective HSB colour space.

Manipulating images in HSB is much more intuitive. To lighten a colour in HSB you simply increase the brightness value, whereas the same adjustment in RGB requires scaling each of the R, G, and B components proportionally. Likewise, increasing saturation – making an image more vivid – is easily achieved in this colour space.

Note that converting an image from RGB to HSB involves a nonlinear transformation.
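For anyone who wants to experiment, Python's standard colorsys module implements this nonlinear transformation. A minimal sketch (assuming RGB components scaled to 0–1) of the conversion, plus the "lighten by increasing brightness" idea described above:

```python
import colorsys

# Lighten an RGB pixel (0-1 floats) by increasing only the
# value (brightness) component in HSV space.
def lighten(r, g, b, amount=0.2):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    v = min(1.0, v + amount)      # brightness is a single axis in HSV
    return colorsys.hsv_to_rgb(h, s, v)

# Pure red: hue 0, full saturation, full value
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)

# Lightening a dark red changes only V; hue stays at 0 (red)
print(lighten(0.5, 0.0, 0.0))                # roughly (0.7, 0.0, 0.0)
```

The same lighten operation in RGB would require scaling all three components by the same factor to avoid a hue shift.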

  1. Smith, A.R., “Color gamut transform pairs”, Computer Graphics, 12(3), pp.12-19 (1978)

Steinbeck on Robert Capa

“Capa’s pictures were made in his brain – the camera only completed them. You can no more mistake his work than you can the canvas of a fine painter. Capa knew what to look for and what to do with it when he found it. He knew, for example, that you cannot photograph war because it is largely an emotion. But he did photograph that emotion by shooting beside it. He could show the horror of a whole people in the face of a child. His camera caught and held emotion.”

John Steinbeck, “Robert Capa” in The Best of Popular Photography (1979)

Can humans discern 16 million colours in an image?

A standard colour image is 8-bit per channel (24-bit per pixel), allowing 256³ = 16,777,216 colours. That seems like a lot, right? But can that many colours even be distinguished by the human visual system? The quick answer is no – or rather, we don’t know for certain. Research into the number of actually discernible colours is a bit of a rabbit hole.

A 1998 paper [1] suggests that the number of discernible colours may be around 2.28 million – the authors determined this by calculating the number of colours within the boundary of the MacAdam Limits in CIELAB Uniform Colour Space [2] (for those who are interested). However even the authors suggested this 2.28M may be somewhat of an overestimation. A larger figure of 10 million colours (from 1975) is often cited [3], but there is no information on the origin of this figure. A similar figure of 2.5 million colours was cited in a 2012 article [4], while a more recent article [5] gives a conservative estimate of 40 million distinguishable object colour stimuli. Is it even possible to realistically prove such large numbers? Somewhat unlikely, because it may be impossible to quantify – ever. Estimates based on existing colour spaces may be as good as it gets, and frankly even 1-2 million colours is a lot.

Of course the actual number of colours someone sees also depends on the number and distribution of cones in the eye. For example, dichromats have only two types of cones able to perceive colour; this colour deficiency manifests differently depending on which cone type is missing. The majority of the population are trichromats, i.e. they have three types of cones. Lastly there are the very rare individuals, the tetrachromats, who have four different cone types. Supposedly tetrachromats can see 100 million colours, but it is thought the condition only exists in women, and in reality nobody really knows how many people are potentially tetrachromatic [6] (the only definitive way of finding out if you have tetrachromacy is via a genetic test).

The reality is that few if any real pictures contain 16 million colours. Here are some examples (each image contains 9 million pixels); the images are shown alongside the hue distribution from the HSB colour space. The first example is a picture of a wall of graffiti art in Toronto. This is an atypical image because it contains a lot of varied colours – most images do not. Even so, it has only 740,314 distinct colours – just 4.4% of the potential colours available.

The next example is a more natural picture, of two buildings (Nova Scotia). This picture is quite representative of images such as landscapes, which are skewed towards quite a narrow band of colours. It contains only 217,751 distinct colours, or 1.3% of the 16.77 million.

Finally we have a food-type image that doesn’t seem to have a lot of differing colours, but in reality it does: there are 635,026 (3.8%) distinct colours in the image. What these examples show is that most images contain fewer than one million different colours. So while there is the potential for an image to contain 16,777,216 colours, in all likelihood it won’t.
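Counts like these are easy to reproduce: hash each pixel's (R, G, B) triple into a set and take its size. A minimal sketch, using a small synthetic gradient in place of a real photograph (a real image would be loaded with a library such as Pillow):

```python
# Count the distinct 24-bit colours in an image, represented
# here as a flat list of (R, G, B) tuples.
def count_colours(pixels):
    return len(set(pixels))

# A synthetic 300x300 two-axis gradient standing in for a photo:
# red varies with x, green with y, blue is fixed.
width, height = 300, 300
pixels = [((x * 256) // width, (y * 256) // height, 128)
          for y in range(height) for x in range(width)]

n = count_colours(pixels)
print(n)                                      # 65536 distinct colours
print(f"{100 * n / 256**3:.4f}% of the 16.77M possible")
```

Even this deliberately colourful gradient uses well under 1% of the available colours; real photographs, as the examples above show, rarely do better.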

What about 10-bit colour? We’re talking about 1024³ = 1,073,741,824 colours – which is really kind of ridiculous.

Further reading:

  1. Pointer, M.R., Attridge, G.G., “The number of discernible colours”, Color Research and Application, 23(1), pp.52-54 (1998)
  2. MacAdam, D.L., “Maximum visual efficiency of colored materials”, Journal of the Optical Society of America, 25, pp.361-367 (1935)
  3. Judd, D.B., Wyszecki, G., Color in Business, Science and Industry, Wiley, p.388 (1975)
  4. Flinkman, M., Laamanen, H., Vahimaa, P., Hauta-Kasari, M., “Number of colors generated by smooth nonfluorescent reflectance spectra”, J Opt Soc Am A Opt Image Sci Vis., 29(12), pp.2566-2575 (2012)
  5. Kuehni, R.G., “How Many Object Colors Can We Distinguish?”, Color Research and Application, 41(5), pp.439-444 (2016)
  6. Jordan, G., Mollon, J., “Tetrachromacy: the mysterious case of extra-ordinary color vision”, Current Opinion in Behavioral Sciences, 30, pp.130-134 (2019)
  7. All the Colors We Cannot See, Carl Jennings (June 24, 2019)
  8. How Many Colors Can Most Of Us Actually See, USA Art News (July 23, 2020)

Szarkowski on the history of photography

“The history of photography has been less a journey than a growth. Its movement has not been linear and consecutive, but centrifugal. Photography, and our understanding of it, has spread from a center; it has, by infusion, penetrated our consciousness. Like an organism, photography was born whole. It is in our progressive discovery of it that its history lies.”

John Szarkowski, The Photographer’s Eye (1966)

Choosing a vintage lens – some tech FAQ

Not a definitive list, but one which covers a few of the “tech” issues. More will be added as I think of them.

Are all lenses built the same?

Most manufacturing companies provided a good, clean environment for constructing lenses. That’s not to say that there won’t be lousy copies of a particular lens, as well as outstanding copies, due to manufacturing tolerances. This is exacerbated in some lenses from the USSR, mostly because the same lens could be manufactured in a number of different factories, all with differing levels of quality (which during the period could be true of any company running multiple manufacturing locations).

Are vintage lenses radioactive?

There are some lenses that produce low-level radiation because they contain one or more optical elements made using thorium. Thorium was useful in lens design because it gave optical glass of the period a high refractive index, meaning fewer elements were needed in a lens.

What sort of aberrations do vintage lenses produce?

No lens is perfect (not even modern ones). Lenses can suffer from soft edges, chromatic aberrations, and vignetting. But that’s not to say these things are negatives. Some vintage lenses can create the same sort of distortions that app filters do – using the lens aberrations.

Do vintage lens have coatings?

Lens coatings first appeared in the 1930s, yet many early vintage lenses had only a single-layer coating, and as such are susceptible to internal reflections and lens flare. Coatings were made from a variety of materials, including rare-earth elements, and were created primarily to eliminate or reduce reflections. By significantly reducing reflections at each glass-air surface, coatings allowed more complex optical designs to be constructed. That said, a lack of coatings can add to a lens’s character.

Are vintage lenses sharp?

Vintage lenses may not be as sharp as modern ones, but then again vintage lenses aren’t really about sharpness. Older lenses are often sharp in the centre, but decreasingly so towards the corners, and stopped down to f/8 many produce good results. The reduced sharpness is due to the use of fewer low-dispersion optics, fewer anti-reflective coatings, and the widespread use of spherical elements in lens construction. The adoption of low-dispersion glass and aspherical elements is what has led to the finer detail of modern lenses.

Does bokeh matter?

Does it? Honestly, buying a lens just for its ability to produce “creamy” bokeh is fine, but you still need the right circumstances for the lens to produce it. Bokeh certainly adds interest to a picture, but it’s not the be-all and end-all some people make it out to be.

Is faster better?

An f/1.2 lens is often (incorrectly) considered to be better than an f/1.4 lens, which in turn is better than an f/1.8 lens, while an f/3.5 lens is not even considered. This misconception derives, in part, from the fact that large-aperture lenses are more costly to design and manufacture. However a high cost is not necessarily associated with better quality when all aspects of lens performance are considered. Large-aperture lenses do benefit from superior light-gathering power, good in low-light situations – but how often is this needed? Large apertures also produce a very shallow depth-of-field.

Why do later lenses have so few aperture blades?

Lenses of the 1950s often had a lot of aperture blades, from a low of 8 to a high of 18-20. This meant that the aperture opening – and hence the out-of-focus highlights that make up bokeh – was almost perfectly round. However with the introduction of the fully automatic aperture in 1961, there was a need to reduce the operating resistance of the blades, so many manufacturers chose to reduce the number of aperture blades to 6.

Can vintage lenses be stabilized?

Vintage lenses don’t come with built-in stabilization. This is not a problem with cameras that have in-body stabilization like Olympus, but can be an issue with those that rely on lens-based stabilization.

Do vintage lenses produce EXIF data?

Vintage lenses have no electronic connection to the camera, which means the camera will only record metadata (EXIF) related to camera settings like shutter speed, ISO, FPS, and picture profiles. No lens data will be included, such as f-stop or focal length. The camera also won’t detect that a lens is attached, so it is necessary to enable the “Release without lens” setting to activate the shutter release. This can really hamper some people, as it requires taking notes while out shooting, which isn’t always practical – like when you are taking a few shots in sequence. And with no lens-specific information, the camera has little ability to correct for things like vignetting.

Vintage lenses – Why are telephoto lenses so cheap?

Go on to any vintage camera reseller’s website and you will see that some lenses, notably telephotos, are inexpensive – I mean really cheap. Why? Don’t they require more material to make? Well, yes and no. They do have more metal in the body, but the amount of glass is probably less than in lenses of shorter focal length. Telephoto lenses generally have a very simple lens formula, so most of the added expense went into creating a large lens body. But that’s not really the problem.

Nearly all camera manufacturers provided an array of telephoto lenses. It’s a wonder they sold them all. For the reality is, then as now, that telephoto lenses have a very narrow scope of use. The amateur photographer was likely only interested in the moderate telephotos, up to 135mm. The remaining lenses were the purview of the professional photographer and cinematographer. Who really needed a 300mm or 500mm lens, let alone 800mm? For example, in 1971, Asahi-Pentax sold 12 different Super-Takumar telephoto lenses:

  • Moderate : 85mm f/1.9, 105mm f/2.8, 135mm f/2.5, 135mm f/3.5, 150mm f/4
  • Standard : 200mm f/5.6, 200mm f/4, 300mm f/6.3, 300mm f/4
  • Super : 400mm f/5.6, 500mm f/4.5, 1000mm f/8

The problem is that these telephoto lenses had a very narrow range of application. Even a 300mm lens has a horizontal AOV of only about 7°, and by 400mm it’s down to about 5°. Both are very narrow angles.
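These angle-of-view figures are easy to check: on a full-frame (36×24mm) sensor, the horizontal AOV is 2·atan(36 / 2f). A quick sketch:

```python
import math

# Horizontal angle of view on a 36x24mm (full) frame:
# AOV = 2 * atan(sensor_width / (2 * focal_length))
def horizontal_aov(focal_mm, sensor_width_mm=36.0):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for f in (135, 200, 300, 400, 500):
    print(f"{f}mm: {horizontal_aov(f):.1f} deg")
# e.g. 300mm works out to about 6.9 deg, 400mm to about 5.2 deg
```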

For the purpose of this discussion, let’s consider telephoto lenses above 120mm. That leaves three core categories: (i) the moderate telephotos around 135mm, (ii) the upper-end standards of 200mm and 300mm, and (iii) the super-telephoto range above 300mm. Of the telephotos below 120mm, the most common are the 80-90mm lenses, which may be the most expensive of all telephotos due to their popularity in portraiture work. Note that the prices quoted are for lenses in average to good condition, meaning they are functional, yet may have minor optical issues that won’t impact the quality of the image.


The most common lens in the moderate telephoto category is the 135mm, and there are a lot of them. Almost every lens manufacturer produced a 135mm as a “standard” telephoto lens. This may have been a legacy of rangefinder 35mm cameras, whose rangefinder coupling maxed out at 135mm (without the use of specialized devices). As such they are cheap because they are plentiful. The price only varies depending on manufacturer, lens speed, and mount (obscure mounts will reduce the price). If you search Kamerastore, you will find hundreds of 135mm lenses. A Soligor 135mm f/3.5 Tele-Auto (M42) can cost as little as C$60, whereas a Schneider-Kreuznach 135mm f/3.5 (M42) will cost only C$155. The rare exceptions are lenses like the KMZ 135mm f/2.8 Tair-11, which, sporting 20 aperture blades, sells at about C$338.

Prices are also low because their use as lenses on digital cameras is just not that popular, largely because once adapted to crop sensors, a 135mm behaves like a 200mm (APS-C) or 270mm (MFT) lens. Other reasons they aren’t popular include being slow, with an average aperture of f/2.8-4.0, and some lenses, like the Meyer-Optik Görlitz Orestor 135mm f/2.8, are heavy, i.e. over 500g.
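That equivalence is just multiplication by the crop factor – a quick sketch, assuming the common factors of 1.5 for APS-C (Canon uses 1.6) and 2.0 for Micro Four Thirds:

```python
# Equivalent (field-of-view) focal length on a crop sensor.
# Crop factors assumed: 1.5 for APS-C, 2.0 for Micro Four Thirds.
CROP = {"full-frame": 1.0, "APS-C": 1.5, "MFT": 2.0}

def equivalent(focal_mm, fmt):
    return focal_mm * CROP[fmt]

print(equivalent(135, "APS-C"))   # 202.5 -> behaves like a ~200mm lens
print(equivalent(135, "MFT"))     # 270.0
```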


The “standard” telephoto range is often even cheaper relative to its size. A 200mm Asahi Super-Takumar f/4 usually sells for around C$200, the Jupiter 21M for C$175. Once you move beyond 200mm, prices seem to stabilize at around C$1 per mm of focal length. Here higher prices usually indicate a historically significant lens: for example both a Meyer-Optik Görlitz 300mm f/4.5 Telemegor and an early Pentax 300mm f/4 Takumar might be priced around C$400.


Again, these lenses can be cheap, even though they are not as abundant as shorter telephoto lenses. You can get an Asahi Super-Multi-Coated Takumar 400mm f/5.6 for around C$400, while a Meyer-Optik Görlitz 400mm f/5.5 Telemegor might only cost C$200. The expensive 400mm lenses are often those with some history. For example a Kilfitt Fern-Kilar 400mm f/5.6 normally costs upwards of C$600-800 because it is a rarer lens, and due to its association with the film Rear Window.

The verdict? Telephoto lenses above 120mm can be fun to play with, but most people won’t use them that often. I think that is partially the reason why 135mm lenses are so cheap (and often in such good condition). People bought them to broaden their focal length choices, found they weren’t very practical, and relegated them to a cupboard somewhere. They weren’t that useful for everyday shots, and certainly too bulky to travel with. Eventually the market for them likely waned due to the growth of the zoom lens market. I would honestly avoid telephotos above 200mm unless you have a good use for the lens (and you choose a lens with good reviews). Longer lenses are fun to play around with, but may not exactly be that practical. Super telephotos are for the birds (literally).

P.S. There are also a lot of third-party lens suppliers that produced telephoto lenses even cheaper than the camera brands – for example Chinon, Sigma, Soligor, Tokina, Hanimex and Vivitar.

Old Lens Life magazine from Japan

In Japan they actually publish a magazine dedicated to vintage lenses – Old Lens Life. You can get copies from Amazon Japan (either paper or digital). It’s fascinating because a lot of the article titles are in English and the text in Japanese, but with translation it shouldn’t be too hard to get the gist of what is being said. This magazine would actually do really well outside of Japan. Has anyone read a copy?

Why 24-26 megapixels is just about right

When cameras were analog, people cared about resolving power – but that of the film. Nobody purchased a camera based on resolution, because resolution resided in the film (and different films have different resolving powers). So you purchased a new camera only when you wanted to upgrade features; analog cameras focused on the tools needed to capture an optimal scene on film. Digital cameras on the other hand focus on megapixels, and the technology to capture photons with photosites and convert them to pixels. So megapixels are often the name of the game – the first criterion cited when speculation of a new camera arises.

Since the inception of digital sensors, the number of photosites crammed onto various sensor sizes has steadily increased (while at the same time the size of those photosites has decreased). Yet we are now reaching what some could argue is a megapixel balance-point, where the benefits of a jump in megapixels may no longer be that obvious. Is 40 megapixels inherently better than 24? Sure, a 40MP image has 1.7 times as many pixels. But at what point are there too many pixels? At what point does the pendulum start to swing towards overkill? Is 24MP just about right?

First let’s consider what is lost with more pixels. More pixels means more photosites on a sensor, and cramming more photosites onto a sensor will invariably result in smaller photosites (assuming the sensor dimensions do not change). Smaller photosites mean less light. That’s why 24MP is different on each of MFT, APS-C and full-frame sensors – more space means larger photosites, and better performance in situations such as low light. Even with computational processing, smaller photosites still suffer from things like increased noise. And the more pixels, the larger the image files produced by the camera, and the greater the post-processing time. There are pros and cons to everything.

Fig. 1: A 24 megapixel image compared against the devices that can view it.

There is also something lost from the perspective of aesthetics. Pictures should not be singularly about resolution and sharp content. The more pixels you add to an image, the more it impacts the aesthetics of the image – perhaps a sense of hyper-realism, images that seem excessively digital? Sure, some people will like the highly digital look, with ultra-saturated colour and sharp detail, but the downside is that these images tend to lack something in aesthetic appeal.

Many photographers who long for more resolution are professionals – people who may crop their images, or work on images such as architectural shots or complex landscapes that may require more resolution. Most people however don’t crop their images, and few make poster-sized prints, so there is little or no need for more resolution. For people who just use photos in a digital context, there is little or no gain. The largest monitor resolution available is 8K, i.e. 7680×4320 pixels, or roughly 33MP, so a 40MP image wouldn’t even display at full resolution (but a 24MP image would). This is aptly illustrated in Figure 1.

Many high-resolution photographs live digitally, where resolution plays little or no role in how the image is perceived. Even in print, 24MP is more than sufficient to produce a 24×36 inch poster, because nobody needs to pixel-peep a poster. A 24×36″ poster has a minimum viewing distance of 65 inches, and at 150dpi would require only a 20MP image.
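The arithmetic behind that figure is simply width × dpi × height × dpi – a quick sketch:

```python
# Megapixels needed to print at a given size and dpi.
def megapixels(width_in, height_in, dpi):
    return width_in * dpi * height_in * dpi / 1e6

# A 24x36 inch poster at 150 dpi (fine for a 65-inch viewing distance)
print(megapixels(24, 36, 150))   # 19.44 -> a ~20MP image suffices
# The same poster at magazine-grade 300 dpi
print(megapixels(24, 36, 300))   # 77.76
```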

The overall verdict? Few people need 40MP, and fewer still will need 100MP. It may be fun to look at a 50MP image, but in all practical sense it’s not much better than a 24MP. Resolutions of 24-26MP (still) provide exceptional resolution for many photographic needs. It’s great for magazine spreads (max 300dpi), and fine art prints. So unless you are printing huge posters, it is a perfectly fine resolution for a camera sensor.