The aesthetic appeal of mid-century vintage lenses

When you look at modern lenses, there isn’t much that sets them apart. They are usually plain black cylinders, partly due to the consistency of modern lens design. The same could not be said of vintage lenses. Perhaps this has something to do with the fact that many vintage lenses were made by companies that focused purely on lenses, and as such tried hard to differentiate their products from those of their competitors. For example, a company like Meyer Optik Görlitz, which manufactured lenses for cameras using the Exakta mount, had to compete for consumer spending with lenses from a myriad of other companies (at least 25-30).

Over time the appearance of lenses naturally changed as new materials were introduced, often for the purpose of reducing the overall cost of lenses. For example, many early 35mm lenses had a shiny, chrome-like appearance. The earliest, pre-war lenses were often made of chrome-plated brass. As the Second World War progressed, shortages or redirection of materials like brass led some manufacturers to transition towards aluminum, which was less expensive, easier to machine, and produced a lighter lens. While these early aluminum lenses were aesthetically pleasing, there was little that differentiated them in a world with an increasing number of third-party lens manufacturers.

Fig.1: Evolution of the aluminum design of the Zeiss Jena Biotar 58mm f/2

When it first appeared as a lens material, aluminum was chic. The 1950s was the age of aluminum, a symbol of modernism. Many of the largest aluminum producers pursued new markets to absorb their increased wartime production capacity, and the metal turned up in everything from drink cans to kitchenware and Airstream trailers (there was also extra aluminum from the scrapping of surplus wartime aircraft). These aluminum lenses were initially clear-coated to reduce the likelihood of tarnishing, but eventually anodized to provide a robust black coating. Also in the 1950s, lens manufacturers began to respond to changing trends in lens design – buyers had moved away from the idea of pure practicality and focused also on design. This wasn’t really surprising considering the broad scope of modernist design during this period, which tended to favour sleek and streamlined silhouettes. It is interesting to note that most of the aesthetically pleasing lenses of the post-1950 period originated from Germany.

Fig.2: Every lens manufacturer had a different interpretation of both “berg-and-tal”, and the black-and-white “zebra” aesthetic

The first notable change was the gradual move towards what German manufacturers called the “berg und tal”, or “mountain and valley”, design of the grips on a lens – usually knurled depressions milled into the surface of the ring (though some, like the lenses of Steinheil, did the opposite, with smooth depressions and knurled mountains). English-speaking regions often referred to this as a “knurled grip”. Appearing in the early 1950s, it was particularly common for focusing rings, making them more prominent and likely more ergonomic, i.e. easier to grip. Some lenses started with the focusing ring, and eventually used the same design on the aperture ring. Prior to this most lenses used a simple straight knurl on the adjustment rings.

Towards the end of the 1950s, the pure-aluminum design transitioned to a combination of silver and black anodized aluminum. The lens bodies themselves were mostly black, with the “berg und tal” designs alternating between black and silver. This alternating pattern is what is colloquially known as the “zebra” design. Many lens manufacturers utilized the zebra aesthetic in one form or another, including Schacht, Enna, Steinheil, Schneider-Kreuznach, Meyer Optik, Rodenstock, and ISCO.

Fig.3: Meyer Optik had an interesting twist on the zebra design. There were very few of these lenses and they are very minimalistic in design.

Zeiss probably produced the best known examples of the zebra aesthetic with the Pancolar and Flektogon series of lenses. Although these lenses did not appear until the early 1960s, they bypassed the more prominent berg-und-tal in favour of a much more subdued black-and-white knurled grip (something Meyer Optik also did with lenses like the Lydith 30mm). This design, used for both focusing and aperture rings, replaced the rough textured rings of the earlier lenses. Some call these the “Star Wars lenses”. The Pancolar 50mm f/2 appeared ca. 1960 with a dual black-silver body encompassing a “converging-distance” depth-of-field range indicator, and either a textured or nubbed rubber focusing ring. This evolved a few years later into the classic “zebra” design, shortly before the release of the classic Pancolar 50mm f/1.8, which also sported the zebra design. By the 1970s, the Pancolar 50mm f/1.8 had morphed into an all-black configuration with a large rubber cross-knurled focus grip and a finely knurled aperture ring.

Fig.4: Evolution of design aesthetics of the Zeiss Pancolar 50mm lens.

Japanese manufacturers transitioned from aluminum/chrome directly to black, bypassing the zebra design. The one exception seems to be the Asahi Auto-Takumar 55mm f/1.8, which appeared in 1958 and is the sole example of a zebra design (at least by Asahi). Japanese manufacturers did, however, embrace the berg-und-tal design.

Fig.5: Some lens companies couldn’t settle on a design. Here we have differing focus ring designs from the same Meyer Optik catalog in the 1960s

By the mid-1960s many camera manufacturers were producing their own lenses, particularly in Japan. As such, lenses became more consistent, with little need to compete with other lens manufacturers. There were still third-party lens manufacturers, but they concentrated more on the manufacture of inexpensive lenses. Most lenses transitioned to standardized, unadorned black aluminum bodies, with the onus being more on the quality of the optics. Grips transitioned from berg-und-tal to a flatter, square-grooved style, still using a black/chrome contrast (which likely resulted in a cost saving). By the mid-1970s focus rings were provided with a ribbed rubber coating, still common today on some lenses.

Fig.6: Berg-und-tal overkill?
Fig.7: One of the few Japanese zebra lenses.

Today, the sleek aluminum lenses are sought after because of their “retro” appeal, as are the zebra lenses.

How do we define beauty?

It’s funny when someone says a photograph is beautiful, because not everyone will have the same perception. This is because the idea of beauty is a very subjective one. Beauty is a term which cannot truly be quantified in any real manner. What society has done is imprint certain standards of beauty based on a few people’s opinions. If you look at the picture of the pink flower below, you might say it’s beautiful – but why? Is it because most people would say so, or because it is colourful? A brown flower would likely be considered not-so-beautiful. Is it because the flower smells nice (which obviously you cannot tell from a photograph)? The second flower below, a Frangipani, is simpler, but may be beautiful because of its decadently sweet floral fragrance. Could beauty be an amalgam of the visual and olfactory senses?

Are pink roses considered more beautiful?
This Frangipani flower is plain, but still beautiful.

For most of human existence, beauty has not really mattered that much (well, except maybe for those who had wealth – I mean, gold is shiny, which likely contributes to its allure). Most humans were concerned with survival. That is not to say that aesthetics did not play a role in the things they made, but let’s face it, catching food took precedence over making things look pretty. Beauty may have existed more in the natural world. In fact it may be these patterns that exist in nature that have led to humans being somewhat hardwired to experience beauty.

“Beauty is no quality in things themselves: It exists merely in the mind which contemplates them; and each mind perceives a different beauty. One person may even perceive deformity, where another is sensible of beauty; and every individual ought to acquiesce in his own sentiment, without pretending to regulate those of others.”

Hume, David, “Of the Standard of Taste”, Essays Moral and Political, p.136 (1757)

Beauty has to do with the idea of aesthetics, which is essentially the appreciation of beauty. The term “aesthetics” was introduced in 1750 by the German philosopher Alexander Gottlieb Baumgarten, who defined taste, in its wider meaning, as the ability to judge according to the senses, instead of according to the intellect. When we say something is beautiful, we are expressing an aesthetic judgment. When you pick a raspberry from a bush, you tend to choose the bright red, firm raspberries with no apparent visual defects – those that are most beautiful (of course there is nothing to say they will taste good based on a visual assessment alone).

Is there not beauty in the piped twist of a French cruller?
The beauty in a matcha latte lies in the contrast between the green of the matcha and the foamy heart.

Beauty can be objective and universal, as certain things are beautiful to everyone – perhaps flowers are a good example, or things in the natural world. However, beauty in the human-made world is more subjective and individual. It is no different with our other senses: a food delicious to some may taste repugnant to others. Another good example is art. Some people can find a piece of art beautiful, while others find it loathsome. Beauty truly is in the eye of the beholder. Each person’s perception of beauty is also influenced by their environment. In 1951 the artist Robert Rauschenberg produced White Painting – basically white latex house paint applied with a roller and brush to two canvas panels. Some will find beauty in this nothingness; many won’t (well, because there is nothing there).

The same is true of photographs, where beauty truly is subjective, mainly because photographs inherently represent the visual perspective of the photographer, not necessarily that of the viewer. In some cases what is viewed in a photograph may not have the same beauty as the scene in real life, perhaps due to the lack of depth (i.e. flatness), or the misinterpretation of colour. In other cases, the photograph tells a different story of beauty to the real world. For instance, colour may not be essential to beauty. The absence of colour in B&W images is not to everyone’s taste, yet it helps to tell a story in a way that keeps colour from distracting the viewer from the image’s inner beauty, perhaps highlighting the expressions and textures of the scene.

There are many elements to producing a beautiful photograph, but at the end of the day, beauty is very much tied to the perceptions of the viewer. And unlike the physical world, where we can harness all our senses to decipher our understanding of beauty, in visual media we have only our eyes.

The Retinex algorithm for beautifying pictures

There are likely thousands of different algorithms out in the ether to “enhance” images. Many are just “improvements” of existing algorithms, offering a “better” algorithm – better in the eyes of the beholder, of course. Few are tested in any extensive manner, for that would require subjective, qualitative experiments. Retinex is a strange little algorithm, and like so many “enhancement” algorithms is often plagued by being described in too “mathy” a manner. The term Retinex was coined by Edwin Land [2] to describe the theoretical need for three independent colour channels to describe colour constancy; the word is a contraction of “retina” and “cortex”. There is an exceptional review of the colour theory written by McCann [3].

The Retinex theory was introduced by Land and McCann [1] in 1971 and is based on the assumption of a Mondrian world, referring to the paintings of the Dutch painter Piet Mondrian. Land and McCann argue that human colour sensation appears to be independent of the amount of light – that is, the measured intensity – coming from observed surfaces [1]. They therefore suspect an underlying characteristic guiding human colour sensation [1].

There are many differing algorithms for implementing Retinex. The one illustrated here can be found in the image processing software ImageJ, and is based on the multiscale retinex with colour restoration algorithm (MSRCR) – it combines colour constancy with local contrast enhancement. In reality it’s quite a complex little algorithm, with four parameters, as shown in Figure 1.

Fig.1: ImageJ Retinex parameters
  • The Level specifies the distribution of the [Gaussian] blurring used in the algorithm.
    • Uniform treats all image intensities similarly.
    • Low enhances dark regions in the image.
    • High enhances bright regions in the image.
  • The Scale specifies the depth of the Retinex effect.
    • The minimum value is 16, providing gross, unrefined filtering. The maximum value is 250. The optimal and default value is 240.
  • The Scale division specifies the number of iterations of the multiscale filter.
    • The minimum required is 3. Choosing 1 or 2 removes the multiscale characteristic, and the algorithm defaults to single-scale Retinex filtering. A value that is too high tends to introduce noise into the image.
  • The Dynamic adjusts the colour of the result, with larger values producing less saturated images.
    • Extremely image dependent, and may require tweaking.
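At its core, a multiscale Retinex (leaving aside the colour restoration step of MSRCR, and ImageJ’s Level/Scale/Dynamic mapping) is just an average of log-ratios between the image and Gaussian-blurred copies of itself at several scales. A rough numpy sketch, with illustrative scale values and my own function names – not ImageJ’s actual implementation:

```python
import numpy as np

def gaussian_blur(channel, sigma):
    # Separable Gaussian blur; the kernel radius is capped so the 1-D
    # kernel never exceeds the image size (np.convolve 'same' needs this).
    r = min(int(3 * sigma), min(channel.shape) // 2 - 1)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    smooth = lambda m: np.convolve(m, k, mode="same")
    return np.apply_along_axis(smooth, 1, np.apply_along_axis(smooth, 0, channel))

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """Plain multiscale Retinex: average of log(I) - log(blur(I)) over scales.

    img: float RGB array in [0, 1]; sigmas: illustrative scales.
    """
    img = img.astype(np.float64) + 1.0          # shift to avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        for c in range(img.shape[2]):
            out[..., c] += np.log(img[..., c]) - np.log(gaussian_blur(img[..., c], sigma))
    out /= len(sigmas)
    # simple linear stretch back to [0, 1] for display
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```

Real implementations differ mainly in the colour restoration step and in how the log output is normalized for display; the linear stretch here is the crudest possible choice.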

The thing with Retinex, like so many of its enhancement brethren, is that the quality of the resulting image is largely dependent on the person viewing it. Consider the following, fairly innocuous picture of some clover blooms in a grassy cliff, with rock outcroppings below (Figure 2). There is a level of one-ness about the picture, i.e. perceptual attention is drawn to the purple flowers, the grass is secondary, and the rock tertiary. There is very little in the way of contrast in this image.

Fig.2: A picture showing some clover blooms in a grassy meadow.

The algorithm is supposed to be able to do miraculous things, but that involves a *lot* of tweaking of the parameters; in practice the best approach is to start with the defaults. Figure 3 shows Figure 2 processed with the default values shown in Figure 1. The image appears to have a lot more contrast, and in some cases features in the image have increased acuity.

Fig.3: Retinex applied with default values.

I don’t find these processed images all that useful when used by themselves; however, averaging the processed image with the original produces an image with a more subdued contrast (see Figure 4), with features showing increased sharpness.
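The averaging step is nothing more than a per-pixel 50/50 blend of the original and the Retinex output. A one-liner in numpy (the function name and the alpha parameter are mine):

```python
import numpy as np

def blend_with_original(original, processed, alpha=0.5):
    """Per-pixel blend of a filtered image with its original.

    alpha=0.5 gives the straight average described above; both inputs
    are float arrays in [0, 1] with the same shape.
    """
    return np.clip((1.0 - alpha) * original + alpha * processed, 0.0, 1.0)
```

Lowering alpha pulls the result back towards the original, which is a cheap way of dialing in the “strength” of any filter, not just Retinex.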

Fig.4: Comparing the original with the averaged (Original and Fig.3)

What about the Low and High versions? Examples are shown below in Figures 5 and 6, for the Low and High settings respectively (with the other parameters used as default). The Low setting produces an image full of contrast in the low intensity regions.

Fig.5: Low
Fig.6: High

Retinex is quite a good algorithm for suppressing shadows in images, although even here there needs to be some serious post-processing in order to create an aesthetically pleasing result. The picture in Figure 7 shows a severe shadow in an inner-city photograph of Bern (Switzerland). Using the Low setting, the shadow is suppressed (Figure 8), but the algorithm processes the whole image, so other details such as the sky are affected. That aside, it has restored the objects hidden in the shadow quite nicely.

Fig.7: Photograph with intense shadow
Fig.8: Shadow suppressed using “Low” setting in Retinex

In reality, Retinex acts like any other filter, and the results are only useful if they invoke some sense of aesthetic appeal. Getting the right aesthetic often involves quite a bit of parameter manipulation.

Further reading:

  1. Land, E.H., McCann, J.J., “Lightness and retinex theory”, Journal of the Optical Society of America, 61(1), pp.1-11 (1971).
  2. Land, E., “The Retinex,” American Scientist, 52, pp.247-264 (1964).
  3. McCann, J.J., “Retinex at 50: color theory and spatial algorithms, a review”, Journal of Electronic Imaging, 26(3), 031204 (2017).

In image processing, have we forgotten about aesthetic appeal?

In the golden days of photography, the quality and aesthetic appeal of a photograph were unknown until after it was processed, and the craft of physically processing it played a role in how it turned out. These images were rarely enhanced, because it wasn’t as simple as just manipulating them in Photoshop. Enter the digital era. It is now easier to take photographs, from just about any device, anywhere. The internet would not be what it is today without digital media, and yet we have moved from a time when photography was a true art to one in which photography is a craft. Why a craft? Just like a woodworker crafts a piece of wood into a piece of furniture, so too do photographers craft their photographs in the likes of Lightroom or Photoshop. There is nothing wrong with that, although I feel like too much processing takes away from the artistic side of photography.

Ironically, the image processing community has spent years developing filters to make images more visually appealing – sharpening filters to improve acuity, contrast enhancement filters to enhance features. The problem is that many of these filters were designed to work in an “automated” manner (and many really don’t work well), while in reality people prefer interactive filters. A sharpening filter may work best when the user can modify its strength and judge its aesthetic appeal through qualitative means. The only places “automatic” image enhancement algorithms really exist are in-app and in-camera filters. The problem is that it is far too difficult to judge how a generic filter will affect a photograph, because each photograph is different. Consider the following photograph.

Cherries in a wooden bowl, medieval.

A vacation pic.

The photograph was taken using the macro feature on my 12-40mm Olympus m4/3 lens. The focal area is the top part of the bottom of the wooden bucket, so some of the cherries are in focus, others are not, and there is a distinct soft blur in the remainder of the picture. This is largely because of the shallow depth of field associated with close-up photographs… but in this case I don’t consider this a limitation, and would not necessarily want to suppress it through sharpening, although I might selectively enhance the cherries, either through targeted sharpening or colour enhancement. The blur is intrinsic to the aesthetic appeal of the image.

Most filters that have been incredibly successful are proprietary, so the magic exists in a black box. The filters created by academics have never fared that well. Many times they are targeted to a particular application, poorly tested (on Lena, perhaps?), or not at all designed from the perspective of aesthetics. It is much easier to manipulate a photograph in Photoshop, because the aesthetics can be tailored to the user’s needs. We in the image processing community have spent far too many years worrying about quantitative methods of determining the viability of algorithms to improve images, but the reality is that aesthetic appeal is all that really matters – and it is not something that is quantifiable. Generic algorithms to improve the quality of images don’t exist; it’s just not possible given the overall scope of the images available. Filters like Instagram’s Lark work because they are not really changing the content of the image; they are modifying the colour palette, and they do that by applying the same look-up table (derived from some curve transformation) to every image.

People doing image processing or computer vision research need to move beyond the processing and get out and take photographs. Partially to learn first hand the problems associated with taking photographs, but also to gain an understanding of the intricacies of aesthetic appeal.

Why aesthetic appeal in image processing matters

What makes us experience beauty?

I have spent over two decades writing algorithms for image processing; however, I have never really created anything uber-fulfilling. Why? Because it is hard to create generic filters, especially for tasks such as image beautification. In many ways, improving the aesthetic appeal of photographs involves modifying the content of an image in non-natural ways. It doesn’t matter how AI-ish an algorithm is, it cannot fathom the concept of aesthetic appeal. A photograph one person finds pleasing may be boring to others, just like a blank canvas is considered art by some, but not by others. No amount of mathematical manipulation will lead to an algorithmic panacea of aesthetics. We can modify the white balance and play with curves – indeed we can make 1001 changes to a photograph – but the final outcome will be perceived differently by different people.

After spending years researching image processing algorithms, and designing some of my own, it wasn’t until I decided to take the art of acquiring images to a greater depth that I realized algorithms are all well and good, but there is likely little need for the plethora created every year. Once you pick up a camera and start playing with different lenses and different camera settings, you begin to realize that part of the nuance of any photograph is its natural aesthetic appeal. Sure, there are things that can be modified to improve aesthetic appeal, such as contrast enhancement or improved sharpness, but images also contain unfocused regions that contribute to their beauty.

If you approach image processing purely from a mathematical (or algorithmic) viewpoint, what you are trying to achieve is some sort of utopia of aesthetics. But this is almost impossible, largely because every photograph is unique. It is possible to improve the acuity of objects in an image using techniques such as unsharp masking, but it is impossible to resurrect a blurred image – but maybe that’s the point. One could create a fantastic filter that sharpens an image beautifully, but with the sharpness of modern lenses, that may not be practical. Consider this example of a photograph taken in Montreal. The image has good definition of colour and a fairly uniform histogram. There isn’t a lot that can be done to this image, because it truly does represent the scene as it exists in real life. If I had taken this photo on my iPhone, I would be tempted to post it on Instagram and add a filter… which might make it more interesting, but maybe only from the perspective of boosting colour.
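Unsharp masking itself is a simple operation: subtract a blurred copy from the image and add the difference back, scaled by an “amount”. A minimal single-channel sketch in numpy (the parameter values are illustrative):

```python
import numpy as np

def unsharp_mask(channel, sigma=2.0, amount=1.0):
    """Classic unsharp masking: out = image + amount * (image - blurred).

    channel: 2-D float array in [0, 1]; sigma controls the blur radius,
    amount the sharpening strength.
    """
    # build a 1-D Gaussian kernel and apply it separably (rows, then columns)
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    smooth = lambda m: np.convolve(m, k, mode="same")
    blurred = np.apply_along_axis(smooth, 1, np.apply_along_axis(smooth, 0, channel))
    return np.clip(channel + amount * (channel - blurred), 0.0, 1.0)
```

This is exactly the kind of filter that benefits from being interactive: sigma and amount have no universally “correct” values, and oversharpening quickly produces halos around edges.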


A corner hamburger joint in Montreal – original image.

Here is the same image with only the colour saturation boosted (by ×1.6). Have its visual aesthetics been improved? Probably. Our visual system would say so, largely because our eyes are tailored to interpret colour.
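For reference, a saturation boost of this kind can be sketched by pushing each pixel away from its own grey value – not necessarily how this image was processed, but a simple stand-in (the grey reference uses the Rec. 709 luma weights):

```python
import numpy as np

def boost_saturation(img, factor=1.6):
    """Boost saturation by scaling each pixel's distance from its grey value.

    img: float RGB array in [0, 1]; factor=1.0 is a no-op, factor=1.6
    matches the boost used in the example above.
    """
    # per-pixel grey value (Rec. 709 luma weights)
    gray = (img @ np.array([0.2126, 0.7152, 0.0722]))[..., np.newaxis]
    return np.clip(gray + factor * (img - gray), 0.0, 1.0)
```

A Photoshop-style saturation slider works in an HSL/HSB space instead, but for moderate boosts the effect is similar.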


A corner hamburger joint in Montreal – enhanced image.

If you take a step back from the abyss of algorithmically driven aesthetics, you begin to realize that too few individuals in the image processing community have taken the time to really understand the qualities of an image. Each photograph is unique, so the idea of generic image processing techniques is highly flawed. Generic techniques work sufficiently well in machine vision applications where the lighting is uniform and the task is also uniform, e.g. inspection of rice grains, or identification of burnt potato chips. No aesthetics are needed, just the ability to isolate an object and analyze it for whatever quality is required. It’s one of the reasons unsharp masking has always been popular – alternative algorithms for image sharpening really don’t work much better. And modern lenses are sharp; in fact many people would be more likely to add blur than take it away.