So mechanical cameras were simple, right?

There were various types of analog cameras, but the simplest were mechanical cameras, which contained no electronics at all. That means everything that happened inside was mechanical in nature. Not that much really happened: the mechanisms basically moved the film forward, set the film and shutter speeds, set/activated the shutter mechanism, and moved the mirror. But these mechanisms were inherently complex, and the cameras themselves were typically built by hand. A plan view of an Exakta VX1000 camera shows how simple it was…

… but the workings inside were another matter altogether – basically a whole lot of sprockets, rods, and levers. Things got even more complicated once electronics were introduced.


How a f/0.71 lens helped advance TB screening

Charles G. Wynne (1911-1999) was a lens pioneer, but not in the traditional sense, i.e. his forte was not traditional photographic lenses. We presume sometimes that all advances in photography were made in the realm of cameras, but there are other fields that require lenses as well. Wynne began his optical career at Taylor, Taylor and Hobson Ltd. in 1936, after graduating from Oxford. Wynne worked for TT&H until 1943, when he moved to Wray Optical Works. Here he was not just an assistant, but a lens designer in his own right. His first job at Wray was improving the short-focal-length aerial reconnaissance lenses that the company made for the RAF.

Wynne designed a series of interchangeable lenses for Wray’s 35mm SLR, the Wrayflex camera, the only British full-frame 35mm SLR camera ever made. Around 1950, there was an opportunity for developing fast lenses for use in the photography of cathode-ray tube (CRT) images and the phosphor screens that were used in X-ray machines. Wynne developed an f/0.71 lens which, although too expensive for industrial CRT photography, was ideally suited to the mobile mass-screening program of the 1950s that helped eradicate TB. Wynne likely gleaned some personal satisfaction from this lens, as he had contracted tuberculosis whilst at Oxford. The f/0.71 lens allowed exposure times eight times shorter than a typical modern photographic lens with an aperture of f/2.0.

The Wynne 64mm f/0.7 lens
  1. Wynne, C.G., Wray, P., “A new form of f/0.71 lens for 35 mm cine-radiography”, Journal of Scientific Instruments, 28, pp.172-173 (1951)
  2. Maxwell, J., Wormell, P.M.J.H., “Charles Gorrie Wynne”, The Royal Society, pp.499-514 (2001)

Glass from the past, aka vintage lenses

When digital cameras started to supplant analog ones, everyone likely thought that the manual-focus interchangeable lenses of yore would be relegated to dark closets, attics, and the few who still used film. It became rare to find these lenses, except perhaps languishing in the “used” section of a camera store, often gathering dust. Digital cameras used digital lenses, and as such there was very little need for analog lenses. There were also few means of adapting these lenses for use on DSLRs, largely because of the lack of mount adapters, but also because of compatibility issues with mirror-based cameras, both full-frame and crop-sensor. This changed with the advent of the mirrorless camera, whose shorter flange-to-sensor distance allowed the use of lens adapters.

So what is a vintage lens? This is somewhat of a loaded question, because there is no definitive answer. One of the defining characteristics of a vintage lens is that it is manual, i.e. it relies on both manual focus and manual aperture setting. But there are a lot of manual lenses available – there are lenses from the 1930s, the 1940s, and even the 19th century. Many of these, however, are not easy to adapt to digital cameras. In all likelihood, anything pre-digital could be construed as vintage; however, I hesitate to include pre-digital lenses with electronic components in them, e.g. auto-focus, because most cannot be easily converted for use on a digital camera. In the end, vintage really means interchangeable lenses made for cameras that used film, specifically 35mm film cameras, either SLRs or rangefinders.

There are millions of vintage lenses in the world today; the majority of these interchangeable lenses hail from the period 1950-1985, and were predominantly made in Japan and Europe. Some brands are ubiquitous in the world of vintage lenses, such as Asahi Takumar, while others, such as Minolta’s Rokkor, have a more subdued presence, e.g. the Rokkor 58mm f/1.4, an example of a star performer. Vintage lenses come in various focal lengths, but many are in the “normal” range of 45-58mm. They can be fast, i.e. have a large aperture; aesthetically pleasing, e.g. made of chrome; or simply come from a company with an exceptional optical reputation. All vintage lenses have their own character, from optical anomalies and aberrations to colour rendering and bokeh. Many of the Carl Zeiss Jena lenses, such as the Flektogon 35mm f/2.4, are renowned for how they render out-of-focus regions. At the opposite end of the spectrum is the Jupiter 9, an 85mm f/2 lens made in the USSR – it has a wonderful 15-blade aperture and what some people call “dreamy bokeh”.

In some cases a particular lens may have been made for only a couple of years, in limited quantities; in other cases a lens may have evolved over a dozen or more years, with slight changes in lens formulae, glass composition, and mounts. For example, Asahi Pentax produced a huge number of Takumar-branded lenses in the 1960s. Some, like the 8-bladed Super-Takumar 50mm f/1.4, a Planar-type lens, have almost legendary status, the optics are that good. The lens evolved over the years from the legendary 8-element Super-Takumar (1964-65) to the thoriated 7-element Super-Takumar (1965-71), the Super-Multi-Coated Takumar (1971-72), and the SMC Takumar (1972-75). At more than 50 years old, many of these lenses still pass muster. So why choose a vintage lens?

This series will focus on vintage lenses. Over the course of the next few months we will explore various aspects of vintage lenses, from questioning why they are of interest to digging down into the intricacies of choosing a lens, adapters, and how to examine a lens prior to purchase. This won’t be a review of specific lenses (that may come later), but more of a broad overview, providing links to extra information that might be of interest.

A photograph’s life in the world

“As an object, a photograph has its own life in the world. It can be saved in a shoebox or in a museum. It can be reproduced as information or as an advertisement. It can be bought and sold. It may be regarded as a utilitarian object or as a work of art. The context in which a photograph is seen affects the meanings a viewer draws from it.”

Stephen Shore, The Nature of Photographs

Asahi and the Pentax name

If you do a search for “German Pentax” you will likely come across a reference to a German camera. Of course the name brand Pentax is most often associated with Japan’s Asahi Optical, but it wasn’t always the case. The name Pentax started life behind the Iron Curtain at VEB Zeiss Ikon Dresden. Zeiss Ikon was one of the photographic companies formed in East Germany after the division of Germany into East and West.

Zeiss Ikon Pentax

In 1954 Zeiss Ikon, based in Dresden, began work on a new 35mm camera. It was designed to use the new Zeiss 50mm f/2.8 lens, but was quite radical from a design perspective, looking more like a 120 film camera of the period. There only seem to be prototypes of this camera, and if you want to learn more you can check out the post on Marco Kroger’s website zeissikonveb.de. He says the first version of the camera was intended to be a 6×4.5 120-film camera, with the film loaded in removable cassettes. The page includes some interesting technical drawings of the camera.

But where did the name Pentax come from? Well, due to the division of a number of German camera companies, there were some issues with product naming, mostly related to trademark infringement. As East German companies wanted to sell their products in the West, they often had to come up with new names. For example, the name Contax was already being used by the West German Contax company. To circumvent this, East German companies often created portmanteau words by blending two words. For example, Pentacon was derived from “PENTAprism” and “CONtax”. Therefore it is thought that the registered trademark Pentax was similarly derived from PENTAprism and conTAX.

Because Zeiss Ikon had a name but no camera, it sold the name to Asahi in 1954 who attached it to their first Pentaprism SLR in 1957 – the Asahi Pentax.

The origins of Asahi’s Takumar

In the days of film cameras, every company had its own way of “naming” cameras and lenses. This made it very easy to identify a lens. Asahi Pentax had the ubiquitous Takumar (TA-KOO-MA) name associated with its 35mm SLR and 6×7 lenses. The name would adorn the lenses from the period of the Asahiflex cameras with their M37 mount, through the M42 mount, until 1975 when the switch to the K-mount came with a change to the lens branding.

Asahi was founded in 1919 by Kumao Kajiwara as Asahi Optical Joint Stock Co. Asahi began making film projection lenses in 1923, and by the early 1930s was producing camera lenses for the likes of future companies Minolta (1932) and Konica (1933). In 1937, with the installation of a military government in Japan, Asahi’s operations came under government control. By this time Kajiwara had passed away (it is not clear exactly when), and the business passed to his nephew Saburo Matsumoto (possibly in 1936?). It was Matsumoto who had the vision of producing a compact reflex camera. In 1938 he bought a small factory in Tokyo and renamed it Asahi Optical Co. Ltd.

It seems as though the lens series was named in honour of the founder’s brother, Takuma Kajiwara. There might have been the analogy that photography was a means of painting with light, and lenses were like an artist’s brushes. On a side note, the name Takuma in Japanese is an amalgam of “taku”, meaning “expand, open, support”, and “ma”, meaning “real, genuine”.

A photograph by Takuma titled “Domestic Life in Japan”, published in the September 1905 issue of Brush and Pencil (“St. Louis Art at the Portland Exposition” XVI(3), p.75).

Takuma Kajiwara (1876-1960) was a Japanese-American photographic artist and painter who specialized in portraits. Born in Kyushu, Japan, he was the third of five brothers in a Samurai family. Emigrating to America at the age of 17, he settled in Seattle and became a photographer. He later moved to St. Louis and opened a portrait studio, turning from photography to painting. In 1935 he moved to New York. In 1951 he won the gold medal of honour for oils from the Allied Artists of America for his expressionist painting of the Garden of Eden titled “It All Happened in Six Days”. Takuma himself had an interest in cameras, patenting a camera in 1915 (Patent No. US1193392).

Note that it is really hard to determine the exact story due to the lack of accessible information.

My thoughts on algorithms for image aesthetics

I have worked on image processing algorithms on and off for nearly 30 years. I really don’t have much to show for it, because in reality I found it hard to build on algorithms that already existed. What am I talking about, don’t all techniques evolve? Well, yes and no. What I have learned over the years is that although it is possible to create unique, automated algorithms to process images, in most cases it is very hard to make those algorithms generic, i.e. to apply the algorithm to all images and get aesthetically pleasing results. And I am talking about image processing here, i.e. improving or changing the aesthetic appeal of images, not image analysis, whereby the information in an image is extracted in some manner. There are some good algorithms out there, especially in machine vision, but predominantly for tasks that involve repetition in controlled environments, such as food production/processing lines.

The number one thing to understand about the aesthetics of an image is that it is completely subjective. In fact image processing would be better termed image aesthetics, or aesthetic processing. Developing algorithms for sharpening an image is all good and well, but it has to actually make a difference to an image from the perspective of human perception. Take unsharp masking for example – it is the classic means of applying sharpening to an image. I have worked on enhanced algorithms for sharpening, involving morphological shapes that can be tailored to the detail in an image, and while they work better, for the average user, there may not be any perceivable difference. This is especially true of images obtained using modern sharp optics.

How does an algorithm perceive this image? How does an algorithm know exactly what needs sharpening? Does an algorithm understand the aesthetics underlying the use of Bokeh in this image?

Part of the process of developing these algorithms is understanding the art of photography, and how simple things like the choice of lens and the various methods of taking a photo affect the outcome. If you ignore all those and just deal with the mathematical side of things, you will never develop a worthy algorithm. Or possibly you will, but it will be too complicated for a user to understand, let alone use. As for algorithms that supposedly quantify aesthetics in some manner – they will never be able to aesthetically interpret an image in the same way as a human.

Finally, improving the aesthetic appeal of an image can never be completely given over to an automated process, although the algorithms provided in many apps these days are good. Aesthetic manipulation is still a very fluid, dynamic, subjective process accomplished best through the use of tools in an app, making subtle changes until you are satisfied with the outcome. The problem with many academically-motivated algorithms is that they are driven more from a mathematical stance, rather than one based on aesthetics.

Viewing distances, DPI and image size for printing

When it comes to megapixels, the bottom line might be how an image ends up being used. If viewed on a digital device, be it an ultra-resolution monitor or TV, there are limits to what you can see. To view an image on an 8K TV at full resolution, we would need a 33MP image. However, any device smaller than this will happily work with a 24MP image, and still not display all the pixels. Printing, however, is another matter altogether.

The standard for quality in printing is 300dpi, or 300 dots-per-inch. If we equate a pixel to a dot, then we can work out the maximum size an image can be printed. 300dpi is generally the “standard” because that is the resolution most commonly used. To put this into perspective, at 300dpi, or 300 dots per 25.4mm, each pixel printed on a medium would be 0.085mm across, or about as thick as 105 GSM paper. That means a dot area of roughly 0.007mm². For example, a 24MP image containing 6000×4000 pixels can be printed to a maximum size of 13.3×20 inches (33.8×50.8cm) at 300dpi. The print sizes for a number of different sized images printed at 300dpi are shown in Figure 1.

Fig.1: Maximum printing sizes for various image sizes at 300dpi
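The arithmetic above is simple enough to sketch in a few lines of Python (the `max_print_size` helper is just an illustrative name, not an established function):

```python
# Sketch: maximum print size at a given DPI, assuming 1 pixel maps to 1 printed dot.
def max_print_size(width_px, height_px, dpi=300):
    """Return (width, height) in inches of the largest print at the given DPI."""
    return width_px / dpi, height_px / dpi

dot_pitch_mm = 25.4 / 300          # ≈ 0.085 mm per dot at 300 dpi
w, h = max_print_size(6000, 4000)  # a 24MP image → 20 × 13.3 inches
```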

The thing is, you may not even need 300dpi. At 300dpi the minimum viewing distance is theoretically 11.46″, whereas dropping down to 180dpi means the viewing distance increases to 19.1″ (but the printed size of an image can increase). In the previous post we discussed visual acuity in terms of the math behind it. Knowing that a print will be viewed from a minimum of 30″ away allows us to determine that the optimal DPI required is 115. Now if we have a large panoramic print, say 80″ wide, printed at 300dpi, then the calculated minimum viewing distance is ca. 12″ – but it is impossible to view the entire print from only one foot away. So how do we calculate the optimal viewing distance, and then use this to calculate the actual number of DPI required?

The amount of megapixels required of a print can be guided in part by the viewing distance, i.e. the distance from the centre of the print to the eyes of the viewer. The golden standard for calculating the optimal viewing distance involves the following process:

  • Calculate the diagonal of the print size required.
  • Multiply the diagonal by 1.5 to calculate the minimum viewing distance
  • Multiply the diagonal by 2.0 to calculate the maximum viewing distance.

For example, a print which is 20×30″ will have a diagonal of 36″, so the optimal viewing distance ranges from 54 inches (minimum) to 72 inches (maximum), i.e. 137-183cm. This means that we are no longer reliant on 300dpi for printing. Now we can use the equations set out in the previous post to calculate the minimum DPI for a viewing distance. For the example above, the minimum DPI required is only 3438/54 ≈ 64dpi. This would imply that the image size required to create the print is (20×64)×(30×64) ≈ 2.5MP. Figure 2 shows a series of sample print sizes, viewing distances, and minimum DPI (calculated using dpi=3438/min_dist).

Fig.2: Viewing distances and minimum DPI for various common print sizes
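The golden-standard steps and the minimum-DPI rule can be combined into a small sketch (function names here are illustrative, not from any established library):

```python
import math

def viewing_range(width_in, height_in):
    """Optimal viewing distances (min, max) in inches: 1.5x and 2.0x the diagonal."""
    diagonal = math.hypot(width_in, height_in)
    return 1.5 * diagonal, 2.0 * diagonal

def min_dpi(distance_in):
    """Minimum DPI needed for a given viewing distance (dpi = 3438 / distance)."""
    return 3438 / distance_in

near, far = viewing_range(20, 30)           # ≈ 54" and 72" for a 20×30" print
dpi = min_dpi(near)                         # ≈ 64 dpi
megapixels = (20 * dpi) * (30 * dpi) / 1e6  # ≈ 2.5 MP needed for the print
```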

Now printing at such a low resolution likely has more limitations than benefits – for example, there is no guarantee that people will view the panorama from a set distance. So there is likely a lower bound to the practical DPI required, probably around 180-200dpi, because nobody wants to see pixels. For the 20×30″ print, boosting the DPI to 200 would only require a modest 24MP image, whereas a full 300dpi print would require a staggering 54MP image! Figure 3 simulates a 1×1″ square representing various DPI configurations as they might be seen on a print. Note that even at 120dpi the pixels are visible – the lower the DPI, the greater the chance of “blocky” features when viewed up close.

Fig.3: Various DPI as printed in a 1×1″ square

Are the viewing distances realistic? As an example consider the viewing of a 36×12″ panorama. The diagonal for this print would be 37.9″, so the minimum distance would be calculated as 57 inches. This example is illustrated in Figure 4. Now if we work out the actual viewing angle this creates, it is 37.4°, which is pretty close to 40°. Why is this important? Well THX recommends that the “best seat-to-screen distance” (for a digital theatre) is one where the view angle approximates 40 degrees, and it’s probably not much different for pictures hanging on a wall. The minimum resolution for the panoramic print viewed at this distance would be about 60dpi, but it can be printed at 240dpi with an input image size of about 25MP.

Fig.4: An example of viewing a 36×12″ panorama
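The panorama numbers can be checked with the same formulas (a sketch, reusing the 3438 constant from the previous post):

```python
import math

# A 36×12" panorama: minimum viewing distance, minimum DPI, and the
# pixel budget if it is instead printed at 240 dpi.
diag = math.hypot(36, 12)                  # ≈ 37.9"
min_dist = 1.5 * diag                      # ≈ 57"
min_dpi = 3438 / min_dist                  # ≈ 60 dpi
mp_at_240 = (36 * 240) * (12 * 240) / 1e6  # ≈ 24.9 MP
```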

So choosing a printing resolution (DPI) is really a balance between: (i) the number of megapixels an image has, (ii) the size of the print required, and (iii) the distance a print will be viewed from. For example, a 24MP image printed at 300dpi will allow a maximum print size of 13.3×20 inches, which has an optimal viewing distance of 3 feet, however by reducing the DPI to 200, we get an increased print size of 20×30 inches, with an optimal viewing distance of 4.5 feet. It is an interplay of many differing factors, including where the print is to be viewed.

P.S. For small prints, such as 5×7 and 4×6, 300dpi is still the best.

P.P.S. For those who can’t remember how to calculate the diagonal, it’s the Pythagorean theorem. So for a 20×30″ print, this would mean:

diagonal = √(20²+30²)
         = √1300
         = 36.06

The math behind visual acuity

The number of megapixels required to print something, or view a television is ultimately determined by the human eye’s visual acuity, and the distance the object is viewed from. For someone with average vision (i.e. 20/20), their acuity would be defined as one arcminute, or 1/60th of a degree. For comparison, a full moon in the sky appears about 31 arcminutes (1/2 a degree) across (Figure 1).

Fig.1: Looking at the moon

Now generally, some descriptions skip from talking about arcminutes to describing how the distance between an observer and an object can be calculated given the resolution of the object. For example, the distance (d, in inches) at which the eye reaches its resolution limit is often calculated using:

d = 3438 / h

where h is the resolution, in ppi for screens or dpi for prints. So if h=300, then d=11.46 inches. Now, calculating the optimal viewing distance involves a magic number – 3438. Where does this number come from? Few descriptions actually give any insights, but we can start with some basic trigonometry. Consider the diagram in Figure 2, where h is the pixel pitch, d is the viewing distance, and θ is the angle of viewing.

Fig.2: Viewing an object

Now we can use the basic equation for calculating an angle, Theta (θ), given the length of the opposite and adjacent sides:

tan(θ) = opposite/adjacent

In order to apply this formula to the diagram in Figure 2, only θ/2 and h/2 are used.

tan(θ/2) = (h/2)/d

So now, we can solve for h.

d tan(θ/2) = h/2
2d⋅tan(θ/2) = h

Now if we use visual acuity as 1 arcminute, this is equivalent to 0.000290888 radians. Therefore:

h = 2d⋅tan(0.000290888/2) 
  = 2d⋅0.000145444

So for d=24”, h= 0.00698 inches, or converted to mm (by multiplying by 25.4), h=0.177mm. To convert this into PPI/DPI, we simply take the inverse, so 1/0.00698 = 143 ppi/dpi. How do we turn this equation into one with the value 3438 in it? Well, given that the resolution can be calculated by taking the inverse, we can modify the previous equation:

h = 1/(2d⋅0.000145444)
  = 1/d * 1/2 * 1/0.000145444
  = 1/d * 1/2 * 6875.49847
  = 1/d * 3437.749
  = 3438/d

So for a poster viewed at d=36″, the value of h=95dpi (which is the minimum). The viewing distance can be calculated by rearranging the equation above to:

d = 3438 / h

As an example, consider the Apple Watch Series 8, whose screen has a resolution of 326ppi. Performing the calculation gives d=3438/326 = 10.55”. So the watch should be held 10.55” from one’s face. For a poster printed at 300dpi, d=11.46”, and for a poster printed at 180dpi, d=19.1”. This is independent of the size of the poster, just printing resolution, and represents the minimum resolution at a particular distance – only if you move closer do you need a higher resolution. This is why billboards can be printed at a low resolution, even 1dpi, because when viewed from a distance it doesn’t really matter how low the resolution is.
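The derivation of the 3438 constant, and the examples above, can be verified numerically with a quick sketch:

```python
import math

arcmin = math.radians(1 / 60)              # 1 arcminute ≈ 0.000290888 rad
constant = 1 / (2 * math.tan(arcmin / 2))  # ≈ 3437.75, the "magic number"

d_300dpi = constant / 300                  # ≈ 11.46" for a 300 dpi print
d_watch = constant / 326                   # ≈ 10.5" for a 326 ppi screen
```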

Note that there are many different variables at play when it comes to acuity. These calculations provide the simplest-case scenario. For eyes outside the normal range, visual acuity is different, which changes the calculations (i.e. the value of θ in radians). The differing values for the arcminute are: 0.75 (20/15), 1.5 (20/30), 2.0 (20/40), etc. There are also factors such as lighting, and how eye prescriptions modify acuity, to take into account. Finally, it should be added that these acuity calculations only take into account what is directly in front of our eyes, i.e. the narrow, sharp vision provided by the foveola in the eye – all other parts of a scene will have slightly less acuity, moving out from this central point.

Fig.3: At 1-2° the foveola provides the greatest amount of acuity.

p.s. The same system can be used to calculate ideal monitor and TV sizes. For a 24″ viewing distance, the pixel pitch is h= 0.177mm. For a 4K (3840×2160) monitor, this would mean 3840*0.177=680mm, and 2160*0.177=382mm which after calculating the diagonal results in a 30.7″ monitor.

p.p.s. If d is measured in centimetres (with h still in dpi), the constant scales by 2.54, and the formula becomes: d = 8733 / h
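The monitor arithmetic in the first postscript can be sketched the same way:

```python
import math

# Pixel pitch at a 24" viewing distance, then the ideal 4K monitor size.
pitch_in = 2 * 24 * math.tan(math.radians(1 / 60) / 2)  # ≈ 0.00698"
pitch_mm = pitch_in * 25.4                              # ≈ 0.177 mm
w_mm = 3840 * pitch_mm                                  # ≈ 681 mm wide
h_mm = 2160 * pitch_mm                                  # ≈ 383 mm high
diag_in = math.hypot(w_mm, h_mm) / 25.4                 # roughly a 30.7" monitor
```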

the histogram exposed (vi) – multipeak

This series of photographs and their associated histograms covers images with multipeak histograms, which come in many differing forms. None of these images is perfect, but they illustrate the fact that sometimes the perfect image is not possible given the constraints of the scene – shadows, bright skies and haze are sometimes just unavoidable.

Histogram 1: A church and hazy hills

This photograph was taken in Locarno, Switzerland; the church in the foreground is the Madonna del Sasso. The histogram is of the multipeak variety, with few highlights. The left-most hump (①) represents the majority of the darker colours in the foreground, e.g. vegetation, and the parts of the building in shadow. The remaining two peaks are in the midtones, and represent various portions of the sky (③) as well as the lake and the hazed-over mountains in the distance (②). Finally, a small amount of highlights (④) represents the clouds and the brightly lit portions of the church.

Fujifilm X10 (12MP): 7.1mm; f/5; 1/850

Histogram 2: Light hillside, dark forest

This is the Norwegian countryside, taken from the Bergen Line train. It is a well contrasted image, with only one core patch of shadow (①), behind the trees in the bottom left (there are a few other shadows in foreground objects such as the trees on the right). The midtones, ②, represent the rest of the landscape, with the lightest midtones and highlights composing the sky, ③.

Olympus E-M5II (12MP): 12mm; f/5; 1/400

Histogram 3: Gray station

This photograph is of the train station in Voss, Norway. It is an image with a good distribution of intensities, with four dominant peaks. The first peak, ①, is representative of the dark vegetation, and metal railings in the scene, overlapping somewhat into the mid-tones. The central peak (②) which is in the midtones, represents the light green pastures, and the large segments of asphalt on the station platform. The third peak, ③, which transitions into the highlights mostly deals with the light concreted areas. Finally there is a fourth peak, ④, which is really just a clipping artifact related to the small region of white sky.

Olympus E-M5II (12MP): 40mm; f/5; 1/320