The Eyes of Eagles (and why Zeiss used them to advertise its lenses)

It was Zeiss who came up with the “Eagle Eye of your Camera” slogan in the 1930s to advertise their lenses (in German “Das Adlerauge Ihrer Kamera”) [1]. Of course they were mostly talking about the Tessar series of lenses.

“The objective should be as the eagle’s eye, whose acuity is proverbial. Where its glance falls, every finest detail is laid bare. Just as the wonderful acuity of the eagle’s eye has its origin, partly in the sharpness of the image produced by its cornea and lens, and partly in the ability of the retina – far exceeding that of man’s vision – to resolve and comprehend the finest details of this delicate image, so, for efficiency, must the camera be provided on the one hand with a ‘retina’ (the plate or film) of the highest resolving power – a fine grain emulsion – and on the other hand with an objective which can produce the needle sharp picture of the eagle’s lens and cornea.”

The Eagle Eye of your Camera (1932)

Zeiss leaned heavily on this simile to describe its lenses. A lens must have the sharpness of an eagle’s eye, and the ability to admit a large amount of light – sharpness and rapidity over a wide field of view – the Zeiss Tessar. While Leica named their lenses to indicate their widest aperture, Zeiss instead opted to name their lenses after the optical design used. Indeed the Tessar came in numerous focal-length/aperture combinations, from a 3¾cm f/2.8 to a 60cm f/6.3.

Zeiss “Eagle Eye” advertising in the 1930s

The Tessar is an unsymmetrical system of lenses: 7 different curvatures, 4 types of glass, 4 lens thicknesses, and 2 air separations, i.e. 17 parameters which can be varied. Zeiss went to great lengths to disseminate the message about Tessar lenses:

  • sharp, flare-free definition
  • great rapidity (allowing short instantaneous exposures)
  • exceptional freedom from distortion (obviating any objectionable curvature)
  • good colour correction
  • compact design (so that light falling off near the edge is reduced to a minimum)
  • sufficient separation of the components of the lens (to allow a between lens shutter)
  • the use of types of glass as free as possible of colour
  • reduction to the minimum of the number of lenses, and particularly of glass-air surfaces
“It must have the sharpness of the eagle’s eye”

It is then not surprising that Zeiss chose to compare the lens to an eagle’s eye. The eagle is considered to be the pinnacle of visual evolution. It is said an eagle can spot a rabbit in a field while soaring at 10,000 feet (1.9 miles, or 3km). Aristotle had already pointed out in his History of Animals (c. 350 BCE) that “the eagle is very sharp-sighted”. The problem however is that it’s not really possible to compare a simple lens against the eye of a living organism. Zeiss was really comparing the Tessar against the lens of an eagle’s eye – or rather the Tessar plus the human eye behind it, because the camera lens is just one part of the equation of analog picture taking. So how does an eagle’s eye compare to a human one?

It’s kind of hard to compare eyes across species because they have evolved to do different things. In birds, unlike humans, each eye looks outwards at a different scene, and the overlap of the visual fields of the two eyes, i.e. the binocular region, is relatively small – typically less than 60° in birds, versus about 120° in humans, and as narrow as 5-10° in some species. Because of this a bird’s total visual field is quite extensive, with just a narrow blind region behind the head. Eagles have a highly developed sense of sight which allows them to easily spot prey. They have 20/5 vision, compared to the average human’s 20/20: an eagle can see from 20 feet what a human can only resolve from 5 feet. They have fixed eye sockets, angled 30° from the mid-line of the face, giving them a 340° view. Many also have violet-sensitive visual systems, i.e. the ability to see ultraviolet light and distinguish more colours than human eyes can.

A Golden eagle, and a cross-section of an eagle’s eye

The first thing to consider is the size of the eye. We will pick one eagle to compare against human vision: the (European) Golden Eagle, because it is quite common in Germany. The average Golden Eagle weighs 6.1kg, versus an average European (human) at 70.8kg. If we assume an eagle’s eye is similar in weight to a human eye (ca. 7.5g), then an eagle’s eye comprises about 0.12% of its body weight, versus about 0.01% for a human. So for a human eye to be equivalent on an eye:body weight ratio, it would need to weigh roughly 85g. But this is really an anecdotal comparison; the bigger picture lies with the construction of the eye.
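As a sanity check, the arithmetic above can be sketched in a few lines of Python. (The 85g figure comes from rounding the ratio to 0.12% before scaling; the unrounded ratio gives closer to 87g.)

```python
# Rough eye-to-body mass comparison, using the figures quoted above.
# Assumption: an eagle's eye weighs about the same as a human eye (~7.5 g).

EYE_MASS_G = 7.5        # approximate mass of a human (and assumed eagle) eye
EAGLE_BODY_G = 6_100    # average Golden Eagle, 6.1 kg
HUMAN_BODY_G = 70_800   # average European human, 70.8 kg

eagle_ratio = EYE_MASS_G / EAGLE_BODY_G   # ~0.12% of body mass
human_ratio = EYE_MASS_G / HUMAN_BODY_G   # ~0.01% of body mass

# Mass a human eye would need to match the eagle's eye:body ratio.
equivalent_human_eye_g = HUMAN_BODY_G * eagle_ratio

print(f"eagle eye:body ratio = {eagle_ratio:.4%}")
print(f"human eye:body ratio = {human_ratio:.4%}")
print(f"equivalent human eye mass = {equivalent_human_eye_g:.0f} g")
```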

One of the reasons birds of prey have such incredible eyesight is that their deeper foveae accommodate a greater number of photoreceptors. The central fovea of an eagle’s eye packs about 1 million cones per square millimetre, compared to roughly 200,000 in a human eye. Eagles achieve this increased resolution by reducing the space between their photoreceptors. Due to the physics of light, the absolute minimum separation between cones for an eye to function correctly is 2µm (0.002mm). As the space between photoreceptors decreases, so too does the minimum size of detail that can be resolved.

Parts of an eagle’s vision

The other thing of relevance is that while humans have one fovea, eagles generally have two: a central fovea used for hunting (cone separation 2µm, versus 3µm in humans), and a secondary fovea which provides a high-resolution field of view to the side of the head. Like a camera sensor, more (and denser) cones means better resolution. For context, Robert Shlaer [2] suggested that the resolution of a Golden Eagle’s eye may be anywhere from 2.4 to 2.9 times that of a human, and the Martial Eagle’s somewhere between 3.0 and 3.6 times. The spatial resolution of a Wedge-tailed Eagle is between 138-142 cycles per degree [3], while that of a human is a mere 60. Their foveae are also distinctly shaped – deep and convex, as opposed to the rounded, shallow single fovea of human eyes. In a 1978 article in the scientific journal Nature, Snyder and Miller [4] proposed that the unique shape of the foveae found in some birds of prey may act like a telephoto lens, magnifying their vision – which is perhaps why these feathered predators can spot food from so far up in the sky. Like humans, eagles can change the shape of their lens, but in addition they can also change the shape of their corneas, allowing more precise focusing and accommodation than humans achieve.
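The link between cone spacing and resolving power can be sketched with the standard Nyquist argument: one degree of visual angle covers (π/180)·f millimetres of retina, where f is the eye’s posterior nodal distance, and resolving one cycle takes two cone spacings. The nodal distances below (~17mm human, ~22mm eagle) are illustrative assumptions, not figures from the text – which is why this simple estimate undershoots both the human behavioural limit (~60 cycles/degree) and the eagle measurements above – but it does show how tighter cone packing scales resolution.

```python
import math

def nyquist_cpd(nodal_distance_mm: float, cone_spacing_mm: float) -> float:
    """Nyquist-limited spatial resolution in cycles per degree.

    One degree of visual angle subtends (pi/180) * nodal_distance on the
    retina; resolving one cycle requires two cone spacings.
    """
    mm_per_degree = math.radians(1) * nodal_distance_mm
    return mm_per_degree / (2 * cone_spacing_mm)

# Cone spacings from the text; nodal distances are assumed values.
human = nyquist_cpd(17.0, 0.003)   # 3 um cone spacing
eagle = nyquist_cpd(22.0, 0.002)   # 2 um cone spacing

print(f"human ~ {human:.0f} cycles/degree")
print(f"eagle ~ {eagle:.0f} cycles/degree")
```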

But Zeiss themselves harped on the limitations of the simile: the fact that an eagle can quickly turn its head to allow for viewing in any direction, and the fact that its retina is curved, not flat. From the perspective of resolution the ads rang true; the other aspects of an eagle’s vision did not. Yes, telephoto lenses based on the Tessar design could certainly see further than a human, and given the right lens and film could see into the violet spectrum, but Zeiss’s claim was really about finding a provocative way to describe its lenses – one which would ultimately sell them.

Further reading:

  1. Zeiss Brochure: “The Eagle Eye of your Camera”, Carl Zeiss, Jena (1932)
  2. Robert Shlaer, “An Eagle’s Eye: Quality of the Retinal Image”, Science, 176, pp. 920-922 (1972)
  3. Reymond, L., “Spatial visual acuity of the eagle Aquila audax: a behavioural, optical and anatomical investigation”, Vision Research, 25(10), pp.1477-1491 (1985)
  4. Snyder, A.W., Miller, W.H., “Telephoto lens system of falconiform eyes”, Nature, 275, pp.127-129 (1978)

The whole “compact” thing

There was a time when the compact camera had quite a market. These cameras sat somewhere between the larger-sensor cameras (full-frame/APS-C) and tiny-sensor “pocket” cameras. The tiny-sensor cameras died off with the evolution of the smartphone. Nowadays there aren’t many compacts left, perhaps victims of the success of smartphone cameras, or of mirrorless – a mirrorless offers almost the same form factor with interchangeable lenses and more features. Contenders now include the Ricoh GR series, the Fujifilm X100V, and the Sony RX100. Compacts often try to do too much, and maybe that is a function of modern technology, where smaller does not mean reduced functional capacity. Many compacts do nothing particularly well, but maybe they were never meant to. They offer too many controls to be simple, but too few to be versatile. They are often built by cramming too much technology into the one thing that unifies them all – a small space. For a compact camera should be exactly that: compact. And if they are compact, it is unlikely they will win awards for ergonomics; cameras with small footprints may not fit comfortably into everyone’s hands.

Compact cameras are exceptional for the purposes of street photography. The best example of this is legendary Japanese street photographer Daidō Moriyama. He has used Ricoh compacts for years, from the early GR film series to the digital GR.

“The GR has been my favorite since it was a film camera. Because I’m so used to it, I felt comfortable with the new GR III immediately. When I shoot with a 28mm fixed lens machine, I remember my old days. Comfortable enough to take photographs to your heart’s content. For my street photography, the camera must be compact and light-weighted.”

Daidō Moriyama

But here’s the thing: I don’t buy a compact to be the pinnacle of cameras. The job of the compact is to be compact. It should offer a good set of features, but obviously cannot offer them all. The role of the compact in my life is simple: pocketable, easy to use, small, inconspicuous. It’s for that reason that my GR III sits around the kitchen for photographing recipes, or slips into a pocket for a walk about town. It’s small, compact, and oh so discreet. You can get into trouble in places like transit systems using mirrorless cameras because they seem too professional, but compacts? Compacts scream inconspicuous.

Comparing some features of the Ricoh GR III (compact) against the Fuji X-H1 (mirrorless). Both have the same 24MP APS-C sensor and IBIS.

It is of course impossible to find the perfect camera – it doesn’t exist – and the perfect compact even less so, but modern ones offer a good amount of technology and convenience. The Ricoh GR III for example offers image stabilization and snappy focus, at the expense of losing the flash (not something I use much anyways), a battery life that isn’t great (carry a spare), and no weather sealing (not that big a deal). Its low-light performance is impressive, and I don’t need much more than a 28mm-equivalent lens. Its role is street photography, or kitchen photography, or back-up travel camera – for taking photographs in those places where photography is technically “not allowed”. It also offers a 24MP APS-C sensor, which is more pixels than anyone needs. In fact these cameras could likely get even better if we ditched some of the onboard cruft. Compacts don’t necessarily need 4K video or 300-point AF tracking; the more features and customization, the more likely you are to lose track of what is going on.

Pros
  • versatility – Fills a niche between smartphones and mirrorless cameras.
  • macro – Many provide some sort of capacity to take close-up photos.
  • small – Unobtrusive and lightweight, making them easy to carry.
  • automatic – No fiddling with settings and missing the shot.

Cons
  • limited control – Lack the low-level controls found in interchangeable-lens cameras.
  • low-light – Often not well suited to low-light conditions.
  • fixed lens – Not as flexible as interchangeable-lens cameras.
  • battery – Shorter battery life because of the smaller battery.

Pros and cons of compact cameras

This is the fourth compact I’ve owned. The first was a Canon PowerShot G9, then the Fuji X10, followed by the Leica D-Lux 6 (likely the only Leica I will ever own). The Ricoh GR III gives me the same sensor size as my Fuji X-H1 but is much easier to take places when travelling, and provides far more versatility than my iPhone 14, plus twice the resolution.

Vintage lens makers – Heinz Kilfitt (Germany)

If it were not for one particular moment in time, Kilfitt might not be as well known a brand as it is. That moment was the use of the Kilfitt Fern-Kilar f/5.6 400mm lens in Alfred Hitchcock’s 1954 movie “Rear Window”, where the lens, as well as the Exakta camera it was attached to, played a prominent role (in fact no other camera/lens combination has likely ever had such a leading role).

Kilfitt was one of the most innovative lens makers of the 1950s. Born in Westphalia in 1898, Heinz Kilfitt had quite a pedigree for design. Before the war he had established his reputation by designing the Robot I (24×24mm format), the first motorized camera, introduced in 1934. After the design was rejected by Agfa and Kodak, Kilfitt partnered with Hans-Heinrich Berning to develop the camera, and in 1939 he sold his interests in Robot to Berning. In Munich, Kilfitt acquired a small optical company, Werkstätte für Präzisionsoptik und Mechanik GmbH, where he began developing lenses for the likes of 35mm systems.

The Kilfitt lens used in Rear Window.

By the end of the war in 1945 Kilfitt had very little left – basically a run-down plant and few workers. He started a camera repair shop for US army personnel, and by 1948 had begun to manufacture precision lenses. Kilfitt devoted himself to what he considered an inherent problem with the photographic industry: the lack of lens-mount universality – every camera had to have its own set of lenses. This led him to introduce the “basic lens” system in 1949. In this system each lens was supplied as a “short mount”, the rear of which had a male thread accommodating a series of adapters [1] – some for SLRs, some for C-mount, some for reflex housings.

Like many independent lens companies, Kilfitt produced a series of lenses which could be adapted to almost any camera by means of lens mounts. One of their core brands was Kilar.

While the company is famous for its telephoto lenses, it actually specialized in another area: macro. Early SLR lenses such as the Biotar 58mm f/2 could focus as close as 18 inches, which likely seemed quite amazing considering the best a rangefinder could do was 60-100cm. Kilfitt thought he could do better, producing the world’s first 35mm macro lens, the 40mm f/2.8 Makro-Kilar, in 1955 [3]. It was what Norman Rothschild called the first “infinity-to-gnats’-eyeball” lens [2]. It was offered in two versions: one that focused from ∞ to 10cm, with a reproduction ratio of 1:2, and one that focused from ∞ to 5cm, with 1:1.

The early version of the Makro-Kilar, showing the Edixa-Reflex version.

Heinz Kilfitt also continued developing cameras. The Kilfitt-Reflex 6×6 appeared around 1952, a camera that had a new system for quickly changing lenses, a complex viewfinder and a swing-back mirror. It influenced the design of other 6×6 format cameras, e.g. Kowa 6. There was also the Mecaflex SLR, another 24×24mm camera produced from 1953-1958 (first by Metz Apparatefabrik, Fürth, Germany later by S.E.R.A.O. Monaco). It was constructed by Heinz Kilfitt, who also supplied the lenses (Kilfitt Kamerabau, Vaduz, Liechtenstein).

Lens                           Smallest aperture   AOV   Shortest focus   Weight
40mm Makro-Kilar f/2.8         f/22                54°   2-4″             150g
90mm Makro-Kilar f/2.8         f/22                28°   8″               480g
135mm KILAR f/3.8              f/32                18°   60″              260g
150mm KILAR f/3.5              f/22                16°   60″              400g
300mm TELE-KILAR f/5.6         f/32                –     120″             990g
300mm PAN-TELE-KILAR f/4       f/32                –     66″              1930g
400mm FERN-KILAR f/4           f/45                –     30′              1760g
400mm SPORT-FERN-KILAR f/4     f/45                –     16′              2720g
600mm SPORT-FERN-KILAR f/5.6   f/45                –     35′              4080g
The more commonly available Kilfitt lenses

When Heinz Kilfitt retired in 1968 he sold the company to Dr. Back, who operated it under the Zoomar name from its headquarters in Long Island, New York. Back had designed the first production 35mm SLR zoom, the famous 36-82mm f/2.8 Zoomar, in 1959. The company eventually transitioned the brand to Zoomar-Kilfitt, and then merged it completely into Zoomar. By this stage the company was providing lenses for 12.84×17.12mm, 24×36mm and 56×56mm formats. The most notable addition to the line-up was the Macro Zoomar 50-125mm f/4.

The lens selection provided by Zoomar-Kilfitt

Note that the Zoomar lenses are often cited as products of Kilfitt; although some of them may have been produced in the Kilfitt factories, Zoomar was its own entity. Kilfitt was, for example, contracted to manufacture the groundbreaking 1960 Zoomar 36-82mm lens for Voigtländer.

The evolution of the Kilfitt brand logos

Notable lenses: FERN-KILAR 400mm f/4, Makro-Kilar 40mm f/2.8

Further reading:

  1. Norman Rothschild, “An updated view of the Kilfitt system”, The Camera Craftsman, 10(2), pp.10-15 (1964)
  2. Norman Rothschild, “The revolution in SLR lenses”, Popular Photography, 60(6), pp.90-91,130-131 (1967)
  3. Berkowitz, G., “New.. Makro Kilar Lens”, Popular Photography, pp.86-87,106,108 (Mar, 1955)
  4. Kilfitt Optik, Photo But More
  5. ROBOT – Who came up with the idea? Kilfitt or Berning? Two genealogists come together to new discoveries…, fotosaurier (2021) article in German

Is luminance the true reality?

Like our senses of taste and smell, colour helps us perceive and understand the world around us. It enriches our lives, helps us comprehend the aesthetic quality of art, and lets us differentiate between the many things around us. Yet colour is not everything. Pablo Picasso said that “Colors are only symbols. Reality is to be found in luminance alone.” But is this a valid reality?

There is a biological basis for the fact that colour and luminance (what most people think of as B&W) play distinct roles in our perception of art, or of real life – colour and luminance are analyzed by different portions of our visual system, and as such they are responsible for different aspects of visual perception. The parts of our brain that process information about colour are located several cm away from the parts that analyze luminance – as anatomically distinct as vision is from hearing. The part that processes colour information is found in the temporal lobe, whereas luminance information is processed in the parietal lobe.

Below is a comparison of Vincent van Gogh’s Green Wheat Field with Cypress (1889), with a version containing only luminance. Our ability to recognize the various regions of vegetation and to perceive their three-dimensional shape and the spatial organization of this scene depends almost entirely on the luminance of the paints used, and not their colours.

Green Wheat Field with Cypress (1889)

Yet a world without colour is one that forfeits crucial elements. While luminance provides the structure of a scene, colour allows us to see the scene more precisely. In the colour image above, it lets us better differentiate the different greens of the grasses and the blues of the sky. In the B&W image the grasses are less distinct, the vibrancy of the green trees and bushes is absent, and there is very little differentiation between the colours of the sides and roof of the cottage. Of course we must always remember that colour is almost never seen exactly as it physically is – all colour perception is relative. The images below compare the luminance and chrominance information for the image above; the chrominance information is extracted from the HSB colour space, and incorporates the hue and saturation components. Note how it lacks the “structural” information which is bestowed by light and dark.
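For the curious, a separation like this can be sketched per pixel in Python. This is only an assumption about how such images might be produced, not a description of how these ones were: luminance here uses the common Rec. 601 weights, and the “chrominance” pixel keeps hue and saturation from HSV/HSB while flattening brightness.

```python
import colorsys

def split_luminance_chrominance(rgb):
    """Split an (r, g, b) pixel (floats in 0-1) into luminance and a
    chrominance-only pixel.

    Luminance uses the Rec. 601 weights; the chrominance pixel keeps the
    hue and saturation (the H and S of HSV/HSB) with brightness fixed at 1.
    """
    r, g, b = rgb
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    h, s, _v = colorsys.rgb_to_hsv(r, g, b)
    chrominance = colorsys.hsv_to_rgb(h, s, 1.0)   # same colour, flat brightness
    return luminance, chrominance

# A pure green and a pure blue have very different luminance,
# which is why greens read brightly in B&W while blues go dark.
lum_green, _ = split_luminance_chrominance((0.0, 1.0, 0.0))
lum_blue, _ = split_luminance_chrominance((0.0, 0.0, 1.0))
print(lum_green, lum_blue)   # 0.587 vs 0.114
```

Applying this to a whole image is just a matter of mapping the function over every pixel (e.g. with Pillow or NumPy).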

Luminance
Chrominance (i.e. colour information)

Is luminance more important than colour? In some ways yes, because of the way our eyes have evolved. Our eyes perceive light and dark, as well as colour, through rods and cones. Rods are very sensitive to light and dark (and help give us good vision in low light), whereas cones are responsible for colour information. But rods are far more plentiful than cones: across the retina there are roughly 20 times more rods (≈100-120 million) than cones (≈5-6 million), though the central fovea itself is packed almost exclusively with cones. In reality, the detail we perceive in a scene is carried mostly by the information about light and dark. So reality can be found in luminance alone, in the sense that even without colour we can still perceive what is in a scene (people with achromatopsia, a complete lack of colour vision, do exactly that).

But for most humans colour is an integral part of our vision, we cannot switch it off at will, in the same way that we engage a B&W mode in a camera. It allowed our early ancestors to see colourful ripe fruit more easily against a background of mostly green forest, and it allows us to appreciate the world around us.

Further reading:

Margaret Livingston, Light Vision, Harvard Medical Alumni Bulletin, pp.15-23 (Autumn, 2003)

Zeiss versus Zeiss : the trademark dispute

As cooperation deteriorated, and finally terminated in 1953, it was inevitable that there would be trademark disputes between the two Zeisses – they were, after all, on different sides of the Iron Curtain. The East German Carl Zeiss company did not own all the rights to some of the names and brands. This would likely have been fine had its products been sold only within the Eastern-bloc countries; however many were made to be exported to the West (which is somewhat ironic) – lenses were developed to sell in the West to earn hard currency. They achieved this at the beginning by resurrecting pre-war designs. Political control over East Germany had little influence on how the products themselves were manufactured.

Zeiss vs. Zeiss branding over the years

In February 1954 Zeiss in Heidenheim fired the first shots in what would eventually become a worldwide litigation. They obtained an injunction in the District Court of Goettingen to prevent the continued sale of Jena-made, Zeiss-marked goods [1]. In April Zeiss Jena countered in West Germany by seeking an injunction and an order registering the Zeiss marks in West Germany in its name. That action was dismissed in 1960 when the West German Supreme Court ruled that there was no one in the Soviet Zone having capacity to represent the Zeiss Foundation.

In the same year Zeiss Heidenheim brought action against Zeiss Jena to prevent them from using the Zeiss name and trademarks anywhere in the world. The Supreme Court of the Federal German Republic determined that the Heidenheim firm was entitled to exclusive use of the Zeiss name and trademarks in West Germany and West Berlin [1]. Interestingly, a CIA report from 1954 [2] suggests that should the naming issues take an “unfavourable” turn for VEB CZJ, the plan was to change its name to VEB Ernst Abbe Werk (which they obviously never did).

Information provided by lens markings

There was also a long court battle in the US over who owned the rights to the Zeiss name. The litigation commenced on February 14, 1962, filed by the Carl Zeiss Foundation and Zeiss Ikon AG against VEB Carl Zeiss Jena and its US distributors [1] (Carl Zeiss Stiftung v. VEB Carl Zeiss Jena). The case went to discovery from 1963-1967 and finally to trial in November 1967. On November 7, 1968, the court found in favour of the plaintiffs, deciding that the US trademarks “Zeiss”, “Zeiss Ikon”, and “Carl Zeiss Jena” were the property of the Zeiss firm located in West Germany. As to the legitimacy of this decision, the courts found that the original “Stiftung” ceased to exist in Jena when it was stripped of its assets; the Stiftung’s domicile was then changed from Jena to Heidenheim. It was not until 1971 [3] that the US Supreme Court finally settled Carl Zeiss Stiftung v. VEB Carl Zeiss Jena, after a 9½-year battle for control of the “Zeiss” trademark, siding with Heidenheim.

Examples of Carl Zeiss Jena lens markings over the years.

After this, Carl Zeiss marketed their lenses as “Carl Zeiss” exclusively in the United States, whereas Carl Zeiss Jena exported their lenses to the US with the marking “aus Jena”, or sometimes “JENOPTIK”, or even “JENOPTIK JENA”. The branding on these lenses was changed – “T” instead of Tessar, “B” for Biotar, “Bm” for Biometar, “S” for Sonnar, “F” for Flektogon, etc. – in order not to infringe the trademarks. Therefore a lens might be labelled “Carl Zeiss Jena S”, or “aus Jena S”, and be exactly the same lens. It really depended on where the lenses were sold.

  • In the Eastern-bloc countries, CZJ could use the name “Carl Zeiss”. Carl Zeiss Oberkochen was not allowed to use “Zeiss” by itself, and instead used the name “Opton” or “Zeiss-Opton”.
  • In some western countries – namely West Germany, Italy, Greece, Holland, Belgium, Luxembourg, and Austria – CZO was allowed to use the name “Carl Zeiss”. CZJ chose to use the name “aus Jena” in the case of lenses.
  • In the rest of the world – e.g. Commonwealth countries like England and Canada, Switzerland, Japan – both companies could use the name “Carl Zeiss”, but only with an indicator of origin. For example CZO used “Carl Zeiss West Germany”, and CZJ used “Carl Zeiss Jena” or added “DDR” somewhere.
Examples of Carl Zeiss Opton lens markings over the years.

Of course it is also easy to identify a lens if it is marked with DDR. Some lenses were made in only East or West Germany, while others had names which continued to be shared.

  • East German only lenses: Biometar (a modified Planar), Flektogon (similar to Distagon), Flexon, Pancolar
  • West German only lenses: Distagon
  • Shared lenses: Hologon, Biogon, Biotar, Magnar, Planar, Protar, Sonnar, Tessar, Topogon, Triotar

Further reading:

  1. Shapiro, I., “Zeiss v. Zeiss – The Cold War in a Microcosm”, International Lawyer, 7(2) pp.235-251 (1973)
  2. “Possible Name Change of VEB Carl Zeiss Jena”, Central Intelligence Agency, Information Report, 22 Nov (1954)
  3. Allison, R.C., “The Carl Zeiss Case”, International Lawyer, 3(3), pp.525-535 (1969)

Why is the sky different shades of blue?

One of the more interesting aspects of photographing outdoors is the colour of the sky. You know the situation – you’re out photographing and the sky just isn’t as vivid as you wished it was. This happens a lot in the warmer months of the year.

The sky isn’t actually blue of course. We perceive it as blue because of light and its interaction with the atmosphere. The shade of blue also changes: the difference is most noticeable in fall and winter, when the sky appears a more vivid blue than it does throughout the summer months (which is why you can never expect a really vivid blue sky in summer when travelling).

The blue of the sky is more saturated further from the sun. Note that in this image taken in Toronto in May, the right side, furthest from the sun appears more saturated.

Firstly, the blue colour of the sky is due to the scattering of sunlight off molecules in the atmosphere smaller than the wavelength of light (approximately 1/10th the wavelength). The atmosphere is made up of gases, e.g. nitrogen, oxygen, and argon, mixed with particles such as dust, pollen, and pollution. The scattering is known as Rayleigh scattering, and is most effective at the short-wavelength (400nm) end of the visible spectrum. Therefore the light scattered down to earth at a large angle with respect to the direction of the sun’s light is predominantly at the blue end of the spectrum. Because of this wavelength-selective scattering, more blue light diffuses throughout the atmosphere than other colours, producing the familiar blue sky.
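Rayleigh scattering strength goes as 1/λ⁴, so the relative scattering of two wavelengths is easy to work out. (The 450nm and 700nm values below are typical choices for blue and red light, not figures from the text.)

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so short (blue)
# wavelengths scatter far more strongly than long (red) ones.

def rayleigh_ratio(lambda_short_nm: float, lambda_long_nm: float) -> float:
    """Relative scattering strength of the shorter vs the longer wavelength."""
    return (lambda_long_nm / lambda_short_nm) ** 4

# Blue (~450 nm) versus red (~700 nm) light:
print(f"blue scatters ~{rayleigh_ratio(450, 700):.1f}x more than red")
```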

An illustration of how Rayleigh Scattering works in the atmosphere.

During the summer months, when the sun is higher in the sky, light does not have to travel as far through the atmosphere to reach our eyes; consequently there is less Rayleigh scattering. Summer skies also often appear somewhat hazy, veiled by a thin white sheen. When light encounters large particles suspended in the air, like dust or water-vapour droplets, all wavelengths are scattered roughly equally. This process is known as Mie scattering, and produces white light, e.g. making clouds appear white. In summer in particular, increased humidity increases Mie scattering, and as a result the sky tends to be relatively muted, or pale blue.

The visibility of clouds can be attributed to Mie scattering, which is not very wavelength dependent.

In the fall and winter the Northern Hemisphere is tilted away from the sun, so the sun’s angle is lower, which increases the amount of Rayleigh scattering (light has to travel further through the atmosphere, so the scattering of shorter wavelengths is more complete). The cooler air during this period also holds less moisture, diminishing Mie scattering. These two factors taken together can produce skies that are vividly blue.

Angle of the sun, summer versus winter.

Q: If the wavelength of purple is only 380nm, why don’t we see more purple skies?
A: Purple skies are rare because the sun emits a higher concentration of blue light waves in comparison to violet. Furthermore, our eyes are more sensitive to blue than to violet, meaning that to us the sky appears blue.

Q: What particles cause Rayleigh Scattering?
A: Small specks of dust or nitrogen and oxygen molecules.

Moriyama on the power of photographs

“Of course, in the instant you press the shutter button, a memory of the image flashes across your mind, together with the various things you’re thinking about in that moment – aesthetic considerations, concepts, desires. But whatever’s in the photograph stands completely independent of those thoughts. That is what remains – and it’s completely independent. That is what calls to you years, maybe decades later: “Hey! What do you think?” That’s what’s so amazing. That’s why photography is so powerful.”

Daido Moriyama How I Take Photographs, Takeshi Nakamoto (2019)

The real info regarding angle-of-view on iPhone cameras

I must say, I quite like the wide lenses on the iPhone 14. It has two rear-facing cameras: an ultra-wide with a focal length of 13mm, and a 26mm wide (both full-frame equivalents). I don’t really want to review these cameras, because others have already done so extensively. An example portrait shot taken with each camera is shown below in Figure 1 (the Gooderham “flatiron” Building in Toronto).

Fig.1: Example of portrait photos using both 26mm and 13mm cameras.

But I do want to talk briefly about the Angle of View (AOV) of these cameras. Firstly, you really have to hunt for some of this information. Apple doesn’t really talk about sensor size, or even AOV, to any great extent. The most they give you is that the AOV of the ultrawide camera is 120°. But that isn’t the full story (maybe because most people don’t care?). It may be 120°, but only in landscape mode, and that figure describes the diagonal angle, which as I have mentioned before isn’t really that useful for most people because it is much harder to conceptualize than a horizontal angle (TVs have the same problem: the quoted diagonal says little about the actual width and height).

Pixel count   Focal length    Sensor size              f-number   AOV (landscape)      Crop factor
12MP          26mm (equiv.)   Type 1/1.7 (9.5×7.6mm)   f/1.5      69° (H)              4.6
12MP          13mm (equiv.)   Type 1/3.4 (4×3mm)       f/2.4      108° (H), 120° (D)   8.6
iPhone 14 (rear-facing) camera specs

So the wide-angle camera has a horizontal AOV of 69°, and the ultrawide 108°. But this applies when a photograph is taken in landscape mode. When a photograph is taken in portrait mode, the horizontal AOV becomes what was the vertical AOV in landscape mode – a mere 50° for the wide, and 85° for the ultrawide. This is the same for any sensor in any camera, because in portrait orientation the width of the photo is the shorter side of the frame. On mobile devices such as the iPhone this matters even more, because most photos are likely taken in portrait mode.
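These numbers can be checked from the full-frame equivalent focal lengths with the standard angle-of-view formula, AOV = 2·arctan(d/2f), where d is the relevant frame dimension (36mm horizontal, 24mm vertical, or the diagonal of a 36×24mm frame):

```python
import math

def aov_degrees(frame_dim_mm: float, focal_mm: float) -> float:
    """Angle of view across one frame dimension: 2 * atan(d / 2f)."""
    return math.degrees(2 * math.atan(frame_dim_mm / (2 * focal_mm)))

# Full-frame equivalents for the iPhone 14's two rear cameras.
for f in (26, 13):
    h = aov_degrees(36, f)                  # landscape horizontal
    v = aov_degrees(24, f)                  # landscape vertical = portrait horizontal
    d = aov_degrees(math.hypot(36, 24), f)  # diagonal
    print(f"{f}mm equiv: H={h:.0f}  V={v:.0f}  D={d:.0f}")
```

This reproduces the 69°/108° horizontal figures, the 50°/85° portrait figures, and (within rounding) the 120° diagonal Apple quotes for the ultrawide.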

Examples of the AOVs in portrait mode for each of the focal lengths, as they relate to the photographs in Figure 1, are shown below in Figure 2 (along with the potential AOVs for landscape mode).

Fig.2: A visual depiction of the portrait AOVs associated with the photographs of Fig.1

This is really more of a specification problem – information I wish Apple would just publish instead of ignoring. Some people are actually interested in these sorts of things.

Zeiss versus Zeiss : the postwar split

One of the things that gets very confusing for some people is differentiating between Zeiss lenses from East and West Germany. First, let’s look at the backstory. Prior to World War II, Carl Zeiss Jena had been one of the largest suppliers of optical goods in the world. Note that Carl Zeiss was an optical company, distinct from Zeiss Ikon, a camera company formed in 1926 from the merger of four camera makers: Contessa-Nettel, Ernemann, Goerz and Ica. Both were members of the Carl Zeiss Foundation.

During the war, Jena was pounded by Allied bombing – the British bombed the Zeiss works on 27 May 1943, and the Americans did so twice more in 1945. The damage was not enough to put the factories out of commission, but it was enough to slow production. Jena was captured by the American 80th Infantry Division on 13 April 1945, and would remain under US control for two months before the Americans withdrew in favour of the Soviet forces. As they departed, the Americans took with them 122 key personnel from Jena (from both Carl Zeiss and Schott) to Heidenheim in the US zone of occupation. At the conclusion of hostilities in 1945, Germany was split into occupation zones, and as Jena lay in the German state of Thuringia, it came under Soviet control (per the Yalta Conference agreement).

A New York Times article from September 1946 suggested that the Russians were taking US$3,000,000 worth of finished products monthly as reparations [1]. At this stage there had been very little dismantling of equipment for shipment back to Russia. In fact an earlier NYTimes article [2] suggested that the Russian occupation authorities had actually stimulated production at the Zeiss plants to pre-war levels, in order to facilitate reparations. It should be noted that the Zeiss plant produced more than just photographic optics – it also made microscopes, medical and surgical instruments, ophthalmic instruments, geodetic instruments, electron microscopes, binoculars, and military items [3].

The bombing damage to the Zeiss Jena plant

On 22 October 1946, the Soviet occupation authorities began dismantling the Zeiss plant [3] as part of the war reparations agreed upon in the Potsdam Agreement. This was part of Operation Osoaviakhim, which involved many industries across Germany. It resulted in the removal of 93% of Zeiss’ equipment (including raw materials, pipes, boilers, sanitary installations, etc.), and the deportation of 275 Zeiss specialists [4] to various locations in the USSR (approximately 90% of those deported would return to Jena in 1952). The taking of war “booty” was of course entirely legitimate, yet as Peter Nettl put it in a 1951 article, “Like a child long deprived of chocolate, the first Soviet ‘dismantlers’ flung themselves on all the available tidbits” [5].

A US intelligence report from July 1947 described the status of the Zeiss works at Jena [6]. It suggested that optical and photographic production had been least affected by the dismantling, with the plant producing lenses for the Soviets (the Tessar 5cm f/3.5). The dismantling program was completed by April 1947 [7], after which the Soviet High Command turned the plant over to the Germans, who re-established it. About 1000 machines remained at Jena after the dismantling, allowing the continued production of eyeglasses, camera lenses, medical glass and measuring instruments [8]. There was every hope at the time (at least on the West German side) that this was a temporary situation, and that within 3-5 years the Heidenheim staff would move back to Jena [6].

In June 1948, the Zeiss Jena plants were expropriated by the Land Expropriation Commission [9] and transferred to state ownership, becoming “VEB Carl Zeiss Jena”. In the American zone, Zeiss was reborn as “Opton Optische Werke Oberkochen GmbH” in 1946, renamed “Zeiss-Opton Optische Werke Oberkochen GmbH” in 1947, and finally Carl Zeiss in 1951. The western firm had very little except the relocated personnel and, supposedly, a quantity of Zeiss documents. In 1949 Germany officially split into East Germany (Deutsche Demokratische Republik) and West Germany (Bundesrepublik Deutschland). Between 1948 and 1953 the two firms cooperated commercially with one another, after which cooperation deteriorated as the East German regime tightened its control over the VEB.

Like Carl Zeiss, Zeiss Ikon (Dresden), best known for its Contax camera, also split in 1948. In the west, it was reformed as Zeiss Ikon AG Stuttgart, which continued the Contax rangefinder line, releasing the Contax IIa and IIIa cameras in the early 1950s, and merged with Voigtländer in the mid-1960s. In the east, Zeiss Ikon became state-owned, known as VEB Zeiss Ikon Dresden (ZID). ZID may be best known for its advanced SLR model, the Contax S, introduced in 1948.

Further reading:

  1. “Russians take 90% of Zeiss Output”, The New York Times, Sept. 10, 1946.
  2. “Russians Increase German Industry”, The New York Times, July 5, 1946.
  3. “Activities at the Zeiss Plant, Jena”, Central Intelligence Agency, Information Report, 28 May (1953)
  4. “Deportation of Technicians and Specialists from Karl Zeiss, Jena”, Central Intelligence Group, Information Report, 13 January (1947)
  5. Nettl, P., “German Reparations in the Soviet Empire”, Foreign Affairs, 29(2), pp.300-307 (1951)
  6. “Status of the Zeiss Works in Jena and Moscow”, Central Intelligence Group, Intelligence Report, July (1947)
  7. “Layout and Organizational Setup of the Jena VEB Carl Zeiss”, Central Intelligence Agency, Information Report, 29 August (1955)
  8. “Dismantling, Production in the Soviet Zone”, Central Intelligence Group, Information Report, May (1947)
  9. Allison, R.C., “The Carl Zeiss Case”, The International Lawyer, 3(3), pp.525-535 (1969)

Colour (photography) is all about the light

Photography in the 21st century is interesting because of all the fuss made about megapixels and sharp glass. But none of the tools of photography matter unless you have an innate understanding of light, for it is light that makes a picture. Without light, the camera is blind, capable of producing only dark, unrecognizable images. Sure, artificial light can be used, but photography is mostly about natural light. It is light that provides colour, helps interpret contrast, determines brightness and darkness, and sets tone, mood, and atmosphere. Yet in our everyday lives, light is often somewhat taken for granted.

One of the most important facets of light is colour. Colour begins and ends with light; without light, i.e. in darkness, there is no colour. Visible light belongs to a vast family of electromagnetic “waves” that starts with radio waves, with wavelengths of several thousand kilometres, passes through heat (infrared) radiation, visible and ultraviolet light, and X-rays, and ends with the gamma radiation of radium and cosmic rays, with wavelengths so short that they have to be measured in fractions of a millionth of a millimetre. Visible light is of course the part of the spectrum to which the human eye is sensitive, ca. 400-700nm. For example, the wavelengths we perceive as green lie in the range 500-570nm.
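To make the idea of wavelength bands concrete, here is a small Python sketch mapping a wavelength in nanometres to a conventional colour name. The band edges are rounded approximations of my own choosing – sources disagree on the exact boundaries, so treat them as illustrative, not definitive:

```python
# Approximate band edges in nm; the boundaries vary between sources.
BANDS = [
    (380, 450, "violet"),
    (450, 500, "blue"),
    (500, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 700, "red"),
]

def spectral_band(wavelength_nm: float) -> str:
    """Return a rough colour name for a wavelength of visible light."""
    for lo, hi, name in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible spectrum"

print(spectral_band(550))   # green
print(spectral_band(585))   # yellow
print(spectral_band(1000))  # outside the visible spectrum
```

Note that any such lookup only labels single wavelengths; most colours we actually see (browns, pinks, white itself) are mixtures with no single wavelength at all.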

The visible light spectrum

It is this visible light that builds the colour picture in our minds, or indeed that which we take with a camera. An object will be perceived as a certain colour because it absorbs some colours (or wavelengths) and reflects others. The colours that are reflected are the ones we see. For example the dandelion in the image below looks yellow because the yellow petals in the flower have absorbed all wavelengths of colour except yellow, which is the only colour reflected. If only pure red light were shone onto the dandelion, it would appear black, because the red would be absorbed and there would be no yellow light to be reflected. Remember, light is simply a wave with a specific wavelength or a mixture of wavelengths; it has no colour in and of itself. So technically, there is really no such thing as yellow light, rather, there is light with a wavelength of about 590nm that appears yellow. Similarly, the grass in the image reflects green light.

The colours we see are reflected wavelengths that are interpreted by our visual system.

The colour we perceive will also differ based on the time of day, the lighting, and many other factors. Another thing to consider with light is its colour temperature. Colour temperature uses numerical values in kelvin (K) to place the colour characteristics of a light source on a spectrum ranging from warm (orange) to cool (blue). For example, natural daylight has a colour temperature of about 5000K, whereas sunrise/sunset light can be around 3200K. Light bulbs can range anywhere from 2700K to 6500K. A 2700K source is considered “warm” and emits proportionally more red wavelengths, whereas a 6500K source is said to be “cool white” since it emits more blue wavelengths.
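As an illustration, the warm/neutral/cool labelling could be sketched as below. The cut-offs (3500K and 5000K) are common lighting-trade conventions I have chosen for the example, not part of any standard:

```python
def describe_colour_temperature(kelvin: float) -> str:
    """Everyday label for a light source's colour temperature.
    Cut-offs of 3500 K and 5000 K are illustrative conventions only."""
    if kelvin < 3500:
        return "warm (orange/red bias)"
    if kelvin < 5000:
        return "neutral white"
    return "cool (blue bias)"

print(describe_colour_temperature(2700))  # warm (orange/red bias)
print(describe_colour_temperature(6500))  # cool (blue bias)
```

Notice the counterintuitive naming: a higher kelvin value looks “cooler” (bluer) even though it corresponds to a physically hotter radiating source.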

We see many colours as one, building up a picture.

Q: How many colours exist in the visible spectrum?
A: Technically, none. The visible spectrum is light, characterized by wavelength (or frequency), not colour per se. Colour is a subjective, conscious experience which exists in our minds. There may be an infinite number of wavelengths of light, but humans are limited in the number of colours they can distinguish.

Q: Why is the visible spectrum described in terms of 7 colours?
A: We tend to break the visible spectrum down into seven colours: red, orange, yellow, green, blue, indigo, and violet. Passing a ray of white light through a glass prism splits it into these constituent colours, but the divisions are somewhat arbitrary, as light comes as a continuum with smooth transitions between colours (it was Isaac Newton who first divided the spectrum into six, then seven, named colours). There are now several different schemes for categorizing the spectral colours; some modern ones drop indigo, or replace it with cyan.

Q: How is reflected light interpreted as colour?
A: Reflected light is interpreted by camera sensors, film, and the human eye alike by filtering it into the three primary colours: red, green, and blue (see: The basics of colour perception).