“This I did shortly afterwards, my basis being the theory of refraction. First I prepared a tube of lead, at the ends of which I fitted two glass lenses, both plane on one side while on the other side one was spherically convex and the other concave. Then placing my eye near the concave lens I perceived objects satisfactorily large and near, for they appeared three times closer and nine times larger than when seen with the naked eye alone.”
− Galileo Galilei published the initial results of his telescopic observations of the heavens in Starry Messenger (Sidereus Nuncius) in 1610
Vintage camera makers – The origins of Pentacon
Post-WW2 there were still a lot of camera companies in Germany, particularly in East Germany. In fact the heart of the German camera industry lay in Dresden, Jena and the surrounding area. Over the next decade, many of these companies were merged into a series of VEBs (Volkseigener Betrieb, or Publicly Owned Enterprise), culminating in VEB Pentacon.
On January 1, 1959, a series of Dresden camera manufacturers were merged to create the large state-owned VEB Kamera- und Kinowerke Dresden (KKWD). The company was a conglomerate of existing companies which produced a broad range of products and had numerous production sites. Joining them together meant production could be rationalized, yet cameras were still produced under their brand names, e.g. Contax, Welta, Altissa, Reflekta, Belfoca.
- VEB Kinowerke Dresden − Formerly VEB Zeiss Ikon
- VEB Kamera-Werke Niedersedlitz − This is where the Praktiflex, precursor of the Praktica, was invented; it also included VEB Belca-Werk, absorbed in 1957.
- VEB Welta-Kamera Werke Freital − This included the VEB Reflekta-Kamerawerk Tharandt and Welta-Kamera-Werk Freital (Reflekta II, Weltaflex and Penti).
- VEB Altissa Kamerawerke Dresden − Formerly Altissa-Camera-Werk Berthold Altmann, (including Altissa, Altiflex and Altix cameras).
- VEB Aspecta Dresden − Formerly Filmosto-Projektoren Johannes (including projectors, enlargers, lenses).
In 1964 the company was renamed VEB Pentacon Dresden Kamera- und Kinowerke. This was intended to provide a catchy name for the company (not forgetting that a lot of its products were intended for Western markets). Pentacon was already in use as the export name for the Contax D SLR, and was derived from PENTAprism and CONtax. Pentacon used the stylized silhouette of the Ernemann Tower (on the old Ernemann camera factory site, which had belonged to the former Zeiss Ikon) as its corporate logo. The company continued to produce good SLRs: the Praktica V (1964), the Praktica Nova with return mirror (1964), the Praktica Nova B with uncoupled light meter (1965), and the Praktica Mat, the first with TTL interior light metering (1965). In 1966 the 6×6 format Pentacon Six appeared, followed by the Praktica PL Nova I in 1967.
On January 2, 1968, the VEB was restructured, and more companies were added into the fold, including Ihagee Kamerawerk (which had remained independent until this point), and VEB Feinoptisches Werk Görlitz. The name became Kombinat VEB Pentacon Dresden.
- Ihagee Kamerawerk AG i.V. − Produced Exakta and Exa cameras.
- VEB Feinoptisches Werk Görlitz − Formerly Meyer-Optik Görlitz
Pentacon’s continuous expansion – the bundling of technical expertise and the concentration of production capacity – led to the incorporation of three more companies in 1980.
- VEB Kameratechnik Freital − Formerly the Freital camera maker Beier & Co., including Beirette cameras.
- VEB Mentor Großformatkamera − large format cameras
- VEB Certo Kamerawerk Dresden − folding cameras
On January 1, 1985, VEB Pentacon, which by now had absorbed most of the East German camera industry, was formally incorporated into Kombinat VEB Carl Zeiss Jena. This move amalgamated nearly the entire East German photography industry under the Zeiss umbrella. Only a few years separated this from the reunification of Germany, after which VEB Carl Zeiss Jena was reabsorbed into the Stiftung and completely restructured. VEB Pentacon was renamed PENTACON DRESDEN GmbH in July 1990, but by October it was being liquidated.
What is a mirrorless camera?
It is a camera without a mirror of course!
Next you’ll ask why a camera would ever need a mirror.
Over the last few years we have seen increased use of the term “mirrorless” to describe cameras. But what does that mean? Well, 35mm SLR (Single Lens Reflex) film cameras all contained a reflex mirror. The mirror redirects the light coming through the lens, by means of a pentaprism, to the optical viewfinder (OVF), where it is seen by the photographer. This basically means that the photographer sees what the lens sees. Without the mirror, the photographer would have to view the scene through an offset window (as in a rangefinder camera, which was technically mirrorless). When the photographer presses the shutter-release button, the mirror swings out of the way, temporarily blocking the light from reaching the viewfinder and instead allowing it to pass through the opened shutter onto the film. This is depicted visually in Figure 1.

When DSLR (Digital Single Lens Reflex) cameras appeared they used similar technology. The problem was that the mirror assembly, together with the digital electronics, made these cameras larger than traditional film SLRs. The concept of the mirrorless camera appeared in 2008, with the introduction of the Micro Four Thirds system. The first mirrorless interchangeable lens camera was the Panasonic Lumix DMC-G1. It replaced the optical path of the OVF with an electronic viewfinder (EVF), making it possible to remove the mirror completely, hence reducing the size of the camera. The EVF shows the image the sensor outputs, displaying it on a small LCD or OLED screen.

As a result of nixing the mirror, mirrorless cameras typically have fewer moving parts, and are slimmer than DSLRs because the distance between the lens and the sensor is shortened. The loss of the mirror also means that it is easier to adapt vintage lenses for use on digital cameras. Some people still prefer using an OVF, because it is optical and does not drain the battery the way an EVF does.
These days the only cameras still containing mirrors are usually full-frame DSLRs, and they are slowly disappearing, replaced by mirrorless cameras; basically all recent crop-sensor cameras are mirrorless. DSLR sales continue to decline. Looking only at interchangeable lens cameras (ILC), according to CIPA, mirrorless cameras in 2022 made up 68.7% of all ILC units (4.07M versus 1.85M) and 85.8% of shipped value (out of 5.927 million units shipped).
How natural light and meaningful darkness tell a story
Have you ever been somewhere and wanted to take a photograph, but there just isn’t much natural light, or perhaps the light comes from a single source, such as a window? Are you tempted to use a flash? Well don’t even think about it, because doing so takes away from the story of what you are photographing. Usually this sort of scenario manifests itself inside historical buildings where there just isn’t much natural light, and in context, no artificial light. Think anything before electric lighting – houses, castles, outbuildings, etc. Photography in historical buildings can be burdened by a lack of light – but that’s how they were when inhabited.
I photograph mostly using natural light. I don’t like using a flash, because ultimately there are certain qualities of natural light that enhance the colours and aesthetics of an object or scene. I find flash light too harsh, even when used with diffusers. But that’s just me. Below is an image from the attic space of a building at the Voss Folkemuseum in Norway. The room contained some beds and storage chests, so obviously it was used as a bedroom. The light streaming through the window bathes the room with enough light to show its use (typically windows would only have been installed where the light would be most concentrated, in this case south-facing). Notice the spinning wheel next to the window, where the light is most concentrated?

A lack of light often tells a story. It shows you what the space really was like for those who inhabited it long ago. Before the advent of electricity, most buildings relied on natural light during the day, and perhaps candle-light at night. Windows were small because glass was inherently expensive, and the more glass one had, the more heat that was lost in winter. If you were documenting a scene in a more archival manner, you might naturally flood the scene with artificial light of a sort, but historical photography should not be harshly lit.
Many historic buildings were built at a time when there was very little beyond natural light and candles. The light today is that very same light, and to bathe these spaces in artificial light would be unnatural. These nooks and crannies were never meant to be fully lit. Consider the images below, taken at different folk museums in Norway. The images are of cooking fires inside historic buildings, which had no openings except in the roof. The one from the Norsk Folkemuseum is Saga-Stau, a replica of an open-hearth house from about 3000 years ago.
On a bright sunny day, dark spaces are bathed in whatever available light is able to seep through every opening. In a dark space this light can appear harsh, blowing out window openings to the point where little of the scene beyond the window can be made out. Yet it also tends to produce shards of light puncturing the space. On overcast days the light can be more muted. In the image below of the living space, the light coming through the window is harsh enough to produce highlight clipping on both the window frame and part of the table. However the light adds a sense of Norwegian hygge to the entire scene. To light this scene with a flash would simply reduce it to a series of artifacts, rather than a slice of history.

Upgrading camera sensors – the megapixel phenomena
So if you are planning to purchase a new camera with “upgraded megapixels”, what makes the most sense? In many cases, people will tend to continue using the same brand or sensor. This makes sense from the perspective of existing equipment such as lenses, but sometimes an increase in resolution requires moving to a new sensor. There are of course many things to consider, but the primary ones when it comes to the images produced by a sensor are aggregate MP and linear dimensions (we will consider image pixels rather than sensor photosites). Aggregate MP is the total number of pixels in an image, whereas linear dimensions relate to the width and height of an image. Doubling the number of pixels in an image does not double the image’s linear dimensions: doubling the megapixels doubles the aggregate pixel count, but increases the linear dimensions by only √2, about 1.4 times. To double the linear dimensions of an image, the megapixels need to be quadrupled, so 24MP needs to ramp up to 96MP.
Table 1 shows some sample multiplication factors for aggregate megapixels (with the linear factors in parentheses) when upgrading megapixels, ignoring sensor size. The image sizes offer a sense of what is on offer, with the standard MP sizes offered by various manufacturers shown in Table 2.
| from ↓ to → | 16MP | 24MP | 30MP | 40MP | 48MP | 60MP |
|---|---|---|---|---|---|---|
| 16MP | − | 1.5 (1.2) | 1.9 (1.4) | 2.5 (1.6) | 3.0 (1.7) | 3.75 (1.9) |
| 24MP | − | − | 1.25 (1.1) | 1.7 (1.3) | 2.0 (1.4) | 2.5 (1.6) |
| 30MP | − | − | − | 1.3 (1.2) | 1.6 (1.3) | 2.0 (1.4) |
| 40MP | − | − | − | − | 1.2 (1.1) | 1.5 (1.2) |
| 48MP | − | − | − | − | − | 1.25 (1.1) |
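The factors in Table 1 follow directly from the square-root relationship between aggregate and linear resolution. A minimal sketch (the function name is mine, for illustration):

```python
import math

def upgrade_factors(old_mp: float, new_mp: float) -> tuple[float, float]:
    """Return (aggregate factor, linear factor) when moving between megapixel counts.

    The aggregate factor is the simple ratio of pixel counts; the linear
    factor (the width/height multiplier) is its square root.
    """
    aggregate = new_mp / old_mp
    linear = math.sqrt(aggregate)
    return aggregate, linear

# Doubling linear dimensions requires quadrupling the megapixels:
agg, lin = upgrade_factors(24, 96)
print(f"24MP -> 96MP: {agg:.1f}x pixels, {lin:.1f}x linear")  # 4.0x pixels, 2.0x linear

# A typical upgrade, 24MP -> 40MP, matching Table 1:
agg, lin = upgrade_factors(24, 40)
print(f"24MP -> 40MP: {agg:.2f}x pixels, {lin:.2f}x linear")  # 1.67x pixels, 1.29x linear
```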
Same sensor, more pixels
First consider a different aggregate of megapixels on the same size sensor – the example compares two Fuji cameras, both of which use an APS-C sensor (23.6×15.8mm).
Fuji X-H2 − 40MP, 7728×5152
Fuji X-H2S − 26MP, 6240×4160
So there are 1.53 times more pixels in the 40MP sensor; however from the perspective of linear resolution (comparing dimensions), there is only a 1.24 times differential. This means that horizontally (and vertically) there are only about one-quarter more pixels in the 40MP than in the 26MP. But because they are on the same size sensor, the only thing that really changes is the size of the photosites (known as the pitch). Cramming more photosites on a sensor means that the photosites get smaller. In this case the pitch reduces from 3.78µm (microns) in the X-H2S to 3.05µm in the X-H2. Not an incredible difference, but one that may affect things such as low-light performance (if you care about this sort of thing).
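The pitch values quoted here can be approximated from the sensor width and the horizontal photosite count – a quick sketch (the 23.6mm APS-C width is the figure given above):

```python
def pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Approximate photosite pitch in microns: sensor width divided by pixel count."""
    return sensor_width_mm * 1000 / horizontal_pixels

# Fuji X-H2 (40MP) versus X-H2S (26MP), both on a 23.6mm-wide APS-C sensor
print(round(pitch_um(23.6, 7728), 2))  # 3.05
print(round(pitch_um(23.6, 6240), 2))  # 3.78
```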
Larger sensor, same pixels
Then there is the issue of upgrading to a larger sensor. If we were to upgrade from an APS-C sensor to an FF sensor, then we typically get more photosites on the sensor. But not always. For example consider the following upgrade from a Fuji X-H2 to a Leica M10-R:
FF: Leica M10-R (41MP, 7864×5200)
APS-C: Fuji X-H2 (40MP, 7728×5152)
So there are very few differences from the perspective of either image resolution or linear resolution (dimensions). The big difference here is the photosite pitch. The Leica has a pitch of 4.59µm, versus the 3.05µm of the Fuji. From the perspective of photosite area, this means 21.1µm² versus 9.3µm², or about 2.25 times the light-gathering space on the full-frame sensor. How much difference this makes in the end picture is uncertain due to the multiplicity of factors involved, and the computational post-processing each camera applies. But it is something to consider.
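The 2.25× light-gathering figure is just the ratio of photosite areas; a quick sketch:

```python
def area_ratio(pitch_a_um: float, pitch_b_um: float) -> float:
    """Ratio of photosite areas, assuming square photosites (area = pitch squared)."""
    return (pitch_a_um / pitch_b_um) ** 2

# Leica M10-R (4.59µm pitch) versus Fuji X-H2 (3.05µm pitch)
print(round(area_ratio(4.59, 3.05), 2))  # ~2.26, i.e. roughly 2.25x the area
```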
Larger sensor, more pixels
Finally there is upgrading to more pixels on a larger sensor. If we were to upgrade from an APS-C sensor (Fuji X-H2S) to a FF sensor (Sony a7R V) with more pixels:
FF: Sony a7R V (61MP, 9504×6336)
APS-C: Fuji X-H2S (26MP, 6240×4160)
Like the first example, there are 2.3 times more pixels in the 61MP sensor; however from the perspective of linear resolution, there is only a 1.52 times differential. The interesting thing here is that the photosite pitch can actually remain almost the same: the pitch on the Fuji sensor is 3.78µm, versus 3.73µm on the Sony.
| brand | MFT | APS-C | Full-frame | Medium |
|---|---|---|---|---|
| Canon | − | 24, 33 | 24, 45 | − |
| Fuji | − | 16, 24, 26, 40 | − | 51, 102 |
| Leica | 17 | 16, 24 | 24, 41, 47, 60 | − |
| Nikon | − | 21, 24 | 24, 25, 46 | − |
| OM/Olympus | 16, 20 | − | − | − |
| Panasonic | 20, 25 | − | 24, 47 | − |
| Sony | − | 24, 26 | 33, 42, 50, 60, 61 | − |
Upgrading cameras is not a trivial thing, but one of the main reasons people do so is more megapixels. Of all the brands listed above, only one, Fuji, has taken the next step and introduced a medium format camera (apart from the dedicated medium format manufacturers, e.g. Hasselblad), allowing for increased sensor size and increased pixels, but not at the expense of photosite size. The Fujifilm GFX 100S has a medium format sensor, 44×33mm in size, providing 102MP with 3.76µm photosites. This means it provides approximately double the linear pixel dimensions of a Fuji 24MP APS-C camera (and yes, it costs almost three times as much, but there’s no such thing as a free lunch).
At the end of the day, you have to justify to yourself why more pixels are needed. They are only part of the equation in the acquisition of good images, and small upgrades like 24MP to 40MP may not actually provide much of a payback.
Vintage lens makers – Piesker (Germany)
Paul Piesker & Co. was founded in Berlin in 1936 as a manufacturer of lenses and lens accessories for reflex cameras. After WW2 the company (now in West Germany) focused on lenses with long focal lengths for the Exakta and for cameras with M42 mounts. Like its competitors Astro-Berlin and Tewe, Piesker lenses don’t seem to be very common, at least not in Europe. Most of the lenses produced seem to have been for the US market, where they appeared in ads in Popular Photography in the mid-1950s. The lenses can also be found under the “Kalimar” trademark, and rebranded for Sterling Howard under the trademarks “Astra” and “Voss” (in addition to other brands: Picon, Votar, Telegon). Production at Piesker was discontinued in 1964.

Are (camera-based) RGB histograms useful?
When taking an image on a digital camera, we are often provided with one or two histograms – the luminance histogram, and the RGB histogram. The latter is depicted in various forms: as a single histogram showing all three channels of the RGB image, or as three separate histograms, one for each of R, G, and B. So how useful is the RGB histogram on a camera? In the context of improving image quality, RGB histograms provide very little in the way of value. Some people might disagree, but fundamentally, adjusting a picture based on the individual colour channels in-camera is not realistic (and enthusiasm for doing so usually stems from a misunderstanding of how colour spaces work).
Consider the image example shown in Figure 1. This 3024×3024 pixel image has 9,144,576 pixels. On the left are the three individual RGB histograms, while on the right is the combined RGB histogram with the R, G, and B histograms overlapped. As I have mentioned before, there is very little information which can be gleaned by looking at these two-dimensional RGB histograms – they do not really indicate how much red (R), green (G), or blue (B) there is in an image, because the three components only produce useful information when taken together. This is because RGB is a coupled colour space, where luminance and chrominance are intertwined. The combined RGB histogram is especially poor from an interpretation perspective, because it just muddles the information.

But to understand this better, we need to look at what information is contained in a colour image. An RGB colour image can be conceptualized as being composed of three layers: a red layer, a green layer, and a blue layer. Figure 2 shows the three layers of the image in Figure 1. Each layer represents the values associated with red, green, and blue. Each pixel in a colour image is therefore a triplet of values – a red, a green, and a blue, or (R,G,B) – which together form a colour. Each of the R, G, and B components is essentially an 8-bit (grayscale) image, which can be viewed in the form of a histogram (also shown in Figure 2, and nearly always false-coloured with the appropriate red, green or blue).

To understand a colour image further, we have to look at the RGB colour model, the method used in most image formats, e.g. JPEG. The RGB model can be visualized as a cube, formed using the R, G, and B data. Each pixel in an image has an (R,G,B) value which provides a coordinate in the 3D space of the cube (which contains 256³, or 16,777,216 colours). Figure 3 shows two different ways of viewing image colour in 3D. The first is an all-colours view, which simply indicates all the colours contained in the image, without frequency information – it gives an overall indication of how colours are distributed. In the case of the example image, there are 526,613 distinct colours. The second cube is a frequency-based 3D histogram, grouping like data together in “bins”; in this example the 3D histogram has 8³ = 512 bins (which is honestly easier to digest than 16 million-odd bins). One pixel from the image, with the RGB value (211,75,95), is shown together with its location in the 3D histograms.
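The binned 3D histogram is straightforward to build; below is a sketch using NumPy (assumed available), with 8 bins per channel giving 512 bins:

```python
import numpy as np

def rgb_histogram_3d(pixels, bins_per_channel=8):
    """Bin an (N, 3) array of 8-bit RGB pixels into a 3D frequency histogram."""
    hist, _ = np.histogramdd(
        pixels,
        bins=(bins_per_channel,) * 3,
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist  # shape (8, 8, 8), i.e. 512 bins

# The example pixel (211,75,95) lands in bin (6,2,2), since 256/8 = 32 values per bin
hist = rgb_histogram_3d(np.array([[211, 75, 95]]))
print(np.argwhere(hist > 0))  # [[6 2 2]]
```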

In either case, visually you can see the distribution of colours. The same cannot be said of many of the 2D representations. Let’s look at how the colour information pans out in 2D form. The example image pixel in Figure 3 at location (2540,2228) has the RGB value (211,75,95). If we look at this pixel in the context of the red, green, and blue histograms, it exists in different bins (Figure 4). These 2D histograms provide no real context on the distribution of colours. All they do is show the distribution of red, green, and blue values, from 0 to 255. What the red histogram tells us is that at value 211 there are 49,972 pixels in the image whose first value of the triplet (R) is 211. It may also tell us that the contribution of red appears to be constrained to the upper and lower bounds of the histogram (as shown by the two peaks). There is only one pure value of red, (255,0,0). Change the value from (211,75,95) to (211,75,195) and we get a purple colour.
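The decoupling is easy to demonstrate: shuffle each channel independently and the three 2D histograms stay identical, while the actual colours in the image change completely. A sketch (NumPy assumed; synthetic pixels stand in for a real image):

```python
import numpy as np

rng = np.random.default_rng(42)
pixels = rng.integers(0, 256, size=(1000, 3), dtype=np.uint8)  # fake image: N x (R,G,B)

# Permute each channel on its own -- the per-channel value counts cannot change
shuffled = np.stack([rng.permutation(pixels[:, c]) for c in range(3)], axis=1)

for c in range(3):
    h1, _ = np.histogram(pixels[:, c], bins=256, range=(0, 256))
    h2, _ = np.histogram(shuffled[:, c], bins=256, range=(0, 256))
    assert (h1 == h2).all()  # identical R, G and B histograms...

# ...yet the colour triplets, and hence the image, are now completely different
print(np.array_equal(pixels, shuffled))
```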

The information in the three histograms is essentially decoupled, and does not provide a cohesive interpretation of the colours in the image – for that you need a 3D view of sorts. Modifying one or more of the individual histograms will just lead to a colour shift in the image, which is fine if that is what you want to achieve. Should you view the colour histograms on a camera viewscreen? I honestly wouldn’t bother. They are more useful in an image manipulation app, but not in the confines of a small screen – stick to the luminance histogram.
What is a micron?
The nitty-gritty of digital camera sensors takes things down to the micron. For example the width of photosites on a sensor is measured in microns, more commonly represented using the unit µm, e.g. 3.71µm. But what is a micron?

Basically a micron is a micrometre, a metric unit of length equal to 0.001mm (or 1/1000 of a millimetre). I mean it’s small, like really small. To put it into perspective: table salt has a particle size of 120µm, human hair has an average diameter of 70µm, milled flour can be anywhere in the range 25-200µm, and spider silk is a measly 3µm.
To put it another way, for a photosite that is 3.88µm in size, we could fit 257 of them in just 1mm of space.
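The arithmetic is simple enough to sketch (the function name is mine, for illustration):

```python
def photosites_per_mm(pitch_um):
    """How many photosites of a given pitch (in microns) fit into 1mm (1000 microns)."""
    return int(1000 / pitch_um)

print(photosites_per_mm(3.88))  # 257
```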
What is a photosite?
When people talk about cameras, they invariably talk about pixels, or rather megapixels. The new Fujifilm X-S20 has 26 megapixels, meaning the image produced by the camera will contain 26 million pixels. But the sensor itself does not have any pixels; the sensor has photosites.
The job of photosites is to capture photons of light. After a bunch of processing, the data captured by each photosite is converted into a digital signal, and processed into a pixel. All the photosites on a sensor contribute to the resultant image. There are two numbers used to define the number of photosites on a sensor. The first is the physical sensor resolution, which is the actual number of physical photosites found on the sensor. For example on the Sony a7RV (shown below), there are 9566×6377 physical photosites (61MP). However not all the photosites are used to create an image – those that are form the maximum image resolution, i.e. the maximum number of pixels in an image. For the Sony a7RV this is 9504×6336 photosites (60.2MP). This is sometimes known as the effective number of photosites.

There are two major differences between photosites and pixels. Firstly, photosites are physical entities; pixels are digital entities. Secondly, while photosites have a size, which differs based on the sensor type and the number of photosites on the sensor, pixels are dimensionless. For example each photosite on the Sony a7RV has a pitch (width) of 3.73µm, and an area of 13.91µm².
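The numbers above are easy to verify – megapixels come from the photosite grid, and area from the pitch. A sketch (the function names are mine):

```python
def megapixels(width, height):
    """Megapixel count from a grid of photosites (or pixels)."""
    return round(width * height / 1_000_000, 1)

def photosite_area(pitch_um):
    """Approximate photosite area in square microns, assuming a square photosite."""
    return round(pitch_um ** 2, 2)

# Sony a7RV: physical versus effective photosite counts, and photosite area
print(megapixels(9566, 6377))  # 61.0 (physical)
print(megapixels(9504, 6336))  # 60.2 (effective)
print(photosite_area(3.73))    # 13.91
```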
The Eyes of Eagles (and why Zeiss used them to advertise its lenses)
It was Zeiss who came up with the “the eagle eye of your camera” slogan in the 1930s to advertise their lenses (or in German “Das Adlerauge Ihrer Kamera” – eagle eye being Adlerauge) [1]. Of course they were mostly talking about the Tessar series of lenses.
“The objective should be as the eagle’s eye, whose acuity is proverbial. Where its glance falls, every finest detail is laid bare. Just as the wonderful acuity of the eagle’s eye has its origin, partly in the sharpness of the image produced by its cornea and lens, and partly in the ability of the retina – far exceeding that of man’s vision – to resolve and comprehend the finest details of this delicate image, so, for efficiency, must the camera be provided on the one hand with a ‘retina’ (the plate or film) of the highest resolving power – a fine grain emulsion – and on the other hand with an objective which can produce the needle sharp picture of the eagle’s lens and cornea.”
The Eagle Eye of your Camera (1932)
Zeiss leaned heavily on this simile to describe its lenses. A lens must have the sharpness of an eagle’s eye, and the ability to admit a large amount of light – sharpness and rapidity over a wide field of view – the Zeiss Tessar. While Leica named their lenses to indicate their widest aperture, Zeiss instead opted to name their lenses for the design used. Indeed the Tessar came in numerous focal length/aperture combinations, from a 3¾cm f/2.8 to a 60cm f/6.3.
The Tessar is an unsymmetrical system of lenses: 7 different curvatures, 4 types of glass, 4 lens thicknesses, and 2 air separations – 17 parameters which can be varied. Zeiss went to great lengths to disseminate the message about Tessar lenses:
- sharp, flare-free definition
- great rapidity (allowing short instantaneous exposures)
- exceptional freedom from distortion (obviating any objectionable curvature)
- good colour correction
- compact design (so that light falling off near the edge is reduced to a minimum)
- sufficient separation of the components of the lens (to allow a between lens shutter)
- the use of types of glass as free as possible of colour
- reduction to the minimum of the number of lenses, and particularly of glass-air surfaces

It is then not surprising that Zeiss chose to compare the lens to an eagle’s eye. The eagle is considered to be the pinnacle of visual evolution; it is said eagles can spot a rabbit in a field while soaring at 10,000 feet (1.9 miles or 3km). It was Aristotle (ca. 350BCE) who, in his History of Animals, pointed out that “the eagle is very sharp-sighted”. The problem however is that it’s not really possible to compare a simple lens against the eye of a living organism. Zeiss was really comparing the Tessar against the lens of an eagle’s eye – or rather the Tessar and the human eye behind it, because the camera lens is just one part of the equation of analog picture taking. So how does an eagle eye compare to a human one?
It’s hard to really compare eyes from different species because they are all designed to do different things. In all likelihood, human eyes have evolved over time as our environment changed. In birds, unlike humans, each eye looks outwards at a different scene, and the overlap of the visual fields of the two eyes, i.e. the binocular region, is relatively small. This is typically less than 60° in birds, versus 120° in humans, and can be as narrow as 5-10° in some species. Because of this a bird’s total visual field is quite extensive, with just a narrow blind region behind the head. Eagles have a highly developed sense of sight which allows them to easily spot prey. They have 20/5 vision, compared to the average human’s 20/20: an eagle 20 feet from an object can see the detail a human would need to be 5 feet away to see. They have fixed eye sockets, angled 30° from the mid-line of the face, giving them a 340° view. Many also have violet-sensitive visual systems, i.e. the ability to see ultraviolet light and detect more colours than human eyes can.

The first thing to consider may be the size of the eye. We will pick one eagle to compare against human vision, and the best option is the (European) Golden Eagle, because it is quite common in Germany. The average weight of a Golden Eagle is 6.1kg, versus the average weight of a European (human) at 70.8kg. If we work on the principle that an eagle’s eye is similar in weight to a human eye (ca. 7.5g), then an eagle’s eye comprises 0.12% of its body weight, versus 0.01% for a human. So for a human eye to be equivalent in mass, based on the eye-to-body weight ratio, it would need to weigh 85g. But this is really an anecdotal comparison; the bigger picture lies in the construction of the eye.
One of the reasons birds of prey have such incredible eyesight is that their deeper fovea accommodates a greater number of photoreceptors (cones). The central fovea in an eagle’s eye has 1 million cones per square millimetre, compared to 200,000 in a human eye. Eagles achieve this increased resolution by reducing the space between their photoreceptors. Due to the physics of light, the absolute minimum separation between cones for an eye to function correctly is 2µm (0.002mm). As the space between the photoreceptors decreases, so too does the minimum size of detail that can be resolved.
The other thing of relevance is that while humans have one fovea, eagles generally have two – a central fovea used for hunting (cone separation 2µm, versus human cone separation of 3µm), and a secondary fovea which provides a high resolution field of view to the side of the head. So, like a camera sensor, more cones means better resolution. For context, Robert Shlaer [2] suggested that the resolution of a Golden Eagle’s eye may be anywhere from 2.4 to 2.9 times that of a human, with the Martial Eagle somewhere between 3.0 and 3.6 times. The spatial resolution of a Wedge-tailed Eagle is between 138-142 cycles per degree [3], while that of a human is a mere 60. Their foveae are also distinctly shaped, deep and convex, as opposed to the rounded and shallow single fovea of human eyes. In a 1978 article in the journal Nature, Snyder and Miller [4] proposed that the unique shape of the foveae found in some birds of prey may act like a telephoto lens, magnifying their vision, which is perhaps why these feathered predators can spot food from so far up in the sky. Like humans, eagles can change the shape of their lens, but in addition they can also change the shape of their corneas, allowing more precise focusing and accommodation than humans.
But Zeiss themselves acknowledged the limitations of the simile: the fact that an eagle can quickly turn its head to allow for viewing in any direction, and the fact that its retina is curved, not flat. From the perspective of resolution the ads rang true; the other aspects of an eagle’s vision did not. Yes, telephoto lenses based on the Tessar design could certainly see further than a human, and given the right lens and film could see into the violet spectrum, but Zeiss’s claim was really about finding a way to describe its lenses in a provocative manner, one which would ultimately sell lenses.
Further reading:
- Zeiss Brochure: “The Eagle Eye of your Camera”, Carl Zeiss, Jena (1932)
- Robert Shlaer, “An Eagle’s Eye: Quality of the Retinal Image”, Science, 176, pp. 920-922 (1972)
- L. Reymond, “Spatial visual acuity of the eagle Aquila audax: A behavioural, optical and anatomical investigation”, Vision Research, 25(10), pp. 1477-1491 (1985)
- A.W. Snyder, W.H. Miller, “Telephoto lens system of falconiform eyes”, Nature, 275, pp. 127-129 (1978)





