Viewing distances, DPI and image size for printing

When it comes to megapixels, the bottom line might be how an image ends up being used. If viewed on a digital device, be it an ultra-resolution monitor or TV, there are limits to what you can see. To view an image on an 8K TV at full resolution, we would need a 33MP image. However, any smaller device will happily work with a 24MP image and still not display all of its pixels. Printing, however, is another matter altogether.

The standard for quality in printing is 300dpi, or 300 dots-per-inch. If we equate a pixel to a dot, then we can work out the maximum size at which an image can be printed. 300dpi is generally the “standard” because it is the resolution most commonly used. To put this into perspective, at 300dpi, or 300 dots per 25.4mm, each pixel printed on a medium would be 0.085mm across, or about as thick as 105 GSM weight paper. That means a dot area of roughly 0.007mm². For example, a 24MP image containing 6000×4000 pixels can be printed to a maximum size of 13.3×20 inches (33.8×50.8cm) at 300dpi. The print sizes for a number of different sized images printed at 300dpi are shown in Figure 1.

Fig.1: Maximum printing sizes for various image sizes at 300dpi

The thing is, you may not even need 300dpi. At 300dpi the minimum viewing distance is theoretically 11.46”, whereas dropping it down to 180dpi means the viewing distance increases to 19.1” (but the printed size of an image can increase). In the previous post we discussed the math behind visual acuity. Knowing that a print will be viewed from a minimum of 30” away allows us to determine that the optimal DPI required is only 115. Now if we have a large panoramic print, say 80″ wide, printed at 300dpi, the calculated minimum viewing distance is ca. 12″ – but it is impossible to view the entire print from only one foot away. So how do we calculate the optimal viewing distance, and then use it to calculate the actual number of DPI required?

The number of megapixels required for a print can be guided in part by the viewing distance, i.e. the distance from the centre of the print to the eyes of the viewer. The gold standard for calculating the optimal viewing distance involves the following process:

  • Calculate the diagonal of the print size required.
  • Multiply the diagonal by 1.5 to calculate the minimum viewing distance.
  • Multiply the diagonal by 2.0 to calculate the maximum viewing distance.

For example, a print which is 20×30″ will have a diagonal of 36″, so the optimal viewing distance ranges from 54 to 72 inches (137-182cm). This means that we are no longer reliant on 300dpi for printing. Now we can use the equations set out in the previous post to calculate the minimum DPI for a viewing distance. For the example above, the minimum DPI required is only 3438/54 ≈ 64dpi. This implies that the image size required to create the print is only (20×64)×(30×64) ≈ 2.5MP. Figure 2 shows a series of sample print sizes, viewing distances, and minimum DPI (calculated using dpi = 3438/min_dist).

Fig.2: Viewing distances and minimum DPI for various common print sizes
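The arithmetic above is easy to wrap in a few lines of Python (a minimal sketch; the function name is just for illustration):

```python
import math

def print_requirements(width_in, height_in):
    """Optimal viewing-distance range for a print, plus the minimum
    DPI and megapixels needed when viewed from the minimum distance."""
    diagonal = math.hypot(width_in, height_in)   # Pythagorean theorem
    min_dist = 1.5 * diagonal                    # minimum viewing distance (in)
    max_dist = 2.0 * diagonal                    # maximum viewing distance (in)
    min_dpi = 3438 / min_dist                    # acuity-limited resolution
    megapixels = (width_in * min_dpi) * (height_in * min_dpi) / 1e6
    return min_dist, max_dist, min_dpi, megapixels

# The 20x30" print from the example above:
lo, hi, dpi, mp = print_requirements(20, 30)
print(f"{lo:.0f}-{hi:.0f} in, minimum {dpi:.0f} dpi, {mp:.1f} MP")
# -> 54-72 in, minimum 64 dpi, 2.4 MP
```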

Now printing at such a low resolution likely has more limitations than benefits; for example, there is no guarantee that people will view the panorama from a set distance. So there is likely a lower bound to the practical DPI required, probably around 180-200dpi, because nobody wants to see pixels. For the 20×30″ print, boosting the DPI to 200 would only require a modest 24MP image, whereas a full 300dpi print would require a staggering 54MP image! Figure 3 simulates a 1×1″ square representing various DPI configurations as they might be seen on a print. Note that even at 120dpi the pixels are visible – the lower the DPI, the greater the chance of “blocky” features when viewed up close.

Fig.3: Various DPI as printed in a 1×1″ square

Are the viewing distances realistic? As an example, consider viewing a 36×12″ panorama. The diagonal for this print is 37.9″, so the minimum viewing distance works out to 57 inches. This example is illustrated in Figure 4. Now if we work out the actual viewing angle this creates, it is about 35°, which is in the same ballpark as 40°. Why is this important? Well, THX recommends that the “best seat-to-screen distance” (for a digital theatre) is one where the viewing angle approximates 40 degrees, and it’s probably not much different for pictures hanging on a wall. The minimum resolution for the panoramic print viewed at this distance would be about 60dpi, but it can be printed at 240dpi with an input image size of about 25MP.

Fig.4: An example of viewing a 36×12″ panorama
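As a quick check of that viewing angle: the horizontal angle subtended by a print of width w seen from distance d is 2·atan(w/2d). A minimal sketch (the function name is just for illustration):

```python
import math

def viewing_angle(width_in, distance_in):
    """Horizontal angle (degrees) subtended by a print of the given
    width when viewed from the given distance."""
    return math.degrees(2 * math.atan((width_in / 2) / distance_in))

diag = math.hypot(36, 12)     # 36x12" panorama: ~37.9" diagonal
min_dist = 1.5 * diag         # ~57" minimum viewing distance
angle = viewing_angle(36, min_dist)
print(f"{angle:.0f} degrees")  # -> 35 degrees
```

This is in the same ballpark as the THX 40° recommendation.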

So choosing a printing resolution (DPI) is really a balance between: (i) the number of megapixels an image has, (ii) the size of the print required, and (iii) the distance from which the print will be viewed. For example, a 24MP image printed at 300dpi allows a maximum print size of 13.3×20 inches, which has an optimal viewing distance of 3 feet. By reducing the DPI to 200, we get an increased print size of 20×30 inches, with an optimal viewing distance of 4.5 feet. It is an interplay of many differing factors, including where the print is to be viewed.

P.S. For small prints, such as 5×7 and 4×6, 300dpi is still the best.

P.P.S. For those who can’t remember how to calculate the diagonal, it is done using the Pythagorean theorem. So for a 20×30″ print, this would mean:

diagonal = √(20²+30²)
         = √1300
         = 36.06

The math behind visual acuity

The number of megapixels required to print something, or to view it on a television, is ultimately determined by the human eye’s visual acuity, and the distance from which the object is viewed. For someone with average vision (i.e. 20/20), acuity is defined as one arcminute, or 1/60th of a degree. For comparison, a full moon in the sky appears about 31 arcminutes (1/2 a degree) across (Figure 1).

Fig.1: Looking at the moon

Now generally, some descriptions skip from talking about arcminutes straight to how the distance between an observer and an object can be calculated given the resolution of the object. For example, the distance (d, in inches) at which the eye reaches its resolution limit is often calculated using:

d = 3438 / h

where h is the resolution, which can be ppi for screens or dpi for prints. So if h=300, then d=11.46 inches. Calculating the optimal viewing distance involves a magic number: 3438. Where does this number come from? Few descriptions actually give any insights, but we can start with some basic trigonometry. Consider the diagram in Figure 2, where h is now the pixel pitch (the physical size of one pixel, i.e. the inverse of the resolution), d is the viewing distance, and θ is the angle of viewing.

Fig.2: Viewing an object

Now we can use the basic equation for calculating an angle θ, given the lengths of the opposite and adjacent sides:

tan(θ) = opposite/adjacent

To apply this formula to the diagram in Figure 2, we use the half-angle θ/2 and the half-pitch h/2:

tan(θ/2) = (h/2)/d

So now, we can solve for h.

d tan(θ/2) = h/2
2d⋅tan(θ/2) = h

Now if we use visual acuity as 1 arcminute, this is equivalent to 0.000290888 radians. Therefore:

h = 2d⋅tan(0.000290888/2) 
  = 2d⋅0.000145444

So for d=24”, h=0.00698 inches, or, converted to mm (by multiplying by 25.4), h=0.177mm. To convert this pitch into PPI/DPI, we simply take the inverse: 1/0.00698 ≈ 143 ppi/dpi. How do we turn this equation into one containing the value 3438? Writing the resolution as r = 1/h, we can invert the previous equation:

r = 1/(2d⋅0.000145444)
  = 1/d ⋅ 1/2 ⋅ 1/0.000145444
  = 1/d ⋅ 1/2 ⋅ 6875.49847
  = 1/d ⋅ 3437.749
  ≈ 3438/d
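We can verify the 3438 constant numerically; a minimal Python sketch:

```python
import math

theta = math.radians(1 / 60)   # one arcminute of visual acuity, in radians

# Pixel pitch resolvable at distance d: h = 2*d*tan(theta/2).
# Inverting, the resolution limit is r = 1/h = constant/d.
constant = 1 / (2 * math.tan(theta / 2))
print(round(constant))          # -> 3438

d = 24                                   # viewing distance in inches
pitch = 2 * d * math.tan(theta / 2)      # resolvable pixel pitch (inches)
print(round(1 / pitch))                  # -> 143 (ppi/dpi)
```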

So for a poster viewed at d=36″, the minimum resolution works out to 3438/36 ≈ 95dpi. The viewing distance can likewise be calculated by rearranging the equation to:

d = 3438 / h

As an example, consider the Apple Watch Series 8, whose screen has a resolution of 326ppi. Performing the calculation gives d = 3438/326 = 10.55”. So the watch should be held at least 10.55” from one’s face. For a poster printed at 300dpi, d=11.46”, and for a poster printed at 180dpi, d=19.1”. This is independent of the size of the poster, depending only on the printing resolution, and represents the minimum distance at which a given resolution suffices – only if you move closer do you need a higher resolution. This is why billboards can be printed at a low resolution, even 1dpi: viewed from a great enough distance, it doesn’t really matter how low the resolution is.

Note that there are many different variables at play when it comes to acuity; these calculations provide the simplest-case scenario. For eyes outside the normal range, visual acuity is different, which changes the calculations (i.e. the value of θ in radians). Typical arcminute values are: 0.75 (20/15), 1.5 (20/30), 2.0 (20/40), etc. There are also factors such as lighting, and how eye prescriptions modify acuity, to take into account. Finally, it should be added that these acuity calculations only take into account what is directly in front of our eyes, i.e. the narrow, sharp vision provided by the foveola in the eye – all other parts of a scene will have slightly less acuity, decreasing outward from this central point.

Fig.3: At 1-2° the foveola provides the greatest amount of acuity.

p.s. The same system can be used to calculate ideal monitor and TV sizes. For a 24″ viewing distance, the pixel pitch is h= 0.177mm. For a 4K (3840×2160) monitor, this would mean 3840*0.177=680mm, and 2160*0.177=382mm which after calculating the diagonal results in a 30.7″ monitor.
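The monitor estimate can be reproduced directly (a sketch; the 24″ viewing distance is the assumption from above):

```python
import math

theta = math.radians(1 / 60)                      # one arcminute
pitch_mm = 25.4 * 2 * 24 * math.tan(theta / 2)    # resolvable pitch at 24": ~0.177mm

w = 3840 * pitch_mm   # 4K monitor width in mm
h = 2160 * pitch_mm   # 4K monitor height in mm
diag_in = math.hypot(w, h) / 25.4
print(f"{diag_in:.1f} inch monitor")
```

This prints roughly a 30.8″ diagonal; the small difference from the 30.7″ figure above comes from rounding the pitch to 0.177mm.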

p.p.s. If d is measured in centimetres (with h still in dots per inch), the formula becomes: d = 8733 / h (i.e. 2.54 × 3438).

the histogram exposed (vi) – multipeak

This series of photographs and their associated histograms covers images with multipeak histograms, which come in many differing forms. None of these images is perfect, but they illustrate the fact that sometimes the perfect image is not possible given the constraints of the scene – shadows, bright skies and haze are sometimes just unavoidable.

Histogram 1: A church and hazy hills

This photograph was taken in Locarno, Switzerland; the church in the foreground is the Madonna del Sasso. The histogram is of the multipeak variety, with few highlights. The left-most hump (①) represents the majority of the darker colours in the foreground, e.g. vegetation, and the parts of the building in shadow. The remaining two peaks are in the midtones, and represent various portions of the sky (③) as well as the lake and the hazed-over mountains in the distance (②). Finally, a small amount of highlights (④) represents the clouds and brightly lit portions of the church.

Fujifilm X10 (12MP): 7.1mm; f/5; 1/850

Histogram 2: Light hillside, dark forest

This is the Norwegian countryside, taken from the Bergen Line train. It is a well contrasted image, with only one core patch of shadow (①), behind the trees in the bottom left (there are a few other shadows in foreground objects such as the trees on the right). The midtones, ②, represent the rest of the landscape, with the lightest midtones and highlights composing the sky, ③.

Olympus E-M5II (12MP): 12mm; f/5; 1/400

Histogram 3: Gray station

This photograph is of the train station in Voss, Norway. It is an image with a good distribution of intensities, with four dominant peaks. The first peak, ①, represents the dark vegetation and metal railings in the scene, overlapping somewhat into the midtones. The central peak (②), which is in the midtones, represents the light green pastures and the large segments of asphalt on the station platform. The third peak, ③, which transitions into the highlights, mostly covers the light concrete areas. Finally there is a fourth peak, ④, which is really just a clipping artifact related to the small region of white sky.

Olympus E-M5II (12MP): 40mm; f/5; 1/320

Are all 50mm lenses equivalent?

So a 50mm lens is a 50mm lens, is a 50mm lens, right? Well, that’s not exactly true. The focal length of a 50mm lens is always 50mm, regardless of the system it is attached to: a 50mm lens on an SLR has the same focal length as a 50mm lens on a DSLR, or on an APS-C or medium-format camera. What is different is how they behave in terms of angle-of-view (AOV) with respect to a particular sensor size.

Fig.1: 50mm lenses all have the same focal length

Table 1 shows the behavioural differences of 50mm lenses on various systems. For example, a 50mm lens on a 35mm rangefinder camera has a (horizontal) AOV of 39.6°, whereas on an APS-C camera it is 26.6°. This is because, due to the crop-factor, a 50mm lens on an APS-C sensor is equivalent to a 75mm lens on a full-frame camera (from an AOV perspective). To get a 39.6° equivalent AOV on an APS-C camera, you need roughly a 33mm lens; the closest available is a 35mm lens (35mm×1.5≈52.5mm full-frame equivalent).

System                      AOV (diag)   AOV (hor)   Crop-factor   FF equiv.
16mm cine                   14.5°        11.7°       ×3.4          170mm
1″ sensor                   18.2°        14.6°       ×2.7          135mm
Micro-Four-Thirds           24.5°        19.6°       ×2.0          100mm
APS-C                       31.7°        26.6°       ×1.5          75mm
film SLR                    46.8°        39.6°       ×1.0          50mm
film rangefinder            46.8°        39.6°       ×1.0          50mm
digital SLR (full-frame)    46.8°        39.6°       ×1.0          50mm
digital Medium (44×33mm)    57.4°        47.5°       ×0.8          40mm
6×7 (72×56mm)               84°          67.6°       ×0.5          25mm
4×5″                        117°         104°        ×0.27         13.5mm
Table 1: Differences in 50mm lenses used on different systems
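The AOVs in Table 1 follow from the standard formula AOV = 2·atan(d/2f), where d is the sensor dimension and f the focal length. A minimal sketch (the assumed APS-C width of ~23.6mm matches the table’s 26.6° figure):

```python
import math

def aov_deg(focal_mm, sensor_dim_mm):
    """Angle of view (degrees) across one sensor dimension: 2*atan(d/2f)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

# A 50mm lens: horizontal AOV on full-frame (36mm wide)
# and on an APS-C sensor (~23.6mm wide)
print(f"{aov_deg(50, 36):.1f}")    # -> 39.6
print(f"{aov_deg(50, 23.6):.1f}")  # -> 26.6
```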

Note that because a 50mm lens on a Micro-Four-Thirds camera behaves like a 100mm FF lens, most manufacturers won’t sell a native 50mm MFT lens, opting instead for the 50mm FF equivalent – the 25mm. That’s because a 25mm MFT lens provides the “normal” angle-of-view, just like a 35mm APS-C lens, or a 100mm 6×7 lens. A vintage 50mm SLR lens used on an APS-C camera will behave as if it were designed for APS-C, i.e. it will have a horizontal AOV of around 26.6°. The remaining 6.5° on either side is simply cut off by the smaller sensor (as shown in Figure 2).

Fig.2: A visualization of what a 50mm lens sees on different sensors.

The same applies to all focal lengths: a 35mm is always 35mm, an 85mm is always 85mm. It’s just their behaviour, or rather their “view on life”, that changes.

The camera versus the eye

There are some similarities between the camera and the human eye. Both have a lens, a shutter, and light-sensitive material. In the eye, the image is formed by a combination of the cornea, the aqueous humor, and the lens. The eyelid is the shutter, and the retina is the light-sensitive material. The other similarity is that both cameras and the eye control image brightness by means of an iris diaphragm. In the eye the amount of light is involuntarily controlled by the opening and closing of the iris. A camera controls the light transmitted through the lens by means of the aperture diaphragm.

But comparing the eye and the camera with one another by stressing only the similarities in their construction has confused the understanding of photography, because it disregards the differences in their function. These differences make the eye superior to the cameras in some instances, and the camera superior to the eye in others.

  • Human vision is binocular and stereoscopic; that of the camera is monocular. This is why photographs lack the sense of depth seen through human eyes: a camera sees a scene without depth, and a photograph appears flat.
  • The eye’s view of the world is subjective, viewing what the mind is interested in, has a wish to see, or is forced to see. The camera sees objectively, recording everything in its field of view. This is the reason so many pictures are just pictures, full of superfluous subject matter.
  • The eye is sensitive to colour. Cameras and different lenses can see colour differently, and black-and-white photography sees colour as shades of gray (the transformation of colour to gray is also varied).
  • The eye does not normally perceive minor changes in the colour of light. Both film and sensors are sensitive to such small changes, and the eye’s failure to detect them manifests itself in photographs with what the eye considers “unnatural” colours.
  • The eye cannot “store” and combine bracketed images, or stay open for an amount of time and “add up light”. The dimmer the light, the less we see, no matter how long we look at a scene. Both film and sensors can do this – and this ability to accumulate light impressions makes images in low light possible – at levels where nothing can be seen by the human eye.
  • The eye is sensitive only to that part of the electromagnetic spectrum which is known as light. Photographic films and sensors can be sensitive to other types of radiation, e.g. infrared, ultraviolet, and x-rays.
  • The focal length of the eye is fixed, and as such is limited. A camera can be equipped with lenses of almost any focal length.
  • The angle of view of the eyes is fixed, but lenses range in angle from a few degrees to 220°. The monocular AOV of an eye is about 160° wide by 135° high, whereas the binocular AOV is 200°×135°, with an overlap of 120°.
  • Human vision functions to see 3D things in a rectilinear perspective. Most lenses produce a perspective that is cylindrical or spherical.
  • The focusing ability of the eye is severely limited with respect to close distances. Anything closer than about 25cm can usually only be seen indistinctly, with objects perceived less and less clearly the smaller they are, to the point where they become invisible to the naked eye. The camera, with the right accessories, has none of these restrictions.
  • To the human eye, everything appears sharp at the same time (actually an illusion created by the eye’s ability to refocus almost instantaneously). A camera can produce images with any degree of unsharpness, or images in which a predetermined zone is rendered sharp while everything else is out-of-focus.
  • The eye can adjust almost instantaneously to changes in illumination, contracting and enlarging the iris as it views light and dark scenes respectively. The camera’s “iris”, its diaphragm, can only be adjusted for overall brightness. The contrast range of the human eye is therefore much wider than that of a camera: on film or a sensor, a scene with too much contrast shows up with over-exposed (featureless, white) highlights and under-exposed (dark) shadows.
  • The eye cannot express movement by instantaneously “freezing” an image of a moving subject, and cannot retain an image. A camera can do both.
  • The eye “corrects” for receding parallel lines in the vertical plane, e.g. tall buildings, yet considers those in the horizontal plane to be normal. The camera makes no such distinction.
  • The eye sees everything it focuses on in the context of its surroundings, relating the part to the whole. A photograph nearly always shows the subject out of context, cut off from the surrounding visuals – a small limited view.

Vintage digital – The Fuji camera with a weird sensor

The Fujifilm FinePix S1 Pro was a somewhat strange, yet innovative camera. Released in January 2000, it sported a 1.1 inch (diagonal) Super CCD sensor (23.3×15.6mm) with 3.4 million physical photosites, which after processing produced an image of 3040×2016 pixels (6MP). But it wasn’t exactly built from the ground up: it was a mash-up of a Nikon N60 film camera body and Fujifilm electronics. At the time it was considered a “digital SLR”, because true 35mm (full-frame) DSLRs had yet to appear. It used Nikon lenses, sporting a Nikon F mount. It is a bit weird to discuss a digital camera as vintage, but at 22 years old, these early digital cameras are likely in that realm.

The photosites on the sensor took the form of a honeycomb tessellation, oriented in a zig-zag pattern rather than the traditional row/column array. This resulted in the distance between cells being smaller allowing for more photosites than a regular Bayer sensor. The camera then processed the data to produce the equivalent of a 6.2 MP Bayer sensor. A conventional CCD has rectangular photosites arranged in columns and rows. The SuperCCD has octagonal photosites in a honeycomb configuration. By rotating the photosites 45° to form this interwoven layout, the CCD’s photosite pitch in the horizontal and vertical directions is narrower than in the diagonal direction. This provides a larger relative area of the photosites per total size of the CCD than possible with the conventional CCD structure. In high resolution mode, virtual pixels are created within the spatially interleaved real pixels.

Sensor size
Super CCD photosites (physical and virtual pixels)

In comparison to other cameras of 2000, the Canon EOS D30, Canon’s first “home grown” digital SLR produced 3.1 MP, and Nikon’s D1 (1999) produced 2.7MP. The Super CCD sensor evolved through a succession of designs and cameras until the final 12MP SuperCCD EXR sensor in 2010. The FinePix Pro series continued until the S5 finished production in 2009, still using the Nikon-F mount.


What is a “normal” lens anyway?

A “normal lens” for a 35mm camera, either film or digital, generally refers to a lens with a focal length of 50mm. When you look through the viewfinder of a camera with a 50mm lens attached, the scene looks about the same as it does to the naked eye. Although a 50mm focal length is considered to be a normal lens for a 35mm film or DSLR camera, the same cannot be said for all other formats. That’s why you don’t see a lot of 50mm lenses for Micro-Four-Thirds (MFT): a 50mm lens on MFT behaves like a 100mm full-frame lens, because of the crop-factor, or basically because the sensor is smaller. Of course “normal” lenses on a 35mm format camera are not exactly pigeonholed into a single focal length; instead they range anywhere from 40mm to 60mm (although this too may differ slightly depending on who describes it).

The standard idea has always been that the focal length of a normal lens should be about the same as the diagonal of the film frame/sensor, i.e. the measured distance from one corner of a negative’s frame to the corner diagonally opposite, in millimetres. For example, the diagonal of a 36×24mm full-frame is 43mm (although most SLR cameras use a 50mm lens as “normal”). Even other formats don’t hold true to this mathematical idea. The Olympus PEN F, a half-frame camera, should have a standard lens of 30mm; however, the three lenses offered were instead 38, 40, and 42mm, equivalent to 55, 58, and 60mm (on a 35mm camera) respectively (there were also 25mm lenses, but they were considered wide angle).

Every different sized sensor has its own “normal” lens. Here is a list of normal focal lengths for various film/sensor sizes (D=digital; F=film), based on commonly used lenses for each system:

Format                     Dimensions (mm) H×W   Diagonal (mm)   Normal lens (mm)
16mm cine (F)              7.5×10.3              12.7            25
Micro-Four-Thirds (D)      13×17.3               21.63           25
APS-C (D)                  15.1×22.7             27.3            35
Half-frame 35mm (F)        24×18                 30              28
APS-C (F)                  16.7×25.1             30.1            28 (+30)
35mm film/DSLR (F,D)       24×36                 43.3            50 (+55, 58)
Medium (D)                 33×44                 55              65
645 (F)                    56×42                 71.8            75
6×6 (F)                    56×56                 79.2            80
6×7 (F)                    56×67                 87.3            105
5×4 (F)                    93×118                150.2           150
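The “Diagonal” column is just the Pythagorean theorem applied to the frame dimensions; as a sketch (the function name is just for illustration):

```python
import math

def normal_focal_length(width_mm, height_mm):
    """A 'normal' focal length is approximately the frame diagonal."""
    return math.hypot(width_mm, height_mm)

print(round(normal_focal_length(36, 24), 1))   # full frame -> 43.3
print(round(normal_focal_length(17.3, 13), 1)) # Micro-Four-Thirds -> 21.6
```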

Where did the term “full-frame” originate?

Why are digital cameras with sensors the same size as a 35mm SLR frame, i.e. 36×24mm, called full-frame cameras? This is somewhat of a strange concept considering that, unlike film, where 35mm dominated the SLR genre, digital cameras did not originate with 35mm film-equivalent sized sensors. In fact for many years, until the release of the first full-frame digital SLRs, camera sensors were of the sub-35mm or “crop-sensor” type. It was not until spring 2002 that the first full-frame digital SLR appeared, the 6MP Contax N Digital. It was followed shortly after by the 11.1MP Canon EOS-1Ds, and it wouldn’t be until 2007 that Nikon offered its first full-frame camera, the D3. In all likelihood, the appearance of a sensor equivalent in size to 35mm film came about in part because the industry wished to maintain the existing standard, allowing the use of existing lenses and the established 35mm hierarchy.

One of the first occurrences of the term “full-frame” as it related to digital, may have been in the advertising literature for Canon’s EOS-1Ds.

“A full-frame CMOS sensor – manufactured by Canon – with an imaging area of 24 x 36mm, the same dimensions used by full-frame 35mm SLRs. It has 11.1 million effective pixels with a maximum resolution of 4,064 x 2,704 pixels.”

Canon EOS-1Ds User Manual, 2002

By the mid-2000s digital cameras using “crop-sensors” like APS-C had become standard, but the rise of 35mm DSLRs may have triggered a need to re-align the marketplace towards the legacy of 35mm film. As most early digital cameras used sensors smaller than 36×24mm, the term “full-frame” was likely used to differentiate them from smaller sensors. But the term has other connotations.

  • It is used in the context of fish-eye lenses to denote an image which covered the full 35mm film frame, as opposed to fish-eye lenses which just manifested as a circle.
  • It is used to denote the use of the entire film frame. For example, when APS film appeared in 1996, the cameras were able to shoot a number of differing formats: C, H, and P. H is considered the “full-frame” format with a 16:9 aspect ratio, while P is the panoramic format (3:1), and C the classic 35mm aspect ratio (3:2).

In any case, the term “full-frame” is intrinsically linked to the format of 35mm film cameras. The question is whether this term is even relevant anymore.

The whole full-frame “equivalence” thing

There is a lot of talk on the internet about the “equivalency” of crop-sensors relative to full-frame sensors – often in an attempt to somehow rationalize things in the context of the ubiquitous 35mm film frame size (36×24mm). Usually equivalence involves the use of the cringe-worthy “crop-factor”, which is just a numeric value comparing the dimensions of one sensor against those of another. For example, a camera with an APS-C sensor, e.g. Fuji-X, has a sensor size of 23.5×15.6mm, which when compared with a full-frame (FF) sensor gives a crop-factor of approximately 1.5. The crop-factor is calculated by dividing the diagonal of the FF sensor by that of the crop-sensor; in this example, 43.27/28.21 ≈ 1.53.

Fig.1: Relative sensor sizes and associated crop-factors.
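As a quick sketch of that calculation (the function name is just for illustration):

```python
import math

def crop_factor(sensor_w_mm, sensor_h_mm):
    """Crop factor = full-frame diagonal / crop-sensor diagonal."""
    ff_diagonal = math.hypot(36, 24)   # ~43.27mm
    return ff_diagonal / math.hypot(sensor_w_mm, sensor_h_mm)

print(round(crop_factor(23.5, 15.6), 2))   # Fuji-X APS-C -> 1.53
print(round(crop_factor(17.3, 13.0), 2))   # Micro-Four-Thirds -> 2.0
```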

Easy, right? But this really only matters if you want to know the full-frame equivalent of a crop-sensor lens. For example, a 35mm APS-C lens has a horizontal angle of view of roughly 37°. To find the full-frame lens it corresponds to, multiply the focal length by the crop-factor for APS-C sensors: 35×1.5≈52.5mm, which can be rounded to 50mm, the closest full-frame lens. Another reason equivalency might be important is if you want to take similar looking photographs with two different cameras, i.e. two cameras with differing sensor sizes.

But these are the only real contexts where it matters: if you are not interested in comparing a sensor to that of a full-frame camera, equivalencies don’t matter. So what does equivalence mean? It has a number of contexts. First there is the most commonly used one – focal-length equivalence, which relates how a lens attached to a crop-sensor camera behaves in terms of a full-frame sensor. It can be derived using the following equation:

Equivalent-FL = focal-length × crop-factor

The crop-factor in any case is more of a differential-factor which can be used to compare lenses on different sized sensors. Figure 2 illustrates two different systems with different sensor sizes, with two lenses that have an identical angle of view. To achieve the same angle of view on differently sized sensors, a different focal length is needed. A 25mm lens on a MFT sensor with a crop-factor of 2.0 gives the equivalent angle of view as a 50mm lens on a full-frame sensor.

Fig.2: Focal-length equivalence (AOV) between a Micro-Four-Thirds, and a full-frame sensor.

Focal length equivalency really just describes how a lens behaves on different sized sensors with respect to angle-of-view (AOV). For example, Figure 3 illustrates the view obtained when using a 24mm lens on three different sensors. A 24mm lens used on an APS-C sensor produces an image equivalent to a full-frame 35mm lens, and the same lens used on a MFT sensor produces an image equivalent to a full-frame 50mm lens.

Fig.3: The view of a 24mm lens on three different sensors.

When comparing a crop-sensor camera directly against a FF camera, in the context of reproducing a particular photograph, two other equivalencies come into play. The first is aperture equivalence. The aperture is just the size of the hole in the lens diaphragm that allows light to pass through. For example, an aperture of f/1.4 on a 50mm lens means a maximum aperture diameter of 50mm/1.4 = 35.7mm. A 25mm f/1.8 MFT lens will not be equivalent to a 50mm f/1.8 FF lens, because the hole in the FF lens would be larger. To make the lenses equivalent from the perspective of aperture requires multiplying the f-number by the crop factor:

Equivalent-Aperture = f-number × crop-factor

Figure 4 illustrates this – a 25mm lens used at f/1.4 on a MFT camera would be equivalent to using a 50mm with an aperture of f/2.8 on a full-frame camera.

Fig.4: Aperture equivalence between a 25mm MFT lens, and a 50mm full-frame lens.

The second is ISO equivalence, with a slightly more complicated equation:

Equivalent-ISO = ISO × crop-factor²

Therefore a 35mm APS-C lens at f/5.6 and 800 ISO would be equivalent to a 50mm full frame lens at f/8 and 1800 ISO. Below is a sample set of equivalencies:

           Focal Length / F-stop = Aperture ∅ (ISO)
       MFT (×2.0): 25mm / f/2.8 = 8.9mm (200)
     APS-C (×1.5): 35mm / f/3.9 = 8.9mm (355)
Full-frame (×1.0): 50mm / f/5.6 = 8.9mm (800)
      6×6 (×0.55): 90mm / f/10.0 = 9.0mm (2600)

Confused? Yes, and so are many people. None of this is really that important, except to understand that how a lens behaves will differ depending on the size of the sensor in the camera it is used on. Sometimes, focal-length equivalence isn’t even possible: there are full-frame lenses that just don’t have a cropped equivalent. For example, a Sigma 14mm f/1.8 would need an APS-C equivalent of 9mm f/1.2, or a MFT equivalent of 7mm f/0.9. The bottom line is that if you only photograph using a camera with an APS-C sensor, then how a 50mm lens behaves on that camera should be all that matters.
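The three equivalences (focal length, aperture, ISO) can be combined into one helper; a minimal sketch, using the MFT row from the sample set above:

```python
def ff_equivalent(focal_mm, f_number, iso, crop):
    """Full-frame equivalent focal length, f-number and ISO for a
    lens/exposure used on a crop-sensor camera."""
    return focal_mm * crop, f_number * crop, iso * crop ** 2

# MFT (crop 2.0): 25mm f/2.8 at ISO 200
focal, f_num, iso = ff_equivalent(25, 2.8, 200, 2.0)
print(f"{focal:.0f}mm f/{f_num:.1f} ISO {iso:.0f}")   # -> 50mm f/5.6 ISO 800
```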

Now that’s a camera!

An 8×10 still camera operated by photographer Neal Harburger was used to capture stills on Paramount westerns in the 1930s. The camera was a Minex, designed by A. Adams & Co. of London. It was 18 inches high, 30 inches long (with the bellows extended), and weighed 34 pounds. From the literature it looks to be the “Tropical” model, made of brass and teak, with Russian leather bellows.