Rear Window – the publicity stills

Every movie had publicity stills made, and Rear Window was no different. The interesting thing about some of these shots is the camera used. The two shots in question show Grace Kelly shooting Jimmy Stewart with a Korelle Master Reflex. What I don’t quite understand is why this particular camera was chosen, as opposed to the actual camera used in the movie, the Exakta VX.

This camera was the US version of the Meister-Korelle, a 6×6 SLR which used 120 film. It was the last version of the Reflex-Korelle, a camera made by VEB WEFO (Werkstätte für Feinmechanik und Optik), a short-lived, state-owned company in East Germany. The original Reflex-Korelle was designed by Franz Kochmann and released in 1935. Production of the Meister-Korelle lasted from 1950 to 1952. The camera’s basic design and configuration was carried forward in cameras such as the Exakta 66, Praktisix, and Pentacon 6.

As to why this camera? Likely because it seems to have been commonly used by cinematographers. The lens? Hard to completely decipher, and by no means one of the “standard” lenses listed with the camera. Companies like Astro-Berlin did provide lenses for the Master Reflex, as cited in publications like American Cinematographer. Or perhaps it is a Kilfitt?



the histogram exposed (i) – unimodal

This is the first post in an ongoing series that looks at the intensity histograms of various images, and what they help tell us about the image. The idea behind it is to try and dispel the myths behind the “ideal” histogram, as well as to help readers learn to read a histogram. The hope is to provide a series of posts (each containing three images and their histograms) based on histogram concepts such as shape, clipping, etc. Histograms are interpreted in tandem with the image.
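For readers who want to inspect these histograms outside of an image editor, here is a minimal sketch of how an intensity histogram can be computed and plotted (assuming Pillow, NumPy, and Matplotlib are installed; the filename forest.jpg is just a placeholder):

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Load the image and convert it to 8-bit grayscale intensities (0-255).
img = np.array(Image.open("forest.jpg").convert("L"))

# Count how many pixels fall into each of the 256 intensity levels.
counts, bins = np.histogram(img, bins=256, range=(0, 256))

# Plot the histogram: intensity on the x-axis, pixel count on the y-axis.
plt.bar(bins[:-1], counts, width=1.0, color="gray")
plt.xlabel("Intensity (0 = black, 255 = white)")
plt.ylabel("Pixel count")
plt.show()
```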

Histogram 1: Ideal with a hint of clipping

The first image is the poster-boy for “ideal” histograms (almost). A simple image of a track through a forest in Scotland, it has a beautiful bell-shaped (unimodal) curve, almost entirely in the midtones. A small number of pixels, less than 1%, form a highlight clipping issue in the histogram, a result of the blown-out, overcast sky. Otherwise it is a well-formed image with good contrast and colour.

Histogram 2: The witch’s hat

This is a picture taken along the route of the Bergen Line train in Norway. A symmetric, unimodal histogram, taking on a classic “witch’s hat” shape. The tail curving towards 0 (①) deals with the darker components of the upper rock face, and the house. The tail curving towards 255 (③) deals with the lighter components of the lower rock face, and the house. The majority of pixels, the midtones (②), form the sky, grassland, and rock face.

Olympus E-M5 Mark II (16MP): 12mm; f/6.3; 1/400

Histogram 3: An odd peak

This is a photograph of the statue of Leif Eriksson which stands in front of Reykjavik’s Hallgrímskirkja. It produces a truly odd histogram – basically the majority of pixels form a unimodal histogram (③), which represents the sky surrounding the statue. The tiny hillocks to either side (①, ②) form the sculpture itself – the left forming the shadows, and the right forming the bright regions. Overall, however, this is a well-formed image, even though it may appear as if the sculpture is low contrast.

Leica D-Lux 6 (10MP): 14.7mm; f/2.8; 1/1600

Moriyama on clarity

“For me, capturing what I feel with my body is more important than the technicalities of photography. If the image is shaking, it’s OK, if it’s out of focus, it’s OK. Clarity isn’t what photography is about.”

Daido Moriyama

What is (camera) sensor resolution?

What is sensor resolution? It is not the number of photosites on the sensor; that is just photosite count. In reality sensor resolution is a measure of density, usually the number of photosites per unit of area, e.g. MP/cm2. For example a full-frame sensor with 24MP has an area of 36×24mm = 864mm2, or 8.64cm2. Dividing 24MP by this gives us 2.78 MP/cm2. It could also mean the actual area of a photosite, usually expressed in terms of μm2.

Such measures are useful in comparing different sensors from the perspective of density, and characteristics such as the amount of light which is absorbed by the photosites. A series of differently sized sensors with the same pixel count (image resolution) will have differently sized photosites and different sensor resolutions. For 16 megapixels, an MFT sensor will have 7.1 MP/cm2, APS-C 4.4 MP/cm2, full-frame 1.85 MP/cm2, and medium format 1.1 MP/cm2. For the same pixel count, the larger the sensor, the larger the photosite.

Sensor resolution for the same image resolution, i.e. the same pixel count (e.g. 16MP)
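As a rough sketch of where such density figures come from, the calculation can be done in a few lines of Python (the sensor dimensions below are nominal values I have assumed for each format; actual sensors vary slightly between manufacturers):

```python
# Nominal sensor dimensions (mm) for each format -- assumed values,
# actual sensors vary slightly between manufacturers.
formats = {
    "MFT":           (17.3, 13.0),
    "APS-C":         (23.5, 15.6),
    "Full frame":    (36.0, 24.0),
    "Medium format": (43.8, 32.9),
}

pixel_count_mp = 16.0  # the same image resolution for every sensor

for name, (w_mm, h_mm) in formats.items():
    area_cm2 = (w_mm * h_mm) / 100.0     # mm^2 -> cm^2
    density = pixel_count_mp / area_cm2  # sensor resolution in MP/cm^2
    print(f"{name:14s} {density:.2f} MP/cm2")
```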

It can also be used in comparing sensors of the same size. Consider the following three Fujifilm cameras and their associated APS-C sensors (with an area of 366.6mm2):

  • The X-T30 has 26MP, or 6240×4160 photosites on its sensor, a photosite pitch of 3.74µm, and a pixel density of 7.08 MP/cm2.
  • The X-T20 has a pixel count of 24.3MP, or 6058×4012 photosites, a photosite pitch of 3.9µm, and a pixel density of 6.63 MP/cm2.
  • The X-T10 has a pixel count of 16.3MP, or 4962×3286 photosites, a photosite pitch of 4.76µm, and a pixel density of 4.45 MP/cm2.

The X-T30 has a higher sensor resolution than both the X-T20 and X-T10, and the X-T20 has a higher sensor resolution than the X-T10. The sensor of the X-T30 is roughly 1.6 times as dense as that of the X-T10.
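The pitch and density figures above can be derived from the photosite counts and the sensor dimensions. Here is a sketch of the calculation (assuming an active sensor area of 23.5mm × 15.6mm; the exact area varies by model, so the results differ fractionally from the manufacturer’s quoted figures):

```python
def photosite_stats(width_px, height_px, sensor_w_mm=23.5, sensor_h_mm=15.6):
    """Return (photosite pitch in um, density in MP/cm^2) for a sensor."""
    pitch_um = (sensor_w_mm * 1000.0) / width_px        # mm -> um across the sensor width
    area_cm2 = (sensor_w_mm * sensor_h_mm) / 100.0      # mm^2 -> cm^2
    density = (width_px * height_px) / 1e6 / area_cm2   # megapixels per cm^2
    return pitch_um, density

cameras = {"X-T30": (6240, 4160), "X-T20": (6058, 4012), "X-T10": (4962, 3286)}
for model, (w, h) in cameras.items():
    pitch, density = photosite_stats(w, h)
    print(f"{model}: pitch {pitch:.2f} um, density {density:.2f} MP/cm2")
```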

Sometimes different sensors have similar photosite sizes, and similar photosite densities. For example the Leica SL2 (2019) is a full-frame 47.3MP sensor with a photosite area of 18.23 µm2 and a density of 5.47 MP/cm2. The antiquated Olympus PEN E-P1 (2009) is an MFT 12MP sensor with a photosite area of 18.32 µm2 and a density of 5.47 MP/cm2.

The art of over-processing

“No matter what lens you use, no matter what the speed of the film is, no matter how you develop it, no matter how you print it, you cannot say more than you see”.

Thoreau, quoted by Paul Strand in The Snapshot, Aperture, 19(1), p.49 (1974)

Full frame sensors

Now that we have looked at the origin of 35mm film cameras, let’s turn our attention to full-frame sensors. A full-frame sensor is so named because the sensor is 24mm×36mm in size, equivalent to a single frame of 35mm film. Why are we basing new technology on old concepts? Mostly for the sake of continuity, because why fix something that isn’t broken? The 24mm×36mm size first appeared in the early 20th century, and eventually became the standard film “frame size”. When the transition to digital occurred, keeping the image size the same meant that photographers could easily carry their legacy lenses over onto digital bodies. The 35mm format became the reference format. Full frame is now the largest consumer sensor format below medium format (like the Fujifilm GFX100).

An analog film frame versus a digital full-frame

The size of a full-frame sensor has a significant impact on image quality. The large surface area means that more light can be gathered by the sensor, which is particularly beneficial in low-light conditions. The photosites often have a large pitch (more commonly referred to by manufacturers as the pixel pitch), which provides a broad dynamic range, and good low-light/high-ISO performance. For example, the Leica SL2 (47.3MP) has a pixel pitch of 4.3μm, whereas the Olympus E-M1 Mark II with an MFT sensor has a pixel pitch of 3.32μm. When the area of a photosite is calculated (4.3² ≈ 18.5μm² versus 3.32² ≈ 11.0μm²), a photosite on the SL2 is roughly 68% larger than one on the E-M1.

Pixel pitch differences: MFT vs. FF

The downside of a full-frame sensor is that cameras must be larger to accommodate the large sensor. Larger cameras mean heavier cameras.

Fixed forever

“In every photograph the moment is fixed forever. In some it is the very moment that we prize, because it is such vivid history. In a few the moment magically becomes forever.”

Beaumont Newhall in The History of Photography, the Museum of Modern Art, New York, 1949.

Pixels and resolution

Pixels actually help define image resolution. Image resolution is the level of visual detail in an image, and is usually represented as the number of pixels in an image. An image with a high density of pixels will have a higher resolution, providing both better definition and more detail. An image with low resolution will have fewer pixels and consequently less detail and definition. Resolution is the difference between a 24MP image and a 4MP image. Consider the example below which shows four different resolutions of the same image, shown as they would appear on a screen.

Various image resolutions

Each image is ¼ the size of the previous image, meaning it has 75% fewer pixels. However it is hard to determine how much detail has been lost. In some cases the human visual system will compensate for details lost by filling in information. To understand how resolution impacts the quality of an image, it is best to look at the images using the same image dimensions. This means zooming in on the lower-resolution images so they appear the same size as the 100% image.

You can clearly see that when the resolution of an image decreases, the finer details tend to get washed out. This is especially prevalent in regions of this image which contain text. Low resolution essentially means details become pixelated or blobby. These examples are quite extreme of course. With the size of modern camera sensors, taking a 24MP (6000×4000) image and reducing each dimension to 25% would still result in an image 1500×1000 pixels in size. The quality of these lower-resolution images is actually perceived to be quite good, because of the limited resolution of screens. Below is an example of a high resolution image and the same image in low resolution at 1/8th the size.

A comparison of resolutions.

They are perceptually quite similar. It is not until one enlarges a region that the degradation becomes clear. These artifacts are particularly prevalent in fine details, such as text.

An example of an enlarged region showing high versus low resolution images.
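If you want to reproduce this kind of comparison yourself, here is a minimal sketch using Pillow (the filename and scale factors are placeholders):

```python
from PIL import Image

# Load a (hypothetical) 24MP image, e.g. 6000x4000 pixels.
img = Image.open("photo.jpg")

# Produce progressively lower-resolution versions: 1/2, 1/4, and 1/8
# of the original dimensions in each direction.
for factor in (2, 4, 8):
    small = img.resize((img.width // factor, img.height // factor), Image.LANCZOS)
    small.save(f"photo_1over{factor}.jpg")

# To compare detail fairly, enlarge a reduced version back to the original
# dimensions so both images appear at the same size on screen.
upscaled = Image.open("photo_1over8.jpg").resize(img.size, Image.NEAREST)
upscaled.save("photo_1over8_enlarged.jpg")
```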

How do camera sensors work?

So we have described photosites, but how does a camera sensor actually work? What sort of magic happens inside a digital camera? When the shutter button is pressed, and the sensor exposed to light, the light passes through the lens, and then through a series of filters, a microlens array, and a colour filter, before being deposited in the photosite. A photodiode then converts the light into an electrical signal, which is in turn converted into a quantifiable digital value.

Cross-section of a sensor.

The uppermost layer of a sensor typically contains certain filters. One of these is the infrared (IR) filter. Light contains both ultraviolet and infrared parts, and most sensors are very sensitive to infrared radiation. Hence the IR filter is used to eliminate the IR radiation. Other filters include anti-aliasing (AA) filters which blur the lines between repeating patterns in order to avoid wavy lines (moiré).

Next come the microlenses. One would assume that photosites are butted up against one another, but in reality that’s not the case. Camera sensors have a “microlens” above each photosite to concentrate the amount of light gathered.

Photosites by themselves have a problem distinguishing colour.  To capture colour, a filter has to be placed over each photosite, to capture only specific colours. A red filter allows only red light to enter the photosite, a green filter only green, and a blue filter only blue. Therefore, each photosite contributes information about one of the three colours that, together, comprise the complete colour system of a photograph (RGB).

Filtering light using colour filters, in this case showing a Bayer filter.

The most common type of colour filter array is called a Bayer filter. The array in a Bayer filter consists of a repetitive pattern of 2×2 squares, each made up of one red, one blue, and two green filters. The Bayer filter has more green than red or blue because human vision is more sensitive to green light.
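To make the idea concrete, here is a small NumPy sketch that simulates what a Bayer (RGGB) filter does, keeping only one colour value per photosite (the exact ordering of the 2×2 pattern varies between sensors; this is an illustration, not any particular manufacturer’s implementation):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a Bayer (RGGB) colour filter array over an (H, W, 3) RGB image.

    The result is an (H, W) array in which each photosite records only the
    one colour its filter lets through; demosaicing software later
    interpolates the two missing colours back in at every photosite.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filters
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filters
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filters (twice as many greens)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filters
    return mosaic

# A tiny 4x4 test image just to show the pattern.
test = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
print(bayer_mosaic(test))
```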

A basic diagram of the overall process looks something like this:

Light photons enter the aperture, and a portion are allowed through by the shutter. The camera sensor (photosites) then absorbs the light photons, producing an electrical signal which may be amplified by the ISO amplifier before it is turned into the pixels of a digital image.

What is a grayscale image?

If you are starting to learn about image processing then you will likely be dealing with grayscale or 8-bit images. This effectively means that they contain 2^8 or 256 different shades of gray, from 0 (black) to 255 (white). They are the simplest form of image to create image processing algorithms for. There are some image types that are more than 8-bit, e.g. 10-bit (1024 shades of gray), but in reality these are only used in specialist applications. Why? Don’t more shades of gray mean a better image? Not necessarily.

The main reason? Blame the human visual system. It is designed for colour, having three types of cone photoreceptors for conveying colour information, allowing humans to perceive approximately 10 million unique colours. It has been suggested that, from the perspective of grays, human eyes cannot perceptually see the difference between 32 and 256 graylevel intensities (there is only one type of photoreceptor which deals with black and white). So 256 levels of gray are really for the benefit of the machine, and although the machine would be just as happy processing 1024, it is likely not needed.

Here is an example. Consider the following photo of the London Blitz, WW2 (New York Times Paris Bureau Collection).

The London Blitz photograph.

This is a nice grayscale image, because it has a good distribution of intensity values from 0 to 255 (which is not always easy to find). Here is the histogram:

The histogram of the Blitz photograph.

Now consider the image, reduced to 8, 16, 32, 64, and 128 intensity levels. Here is a montage of the results, shown in the form of a region extracted from the original image.

The same image with differing levels of grayscale.

Note that there is very little perceivable difference, except at 8 intensity levels, where the image starts to become somewhat grainy. Now consider a comparison of this enlarged region showing only 256 (left) versus 32 (right) intensity levels.

256 (left) versus 32 (right) intensity levels.

Can you see the difference? There is very little, especially when viewed in the overall context of the complete image.
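If you want to experiment with this on your own images, here is a minimal sketch of the reduction (assuming Pillow and NumPy; the filename is a placeholder):

```python
import numpy as np
from PIL import Image

# Load as 8-bit grayscale: 256 intensity levels.
img = np.array(Image.open("blitz.png").convert("L"))

for levels in (128, 64, 32, 16, 8):
    step = 256 // levels
    # Collapse each run of `step` intensities onto a single value,
    # leaving only `levels` distinct shades of gray.
    reduced = (img // step) * step
    Image.fromarray(reduced.astype(np.uint8)).save(f"blitz_{levels}.png")
```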

Many historic images look like they are grayscale, but in fact they are anything but. They may be slightly yellowish or brown in colour, either due to the photographic process, or due to aging of the photographic medium. There is no benefit to processing these types of photographs as colour images, however; they should be converted to 8-bit grayscale.
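Converting such an image is straightforward; for example, with Pillow (the filename is a placeholder):

```python
from PIL import Image

# Collapse a sepia-toned scan to a single 8-bit grayscale channel.
Image.open("historic_scan.tif").convert("L").save("historic_scan_gray.tif")
```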