Those weird image sensor sizes

Some sensor sizes are listed as some form of inch, for example 1″ or 2/3″. But the diagonal of a 2/3″ sensor is actually only 0.43″ (11mm). The "inch" designation of a camera sensor does not signify its actual diagonal size. These sizes are based on old video camera tubes, where the inch measurement referred to the outer diameter of the glass tube.

The world used to use vacuum tubes for a lot of things, far beyond just the early computers. Video cameras like those used on NASA's unmanned deep space probes, such as Mariner, used vacuum tubes as their image sensors. These were known as vidicon tubes, basically a video camera tube design in which the target material is a photoconductor. There were a number of branded versions, e.g. Plumbicon (Philips) and Trinicon (Sony).

A sample of the 1″ vidicon tube, and its active area.

These video tubes were described using the outside diameter of the overall glass tube, always expressed in inches. This differed from the size of the actual imaging area, which was typically about two-thirds of that. A 1″ tube, for instance, typically had a picture area of about 2/3″ on the diagonal, or roughly 16mm. Toshiba produced vidicon tubes in sizes of 2/3″, 1″, 1.2″ and 1.5″.

These vacuum-tube-based sensors are long gone, yet some manufacturers still use this deception to make tiny sensors seem larger than they are.

Image sensor   Image sensor size   Diagonal   Surface area
1″             13.2×8.8mm          15.86mm    116.16mm²
2/3″           8.8×6.6mm           11.00mm    58.08mm²
1/1.8″         7.11×5.33mm         8.89mm     37.90mm²
1/3″           4.8×3.6mm           6.00mm     17.28mm²
1/3.6″         4.0×3.0mm           5.00mm     12.00mm²
Various weird sensor sizes

For example, a smartphone may have a camera with a sensor size of 1/3.6″. How does it get this? The actual sensor is approximately 4×3mm in size, with a diagonal of 5mm. This 5mm is multiplied by 3/2, giving 7.5mm (0.295″). 1″ sensors are somewhere around 13.2×8.8mm in size, with a diagonal of 15.86mm. So 15.86×3/2=23.79mm (0.94″), which is conveniently rounded up to 1″. The phrase "1 inch" makes it seem like the sensor is almost as big as a full-frame sensor, but in reality it is nowhere near that size.
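As a sketch of that arithmetic (in Python; the 3/2 multiplier and the sensor dimensions are the approximations quoted above):

```python
import math

def tube_size_inches(width_mm: float, height_mm: float) -> float:
    """Approximate "video tube" inch designation of a sensor:
    the diagonal (in mm) scaled by 3/2, converted to inches."""
    diagonal_mm = math.hypot(width_mm, height_mm)
    return diagonal_mm * 3 / 2 / 25.4  # 25.4mm per inch

# Smartphone sensor, roughly 4×3mm (diagonal 5mm):
print(round(tube_size_inches(4.0, 3.0), 3))   # 0.295 -> sold as roughly 1/3.6″

# A "1 inch" sensor, roughly 13.2×8.8mm (diagonal 15.86mm):
print(round(tube_size_inches(13.2, 8.8), 2))  # 0.94 -> conveniently rounded to 1″
```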

Various sensors and their fractional “video tube” dimensions.

Supposedly this is also where Micro Four Thirds (MFT) gets its 4/3 from. The MFT sensor is 17.3×13mm, with a diagonal of 21.64mm. So 21.64×3/2=32.46mm, or 1.28″, roughly equating to 4/3″. Although other sources say the 4/3 is all about the aspect ratio of the sensor, 4:3.
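The same sketch applied to the MFT dimensions above:

```python
import math

# Micro Four Thirds sensor, 17.3×13mm:
diagonal_mm = math.hypot(17.3, 13.0)  # ≈ 21.64mm
print(diagonal_mm * 3 / 2 / 25.4)     # ≈ 1.28 — in the ballpark of 4/3″
```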

The Bayer filter

Without the colour filters in a camera sensor, the images acquired would be monochromatic. The most common colour filter used by many cameras is the Bayer filter array. The pattern was introduced by Bryce Bayer of Eastman Kodak in a 1975 patent filing (U.S. Patent No. 3,971,065). The raw output of the Bayer array is called a Bayer pattern image. The most common arrangement of colour filters in a Bayer array is a mosaic of RGGB quartets, where every 2×2 pixel square is composed of a red and a green pixel on the top row, and a green and a blue pixel on the bottom row. This means that not every pixel is sampled as red-green-blue; rather, each photosite captures one colour. The image below shows how the Bayer mosaic is decomposed.

Decomposing the Bayer colour filter.
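A minimal sketch of that 2×2 tiling: the pattern repeats every two rows and columns, so the filter colour at any photosite follows from the parity of its coordinates.

```python
# RGGB quartet: red/green on even rows, green/blue on odd rows.
BAYER_RGGB = [["R", "G"],
              ["G", "B"]]

def filter_colour(row: int, col: int) -> str:
    """Filter colour of the photosite at (row, col)."""
    return BAYER_RGGB[row % 2][col % 2]

# Top-left 4×4 corner of the mosaic:
for r in range(4):
    print(" ".join(filter_colour(r, c) for c in range(4)))
# R G R G
# G B G B
# R G R G
# G B G B
```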

But why are there more green filters? This is largely because human vision is more sensitive to green, so the ratio is 50% green, 25% red and 25% blue. So in a sensor with 4000×6000 pixels (24 megapixels), 12 million would be green, and red and blue would have 6 million each. The green channels are used to gather luminance information. The red and blue channels each have half the sampling resolution of the luminance detail captured by the green channel. However, human vision is much more sensitive to luminance resolution than to colour information, so this is usually not an issue. An example of what a "raw" Bayer pattern image looks like is shown below.

Actual image (left) versus raw Bayer pattern image (right)
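A sketch (using NumPy, and the RGGB layout from above) of how the raw image on the right relates to the full-colour one on the left: each photosite keeps only the single channel its filter passes.

```python
import numpy as np

def mosaic_rggb(rgb: np.ndarray) -> np.ndarray:
    """Reduce an H×W×3 RGB image to the single-channel raw that an
    RGGB Bayer sensor would record: one colour sample per photosite."""
    raw = np.empty(rgb.shape[:2], dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites on red rows
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites on blue rows
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return raw

# The 50/25/25 split from above, for a 4000×6000 sensor:
h, w = 4000, 6000
print(h * w // 2, h * w // 4, h * w // 4)  # 12000000 6000000 6000000
```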

So how do we get pixels that are full RGB? To obtain a full-colour image, a demosaicing algorithm is applied to interpolate a set of red, green, and blue values for each pixel. These algorithms use the surrounding pixels of the corresponding colours to estimate the values for a particular pixel. The simplest algorithms average the surrounding pixels to derive the missing data. The exact algorithm used depends on the camera manufacturer.
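A sketch of that simplest averaging approach, under the RGGB layout used above: each missing channel at a photosite is filled with the mean of the same-coloured neighbours in the surrounding 3×3 window. Real cameras use far more sophisticated interpolation.

```python
import numpy as np

def filter_channel(row: int, col: int) -> int:
    """Channel index (0=R, 1=G, 2=B) of an RGGB photosite."""
    return [[0, 1], [1, 2]][row % 2][col % 2]

def demosaic_naive(raw: np.ndarray) -> np.ndarray:
    """Estimate full RGB per pixel by averaging neighbouring photosites
    of the missing colours (3×3 window, crude edge handling)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    for r in range(h):
        for c in range(w):
            for ch in range(3):
                if ch == filter_channel(r, c):
                    rgb[r, c, ch] = raw[r, c]  # sampled directly
                    continue
                # Same-coloured neighbours in the clipped 3×3 window.
                vals = [raw[rr, cc]
                        for rr in range(max(r - 1, 0), min(r + 2, h))
                        for cc in range(max(c - 1, 0), min(c + 2, w))
                        if filter_channel(rr, cc) == ch]
                rgb[r, c, ch] = sum(vals) / len(vals)
    return rgb

raw = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic_naive(raw)[1, 1])  # estimated R, G, B at one photosite
```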

Of course Bayer is not the only filter pattern. Fuji created its own version, the X-Trans colour filter array, which uses a larger 6×6 pattern of red, green, and blue.