From photosites to pixels (ii) – ADC

The inner workings of a camera are much more complex than most people care to know about, but everyone should have a basic understanding of how digital photographs are created.

The ADC is the Analog-to-Digital Converter. After an exposure ends, the electrons captured in each photosite are converted to a voltage. The ADC takes this analog signal as input and classifies it into a brightness level represented by a binary number. The output from the ADC is sometimes called an ADU (Analog-to-Digital Unit), which is a dimensionless unit of measure. The darker regions of a photographed scene correspond to a low count of electrons, and consequently a low ADU value, while brighter regions correspond to higher ADU values.

Fig. 1: The ADC process
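As a rough sketch of this pipeline, the snippet below converts one photosite's electron count to a voltage and then to an ADU. The conversion gain (microvolts per electron) and the system voltage here are made-up illustrative values, not the specs of any real sensor.

    # Rough sketch: electrons -> voltage -> ADU (illustrative numbers only)
    ELECTRONS = 23_000       # electrons collected by one photosite
    GAIN_UV_PER_E = 85.0     # assumed conversion gain, microvolts per electron
    SYSTEM_V = 5.0           # assumed full-scale (system) voltage
    BITS = 14                # ADC resolution in bits

    voltage = ELECTRONS * GAIN_UV_PER_E * 1e-6   # volts
    adu = int((voltage / SYSTEM_V) * 2**BITS)    # quantize to a digital value
    print(f"{voltage:.3f} V -> ADU {adu}")       # 1.955 V -> ADU 6406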

The value output by the ADC is limited by its resolution (or bit depth). Resolution is the smallest incremental voltage the ADC can distinguish, and it is usually expressed as the number of bits the ADC outputs. For example, a full-frame sensor with a 14-bit ADC can convert a given analog signal to one of 2^14 distinct values. This means it has a tonal range of 16,384 values, from 0 to 16,383 (2^14 − 1). An output value is computed based on the following formula:

ADU = (AVM / SV) × 2^R

where AVM is the measured analog voltage from the photosite, SV is the system voltage, and R is the resolution of the ADC in bits. For example, for an ADC with a resolution of 8 bits, if AVM = 2.7 V and SV = 5.0 V, then ADU = (2.7 / 5.0) × 2^8 ≈ 138.
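Here is a minimal sketch of that formula in Python, with variable names mirroring the formula. The clip to 2^R − 1 on the last step is my own assumption, added so that a full-scale input stays within the output range.

    def adc_output(avm, sv, r):
        """Quantize a measured voltage (AVM) against the system voltage (SV)
        using an ADC with a resolution of r bits."""
        adu = int((avm / sv) * 2**r)
        return min(adu, 2**r - 1)   # clip so full scale maps to the top value

    print(adc_output(2.7, 5.0, 8))  # 138, matching the worked example above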

Resolution (bits)   Digitizing steps   Digital values
8                   256                0..255
10                  1024               0..1023
12                  4096               0..4095
14                  16384              0..16383
16                  65536              0..65535

Dynamic ranges of ADC resolution
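Every row of the table follows directly from the bit depth, as this short sketch shows:

    # Digitizing steps and value range for common ADC resolutions
    for bits in (8, 10, 12, 14, 16):
        steps = 2**bits    # number of digitizing steps
        print(f"{bits:>2} bits: {steps:>6} steps, values 0..{steps - 1}")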

The process is roughly illustrated in Figure 1 using a simple 3-bit system with 2^3 = 8 values, 0 to 7. Note that because discrete numbers are being used to count and sample the analog signal, the conversion follows a stepped function rather than a continuous one. The process of converting from analog to digital is therefore subject to some error: the deviation of the stepped line from the straight line at each measurement is the quantization error.
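A quick sketch of that error for the 3-bit case: each input voltage is snapped down to one of the 8 levels, and the difference between the voltage a level stands for and the true voltage is the quantization error, always less than one step. The 5.0 V system voltage and the sample inputs are illustrative.

    BITS, SV = 3, 5.0
    STEP = SV / 2**BITS                  # width of one digitizing step in volts

    for v in (0.4, 1.3, 2.2, 3.1, 4.0):
        adu = min(int(v / SV * 2**BITS), 2**BITS - 1)
        reconstructed = adu * STEP       # voltage the digital value stands for
        print(f"{v:.2f} V -> level {adu}, error {v - reconstructed:+.3f} V")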

Now it’s starting to get more complicated. There are other factors involved, like gain, which is the ratio applied when converting the analog voltage signal to bits. Then there is the least significant bit (LSB), which is the smallest change in signal the ADC can detect.
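The least significant bit has a concrete voltage interpretation: it is the system voltage divided by the number of digitizing steps. A quick sketch, assuming the same 5.0 V system voltage used in the earlier example:

    SV = 5.0
    for bits in (8, 12, 14):
        lsb = SV / 2**bits    # smallest voltage change this ADC can resolve
        print(f"{bits} bits: 1 LSB = {lsb * 1000:.3f} mV")

At 14 bits, 1 LSB is about 0.3 mV, which is why higher bit depths demand very clean, low-noise electronics.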

Why camera sensors don’t have pixels

The sensor in a digital camera is equivalent to a frame of film. Both capture light and use it to generate a picture; it is just the medium that changes: film uses light-sensitive particles, digital uses light-sensitive diodes. These specks of light work together to form a cohesive, continuous-tone picture when viewed from a distance.

One of the most confusing things about digital cameras is the concept of pixels. They are confusing because some people think a pixel is a quantifiable entity. But here’s the thing: it isn’t. A pixel, short for picture element, is a physical point in an image. It is the smallest single component of an image and is square in shape, but it is just a unit of information without a specific physical size, i.e. a pixel isn’t 1 mm². The interpreted size of a pixel depends largely on the device it is viewed on. The terms PPI (pixels per inch) and DPI (dots per inch) were introduced to relate the theoretical concept of a pixel to real-world resolution. PPI describes how many pixels an image contains per inch of distance. DPI is used in printing, and varies from device to device because multiple dots are sometimes needed to reproduce a single pixel.
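PPI is easy to compute from an image's pixel dimensions and the physical size it is displayed or printed at. A hypothetical sketch (the numbers are illustrative, not any particular device):

    def ppi(pixels_wide, pixels_high, inches_wide, inches_high):
        """Pixels per inch along each axis for a given physical output size."""
        return pixels_wide / inches_wide, pixels_high / inches_high

    # A 6000x4000 image printed at 20 x 13.3 inches:
    print(ppi(6000, 4000, 20, 13.3))   # (300.0, ~300.8) -- about 300 PPI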

But sensors don’t really have “pixels”. They have an array of cavities, better known as “photosites”, which are the photodetectors that give rise to pixels. When the shutter opens, each photosite collects light photons and stores them as an electrical signal. When the exposure ends, the camera assesses the signals and quantifies them as digital values, i.e. the things we call pixels. We tend to use the term pixel interchangeably with photosite in relation to the sensor because each photosite has a direct association with a pixel in the image the camera creates. However, a photosite is a physical entity on the sensor surface, whereas a pixel is an abstract concept. On a sensor, the term “pixel area” describes the space occupied by each photosite. For example, a Fuji X-H1 has a pixel area of 15.05 µm² (square micrometres), which is *really* tiny.
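You can estimate pixel area by dividing the sensor's surface area by its photosite count. A sketch using approximate Fuji X-H1 figures (a 23.5 × 15.6 mm APS-C sensor with roughly 24.3 million photosites; both numbers are approximations):

    SENSOR_W_MM, SENSOR_H_MM = 23.5, 15.6    # approximate APS-C dimensions
    PHOTOSITES = 24.3e6                      # approximate photosite count

    area_um2 = (SENSOR_W_MM * 1000) * (SENSOR_H_MM * 1000) / PHOTOSITES
    print(f"~{area_um2:.2f} um^2 per photosite")   # ~15.09 um^2

which lands close to the 15.05 µm² quoted above.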

A basic photosite

NB: Sometimes you may see photosites called “sensor elements”, or sensels.