What is image resolution?

Sometimes a technical term gets used without any thought to its meaning, and before you know it, it becomes an industry standard. This is the case with the term “image resolution”, which has become the standard means of describing how much detail is portrayed in an image. The problem is that the term resolution can mean different things in photography. In one context it is used to describe the pixel density of devices (in DPI or PPI). For example, a screen may have a resolution of 218 ppi (pixels-per-inch), and a smartphone might have a resolution of 460 ppi. There is also sensor resolution, which is concerned with the density of photosites on a sensor of a given size. You can see how this can get confusing.

Fig.1: Image resolution is about detail in the image (the image on the right becomes pixelated when enlarged)

The term image resolution really just refers to the number of pixels in an image, i.e. the pixel count. It is usually expressed as two numbers, the number of pixel columns and rows in an image, often known as the linear resolution. For example, the Ricoh GR III has an APS-C sensor with a sensor resolution of 6051×4007, or about 24.2 million photosites on the physical sensor. The effective number of pixels in an image derived from the sensor is 6000×4000, or a pixel count of 24 million pixels – this is considered the image resolution. Image resolution can be used to describe a camera in broad terms, e.g. the Sony A1 has 50 megapixels, or by its dimensions, “8640×5760”. It is often used when comparing images, e.g. the Sony A1 with 50MP has a higher resolution than the Sony ZV-1 with 20MP. The image resolution of two images is shown in Figure 1 – a high resolution image has more detail than an image with a lower resolution.
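Since image resolution is just pixel count, the megapixel figure can be recovered by multiplying the two dimensions. A minimal Python sketch (the helper function is purely illustrative), using the figures quoted above:

```python
def megapixels(width, height):
    """Pixel count of an image, expressed in millions of pixels (MP)."""
    return width * height / 1_000_000

print(megapixels(6000, 4000))  # Ricoh GR III image: 24.0 MP
print(megapixels(8640, 5760))  # Sony A1 image: ~49.8 MP, marketed as 50 MP
```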

Fig.2: Image resolution and sensor size for 24MP.

Technically, when talking about the sensor we are talking about photosites, but image resolution is not about the sensor, it is about the image produced from the sensor. This is because it is challenging to compare cameras based on photosites, as they all have differing properties, e.g. photosite area. Once the data from the sensor has been transformed into an image, the photosite data becomes pixels, which are dimensionless entities. Note that the two dimensions representing the image resolution will change depending on the aspect ratio of the sensor. So while a 24MP image from a 3:2 sensor (e.g. APS-C) will have dimensions of 6000×4000, a 4:3 sensor (e.g. Micro Four Thirds) with the same pixel count will produce an image of roughly 5657×4243.
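To see how the aspect ratio changes the dimensions for a fixed pixel count: if r is the ratio of width to height, then height = √(pixels / r) and width = height × r. A short Python sketch (an illustrative helper, not tied to any particular camera), assuming a 24 MP pixel count:

```python
import math

def image_dimensions(pixel_count, ratio):
    """Approximate width and height for a given pixel count and aspect ratio (w/h)."""
    height = math.sqrt(pixel_count / ratio)
    width = height * ratio
    return round(width), round(height)

print(image_dimensions(24_000_000, 3 / 2))  # 3:2 sensor -> (6000, 4000)
print(image_dimensions(24_000_000, 4 / 3))  # 4:3 sensor -> (5657, 4243)
```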

Fig.3: Changes in image resolution within different sensors.

Increasing the image resolution does not increase the linear resolution, or detail, by the same amount. For example, a 16MP image from a 3:2 sensor has a resolution of 4899×3266. A 24MP image from the same type of sensor increases the pixel count by 50%, however the horizontal and vertical dimensions only increase by about 22% – a much smaller change in linear resolution. To double the linear resolution of the 16MP image would require a 64MP image.
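The gap between pixel count and linear resolution can be checked with a few lines of Python; this sketch (the same illustrative helper as above, assuming a 3:2 aspect ratio) compares 16 MP, 24 MP and 64 MP images:

```python
import math

def image_dimensions(pixel_count, ratio=3 / 2):
    """Approximate width and height for a given pixel count and aspect ratio (w/h)."""
    height = math.sqrt(pixel_count / ratio)
    return round(height * ratio), round(height)

w16, _ = image_dimensions(16_000_000)   # (4899, 3266)
w24, _ = image_dimensions(24_000_000)   # (6000, 4000)
w64, _ = image_dimensions(64_000_000)   # (9798, 6532)

print(24 / 16)     # 1.5   -> 50% more pixels
print(w24 / w16)   # ~1.22 -> only ~22% more linear resolution
print(w64 / w16)   # ~2.0  -> 64 MP doubles the linear resolution
```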

Is image resolution the same as sharpness? Not really – sharpness has more to do with an image's spatial resolution (this is where the definition of the word resolution starts to betray itself). Sharpness concerns how clearly defined details within an image appear, and is somewhat subjective. It's possible to have a high resolution image that is not sharp, just as it's possible to have a low resolution image with a good amount of acuity. It really depends on the situation in which it is being viewed, i.e. back to device pixel density.

Megapixels and sensor resolution

A megapixel is 1 million pixels, and when used in terms of digital cameras, represents the maximum number of pixels that can be acquired by a camera's sensor. In reality it conveys a sense of the size of the image produced, i.e. the image resolution. When looking at digital cameras this can be somewhat confusing, because different terms are used to describe resolution.

For example, the Fuji X-H1 has 24.3 megapixels. The maximum image resolution is 6000×4000, or 24MP. This is sometimes known as the number of effective pixels (or photosites), and represents those pixels within the actual image area. However, if we delve deeper into the specifications (e.g. on Digital Camera Database), we find a term called sensor resolution. This is the total number of pixels, or rather photosites¹, on the sensor. For the X-H1 this is 6058×4012 pixels, which is where the 24.3MP comes from. The sensor resolution is calculated from the sensor size and the megapixel count in the following manner (a short code sketch after the list works through the same numbers):

  • Calculate the aspect ratio (r) between the width and height of the sensor. The X-H1 has a sensor size of 23.5mm×15.6mm, so r = 23.5/15.6 ≈ 1.51.
  • Calculate √(no. pixels / r), so √(24,300,000/1.51) ≈ 4012. This is the vertical sensor resolution.
  • Multiply 4012×1.51 ≈ 6058 to determine the horizontal sensor resolution.
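A minimal Python sketch of the same three steps, using the X-H1 numbers quoted above (23.5 mm × 15.6 mm sensor, 24.3 million pixels); the intermediate rounding mirrors the worked example, so a different rounding scheme would give slightly different figures:

```python
import math

def sensor_resolution(sensor_width_mm, sensor_height_mm, total_pixels):
    """Estimate horizontal and vertical sensor resolution (in photosites)
    from the physical sensor size and the quoted pixel count."""
    r = round(sensor_width_mm / sensor_height_mm, 2)   # step 1: 23.5 / 15.6 -> 1.51
    vertical = round(math.sqrt(total_pixels / r))      # step 2: -> 4012
    horizontal = round(vertical * r)                   # step 3: -> 6058
    return horizontal, vertical

h, v = sensor_resolution(23.5, 15.6, 24_300_000)       # Fuji X-H1
print(h, v)                               # 6058 4012
print(h * v)                              # 24304696 total photosites (24.3 MP)
print(h * v - 6000 * 4000)                # 304696 photosites outside the effective image area
print((h * v - 24_000_000) / 24_000_000)  # ~0.0127, i.e. roughly 1%
```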

The Fuji X-H1 is said to have a sensor resolution of 24,304,696 (total) pixels, and a maximum image resolution of 24,000,000 (effective) pixels. This means 304,696 photosites on the sensor are not recorded as image pixels, representing approximately 1% of the total. These remaining photosites form a border around the effective image area on the sensor.

Fig.4: Total versus effective pixels.

So to sum up there are four terms worth knowing:

  • effective pixels/megapixels – the number of pixels/megapixels in an image, or “active” photosites on a sensor.
  • maximum image resolution – another way to describe the effective pixels.
  • total photosites/pixels – the total number of photosites on a sensor.
  • sensor resolution – another way to describe the total photosites on a sensor.

¹ Remember, camera sensors have photosites, not pixels. Camera manufacturers use the term pixels because it is easier for people to understand.