Choosing an APS-C camera: 26MP or 40MP?

The most obvious differentiator when choosing an APS-C camera is usually the number of megapixels. Not that there is much to choose from – it is usually a case of 24/26MP versus 40MP. Does the jump to 40MP really make that much difference? Well, yes and no. To illustrate, we will compare two Fujifilm cameras: (i) the 26MP X-M5 with 6240×4160 photosites, and (ii) the 40MP X-T50 with 7728×5152 photosites.

Firstly, an increase in megapixels just means that more photosites have been crammed onto the sensor, and as a result each photosite has been reduced in size (sensor photosites have physical dimensions, whereas image pixels are dimensionless). The photosite pitch of the X-M5 is 3.76µm, versus 3.03µm for the X-T50 – a 35% reduction in photosite area on the X-T50 relative to the X-M5. This may or may not be important; it is hard to compare photosites directly given the underlying technologies and the number of variables involved.
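To make the arithmetic concrete, here is a minimal Python sketch; the pitch values are the manufacturer figures quoted above, and square photosites are assumed:

```python
# Photosite area from pitch, assuming square photosites (area = pitch squared).
xm5_pitch, xt50_pitch = 3.76, 3.03   # microns (values quoted above)

xm5_area = xm5_pitch ** 2            # ~14.14 µm²
xt50_area = xt50_pitch ** 2          # ~9.18 µm²

print(f"X-M5: {xm5_area:.2f} µm², X-T50: {xt50_area:.2f} µm²")
print(f"area reduction: {(1 - xt50_area / xm5_area) * 100:.0f}%")   # ~35%
```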

Fig.1: Comparing various physical aspects of the 26MP and 40MP APS-C sensors (based on the example Fuji cameras).

Secondly, from an image perspective, the 40MP sensor produces an image with 1.53 times as many aggregate pixels as the 26MP sensor: 39,814,656 pixels versus 25,958,400. But aggregate pixels only describe the total number of pixels in the image. The other thing to consider is the image's linear dimensions, i.e. its width and height. Increasing the number of pixels by 50% does not increase the linear dimensions by 50%: doubling the photosites on a sensor doubles the aggregate pixels, but to double the linear dimensions the photosite count must be quadrupled (so 26MP would need to ramp up to 104MP). Here the linear dimensions increase only 1.24 times, as illustrated in Fig.1.
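The square-root relationship between aggregate pixels and linear dimensions is easy to verify; a small sketch using the photosite counts above:

```python
import math

# Aggregate versus linear growth between the 26MP X-M5 and 40MP X-T50.
xt50 = (7728, 5152)
xm5 = (6240, 4160)

aggregate = (xt50[0] * xt50[1]) / (xm5[0] * xm5[1])   # ~1.53
linear = xt50[0] / xm5[0]                             # ~1.24 (same vertically)

print(f"aggregate: {aggregate:.2f}x, linear: {linear:.2f}x")
print(f"sqrt of aggregate: {math.sqrt(aggregate):.2f}")  # equals the linear factor
```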

So is the 40MP camera better than the 26MP one? It does produce images with slightly more resolution, because there are more photosites on the sensor. But the modest gain in linear dimensions may not warrant the extra cost of going from 26MP to 40MP (US$800 versus US$1400 for the sample Fuji cameras, body only). The 40MP sensor does allow more room to crop, marginally more detail, and the ability to print larger posters. Conversely, the files are larger and take more computational resources to process.

At the end of the day, it’s not about how many megapixels a camera can produce, it’s more about clarity, composition, and of course the subject matter. Higher megapixel counts might be important for professional photographers or for people who focus on landscapes, but as amateurs, most of us should be more concerned with capturing the moment than with going down the rabbit hole of pixel count.


The APS-C dilemma

Should you buy a camera with an APS-C sensor, or a full-frame?

This argument has been going on for years now, and still divides the photographic community. Is APS-C better than full-frame, or is it sub-optimal? Well, I think it’s all about perspective. APS-C, along with Micro-Four-Thirds, is frequently viewed as a mere crop-sensor, a designation that only exists because we perpetuate the falsehood that full-frame is the “standard” sensor size. This stems from the fact that 36×24mm was the standard film size before digital cameras came along; as digital cameras evolved, “full-frame” became the name for the sensor size that matched a 35mm negative.

However, we are at the point where each sensor size should be considered on its own merits (and pitfalls), without the unnecessary inference that it is a mere “stepping-stone” to full-frame. Labelling an APS-C sensor, which measures 23.6×15.7mm, as “just a crop sensor” does not give the camera the kudos it deserves. The problem colours every aspect of how these cameras relate to one another, but manifests itself most clearly in lenses.

The physical differences between APS-C and full-frame: the full-frame Leica SL3 is nearly twice the weight of the APS-C Fujifilm X-T50, and has a much bigger form factor.

Most APS-C sensors have a crop factor of 1.5 (Canon’s is 1.6). This means lenses behave a little differently than they do on full-frame cameras. Now, a 50mm lens is always 50mm, regardless of the system it is attached to − what matters is how that 50mm relates to the sensor. A 50mm lens is a “normal” lens on a full-frame camera, while in APS-C land a normal lens has a focal length of around 33-35mm. A 50mm lens on an APS-C sensor captures a smaller portion of the scene than on full-frame, simply because the sensor is smaller. So a 50mm on APS-C frames the scene like a 75mm lens on a full-frame camera.
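For framing purposes this is just a multiplication by the crop factor; a quick sketch (the 1.5 factor assumed here applies to most non-Canon APS-C systems):

```python
# Full-frame equivalent framing for an APS-C lens (crop factor 1.5;
# Canon APS-C would use 1.6 instead).
def ff_equivalent(focal_mm: float, crop: float = 1.5) -> float:
    return focal_mm * crop

print(ff_equivalent(50))   # 75.0 -> a 50mm on APS-C frames like a 75mm on FF
print(50 / 1.5)            # ~33.3 -> the APS-C "normal" focal length
```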

Some basic visual comparisons of APS-C versus full-frame

There are obviously things that full-frame sensors do better, and things that APS-C cameras do better. Image size is the first, which is purely the result of full-frame cameras having more photosites on their sensors. With the evolution of pixel-shifting technology this may become a moot point, as super-resolution images are already available on some systems. Full-frame cameras also tend to have better dynamic range and low-light performance. This is because photosites are often bigger on full-frame cameras, so they can collect more light and better differentiate between light intensities, which means they work better in low light and introduce less noise. But digital cameras rely on software to turn photosite data into image pixels, so as software improves, so too will things like noise-suppression algorithms on APS-C.

How lenses function on APS-C and full-frame cameras

But not every full-frame camera has larger photosites. For example, the APS-C Fuji X-H1 with its 24MP sensor has 6000×4000 photosites at a pitch of 3.88μm. The full-frame Sony a7CR has a 61MP sensor (9504×6336) with a pitch of 3.73μm – actually smaller than that of the APS-C sensor. So more pixels, but perhaps a low-light performance that isn’t much better. And what is anyone going to do with 61MP images? Post them on the web? I think not.

| feature | APS-C | full-frame |
|---|---|---|
| low-light performance | good | excellent |
| depth of field | deeper | shallower |
| lens availability | large selection | good selection, fewer third-party lenses |
| lens cost | affordable | more expensive |
| portability | light, easy to carry | heavy, bulky |
| dynamic range | slightly reduced | wider |
| applications | street photography, sport, wildlife, travel | low-light, studio, landscapes, portraits |
| camera body cost | typically affordable | usually expensive |
| wide-angle lenses | 18-23mm | 28-35mm |
| normal lenses | 26-38mm | 40-58mm |
A comparison of some of the characteristics of APS-C versus full-frame

Full-frame cameras, just like medium-format cameras, are for people who need what they provide – high resolution, low-light ability, and so on. Many people equate a full-frame camera with high quality because of its sensor size, but quality isn’t necessarily tied to high-resolution images. Yes, more data captured by a camera means more detail in an image, but that doesn’t automatically mean that APS-C sensors (or even MFT) are inferior.

Most non-professional photographers don’t need huge image sizes, just like they don’t need a Leica. APS-C cameras are considerably lighter and more compact than their full-frame brethren. APS-C lenses are also cheaper to buy, because they are easier to build and require less glass. There is likely also a broader ecosystem of third-party lenses for non-full-frame cameras, as they are cheaper to manufacture. Over time, as newer sensors evolve, APS-C may be well positioned to take a more prominent role in the camera world.


Do some sensors have too many photosites?

For years we have seen the gradual creep of increased photosite counts on sensors (images have pixels, sensors have photosites – pixels don’t really have a dimension, whereas photosites do). The question is, how many photosites is too many, within the physical constraints of a sensor? Whatever the type of sensor, they have all become more congested – Micro-Four-Thirds has crept up to 25MP (Panasonic DC-GH6), APS-C to 40MP (Fuji X-T5), and full-frame to 60MP (Sony a7R V).

Manufacturers have been cramming more photosites into their sensors for years now, while the sensors themselves haven’t grown any larger. When the first Four Thirds (FT) camera, the Olympus E-1, appeared in 2003 it had 2560×1920 photosites (5MP). The latest rendition of the FT sensor, in the 2023 Panasonic Lumix DC-G9 II, has 5776×4336 photosites (25MP) on the same sized sensor. What this means, of course, is that photosites get smaller. Here the photosite pitch has shrunk from 6.89μm to 3μm, which doesn’t seem terrible until you calculate the area of a photosite: 47.47μm² down to 9μm², which is quite a disparity (pitch is not really the best basis for comparing photosites; area is better, because it indicates light-gathering surface). Yes, it’s five times more photosites, but each photosite is only about 19% the area of the original.
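A short sketch of that arithmetic (photosite counts and pitches as quoted above; square photosites assumed):

```python
# Four Thirds, 2003 versus 2023: more photosites, much smaller photosites.
e1 = (2560, 1920, 6.89)        # Olympus E-1: width, height, pitch (µm)
g9ii = (5776, 4336, 3.0)       # Panasonic DC-G9 II

count_ratio = (g9ii[0] * g9ii[1]) / (e1[0] * e1[1])   # ~5.1x the photosites
area_ratio = g9ii[2] ** 2 / e1[2] ** 2                # ~0.19

print(f"{count_ratio:.1f}x the photosites")
print(f"each photosite only {area_ratio * 100:.0f}% of the original area")
```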

Are smaller photosites a good thing? Many would argue that it doesn’t matter, but at some point there will be diminishing returns. Part of the problem is the notion that more pixels in an image means better quality. But image quality is an amalgam of many things beyond sensor and photosite size, including the type of sensor, the file type (JPEG vs. RAW), the photographer’s knowledge, and above all the quality of the lens. Regardless of how many megapixels there are in an image, if a lens is of poor optical quality it will nearly always manifest in a lower-quality image.

The difference in size between a 24MP and 40MP APS-C sensor. The 40MP photosite (9.12μm²) is 60% the size of the 24MP photosite (15.21μm²).

However, when something is reduced in size there are always potential side-effects. Smaller photosites might be more susceptible to things like noise, because despite algorithmic means of noise suppression it is impossible to eliminate it completely. Larger photosites also collect more light, and as a result are better at averaging out errant information. Given two different sized sensors with the same number of photosites, the larger sensor will arguably deliver better image quality. The question is whether photosites are simply getting too small on some of these sensors. When will MFT or APS-C reach the point where adding more photosites is counterproductive?

Some manufacturers like Fuji have circumvented this issue by introducing larger-sensor medium format cameras like the GFX 50S II (44×33mm, 51MP), which has a photosite pitch of 5.3µm – more resolution, but not at the expense of photosite size. Larger sensors typically have larger photosites, resulting in more light being captured and better dynamic range. These cameras and their lenses are obviously more expensive, but they are designed for people who need high-resolution images. The reality is that the average photographer doesn’t need sensors with more photosites – the images produced are just too large and unwieldy for most applications.

The reality is that cramming more photosites into any of these sensors does not really make much sense. It is possible that the pixel increase is just a smokescreen for the fact that there is little else in the way of camera/sensor innovation. There are layered sensors like the Foveon X3, but their development has been slow and they have seen little use beyond Sigma’s own cameras (they haven’t really taken off, probably due in part to cost). Other stacked CMOS sensors are in development, but progress there is slow too. So to keep people buying cameras, companies cram in more photosites, i.e. more megapixels. Other things haven’t changed much either – aperture is aperture, right? Autofocus algorithms haven’t taken a major step forward, and usability hasn’t advanced much (except perhaps in catering to video shooters). Let’s face it, the race for megapixels is over. Like really over. Yet every new generation of cameras seems to nudge the number up slightly.

The term “crop-sensor” has become a bit nonsensical

The term “crop-sensor” doesn’t make much sense anymore, if it ever did. I understand why it evolved: a term was needed to collectively describe smaller-than-35mm sensors (crop means to clip or prune, i.e. make smaller). That is, if it’s not 36×24mm in size, it’s a crop-sensor. Yet the term is also sometimes applied to medium-format sensors, even though they are larger than 36×24mm. Non-35mm sensors do produce an image that is “cropped” in comparison with a full-frame sensor, but taken in isolation they are sensors unto themselves.

The problem lies with the notion that 36×24mm constitutes a “full frame”, which only exists as such because manufacturers decided to carry the concept over from 35mm film SLRs. It is true that 35mm was the core film standard for decades, but that was largely a constraint of the film stock itself. Even half-frame cameras (18×24mm, basically APS-C size) used the same 35mm film. In the digital realm there is no physical medium imposing a size, yet we are still wholly focused on 36×24mm.

Remember, there were sub-full-frame sensors before the first true 36×24mm sensor appeared. Full-frame evolved in part because it made it easier to transition film-era lenses to digital. In the early days there were likely advantages to full-frame over its smaller brethren, but two decades later we live in a different world. “Crop” sensors should no longer be treated as sub-par players in the camera world, yet it is this full-frame mantra that sees people ignore the benefits of smaller sensors. Yes, there are benefits to full-frame sensors, but there are also inherent drawbacks. It is the same with the concept of equivalency: we say a 33mm APS-C lens is “equivalent” to a 50mm full-frame lens. But why? Because some people started the trend of relating everything back to what is essentially a 35mm film format. Does there even need to be a connection between different sensors?

What about some sensor equality? (Comparing APS-C, Micro-Four-Thirds, and full-frame cameras.)

The reason “crop” sensors have continued to evolve is that they are much cheaper to produce, and being smaller, the cameras themselves have a reduced footprint. Lenses also require less glass, making them lighter and less expensive to manufacture. Maybe instead of saying “crop-sensor”, we should just acknowledge the sensors exactly as they are – Medium, APS-C, and MFT – and rename full-frame to “35mm” format. So when someone talks about a 35mm sensor, they are effectively talking about full-frame. All it takes is a little education.

Old versus smarter advertising, which puts the emphasis on the angle-of-view. In this case a Fuji APS-C lens – rather than focusing on 16mm, it focuses instead on the horizontal AOV, i.e. 74 degrees. It could also designate that the lens is a wide-angle lens.

Using the term crop-sensor also does more harm than good, because it spawns further terms – equivalency and crop-factor – which are used in the context of focal length, AOV, and even ISO. People get easily confused, and then think that a lens with a focal length of 50mm on an APS-C camera is not the same as one on a full-frame camera. Focal lengths don’t change: a lens that is 50mm is always 50mm. What changes is the angle-of-view (AOV). A larger sensor gives a wider AOV, whereas a smaller sensor gives a narrower AOV. So while the 50mm lens on the full-frame camera has a horizontal AOV of 39.6°, the same lens on an APS-C camera sees only about 27°.
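Those AOV figures come straight from basic trigonometry; a small sketch assuming a simple rectilinear lens model and the sensor widths used in this article (36mm full-frame, 23.6mm APS-C):

```python
import math

# Horizontal angle-of-view for a rectilinear lens:
# AOV = 2 * atan(sensor_width / (2 * focal_length))
def horizontal_aov(sensor_width_mm: float, focal_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(f"{horizontal_aov(36.0, 50):.1f}°")   # ~39.6° on full-frame
print(f"{horizontal_aov(23.6, 50):.1f}°")   # ~26.6° (the ~27° quoted above)
```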

It would be easier not to have to talk about one sensor in terms of another. But even though terms like “crop-sensor” and “crop-factor” are nonsensical, in all likelihood the industry won’t change the way it perceives non-35mm sensors anytime soon. I have previously described how we could retire the term crop-factor as it relates to lenses, by identifying lenses based on their AOV rather than purely by their focal length. This works because nearly all lenses are designed for a particular sensor, i.e. you’re not going to buy an MFT lens for an APS-C camera.

What is a mirrorless camera?

It is a camera without a mirror of course!
Next you’ll ask why a camera would ever need a mirror.

Over the last few years we have seen increased use of the term “mirrorless” to describe cameras. But what does that mean? Well, 35mm SLR (Single Lens Reflex) film cameras all contained a reflex mirror. The mirror redirects the light coming through the lens, by means of a pentaprism, to the optical viewfinder (OVF), where it is viewed by the photographer. Without it, the photographer would have to view the scene through an offset window (as in a rangefinder camera, which was technically mirrorless). This means the photographer sees what the lens sees. When the photographer presses the shutter-release button, the mirror swings out of the way, temporarily blocking the light path to the viewfinder and instead allowing the light to pass through the opened shutter onto the film. This is depicted visually in Fig.1.

Fig.1: A cross-section of a 35mm SLR camera showing the mirror and optical viewfinder (OVF)

When DSLR (Digital Single Lens Reflex) cameras appeared, they used similar technology. The problem is that the mirror, together with the digital electronics, meant the cameras became larger than traditional film SLRs. The concept of the mirrorless camera appeared in 2008 with the introduction of the Micro-Four-Thirds system; the first mirrorless interchangeable-lens camera was the Panasonic Lumix DMC-G1. It replaced the optical path of the OVF with an electronic viewfinder (EVF), making it possible to remove the mirror completely and hence reduce the size of the camera. The EVF shows the image that the sensor outputs, displayed on a small LCD or OLED screen.

Fig.2: DSLR versus a mirrorless camera. In the DSLR, the light path to the OVF by means of the mirror is shown in blue. When the shutter-release button is pressed, the mirror retracts (pink mirror), and the light is allowed to pass through to the sensor (pink path).

As a result of nixing the mirror, mirrorless cameras typically have fewer moving parts and are slimmer than DSLRs, shortening the distance between the lens and the sensor. The loss of the mirror also makes it easier to adapt vintage lenses for use on digital cameras. Some people still prefer an OVF, because it is optical and does not drain the battery the way an EVF does.

These days the only cameras still containing mirrors are usually full-frame DSLRs, and they are slowly disappearing, replaced by mirrorless cameras; basically all recent crop-sensor cameras are mirrorless. DSLR sales continue to decline. Looking only at interchangeable-lens cameras (ILC), according to CIPA, mirrorless cameras in 2022 made up 68.7% of all ILC units shipped (4.07M versus 1.85M DSLRs, of 5.927 million total) and 85.8% of shipped value.

Upgrading camera sensors – the megapixel phenomena

So if you are planning to purchase a new camera with “upgraded megapixels”, what makes the most sense? In many cases people will tend to stay with the same brand or sensor. This makes sense from the perspective of existing equipment such as lenses, but sometimes an increase in resolution requires moving to a new sensor. There are of course many things to consider, but the primary ones when it comes to the images produced are aggregate MP and linear dimensions (we will consider image pixels rather than sensor photosites). Aggregate MP is the total number of pixels in an image, whereas linear dimensions relate to its width and height. Doubling the megapixels doubles the aggregate pixels in an image, but does not double the image’s linear dimensions – to double the linear dimensions, the megapixels need to be quadrupled. So 24MP needs to ramp up to 96MP in order to double the linear dimensions.

Table 1 shows some sample multiplication factors for aggregate megapixels and linear dimensions when upgrading, ignoring sensor size. The table offers a sense of what is gained, with the standard MP sizes offered by various manufacturers shown in Table 2.

| | 16MP | 24MP | 30MP | 40MP | 48MP | 60MP |
|---|---|---|---|---|---|---|
| 16MP | | 1.5 (1.2) | 1.9 (1.4) | 2.5 (1.6) | 3.0 (1.7) | 3.75 (1.9) |
| 24MP | | | 1.25 (1.1) | 1.7 (1.3) | 2.0 (1.4) | 2.5 (1.6) |
| 30MP | | | | 1.3 (1.2) | 1.6 (1.3) | 2.0 (1.4) |
| 40MP | | | | | 1.2 (1.1) | 1.5 (1.2) |
| 48MP | | | | | | 1.25 (1.1) |
Table 1: Changes in aggregate megapixels, and (linear dimensions) shown as multiplication factors.
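Table 1 can be regenerated with a few lines of code – the aggregate factor is just the ratio of megapixel counts, and the linear factor its square root:

```python
from math import sqrt

# Regenerate Table 1: aggregate factor (linear factor) between MP counts.
sizes = [16, 24, 30, 40, 48, 60]

for a in sizes:
    cells = [f"{b / a:.2f} ({sqrt(b / a):.1f})" for b in sizes if b > a]
    if cells:
        print(f"{a}MP -> " + ", ".join(cells))
```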

Same sensor, more pixels

First, consider a different number of megapixels on the same size sensor – this example compares two Fuji cameras, both of which use an APS-C sensor (23.6×15.8mm).

Fuji X-H2 − 40MP, 7728×5152
Fuji X-H2S − 26MP, 6240×4160

So there are 1.53 times as many pixels in the 40MP sensor; however from the perspective of linear resolution (comparing dimensions) there is only a 1.24 times differential. This means that horizontally (and vertically) there are only about a quarter more pixels in the 40MP image versus the 26MP one. But because both counts sit on the same size sensor, the only thing that really changes is the size of the photosites (the pitch). Cramming more photosites onto a sensor means the photosites get smaller: the pitch drops from 3.78µm (microns) in the X-H2S to 3.05µm in the X-H2. Not an incredible difference, but one that may affect things such as low-light performance (if you care about that sort of thing).
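The pitch itself falls out of the sensor width divided by the horizontal photosite count; a quick sketch assuming the 23.6mm sensor width quoted earlier:

```python
# Photosite pitch = sensor width / horizontal photosite count.
def pitch_um(sensor_width_mm: float, photosites_across: int) -> float:
    return sensor_width_mm * 1000 / photosites_across

print(f"{pitch_um(23.6, 6240):.2f} µm")   # ~3.78 µm (X-H2S, 26MP)
print(f"{pitch_um(23.6, 7728):.2f} µm")   # ~3.05 µm (X-H2, 40MP)
```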

A visualization of differing sensor size changes

Larger sensor, same pixels

Then there is the issue of upgrading to a larger sensor. If we were to upgrade from an APS-C sensor to an FF sensor, then we typically get more photosites on the sensor. But not always. For example consider the following upgrade from a Fuji X-H2 to a Leica M10-R:

FF: Leica M10-R (41MP, 7864×5200)
APS-C: Fuji X-H2 (40MP, 7728×5152)

So there is very little difference from the perspective of either image resolution or linear dimensions. The big difference here is the photosite pitch: the Leica has a pitch of 4.59µm, versus 3.05µm for the Fuji. In terms of photosite area this means 21µm² versus 9.3µm², i.e. 2.25 times the light-gathering area on the full-frame sensor. How much difference this makes to the end picture is uncertain, due to the multiplicity of factors involved and the computational post-processing each camera provides. But it is something to consider.

Larger sensor, more pixels

Finally, there is upgrading to more pixels on a larger sensor. If we were to upgrade from an APS-C sensor (Fuji X-H2S) to an FF sensor (Sony a7R V) with more pixels:

FF: Sony a7R V (61MP, 9504×6336)
APS-C: Fuji X-H2S (26MP, 6240×4160)

Like the first example, there are 2.3 times as many pixels in the 61MP sensor; however from the perspective of linear resolution there is only a 1.52 times differential. The catch here is that the photosite pitch can remain essentially the same: the pitch on the Fuji sensor is 3.78µm, versus 3.73µm on the Sony.

| brand | MFT | APS-C | Full-frame | Medium |
|---|---|---|---|---|
| Canon | | 24, 33 | 24, 45 | |
| Fuji | | 16, 24, 26, 40 | | 51, 102 |
| Leica | 17 | 16, 24 | 24, 41, 47, 60 | |
| Nikon | | 21, 24 | 24, 25, 46 | |
| OM/Olympus | 16, 20 | | | |
| Panasonic | 20, 25 | | 24, 47 | |
| Sony | | 24, 26 | 33, 42, 50, 60, 61 | |
Table 2: General megapixel sizes for the core brands

Upgrading a camera is not a trivial thing, but one of the main reasons people do so is more megapixels. Of all the brands listed above, only one, Fuji, has taken the next step and introduced a medium format camera (apart from the dedicated medium format manufacturers, e.g. Hasselblad), allowing for increased sensor size and increased pixels, but not at the expense of photosite size. The Fujifilm GFX 100S has a medium format sensor, 44×33mm in size, providing 102MP with 3.76µm photosites. This means it provides approximately double the linear dimensions of a Fuji 24MP APS-C camera (and yes, it costs almost three times as much, but there’s no such thing as a free lunch).

At the end of the day, you have to justify to yourself why more pixels are needed. They are only part of the equation in the acquisition of good images, and small upgrades like 24MP to 40MP may not actually provide much of a payback.

What is a micron?

The nitty-gritty of digital camera sensors takes things down to the micron. For example the width of photosites on a sensor is measured in microns, more commonly represented using the unit µm, e.g. 3.71µm. But what is a micron?

Various sensor photosite sizes compared to spider silk (size is exaggerated of course).

Basically a micron is a micrometre, a metric unit of length equal to 0.001mm (or 1/1000 of a millimetre). It’s small, like really small. To put it into perspective, table salt has a particle size of around 120µm, human hair an average diameter of 70µm, milled flour anywhere in the range 25-200µm, and spider silk a measly 3µm.

To put it another way, for a photosite that is 3.88µm in size, we could fit 257 of them in just 1mm of space.
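That figure is simple to check:

```python
# How many 3.88µm photosites fit across 1mm (1000µm)?
print(int(1000 / 3.88))   # 257
```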

What is a photosite?

When people talk about cameras, they invariably talk about pixels, or rather megapixels. The new Fujifilm X-S20 has 26 megapixels, which means that the image produced by the camera will contain 26 million pixels. But the sensor itself does not have any pixels; the sensor has photosites.

The job of a photosite is to capture photons of light. After a good deal of processing, the data captured by each photosite is converted into a digital signal and processed into a pixel. Not all the photosites on a sensor contribute to the resultant image, so there are two numbers used to describe a sensor. The first is the physical sensor resolution, the actual number of photosites found on the sensor; for example on the Sony a7R V (shown below) there are 9566×6377 physical photosites (61MP). However, not all of these are used to create an image – the ones that are form the maximum image resolution, i.e. the maximum number of pixels in an image. For the Sony a7R V this is 9504×6336 photosites (60.2MP), sometimes known as the effective number of photosites.
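A quick check of those two numbers:

```python
# Physical versus effective resolution of the Sony a7R V.
physical = 9566 * 6377    # photosites on the sensor
effective = 9504 * 6336   # photosites used to form the image

print(f"physical: {physical / 1e6:.1f}MP")    # 61.0MP
print(f"effective: {effective / 1e6:.1f}MP")  # 60.2MP
```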

The Sony a7R V

There are two major differences between photosites and pixels. Firstly, photosites are physical entities; pixels are digital entities. Secondly, while photosites have a size, which differs with the sensor type and the number of photosites on the sensor, pixels are dimensionless. For example, each photosite on the Sony a7R V has a pitch (width) of 3.73µm and an area of 13.91µm².