Do some sensors have too many photosites?

For years we have seen the gradual creep of photosite counts on sensors (images have pixels, sensors have photosites – pixels don't really have a physical dimension, whereas photosites do). The question is, how many photosites is too many (within the physical constraints of a sensor)? Regardless of the type of sensor, they have all become more congested – Micro-Four-Thirds has crept up to 25MP (Panasonic DC-GH6), APS-C to 40MP (Fuji X-T5), and full-frame to 60MP (Sony A7R-V).

Manufacturers have been cramming more photosites into their sensors for years now, while the sensors themselves haven't grown any larger. When the first Four Thirds (FT) sensor camera, the Olympus E-1, appeared in 2003 it had 2560×1920 photosites (5MP). The latest rendition of the FT sensor, in the 2023 Panasonic Lumix DC-G9 II, has 5776×4336 photosites (25MP) on the same sized sensor. What this means, of course, is that photosites get smaller. For example the photosite pitch has shrunk from 6.89μm to 3μm, which doesn't seem terrible until you calculate the area of a photosite: 47.47μm² versus 9μm², which is quite a disparity (pitch is not really the best indicator when comparing photosites; area is better, because it indicates light-gathering area). Yes, it's five times as many photosites, but each photosite is only about 19% the area of the original.
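The arithmetic is simple enough to sketch in a few lines of Python (a rough illustration only; the computed pitch differs slightly from the quoted 6.89μm figure, since published numbers use slightly different effective sensor dimensions):

```python
# Photosite pitch and area from sensor width and horizontal photosite
# count. Values are approximate; 17.3mm is the nominal FT sensor width.

def photosite_pitch_um(sensor_width_mm: float, photosites_across: int) -> float:
    """Pitch = sensor width divided by the number of photosites across it."""
    return sensor_width_mm * 1000 / photosites_across

for label, across in [("Olympus E-1 (5MP)", 2560), ("Lumix DC-G9 II (25MP)", 5776)]:
    pitch = photosite_pitch_um(17.3, across)
    area = pitch ** 2  # assumes square photosites
    print(f"{label}: pitch = {pitch:.2f}um, area = {area:.2f}um^2")
```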

Are smaller photosites a good thing? Many would argue that it doesn't matter, but at some point there will be diminishing returns. Part of the problem is the notion that more pixels in an image means better quality. But image quality is an amalgam of many different things beyond sensor and photosite size, including the type of sensor, the file type (JPEG vs. RAW), the photographer's knowledge, and above all the quality of the lens. Regardless of how many megapixels there are in an image, if a lens is of poor optical quality, it will nearly always manifest in a lower-quality image.

The difference in size between a 24MP and 40MP APS-C sensor. The 40MP photosite (9.12μm²) is 60% the size of the 24MP photosite (15.21μm²).

However, when something is reduced in size, there are always potential side-effects. Smaller photosites are more susceptible to noise, because despite algorithmic means of noise suppression, it is impossible to eliminate it completely. Larger photosites also collect more light, and as a result are better at averaging out errant information. If you have two different sized sensors with the same number of photosites, the larger sensor will arguably deliver better image quality. The question is whether photosites are simply getting too small on some of these sensors. When will MFT or APS-C reach the point where adding more photosites is counterproductive?

Some manufacturers like Fuji have circumvented this issue by introducing larger sensor medium format cameras like the GFX 50S II (44×33mm, 51MP), which has a photosite pitch of 5.3µm – more resolution, but not at the expense of photosite size. Larger sensors typically have larger photosites, resulting in more light being captured and better dynamic range. These cameras and their lenses are obviously more expensive, but they are designed for people who need high resolution images. The reality is that the average photographer doesn't need sensors with more photosites – the images produced are just too large and unwieldy for most applications.

The reality is that cramming more photosites into any of these sensors does not really make any sense. It is possible that the pixel increase is just a smokescreen for the fact that there is little else in the way of camera/sensor innovation. There are stacked designs like the Foveon X3 (with its layered photodiodes), but development has been slow – it has seen little use beyond Sigma's own cameras (it hasn't really taken off, probably due in part to cost). Other stacked CMOS sensors are in development, but again progress is slow. So to keep people buying cameras, companies need to cram in more photosites, i.e. more megapixels. Other things haven't changed much either – aperture is aperture, right? Autofocus algorithms haven't taken a major step forward, and usability hasn't improved much (except perhaps in catering to video shooters). Let's face it, the race for megapixels is over. Like really over. Yet every new generation of cameras seems to increase the number slightly.

Vintage digital – the Olympus E-1

The Olympus E-1 was introduced in 2003, the first interchangeable lens camera designed from the ground up to be digital. It marked the beginning of what would become the "E-System", built around the 4/3″, or "Four Thirds", sensor. The camera contained a 5-megapixel CCD sensor from Kodak. The 4/3″ sensor had a size of 17.3mm×13.0mm, akin to that of 110 film, with an aspect ratio of 4:3, which breaks from the traditional 35mm 3:2 format.

The E-1 had a magnesium-alloy body, which was solid, dense, and built like the proverbial tank. The camera was also weather-sealed, and offered a feature many thought was revolutionary – a "Supersonic Wave Filter" to clean dust off the imaging sensor. From a digital perspective, Olympus designed a lens mount that was wide in relation to the sensor or image-circle diagonal. This enabled lenses to be designed so that they minimized the angle of light-ray incidence into the corners of the frame. The lens system was likewise designed from scratch. By contrast, Canon, Konica-Minolta, Nikon and Pentax simply took their film SLR mounts and installed smaller sensors in bodies based on their film models.

The tank in the guise of a camera

The E-1, with its sensor smaller than the APS-C sensors already available, had both pros and cons. A smaller sensor meant lenses could be both physically smaller and lighter. A 50mm lens would be about the same size as other 50mm lenses, but with the 2× crop factor it would have the field of view of a 100mm lens. Four Thirds was an incredibly good system for telephoto work, because the lenses were half the size and weight of their full-frame counterparts.

Although quite an innovative camera, it never really seemed to take off in a professional sense. It didn't have the continuous shooting rate or the autofocus speed needed for genres like sports photography. It also fell short on the megapixel side of things, as the Canon EOS-1Ds, with its full-frame 11-megapixel sensor, had already appeared in 2002. A year later in 2004, the Olympus E-300 had already surpassed the E-1's 5MP with 8MP, making the E-1 somewhat obsolete from a resolution viewpoint. The E-1's photosite pitch was also smaller than that of most of its APS-C rivals sporting 6MP sensors.


Vintage lenses – was there a Biotar 70mm f/1.4?

On the heels of the Biotar 75mm f/1.5, I came across a listing for a Biotar 70mm f/1.4 in a Leica L39 mount. Was this a real lens? The serial number of the lens is 2620709, and it was selling for ca. C$78K. The serial number suggests it was produced in 1939.

This is a strange lens because there is very little information regarding its provenance. CollectiBlend suggests only 116 lenses were produced between 1929 and 1939. Most seem to have been adapted to mounts such as M42. The early 1930s Zeiss catalogs do specify a 70mm f/1.4 lens, however it is for cinematographic work, specifically recommended for the 40×35mm format. By the late 1930s it was also being advertised for miniature, i.e. 35mm, cameras. It was, however, not advertised for use with either Contax 35mm camera offered by Zeiss-Ikon, which advertised an 85mm as a portrait lens.

Early brochure information on the Biotar f/1.4

According to the catalogs of the period, there was a series of f/1.4 Biotars in 2cm, 2.5cm, 4cm, and 5cm focal lengths, in addition to the 7cm (70mm). The Biotar initially played virtually no role at all for still image cameras. In fact one of the most numerous Biotars produced at that time was the Biotar 2.5cm f/1.4, which went into production in 1928. By the end of WW2, just over 1,300 units had been manufactured, most of which were delivered to Bell & Howell or Kodak, but also to Siemens, among others. The Biotar 4cm f/1.4 was created as a medium focal length for the 18×24mm standard film format – the format used in 35mm cine cameras.

Dating based on serial numbers: 2620709 (1939) and 950044 (1929-30)

Now I have seen three different versions of this lens, none of which really meshes with the descriptions found in early catalogs – there the lenses are cited with a mount diameter of 60mm, and come in either an "N" mount (for cameras with bellows extensions) or an "A" mount (for folding and other hand cameras). The few lenses available today are re-housed optics, i.e. the lens has been adapted at some point to fit mounts like the Leica mount. Some of these lenses were made for "miniature" cameras, so some may actually have native LTM mounts.


Colour (photography) is all about the light

Photography in the 21st century is interesting because of all the fuss made about megapixels and sharp glass. But none of the tools of photography matter unless you have an innate understanding of light. For it is light that makes a picture. Without light, the camera is blind, capable of producing only dark, unrecognizable images. Sure, artificial light could be used, but photography is mostly about natural light. It is light that provides colour, helps interpret contrast, determines brightness and darkness, and also tone, mood, and atmosphere. Yet in our everyday lives, light is often taken for granted.

One of the most important facets of light is colour. Colour begins and ends with light; without light, i.e. in darkness, there is no colour. Light belongs to a large family of "waves" that starts with wavelengths of several thousand kilometres, includes the likes of radio waves, heat radiation, infrared and ultraviolet waves, and X-rays, and ends with the gamma radiation of radium and cosmic rays, whose wavelengths are so short that they have to be measured in fractions of a millionth of a millimetre. Visible light is of course the part of the spectrum to which the human eye is sensitive, ca. 400-700nm. For example, the wavelengths representing the colour green lie in the range 500-570nm.

The visible light spectrum

It is this visible light that builds the colour picture in our minds, or indeed that which we take with a camera. An object will be perceived as a certain colour because it absorbs some colours (or wavelengths) and reflects others. The colours that are reflected are the ones we see. For example the dandelion in the image below looks yellow because the yellow petals in the flower have absorbed all wavelengths of colour except yellow, which is the only colour reflected. If only pure red light were shone onto the dandelion, it would appear black, because the red would be absorbed and there would be no yellow light to be reflected. Remember, light is simply a wave with a specific wavelength or a mixture of wavelengths; it has no colour in and of itself. So technically, there is really no such thing as yellow light, rather, there is light with a wavelength of about 590nm that appears yellow. Similarly, the grass in the image reflects green light.
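As a minimal sketch of this idea, here is a lookup from wavelength (in nm) to the colour name it is conventionally perceived as – the band boundaries are approximate conventions, not physical constants:

```python
# Approximate wavelength bands (nm) and their conventional colour names.
BANDS = [
    (380, 450, "violet"),
    (450, 500, "blue"),
    (500, 570, "green"),   # the green range quoted in the text
    (570, 591, "yellow"),  # light of ~590nm appears yellow
    (591, 620, "orange"),
    (620, 700, "red"),
]

def perceived_colour(wavelength_nm: float) -> str:
    for lo, hi, name in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible spectrum"

print(perceived_colour(550))  # green
print(perceived_colour(590))  # yellow, per the dandelion example
```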

The colours we see are reflected wavelengths that are interpreted by our visual system.

The colour we interpret will also differ based on the time of day, lighting, and many other factors. Another thing to consider with light is its colour temperature. Colour temperature uses numerical values in kelvin (K) to measure the colour characteristics of a light source, on a spectrum ranging from warm (orange) colours to cool (blue) colours. For example, natural daylight has a temperature of about 5000K, whereas sunrise/sunset can be around 3200K. Light bulbs, on the other hand, can range anywhere from 2700K to 6500K. A light source that is 2700K is considered "warm" and generally emits more red wavelengths, whereas a 6500K light is said to be "cool white" since it emits more blue wavelengths.
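The warm/cool convention is easy to capture in a few lines (a sketch only; the thresholds below are illustrative, not standardized):

```python
# Classify a light source by colour temperature; the cut-offs are
# illustrative conventions, not standards.

def describe_light(kelvin: int) -> str:
    if kelvin < 3500:
        return "warm (more red/orange wavelengths)"
    if kelvin < 5500:
        return "neutral (close to daylight)"
    return "cool (more blue wavelengths)"

for k in (2700, 3200, 5000, 6500):
    print(f"{k}K -> {describe_light(k)}")
```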

We see many colours as one, building up a picture.

Q: How many colours exist in the visible spectrum?
A: Technically, none. This is because the visible spectrum is light, with a wavelength (or frequency), not colour per se. Colour is a subjective, conscious experience which exists in our minds. Of course there might be an infinite number of wavelengths of light, but humans are limited in the number they can interpret.

Q: Why is the visible spectrum described in terms of 7 colours?
A: We tend to break the visible spectrum down into seven colours: red, orange, yellow, green, blue, indigo, and violet. Passing a ray of white light through a glass prism splits it into seven constituent colours, but these are somewhat arbitrary, as light comes as a continuum with smooth transitions between colours (it was Isaac Newton who first divided the spectrum into 6, then 7, named colours). There are now several different interpretations of how spectral colours are categorized. Some modern ones have dropped indigo, or have replaced it with cyan.

Q: How is reflected light interpreted as colour?
A: Reflected light is interpreted by camera sensors, film, and the human eye alike by filtering the light into the three primary colours: red, green, and blue (see: The basics of colour perception).

More myths about travel photography

Below are some more myths associated with travel.

MYTH 13: Landscape photographs need good light.

REALITY: In reality there is no such thing as bad light, or bad weather, unless it is pouring. You can never guarantee what the weather will be like anywhere, and if you are travelling to places like Scotland, Iceland, or Norway the weather can change at the flip of a coin. There can be a lot of drizzle, or fog. You have to learn to make the most of the situation, exploiting any kind of light.

MYTH 14: Manual exposure produces the best images.

REALITY: Many photographers use aperture-priority, or the oft-lauded P-mode. If you think something will be over- or under-exposed, then use exposure-bracketing. Modern cameras have a lot of technology to deal with taking optimal photographs, so don’t feel bad about using it.

MYTH 15: The fancy camera features are cool.

REALITY: No, they aren't. Sure, try the built-in filters; they may be fun for a bit, but filter effects can always be added later in post-processing (or when posting to Instagram). High-resolution mode is somewhat fun to play with, but it will eat battery life.

MYTH 16: One camera is enough.

REALITY: I never travel with fewer than two cameras: a primary, and a secondary, smaller camera that fits easily inside a jacket pocket (in my case a Ricoh GR III). There are risks when you are somewhere on vacation and your main camera stops working for some reason. A backup is always great to have, whether for breakdowns, dead batteries, or just for shooting in places where you don't want to drag a bigger camera along, or would prefer a more inconspicuous photographic experience, e.g. museums, art galleries.

MYTH 17: More megapixels are better.

REALITY: Optimally, anything from 16-26 megapixels is good. You don't need 50MP unless you are going to print large posters, and 12MP is likely not enough these days.

MYTH 18: Shooting in RAW is the best.

REALITY: Probably, but here's the thing: as an amateur, do you want to spend a lot of time post-processing photos? Maybe not. Setting the camera to JPEG+RAW gives the best of both worlds. Bear in mind that JPEG editing is destructive, while RAW editing is not.

MYTH 19: Backpacks offer the best way of carrying equipment.

REALITY: This may be true for getting equipment from A to B, but schlepping a backpack loaded with equipment around every day during the summer can be brutal. No matter the type, backpacks + hot weather = a sweaty back. They also make you stand out, just as much as a FF camera with a 300mm lens. Opt instead for a camera sling, such as one from Peak Design. It has a much smaller form factor, and with a non-FF camera offers enough space for the camera, an extra lens, and a few batteries and memory cards. I'm usually able to shove in the secondary camera as well. A sling also makes you seem much more incognito.

MYTH 20: Carrying a film-camera is cumbersome.

REALITY: Film has made a resurgence, and although I might not carry one of my Exakta cameras, I might throw a half-frame camera in my pack. On a 36-exposure roll of film, this gives me 72 shots. The film camera allows me to experiment a little, but not at the expense of missing a shot.

MYTH 21: Travel photos will be as good as those in photo books.

REALITY: Sadly not. You might be able to get some good shots, but the reality is that the shots in coffee-table photo books and on TV shows are made with much more time than the average person has on location, and often with specialized equipment like drones. You can get some awesome imagery with drones, especially for video, because they can get perspectives that a person on the ground just can't. If you spend an hour at a place you will have to deal with the weather that exists – someone who spends 2-3 days can wait for optimal conditions.

MYTH 22: If you wait long enough, it will be less busy.

REALITY: Some places are always busy, especially if it is a popular landmark. The reality is that, short of getting up at the crack of dawn, it may be impossible to get a perfect picture. A good example is Piazza San Marco in Venice… some people get a lucky shot in after a torrential downpour, or some similar event that clears the streets, but the best time is just after sunrise; otherwise it is swamped with tourists. Try taking pictures of lesser known things instead of waiting for the perfect moment.

MYTH 23: Unwanted objects can be removed in post-processing.

REALITY: Sometimes popular places are full of tourists… like they are everywhere. In the past it was very difficult to remove unwanted objects; you just had to come back at a quieter time. Now there are numerous forms of post-processing software, like Cleanup-pictures, that will remove things from a picture. A word of warning though: this type of software may not always work perfectly.

MYTH 24: Drones are great for photography.

REALITY: It's true, drones make for some exceptional photographs and video footage. You can produce aerial photos of scenes like the best professional photographers, from likely the best vantage points. However there are a number of caveats. Firstly, travel drones have to be a reasonable size to actually be lugged about from place to place. This may limit the size of the sensor in the camera, and also the size of the battery. Is the drone able to hover perfectly still? If not, you could end up with somewhat blurry images. Flight time on drones is usually 20-30 minutes, so extra batteries are a requirement for travel. The biggest caveat of course is where you can fly drones. For example, in the UK non-commercial drone use is permitted, but there are no-fly zones, and permission is needed to fly over World Heritage Sites such as Stonehenge. In Italy a license isn't required, but drones can't be used over beaches, towns, or near airports.

Are black-and-white photographs really black and white?

Black-and-white photography is somewhat of a strange term, because it implies that the photograph is black AND white. Interpreted literally, a black-and-white photograph would be an image containing only black and white (in digital imaging terms, a binary image). Alternatively they are sometimes called monochromatic photographs, but that too is a broad term, literally meaning "all colours of a single hue". By that definition, cyanotype and sepia-tone prints are also monochromatic, and a colour image that contains predominantly bright and dark variants of the same hue could be considered monochromatic too.

Using the term black-and-white is therefore somewhat of a misnomer. The correct term might be grayscale, or gray-tone, photographs. Prior to the introduction of colour films, B&W film had no designation; it was just called film. With the introduction of colour film, a new term had to be created to differentiate the types of film. Many companies opted to use terms like panchromatic, which is an oddity because the term means "sensitive to all visible colors of the spectrum". In the context of black-and-white films, however, it implies a B&W photographic emulsion that is sensitive to all wavelengths of visible light. Agfa produced IsoPan and AgfaPan, and Kodak the Panatomic. In contrast, colour films usually had the term "chrome" in their names.

Fig.1: A black-and-white image of a postcard

All these terms have one thing in common: they represent shades of gray across the full spectrum from light to dark. In the digital realm, an 8-bit grayscale image has 256 "shades" of gray, from 0 (black) to 255 (white). A 10-bit grayscale image has 1024 shades, from 0→1023. The black-and-white image shown in Fig.1 illustrates quite aptly an 8-bit grayscale image. But grays are colours as well, albeit without chroma, so they would be better termed achromatic colours. It's tricky because a colour is "a visible light with a specific wavelength", and neither black nor white are colours because they do not have specific wavelengths: white contains all wavelengths of visible light, and black is the absence of visible light. Ironically, true blacks and true whites are rare in photographs. For example, the image shown in Fig.1 only contains grayscale values ranging from 24..222, with few if any blacks or whites. We perceive it as a black-and-white photograph only because of our association with that term.
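Checking this for yourself takes a few lines with NumPy and Pillow (a sketch; the filename is a hypothetical stand-in for the Fig.1 postcard):

```python
# Report the range of gray levels actually present in an image.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("postcard.jpg").convert("L"))  # 8-bit grayscale
print(f"gray levels span {img.min()}..{img.max()} of 0..255")
# For Fig.1 this reports roughly 24..222 - no true blacks or whites.
```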

Myths about travel photography

Travel snaps have been around since the dawn of photography. Their film heyday was likely the 1950s-1970s, when photographs taken using slide film were extremely popular. Of course in the days of film it was hard to know what your holiday snaps would look like until they were processed. The benefit of analog was that most cameras offered similar functionality, with the aesthetic provided by the type of film used. While there were many different lenses available, most cameras came with a stock 50mm lens, and most people travelled with that, possibly a wider lens for landscapes, and later zoom lenses.

With digital photography things got easier, but only in the sense of being able to see what you photograph immediately. Modern photography is a two-edged sword: on one side there are a lot more choices in both cameras and lenses, and on the other, digital cameras have a lot more dependencies, e.g. memory cards, batteries etc., and aesthetic considerations, e.g. colour rendition. Below are some of the myths associated with travel photography, in no particular order, taken from my own experiences travelling as an amateur photographer. I generally travel with one main camera, either an Olympus MFT or Fuji X-series APS-C, and a secondary camera, which is now a Ricoh GR III.

The photographs above illustrate three of the issues with travel photography – haze, hard shadows, and shooting photographs from a moving train.

MYTH 1: Sunny days are the best for taking photographs.

REALITY: A sunny or partially cloudy day is not always conducive to good outdoor photographs. It can produce a lot of glare, and scenes with hard shadows. On hot sunny days landscape shots can also suffer from haze. Direct sunlight in the middle of the day often produces the harshest light, which can mean that shadows become extremely dark and highlights become washed out. In reality you have to make the most of whatever lighting conditions you have available. There are a bunch of things to try when faced with midday light, such as using the "Sunny 16" rule, or a neutral density (ND) filter.
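For reference, the "Sunny 16" rule says that in bright sun at f/16, the shutter speed is roughly the reciprocal of the ISO – easy to sketch (illustrative only):

```python
# "Sunny 16": in direct sunlight at f/16, shutter speed ~ 1/ISO.
def sunny16_shutter(iso: int) -> str:
    return f"1/{iso}s at f/16"

for iso in (100, 200, 400):
    print(f"ISO {iso}: {sunny16_shutter(iso)}")
```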

MYTH 2: Full-frame cameras are the best for travel photography.

REALITY: Whenever I travel I always see people with full-frame (FF) cameras sporting *huge* lenses. I wonder if they are wildlife or sports photographers? In reality it's not necessary to travel with a FF camera. They are much larger and much heavier than APS-C or MFT systems. Although they produce exceptional photographs, I can't imagine lugging a FF camera and accessories around for days at a time.

MYTH 3: It’s best to travel with a bunch of differing lenses.

REALITY: No. Pick the one or two lenses you know you are going to use. I travelled a couple of times with an extra super-wide or telephoto lens in the pack, but the reality is that they were never used. Figure out what you plan to photograph, and pack accordingly. A quality zoom lens is always good because it provides the variability of different focal lengths in one lens; however, fixed focal length lenses often produce a better photograph. A 50mm equivalent is a good place to start (25mm MFT, 35mm APS-C).

MYTH 4: The AUTO setting produces the best photographs.

REALITY: The AUTO setting does not guarantee a good photograph, and neither does M (manual). Shooting in P (program) mode probably gives the best balance of control and flexibility. But there is nothing wrong with using AUTO, or even preset settings for particular circumstances.

MYTH 5: Train journeys are a great place to shoot photographs.

REALITY: Shooting photographs from a moving object, e.g. a train, requires the use of S (shutter priority). You may not get good results from a mobile device, because they are not designed for that. Even with the right settings, photographs from a train may not always seem that great unless the scenery allows for a perspective shot, rather than just a linear shot out of the window, e.g. you are looking down into valleys etc. There are also issues like glare and dirty windows to contend with.

MYTH 6: A flash is a necessary piece of equipment.

REALITY: Not really for travelling. There are situations where you could use it, like indoors, but usually indoor photos are in places like art galleries and museums, which don't take kindly to flash photography, and frankly it isn't needed. With some basic knowledge it is easy to take indoor photographs with the light available. Better still, this is where mobile devices tend to shine, as they often have exceptional low-light capabilities. Using a flash for landscapes is useless… but I have seen people do it.

MYTH 7: Mobile devices are the best for travel photography.

REALITY: While they are certainly compact and do produce some exceptional photographs, they are not always the best for travelling. Mobile devices with high-end optics excel at certain things, like taking inconspicuous photographs, or in low-light indoors etc. However, for optimal landscape shots a camera will always do a better job, mainly because it is easier to change settings, and the optics are clearly better.

MYTH 8: Shooting 1000 photographs a day is the best approach.

REALITY: Memory is cheap, so yes you could shoot 1000 frames a day, but is it the best approach? You may as well strap a GoPro to your head and videotape everything. At the end of a 10-day vacation you could have 10,000 photos, which is crazy. Try instead to limit yourself to 100-150 photos a day, which is like 3-4 rolls of 36-exposure film. Some people suggest fewer, but then you might later regret not taking a photo. There is something to be said for limiting the number of photos you take and concentrating instead on creative shots.

MYTH 9: A tripod is essential.

REALITY: No, it's not. Tripods are cumbersome, and sometimes heavy, and the reality is that in some places, e.g. atop the Arc de Triomphe, you can't use one. Try walking around the whole day in a city like Zurich during the summer, lugging a bunch of camera gear *and* a tripod. For a good compromise, consider packing a pocket tripod such as the Manfrotto PIXI. In reality cameras have such good stabilization these days that in most situations you don't need a tripod.

MYTH 10: A better camera will take better pictures.

REALITY: Unlikely. I would love to have a Leica DSLR. Would it produce better photographs? Maybe, but the reality is that taking photographs is as much about the skill of the photographer as the quality of the camera. Contemporary cameras have so much technology in them; learn to understand it, and improve your skills before thinking about upgrading. There will always be new cameras, but it's hard to warrant buying one.

MYTH 11: A single battery is fine.

REALITY: Never travel with fewer than two batteries. Cameras use a lot of juice, because features like image stabilization and autofocus aren't free. I travel with at least 3 batteries for whatever camera I take. Mark them as A, B, and C, and use them in sequence. If the battery in the camera is C, then you know A and B need to be recharged, which can be done at night. There is nothing worse than running out of batteries half-way through the day.

MYTH 12: Post-processing will fix any photos.

REALITY: Not so. Ever heard of the expression garbage-in, garbage-out? Some photographs are hard to fix, because not enough care was taken when shooting them. If you take a photograph of a landscape with a hazy sky, it may be impossible to fix in post-processing.

The facts about camera aspect ratio

Digital cameras usually come with the ability to change the aspect ratio of the image being captured. The aspect ratio has little to do with the size of the image, and more to do with its shape. It describes the relationship between an image's width (W) and height (H), and is generally expressed as a ratio W:H (the width always comes first). For example, a 24MP sensor with 6000×4000 pixels has an aspect ratio of 3:2.

Choosing a different aspect ratio will change the shape of the image, and the number of pixels stored in it. When using a non-native aspect ratio, the image is effectively cropped, with the pixels outside the frame of the aspect ratio thrown away.
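The arithmetic is easy to sketch (a rough illustration using the 24MP example above; the 16:9 crop is assumed to keep the full sensor width):

```python
# Reduce a pixel resolution to its aspect ratio, and count the pixels
# lost when cropping a 3:2 sensor to 16:9.
from math import gcd

def aspect_ratio(w: int, h: int) -> str:
    g = gcd(w, h)
    return f"{w // g}:{h // g}"

w, h = 6000, 4000
print(aspect_ratio(w, h))        # 3:2

new_h = w * 9 // 16              # 3375: keep full width, trim the height
print(f"16:9 crop: {w}x{new_h} = {w * new_h / 1e6:.2f}MP of {w * h / 1e6:.0f}MP")
```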

The core forms of aspect ratios.

The four most common examples of aspect ratios are:

  • 4:3
    • Used when photos to be printed are 5×7″, or 8×10″.
    • Quite good for landscape photographs.
    • The standard ratio for MFT sensor cameras.
  • 3:2
    • The closest to the Golden Ratio of 1.618:1, which makes things appear aesthetically pleasing.
    • Corresponds to 4×6″ printed photographs.
    • The default ratio for 35mm cameras, and many digital cameras, e.g. FF and APS-C sensors.
  • 16:9
    • Commonly used for panoramas or cinematographic purposes.
    • The most common ratio for video formats, e.g. 1920×1080
    • The standard aspect ratio of HDTV and cinema screens.
  • 1:1
    • Used for capturing square images, and to simplify scenes.
    • The standard ratio for many medium-format cameras.
    • Commonly used in social media, e.g. Instagram.

How an aspect ratio appears on a sensor depends on the sensor's native aspect ratio.

Aspect ratios visualized on different sensors.

Analog 35mm cameras rarely had the ability to change the aspect ratio. One exception to the rule is the Konica Auto-Reflex, a 35mm camera with the ability to switch between full-frame and half-frame (18×24mm) in the middle of a roll of film. It achieved this by moving a set of blinds in front of the film plane to reduce the exposed area to half-frame.

Could blur be the new cool thing in photography?

For many years the concept of crisp, sharp images was paramount. It led to the development of a variety of image sharpening algorithms to suppress the effect of blurring in an image. Then tilt-shift appeared, and was in vogue for a while (it's still a very cool effect); here blur was actually being introduced into an image. But what about actually taking blurry images?

I have been experimenting with adding blur to an image, either by manually defocusing the lens, or by taking a picture of a moving object. The results? I think they are just as good, if not better, than if I had "stopped the motion" or created a crisp photograph. We worry far too much about defining every single feature in an image, and too little about creativity. Sometimes it would be nice to leave something in an image that inspires thought.

Here's an example of motion blur: a Montreal Metro subway car coming into a platform. It is almost the inverse of tilt-shift – the object of interest is blurred, and the surrounding area is kept crisp. Special equipment needed? Zip.

More on Mach bands

Consider the following photograph, taken on a drizzly day in Norway with a cloudy sky, and the mountains somewhat obscured by mist and clouds.

Now let’s look at the intensity image (the colour image has been converted to 8-bit monochrome):

If we look at a region near the top of the mountain and extract a circular region, there are three distinct regions along a line. To the human eye these appear as quite uniform regions, which transition along a crisp border. In the profile of a line through these regions, though, there are two "cliffs" (A and B) that mark the shift from one region to the next. Human eyes don't perceive these "cliffs".
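A sketch of how such a profile can be extracted with NumPy and Pillow (the filename and coordinates here are illustrative placeholders):

```python
# Extract an intensity profile along a horizontal line and locate the
# largest jumps ("cliffs") between regions.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("norway.jpg").convert("L"))  # 8-bit intensity
row = 120                    # a line crossing the three regions
profile = img[row, 200:400]  # intensity values along that line

steps = np.abs(np.diff(profile.astype(int)))
print("largest intensity step:", steps.max(), "at offset", int(steps.argmax()))
```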

Mach bands are an illusion that suggests edges in an image where in fact the intensity is changing in a smooth manner.

The downside to Mach bands is that they are an artificial phenomenon produced by the human visual system. As such, they can actually interfere with visual inspection when trying to judge the sharpness of an image.