The math behind visual acuity

The number of megapixels required to print something or view a television is ultimately determined by the human eye's visual acuity and the distance from which the object is viewed. For someone with average (20/20) vision, acuity is defined as one arcminute, or 1/60th of a degree. For comparison, a full moon in the sky appears about 31 arcminutes (half a degree) across (Figure 1).

Fig.1: Looking at the moon

Now generally, some descriptions skip straight from talking about arcminutes to describing how the distance between an observer and an object can be calculated given the resolution of the object. For example, the distance (d, in inches) at which the eye reaches its resolution limit is often calculated using:

d = 3438 / h

Where h is the resolution – ppi for screens, dpi for prints. So if h=300, then d=11.46 inches. Calculating the optimal viewing distance involves a magic number: 3438. Where does this number come from? Few descriptions actually give any insight, but we can start with some basic trigonometry. Consider the diagram in Figure 2, where h is now the pixel pitch (the physical size of a single pixel, the inverse of resolution), d is the viewing distance, and θ is the angle of viewing.

Fig.2: Viewing an object

Now we can use the basic equation for calculating an angle θ, given the lengths of the opposite and adjacent sides:

tan(θ) = opposite/adjacent

To apply this formula to the diagram in Figure 2, we use half the angle (θ/2) and half the pixel pitch (h/2):

tan(θ/2) = (h/2)/d

Now we can solve for h:

d tan(θ/2) = h/2
2d⋅tan(θ/2) = h

Now if we take visual acuity to be 1 arcminute, this is equivalent to 0.000290888 radians. For such a small angle, tan(x) ≈ x, therefore:

h = 2d⋅tan(0.000290888/2) 
  = 2d⋅0.000145444

So for d=24″, h = 0.00698 inches, or, converted to mm (by multiplying by 25.4), h = 0.177mm. To convert this into ppi/dpi, we simply take the inverse: 1/0.00698 ≈ 143 ppi. How do we turn this equation into one containing the value 3438? Well, given that the resolution is the inverse of the pixel pitch, we can modify the previous equation (with h now denoting resolution again, rather than pitch):

h = 1/(2d⋅0.000145444)
  = (1/d) ⋅ (1/2) ⋅ (1/0.000145444)
  = (1/d) ⋅ (1/2) ⋅ 6875.49847
  = (1/d) ⋅ 3437.749
  = 3438/d
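
As a sanity check, here is a minimal Python sketch of both forms of the calculation (the helper name pixel_pitch is my own, not from the article):

```python
import math

ACUITY_RAD = math.radians(1 / 60)   # 1 arcminute ≈ 0.000290888 rad

def pixel_pitch(d_inches: float) -> float:
    """Smallest pixel pitch (inches) the eye can resolve at distance d."""
    return 2 * d_inches * math.tan(ACUITY_RAD / 2)

h = pixel_pitch(24)
print(h)            # ≈ 0.00698 inches
print(h * 25.4)     # ≈ 0.177 mm
print(1 / h)        # ≈ 143 ppi – matches 3437.75 / 24
```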

So for a poster viewed at d=36″, h=95 dpi (the minimum resolution needed at that distance). The viewing distance can be calculated by rearranging the equation above to:

d = 3438 / h

As an example, consider the Apple Watch Series 8, whose screen has a resolution of 326ppi. Performing the calculation gives d=3438/326 = 10.55″, so the watch should be held at least 10.55″ from one's face. For a poster printed at 300dpi, d=11.46″, and for a poster printed at 180dpi, d=19.1″. This is independent of the size of the poster, just the printing resolution, and represents the minimum resolution needed at a particular distance – only if you move closer do you need a higher resolution. This is why billboards can be printed at a very low resolution, even 1dpi: when viewed from far enough away, it doesn't really matter how low the resolution is.
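
The inverse calculation is just as short – a sketch using a hypothetical viewing_distance helper and the more precise constant:

```python
def viewing_distance(ppi: float) -> float:
    """Minimum distance (inches) beyond which individual pixels cannot be resolved."""
    return 3437.75 / ppi

print(viewing_distance(326))   # Apple Watch Series 8 → ≈ 10.55"
print(viewing_distance(300))   # 300 dpi print        → ≈ 11.46"
print(viewing_distance(180))   # 180 dpi poster       → ≈ 19.10"
```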

Note that there are many different variables at play when it comes to acuity; these calculations cover the simplest case. For eyes outside the normal range, visual acuity is different, which changes the calculations (i.e. the angle θ expressed in radians). The differing arcminute values are: 0.75 (20/15), 1.5 (20/30), 2.0 (20/40), etc. There are also factors to take into account such as lighting and how eye prescriptions modify acuity. Finally, it should be added that these acuity calculations only consider what is directly in front of our eyes, i.e. the narrow, sharp vision provided by the foveola – all other parts of a scene have progressively less acuity moving out from this central point.

Fig.3: At 1-2° the foveola provides the greatest amount of acuity.
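
To see how the magic number changes for the acuity values listed above, the constant can simply be recomputed from the same trigonometry (a sketch; the loop variables are my own):

```python
import math

# Constant k in d = k / h, recomputed for different acuities (in arcminutes).
for label, arcmin in [("20/15", 0.75), ("20/20", 1.0), ("20/30", 1.5), ("20/40", 2.0)]:
    theta = math.radians(arcmin / 60)      # acuity converted to radians
    k = 1 / (2 * math.tan(theta / 2))
    print(f"{label}: d = {k:.0f} / h")
# 20/15: 4584, 20/20: 3438, 20/30: 2292, 20/40: 1719
```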

p.s. The same approach can be used to calculate ideal monitor and TV sizes. For a 24″ viewing distance, the pixel pitch is h=0.177mm. For a 4K (3840×2160) monitor, this means 3840×0.177=680mm by 2160×0.177=382mm, which after calculating the diagonal results in a 30.7″ monitor.
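
In code (a sketch, with my own variable names):

```python
import math

pitch_mm = 0.177            # resolvable pixel pitch at a 24" viewing distance
w_px, h_px = 3840, 2160     # 4K resolution

w_mm = w_px * pitch_mm      # ≈ 680 mm
h_mm = h_px * pitch_mm      # ≈ 382 mm
print(math.hypot(w_mm, h_mm) / 25.4)   # diagonal ≈ 30.7"
```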

p.p.s. If d is expressed in cm (with h still in ppi), the constant scales by 2.54 cm/inch and the formula becomes: d = 8732 / h


A ballad of the senses

When you're an infant, the memories you make aren't really accessible when you get older. That's because humans generally experience something scientists term infantile amnesia: rapid neuron growth disrupts the brain circuitry that stores old memories, making them inaccessible (they are not lost, just tucked away). Of course, you don't want to remember everything that happens in life… that would clog our brains with a bunch of nothingness. But we all have selective memories from infancy which we can visualize when they are triggered. For me there are but a couple, and they are usually triggered by an associative sense.

The first is the earthy smell of a cellar, which triggers fleeting memories of childhood times at my grandmother's house in Switzerland. The second is of the same time and place – the deep smell of wild raspberries. These memories are triggered by olfactory senses, making the visual, however latent, emerge even if for a brief moment. It is no different from the other associations we make between vision, smell, and taste. Dragonfruit is a beautiful-looking tropical fruit, but it can have a bitter, tart taste. Some of these associations have helped us survive over the millennia.

Mmmm… raspberries… but you can’t smell them, or taste the ethyl formate (the chemical partially responsible for their flavour)

It makes you wonder, then, whether these sense-experiences allow us to better retain memories. If you travel somewhere like Iceland and take a picture of a geyser, you may also smell faint wisps of sulphur. There is now an association between a photograph of a geyser and physically experiencing it. The same could be said of the salty Atlantic air of Iles de la Madeleine, or the resinous smell of walking through a pine forest. Memory associations. Or maybe an Instagram photo of a delicious ice cream from Bang Bang ice-cream. Again, an association. But how many of the photos we view lack context because we have no association between the visual and the information gathered from our other senses? You can view a picture of the ice cream on Instagram, but you won't know what it tastes or smells like, and so the picture only provides half the experience.

When visual data becomes a dull noise

There was a time when photographs had meaning and held our attention, embedding something inside our minds. Photographs like The Terror of War, taken by Nick Ut in 1972 during the Vietnam War. But the digital age has changed the way we consume photographs. Every day we are bombarded with visual content, and due to the sheer volume, most of it makes little if any lasting impact.

Eventually, the visual data around us becomes an amalgam of blurriness and noise, limiting the amount of information we gain from it.

The human visual system is extremely adept at processing visual information. It can process something like 70 images per second [1,2], and identify images in as little as 13 milliseconds. But it was never really designed for the variety of visual data now thrust at it. When we evolved, vision was used purely to interpret the world directly surrounding us, primarily from a perspective of survival, and the visual data it provided was really quite simple. It was never meant for looking at screens or reading books; there was no real need for Palaeolithic humans to view something as small as the text in a book. Over time, visual processing systems evolved as human life evolved.

The greatest change in visual perception likely occurred when the first civilizations appeared. Living in communities meant that the scope and type of visual information changed. The world became a busier place, more cluttered from a sensory perspective. People no longer had to use their vision as much for hunting and gathering, but adapted to living in a community setting and an agricultural way of life. There was likely very little change over thousands of years, maybe even until the advent of the Industrial Revolution. Society became much more fast-paced, and again our vision had to adapt. Now, in addition to the world around them, people were viewing static images called photographs, often of far-flung exotic places. In the ensuing century, visual information would play an increasing role in people's lives. Then came the 21st century, and the digital age.

The transient nature of digital information has likely changed the way we perceive the visual world around us. There was a time when viewing a photograph may have been more of an ethereal experience. It can still be a magical experience, but few people likely realize this. We are so bombarded with images that they fill every niche of our lives, and many people take them for granted. Our visual world has become super-saturated. How many Instagram photographs do we view every day? How many of these really make an impact on our lives? It may be that too much visual information has effectively morphed what we perceive on a daily basis into a dull noise. It's like living next to a busy rail line – what seems noisy at first gets filtered out over time. But what are we losing in the process?

[1] Potter, M., “Meaning in visual search”, Science, 187(4180), pp.965–966 (1975)
[2] Thorpe, S., Fize, D., & Marlot, C., “Speed of processing in the human visual system”, Nature, 381(6582), pp.520–522 (1996)