Colour versus grayscale pixels

Colour pixels are different from grayscale pixels. A colour pixel is RGB, meaning it carries three pieces of information: the Red, Green and Blue components. A grayscale pixel has a single component, a gray tone drawn from a graduated scale running from black to white. A colour pixel is generally 24-bit (3 × 8-bit), and a grayscale pixel is just 8-bit. This means a colour pixel holds a triplet of values, each in the range 0..255, for the red, green and blue components, whereas a grayscale pixel holds a single value in the range 0..255. The figure below compares a colour and a grayscale pixel. The colour pixel has the R-G-B value 61-80-136. The grayscale pixel has the value 92.

It is easy to convert a pixel from colour to grayscale (like applying a monochrome filter in a digital camera). The easiest method is simply averaging the three values of R, G, and B. In the example above, the grayscale pixel is actually the converted RGB pixel: (61+80+136)/3 ≈ 92.
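The averaging conversion can be sketched in a few lines of NumPy (the function name here is my own, not from any particular library):

```python
import numpy as np

def rgb_to_gray_average(rgb):
    """Convert an RGB image (H x W x 3, uint8) to grayscale by
    averaging the three channels."""
    # Sum in a wider integer type to avoid uint8 overflow, then divide.
    return (rgb.astype(np.uint32).sum(axis=-1) // 3).astype(np.uint8)

# The single pixel from the example above: R=61, G=80, B=136.
pixel = np.array([[[61, 80, 136]]], dtype=np.uint8)
print(rgb_to_gray_average(pixel)[0, 0])   # (61 + 80 + 136) // 3 = 92
```

Note that averaging is only the simplest option; weighted (luminance) conversions are also common.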

Now colour images also contain regions that are gray in appearance – these are 24-bit “gray” pixels, as opposed to 8-bit grayscale pixels. The example below shows a pixel in a grayscale image, and the corresponding “gray” pixel in the colour image. Grayscale pixels are pure shades of gray. Pure shades of gray in colour images are typically represented with R, G and B all having the same value, e.g. R=137, G=137, B=137.
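Detecting such 24-bit “gray” pixels is just a matter of checking whether the three components are equal; a minimal sketch (again, a hypothetical helper of my own):

```python
import numpy as np

def is_rgb_gray(rgb):
    """True where R == G == B, i.e. a 24-bit 'gray' pixel
    inside a colour image."""
    return (rgb[..., 0] == rgb[..., 1]) & (rgb[..., 1] == rgb[..., 2])

print(is_rgb_gray(np.array([137, 137, 137])))  # True
print(is_rgb_gray(np.array([61, 80, 136])))    # False
```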

Pure gray versus RGB gray

Image enhancement (4) : Contrast enhancement

Contrast enhancement is applied to images that lack “contrast”. Lack of contrast manifests itself as a dull or lacklustre appearance, and can often be identified in an image’s histogram. Improving contrast, and making an image more visually (or aesthetically) appealing, is incredibly challenging. This is in part because the result of contrast enhancement truly is a very subjective thing. This is even more relevant with colour images, as modifications to a colour can affect different people differently. What ideal colour green should trees be? Below is a brief example: a grayscale image and its intensity histogram.

A picture of Reykjavik from a vintage postcard

It is clear from the histogram that the intensity values do not span the entire available range, effectively reducing the contrast of the image. Some parts of the image that could be brighter are dull, and other parts that could be darker are lightened. Stretching both ends of the histogram out effectively improves the contrast of the image.

The picture enhanced by stretching the histogram, and improving the contrast

This is the simplest way of enhancing the contrast of an image, although the level of contrast enhancement applied is always guided by the visual perception of the person performing the enhancement.
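Histogram stretching itself is a simple linear rescale; here is a minimal NumPy sketch (the percentile clipping is an assumption of mine, commonly added so a few outlier pixels don’t defeat the stretch):

```python
import numpy as np

def stretch_histogram(img, low_pct=1, high_pct=99):
    """Linearly stretch intensities so the low/high percentiles
    map to 0 and 255."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    # Values outside the chosen percentiles saturate at 0 or 255.
    return np.clip(stretched, 0, 255).astype(np.uint8)

# A dull image whose intensities only span 25..195:
dull = np.arange(25, 196, dtype=np.uint8)
out = stretch_histogram(dull)
print(out.min(), out.max())   # 0 255
```

After stretching, the intensities occupy the full 0..255 range, which is exactly the effect visible in the enhanced histogram.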

Image enhancement (3) : Noise suppression

Noise suppression may be one of the most relevant realms of image enhancement. There are many kinds of noise, and even digital photographs are not immune. The algorithms that deal with noise are usually grouped into two categories: those that deal with spurious noise (often called shot or impulse noise), and those that deal with noise that can envelop a whole image (in the guise of Gaussian-type noise). A good example of the latter is the “film grain” often found in old photographs. Some might think this is not “true” noise, but it does detract from the visual quality of the image, so it should be treated as such. In reality, noise suppression is less important when enhancing images from digital cameras, because a lot of effort has been placed on in-camera noise suppression.

Below is an example of an image with Gaussian noise. This type of noise can be challenging to suppress because it is “ingrained” in the structure of the image.

Image with Gaussian noise

Here are some different attempts at suppressing the noise in the image using different algorithms (many of which are available as plug-ins for the software ImageJ):

  • A Gaussian blurring filter (σ=3)
  • A median filter (radius=3)
  • The Perona-Malik Anisotropic Diffusion filter
  • Selective mean filter
Examples of noise suppressed using various algorithms.
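The first two filters in the list can be reproduced with SciPy; the Perona-Malik and selective mean filters are ImageJ plug-ins with no direct one-line SciPy equivalent. A sketch, using a hypothetical flat gray patch corrupted with Gaussian noise as the test image:

```python
import numpy as np
from scipy import ndimage

# Hypothetical test image: a flat gray patch plus Gaussian noise.
rng = np.random.default_rng(0)
noisy = 128.0 + rng.normal(0.0, 25.0, size=(64, 64))

smoothed_gauss = ndimage.gaussian_filter(noisy, sigma=3)   # Gaussian blur, sigma=3
smoothed_median = ndimage.median_filter(noisy, size=7)     # 7x7 window, ~ radius 3

# Both filters pull the values back toward the true flat level (128),
# shrinking the spread caused by the noise.
print(noisy.std(), smoothed_gauss.std(), smoothed_median.std())
```

On a real photograph the same calls apply unchanged; the visible difference between the two is the edge blurring discussed below.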

To show the results, we will look at the extracted regions from some of the algorithmic results compared to the original noisy image:

Images: (A) Noisy image, (B) Perona-Malik, (C) Gaussian blur, (D) Median filter

It is clear the best results come from the Perona-Malik anisotropic diffusion filter [1], which has suppressed the noise whilst preserving the outlines of the major objects in the image. The median filter has performed second best, although some blurring has occurred in the processed image, with letters in the poster starting to merge together. Lastly, the Gaussian blurring has obviously suppressed the noise, whilst introducing significant blur into the image.

Suppressing noise in an image is not a trivial task. Sometimes it is a tradeoff between the severity of the noise, and the potential to blur out fine details.
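The Perona-Malik scheme [1] is simple enough to sketch directly: at each iteration, every pixel moves toward its neighbours, but the flow is gated by an edge-stopping function that shuts diffusion off across strong edges. A minimal sketch (wrap-around boundaries via np.roll are a simplification of mine):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, gamma=0.2):
    """Minimal sketch of Perona-Malik anisotropic diffusion:
    diffuse strongly in flat regions, weakly across strong edges."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences: north, south, east, west.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping function g(d) = exp(-(d/kappa)^2): close to 1
        # in flat areas (full diffusion), close to 0 at strong edges.
        flow = (np.exp(-(dn / kappa) ** 2) * dn + np.exp(-(ds / kappa) ** 2) * ds
                + np.exp(-(de / kappa) ** 2) * de + np.exp(-(dw / kappa) ** 2) * dw)
        u += gamma * flow   # gamma <= 0.25 keeps the explicit scheme stable
    return u

# A noisy flat patch should come out much smoother.
rng = np.random.default_rng(1)
noisy = 128.0 + rng.normal(0.0, 25.0, size=(32, 32))
print(noisy.std() > perona_malik(noisy).std())   # True
```

The parameter kappa sets what counts as an “edge”: gradients well below kappa are smoothed away as noise, gradients well above it are preserved.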

[1] Perona, P., Malik, J., “Scale-space and edge detection using anisotropic diffusion”, in: Proceedings of the IEEE Computer Society Workshop on Computer Vision, pp. 16–22 (1987)

Image enhancement (2) : the fine details (i.e. sharpening)

More important than most things in photography is acuity – which is really just a fancy word for sharpness, or image crispness. Photographs can be blurry for a number of reasons, but the most common is a lack of proper focusing, which adds softness to an image. In a 3000×4000 pixel image this blurriness may not be that apparent, and will only manifest itself when a section of the image is enlarged. When photographing landscapes, the overall details in the image may be crisp, yet small objects may “seem” blurry, because they are small and lack detail in any case. Sharpening will also fail to fix large blur artifacts – it is not going to remove defocus from a photograph that was not properly focused. It is ideal for making fine details crisper.

Photo apps and “image editing” software often contain some means of improving the sharpness of images – usually by means of the “cheapest” algorithm in existence, “unsharp masking”. It works by subtracting a “softened” copy of an image from the original. And by softened, I mean blurred. This effectively boosts the higher-frequency components of the image relative to the lower ones. But it is no magical panacea: if there is noise in an image, it too will be accentuated. The benefit of sharpening can often be seen best on images containing fine details. Here are examples of three different sharpening algorithms applied to an image with a lot of fine detail.

Sharpening: original (top-L); USM (top-R); CUSM (bot-L); MS (bot-R)

The three filters shown here are (i) unsharp masking (USM), (ii) cubic unsharp masking (CUSM) and (iii) morphological sharpening (MS). Each of these techniques has its benefits and drawbacks, and the final image with improved acuity can only really be judged through visual assessment. Some algorithms may be more attuned to sharpening large non-uniform regions (MS), whilst others (USM, CUSM) may be better aligned with sharpening fine details.

Image enhancement (1) : The basics

Image enhancement involves improving the perceived quality of an image, either for the purpose of aesthetic appeal, or for further processing. Therefore you are either enhancing features within an image, or suppressing artifacts. The basic forms of enhancement include:

  • contrast enhancement: enhancing the overall contrast of an image, to improve dynamic range of intensities.
  • noise suppression: reducing the effect of noise contained within an image
  • sharpening: improving the acuity of features within an image.

These relate to both monochromatic grayscale and colour images (colour images have additional mechanisms for enhancing colour). The trick with these enhancement mechanisms is determining when they have achieved the required effect. In image processing this is often a case of the result being “in the eye of the beholder”. A photograph whose colour has been enriched may seem pleasing to one person, and over-saturated to another. To illustrate, consider the following example. This is an 8-bit image, 539×699 pixels in size.

Original Image

Here is the histogram of its pixel intensities:

From both the image and the histogram, it is possible to discern that the image lacks contrast, with the majority of gray intensities situated between the values 25 and 195. So one enhancement could be to improve its contrast. Here is the result of a simple histogram stretch:

Contrast enhancement by stretching the histogram

It may then be interesting to smooth noise in the image, or sharpen the image to enhance the lettering in the advertising. The sub-image extracted from the image above shows three different techniques.

Forms of image enhancement (sub-image extracted from contrast enhanced image): original (top-left), noise suppression using a 3×3 mean filter (top-right), image sharpening using unsharp masking (bottom-left), and unsharp masking applied after mean filtering (bottom-right).

Can blurry images be fixed?

Some photographs contain blur which is very challenging to remove. Large-scale blur, which is the result of motion or defocus, can’t really be suppressed in any meaningful manner. What can usually be achieved by means of image sharpening algorithms is making the finer structures in an image look more crisp. Take for example the coffee can image shown below, in which the upper lettering on the label is almost in focus, while the lower lettering has the softer appearance associated with defocus.

The problem with this image is partially the fact that the blur is not uniform. Below are two enlarged regions containing text from opposite ends of the blur spectrum.

Reducing blur involves a concept known as image sharpening (which is different from removing motion blur, a much more challenging task). The easiest technique for image sharpening, and the one most often found in software such as Photoshop, is known as unsharp masking. It is derived from analog photography, and basically works by subtracting a blurry version of the original image from the original image. It is by no means perfect – it is problematic in images containing noise, which it tends to accentuate – but it is simple.

Here I am using the “Unsharp Mask” filter from ImageJ. It subtracts a blurred copy of the image and rescales the image to obtain the same contrast of low frequency structures as in the input image. It works in the following manner:

  1. Obtain a Gaussian-blurred image, by specifying a blur radius (in the example below, radius = 5).
  2. Multiply the blurred image by a “Mask Weight”, which determines the strength of the filtering – a value from 0.1 to 0.9 (in the example below, mask weight = 0.4).
  3. Subtract the filtered image from the original image.
  4. Divide the resulting image by (1.0 − mask weight) – 0.6 in the case of the example.
1. Original image; 2. Gaussian blurred image (radius=5); 3. Filtered image (multiplied by 0.4); 4. Subtracted image (original-filtered); 5. Final image (subtracted image / 0.6)
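The four steps translate almost line-for-line into NumPy/SciPy. One assumption in this sketch: SciPy’s gaussian_filter takes a sigma, and I am treating the ImageJ “radius” as that sigma, which is only an approximation of ImageJ’s kernel:

```python
import numpy as np
from scipy import ndimage

def imagej_style_unsharp(img, radius=5.0, weight=0.4):
    """Sketch of the four unsharp-masking steps above."""
    f = img.astype(np.float64)
    blurred = ndimage.gaussian_filter(f, sigma=radius)  # step 1
    filtered = blurred * weight                         # step 2
    subtracted = f - filtered                           # step 3
    sharpened = subtracted / (1.0 - weight)             # step 4
    return np.clip(np.rint(sharpened), 0, 255).astype(np.uint8)

# On a perfectly flat image the steps cancel out:
# f - 0.4*f = 0.6*f, and 0.6*f / 0.6 = f.
flat = np.full((16, 16), 100, dtype=np.uint8)
print(np.array_equal(imagej_style_unsharp(flat), flat))   # True
```

The division in step 4 is what rescales the result so flat (low-frequency) regions keep their original contrast, while edges get the boost.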

If we compare the resulting images, using an enlarged region, we find the unsharp masking filter has slightly improved the sharpness of the text in the image, but this may also be attributed to the slight enhancement in contrast. This part of the original image has less blur though, so let’s apply the filter to the second region.

The original image (left) vs. the filtered image (right)

Below is the result on the second portion of the image. There is next to no improvement in the sharpness of the image. So while it may be possible to slightly improve sharpness where the picture is not badly blurred, excessive blur is impossible to “remove”. Improvements in acuity may be due more to the slight contrast adjustments and how they are perceived by the eye.