I have done image processing in one form or another for over 30 years. What I have learnt may only come with experience, but maybe it is an artifact of growing up in the pre-digital age, or of having interests outside computer science that are more aesthetic in nature. Image processing started out being about enhancing pictures and extracting information from them in an automated manner. It evolved primarily in the fields of aerial and aerospace photography and medical imaging, before there was any real notion that digital cameras would become ubiquitous items.
The problem is that as the field evolved, people started to forget about the context of what they were doing and focused solely on the pixels. Image processing became about mathematical algorithms. It is like a view of painting that focuses just on the paint, or the brushstrokes, with little care for what they form (and having said that, such paintings do exist, but I would be hesitant to call them art). Over the past 20 years, algorithms have become increasingly complex, often to perform the same tasks that simple algorithms already handle. Now we see the emergence of AI-focused image enhancement algorithms, just because it is the latest trend. They supposedly fix things like underexposure, overexposure, low contrast, incorrect color balance and subjects that are out of focus. I would almost say we should just let the AI take the photo; still cameras are so automated that it seems silly to think you would need any of these “fixes”.
There are now so many publications on subjects like image sharpening that it is truly hard to see the relevance of many of them. If you spend long enough in the field, you realize that the simplest methods, like unsharp masking, still work quite well on most photographs. All the fancy techniques do little to produce a more aesthetically pleasing image. Why? Because the aesthetics of something like “how sharp an image is” are extremely subjective. Also, as imaging systems gain more resolution and lenses become more “perfect”, more detail is present, which actually reduces the need for sharpening. There is also the specificity of some of these algorithms, i.e. there are few inherently generic image processing algorithms. Try to find a single algorithm that will accurately segment ten different images.
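To give a sense of just how little machinery unsharp masking needs, here is a minimal sketch (my own illustration, not any particular tool’s implementation; the blur radius and amount are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Classic unsharp masking on a grayscale float image in [0, 1].

    Blur the image, treat whatever the blur removed as the "detail",
    and add a scaled copy of that detail back in.
    """
    blurred = gaussian_filter(image, sigma=sigma)
    detail = image - blurred              # high-frequency content only
    return np.clip(image + amount * detail, 0.0, 1.0)
```

A blur, a subtraction and an addition: that is the whole technique, and for most photographs it is hard to do visibly better with anything more elaborate.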
Part of the struggle is that few have stopped to think about what they are processing. They don’t consider the relevance of the content of a picture. Some pictures contain blur that is intrinsic to the context of the picture. Others create algorithms to reproduce effects that are really only relevant to creation through physical optical systems, e.g. bokeh. Fewer still do testing of any real relevance. There are people who publish work that is still tested in some capacity using Lena, an image digitized in 1973. It is hard to take such work seriously.
Many people doing image processing don’t understand the relevance of optics or film, or for that matter the mechanics of how pictures are taken, in either the analog or the digital realm. They just see pixels and algorithms. To truly understand concepts like blur and sharpness, one has to understand where they come from and where they fit in the world of photography.