Why computational photography is hitting new heights

A photograph of the Google Pixel 6 Pro face-down on a table

I have always been fascinated by photography. The feel of the camera, the satisfaction of the physical action of winding the film forward by one frame. The reassurance of the clunk as the photo is taken. And if you want to get into the realms of large format photography, using 5 x 4in film, you open up a whole new delightful world of adjusting the lens position and orientation so it is no longer parallel to the film. This allows all sorts of “movements” that alter the plane of focus or correct for perspective. Best of all, given the high cost of each shot, you enter a zen state where you are happy to wait half an hour for that cloud to move into exactly the right place relative to the tree.

Then you enter the world of high-end 35mm, and discover it’s possible to have a serious addiction to the incredible glass made by Leica. Wallet meltdown rapidly ensues.

The arrival of the smartphone changed everything. As the quality of the images taken by phones rapidly improved, it became less necessary to bother with the hassle of carrying around a camera body and several lenses. Almost without fanfare, my trips to foreign climes resulted in proper cameras being left at home. What started as “just about good enough” was soon swept aside by the rise of much better optics, sensor capabilities and, most importantly, the software that could work with the images to improve them, correct issues, and produce a better picture.

One of these tricks was to build a high dynamic range (HDR) image by compositing two images taken one after the other. One image would have the exposure set correctly for the bright parts, while the second would be exposed for the darker parts. Software then merged the two into a composite HDR image that kept the detail in the dark areas without blowing out the bright parts. This was just the start, of course. Soon the CPU was being used for all sorts of cleverness, much of it utterly ridiculous. I could go to my grave a happy man if I never saw another set of bunny ears and glitter added onto a face shot on social media.
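
To make that merge concrete, here is a minimal sketch using OpenCV's exposure fusion. The filenames are placeholders, and a real phone pipeline adds frame alignment, noise reduction and tone mapping on top, so treat this as an illustration of the principle rather than what any particular handset does.

```python
# A minimal sketch of merging two bracketed exposures into one image
# using OpenCV's Mertens exposure fusion. The filenames are placeholders.
import cv2
import numpy as np

# Two frames of the same scene: one exposed for the highlights,
# the other exposed for the shadows.
exposed_for_highlights = cv2.imread("exposed_for_highlights.jpg")
exposed_for_shadows = cv2.imread("exposed_for_shadows.jpg")

# Mertens fusion weights each pixel by contrast, saturation and how
# well exposed it is, then blends the frames accordingly.
merger = cv2.createMergeMertens()
fused = merger.process([exposed_for_highlights, exposed_for_shadows])  # float32, roughly 0..1

# Scale back to 8-bit for saving or display.
result = np.clip(fused * 255, 0, 255).astype("uint8")
cv2.imwrite("hdr_composite.jpg", result)
```

Mertens fusion is used here simply because it works directly on the bracketed frames without needing their exposure times, which a true radiance-map HDR merge would require.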

Whilst clever, those face filters used computing power to create fun nonsense beloved of our dear youth. And that’s fine, but my interest continued to be driven by better picture quality.

Then came the multiple camera array, with up to three cameras of different focal length (and hence angle of view). Optical stabilisation came too, along with some deeply clever capabilities for taking night-time shots that weren’t covered in a fog of noise. The technology seemed to reach peak cleverness with the LiDAR capabilities on recent iPhones, giving depth measurement to the image and hence better and faster focussing, along with some quite eye-popping 3D visualisation and mapping tools.

But this rise of what I call “computational photography” hasn’t stopped there. It’s no longer enough to composite an image together, or to apply artificial intelligence (AI) to colour balance. Now you can shoot an image on the latest iPhones and decide after the event where you want the focus point to be. At the same time, capabilities have increased significantly in the high-end tools on your desktop computer, with Adobe Photoshop leading the way with some astonishing retouching and editing tools.

The latest arrival in my lab – the new Google Pixel 6 Pro – represents a new high-water mark in computational photography: the ability to remove items from an image and have the phone calculate what should have been there “behind” whatever it removed. This isn’t a new concept, as Photoshop has offered similar tools for a while, but I haven’t seen it built into a phone before.

Last night I was at the pub with some mates, and took a fairly close-up photo of one of them in which he occupied a significant part of the image. Then I pressed the edit button, drew a line around his chest and head, and watched as he was magically removed from the frame. The background was intelligently filled in – the door panel pattern repeated, the brickwork extended. It would be easy to nitpick and find images where it might trip and stumble, but this sort of computational photography puts real capability directly into the hands of the mainstream user.
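
The phone’s eraser relies on machine learning models rather than anything this simple, but the underlying idea of masking an object and filling the hole from its surroundings can be sketched with classical inpainting in OpenCV. The filenames and the mask region below are placeholders, and this is an illustration of the concept, not the Pixel’s actual method.

```python
# Illustrative object removal via classical (Telea) inpainting in OpenCV.
# Not the learned model a phone uses; filenames and coordinates are placeholders.
import cv2
import numpy as np

photo = cv2.imread("pub_photo.jpg")

# White pixels in the mask mark the region to remove. In the phone's UI
# this corresponds to the rough outline drawn around the person.
mask = np.zeros(photo.shape[:2], dtype="uint8")
cv2.rectangle(mask, (200, 100), (450, 600), 255, -1)

# inpaint() fills the masked region by propagating colour and structure
# in from the surrounding pixels.
result = cv2.inpaint(photo, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("pub_photo_person_removed.jpg", result)
```

Where the learned approach wins is in synthesising plausible structure across larger gaps, which diffusion-based fills like this one tend to smear.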

The arrival of ProRes video recording on the latest iPhone 13 Pro is another example. H.264 and H.265 video encoding are useful, and result in small files of reasonably good quality, but ProRes is far better in terms of quality, even if the data storage requirements can be eye-watering – around 6GB per minute. Even so, it opens up a whole new world of possibilities for B-roll video recording, or for shooting in difficult locations where a professional-grade camera such as an Arri, Red or Blackmagic might not work.
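
The storage arithmetic behind that figure is easy to sanity-check. Taking the roughly 6GB per minute quoted above at face value, and assuming decimal gigabytes, a quick sketch shows the implied data rate and how fast a phone fills up:

```python
# Back-of-envelope arithmetic around the ~6GB-per-minute figure quoted
# above for 4K ProRes recording (decimal gigabytes assumed).
GB_PER_MINUTE = 6.0

implied_mbps = GB_PER_MINUTE * 1e9 * 8 / 60 / 1e6  # megabits per second
per_hour_gb = GB_PER_MINUTE * 60                   # storage for an hour of footage
minutes_on_256gb = 256 / GB_PER_MINUTE             # rough fit on a 256GB phone

print(f"Implied bitrate: {implied_mbps:.0f} Mbit/s")           # ~800 Mbit/s
print(f"One hour of footage: {per_hour_gb:.0f} GB")            # 360 GB
print(f"256GB phone: roughly {minutes_on_256gb:.0f} minutes")  # ~43 minutes
```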

Evaluating computational photography cameras isn’t easy if you want to do technical benchmarking. Far be it from me to suggest that vendors are being sneaky, spotting specific test charts and quietly compensating the image for any known quirks (although I wouldn’t put this past some companies). The toolkit stays the same: Imatest for lens and sensor capabilities, DSC Labs for test charts, and the incredibly punishing Xyla 21 test system from the same company. The last of these presents a test chart with 21 stops of dynamic range, plus an array of test setups and rotating platforms for evaluating motion artefacts, blur and judder.
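
For anyone unfamiliar with the unit, a stop is a doubling of light, so dynamic range in stops is simply the base-2 logarithm of the ratio between the brightest and darkest levels a camera can distinguish. The sketch below uses illustrative numbers, not measured data, to show why a 21-stop chart is such a brutal test.

```python
# Dynamic range expressed in stops: each stop is a doubling of light,
# so stops = log2(brightest / darkest). Illustrative values only.
import math

def stops(brightest: float, darkest: float) -> float:
    return math.log2(brightest / darkest)

# A 21-stop chart spans a contrast ratio of 2**21 to 1.
print(f"21 stops = {2 ** 21:,}:1 contrast ratio")  # 2,097,152:1

# Example: a sensor whose brightest clean patch reads 4,000 times the
# level of its darkest patch above the noise floor.
print(f"{stops(4000, 1):.1f} stops of usable range")  # ~12.0 stops
```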

It would be naïve to think we have reached peak picture quality with the Pixel 6 Pro and iPhone 13 Pro Max – there is more to come. But the arrival of both in my pocket has taken me back to those earlier days, when I sat on the side of a hill with a 5 x 4in plate camera and waited for the correct moment to take one shot. At least back then the quality came from optical and mechanical excellence. Today, it is down to the algorithms and the AI engines.