
Apple’s Deep Fusion hands-on: AI sharpens photos like HDR fixes colors

Digital photographers coined the term “pixel peepers” years ago to denote — mostly with scorn — people who focused on flaws in the individual dots that create photos rather than the entirety of the images. Zooming in to 100%, it was said, is nothing but a recipe for perpetual disappointment; instead, judge each camera by the overall quality of the photo it takes, and don’t get too mired in the details.

Until now, Apple’s approach to digital photography has been defined by its commitment to improving the quality of the big picture without further compromising pixel-level quality. I say “further” because there’s no getting around the fact that tiny phone camera sensors are physically incapable of matching the pixel-level results of full-frame DSLR camera sensors in a fair fight. Bigger sensors can capture more light and almost invariably more actual pixels than the iPhone’s 12-megapixel cameras. Shoot one photo with practically any DSLR against an iPhone, and the DSLR is going to win on pixel-level quality.

Unfortunately for DSLRs, Apple is not interested in a fair fight. After an on-stage preview at a September media event, a new iPhone 11 feature called Deep Fusion arrived in preview form yesterday with the first beta of iOS 13.2. Deep Fusion is Apple’s name for a new machine learning-aided computational photography trick new iPhones can apply on the fly to enhance detail.

While the DSLR snaps one photo in a split second, A13 Bionic-powered iPhone 11 series cameras will snap three, five, or seven, using tricks like beginning to shoot before the shutter button is pressed and shooting multiple exposures so fast that DSLRs can’t keep up. And while traditional photographers wrestle with questions over the integrity of composite images, Apple AI will pick the best parts from a stack of them and turn them into one idealized “photo” in the time it takes you to blink your eye.
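
To make that fusion step concrete, here is a rough sketch of one naive way a burst could be merged: measure local sharpness in every frame and, for each pixel, keep the value from whichever frame is sharpest there. Apple has not published how Deep Fusion actually works, so the Laplacian sharpness measure, the function names, and the simulated seven-frame burst below are illustrative assumptions, not Apple’s pipeline.

```python
# Illustrative sketch only: naive per-pixel detail fusion over a burst of
# aligned frames. This is NOT Apple's Deep Fusion pipeline, which is unpublished.
import numpy as np


def local_sharpness(frame: np.ndarray) -> np.ndarray:
    """Approximate per-pixel sharpness with the magnitude of a discrete Laplacian."""
    lap = (
        -4.0 * frame
        + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
        + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
    )
    return np.abs(lap)


def fuse_burst(frames: list) -> np.ndarray:
    """For each pixel, keep the value from whichever frame is locally sharpest."""
    stack = np.stack(frames)                              # shape (N, H, W)
    sharpness = np.stack([local_sharpness(f) for f in frames])
    best = np.argmax(sharpness, axis=0)                   # sharpest frame index per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]


if __name__ == "__main__":
    # Simulate a seven-frame burst: same scene, different noise in each frame.
    rng = np.random.default_rng(0)
    scene = rng.random((256, 256))
    burst = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(7)]
    print(fuse_burst(burst).shape)  # (256, 256)
```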

If you accept high-dynamic-range (HDR) images as “photography” rather than art — using three or seven exposures to create one image with idealized shadow, highlight, and color detail — you can’t really object to the use of similar techniques to enhance sharpness. Especially when it works so quickly and automatically that you don’t even know it’s happening.
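
For a sense of what that kind of multi-exposure compositing involves, here is a toy exposure-fusion sketch: each pixel in each bracketed frame is weighted by how close it sits to a usable mid-tone, so crushed shadows and blown highlights contribute little to the merge. It is a generic, simplified technique; the sigma value, gain factors, and function names are assumptions for illustration, not any vendor’s actual HDR pipeline.

```python
# Illustrative sketch only: a toy exposure-fusion merge in the spirit of HDR
# compositing, not the pipeline Apple (or any specific camera) ships.
import numpy as np


def exposure_weight(values: np.ndarray, sigma: float = 0.2) -> np.ndarray:
    """Favor mid-tone pixels; crushed shadows and blown highlights get low weight."""
    return np.exp(-((values - 0.5) ** 2) / (2.0 * sigma ** 2))


def fuse_exposures(frames: list) -> np.ndarray:
    """Blend a bracketed stack into one image with balanced shadows and highlights."""
    stack = np.stack(frames)                              # shape (N, H, W), values in [0, 1]
    weights = exposure_weight(stack)
    weights /= weights.sum(axis=0, keepdims=True)         # normalize weights per pixel
    return (weights * stack).sum(axis=0)


if __name__ == "__main__":
    # Simulate under-, mid-, and over-exposed renderings of the same scene.
    rng = np.random.default_rng(1)
    scene = rng.random((128, 128))
    brackets = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.5, 1.0, 2.0)]
    merged = fuse_exposures(brackets)
    print(round(float(merged.min()), 3), round(float(merged.max()), 3))
```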

How profound is the difference? Well, that depends, but let’s take the two comparison images immediately above and at the top of this page as examples. Both are 100% crops from much larger photos. You can clearly see stronger pixel-level details in the skin and metal. The wood is sharp rather than soft. Without getting into too much technical jargon, Apple suggested that fabrics and other textures would be more crisply rendered with Deep Fusion enabled, and that’s definitely true…

…except when you zoom out to look at the entire image. In most of the comparison images I snapped using an iPhone 11 Pro, the overall photos with and without Deep Fusion turned on weren’t night-and-day different. In fact, you’d probably be hard pressed to tell me which of the following two images used Deep Fusion without assistive captions.

Above: The same wood image, zoomed out for full viewing, with Deep Fusion turned off.

Above: The same wood image, zoomed out for full viewing, with Deep Fusion turned on.

Remember the discussion of pixel peepers? Well, they’re the crowd that may* be most impressed by Deep Fusion. Even without the feature turned on, the overall image is going to look excellent by smartphone standards. Perhaps only detail obsessives will really care that the pores on a person’s skin or threads of a given shirt are now obvious on closer inspection.

The asterisk is there because Apple’s decision to exploit its super-fast A13 processor for AI trickery will surely divide some traditional photographers from the modern masses. Well before there was a #nofilter movement — currently at 261 million posts on the filter-focused app Instagram — many photographers argued that adulterating a photo by compositing it, Photoshopping it, or even post-processing its colors was cheating: enough to disqualify potential winners from photo contests and call professionals’ reputations into question.

There have been enough valid examples of abuse that this concern isn’t unfounded. Who can trust a “photo” of a president or a public space that has been doctored to add or subtract additional elements? An image made from splicing together photos and/or adding hand-crafted edits is more art than photography, right?

Apple’s use of machine learning and AI for Deep Fusion arguably removes the human element from that equation. You literally flip a switch — in iOS 13.2 beta 1, confusingly labeled “Photos Capture Outside the Frame” — to turn the feature on or off. If it’s on, you’re giving the iPhone permission to extract maximum detail from a series of immediately related photos and deliver that final image to you as your photograph. If it’s off, you’re not, and you get the more conventional “off” results shown above.

Except that even with Deep Fusion “off,” the iPhone is still going to hand you the best results from multiple shots as auto-composited HDR images with properly balanced highlights and shadows. It’s still going to clean up noise in its raw image sensor output and apply sharpening and/or anti-lens-distortion filters to produce cleaner images. Many cameras — even some DSLRs — now do these sorts of things by default, allowing you to manually turn down the sharpening or adjust the punchy color balance if you don’t want the processing to be so aggressive.
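
For reference, a default sharpening pass is often some variant of an unsharp mask: blur the frame, then add back a fraction of the difference between the original and the blur. The sketch below is a generic textbook version with arbitrary radius and amount values, not the filter any particular camera maker ships.

```python
# Illustrative sketch only: a textbook unsharp-mask sharpening pass of the kind
# many cameras apply by default; the radius and amount values are arbitrary.
import numpy as np


def box_blur(frame: np.ndarray, radius: int = 1) -> np.ndarray:
    """Cheap blur: average shifted copies of the frame within a square window."""
    shifts = [
        np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        for dy in range(-radius, radius + 1)
        for dx in range(-radius, radius + 1)
    ]
    return np.mean(shifts, axis=0)


def unsharp_mask(frame: np.ndarray, amount: float = 0.6) -> np.ndarray:
    """Sharpen by adding back a fraction of the difference between the frame and its blur."""
    blurred = box_blur(frame)
    return np.clip(frame + amount * (frame - blurred), 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    soft = box_blur(rng.random((64, 64)), radius=2)       # deliberately soft test image
    print(unsharp_mask(soft, amount=0.8).shape)           # (64, 64)
```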

Above: This image snapped with a detailed texture and Deep Fusion turned on looked indistinguishable from one where Deep Fusion was off.

Traditionalists aside, virtually everyone is going to leave Deep Fusion on. Most of the time, it doesn’t noticeably change either the overall quality level of images or the apparent speed of shooting photos — at least, compared against the same phone with Deep Fusion off. When it helps, it helps, and it never seems to hurt. Moreover, the iPhone 11-series phones offer noticeable camera detail improvements over their predecessors even without Deep Fusion, so users are in great shape with or without AI assistance.

As of today, AI-based photo-enhancing features like Deep Fusion seem a little ahead of their time. But just as with Google’s Night Sight, phones that ship without a similarly capable feature next year will be at a disadvantage — at least for those who care about the finer details in their photos.


Author: Jeremy Horwitz
Source: VentureBeat
