
How to use the Deep Fusion iPhone 11 and iPhone 11 Pro camera feature

iOS 13.2 has arrived, bringing Apple’s Deep Fusion camera tech, which the company described as “computational photography mad science,” to the iPhone 11 lineup. Follow along for how to use Deep Fusion, including how it works and when the feature kicks in.

Deep Fusion is a new image processing system that works automatically behind the scenes in certain conditions. Here’s how Apple describes it:

iOS 13.2 introduces Deep Fusion, an advanced image processing system that uses the A13 Bionic Neural Engine to capture images with dramatically better texture, detail, and reduced noise in lower light…

Unlike the new Night mode feature or other camera options, there’s no user-facing signal that Deep Fusion is being used; it’s automatic and invisible (on purpose).

However, there are a few instances when Deep Fusion won’t be used: any time you’re using the ultra wide lens, any time “Photos Capture Outside the Frame” is turned on, and when shooting burst photos.

Also, keep in mind that Deep Fusion is only available on the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max.

  1. Make sure you’ve updated your iPhone 11, 11 Pro, or 11 Pro Max to iOS 13.2
  2. Head to Settings > Camera and make sure Photos Capture Outside the Frame is turned off
  3. Make sure you’re using the wide or telephoto lens (1x or greater in Camera app)
  4. Deep Fusion is now working behind the scenes when you shoot photos (won’t work with burst photos)


As described by Apple VP Phil Schiller on stage at the iPhone 11 event:

So what is it doing? How do we get an image like this? Are you ready for this? This is what it does. It shoots nine images, before you press the shutter button it’s already shot four short images, four secondary images. When you press the shutter button it takes one long exposure, and then in just one second, the Neural Engine analyzes the fused combination of long and short images picking the best among them, selecting all the pixels, and pixel by pixel, going through 24 million pixels to optimize for detail and low noise, like you see in the sweater there. It’s amazing this is the first time a Neural Processor is responsible for generating the output image. It is computational photography mad science.
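To make that description a bit more concrete, here is a deliberately simplified Swift sketch of the pixel-by-pixel selection idea. It is not Apple’s implementation; Deep Fusion runs privately on the A13 Neural Engine with no public API, and every type, function, and value below is a made-up illustration.

  // Conceptual sketch only, not Apple's pipeline: fuse several pre-captured
  // short frames with one long exposure and, pixel by pixel, keep whichever
  // candidate scores best.
  struct Exposure {
      var pixels: [Float]          // hypothetical per-pixel luminance values
  }

  // Placeholder "best pixel" rule; the real system uses a learned model
  // that optimizes for texture, detail, and low noise.
  func score(_ value: Float) -> Float {
      value                        // stand-in heuristic: prefer the brighter sample
  }

  // Choose the best-scoring candidate at every pixel position.
  func fuse(short: [Exposure], long: Exposure) -> [Float] {
      let candidates = short + [long]
      return long.pixels.indices.map { i in
          candidates.map { $0.pixels[i] }.max(by: { score($0) < score($1) })!
      }
  }

  // Example: four pre-captured short frames plus one long exposure, as in the keynote.
  let shortFrames = (0..<4).map { _ in Exposure(pixels: [0.2, 0.5, 0.1]) }
  let longFrame = Exposure(pixels: [0.3, 0.4, 0.6])
  print(fuse(short: shortFrames, long: longFrame))   // [0.3, 0.5, 0.6]

In the real pipeline, the “best pixel” decision comes from a learned model weighing texture, detail, and noise across roughly 24 million pixels, not a one-line brightness comparison.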

Apple told The Verge that it made Deep Fusion invisible to users for a seamless experience:

There’s no indicator in the camera app or in the photo roll, and it doesn’t show up in the EXIF data. Apple tells me that is very much intentional, as it doesn’t want people to think about how to get the best photo. The idea is that the camera will just sort it out for you.

Here are more specifics about how it works (via The Verge):

  • When Deep Fusion is active (a rough sketch of this logic follows the list):
    • With the wide (standard) lens, Smart HDR is used in bright to medium-lit environments, while Deep Fusion activates for medium to low-lit scenes (Night mode naturally kicks in for dim shots)
    • The telephoto lens generally uses Deep Fusion, except for very brightly lit shots, where Smart HDR takes over
    • With the ultra wide lens, Deep Fusion is never activated; Smart HDR is used instead
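If it helps to picture that behavior, here is a rough Swift sketch of the decision logic described above. The lens, light-level, and mode names are illustrative assumptions; Apple doesn’t publish the actual thresholds, and the switch between modes happens automatically.

  // Illustrative only: this just encodes the behavior The Verge describes,
  // not Apple's internal (and unpublished) activation logic.
  enum Lens { case ultraWide, wide, telephoto }
  enum LightLevel { case bright, medium, low, dim }
  enum Processing { case smartHDR, deepFusion, nightMode }

  func processingMode(for lens: Lens, in light: LightLevel) -> Processing {
      switch lens {
      case .ultraWide:
          return .smartHDR                          // Deep Fusion never runs on the ultra wide lens
      case .wide:
          switch light {
          case .bright, .medium: return .smartHDR   // bright to medium light
          case .low:             return .deepFusion // medium to low light (the boundary is fuzzy)
          case .dim:             return .nightMode  // Night mode naturally kicks in
          }
      case .telephoto:
          // Generally Deep Fusion, except very bright scenes where Smart HDR takes over.
          return light == .bright ? .smartHDR : .deepFusion
      }
  }

  print(processingMode(for: .wide, in: .low))         // deepFusion
  print(processingMode(for: .telephoto, in: .bright)) // smartHDR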



Author: Michael Potuck
Source: 9to5Mac
