
How to use Deep Fusion with iPhone SE 3, iPhone 13, and more

Apple’s Deep Fusion tech, which the company describes as “computational photography mad science,” first arrived with the iPhone 11. It’s now even supported on the iPhone SE 3, alongside the iPhone 12 and 13. Here’s how to turn on Deep Fusion on iPhone, how it works, and when the feature kicks in.

Deep Fusion is an image processing system that works automatically behind the scenes in certain conditions. Apple says the feature is able to produce “images with dramatically better texture, detail, and reduced noise in lower light.”

Unlike the iPhone’s Night mode feature or other camera options, there’s no user-facing signal that Deep Fusion is being used; it’s automatic and invisible (on purpose).

However, there are a few instances when Deep Fusion won’t be used: any time you’re shooting with the ultra wide lens, any time the “Photos Capture Outside the Frame” setting is turned on, and when shooting burst photos.

How to turn on Deep Fusion on iPhone cameras

Keep in mind that Deep Fusion is only available on iPhone 11, 12, 13, and SE 3.

  1. Head to the Settings app then swipe down and tap Camera
  2. Make sure Photos Capture Outside the Frame is turned off
  3. Make sure you’re using the wide (standard) or telephoto lens, 1x or greater
  4. Deep Fusion is now working behind the scenes when you shoot photos (won’t work with burst photos)

How does Deep Fusion work?

As described by Apple’s former VP Phil Schiller:

So what is it doing? How do we get an image like this? Are you ready for this? This is what it does. It shoots nine images, before you press the shutter button it’s already shot four short images, four secondary images. When you press the shutter button it takes one long exposure, and then in just one second, the Neural Engine analyzes the fused combination of long and short images picking the best among them, selecting all the pixels, and pixel by pixel, going through 24 million pixels to optimize for detail and low noise, like you see in the sweater there. It’s amazing, this is the first time a Neural Engine is responsible for generating the output image. It is computational photography mad science.
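Apple hasn’t published how the Neural Engine actually fuses these frames, but the process Schiller describes — several short exposures plus one long exposure, combined pixel by pixel for detail and low noise — can be illustrated with a toy sketch. Everything here (grayscale pixel lists, the averaging, the brightness-based blend weight) is a hypothetical stand-in, not Apple’s algorithm:

```python
# Toy illustration of multi-frame fusion (NOT Apple's actual pipeline).
# Assumption: frames are grayscale pixel lists of equal length. The short
# frames freeze detail; the single long exposure gathers more light.

def fuse(short_frames, long_frame):
    """Fuse several short exposures with one long exposure, pixel by pixel."""
    fused = []
    for i, long_px in enumerate(long_frame):
        shorts = [frame[i] for frame in short_frames]
        short_avg = sum(shorts) / len(shorts)  # averaging suppresses noise
        # Darker pixels lean on the long exposure; brighter ones keep the
        # detail of the short frames -- a crude stand-in for the per-pixel
        # selection Schiller attributes to the Neural Engine.
        w_long = 1.0 - (short_avg / 255.0)
        fused.append(w_long * long_px + (1.0 - w_long) * short_avg)
    return fused

shorts = [[100, 40, 200], [104, 44, 196], [96, 36, 204], [100, 40, 200]]
long_exposure = [120, 80, 210]
print(fuse(shorts, long_exposure))
```

Each output pixel lands between the short-frame average and the long exposure, weighted by how dark the scene is at that point — the real system makes this choice with a trained model across 24 million pixels.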

When does it work?

Apple told The Verge that it made Deep Fusion invisible to users for a seamless experience:

There’s no indicator in the camera app or in the photo roll, and it doesn’t show up in the EXIF data. Apple tells me that is very much intentional, as it doesn’t want people to think about how to get the best photo. The idea is that the camera will just sort it out for you.

Here are more specifics about when Deep Fusion is active:

  • With the wide (standard) lens, Smart HDR is used in bright to medium-lit environments, while Deep Fusion activates for medium to low-lit scenes (Night mode naturally kicks in for dim shots)
  • The telephoto lens generally uses Deep Fusion, except for very bright shots, where Smart HDR takes over
  • With the ultra wide lens, Deep Fusion is never activated; Smart HDR is used instead
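The lens-and-lighting rules above amount to a simple decision table. Here’s a hypothetical sketch of that logic — Apple doesn’t expose it anywhere, so the function name, the lens/light labels, and the ordering of the checks are all illustrative assumptions:

```python
# Hypothetical decision logic for which pipeline processes a shot.
# lens: "ultra_wide", "wide", or "telephoto"
# light: "bright", "medium", "low", or "dim"

def processing_mode(lens, light):
    """Return the pipeline the article's rules would pick (illustrative only)."""
    if lens == "ultra_wide":
        return "Smart HDR"      # Deep Fusion never runs on the ultra wide
    if light == "dim":
        return "Night mode"     # dim scenes fall through to Night mode
    if lens == "telephoto":
        # Telephoto generally uses Deep Fusion, except very bright shots
        return "Smart HDR" if light == "bright" else "Deep Fusion"
    # Wide (standard) lens: Smart HDR bright-to-medium, Deep Fusion below that
    return "Smart HDR" if light in ("bright", "medium") else "Deep Fusion"

print(processing_mode("wide", "low"))
```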



Author: Michael Potuck
Source: 9TO5Google
