
Apple execs detail approach to iPhone 12 camera design in new interview

With iPhone 12 mini and Pro Max orders opening this week and the devices arriving for the first customers on November 13, Apple’s VP of camera software engineering Jon McCormack and product line manager Francesca Sweet have shared a behind-the-scenes look at the company’s philosophy for iPhone camera design, its goals for everyone from everyday users to pros, and more.

McCormack and Sweet talked with PetaPixel for a new interview diving into Apple’s thinking behind designing its iPhone cameras. Not surprisingly, the big picture they revealed is a holistic approach that spans both software and hardware:

both made clear that the company thinks of camera development holistically: it’s not just the sensor and lenses, but also everything from Apple’s A14 Bionic chip, to the image signal processing, to the software behind its computational photography.

As for the main goal, Apple wants to make capturing shots with an iPhone camera so seamless that users aren’t distracted from what’s happening in the moment.

“As photographers, we tend to have to think a lot about things like ISO, subject motion, et cetera,” McCormack said. “And Apple wants to take that away to allow people to stay in the moment, take a great photo, and get back to what they’re doing.”

McCormack also highlighted that this even applies to “serious photographers”:

He explained that while more serious photographers want to take a photo and then go through a process in editing to make it their own, Apple is doing what it can to compress that process down into the single action of capturing a frame, all with the goal of removing the distractions that could possibly take a person out of the moment.

“We replicate as much as we can to what the photographer will do in post,” McCormack continued. “There are two sides to taking a photo: the exposure, and how you develop it afterwards. We use a lot of computational photography in exposure, but more and more in post and doing that automatically for you. The goal of this is to make photographs that look more true to life, to replicate what it was like to actually be there.”

Apple uses machine learning to process different aspects of a photo individually.

“The background, foreground, eyes, lips, hair, skin, clothing, skies. We process all these independently like you would in Lightroom with a bunch of local adjustments,” he explained. “We adjust everything from exposure, contrast, and saturation, and combine them all together.”
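
Apple doesn’t share implementation details, but as a rough sketch of what segmentation-driven local adjustments look like in general, here is a hypothetical Python/NumPy example (not Apple’s pipeline; the region names, masks, and settings are made up): each region’s mask gets its own exposure and saturation tweak before everything is blended back together.

import numpy as np

def adjust(image, exposure=1.0, saturation=1.0):
    # Simple exposure gain followed by a saturation scale on an RGB float image.
    out = np.clip(image * exposure, 0.0, 1.0)
    gray = out.mean(axis=-1, keepdims=True)   # crude luminance proxy
    return np.clip(gray + (out - gray) * saturation, 0.0, 1.0)

def composite(image, masks, settings):
    # Blend per-region adjustments using soft masks that sum to 1 at each pixel.
    result = np.zeros_like(image)
    for name, mask in masks.items():
        exposure, saturation = settings[name]
        result += mask[..., None] * adjust(image, exposure, saturation)
    return result

# Hypothetical example: a tiny image split into "sky" and "subject" regions.
img = np.random.rand(4, 4, 3)
masks = {"sky": np.zeros((4, 4)), "subject": np.ones((4, 4))}
masks["sky"][:2, :] = 1.0        # top half is "sky"
masks["subject"][:2, :] = 0.0    # bottom half is "subject"
settings = {"sky": (0.9, 1.2), "subject": (1.1, 1.0)}
out = composite(img, masks, settings)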

Francesca Sweet commented on the improvements that the iPhone 12 lineup’s cameras bring to Night mode.

“The new wide camera, improved image fusion algorithms, make for lower noise and better detail,” she said. “With the Pro Max we can extend that even further because the bigger sensor allows us to capture more light in less time, which makes for better motion freezing at night.”
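
The interview doesn’t detail the fusion algorithms, but as a toy illustration of why combining frames lowers noise (a purely hypothetical Python/NumPy sketch, not Apple’s method), averaging several aligned exposures reduces random noise roughly by the square root of the frame count:

import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 0.5)                           # "true" scene brightness
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(8)]

single_noise = np.std(frames[0] - scene)                 # roughly 0.10
fused = np.mean(frames, axis=0)                          # naive fusion: average the aligned frames
fused_noise = np.std(fused - scene)                      # roughly 0.10 / sqrt(8), about 0.035
print(f"single frame noise: {single_noise:.3f}, fused noise: {fused_noise:.3f}")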

And when it comes to the new ProRAW option, McCormack shared that the idea came from Apple asking whether it could offer the benefits of shooting in RAW while keeping the advantages of computational photography.

The full interview is an interesting read; check it out at PetaPixel.


Author: Michael Potuck
Source: 9to5Mac
