Since the iPhone 12 Pro Max marks the first time in a while that Apple has changed the size of its camera sensor, PetaPixel spoke to two Apple executives who outlined the company’s vision and design philosophy behind camera development.
In an interview with Apple’s Product Line Manager for iPhone, Francesca Sweet, and its Vice President of Camera Software Engineering, Jon McCormack, both executives made clear that the company thinks of camera development holistically: it’s not just the sensor and lenses, but everything from Apple’s A14 Bionic chip, to the image signal processing, to the software behind its computational photography.
Design Philosophy
Apple says that its main goal for smartphone photography is to let people live their lives and capture photos of that life without being distracted by the technology.
“As photographers, we tend to have to think a lot about things like ISO, subject motion, et cetera,” McCormack said. “And Apple wants to take that away to allow people to stay in the moment, take a great photo, and get back to what they’re doing.”
He explained that while more serious photographers want to take a photo and then make it their own in editing, Apple is doing what it can to compress that process down into the single action of capturing a frame, all with the goal of removing distractions that could take a person out of the moment.
“We replicate as much as we can of what the photographer will do in post,” McCormack continued. “There are two sides to taking a photo: the exposure, and how you develop it afterwards. We use a lot of computational photography in exposure, but more and more in post and doing that automatically for you. The goal of this is to make photographs that look more true to life, to replicate what it was like to actually be there.”
McCormack says that Apple achieves this by using machine learning to break down a scene into more easily understood pieces.
“The background, foreground, eyes, lips, hair, skin, clothing, skies. We process all these independently like you would in Lightroom with a bunch of local adjustments,” he explained. “We adjust everything from exposure, contrast, and saturation, and combine them all together.”
Speaking specifically about Apple’s Smart HDR technology, McCormack explained how we already see the benefits of this kind of computational photography.
“Skies are notoriously hard to really get right, and Smart HDR 3 allows us to segment out the sky and treat it completely independently and then blend it back in to more faithfully recreate what it was like to actually be there.”
Restaurants and bars are another challenging environment.
“All the natural ambient light is annoying to a photographer. Mixed, low light messes with color. We understand what food looks like, and we can optimize the color and saturation accordingly to reproduce it much more faithfully.”
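Taken together, this maps onto a familiar masked-adjustment workflow: segment the frame, tune each region separately, then blend the regions back together with their masks. The sketch below is a minimal, hypothetical illustration in NumPy; the masks, settings, and function names are assumptions for the example, not Apple’s pipeline.

```python
import numpy as np

def adjust(image, exposure=0.0, contrast=1.0, saturation=1.0):
    """Apply simple exposure/contrast/saturation tweaks to an RGB float image."""
    out = image * (2.0 ** exposure)            # exposure in stops
    out = (out - 0.5) * contrast + 0.5         # contrast around middle grey
    gray = out.mean(axis=-1, keepdims=True)
    out = gray + (out - gray) * saturation     # push colors toward or away from grey
    return np.clip(out, 0.0, 1.0)

def blend_segments(image, masks, settings):
    """Tune each segment independently, then recombine it using its mask."""
    result = image.copy()
    for name, mask in masks.items():           # e.g. "sky", "skin", "food", "background"
        tuned = adjust(image, **settings.get(name, {}))
        result = result * (1 - mask[..., None]) + tuned * mask[..., None]
    return result

# Hypothetical usage: `image` is an HxWx3 float array in [0, 1]; each mask is an
# HxW float array in [0, 1] produced by a segmentation model.
# settings = {"sky": {"saturation": 1.2, "contrast": 1.1},
#             "skin": {"exposure": 0.2, "saturation": 0.95}}
# graded = blend_segments(image, masks, settings)
```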
McCormack repeatedly mentioned the feeling of recreating “what it was like to actually be there,” clearly an important goal for the smartphone company.
Sweet was keen to point out the gains Apple has made specifically in low-light photography thanks to Apple’s Night Mode, which builds on Smart HDR.
“The new wide camera and improved image fusion algorithms make for lower noise and better detail,” she said. “With the Pro Max we can extend that even further because the bigger sensor allows us to capture more light in less time, which makes for better motion freezing at night.”
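As a rough back-of-the-envelope illustration of that last point (the numbers below are assumptions for the example, not figures from the interview): a sensor that gathers more light per unit time can reach the same exposure with a shorter shutter time, and a shorter shutter time is what reduces motion blur at night.

```python
# Exposure time needed for a given light level scales inversely with how much
# light the sensor gathers per unit time (roughly, its light-collecting area).
def equivalent_shutter(base_time_s: float, light_gain: float) -> float:
    """Shutter time on a bigger sensor that gathers the same total light."""
    return base_time_s / light_gain

old_shutter = 1 / 15                                  # hypothetical handheld night exposure
new_shutter = equivalent_shutter(old_shutter, 1.5)    # assume ~50% more light per unit time
print(f"{old_shutter:.4f} s -> {new_shutter:.4f} s")  # shorter exposure freezes motion better
```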
Apple ProRAW
Apple’s upcoming ProRAW format was one of the most talked-about pieces of new technology mentioned in Apple’s reveal of the iPhone 12. To this point, McCormack told PetaPixel, photographers have had to choose between getting a RAW file and taking advantage of computational photography.
McCormack explained that Apple asked itself: “But what if you could have both?”
Apple created a new imaging pipeline that applies its computational photography techniques and then saves the result of those computations out to a digital negative. McCormack explained that this method allowed Apple to give photographers full control of images in-camera, in real time.
“When you take the image and take it into the digital darkroom, all of the tonal range is there,” he said.
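ProRAW itself is Apple’s format (built on the DNG specification), so the following is only a loose conceptual sketch of the pipeline McCormack describes, with hypothetical names: fuse several exposures computationally, but keep the merged result in linear, un-tone-mapped form alongside its metadata so the full tonal range is still there for later editing.

```python
import numpy as np

def fuse_frames(frames, weights=None):
    """Merge several aligned linear-RGB exposures into one lower-noise frame.
    frames: list of HxWx3 float arrays in linear light, already aligned."""
    stack = np.stack(frames, axis=0)
    if weights is None:
        weights = np.full(len(frames), 1.0 / len(frames))
    return np.tensordot(weights, stack, axes=1)

def to_digital_negative(fused, metadata):
    """Stand-in for writing a linear digital negative: the fused data stays
    un-tone-mapped, so the 'digital darkroom' still has the full tonal range."""
    return {
        "linear_rgb": np.clip(fused * 65535, 0, 65535).astype(np.uint16),
        "metadata": metadata,   # e.g. ISO, shutter time, white balance
    }

# Hypothetical usage: a burst of frames is fused, then stored for later editing.
# negative = to_digital_negative(fuse_frames(burst), {"iso": 400, "shutter_s": 1 / 30})
```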
The New Sensor
When asked about the new sensor and what Apple thinks about the sentiment that it took too long to offer a larger one, McCormack explained Apple’s perspective.
“It’s not as meaningful to us anymore to talk about one particular speed and feed of an image, or camera system,” he said. “As we create a camera system we think about all those things, and then we think about everything we can do on the software side.”
Deep Fusion provided Apple with a lot more noise-reduction capability, and while the company could have gone with a larger sensor, it instead asked itself what it could do with the entire image-processing picture before changing the physical parts.
“You could of course go for a bigger sensor, which has form factor issues, or you can look at it from an entire system to ask if there are other ways to accomplish that,” McCormack said of Apple’s perspective. “We think about what the goal is, and the goal is not to have a bigger sensor that we can brag about. The goal is to ask how we can take more beautiful photos in more conditions that people are in. It was this thinking that brought about Deep Fusion, Night Mode, and temporal image signal processing.”
McCormack emphasized that because Apple is developing the entire system, it sees things differently than if it were only responsible for one or two parts of the process.
“We don’t tend to think of a single axis like ‘if we go and do this kind of thing to hardware’ then a magical thing will happen. Since we design everything from the lens to the GPU and CPU, we actually get to have many more places that we can do innovation.”
Author: Jaron Schneider
Source: PetaPixel