
Google Pixel 8 and Pixel 8 Pro: Camera deep-dive

Photo: Brendan Nystedt

Google’s latest Pixel smartphones follow the industry’s recent trends: more memory and more powerful processing, with enhanced photo and video features intended to set them apart from the featureless rectangular masses.

Particularly on the Pixel 8 Pro, Google has made major photographic enhancements, tweaking the three main cameras, adding a greater degree of manual control and, as you’d expect, squeezing as much out of its latest sensors as it can, in as simple a way as possible.

Hardware: Updated sensors, lenses, and focusing

The Pixel 8 Pro has an autofocus selfie camera for the first time ever.

Photo: Google

We’ve broken out the details of the cameras in the new Pixel 8 and 8 Pro. You’ll note that the equivalent focal lengths we quote aren’t the same as the ones given in Google’s launch materials. In each instance we’ve calculated them from the diagonal angles of view given by Google, rather than relying on the equivalent focal length numbers supplied.

Camera          Equiv. focal length    Sensor resolution / size        Aperture value

Pixel 8 Pro
Wide (Main)     25mm (82°)             50MP Octa PD (9.8 x 7.4mm)      F1.68
Ultra-wide      11mm (125.5°)          48MP Quad PD (6.4 x 4.8mm)      F1.95
Tele            112mm (21.8°)          48MP Quad PD (5.6 x 4.2mm)      F2.8
Selfie camera   20mm (95°)             10.5MP Dual PD (4.6 x 3.4mm)    F2.2

Pixel 8
Wide (Main)     25mm (82°)             50MP Octa PD (9.8 x 7.4mm)      F1.68
Ultra-wide      11mm (125.8°)          12MP ‘with AF’ (5.0 x 3.8mm)    F2.2
Selfie camera   20mm (95°)             10.5MP Dual PD (4.6 x 3.4mm)    F2.2

The Pixel 8 Pro gains the biggest upgrades, with Google saying all three of its main cameras receive more light than the ones in the 7 Pro. The main, wide-angle camera is said to gain 21% more light, which can be entirely attributed to the move from a lens with an F1.85 aperture to one that’s F1.68.

The Pixel 8 Pro’s cameras get the biggest improvements across the board.

Photo: Brendan Nystedt

Likewise, the telephoto lens on the 8 Pro gains 56% more light in its move from an F3.5 lens to an F2.8 optic. It’s the 11mm-equiv ultrawide lens that sees the biggest improvement, though, with Google saying it receives 105% more light than the one on the previous model. The move from F2.2 to F1.95 accounts for a third of a stop of that increase, but the camera also now uses a sensor that’s 63% larger, which gives a further 0.71EV of light capture, broadly justifying the figure quoted.
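
That arithmetic is easy to sanity-check: light gathered scales with the square of the f-number ratio and linearly with sensor area. A quick back-of-the-envelope sketch (our own working, not Google’s published math):

```python
import math

def stops_from_aperture(f_old, f_new):
    # Light scales with (f_old / f_new)^2, i.e. 2 * log2(f_old / f_new) stops.
    return 2 * math.log2(f_old / f_new)

def stops_from_area(area_ratio):
    # A sensor with a larger light-gathering area gains log2(ratio) stops.
    return math.log2(area_ratio)

aperture_gain = stops_from_aperture(2.2, 1.95)  # ~0.35 EV, a third of a stop
area_gain = stops_from_area(1.63)               # ~0.71 EV from a 63% larger sensor
total = aperture_gain + area_gain

# ~1.07, i.e. roughly doubling the light, in line with Google's "105% more"
print(round(2 ** total - 1, 2))
```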

Smartphones such as the Pixel combine multiple shots for the majority of their images, significantly boosting image quality (combining frames provides more signal, while helping cancel out noise, improving the signal to noise ratio). This is fundamental to modern smartphones’ ability to derive image quality that appears to transcend the expected limits of their tiny sensors. However, combining eight images from a larger sensor or from a camera with a brighter lens still helps improve the result: even in computational photography, more light is better.
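
The signal-to-noise benefit of frame stacking can be simulated in a few lines. This is a toy model, assuming simple Gaussian noise and an eight-frame average (the numbers are illustrative, not a description of Google’s actual pipeline):

```python
import random
import statistics

random.seed(0)
signal, noise_sigma = 100.0, 10.0
n_frames, n_trials = 8, 2000

def capture():
    # One noisy 'exposure' of a constant scene value.
    return signal + random.gauss(0, noise_sigma)

single = [capture() for _ in range(n_trials)]
stacked = [statistics.mean(capture() for _ in range(n_frames))
           for _ in range(n_trials)]

# Averaging N frames cuts noise by ~sqrt(N): 8 frames gives ~2.8x cleaner output.
print(statistics.stdev(single))   # roughly 10
print(statistics.stdev(stacked))  # roughly 10 / sqrt(8) = 3.5
```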

“Smartphones such as the Pixel 8 combine multiple shots for the majority of their images, significantly boosting image quality.”

The Pixel 8 Pro’s selfie camera gains autofocus, and Google says it’s the sharpest and best selfie camera it has put on one of its phones. The Pixel 8 has a comparable camera, with phase detection, but Google says its focus is fixed.

The main camera on the Pixel 8 gains the same 21% light boost as the 8 Pro’s, as its aperture is brightened from F1.85 to F1.68, but the specs of the ultrawide camera are unchanged compared to the Pixel 7’s, other than closer focusing, which now goes down to 3cm (1.2″).

Quad Bayer, high res, 2x ‘zoom’ and Dual Exposure

The Pixel 8 and Pixel 8 Pro use Quad Bayer technology in their cameras.

Although Google talks a lot about 48 and 50MP images, it’s worth noting that high resolution output is only half of the reason for using these high resolution sensors. In each instance they are Quad Bayer designs, which use an oversized version of the Bayer color filter pattern, stretched out over four pixels. This means you don’t capture the level of color resolution that a 48MP Bayer sensor would capture (and you have to creatively re-interpret the data to try to generate full-res images), but it means that it’s easy to combine data from neighboring rows of pixels.

2x “zoom”

As with Apple’s recent launches, Google offers modes in which the cameras crop into their central 25% region to give 12MP images with 2x ‘zoom.’ But, just as the full sensor isn’t quite the same as having a 48MP Bayer sensor, the (tiny) central 12MP region of the sensor isn’t capturing the same thing as a conventional 12MP sensor would. Each color patch on the sensor covers four pixels, so although you capture the luminance detail level of a 12MP sensor, you’re only capturing as much color resolution as a 3MP Bayer sensor would. Between clever processing and the fact that humans are much more sensitive to luminance resolution than color resolution, this shouldn’t be too much of a problem, but it’s not quite as simple as the phone makers like to make it sound.
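
The resolution arithmetic behind that claim is straightforward (sensor figures as described above):

```python
sensor_mp = 48
crop_mp = sensor_mp * 0.25     # central 25% of the frame gives a 12MP crop
color_sites_mp = crop_mp / 4   # one color filter per 2x2 quartet: 3MP of color

print(crop_mp, color_sites_mp)  # 12.0 3.0
```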

Low light

Another way of using Quad Bayer is to combine quartets of pixels so that they behave as single, larger pixels. This should allow higher quality images in low light, especially when multiple frames are shot and combined.
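
A toy sketch of that binning, assuming a simple 2x2 average (the pattern layout and pixel values are invented for illustration):

```python
def quad_bayer_pattern(h, w):
    # Quad Bayer: each cell of the conventional Bayer pattern is
    # stretched over a 2x2 block of pixels.
    bayer = [['R', 'G'], ['G', 'B']]
    return [[bayer[(r // 2) % 2][(c // 2) % 2] for c in range(w)]
            for r in range(h)]

def bin_2x2(raw):
    # Average each same-color 2x2 quartet into one larger 'pixel',
    # turning e.g. 48MP of Quad Bayer data into a 12MP Bayer mosaic.
    h, w = len(raw), len(raw[0])
    return [[sum(raw[2 * r + dr][2 * c + dc] for dr in (0, 1) for dc in (0, 1)) / 4.0
             for c in range(w // 2)] for r in range(h // 2)]

pattern = quad_bayer_pattern(4, 4)  # rows: RRGG / RRGG / GGBB / GGBB
raw = [[10, 12, 20, 22], [11, 13, 21, 23],
       [30, 32, 40, 42], [31, 33, 41, 43]]
print(bin_2x2(raw))  # [[11.5, 21.5], [31.5, 41.5]]
```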

Dual Exposure

Alternatively you can read alternate lines of pixels differently, then combine the results. This is what we believe Google means when it talks about Dual Exposure. What we know for certain is that the two exposures of Dual Exposure are taken simultaneously, to avoid any differences in motion. They’re also taken using the same exposure time (again, to ensure you don’t have half a sensor freezing the action and the alternate lines seeing blurred motion). Instead it seems Google is applying different levels of amplification to the two interleaved images: low gain to protect highlight information, and higher gain to improve the performance in the shadows. These two half-images can then be combined to give a single image with a wider dynamic range than either readout alone.
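
Our reading of the system can be sketched in a few lines. This is a toy model of our interpretation, not Google’s implementation: alternate lines get low and high gain from the same simultaneous exposure, and the merge prefers the cleaner high-gain sample unless it has clipped (all names and values here are invented for illustration):

```python
LOW_GAIN, HIGH_GAIN = 1.0, 4.0
FULL_WELL = 255.0  # clipping ceiling of our toy sensor

def read_line(photons, gain):
    # Amplify the captured signal, then clip at the sensor's ceiling.
    return [min(p * gain, FULL_WELL) for p in photons]

def merge(low, high):
    # Use the high-gain sample (cleaner shadows) unless it clipped,
    # in which case fall back to the low-gain line; rescale both to
    # a common brightness so the halves match.
    return [h / HIGH_GAIN if h < FULL_WELL else l / LOW_GAIN
            for l, h in zip(low, high)]

scene = [10.0, 60.0, 200.0]       # shadow, midtone, bright highlight
low = read_line(scene, LOW_GAIN)   # highlights survive: [10, 60, 200]
high = read_line(scene, HIGH_GAIN) # shadows boosted, highlight clips: [40, 240, 255]
print(merge(low, high))            # [10.0, 60.0, 200.0]
```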

Software: New UI, processing techniques, and AI

The Google Pixel 8 Pro has some exclusive software features, like the Pro Controls interface. Image: Google

Pro Controls (8 Pro only)

The new Pro Controls mode gives photographers much greater control over the photographic experience. Rather than just having lightness and contrast sliders, Pro Controls mode provides the ability to manually set shutter speed, ISO, white balance and focus.

Along with Pro Controls comes the option to output DNG files in up to 50MP resolution. These options can be selected independently: you get a JPEG/Raw selector, an Auto/Pro lens selection and the choice of whether the camera outputs 12 or 48MP images.

In both instances, the camera’s computational work is going on in the background, so while you can dictate the exposure, the camera will shoot and merge eight frames taken with that exposure, giving an image quality boost by combining the frames. However, they’ll have the level of motion blur (or lack of it) that your chosen shutter speed would dictate. Likewise, the high resolution DNG files are saved after ‘remosaicing’ (trying to re-create the level of color detail a conventional Bayer 48MP sensor would have captured), and after the alignment and combination of multiple frames. So you get the added flexibility of a Raw file in a widely-supported format, but it’s not ‘Raw’ in the sense of particularly resembling the original sensor data, if you’re a purist.

Best Take

The latest version of Google’s software includes ‘Best Take,’ a more flexible version of the existing Top Shot mode (which is still present on the phones). The phone still tries to choose what it thinks is the best image from a burst, but it then lets you choose from all the facial expressions of your subjects captured during the burst, meaning you can select the best smile or the funniest expression for each person in the scene and combine them in the sharpest version of the shot.

This feature will also apply to existing bursts of images taken with older cameras (including non-Pixel phones), meaning you can use it to fine-tune images you’ve previously taken, even if you were an iPhone user at the time.

Ultra HDR: a new standard for even brighter displays

The screen on the 8 Pro goes as bright as 2400 nits and can cover the entire range of the Display P3 space.

To take advantage of this, Google has introduced an ‘Ultra HDR’ mode designed to create images that can show brighter highlights on devices that can display them. In order to maintain compatibility with the majority of devices, this is done by creating a standard JPEG and an accompanying ‘gain map’ which indicates which parts of the image should be shown as brighter on HDR displays. This approach will be supported across the Android ecosystem but unfortunately means there’s yet another non-standard HDR format to contend with.
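
The gain-map idea itself is simple: store a per-pixel brightness multiplier alongside the SDR image, and let HDR-capable displays apply it. The sketch below illustrates only that concept, with an assumed two-stop headroom and invented pixel values; Ultra HDR’s actual encoding of the map inside the JPEG is more involved:

```python
import math

HEADROOM_STOPS = 2.0  # assumed headroom above SDR white, for illustration

def make_gain_map(sdr, hdr):
    # Per-pixel log2 ratio of HDR to SDR brightness, normalized to 0..1.
    return [max(0.0, min(1.0, math.log2(h / s) / HEADROOM_STOPS))
            for s, h in zip(sdr, hdr)]

def apply_gain_map(sdr, gain_map, display_stops):
    # An SDR-only display effectively uses display_stops=0 and just shows
    # the base JPEG; an HDR display boosts each pixel by its mapped share.
    return [s * 2 ** (g * min(display_stops, HEADROOM_STOPS))
            for s, g in zip(sdr, gain_map)]

sdr = [0.2, 0.5, 1.0]
hdr = [0.2, 1.0, 4.0]
gmap = make_gain_map(sdr, hdr)       # [0.0, 0.5, 1.0]
print(apply_gain_map(sdr, gmap, 0))  # SDR display: unchanged base image
print(apply_gain_map(sdr, gmap, 2))  # HDR display: full highlights restored
```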

“Ultra HDR means there’s yet another non-standard HDR format to contend with.”

While we understand the desire to maximize compatibility and ensure all images can easily be shared, the adoption of yet another approach to HDR takes us further away from a single standard gaining widespread adoption. Whether you consider this approach preferable to Apple’s rather opaque system, where both HDR and SDR versions are stored in a single wrapper and just the SDR version is used if you try to share it or upload it somewhere HDR images aren’t supported, is a matter of personal preference. But it does make it harder to imagine a day when we can shoot, edit and reliably share true HDR photos.

Video boost: Cloud-enhanced HDR and low light video

Google has added a Video Boost feature that allows its ‘HDR+’ processing system (which includes modes such as Night Sight) to be applied to video. The company says current processing hardware can deal with around 3MP per second, which isn’t nearly fast enough for video, given that 4K video is typically thirty 8.3MP images per second. Google’s solution is to upload the footage to its servers, perform the processing there and present an enhanced version of your video a couple of hours after you originally shot it.
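
The rough arithmetic behind that bottleneck, using the figures quoted above:

```python
on_device_mp_per_s = 3   # quoted HDR+ processing throughput
frame_mp = 8.3           # one 4K (3840 x 2160) frame is roughly 8.3MP
fps = 30

required = frame_mp * fps                    # ~249 MP/s of incoming video
shortfall = required / on_device_mp_per_s    # ~83x short of real time
print(round(required), round(shortfall))
```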

There’s a 30 minute limit for videos that will be processed in this way, and we’ve not yet seen the interface to know how obvious it is that your enhanced video is ready (so I guess don’t rush to share your video until the re-processed version has dropped?). Google didn’t mention any costs or restrictions associated with the service, so it’ll be interesting to see whether, over the long term, it is happy to shoulder the cost of re-processing every throwaway video clip each of its users takes and never shares.


Author:
Richard Butler
Source: Dpreview
