Google’s recently-announced Pixel 2 and Pixel 2 XL can mimic a DSLR’s depth-of-field effect despite having only one camera on the back. But how?
This so-called Portrait Mode has actually been around for a while in the Android world; it was just never heavily publicized, perhaps because it never worked quite as well as one would have hoped. The feature suddenly became a must-have for flagship devices last year with the introduction of the iPhone 7 Plus and the growing popularity of dual-lens cameras. With almost all Android OEMs moving towards offering dual-lens cameras on their flagships (and even midrangers), it was particularly odd that Google stuck with single-lens cameras for its flagships this year.
Admittedly, Portrait Mode was one of the first things that came to my mind when I first saw this year’s Pixel 2 duo. I thought, “These phones have one camera on the back, so they must lack this bokeh effect that everyone else seems to be doing.” How could Google think that not offering Portrait Mode was OK? I was devastated. That is, until the search giant explained that the Pixel 2 duo actually has a brand-new Portrait Mode, which works just as well despite the absence of a dual-lens setup.
My assumption that the Pixel 2 duo wouldn’t have this increasingly popular camera feature came from the fact that the most common method of synthetically blurring out the background of a photo is through having two cameras. Usually, manufacturers use two lenses to estimate the distance of every point and replace the pixels that form the background with a blur. The foreground, whether it’s your partner, friend, or Instagram-famous cat, becomes the sole object in focus. So how do the Pixel 2 and Pixel 2 XL, having just one camera, give you that DSLR-like shallow depth-of-field effect when taking photos of people and objects?
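The dual-lens approach described above rests on a classic stereo relation: the farther a point is from the camera, the less it shifts (its “disparity”) between the two viewpoints. A minimal sketch of that relation, with purely illustrative numbers rather than any real phone’s calibration data:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic stereo relation: depth = focal_length * baseline / disparity.

    focal_px: focal length expressed in pixels.
    baseline_mm: distance between the two viewpoints (the two lenses).
    disparity_px: how far a feature shifts between the two images.
    All values are illustrative, not taken from any actual device.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the point is at infinity
    return focal_px * baseline_mm / disparity_px

# A nearby subject shifts a lot between the two views, so it resolves
# to a small depth; a distant background barely shifts at all.
near = depth_from_disparity(1000, 10.0, 50.0)   # large shift -> close
far = depth_from_disparity(1000, 10.0, 2.0)     # small shift -> far away
```

Running this relation over every pixel is, in essence, how a dual-lens phone builds the depth map it needs to decide which pixels belong to the background.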
For the front-facing camera, it’s pure segmentation. Because the front-facing camera on the Pixel 2 duo isn’t a Dual-Pixel camera, it relies on machine learning to produce a segmentation mask – essentially a silhouette of the main objects in your photo. Once it maps out what is important and what is not, it applies a uniform blur over all background pixels.
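The mask-then-blur step can be sketched in a few lines. This is a simplified illustration, not Google’s pipeline: it assumes the segmentation mask has already been produced, and it stands in a cheap box blur for the real lens-style blur. All names and parameters are made up for the example.

```python
import numpy as np

def box_blur(channel, k=9):
    """Cheap separable box blur (a stand-in for a real lens blur)."""
    kernel = np.ones(k) / k
    # Blur along rows, then along columns; mode="same" keeps the image size.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def portrait_blur(image, mask, k=9):
    """Uniformly blur everything the mask marks as background.

    image: H x W x 3 float array.
    mask:  H x W array, 1.0 = foreground (kept sharp), 0.0 = background.
    """
    blurred = np.stack([box_blur(image[..., c], k) for c in range(3)], axis=-1)
    m = mask[..., None]
    # Composite: foreground keeps its original pixels,
    # background is replaced by the single, uniform blur.
    return m * image + (1.0 - m) * blurred
```

Note that the blur strength here is the same for every background pixel – which is exactly the limitation of the segmentation-only approach: without depth information, a tree one meter behind you and a mountain on the horizon get the same amount of blur.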
The rear-facing camera is where it gets interesting, because it uses both stereo and segmentation. First, the camera takes a sharp photo using HDR+, and Google’s trained neural network identifies the main subject and produces a segmentation mask. That’s where the Dual-Pixel autofocus comes in: because each pixel on the sensor is split into two photodiodes – looking through opposite halves of the lens, about a millimeter apart – the camera captures two viewpoints that are different enough to compute stereo, similar to how dual-lens cameras work. Simply put, Google is using Dual-Pixel technology not just to improve focusing speed but also to create a depth map. The system uses this to blur the background in proportion to how far various objects are from the in-focus segmentation mask. As Google explains, that’s how Pixel 2 devices capture shallow depth-of-field photos with “a nice approximation to real optical blur.”
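The “blur in proportion to distance” idea can be sketched with a rough circle-of-confusion model, where the blur radius grows the farther a point sits from the focal plane. This assumes a depth map has already been computed; the function name and every constant below are illustrative, not Pixel calibration values.

```python
import numpy as np

def defocus_radius(depth, focus_depth, aperture_px=16.0, max_radius=12.0):
    """Per-pixel blur radius from a depth map.

    Rough thin-lens approximation: radius is proportional to |d - f| / d,
    so points on the focal plane stay sharp and the blur grows with
    distance from it, saturating at max_radius.
    depth: H x W array of scene depths; all parameters are illustrative.
    """
    d = np.maximum(depth, 1e-6)  # guard against divide-by-zero
    r = aperture_px * np.abs(d - focus_depth) / d
    return np.clip(r, 0.0, max_radius)
```

A renderer would then apply a different-sized blur kernel per pixel (in practice, by blending a few pre-blurred copies of the image) – that graded falloff is what makes the result look like real optical blur rather than a flat, uniform smudge.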
Pretty cool, huh? Mind you, Portrait Mode on the Pixel 2 duo isn’t perfect by any means (just have a look at some samples provided by DxO), but that doesn’t necessarily mean that its single-lens approach is inferior to dual-lens systems found on phones like the Galaxy Note 8 and OnePlus 5. After all, they all rely on synthetic, software-induced blurring.
What are your thoughts on Portrait Mode? Useful or gimmicky? Let us know by leaving a comment below!