Behind the scenes: Google's Pixel cameras aren't trying to be cameras at all

Google's cameras on the Pixel 4 aren't trying to be cameras. They're trying to solve the problems traditional cameras can't.
October 15, 2019

If you’re searching for a smartphone with a good camera, there’s no doubt you’ve seen Google’s Pixel series on a list of recommendations. Year over year, Google has figured out how to make a camera that delivers great results every single time. From the pixel-peeping tech blogger to the everyday consumer, it’s hard to find a single person who doesn’t love the Pixel’s cameras.

Except, Google isn’t trying to make cameras. It’s trying to make magic.

I recently had the opportunity to sit down with Marc Levoy and Isaac Reynolds – the core minds behind the Pixel series’ frustratingly good camera system. We had long conversations about the new features in the Pixel 4’s camera, from its improved Night Sight to its WYSIWYG (What You See Is What You Get) real-time HDR+ viewfinder. There was a lot of technical talk about how Google is enabling these features, but one thing was made crystal clear by the end. Google’s Pixel camera isn’t about trying to be a camera at all.

“Our core philosophy is building a camera that does magic, which is that combination of simplicity and image quality,” explained Reynolds, “So Night Sight is still there, default HDR+ is still there. All the processing that goes on under the hood to get a great photo from the default mode is still there. And we’ve also actually done a lot more simplification.”

Default mode. Simplification. Magic. These are phrases Google is using as part of its core philosophy for the Pixel’s camera. In Levoy and Reynolds’ minds, capturing the moment doesn’t need to be about mode dials and settings menus. Google isn’t trying to build a camera into its phone; it’s trying to build something that makes consistently great images out of the gate, through traditional means or otherwise.

What you see is what you get

[Image: Pixel 4 XL camera macro]

One of the Pixel 4’s new features is the WYSIWYG viewfinder, meaning you’ll see the results of HDR+ before you even take the shot. This might seem like a minor feature, but it enables some things that just aren’t possible in non-computationally-driven cameras.

The goal of that WYSIWYG viewfinder is to reduce user interaction as much as possible. By showing the resulting image right when you open the camera, you’ll know if you’re getting an even exposure immediately, and can just focus on nailing your shot.

“If we see that the user tapped, we know that the camera didn’t give them what they wanted from the beginning,” continues Reynolds. “So a tap, to me, is potentially a failure case that we’d like to improve.”

Traditional camera systems are pretty bad at getting the image you want straight out of the camera. You can either expose for the highlights and raise the shadows later, or expose for the shadows but blow out the highlights. Computational photography lets you do both, and this is where it really starts to make that magic happen.

“Having a WYSIWYG viewfinder now means that we can rethink how you control exposure on the camera if you’d like to,” says Levoy. “So if you do tap, whereas before you would get an exposure compensation slider, you now get two sliders. We call this feature Dual Exposure Control. And it could be highlights and shadows. It could be brightness and dynamic range. There are lots of ways you could do those two variables. We’ve set it up to do brightness and shadows. And that gives you a kind of control that no one has ever had on a camera before.”

You're editing the photo before you even take the shot.

Levoy is right. Dual Exposure Control is something that can only be produced through computational imaging. As a baseline, the image will be even, with preserved highlights and visible shadows. But if you want, you have the power to individually adjust highlights and shadows, before you even take the photo. That’s something you previously could only do in photo editing software, after you take the photo.
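
To make that concrete, here is a minimal sketch, in Python, of what two independent exposure controls could look like: an overall brightness gain in stops plus a shadow lift that leaves highlights mostly untouched. The curve shapes and weights are invented for illustration and are not Google’s actual Dual Exposure Control math, which runs inside the HDR+ pipeline.

```python
import numpy as np

def dual_exposure(linear_img, brightness=0.0, shadows=0.0):
    """Toy version of two independent exposure controls.

    linear_img : float array in [0, 1], linear scene-referred values.
    brightness : overall exposure adjustment in stops (positive = brighter).
    shadows    : amount of shadow lift in [0, 1]; highlights stay pinned.
    """
    # Overall brightness is a simple exposure gain of 2**stops.
    img = np.clip(linear_img * (2.0 ** brightness), 0.0, 1.0)

    # Shadow lift blends toward a gamma curve, weighted toward dark pixels
    # so that bright regions are barely touched.
    lifted = img ** (1.0 / (1.0 + 2.0 * shadows))   # raises dark tones
    weight = (1.0 - img) ** 2                       # ~1 in shadows, ~0 in highlights
    blend = weight * shadows
    return np.clip(img * (1.0 - blend) + lifted * blend, 0.0, 1.0)

# Example: brighten the whole frame slightly, then open up the shadows.
frame = np.random.rand(4, 4)   # stand-in for a merged linear image
print(dual_exposure(frame, brightness=0.3, shadows=0.6))
```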

Levoy’s team is trying to see past the traditional camera entirely by focusing its efforts on the problems traditional cameras can’t solve. While most manufacturers are introducing Pro modes to give you control of aperture, shutter speed, and ISO, Google is trying to automatically make a better image than you could, even if you had those knobs set just right.

Kill it with learning

[Image: Sodium vapor light example]

So what other ways can computational imaging outpace traditional camera techniques? This year, Levoy’s team is tackling low light.

Pixel 4 is introducing learning-based white balance into its camera system. This feature works to continuously improve color in your images, even in extremely bad light. Google is specifically targeting low light and yellow light, and it used sodium vapor light as an example of something it’s trying to fix, but it’s ultimately aiming for accurate white balance every time.

Sodium vapor lamps are gas-discharge lamps that cast an almost monochrome effect on subjects, because nearly all of their output sits in an extremely narrow band around 589nm. They’re a highly efficient source of light, so you’ll often see them in street lamps and other fixtures that need to run for long stretches. This is one of the hardest situations in which to get accurate white balance, so Google’s software fix is really impressive.
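
A quick way to see why near-monochromatic light is so punishing: under it, every surface reflects essentially the same single wavelength, so a per-channel gain can rescale the image but cannot bring the original hues back. The numbers below are made up purely to illustrate that point.

```python
import numpy as np

# Camera RGB of three differently colored surfaces under broadband white light
# (made-up reflectances, purely for illustration).
under_white_light = np.array([
    [0.8, 0.2, 0.2],   # red object
    [0.2, 0.7, 0.3],   # green object
    [0.3, 0.3, 0.8],   # blue object
])
print(under_white_light)        # three clearly distinct colors

# Under a nearly monochromatic ~589nm illuminant, each surface reflects only
# that one wavelength, so every pixel is a scalar reflectance times the
# camera's response to the illuminant.
sodium_rgb = np.array([1.0, 0.85, 0.05])        # rough sensor response to 589nm
reflectance_at_589 = np.array([0.6, 0.5, 0.2])  # one scalar per surface
under_sodium = reflectance_at_589[:, None] * sodium_rgb

# Classic white balance applies one gain per channel. That can neutralize the
# cast, but every row ends up a gray multiple of (1, 1, 1): the hues are gone.
gains = 1.0 / sodium_rgb
print(under_sodium * gains)
```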

“[The bad light] would be yellow in the case of sodium vapor light, and we’ll try to neutralize that bad lighting,” says Levoy. “[Inaccurate white balance] happens a lot in lower light. If you walk into a disco and there are red neon lights, it will preserve that but will try to neutralize some of the adverse area lighting.”

Learning-based white balance was already present in Google’s Night Sight mode, which is why its final image had much better color than something like auto mode on the HUAWEI P30 Pro. The system learns from images taken on the device that it deems well balanced, and it uses that learned data to produce more color-accurate images in poorly lit circumstances. This is something traditional camera systems just can’t do. Once a camera ships, auto white balance is auto white balance. On Pixel, it’s always working to get better over time.
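
As a hedged illustration of the general idea, rather than Google’s actual model, the sketch below “learns” a mapping from simple image color statistics to an illuminant estimate using synthetic training data, then divides that estimate out of a new photo. The statistics, the synthetic data, and the least-squares “model” are all stand-ins invented for this example; the real system trains on real images and is far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_stats(img):
    """Crude per-channel statistics used as input to the toy estimator."""
    pixels = img.reshape(-1, 3)
    return np.concatenate([pixels.mean(axis=0), pixels.max(axis=0)])

# Synthetic training set: random scenes with a known illuminant tint applied.
# In a real system the training signal would come from images judged to be
# well balanced, and the model would not be a linear fit.
X, y = [], []
for _ in range(200):
    scene = rng.random((8, 8, 3))              # random surface colors
    illum = rng.uniform(0.3, 1.0, size=3)      # random illuminant tint
    X.append(color_stats(scene * illum))
    y.append(illum)
X, y = np.array(X), np.array(y)

# "Learning": a least-squares map from image statistics to the illuminant.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# At capture time: estimate the illuminant from the photo and divide it out.
photo = rng.random((8, 8, 3)) * np.array([1.0, 0.8, 0.3])   # yellowish cast
estimate = color_stats(photo) @ W
print("estimated illuminant:", estimate)
print("corrected mean color:", (photo / estimate).reshape(-1, 3).mean(axis=0))
```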

Learning-based white balance makes great low-light images even easier, but Levoy wants to use computers to simplify a once-difficult form of imaging – astrophotography.

Look to the stars

[Image: Pixel 4 ASTRO Night Sight sample. Source: Google]

Levoy calls this new capability “HDR+ on steroids.” Where standard HDR+ takes a burst of 10-15 short exposures and aligns and averages them to get sharp imagery with low noise, this new mode takes up to 15 sets of 16-second exposures to create a four-minute exposure. The system then aligns the images (since stars move over time), adjusts the appropriate settings, and reduces noise by averaging pixels to create some astounding images.
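
The noise benefit of averaging many frames is easy to demonstrate. The toy simulation below captures fifteen 16-second “frames” of a synthetic star field and averages them; it skips the alignment step a real pipeline needs, and every number in it is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A dim synthetic "scene": mostly black sky with a handful of faint stars.
scene = np.zeros((64, 64))
scene[rng.integers(0, 64, 30), rng.integers(0, 64, 30)] = 1.0

def capture(exposure_s, read_noise=0.05):
    """One frame: signal scales with exposure time, plus random sensor noise."""
    return scene * (exposure_s / 16.0) + rng.normal(0.0, read_noise, scene.shape)

# Fifteen 16-second frames. They are "aligned" trivially here; a real pipeline
# has to shift each frame to undo the motion of the stars before averaging.
frames = [capture(16.0) for _ in range(15)]
merged = np.mean(frames, axis=0)

sky = scene == 0
print(f"single-frame noise: {np.std(frames[0][sky]):.3f}")
print(f"merged noise:       {np.std(merged[sky]):.3f}")   # roughly sqrt(15)x lower
```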

“This was kind of the Holy Grail for me.” – Marc Levoy

Levoy showed me some examples of photos his team took of the Milky Way, and my jaw literally dropped. While it is possible to do long exposures on traditional camera systems, you usually need extra equipment that rotates your camera over time if you want extra-sharp imagery. With Night Sight, you can simply prop your phone against a rock, hit the shutter, and the device does the rest.

Perhaps the smartest part of this new ASTRO Night Sight mode is that it isn’t a separate mode at all. It all happens with the Night Sight button. HDR+ already uses the gyroscope to detect motion and align bursts of images, and Night Sight will now detect how long it can feasibly expose each image, depending on how steady the device is when you hit the shutter button, up to a total of four minutes. It will also detect skies using a method called semantic segmentation, which allows the system to treat certain areas of the image differently for the best result.
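
As a rough sketch of just the exposure-length decision (not the sky segmentation), the snippet below picks a per-frame exposure from a gyroscope steadiness reading and caps the burst at 15 frames. The thresholds, the function name, and the return values are all made up for illustration; Google has not published these details.

```python
def plan_night_sight(gyro_rms_deg_per_s, max_frames=15, max_frame_s=16.0):
    """Pick a per-frame exposure from how steady the phone is (toy thresholds)."""
    if gyro_rms_deg_per_s < 0.02:      # braced against a rock or on a tripod
        frame_s = max_frame_s          # go all the way to 16-second frames
    elif gyro_rms_deg_per_s < 0.5:     # steady handheld grip
        frame_s = 1.0
    else:                              # walking or a shaky grip
        frame_s = 1.0 / 3.0
    # Returns per-frame exposure, frame count, and total capture time.
    return frame_s, max_frames, frame_s * max_frames

print(plan_night_sight(0.01))   # (16.0, 15, 240.0): the full four-minute capture
print(plan_night_sight(0.3))    # (1.0, 15, 15.0): a shorter handheld burst
```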

“We want things to be easy for anyone to use,” says Reynolds, “So anytime we find something in the product that doesn’t need to be there, we can take that responsibility away from you and solve that for you.”

That statement really boils down what Google is trying to do with the Pixel’s camera. Instead of looking at how it can make the Pixel operate like a camera, Google is trying to solve issues you didn’t even know existed, and to present the solution in the simplest possible form.

Of course, there are merits to both sides. Some people might want a phone camera that operates like a camera, with manual controls and dials. They might want bigger sensors and Pro modes. But while other ODMs are focusing almost solely on hardware, Google is looking in a totally different direction.

It’s looking to make magic.

Want to learn more about computational photography? Check out the video above to see how this field is going to change the way we make images.