

Here's how Night Sight on the Google Pixel 3 works

The technology behind Google's Night Sight on the Pixel 3 is a little complicated. Let's try to simplify it.
April 19, 2019

One of the most interesting features Google showed off at the New York launch for this year’s Pixel smartphones was one the phones didn’t actually ship with. Night Sight appeared to quite literally challenge the limits of low light imagery. Looking back, it’s easy to see why people were skeptical that Google could pull it off.

Now that Night Sight is available to the public, we’ve had a chance to put it through its paces. Android Authority’s Robert Triggs did a great job detailing what Night Sight on the Google Pixel 3 can pull off, and we’ve even looked at how it stacks up to the HUAWEI Mate 20 Pro’s night mode.

Google put out a very interesting white paper going over the science behind its new technology, offering a look at how the company has combined elements of machine learning with existing hardware to further your phone’s capabilities. It’s very complicated.

Let’s try to simplify the science of the technology behind Night Sight.

The Art of Low Light Photography

There are multiple ways to approach low light photography, each with distinct tradeoffs. A very common way to capture a shot in less than perfect lighting is to increase the ISO. By increasing the sensitivity of the sensor, you can get a fairly bright shot, with the tradeoff being a much higher amount of noise. A larger 1-inch, APS-C, or full-frame sensor on a dedicated camera might push this limit quite a bit, but the results are usually disastrous on a phone.
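To see why cranking the ISO alone doesn’t rescue a small sensor, here’s a rough simulation (not tied to any particular camera — the photon counts and read noise figure are assumptions for illustration) of how gain brightens a shot without improving its signal-to-noise ratio:

```python
# Toy simulation: gain multiplies the signal and the noise captured with it,
# so the image gets brighter but not cleaner.
import numpy as np

rng = np.random.default_rng(0)

true_scene = 10.0                      # dim scene: ~10 photons per pixel, assumed
read_noise = 2.0                       # sensor read noise in electrons, assumed

def capture(gain):
    photons = rng.poisson(true_scene, size=100_000)                 # shot noise
    return gain * (photons + rng.normal(0, read_noise, photons.shape))

for iso_gain in (1, 4, 16):            # e.g. ISO 100, 400, 1600
    shot = capture(iso_gain)
    snr = shot.mean() / shot.std()
    print(f"gain x{iso_gain:>2}: mean {shot.mean():7.1f}, noise {shot.std():6.1f}, SNR {snr:4.2f}")
# Brightness rises with gain, but the SNR barely moves — gain cannot add
# information the small sensor never captured.
```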

A phone’s camera sensor is much smaller than a dedicated camera’s, with much less space for light to fall on individual photosites (photosites are the individual pixels making up the sensor area). Reducing the number of megapixels while keeping the physical dimensions of the sensor the same increases the size of the photosites. The other approach is to physically increase the size of the sensor, but since that would increase the size of the phone, it isn’t really ideal.
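For a sense of scale, here’s a quick back-of-the-envelope pixel-pitch calculation; the sensor dimensions are approximate figures chosen for illustration, not official specs.

```python
# Approximate photosite (pixel pitch) comparison for different sensors.
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in micrometres for a given sensor and resolution."""
    area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

# Roughly Pixel 3-class phone sensor vs. a full-frame sensor (assumed sizes).
print(f"Phone, 12 MP:      {pixel_pitch_um(5.6, 4.2, 12):.2f} µm")
print(f"Phone, 48 MP:      {pixel_pitch_um(5.6, 4.2, 48):.2f} µm")   # more pixels, smaller sites
print(f"Full frame, 12 MP: {pixel_pitch_um(36.0, 24.0, 12):.2f} µm") # far more light per site
```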

Don’t miss: Best of Android 2018: What smartphone has the best camera?

A second factor to consider is the signal-to-noise ratio, which increases with exposure time. A longer exposure increases the amount of light that falls on the camera’s sensor, reducing noise for a brighter shot. This technique has been used in traditional photography for decades. You could increase the exposure time to capture a bright image of a still monument at night, or use the same trick to capture light trails or star trails.
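Here’s a small sketch of that relationship, assuming photon arrival follows a Poisson distribution; the arrival rate is a made-up number for illustration.

```python
# Signal grows linearly with exposure time, shot noise only with its square root.
import numpy as np

rng = np.random.default_rng(1)
photons_per_second = 300.0                 # assumed arrival rate for a dim pixel

for exposure_s in (1 / 15, 1 / 4, 1.0):
    photons = rng.poisson(photons_per_second * exposure_s, size=100_000)
    snr = photons.mean() / photons.std()
    print(f"{exposure_s:>6.3f} s exposure: SNR ≈ {snr:4.1f}")
# SNR grows roughly with the square root of exposure time — but only if nothing moves.
```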

The trick to achieving exceptional low light shots is to combine those two factors. As we talked about earlier, a phone has physical constraints on how big a sensor you can cram in. There’s also a limit to how low a resolution you can use, since the camera needs to capture a sufficient amount of detail for daytime shots. It’s also important to remember that a person can only hold their phone still for so long; a long exposure won’t work with even a modicum of motion.

Google’s approach is essentially exposure stacking on steroids. The technique is similar to HDR+, where the camera captures anywhere from 9 to 15 images to improve dynamic range. In daylight, the technique manages to prevent highlights from being blown out while also pulling out details from shadow regions. In the dark though, the same technique works wonders to reduce noise.
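As a toy illustration of why stacking helps, averaging several short, noisy frames of a perfectly static scene cuts noise roughly by the square root of the frame count; this is a minimal sketch, not the actual HDR+ merge.

```python
# Averaging a burst of noisy frames of a static scene reduces noise ~ sqrt(N).
import numpy as np

rng = np.random.default_rng(2)
scene = np.full((480, 640), 25.0)                  # a dim, flat test scene

def noisy_frame():
    return rng.poisson(scene).astype(np.float32)   # one short exposure with shot noise

single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(15)], axis=0)   # 15-frame burst

print(f"single frame noise:   {single.std():.2f}")
print(f"15-frame stack noise: {stacked.std():.2f}  (≈ single / sqrt(15))")
```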

Related: Do AI cameras matter? LG V40 vs HUAWEI Mate 20 Pro vs Google Pixel 3 edition

That alone, however, isn’t enough to create a usable image when the subject is moving. To combat this, Google is using a very nifty technique based on optical flow. Optical flow refers to the pattern of apparent motion of objects within a scene. By measuring it, the phone can select a different exposure time for each frame. In a frame where it detects movement, the camera will reduce the exposure time. On the flip side, if there isn’t much motion, the phone pushes this up to as much as a second per frame.
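Here’s a toy sketch of that idea in Python with OpenCV. It is not Google’s motion-metering code; the flow estimator, thresholds, and exposure values are all invented for illustration.

```python
# Toy "motion metering": measure apparent motion between two preview frames
# with dense optical flow, then shorten the per-frame exposure when motion is high.
import cv2
import numpy as np

def measure_motion(prev_gray, next_gray):
    """Mean optical-flow magnitude (pixels per frame) between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean())

def pick_exposure(motion_px):
    """Map motion to a per-frame exposure time in seconds (illustrative values)."""
    if motion_px < 0.5:        # essentially static: tripod or a very steady hand
        return 1.0
    if motion_px < 2.0:        # mild handshake
        return 1 / 3
    return 1 / 15              # a moving subject: keep each frame short

# Synthetic preview frames: a textured scene, then the same scene shifted a few pixels.
rng = np.random.default_rng(4)
base = cv2.GaussianBlur(rng.integers(0, 256, (120, 160), dtype=np.uint8), (0, 0), 3)
shifted = np.roll(base, 3, axis=1)

motion = measure_motion(base, shifted)
print(f"motion ≈ {motion:.2f} px/frame -> {pick_exposure(motion)} s per frame")
```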

Overall, depending on how bright the setting is and the amount of movement and handshake, the phone dynamically shifts the number of frames it captures and the amount of exposure time for each frame. On the Pixel 3 this can be as many as 15 frames of up to 1/15 second each, or six frames of up to 1 second each. The number will vary on the Pixel 1 and 2 because of differences in hardware. These shots are then aligned and merged using exposure stacking.
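As a hedged sketch of that scheduling logic, here’s a tiny planner that trades frame count against per-frame exposure; the budget and caps are illustrative numbers chosen to reproduce the two cases above, not Google’s actual tuning.

```python
# Pick a burst plan from a per-frame exposure cap derived from motion metering.
def plan_burst(per_frame_exposure_s, total_budget_s=6.0, max_frames=15):
    """Capture as much light as the budget allows without exceeding the frame cap."""
    frames = min(max_frames, round(total_budget_s / per_frame_exposure_s))
    return frames, per_frame_exposure_s

for per_frame in (1 / 15, 1.0):
    frames, exp = plan_burst(per_frame)
    print(f"{frames} frames x {exp:.3f} s  (total {frames * exp:.2f} s of light)")
# -> 15 frames x 0.067 s  when motion forces short exposures
# ->  6 frames x 1.000 s  when the scene is nearly static
```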

Read: All the Google Pixel 3 features coming to the Pixel 2

Google takes two different approaches to how it merges and aligns these images. On the Pixel 3 and 3 XL, the camera uses the same techniques as Super Res Zoom to reduce noise. By capturing frames from slightly different positions, the camera can create a higher resolution shot with more detail than a single image. Combine this with longer exposure frames, and you can create a bright and highly detailed low light image.
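The real Super Res Zoom pipeline does sub-pixel alignment onto a higher-resolution grid with per-tile robustness weights; the greatly simplified sketch below only illustrates the basic align-then-merge idea, using a whole-pixel global shift estimate and plain averaging.

```python
# Simplified align-and-merge: estimate each frame's global shift with FFT
# cross-correlation, undo it, and average the stack.
import numpy as np

def estimate_shift(ref, frame):
    """Integer (dy, dx) by which `frame` appears shifted relative to `ref`."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(-p if p < n // 2 else n - p for p, n in zip(peak, corr.shape))

def merge_burst(frames):
    """Align every frame to the first one and average the stack."""
    ref = frames[0].astype(np.float64)
    aligned = [ref]
    for frame in frames[1:]:
        dy, dx = estimate_shift(ref, frame.astype(np.float64))
        aligned.append(np.roll(frame.astype(np.float64), (-dy, -dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# Demo: a noisy burst of the same gradient scene, each frame nudged sideways.
rng = np.random.default_rng(5)
scene = np.tile(np.linspace(0, 255, 160), (120, 1))
burst = [np.roll(scene, s, axis=1) + rng.normal(0, 20, scene.shape) for s in (0, 2, 5)]
merged = merge_burst(burst)
print(np.abs(merged - scene).mean() < np.abs(burst[0] - scene).mean())  # True: less noise
```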

On the Pixel 1 and 2, the phone uses HDR+ to accomplish the stacking and image capture. Since the phone doesn’t have the processing power needed to process Super Res Zoom at an adequate speed, the end result will likely lack detail compared to the Pixel 3. Still, being able to capture a bright image with little to no motion blur is quite a feat in itself.

Google’s white paper talks about a few more steps where the camera uses machine learning-based algorithms to accurately determine the white balance. A longer exposure can oversaturate certain colors. Google claims it tuned its machine learning-based AWB algorithms to deliver a truer to life rendering. This shows in the slightly undersaturated and cooler tones generated in the shots.
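Night Sight’s white balance comes from a learned model trained by Google, so there’s no short snippet that reproduces it. For contrast, here is the classical gray-world heuristic that learned AWB methods aim to improve upon: scale each channel so the image’s average color comes out neutral.

```python
# Gray-world automatic white balance: a classical baseline, not Google's model.
import numpy as np

def gray_world_awb(rgb):
    """rgb: float HxWx3 image in [0, 1]; returns a white-balanced copy."""
    means = rgb.reshape(-1, 3).mean(axis=0)            # average of each channel
    gains = means.mean() / np.maximum(means, 1e-6)     # pull every channel toward gray
    return np.clip(rgb * gains, 0.0, 1.0)

# Example: a scene with a strong warm cast (weak blue channel) gets pulled back
# toward equal channel averages.
rng = np.random.default_rng(3)
warm_scene = rng.uniform(0.0, 1.0, (64, 64, 3))
warm_scene[..., 2] *= 0.3                              # simulate the orange cast
print(gray_world_awb(warm_scene).reshape(-1, 3).mean(axis=0))
```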

It is easy to be awed by what Night Sight achieves. Using software to get around the sheer limits imposed by hardware is impressive, but it isn’t without its flaws. Night shots can often appear unnaturally bright and don’t always convey the scene as it really was. Additionally, in extreme low light, the images are most certainly noisy. Sure, they help you get a shot where you might not have managed anything, but it is something to be aware of. Shots with bright sources of light also throw off the camera by creating lens flare artifacts.

What do you think about Night Sight? Is Google’s approach the future, or would you rather have extra hardware like monochrome sensors to improve low light sensitivity? Let us know in the comments section.

Next: Google Pixel 3 camera shootout