

Here's how the Google Camera's Lens Blur effect works

Software engineer Carlos Hernández takes us through how the Google Camera's Lens Blur depth of field effect works.
April 17, 2014

Adjustable depth of field effects are hot new camera features included in both the new Samsung Galaxy S5 and HTC One M8. If you don’t fancy buying a new handset, the newly released Google Camera App includes a similar feature as well. Depth of field is great for adding some additional “pop” to your pictures, bringing smartphone cameras one feature closer to their SLR cousins.

For a little background, Google's Lens Blur allows you to change the point or level of focus of a picture even after the photo has been taken, in a similar way to Samsung's Selective Focus mode on the GS5. Tapping on a part of the image adjusts the focal point, whilst a slider is used to adjust the strength of the effect in real time. If you'd like to know how Google's software-based Lens Blur effect works, then read on.

As the app is used on devices that only have a single camera, the first step is to capture multiple images from ever so slightly different positions as a basis for collecting depth information. Instead of capturing a single photo, you move the camera in an upward sweep to capture a whole series of frames. From these images, the software creates a virtual 3D model of the scene by cross-referencing pixels across the image series.
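To make the idea of cross-referencing pixels a little more concrete, here is a minimal Python/OpenCV sketch, not Google's actual pipeline, that finds and matches feature points between two frames of the sweep. The filenames and the feature count are placeholder assumptions.

import cv2

# Two consecutive frames from the upward sweep (hypothetical filenames).
frame_a = cv2.imread("sweep_frame_0.jpg", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("sweep_frame_1.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive points and describe the pixels around them.
orb = cv2.ORB_create(nfeatures=2000)
kp_a, desc_a = orb.detectAndCompute(frame_a, None)
kp_b, desc_b = orb.detectAndCompute(frame_b, None)

# Match descriptors so the same physical point is identified in both frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

# Each match pairs a pixel in frame_a with its counterpart in frame_b;
# these correspondences are the raw material for the 3D model.
points_a = [kp_a[m.queryIdx].pt for m in matches]
points_b = [kp_b[m.trainIdx].pt for m in matches]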

Structure from Motion
3D models can be created from images captured from multiple positions, although with smartphones the limited capture angle results in a much more basic model. Source: JVRB & Nghiago

This is where things require a little trickery, as the software has to be able to identify similar features across multiple images before it can build the model. This is accomplished by tracking specific points with algorithms known as Structure-from-Motion (SfM) and bundle adjustment.
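As a rough sketch of what a single Structure-from-Motion step might look like, again using OpenCV and reusing the matched points from the sketch above, the snippet below estimates the relative camera motion between two frames and triangulates a sparse 3D model. The intrinsic matrix K is an assumed value; a full pipeline would then refine the poses and points jointly, which is where bundle adjustment comes in.

import numpy as np
import cv2

pts_a = np.float32(points_a)   # correspondences from the feature-matching sketch
pts_b = np.float32(points_b)
K = np.array([[1500.0, 0.0, 960.0],   # assumed camera intrinsics
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

# Estimate the relative camera motion (rotation R and translation t).
E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)

# Triangulate the matched points into 3D using the two camera poses.
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([R, t])
points_4d = cv2.triangulatePoints(P0, P1, pts_a.T, pts_b.T)
points_3d = (points_4d[:3] / points_4d[3]).T   # sparse 3D model of the scene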

Volumetric Stereo
Pixel positions can be compared between the newly created 3D model and the initial images in order to work out distances and visibility between parts of the scene. Source: UNC

From this 3D model, Google employs Multi-View Stereo (MVS) algorithms to calculate the approximate distances between points in the scene, which are used to form a depth map. Points of reference are identified using the Sum of Absolute Differences (SAD) of the RGB values of the image's pixels. In other words, the differences in color between a group of pixels create a unique identifier which can be found over and over again in different images as a point of reference. To speed up and optimise the process, points of reference are taken from different parts of the images, as pixels close together are quite likely to have very similar depths.
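To make the SAD idea concrete, here is a toy Python sketch, again not the app's implementation, that slides a small patch from one image along a row of a second image and keeps the offset with the lowest sum of absolute color differences. The patch size and search range are arbitrary assumptions.

import numpy as np

def sad(patch_a, patch_b):
    # Sum of absolute per-channel differences between two RGB patches.
    return np.abs(patch_a.astype(int) - patch_b.astype(int)).sum()

def best_disparity(ref_image, search_image, y, x, size=8, max_disparity=32):
    # Find the horizontal offset in search_image that best matches the
    # size-by-size patch of ref_image starting at (y, x).
    patch = ref_image[y:y + size, x:x + size]
    scores = []
    for d in range(max_disparity):
        candidate = search_image[y:y + size, x - d:x - d + size]
        if candidate.shape != patch.shape:
            break   # ran off the edge of the image
        scores.append((sad(patch, candidate), d))
    return min(scores)[1]   # offset with the lowest SAD

The smaller the color difference, the more confident the match, and the offset of the best match tells you how far that point has shifted between views.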

In the end, this works in a very similar way to the human eye, where we can work out the distance of an object by viewing it from two slightly different angles. The image below shows an example of finalized depth information calculated by the app.

An example depth map generated by the app.
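The relationship behind this is simple triangulation: the more a point shifts between the two viewpoints (its disparity), the closer it is to the camera. As a tiny illustration, with an assumed focal length and baseline, values the app works out for itself, depth can be recovered from disparity like so:

focal_length_px = 1500.0   # assumed focal length, in pixels
baseline_m = 0.01          # assumed ~1 cm of hand movement between frames

def depth_from_disparity(disparity_px):
    # Larger disparity means the point is closer to the camera.
    return focal_length_px * baseline_m / max(disparity_px, 1e-6)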

Having computed the depth map, it's then just a case of blurring each pixel according to how far its depth lies from the chosen focal plane, so that the desired focal point stays sharp. As the depth information is now stored alongside the image, it can be recalled whenever the user wants to adjust the focal point or the level of blur.
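As a very simplified sketch of that final rendering step, the app's renderer is more sophisticated and varies the blur per pixel, the snippet below blends a pre-blurred copy of the photo back in according to each pixel's distance from the chosen focal depth. The depth map is assumed to be normalised to a 0 to 1 range.

import cv2
import numpy as np

def lens_blur(image, depth_map, focal_depth, strength=8.0):
    # Pre-blur the whole photo once.
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=strength)
    # Weight is 0 where a pixel sits at the focal depth, rising towards 1
    # the further its depth is from the focal plane.
    weight = np.clip(np.abs(depth_map - focal_depth), 0.0, 1.0)[..., None]
    return (image * (1.0 - weight) + blurred * weight).astype(image.dtype)

Because the depth map is stored with the photo, calling something like lens_blur again with a new focal_depth or strength re-renders the effect without recapturing the shot.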

There’s a lot of math and coding behind all this, but the results really speak for themselves.