I’m sure we have all witnessed those pesky reflections while trying to grab a photograph through a window, but those days may soon be behind us, thanks to research conducted by Google and MIT. The group presented a paper at SIGGRAPH 2015 and has published a video demonstrating its algorithm for removing reflections from your pictures.
The software isn’t just good for reflections though; it can also be used to analyse and remove other obstacles from your pictures, such as raindrops on glass and even a chain-link fence that partially obstructs your view. It isn’t perfect, but it seems to do a pretty good job of removing these annoyances in a wide range of scenarios, including tough low-light scenes.
The developers state that the algorithm works from a short video clip that could, for example, be captured with your phone. The algorithm estimates the depth of the scene from how detected edges shift between successive frames, which lets it pick out any obstructions in the foreground: as the camera moves, nearby obstructions shift more than the distant background. A somewhat similar idea underpins techniques like post-processing depth of field adjustments and 3D parallax images, which also rely on multiple points of view.
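The core intuition of that separation step can be sketched in a few lines. This is a simplified, hypothetical illustration rather than the paper's actual pipeline (which computes dense motion fields): edges belonging to a nearby obstruction move farther between frames than background edges, so motion magnitude alone can split them.

```python
import numpy as np

# Hypothetical edge tracks: x-position of four detected edges in two
# consecutive frames of a handheld clip (made-up numbers for illustration).
# Under small camera translation, foreground (obstruction) edges exhibit
# larger parallax than distant background edges.
frame0 = np.array([10.0, 42.0, 75.0, 120.0])
frame1 = np.array([16.0, 43.0, 81.0, 121.0])

motion = np.abs(frame1 - frame0)          # per-edge displacement
is_foreground = motion > motion.mean()    # crude split by parallax magnitude
print(is_foreground)                      # → [ True False  True False]
```

Here the first and third edges move 6 pixels while the others move 1, so they are flagged as the foreground obstruction. The real algorithm does this densely and robustly, but the parallax cue is the same.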
From here, the software can fill in the obstructed space with information from other frames, resulting in a clearer final picture. One creepy “side effect” of the technology is that it can also quite accurately recreate a clear image of whatever is contained within a reflection or occlusion.
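A toy version of that fill-in step can be written as a per-pixel temporal median. This is a simplified stand-in for the paper's layer decomposition, under the assumption that the background is already aligned across frames: because the obstruction drifts with parallax, each pixel is unobstructed in most frames, and the median recovers the background.

```python
import numpy as np

def remove_obstruction(frames):
    """Per-pixel temporal median across background-aligned frames.

    Simplified stand-in for the paper's method: since the obstruction
    occupies any given pixel in only a minority of frames, the median
    over time returns the clean background value.
    """
    stack = np.stack(frames, axis=0).astype(float)
    return np.median(stack, axis=0)

# Synthetic demo (hypothetical data): a flat bright background with a
# dark "fence" column that sits one column further right in each frame.
h, w = 8, 8
frames = []
for shift in range(5):
    f = np.full((h, w), 200.0)   # background intensity
    f[:, shift] = 30.0           # obstruction covers a different column per frame
    frames.append(f)

clean = remove_obstruction(frames)
print(np.allclose(clean, 200.0))  # → True: the obstruction is gone everywhere
```

Each column is obstructed in at most one of the five frames, so the median sees mostly background values and discards the fence entirely. The real system additionally estimates the obstruction layer itself, which is how it can reconstruct what a reflection contains.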
The video below offers a detailed explanation of how this is accomplished, along with a few more examples, and is well worth a watch if you’re keen on the details.
This type of technique has been tried before, but previous results have been rather mixed; Google and MIT’s implementation appears to be the best so far. Unfortunately, we don’t know if or when this type of technology will become available for smartphone cameras. Here’s hoping that someone picks up the idea and brings it to consumers.