Adios, bothersome fences, reflections, and the like. That presumes, of course, that ordinary users will be sufficiently motivated to move their devices during capture. Time will tell.
The video accompanying our SIGGRAPH 2015 paper “A Computational Approach for Obstruction-Free Photography”. We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.
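To get an intuition for why a slightly moving camera helps, here is a toy NumPy sketch. It is not the paper's actual optimization pipeline; it only illustrates the parallax idea: once the frames are aligned to the background, the obstruction layer shifts from frame to frame, so a robust per-pixel statistic can reject it. The function name and the min/median heuristics are illustrative assumptions, not the authors' method.

```python
import numpy as np

def recover_background(frames):
    """Toy baseline, assuming the frames are already aligned to the
    background layer (the hard part, which the paper solves with dense
    motion estimation). A reflection only ADDS light, so the per-pixel
    minimum over the stack approximates the reflection-free background.
    An opaque obstruction like a fence covers different pixels in
    different frames, so the per-pixel median suppresses it."""
    stack = np.stack(frames, axis=0).astype(np.float64)
    bg_reflection = stack.min(axis=0)        # for additive reflections
    bg_occlusion = np.median(stack, axis=0)  # for opaque fences
    return bg_reflection, bg_occlusion
```

Because the obstruction lands on different pixels in each aligned frame, each background pixel is seen cleanly in at least one frame, which is exactly what these simple statistics exploit.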
One thought on “Photography: New Google/MIT algorithm removes visual clutter”
This would be great for digitizing old photos that you don’t want to take out of their frame. Any chance this will be added to Google Photos? 🙂