When I learned After Effects in ye Olden Times (aka the Carter—okay, Clinton—Administration), rotoscoping (isolating a portion of a scene, frame by painstaking frame) was such a brutally slow and painful process, it’d make a Photoshopper weep and wish to run home to the Pen tool. Over the years amazing tools like Roto Brush removed some of the drudgery, but the process has remained largely manual.
Now, though, machine learning can teach computers to perform this kind of segmentation automatically, in real time, on a friggin’ telephone. Thanks largely to the efforts of the Belarusian team Google acquired in August, this happened:
Today, we are excited to bring precise, real-time, on-device mobile video segmentation to the YouTube app by integrating this technology into stories. Currently in limited beta, stories is YouTube’s new lightweight video format, designed specifically for YouTube creators. Our new segmentation technology allows creators to replace and modify the background, effortlessly increasing videos’ production value without specialized equipment.
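Conceptually, once a model spits out a per-pixel foreground mask for each frame, swapping the background is just alpha compositing. Here’s a minimal sketch in Python with NumPy (the function name, array shapes, and the toy mask are illustrative assumptions, not Google’s actual pipeline):

```python
import numpy as np

def replace_background(frame, mask, background):
    """Composite a frame over a new background using a soft segmentation mask.

    frame, background: (H, W, 3) uint8 images
    mask: (H, W) floats in [0, 1], where 1.0 means foreground (the person)
    """
    alpha = mask[..., np.newaxis]          # add a channel axis so it broadcasts over RGB
    out = alpha * frame + (1.0 - alpha) * background
    return out.astype(np.uint8)

# Toy example: a 2x2 frame where the left column is the "person"
frame = np.full((2, 2, 3), 200, dtype=np.uint8)       # light-gray subject
background = np.zeros((2, 2, 3), dtype=np.uint8)      # black replacement backdrop
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])                         # left = foreground, right = background
result = replace_background(frame, mask, background)
# Left column keeps the original pixels; right column shows the new background.
```

The hard part, of course, isn’t the compositing; it’s producing that mask accurately and fast enough to run on a phone, which is exactly what the neural network does.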
So how does this witchcraft actually work? I’m so glad you asked: check out the details in this post on the Google Research blog. And stay tuned, as Teh ML Hotness is just getting warmed up.