I don’t know karate, but I know ka-reepy…
Looks super cool, though the idea of modifying lip syncing makes me envision a car cresting a hill into a desert in which a giant billboard reads “WELCOME TO THE UNCANNY VALLEY.”
The project is a collaboration between computer scientists at Stanford University, the Max Planck Institute, and the University of Erlangen-Nuremberg in Germany. The same team produced a similar study last year, but that iteration required data from special 3D cameras. Their new system works with any camera and any recorded video.
I’m also tempted to run the lip-sync enhancer in reverse, creating a disconnect between actors & the words they spoke. I’d then fill the resulting gaps with awkward coughing & grunting, à la classic chopsocky dubbing.
In a related vein, see previous: Apple now owns this facial animation technology.
[YouTube] [Via Kevin McMahon]