I’m delighted that my teammates are getting to share the details of how the face-tracking tech they built works.
We employ machine learning (ML) to infer approximate 3D surface geometry from a single camera input, without a dedicated depth sensor, to enable visual effects. This approach delivers AR effects at realtime speeds, using TensorFlow Lite for mobile CPU inference, or its new mobile GPU functionality where available. It’s the same technology that powers YouTube Stories’ new creator effects, and it’s also available to the broader developer community via the latest ARCore SDK release and the ML Kit Face Contour Detection API.
We’ve been hard at work ensuring that the tech works well even for demanding applications like realistic makeup try-on.
If you’re a developer, dig into the links above to see how you can use the tech—and everyone else, stay tuned for more fun, useful applications of it across Google products.