Accelerate machine learning with the new TensorFlow Lite GPU backend

I’m thrilled to say that the witchcraft my team has built & used to deliver ML & AR hotness on Pixel 3, YouTube, and beyond is now available to iOS & Android developers:

For Portrait mode on Pixel 3, TensorFlow Lite GPU inference accelerates the foreground-background segmentation model by over 4x and the new depth estimation model by over 10x vs. CPU inference with floating point precision. In YouTube Stories and Playground Stickers our real-time video segmentation model is sped up by 5–10x across a variety of phones.

We found that, in general, the new GPU backend runs 2–7x faster than the floating point CPU implementation across a diverse set of deep neural network models.

A preview release is available now, with a full open source release planned for the near future.
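For Android developers curious what adopting the preview looks like in practice, here's a minimal sketch of enabling GPU acceleration via the Java delegate API (GpuDelegate and Interpreter.Options.addDelegate); the model buffer and tensor shapes shown are placeholders, and details may shift before the full release:

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;
import java.nio.MappedByteBuffer;

public class GpuInferenceSketch {
  // `model` is a MappedByteBuffer holding the .tflite flatbuffer;
  // `input` and `output` are arrays shaped to match the model's tensors (placeholders here).
  static void runOnGpu(MappedByteBuffer model, float[][] input, float[][] output) {
    // Create the GPU delegate and attach it to the interpreter options.
    GpuDelegate delegate = new GpuDelegate();
    Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);

    // Ops supported by the delegate run on the GPU; the rest fall back to the CPU.
    Interpreter interpreter = new Interpreter(model, options);
    interpreter.run(input, output);

    // Release native and GPU resources when finished.
    interpreter.close();
    delegate.close();
  }
}
```

The appeal is that it's a drop-in change: the interpreter and model stay the same, and the delegate decides which ops it can offload to the GPU.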

I often note that I came here five (five!) years ago to “Teach Google Photoshop,” and delivering tech like this is a key part of that mission: enable machines to perceive the world, and eventually to see like artists & be your brilliant artistic Assistant. We have so, so far to go, and the road ahead can be far from clear—but it sure is exciting.
