Accelerate machine learning through new TensorFlow Lite GPU inference

I’m thrilled to say that the witchcraft my team has built & used to deliver ML & AR hotness on Pixel 3, YouTube, and beyond is now available to iOS & Android developers:

For Portrait mode on Pixel 3, TensorFlow Lite GPU inference accelerates the foreground-background segmentation model by over 4x and the new depth estimation model by over 10x vs. CPU inference with floating point precision. In YouTube Stories and Playground Stickers our real-time video segmentation model is sped up by 5–10x across a variety of phones.

We found that, in general, the new GPU backend performs 2–7x faster than the floating point CPU implementation across a wide range of deep neural network models.

A preview release is available now, with a full open source release planned for the near future.
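For the curious, opting into the new backend on Android looks roughly like this. This is a minimal sketch based on the preview API as announced (the `GpuDelegate` and `Interpreter.Options` class names reflect the preview release and could change before the full open source release; the model buffer is assumed to be loaded elsewhere):

```java
// Sketch: running a TFLite model on the GPU delegate (preview API).
// Assumes `modelBuffer` has already been memory-mapped from assets.
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

class GpuInferenceSketch {
    Interpreter buildInterpreter(java.nio.MappedByteBuffer modelBuffer) {
        // Create the GPU delegate and attach it to the interpreter options;
        // supported ops run on the GPU, the rest fall back to the CPU.
        GpuDelegate delegate = new GpuDelegate();
        Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);
        return new Interpreter(modelBuffer, options);
        // Callers should close() both the interpreter and the delegate when done.
    }
}
```

Without the `addDelegate` call, the same interpreter runs the model on the floating point CPU path that the speedups above are measured against.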

I often note that I came here five (five!) years ago to “Teach Google Photoshop,” and delivering tech like this is a key part of that mission: enable machines to perceive the world, and eventually to see like artists & be your brilliant artistic Assistant. We have so, so far to go, and the road ahead can be far from clear—but it sure is exciting.

A little maritime drone fun

The lads and I are just back from an overnight visit to the USS Hornet, a decorated World War II-era carrier we last visited some 7 years ago. This time around we spent the night with our Cub Scout pack & several hundred other scouts & parents from around the area. On the whole we had a ball touring the ship, and I had a little fun flying my drone over the Hornet & her adjacent Navy ships:

And here’s an interactive 360º panorama from overhead. (Obligatory nerdy sidenote: This is the JPEG version stitched on the fly by the drone, and although I was able to stitch the raw source images in Camera Raw & get better color/tone, I’ll be damned if I can figure out how to inject the proper metadata to make it display right. As usual I used EXIF Fixer to make the JPEG interactive.)


Designers, engineers: Come work on After Effects

Here’s a rare opportunity to team up with one of the rarest of things—a super friendly, gifted, and yet humble team building a beloved app that makes the world more beautiful. The AE team have long been some of my favorite folks in the industry, and they’re looking to expand their ranks:


“Fluidity” drone controller

I have no idea whether this thing is worth a damn—but I’d sure like to find out (well, with the caveat that if it’s awesome, it’d be one more piece of bulky kit to schlep around):

Using an astronaut’s perspective on intuitive motion through space, we have patented a unique and intuitive drone controller that anyone, whether they’re eight or eighty, can pick up and begin using immediately.

The FT Aviator is designed to incorporate the relevant 4 degrees of freedom of movement (x, y, z, and yaw) to drone flying, eliminating the awkward interface and steeper learning curve of existing dual thumb-controlled drones. It intuitively unlocks human potential to fly and capture stunning imagery.



Great long(ish) reads about computer science badasses

  • The Yoda of Silicon Valley discusses the life & work of computer-science OG Don Knuth. The whole article & the accompanying reader comments are fascinating. (Side bonus for me: I ended up learning the names of various people in my extended team (!) who are quoted in the article.) I love that Don’s defiantly 1997-looking personal site includes a list of Infrequently Asked Questions.
  • The New Yorker profiles Google coding duprass Jeff Dean (who leads our org) and Sanjay Ghemawat. They “seem like two halves of a single mind,” and their work enabled planet-scale data infrastructure (among many other things). Retaining as I do the most unimportant details, I now really want to see Jeff’s bespoke basement trampoline. ¯\_(ツ)_/¯ Oh, and you should definitely read Chuck Norris-style Jeff Dean Facts (“Jeff Dean’s PIN is the last 4 digits of pi,” etc.).