Beautiful projection mapping, with robots!

Another from the “Awesome Past Lives I Never Knew My Colleagues Had” Files: I just learned that Tarik Abdel-Gawad, with whom I’ve been collaborating on AR stuff, programmed & performed the amazing “Box” projection-mapping robot demo with Bot & Dolly before Google acquired that company. It’s now a few years old but no less stunning:


Bot & Dolly produced this work to serve as both an artistic statement and technical demonstration. It is the culmination of multiple technologies, including large scale robotics, projection mapping, and software engineering. We believe this methodology has tremendous potential to radically transform theatrical presentations, and define new genres of expression.

Check out this peek behind the scenes:

[YouTube 1 & 2]

“Not Hot Dog”… but ramen, or hummus?

Google had some dorky fun recently with an April Fool’s announcement of a Cloud Hummus API:

Silly, yes—but apparently not as far-fetched as you’d think. Check out computer vision being trained to identify ramen by shop:

Recently, data scientist Kenji Doi used machine learning models and AutoML Vision to classify bowls of ramen and identify the exact shop each bowl is made at, out of 41 ramen shops, with 95 percent accuracy. Sounds crazy (also delicious), especially when you see what these bowls look like. […]

You don’t have to be a data scientist to know how to use it—all you need to do is upload well-labeled images and then click a button. In Kenji’s case, he compiled a set of 48,000 photos of bowls of soup from Ramen Jiro locations, along with labels for each shop, and uploaded them to AutoML Vision.

Days of miracles and wonder… and ramen.



Adobe’s “Project Puppetron” is now in beta

This super fun combo of style transfer & performance capture (see video below in case you missed the sneak peek last fall) is now accepting applications for beta testers:

Project Puppetron lets you capture your own face via webcam and, through a simple setup process, create a puppet of yourself in the style of a piece of referenced art.

[Y]ou perform various facial expressions and mouth shapes for lip sync, and then select the reference art and the level of stylization you want to apply to create a fully-realized, animated puppet.

Once Project Puppetron has created your puppet, you can perform your character or modify your puppet as you would any other puppet in Character Animator. Then, bring further dimension to your character’s performance with rigging, triggerable artwork, layer cycles, etc., through the broad array of tools offered in Character Animator.


[YouTube] [Via Margot Nack]

Play Where’s Waldo in Google Maps (for real!)

Heh—my 8yo Mini-Me Henry just crushed me at this game.

Starting today, you can use Google Maps to join in my amazing adventures for April Fools this week. Are you prepared for a perplexing pursuit? I’ve shared my location with you on Android, iOS and desktop (rolling out now). To start the search, simply update your app or visit on desktop. Then press play when you see me waving at you from the side of your screen. You can even ask the Google Assistant on your phone, Chromebook or Home device, “Hey Google, Where’s Waldo?” to start.


The Bionic(le) Man: Kid literally arms himself via Lego

Weak side: Complaining about doing push-ups.
Strong side: Being born with a partial arm, saying F it, Imma build myself a Lego arm & do push-ups on that.

Ever since he was a kid, David Aguilar was obsessed with Lego. He spent his childhood building cars, planes, helicopters, and eventually, his own prosthetic. Born with a deformed arm, the self-named “Hand Solo” decided to take his Lego-building skills to the next level. At age 18, he perfected his designs with the MK2, a prosthetic arm with the ability to bend and pick up objects with a pincer-like grip. Now, he’s the coolest kid on the block.


[Vimeo] [Via Maria Brenny]