Category Archives: Body Tracking

ML Kit gets pose detection

This is kinda inside-baseball, but I’m really happy that friends from my previous team will now have their work distributed on hundreds of millions, if not billions, of devices:

[A] face contours model — which can detect over 100 points in and around a user’s face and overlay masks and beautification elements atop them — has been added to the list of APIs shipped through Google Play Services…

Lastly, two new APIs are now available as part of the ML Kit early access program: entity extraction and pose detection… Pose detection supports 33 skeletal points, including hands and feet tracking.

Let’s see what rad stuff the world can build with these foundational components. Here’s an example of folks putting an earlier version to use, and you can find a ton more in my Body Tracking category:

[Via]

Garments strut on their own in a 3D fashion show

No models, no problem: Congolese designer Anifa Mvuemba used software to show off her designs swaying in virtual space:

Cool context:

Inspired by her hometown in Congo, Anifa was intentional about shedding light on issues facing the Central African country with a short documentary at the start of the show. From mineral site conditions to the women and children who suffer as a result of these issues, Anifa’s mission was to educate before debuting any clothes. “Serving was a big part of who I am, and what I want to do,” she said in the short documentary.

Cloaking device engaged: Going invisible via Google’s browser-based ML

Heh—here’s a super fun application of body tracking tech (see the whole category here for previous news) that shows off how folks have been working to redefine what’s possible with realtime machine learning on the Web (!):

Google open-sources PoseNet 2.0 for Web-based body tracking

My teammates Tyler & George have released numerous projects made with their body-tracking library PoseNet, and now v2 has been open-sourced for you to use via TensorFlow.js. You can try it out here.
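
If you want a taste of what using it looks like, here’s a minimal sketch of loading the v2 model and estimating a single pose via the @tensorflow-models/posenet package (config values are just illustrative; ResNet50 is the higher-accuracy backbone this release added alongside the MobileNetV1 default):

```ts
import '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

async function detect(image: HTMLImageElement) {
  // Load the model; v2 offers a higher-accuracy ResNet50 backbone
  // alongside the faster MobileNetV1 default.
  const net = await posenet.load({
    architecture: 'ResNet50',
    outputStride: 32,
    inputResolution: 257,
    quantBytes: 2,
  });

  // Estimate a single pose; each keypoint is a named body part with
  // an (x, y) position and a confidence score.
  const pose = await net.estimateSinglePose(image, { flipHorizontal: false });
  for (const { part, position, score } of pose.keypoints) {
    console.log(part, position.x, position.y, score);
  }
}
```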

From last year (post), here’s an example of the kind of fun stuff you can make using it:

[YouTube]

New open-source Google AI experiments help people make art

One’s differing physical abilities shouldn’t stand in the way of drawing & making music. Body-tracking tech from my teammates George & Tyler (see previous) is just one of the new Web-based experiments in Creatability. Check it out:

Creatability is a set of experiments made in collaboration with creators and allies in the accessibility community. They explore how creative tools – drawing, music, and more – can be made more accessible using web and AI technology. They’re just a start. We’re sharing open-source code and tutorials for others to make their own projects.

[YouTube]

Absolute witchcraft: AI synthesizes dance moves, entire street scenes

This 💩 is 🍌🍌🍌, B-A-N-A-N-A-S: Video-to-Video Synthesis tech can apparently take in one dance performance & apply it to a recording of another person, making her match the moves:

It can even semantically replace entire sections of a scene—e.g. backgrounds in a street scene: 

Now please excuse me while I lie down for a bit, as my brain is broken.

[YouTube 1 & 2] [Via Tyler Zhu]

Match your body pose to Hollywood imagery via Kinemetagraph

Apropos of Google’s Move Mirror project (mentioned last week), here’s a similar idea:

Kinemetagraph reflects the bodily movement of the visitor in real time with a matching pose from the history of Hollywood cinema. To achieve this, it correlates live motion capture data using Kinect-based “skeleton tracking” to an open-source computer vision research dataset of 20,000 Hollywood film stills with included character pose metadata for each image.

The notable thing, I think, is that what required a dedicated hardware sensor a couple of years ago can now be done plug-in-free using just a browser and webcam. Progress!

[Via Paul Chang]

Body Movin’: Drive image search with your body movements

Unleash the dank emotes! My teammates George & Tyler (see previous) are back at it running machine learning in your browser, this time to get you off the couch with the playful Move Mirror:

Move Mirror takes the input from your camera feed and maps it to a database of more than 80,000 images to find the best match. It’s powered by TensorFlow.js—a library that runs machine learning models on-device, in your browser—which means the pose estimation happens directly in the browser, and your images are not being stored or sent to a server. For a deep dive into how we built this experiment, check out this Medium post.
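
Under the hood, “find the best match” is a nearest-neighbor search over pose vectors. Here’s a simplified sketch of that idea; the real project also weights each keypoint by its confidence score and, per the Medium post, uses a vantage-point tree instead of this brute-force scan (all names below are mine, not Move Mirror’s):

```ts
type Keypoint = { x: number; y: number };

// Flatten keypoints into a vector that's normalized for position and
// scale, so the same pose matches no matter where (or how big) the
// person is in frame.
function toPoseVector(keypoints: Keypoint[]): number[] {
  const xs = keypoints.map(k => k.x);
  const ys = keypoints.map(k => k.y);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  const scale = Math.max(Math.max(...xs) - minX, Math.max(...ys) - minY) || 1;
  return keypoints.flatMap(k => [(k.x - minX) / scale, (k.y - minY) / scale]);
}

// Cosine similarity between two pose vectors: 1 means identical poses.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute-force scan over the image database; at 80,000 entries you'd
// want a smarter index, which is where the vantage-point tree comes in.
function bestMatch(
  query: number[],
  database: { imageUrl: string; vector: number[] }[],
) {
  let best = database[0];
  let bestScore = -Infinity;
  for (const entry of database) {
    const score = cosineSimilarity(query, entry.vector);
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  return best;
}
```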

Enjoy!

[YouTube]

Demo: Realtime pose estimation in a browser

My teammates George & Tyler have been collaborating with creative technologist Dan Oved to enable realtime human pose estimation in Web browsers via the open-source TensorFlow.js (the same tech behind the aforementioned Emoji Scavenger Hunt). You can try it out here and read about the implementation details over on Medium.

Ok, and why is this exciting to begin with? Pose estimation has many uses, from interactive installations that react to the body to augmented reality, animation, fitness uses, and more. […]

With PoseNet running on TensorFlow.js anyone with a decent webcam-equipped desktop or phone can experience this technology right from within a web browser. And since we’ve open sourced the model, JavaScript developers can tinker and use this technology with just a few lines of code. What’s more, this can actually help preserve user privacy. Since PoseNet on TensorFlow.js runs in the browser, no pose data ever leaves a user’s computer.
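
To make the “few lines of code” claim concrete, here’s a rough sketch of a realtime webcam loop using the current @tensorflow-models/posenet API (the drawing step is left as a placeholder):

```ts
import '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

async function main() {
  // Pipe the webcam into a <video> element; no plug-ins needed.
  const video = document.querySelector('video')!;
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Default config loads the fast MobileNetV1 backbone.
  const net = await posenet.load();

  const loop = async () => {
    // Inference runs entirely in the browser: no frames leave the machine.
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    for (const kp of pose.keypoints) {
      if (kp.score > 0.5) {
        // drawPoint(kp.position.x, kp.position.y); // your rendering here
      }
    }
    requestAnimationFrame(loop);
  };
  loop();
}

main();
```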

[Via Luca Prasso]

A cool demo of Google’s pose estimation tech

“Teaching Google Photoshop” has been my working mantra here—i.e. getting computers to see like artists & wield their tools. A lot of that hinges upon understanding the shape & movements of the human body. Along those lines, my Google Research teammates Tyler Zhu, George Papandreou, and co. are doing cool work to estimate human poses in video. Check out the demo below, and see their poster and paper for more details.

https://youtu.be/y_BwEqnT8jU

[YouTube]