Check out this neat technique in action:
The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found its average accuracy to be within 5 millimeters, results they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.
Google’s Arts & Culture team partnered with CyArk to travel to over 25 sites across 18 countries, using drone imagery and 3D laser scanners to capture intricate portraits of each place. You can explore the story and 3D model of each historic location—from Syria’s Al-Azem palace, to the Temple of Eshmoun in Lebanon, to the Mayan city of Chichen Itza—on the site.
My team has just added some fun new characters to Motion Stills for Android. 9to5 Google writes,
A dog (clear favorite), UFO, heart, basketball, and spider join the dinosaur, chicken, alien, gingerbread man, planet, and robot. The latter six stickers have been slightly rearranged, while the new ones are at the beginning of the carousel.
Enjoy! And let us know what else you’d like to see.
Another from the “Awesome Past Lives I Never Knew My Colleagues Had” Files: I just learned that Tarik Abdel-Gawad, with whom I’ve been collaborating on AR stuff, programmed & performed the amazing “Box” projection-mapping robot demo with Bot & Dolly before Google acquired that company. It’s now a few years old but no less stunning:
Bot & Dolly produced this work to serve as both an artistic statement and technical demonstration. It is the culmination of multiple technologies, including large scale robotics, projection mapping, and software engineering. We believe this methodology has tremendous potential to radically transform theatrical presentations, and define new genres of expression.
Check out this peek behind the scenes:
[YouTube 1 & 2]
This super fun combo of style transfer & performance capture (see video below in case you missed the sneak peek last fall) is now accepting applications for beta testers:
Project Puppetron lets you capture your own face via webcam and, through a simple setup process, create a puppet of yourself in the style of a piece of reference art.
[Y]ou perform various facial expressions and mouth shapes for lip sync, and then select the reference art and the level of stylization you want to apply to create a fully-realized, animated puppet.
Once Project Puppetron has created your puppet, you can perform your character or modify your puppet as you would any other puppet in Character Animator. Then, bring further dimension to your character’s performance with rigging, triggerable artwork, layer cycles, etc., through the broad array of tools offered in Character Animator.
[YouTube] [Via Margot Nack]
Now, before I tell you who makes this or what it’s called, check it out & tell me it’s not pretty slick:
Now, when you discover that it’s actually Microsoft Paint, do you go full Chris Farley warpath, or maybe just start spontaneously vomiting? Perhaps we need the equivalent of Swiftamine for apps. 🙂
[YouTube 1 & 2]
It’s a running joke at Google that you can spend your whole career not knowing what the people near you are doing, or even who they are. (In fact, I continue to harbor a dream about creating an AR overlay that would solve this, but that’s another story.) As it happens, I just discovered that my teammate Bill has been making some really cool augmented reality experiments, including the open source Flight Paths:
Flight Paths is an experiment that transforms your room into a flight path visualization. Touch any horizontal surface and explore as flights take off from JFK or SFO and fly around your space. Learn more at g.co/arexperiments
Continual note to self: Ask people more about what they’re doing!
Draw in 3D space on Android using Just A Line (for which the source code is available):
Get ready for a whole new wave of AR gaming:
Per The Verge,
Unity integration will also allow developers to customize maps with what appears to be a great deal of flexibility and control. Things like buildings and roads are turned into objects, which developers can then tweak in the game engine. During a demonstration, Google showed off real-world maps that were transformed into sci-fi landscapes and fantasy realms, complete with dragons and treasure chests.
Jacoby says that one of the goals of the project was to help developers build detailed worlds using Maps data as a base to paint over. Developers can do things like choose particular kinds of buildings or locations — say, all stores or restaurants — and transform each one. A fantasy realm could turn all hotels into restorative inns, for instance, or anything else.
This demo of the Snappers Facial Rig is pretty damn impressive. Now, how soon until front-facing depth cameras (à la those on iPhone X) can be paired with enough on-device rendering power to produce results like this?
[YouTube 1 & 2]