Hooray! My first real project to ship since joining my new team is here:
Today, we are excited to announce the new Augmented Reality (AR) mode in Motion Stills for Android. With the new AR mode, a user simply touches the viewfinder to place fun, virtual 3D objects on static or moving horizontal surfaces (e.g. tables, floors, or hands), allowing them to seamlessly interact with a dynamic real-world environment. You can also record and share the clips as GIFs and videos.
For the nerdier among us, we’ve put together a Research blog post about The Instant Motion Tracking Behind Motion Stills AR, and on CNET Stephen Shankland gives a nice overview (and has been tweeting out some fun animations):
The Motion Stills app can put AR stickers into video shot on any Android device with a gyroscope, which is nothing special these days.
I’ve been a longtime fan of Motion Stills, posting about it for years. I’m so glad to get to work with these guys now. There’s more good stuff to come, so please let us know what you think!
(BTW, the 3D models are among the many thousands you can download for free from poly.google.com.)
Super fun style transfer + facial analysis enables animation:
Puppetron is a way to quickly combine a series of facial photos with artistic portraits to create puppets directly usable in Character Animator.
[YouTube 1 & 2]
According to a recent survey, more than 40% of people under 33 prioritize “Instagrammability” when choosing their next holiday spot. Of course, a ton of the results look incredibly similar, perhaps inducing cases of vemödalen (“the frustration of photographing something amazing when thousands of identical photos already exist”). They’re so repetitive, in fact, that Google researchers have built 3D timelapses from overlapping imagery.
Maybe more people need a “Camera Restricta” to “prevent shooting unoriginal photos.” Or maybe we shouldn’t sweat it, because “there are many like it, but this one is mine,” and because “in the end we shall all be dead.” ¯\_(ツ)_/¯
Happy Friday, weirdoes. 🙂
This is all possible because of something called FakeApp, software that uses deep learning to scan someone’s face and graft it onto any given video.
This thing is pretty cool! Cornell researchers worked with Googlers to use machine learning in order to fingerprint the songs of various birds, then lay them out in an interactive visualization:
Built by Kyle McDonald, Manny Tan, Yotam Mann, and friends at Google Creative Lab. Thanks to Cornell Lab of Ornithology for their support. The Essential Set for North America sounds are provided by the Macaulay Library. The open-source code is available here. Check out more at A.I. Experiments.
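As a rough illustration of the fingerprint-and-embed idea (a hedged sketch, not the experiment’s actual pipeline — see the linked open-source code for that), you can summarize each clip as a fixed-length spectral vector and let t-SNE lay those vectors out on a 2-D map, so that similar-sounding songs land near each other:

```python
import numpy as np
from sklearn.manifold import TSNE

def spectral_fingerprint(audio, frame=256, hop=128):
    """Average windowed magnitude spectrum over frames: a crude audio fingerprint."""
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, hop)]
    mags = [np.abs(np.fft.rfft(f * np.hanning(frame))) for f in frames]
    return np.mean(mags, axis=0)  # one fixed-length vector per clip

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
# Stand-in "songs": tones at different base pitches plus noise.
clips = [np.sin(2 * np.pi * (300 + 100 * k) * t) + 0.1 * rng.standard_normal(t.size)
         for k in range(12)]
fingerprints = np.stack([spectral_fingerprint(c) for c in clips])

# t-SNE projects the high-dimensional fingerprints down to 2-D coordinates,
# which is what makes an interactive, browsable layout possible.
xy = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(fingerprints)
print(xy.shape)  # one (x, y) point per clip
```

The real visualization adds a neural embedding and snaps points to a grid, but the core move — fingerprint, then project to 2-D by similarity — is the same.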
Judith Amores Fernandez is pursuing her PhD at the MIT Media Lab & exploring new UX possibilities using the Microsoft HoloLens. Here she presents on her work with HoloARt.
This is a new medium of art that explores the use of holograms in mixed reality for creative self-expression. Amores Fernandez shows a video of herself using a HoloLens to create her works of art and then performs a live demonstration.
Check it out:
I’ve yet to see Guillermo del Toro’s watery creation, but I have every intention of doing so, even though this 5-minute peek into its making seems to include some spoilers. Enjoy:
CG supervisor Trey Harrell shared some of the team’s thinking with TechCrunch:
“We weren’t going for a hyperreal, CG creature,” he said. “We wanted it to be something that plausibly looked like foam latex and silicone prosthetics, a performance that could plausibly be shot on the day.” […]
That doesn’t mean Harrell is always in favor of practical, or practical-looking, effects: “You’re starting to see smaller, more personal projects take the best of both worlds. I personally don’t think it’s a binary argument. I’ve also got a background in practical prosthetics and makeup. I’m a fan of having a big toolbox.”
Artist Jonathan Yeo used Google Tilt Brush + 3D scanning to create his latest self-portrait, which he then cast in bronze. He tells Wired,
“It’s an incredible 3D sketch book,” says Yeo, 46. “The thing about VR that I think is really powerful is that you can draw freely in space. You don’t have to shape things like stone or clay. You can make these sweeping movements, like painting. It’s a hybrid of painting and sculpture, which is something that would have been impossible to do before.”
In this behind-the-scenes video, he explains how he used these new tools to create the sculpture.
I stumbled across this weirdly charming account of how Brian Eno wrote the 3.25-second Windows 95 startup sound:
The idea came up at the time when I was completely bereft of ideas. I’d been working on my own music for a while and was quite lost, actually. And I really appreciated someone coming along and saying, “Here’s a specific problem – solve it.”
The thing from the agency said, “We want a piece of music that is inspiring, universal, blah-blah, da-da-da, optimistic, futuristic, sentimental, emotional,” this whole list of adjectives, and then at the bottom it said “and it must be 3 1⁄4 seconds long.”
I thought this was so funny and an amazing thought to actually try to make a little piece of music. It’s like making a tiny little jewel.
In fact, I made eighty-four pieces. I got completely into this world of tiny, tiny little pieces of music. I was so sensitive to microseconds at the end of this that it really broke a logjam in my own work. Then when I’d finished that and I went back to working with pieces that were like three minutes long, it seemed like oceans of time.
Oh, and he wrote it on a Mac.
“I do love how Nintendo’s response to photogrammetry and photoreal 4K whatever is ‘Imagination,’” I saw tweeted the other day, and it’s true. Will the cardboard-based Labo, which extends Switch devices & their Joy-Con controllers (now “Toy-Cons”!), be a hit with my kids & others? I have no idea—but I love that Nintendo has the guts & wit to try.