Sounds handy for storytellers embracing new perspectives:
VR180 Creator currently offers two features for VR videos. “Convert for Publishing” takes raw fisheye footage from VR180 cameras like the Lenovo Mirage Camera and converts it into a standard equirectangular projection, which can then be edited with the video editing software creators already use, like Adobe Premiere and Final Cut Pro. “Prepare for Publishing” re-injects the VR180 metadata after editing so that the footage is viewable on YouTube or Google Photos in 2D or VR.
You can learn more about how to use VR180 Creator here and you can download it here.
If you’re interested in making augmented reality characters feel natural in the real world, it’s well worth spending a few minutes with this tour of some key insights. I’ve heard once-skeptical Google AR artists praising it, saying, “This video is a treasure trove and every artist, designer or anyone working on front-end AR should watch it.” Enjoy, and remember to bump that lamp. 🙂
[YouTube] [Via Jeremy Cowles]
Cool stuff, coming soon: Basically, “upload portrait-mode image, then let Facebook extrude it into a 3D model, fill in the gaps, and display it interactively a la panoramas.”
Here’s the paper.
Hmm—I’m intrigued by the filmmaking-for-kids possibilities here, but deeply ambivalent about introducing screen time into one of the great (and threatened) pure-imagination oases in my kids’ lives:
Up to four friends can play in the same set on four different iOS devices, and notably all of the virtual aspects of the LEGO AR app will be connected to physical LEGO sets. “We can save our entire world back into our physical set, and pick up where we left off later,” Sanders said.
Hmm—interesting, if embryonic:
Use simple sentences to add objects and give them behaviors. Say ‘I need some sheep’ to add sheep into your world. Then give the sheep something to do by saying ‘Sheep eat grass’ or ‘Sheep breed’.
Everything you add becomes part of a working system. By layering multiple objects and behaviors, you can keep increasing the complexity of your creation.
Everything old is new again: Anyone remember The Subservient Chicken? You could ask it to perform more than 300 commands, a list that lives on via Wikipedia, because the internet is magic. Anyway, driving things via voice for its own sake is generally cool but stupid, but I know someone will do it well.
[Vimeo] [Via Mike Rotondo]
Hmm—I foresee having fun creating & donning our son’s infamous “Henry Face” and using it as a puppet. The combo of 2D stickers + 3D faces (jump to 5:52) makes me wonder whether we might see Bitmoji, which already exist in a limited 3D form, gain the ability to pair 3D face avatars with 2D preset reaction artwork (sort of the age-old “put your face through a hole in a painted board” tourist-photo idea come more to life).
“Oh God, not another haystack,” I found myself pleading as my folks dragged my young self through Chicago’s crowded Art Institute in the ’80s. Happily, Google’s new Monet Was Here offers a much less jostling way to visit the places that inspired Monet throughout his life, from the coast to the city to the countryside, to explore his paintings by color palette, and more. Enjoy!
Google’s newly announced Cloud Anchors let users place virtual content at the same real-world location, where it can be seen across different devices. You can grab the simple, fun, open-source Just A Line app for iOS and Android to take it for a spin with a friend, or just to sketch in space solo:
Just put two phones side-by-side and tap the partner icon. Once the phones are connected, you and your partner will be able to see, and contribute to, the same drawing.
This makes Just a Line the first app that lets two people create together in AR, at the same time, across Android and iOS.
10+ years ago, I really hoped we’d get Photoshop to understand a human face as a 3D structure that one could relight, re-pose, etc. Sadly, we never got there. Last year we gave Snapseed the ability to change the orientation of a face (see GIF)—another small step in the right direction. Progress marches on, and now USC Prof. Hao Li & team have demonstrated a method for generating models with realistic skin from just ordinary input images. It’ll be fun to see where this leads (e.g. see previous).
The open-source Lantern project promises to transform any surface into AR using Raspberry Pi, a laser projector, and Android Things:
Rather than insisting that every object in our home and office be ‘smart’, Lantern imagines a future where projections are used to present ambient information and relevant UI within everyday objects. Point it at a clock to show your appointments, or point it at a speaker to display the currently playing song. Unlike a screen, when Lantern’s projections are no longer needed, they simply fade away.
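The pattern described above is essentially a dispatcher from recognized objects to contextual UI. Here's a minimal toy sketch of that idea (the object names and renderers are hypothetical; this is not from the actual Lantern codebase): a recognized object selects a renderer, and anything unrecognized simply gets no projection.

```python
# Toy dispatcher for the "point at an object, project relevant UI" idea.
# All object names and renderer functions are hypothetical illustrations.

def show_appointments():
    return "projecting: today's appointments"

def show_now_playing():
    return "projecting: current song"

# Map each recognizable object to the ambient UI it should trigger.
RENDERERS = {
    "clock": show_appointments,
    "speaker": show_now_playing,
}

def project(detected_object):
    renderer = RENDERERS.get(detected_object)
    if renderer is None:
        return "fading out"  # no relevant UI: the projection simply disappears
    return renderer()

print(project("clock"))
print(project("plant"))
```

The "fade away" behavior falls out naturally: unlike a screen, the default state is no output at all, so absence of a match means absence of UI.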