Nifty, even if it doesn’t include the actual images produced on-device. More details.
Today we’re announcing that VR Creator Lab is coming to London. Participants will receive between $30,000 and $40,000 USD in funding towards their VR project, attend a three-day “boot camp” September 18–20, 2018, and receive three months of training from leading VR instructors and filmmakers.
Applications are open through 5pm British Summer Time on August 6, 2018. YouTube creators with a minimum of 10,000 subscribers and independent filmmakers are eligible.
Wow: You can fly through some amazing goals thanks to the Times graphics staff using the 3D illustration package Mental Canvas to convert single images into 3D videos. Check it out (click here if the vid below doesn’t load):
Somehow I’d never heard of Mental Canvas previously. Looks rather amazing:
[Via Mark Dochtermann & Chris Bregler]
Can low-res YouTube footage be used to generate a 3D model of a ballgame—one that can then be visualized from different angles & mixed into the environment in front of you? Kinda, yeah!
The “Soccer On Your Tabletop” system takes as its input a video of a match and watches it carefully, tracking each player and their movements individually. The images of the players are then mapped onto 3D models “extracted from soccer video games,” and placed on a 3D representation of the field. Basically they cross FIFA 18 with real life and produce a sort of miniature hybrid.
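A core building block in systems like this is calibrating the camera against the field markings, which yields a homography between the ground plane and the image; a tracked player's foot position can then be lifted from pixels back onto the field. Here's a minimal sketch of that one step in pure Python. The function names and the example homography are my own illustration, not code from the paper, which also does per-player depth estimation with a CNN to pose full 3D bodies:

```python
def invert3(m):
    """Invert a 3x3 matrix (list of lists) via the adjugate formula."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    A = e * i - f * h          # cofactors of the first column
    B = f * g - d * i
    C = d * h - e * g
    det = a * A + b * B + c * C
    return [[A / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [B / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [C / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def image_to_field(H, u, v):
    """Map an image pixel (u, v) to field-plane coordinates, given a
    homography H that sends field coordinates to image coordinates."""
    Hi = invert3(H)
    X = Hi[0][0] * u + Hi[0][1] * v + Hi[0][2]
    Y = Hi[1][0] * u + Hi[1][1] * v + Hi[1][2]
    W = Hi[2][0] * u + Hi[2][1] * v + Hi[2][2]
    return (X / W, Y / W)     # normalize homogeneous coordinates

# Hypothetical homography: field (X, Y) -> image (2X + 10, 3Y + 20)
H = [[2, 0, 10], [0, 3, 20], [0, 0, 1]]
print(image_to_field(H, 14, 26))   # a player's feet at pixel (14, 26)
```

In the real system, H would be estimated from detected field lines rather than hard-coded, and the lifted positions drive game-engine character models.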
Sounds handy for storytellers embracing new perspectives:
VR180 Creator currently offers two features for VR videos. “Convert for Publishing” takes raw fisheye footage from VR180 cameras like the Lenovo Mirage Camera and converts it into a standardized equirectangular projection. This can be edited with the video editing software creators already use, like Adobe Premiere and Final Cut Pro. “Prepare for Publishing” re-injects the VR180 metadata after editing so that the footage is viewable on YouTube or Google Photos in 2D or VR.
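For the curious, that “convert” step is essentially a per-pixel remapping between two projections. Here's a rough sketch of the inverse mapping for an idealized equidistant 180° fisheye lens. Real cameras need calibrated lens parameters, and this simplified model and all names here are my own illustration, not VR180 Creator's actual code:

```python
import math

def equirect_to_fisheye(u, v, out_w, out_h, fish_size, fov=math.pi):
    """For an output equirectangular pixel (u, v), return the source pixel
    in a square equidistant fisheye image of fish_size pixels, or None if
    the direction falls outside the lens's field of view."""
    # Equirect pixel -> longitude/latitude over a 180 x 180 degree field
    lon = (u / out_w - 0.5) * math.pi
    lat = (0.5 - v / out_h) * math.pi
    # Unit direction vector; z is the camera's optical axis
    dx = math.cos(lat) * math.sin(lon)
    dy = math.sin(lat)
    dz = math.cos(lat) * math.cos(lon)
    # Equidistant fisheye: image radius is proportional to the angle
    # between the ray and the optical axis
    theta = math.acos(max(-1.0, min(1.0, dz)))
    if theta > fov / 2:
        return None
    r = theta / (fov / 2) * (fish_size / 2)   # radius in pixels
    psi = math.atan2(dy, dx)                  # azimuth in the image plane
    return (fish_size / 2 + r * math.cos(psi),
            fish_size / 2 - r * math.sin(psi))
```

Resampling every output pixel through a lookup like this (plus the camera's real distortion profile) is all “Convert for Publishing” conceptually needs to do; the metadata step is separate bookkeeping.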
If you’re interested in making augmented reality characters feel natural in the real world, it’s well worth spending a few minutes with this tour of some key insights. I’ve heard once-skeptical Google AR artists praising it, saying, “This video is a treasure trove and every artist, designer or anyone working on front-end AR should watch it.” Enjoy, and remember to bump that lamp. 🙂
[YouTube] [Via Jeremy Cowles]
Hmm—I’m intrigued by the filmmaking-for-kids possibilities here, but deeply ambivalent about introducing screen time into one of the great (and threatened) pure-imagination oases in my kids’ lives:
LEGO + AR + Apple at WWDC! pic.twitter.com/TXlx0pyTz4
— CNET (@CNET) June 4, 2018
Up to four friends can play in the same set on four different iOS devices, and notably all of the virtual aspects of the LEGO AR app will be connected to physical LEGO sets. “We can save our entire world back into our physical set, and pick up where we left off later,” Sanders said.
Use simple sentences to add objects and give them behaviors. Say ‘I need some sheep’ to add sheep into your world. Then give the sheep something to do by saying ‘Sheep eat grass’ or ‘Sheep breed’.
Everything you add becomes part of a working system. By layering multiple objects and behaviors, you can keep increasing the complexity of your creation.
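To make the objects-plus-behaviors idea concrete, here's a toy sketch of how such a layered rule system might work. This is entirely my own illustration, not the app's actual engine:

```python
class World:
    """Toy voice-driven sandbox: named objects plus behavior rules."""

    def __init__(self):
        self.objects = {}   # object name -> count
        self.rules = []     # (actor, verb, optional target)

    def say(self, sentence):
        """Parse a simple spoken sentence into objects or rules."""
        words = sentence.lower().rstrip('.').split()
        if words[:3] == ['i', 'need', 'some']:
            # "I need some sheep" -> spawn a few sheep
            self.objects[words[3]] = self.objects.get(words[3], 0) + 3
        elif len(words) >= 2:
            # "Sheep eat grass" / "Sheep breed" -> add a behavior rule
            actor, verb = words[0], words[1]
            target = words[2] if len(words) > 2 else None
            self.rules.append((actor, verb, target))

    def tick(self):
        """Advance the simulation: each rule fires once if possible."""
        for actor, verb, target in self.rules:
            if self.objects.get(actor, 0) == 0:
                continue
            if verb == 'breed':
                self.objects[actor] += 1
            elif verb == 'eat' and self.objects.get(target, 0) > 0:
                self.objects[target] -= 1

w = World()
w.say("I need some sheep")
w.say("I need some grass")
w.say("Sheep breed")
w.say("Sheep eat grass")
w.tick()
print(w.objects)
```

The point of the layering is that every utterance composes with what's already there: add wolves and a “Wolves eat sheep” rule and the same tick loop produces a little ecosystem.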
Everything old is new again: Anyone remember The Subservient Chicken? You could ask it to perform more than 300 commands, a list of which lives on Wikipedia, because the internet is magic. Anyway, driving things via voice for its own sake is generally cool but gimmicky, but I know someone will do it well.
[Vimeo] [Via Mike Rotondo]
Hmm: I foresee having fun creating & donning our son’s infamous “Henry Face” and using it as a puppet. The combo of 2D stickers + 3D faces (jump to 5:52) makes me wonder whether Bitmoji, which already exist in a limited 3D form, might gain the ability to pair 3D face avatars with 2D preset reaction artwork: the age-old “put your face through a hole in a painted board” tourist-photo idea, brought further to life.