Viewer: “Where the hell should I look?”
Creator: “Where the hell do people look?”
Making compelling 360º content—like both pimpin’ & impin’—ain’t easy. Fortunately YouTube is adding some new analytical tools:
Today we’re introducing heatmaps for 360-degree and VR videos with over 1,000 views, which will give you specific insight into how your viewers are engaging with your content. With heatmaps, you’ll be able to see exactly what parts of your video are catching a viewer’s attention and how long they’re looking at a specific part of the video.
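YouTube hasn't published how its heatmaps are computed, but the core idea in the quote above (aggregate where viewers are looking across a video) can be sketched by binning head-pose samples onto an equirectangular grid. Everything here (function name, grid size, sample data) is a hypothetical illustration, not YouTube's implementation:

```python
import numpy as np

def gaze_heatmap(yaw_deg, pitch_deg, width=64, height=32):
    """Bin viewer gaze directions into an equirectangular heatmap.

    yaw_deg: angles in [-180, 180); pitch_deg: angles in [-90, 90].
    Returns a (height, width) array of sample counts: hot cells are
    the parts of the 360-degree frame viewers looked at most.
    """
    yaw = np.asarray(yaw_deg, dtype=float)
    pitch = np.asarray(pitch_deg, dtype=float)
    # Map yaw/pitch onto pixel coordinates of the equirectangular grid.
    x = ((yaw + 180.0) / 360.0 * width).astype(int) % width
    y = ((90.0 - pitch) / 180.0 * height).clip(0, height - 1).astype(int)
    heat = np.zeros((height, width), dtype=int)
    np.add.at(heat, (y, x), 1)  # accumulate counts even for repeated cells
    return heat

# Synthetic example: most viewers looking near the "front" (yaw ~0, pitch ~0).
heat = gaze_heatmap([0, 1, -2, 90], [0, 5, -3, 0])
```

In a real pipeline you'd feed in many pose samples per viewer per timestamp and normalize by view count before rendering the overlay.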
Meanwhile they’ve started a new VR Creator Lab bootcamp:
Take your VR video creation to the next level. YouTube is taking applications for a 3 month learning and production intensive for VR creators. Participants will receive advanced education from leading VR instructors, 1:1 mentoring, and $30K – $40K in funding toward the production of their dream projects.
The application window has now closed (sorry I didn’t see the news ’til now), but hopefully this will go well & future openings will emerge.

The makers of Lightform call it “the first computer made for projected augmented reality.”
Lightform scans complex scenes in under a minute, letting you seamlessly mix real objects with projected light. It’s augmented reality without the headset.
Check out a demo made with it & read more on Wired:
The small box contains a processor and a high-res camera. Hook it up to any projector through an HDMI cable, and the projector will cast a series of grids onto the room, which Lightform’s onboard camera uses to assess, in fine detail, the location and dimensions of objects in the space. (Lightform can also scan the room periodically, allowing it to create a new map if anything moves.) The processor converts that information into a 3-D map of surfaces onto which the projector can cast light. […]
In other words: Lightform helps you quickly transform almost anything in a room into a screen.
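Lightform hasn't published its pattern format, but "a series of grids" that lets a camera recover where each projector pixel lands is classic structured light; Gray-code stripe patterns are the standard version of the technique. A minimal decode sketch (hypothetical function name, synthetic thresholded captures), not Lightform's actual algorithm:

```python
import numpy as np

def decode_gray_code(captures, thresh=0.5):
    """Recover per-pixel projector column indices from stripe captures.

    captures: list of (H, W) camera images, one per projected bit
    pattern, values in [0, 1]; captures[0] is the most significant bit.
    Returns an (H, W) int map: which projector column each camera
    pixel sees, the raw correspondence behind a projector-camera scan.
    """
    bits = [(img > thresh).astype(int) for img in captures]
    gray = np.zeros_like(bits[0])
    for b in bits:
        gray = (gray << 1) | b  # pack thresholded bits, MSB first
    # Gray-to-binary: XOR in successively shifted copies of the code.
    binary = gray.copy()
    mask = gray >> 1
    while mask.any():
        binary ^= mask
        mask >>= 1
    return binary
```

From that correspondence map (plus a calibrated projector/camera pair) you can triangulate depth, which is how a box like this builds its 3-D map of projectable surfaces.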
Check out Blocks for Vive & Oculus Rift:
You can browse example content and read Fast Company’s coverage, “Google Is Becoming The Adobe Of VR”:
Google is essentially modeling Adobe to fill some of Adobe’s own gaps. First, it acquired Tiltbrush for VR sketching. Now, it built Blocks for VR-based, 3D object creation…
Google is laying the foundation for a massive play in VR and AR, because Blocks will be the cornerstone of an Adobe-like suite of VR creation apps from Google, which will pave the way for a new wave of user-created 3D movies and interactive experiences to come.
Sometimes the simplest things are the most charming & amazing. Days of miracles & wonder, man.
For more fun examples follow @MadeWithARKit on Twitter.
Great Gaia! Take Elizabeth Edwards’s drawing for a spin, and check out her other 3D work made using Google Tilt Brush.
This monster features 17 4K cameras (!) backed by cloud compute:
Footage from those cameras runs through the Jump Assembler, which uses sophisticated computer vision algorithms and the computing power of Google’s data centers to create 3D 360 video. Amazing VR videos have been made with Jump, such as The New York Times’ Great Performers collection, Within’s “The Possible” series, the NFL Immersed series, and Wevr’s “Internet Surfer” video.
Google is looking to sponsor 100 filmmakers (you?) to use it to make epic stuff:
Jump Start gives selected filmmakers both free access to a Jump camera and free unlimited use of the Jump Assembler for their VR film. Over the next year, the program will give over 100 creators these tools and enable them to make their vision a reality. Applications to Jump Start open today, and filmmakers have until May 22nd to apply.
I gave up my first career as a Web animator/designer & joined Adobe specifically to build out Web standards (SVG back then) and the tools that could push & leverage them. Thus my old grinch-heart grows three sizes seeing the development of WebVR & fun experiments that show it off:
Check out how the Peer concept aims to make abstract concepts tangible for kids:
Fast Company writes,
A lesson in aerodynamics, for instance, would start when students strap on a VR headset, like Google Cardboard or Daydream. Their teacher could then demonstrate how aerodynamics works in mixed reality before the kids remove their headsets and get to work designing windmill arms, working with their hands to create something they think will generate the most wind speed. Then, on goes the headset again. As students begin testing their windmills with a fan, embedded sensors in the windmill spindle record rotational speed, and the headset shows the students the speed of their mills.
Moment’s John Payne says,
“VR is often simply reduced to a storytelling medium, but we believed it could be used in a more integrated way with the real-world environment, more as a ‘tool’ than as an ‘experience.'”
It’s undoubtedly cool, and I’d love to see how students and teachers can put it to use. And beyond that, I’d love to see the tools that’d make it possible for thousands of other lessons (needed to fit a wide range of curricula) to be made in an economically sustainable way.
I’d characterize my outlook as guardedly optimistic. I’m reminded of when CD-ROM-based magazines arrived, and then when tablet-based magazines repeated the whole fantasy of “Now everyone will build/pay for rich, interactive 3D content!” They even debuted with a 3D windmill, for God’s sake. Of course the world moved differently, voting with its feet more for Snapchat stories (crude assemblies of unedited clips, shat out even by well-funded orgs like the NYT) than for highly polished, immersive creations.
And yet hope dies last, and all of us toolmakers have the privilege of trying to rebalance the scales. If it weren’t hard, it probably wouldn’t be fun. 🙂
iPad + strapped-on Vive controller = realtime shot-composing! Check out these interesting homebrew tools:
In this video, we see a couple more tools the team used to facilitate the making of the film. The first is a VR video game of sorts that ILM built so that Edwards could move a virtual camera around in a virtual set to find just the right camera angles to capture the action, resulting in a process that was more flexible than traditional storyboarding.
The second tool jumped around a virtual set — a complete digital model of Jedha City — and rendered hundreds of street views from it at random. The filmmakers would then look through those renders for interesting shots, finding scenes that looked more “natural” than something a digital effects artist might have come up with on purpose — basically massively parallel location scouting.
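The “render hundreds of random street views, then curate” idea above is easy to sketch. This toy version (hypothetical names and parameters, not ILM's tool) just samples street-level camera poses inside a city's bounding box; each pose would then be handed to a renderer and the results reviewed by eye:

```python
import random

def random_camera_poses(n, bounds, eye_height=1.7, seed=None):
    """Sample n street-level camera poses inside axis-aligned city bounds.

    bounds: ((xmin, xmax), (zmin, zmax)) in meters. Each pose gets a
    fixed eye height and a random heading; rendering one frame per pose
    and curating the results is the "parallel location scouting" step.
    """
    rng = random.Random(seed)
    poses = []
    for _ in range(n):
        x = rng.uniform(*bounds[0])
        z = rng.uniform(*bounds[1])
        heading = rng.uniform(0.0, 360.0)  # degrees
        poses.append({"pos": (x, eye_height, z), "heading": heading})
    return poses

# 200 candidate viewpoints scattered across a 500 m x 500 m set.
poses = random_camera_poses(200, ((0, 500), (0, 500)), seed=42)
```

A real version would also reject poses that land inside geometry, but the appeal is the same: randomness surfaces framings no artist would have composed deliberately.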