Category Archives: VR/AR

YouTube VR: 360º heatmaps, Creators Lab

Viewer: “Where the hell should I look?”
Creator: “Where the hell do people look?”

Making compelling 360º content—like both pimpin’ & impin’—ain’t easy. Fortunately YouTube is adding some new analytical tools:

Today we’re introducing heatmaps for 360-degree and VR videos with over 1,000 views, which will give you specific insight into how your viewers are engaging with your content. With heatmaps, you’ll be able to see exactly what parts of your video are catching a viewer’s attention and how long they’re looking at a specific part of the video.
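As a back-of-the-envelope illustration (my own sketch, not YouTube's actual pipeline), a heatmap like this boils down to binning viewers' head-pose samples onto the equirectangular video frame, then counting how often each cell gets looked at. A minimal Python version, with hypothetical gaze data:

```python
from collections import Counter

def gaze_heatmap(samples, yaw_bins=36, pitch_bins=18):
    """Bin (yaw, pitch) head-pose samples, in degrees, into a coarse
    equirectangular grid of view counts. Yaw in [-180, 180), pitch in
    [-90, 90). Returns a Counter keyed by (yaw_bin, pitch_bin)."""
    counts = Counter()
    for yaw, pitch in samples:
        x = int((yaw + 180.0) / 360.0 * yaw_bins) % yaw_bins
        y = min(int((pitch + 90.0) / 180.0 * pitch_bins), pitch_bins - 1)
        counts[(x, y)] += 1
    return counts

# Hypothetical samples: most viewers looking roughly straight ahead.
samples = [(0, 0), (2, -1), (90, 0), (1, 3)]
hot_cell, views = gaze_heatmap(samples).most_common(1)[0]
```

Stacking counts per playback timestamp rather than per video would give the "how long they're looking" dimension the announcement mentions.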

 Meanwhile they’ve started a new VR Creator Lab bootcamp:

Take your VR video creation to the next level. YouTube is taking applications for a 3-month learning and production intensive for VR creators. Participants will receive advanced education from leading VR instructors, 1:1 mentoring, and $30K – $40K in funding toward the production of their dream projects.

The application window has now closed (sorry I didn’t catch the news ’til now), but hopefully this will go well & future openings will emerge.


Lightform promises to bring AR into physical spaces

The makers of Lightform call it “the first computer made for projected augmented reality.”

Lightform scans complex scenes in under a minute, letting you seamlessly mix real objects with projected light. It’s augmented reality without the headset.

Check out a demo made with it & read more on Wired:

The small box contains a processor and a high-res camera. Hook it up to any projector through an HDMI cable, and the projector will cast a series of grids onto the room, which Lightform’s onboard camera uses to assess, in fine detail, the location and dimensions of objects in the space. (Lightform can also scan the room periodically, allowing it to create a new map if anything moves.) The processor converts that information into a 3-D map of surfaces onto which the projector can cast light. […]

In other words: Lightform helps you quickly transform almost anything in a room into a screen.
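Wired doesn't spell out Lightform's algorithm, but the "series of grids" it describes is classic structured light: each projected pattern tags every camera pixel with one bit, and decoding the stack recovers which projector column lit that pixel, which in turn yields depth by triangulation. A toy Gray-code decoder (my own sketch, not Lightform's code):

```python
def gray_to_binary(g):
    """Undo Gray coding: XOR-fold the value down to a plain binary index."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def decode_column(bits):
    """Given one camera pixel's on/off readings under each projected
    pattern (most-significant pattern first), recover the projector
    column that illuminates that pixel."""
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    return gray_to_binary(g)
```

Gray codes are preferred over plain binary stripes because adjacent columns differ by a single bit, so a misread at a stripe boundary costs at most one column of error.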


Google introduces Blocks, a VR model creation app

Check out Blocks for Vive & Oculus Rift:

You can browse example content and read Fast Company’s coverage, “Google Is Becoming The Adobe Of VR”:

Google is essentially modeling Adobe to fill some of Adobe’s own gaps. First, it acquired Tiltbrush for VR sketching. Now, it built Blocks for VR-based, 3D object creation…

Google is laying the foundation for a massive play in VR and AR, because Blocks will be the cornerstone of an Adobe-like suite of VR creation apps from Google, which will pave the way for a new wave of user-created 3D movies and interactive experiences to come.


[YouTube]

Google Tilt Brush gets a new sharing platform & more

Good stuff:

Google’s VR paint experience Tilt Brush just got a hefty update with a slew of new features that let users tweak the environment for more dynamic lighting and color options. But even more exciting: The community is getting its own social website where they can upload their art for others to download and remix themselves.

Check it out:


Google’s new Jump camera debuts; apply now to use it for free

This monster features 17 4K cameras (!) backed by cloud compute:

Footage from those cameras runs through the Jump Assembler, which uses sophisticated computer vision algorithms and the computing power of Google’s data centers to create 3D 360 video. Amazing VR videos have been made with Jump, such as The New York Times’ Great Performers collection, Within’s “The Possible” series, the NFL Immersed series, and Wevr’s “Internet Surfer” video.


 Google is looking to sponsor 100 filmmakers (you?) to use it to make epic stuff:

Jump Start gives selected filmmakers both free access to a Jump camera and free unlimited use of the Jump Assembler for their VR film. Over the next year, the program will give over 100 creators these tools and enable them to make their vision a reality. Applications to Jump Start open today, and filmmakers have until May 22nd to apply.

[YouTube]

VR/AR: Mixed-reality lessons for schools

Check out how the Peer concept aims to make abstract concepts tangible for kids:

Fast Company writes,

A lesson in aerodynamics, for instance, would start when students strap on a VR headset, like Google Cardboard or Daydream. Their teacher could then demonstrate how aerodynamics works in mixed reality before the kids remove their headsets and get to work designing windmill arms, working with their hands to create something they think will generate the most wind speed. Then, on goes the headset again. As students begin testing their windmills with a fan, embedded sensors in the windmill spindle record rotational speed, and the headset shows the students the speed of their mills.
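Peer's hardware isn't publicly documented, but the spindle-speed readout is easy to picture: a sensor on the spindle emits a pulse each revolution, and the pulse timestamps become the RPM figure the headset displays. A hypothetical sketch (the function name and sensor setup are my assumptions):

```python
def rpm_from_pulses(timestamps, pulses_per_rev=1):
    """Estimate rotational speed in RPM from sensor pulse timestamps
    (in seconds), e.g. one pulse per spindle revolution from a
    Hall-effect sensor. Returns 0.0 until two pulses have arrived."""
    if len(timestamps) < 2:
        return 0.0
    revolutions = (len(timestamps) - 1) / pulses_per_rev
    elapsed = timestamps[-1] - timestamps[0]
    return revolutions / elapsed * 60.0
```

Three pulses spread over one second is two full revolutions, i.e. 120 RPM; more pulses per revolution just scales the count down.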

Moment’s John Payne says,

“VR is often simply reduced to a storytelling medium, but we believed it could be used in a more integrated way with the real-world environment, more as a ‘tool’ than as an ‘experience.'”

It’s undoubtedly cool, and I’d love to see how students and teachers can put it to use. And beyond that, I’d love to see the tools that’d make it possible for thousands of other lessons (needed to fit a wide range of curricula) to be made in an economically sustainable way.

I’d characterize my outlook as guardedly optimistic. I’m reminded of when CD-ROM-based magazines arrived, and then when tablet-based magazines repeated the whole fantasy of “Now everyone will build/pay for rich, interactive 3D content!” They even debuted with a 3D windmill, for God’s sake. Of course the world moved differently, voting with its feet more for Snapchat stories (crude assemblies of unedited clips, shat out even by well-funded orgs like the NYT) than for highly polished, immersive creations.

And yet hope dies last, and all of us toolmakers have the privilege of trying to rebalance the scales. If it weren’t hard, it probably wouldn’t be fun. 🙂

The new VR filmmaking tools that enabled “Rogue One”

iPad + strapped-on Vive controller = realtime shot-composing! Check out these interesting homebrew tools:

Kottke writes,

In this video, we see a couple more tools the team used to facilitate the making of the film. The first is a VR video game of sorts that ILM built so that Edwards could move a virtual camera around in a virtual set to find just the right camera angles to capture the action, resulting in a process that was more flexible than traditional storyboarding.

The second tool jumped around a virtual set — a complete digital model of Jedha City — and rendered hundreds of street views from it at random. The filmmakers would then look through the scenes for interesting shots, finding compositions that looked more “natural” than something a digital effects artist might have come up with on purpose — basically massively parallel location scouting.
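The "hundreds of street views at random" step is simple to sketch: scatter virtual cameras through the set's bounding volume with random headings, then hand each pose to the renderer. A toy version (my own illustration, not ILM's tooling):

```python
import random

def random_camera_poses(n, bounds, seed=None):
    """Scatter n virtual cameras through an axis-aligned volume of a
    digital set, each with a random heading. bounds is a sequence of
    (lo, hi) pairs, one per axis. Returns (position, yaw_deg) tuples,
    each of which would drive one rendered 'street view'."""
    rng = random.Random(seed)
    poses = []
    for _ in range(n):
        position = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        yaw = rng.uniform(0.0, 360.0)
        poses.append((position, yaw))
    return poses
```

Constraining the z-range to roughly eye height (as in the hypothetical bounds below) is what keeps the renders feeling like street-level location photos rather than arbitrary flythrough frames.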


[YouTube]