Monthly Archives: June 2018

Photography: Kilauea puts on a show

Mick Kalber was willing to stick his neck out—literally—to offer a glimpse into Hawaii’s explosive landscape. I’m struck by the visual variety of the flows (seemingly crunchy, creamy, crusted, and more):

The Volcano Goddess Pele is continually erupting hot liquid rock into the channelized rivers leading to the Pacific Ocean. Most of the fountaining activity is still confined within the nearly 200-foot high spatter cone she has built around that eruptive vent. Her fiery fountains send 6-9 million cubic meters of lava downslope every day… a volume difficult to even wrap your mind around!

More flyovers are here.


[Vimeo] [Via]

Adobe’s working to detect Photoshopping

Who better to sell radar detectors than the people who make radar guns?

From DeepFakes (changing faces in photos & videos) to Lyrebird (synthesizing voices) to video puppetry, a host of emerging tech threatens to further undermine trust in what’s recorded & transmitted. With that in mind, the US government’s DARPA has gotten involved:

DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.

Against that backdrop, I like seeing that Adobe's jumping in to detect the work of its own & others' tools:


[YouTube] [Via]

AR soccer on your table? Google & Facebook researchers make it happen

Can low-res YouTube footage be used to generate a 3D model of a ballgame—one that can then be visualized from different angles & mixed into the environment in front of you? Kinda, yeah!

Per TechCrunch,

The “Soccer On Your Tabletop” system takes as its input a video of a match and watches it carefully, tracking each player and their movements individually. The images of the players are then mapped onto 3D models “extracted from soccer video games,” and placed on a 3D representation of the field. Basically they cross FIFA 18 with real life and produce a sort of miniature hybrid.
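The "placed on a 3D representation of the field" step boils down to a planar mapping: once you know the homography between the broadcast camera and the pitch, each tracked player's foot pixel can be lifted into field coordinates. A minimal sketch with a made-up homography matrix (the real system would estimate it from the pitch lines):

```python
import numpy as np

# Hypothetical image-to-field homography (pixels -> meters).
# In practice this is estimated from the visible pitch markings.
H = np.array([
    [0.10, 0.00,  -20.0],
    [0.00, 0.12,  -15.0],
    [0.00, 0.001,   1.0],
])

def image_to_field(u, v):
    """Project a pixel (e.g. a player's foot position) onto the field plane."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# A player standing at pixel (640, 360) lands at some (x, y) on the pitch:
print(image_to_field(640, 360))
```

With field positions in hand, the 3D player models extracted from the game can be dropped at the right spots on the virtual pitch.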



Ohhhhhh yeaahhhh: ML produces super slow mo

Last year Google’s Aseem Agarwala & team showed off ways to synthesize super creamy slow-motion footage. Citing that work, a team at NVIDIA has managed to improve upon the quality, albeit taking more time to render results. Check it out:

[T]he team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames-per-second. Once trained, the convolutional neural network predicted the extra frames.


[YouTube] [Via]
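For context on what the network is replacing: the naive way to manufacture in-between frames is a simple cross-fade, which ghosts badly whenever anything moves. A minimal baseline sketch (the NVIDIA network instead *predicts* the intermediate frames, using learned optical flow to move pixels rather than blend them):

```python
import numpy as np

def interpolate_frames(f0, f1, n_mid=7):
    """Naive baseline: linearly cross-fade between two frames to create
    n_mid intermediate frames (e.g. turning 30 fps into 240 fps)."""
    steps = np.linspace(0, 1, n_mid + 2)[1:-1]  # exclude the two endpoints
    return [(1 - t) * f0 + t * f1 for t in steps]

f0 = np.zeros((4, 4, 3))  # stand-in "frame": all black
f1 = np.ones((4, 4, 3))   # stand-in "frame": all white
mids = interpolate_frames(f0, f1)
print(len(mids))  # 7 in-between frames
```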

Demo: Synthesizing new views from your multi-lens phone images

You know the “I forced a bot to…” meme? Well, my colleagues Noah & team actually did it, forcing bots to watch real estate videos (which feature lots of stable, horizontal tracking shots) in order to synthesize animations between multiple independent images—say, the ones captured by a multi-lens phone:

We call this problem stereo magnification, and propose a learning framework that leverages a new layered representation that we call multiplane images (MPIs). Our method also uses a massive new data source for learning view extrapolation: online videos on YouTube.

Check out what it can enable:




VR180 Creator, a new video tool from Google

Sounds handy for storytellers embracing new perspectives:

VR180 Creator currently offers two features for VR videos. “Convert for Publishing” takes raw fisheye footage from VR180 cameras like the Lenovo Mirage Camera and converts it into a standardized equirect projection. This can be edited with the video editing software creators already use, like Adobe Premiere and Final Cut Pro. “Prepare for Publishing” re-injects the VR180 metadata after editing so that the footage is viewable on YouTube or Google Photos in 2D or VR.
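At its core, that fisheye-to-equirect conversion is a per-pixel remap: for each pixel of the output equirect image you compute a viewing direction, then find where that direction lands in the fisheye frame. A simplified sketch assuming an ideal equidistant ("f-theta") lens with a 180° field of view — real cameras ship calibrated lens profiles:

```python
import numpy as np

def equirect_to_fisheye_uv(lon, lat, fov=np.pi):
    """Map an equirect direction (longitude, latitude in radians) to
    normalized (u, v) coordinates in an equidistant fisheye image."""
    # Unit viewing direction for this equirect pixel
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1, 1))  # angle off the optical axis
    r = theta / (fov / 2)                 # equidistant projection
    phi = np.arctan2(y, x)
    return 0.5 + 0.5 * r * np.cos(phi), 0.5 + 0.5 * r * np.sin(phi)

print(equirect_to_fisheye_uv(0.0, 0.0))  # center of view -> image center (0.5, 0.5)
```

The "Prepare for Publishing" step is simpler bookkeeping: it writes the spherical-video metadata back into the edited file so players like YouTube know how to display it.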

You can learn more about how to use VR180 Creator here, and you can download it here.