All posts by jnack

Adobe’s working to detect Photoshopping

Who better to sell radar detectors than the people who make radar guns?

From DeepFakes (changing faces in photos & videos) to Lyrebird (synthesizing voices) to video puppetry, a host of emerging tech threatens to further undermine trust in what’s recorded & transmitted. With that in mind, the US government’s DARPA has gotten involved:

DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.

Against that backdrop, I like seeing that Adobe's jumping in to detect the work of its own & others' tools:

[YouTube] [Via]

AR soccer on your table? Google & Facebook researchers make it happen

Can low-res YouTube footage be used to generate a 3D model of a ballgame—one that can then be visualized from different angles & mixed into the environment in front of you? Kinda, yeah!

Per TechCrunch,

The “Soccer On Your Tabletop” system takes as its input a video of a match and watches it carefully, tracking each player and their movements individually. The images of the players are then mapped onto 3D models “extracted from soccer video games,” and placed on a 3D representation of the field. Basically they cross FIFA 18 with real life and produce a sort of miniature hybrid.
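The paper's actual pipeline estimates per-player depth with a network trained on video-game footage, but the "place players on a 3D field" step has a familiar core: mapping image-space detections onto the pitch plane. Here's a minimal, hypothetical sketch of that one piece in Python (the homography values are made up for illustration, and this isn't the authors' exact formulation):

```python
import numpy as np

def image_to_pitch(points_uv: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map image-space points (N, 2) onto pitch-plane coordinates via a 3x3 homography."""
    pts_h = np.hstack([points_uv, np.ones((len(points_uv), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T                                          # project through H
    return mapped[:, :2] / mapped[:, 2:3]                         # perspective divide

# Hypothetical homography (image pixels -> pitch meters), e.g. recovered from
# field-line correspondences; real values depend on the broadcast camera.
H = np.array([[0.05, -0.01,  -30.0],
              [0.00,  0.12,  -40.0],
              [0.00,  0.001,   1.0]])

# Tracked foot positions of two players, in pixels.
feet_px = np.array([[640.0, 520.0],
                    [910.0, 480.0]])

print(image_to_pitch(feet_px, H))  # approximate (x, y) positions on the pitch, in meters
```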

[YouTube]

Ohhhhhh yeaahhhh: ML produces super slow mo

Last year Google’s Aseem Agarwala & team showed off ways to synthesize super creamy slow-motion footage. Citing that work, a team at NVIDIA has managed to improve upon the quality, albeit taking more time to render results. Check it out:

[T]he team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames-per-second. Once trained, the convolutional neural network predicted the extra frames.
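For context on what the network is solving: the naive way to make in-between frames is to cross-fade neighboring frames, which produces ghosting rather than real motion. The sketch below shows only that baseline (plain NumPy, dummy frames); the NVIDIA model instead predicts motion-compensated intermediate frames.

```python
import numpy as np

def crossfade_midframes(frame_a: np.ndarray, frame_b: np.ndarray, n: int) -> list:
    """Naively blend n intermediate frames between frame_a and frame_b.

    This is the ghosting-prone baseline; the learned approach instead
    predicts motion-compensated in-between frames.
    """
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    ts = np.linspace(0.0, 1.0, n + 2)[1:-1]  # interior timestamps only
    return [((1.0 - t) * a + t * b).astype(frame_a.dtype) for t in ts]

# Example: two dummy 720p RGB frames, 7 in-betweens => 8x slow motion.
f0 = np.zeros((720, 1280, 3), dtype=np.uint8)
f1 = np.full((720, 1280, 3), 255, dtype=np.uint8)
mids = crossfade_midframes(f0, f1, n=7)
print(len(mids), mids[0].shape)
```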

[YouTube] [Via]

Demo: Synthesizing new views from your multi-lens phone images

You know the “I forced a bot to…” meme? Well, my colleagues Noah & team actually did it, forcing bots to watch real estate videos (which feature lots of stable, horizontal tracking shots) in order to synthesize animations between multiple independent images—say, the ones captured by a multi-lens phone:

We call this problem stereo magnification, and propose a learning framework that leverages a new layered representation that we call multiplane images (MPIs). Our method also uses a massive new data source for learning view extrapolation: online videos on YouTube.
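In case "multiplane images" is unfamiliar: an MPI is a stack of fronto-parallel RGBA planes at fixed depths, and a view is rendered by warping each plane and alpha-compositing the stack back to front. Here's a minimal sketch of just the compositing step (plain NumPy, random data; the per-plane warps for a novel viewpoint are omitted):

```python
import numpy as np

def composite_mpi(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Alpha-composite an MPI back to front with the standard 'over' operator.

    colors: (D, H, W, 3) RGB planes, ordered nearest-to-farthest.
    alphas: (D, H, W, 1) per-plane opacity in [0, 1].
    Returns an (H, W, 3) rendered image. (Per-plane homography warps for a
    novel viewpoint would be applied before this step; omitted here.)
    """
    out = np.zeros(colors.shape[1:], dtype=np.float32)
    for rgb, a in zip(colors[::-1], alphas[::-1]):  # farthest plane first
        out = rgb * a + out * (1.0 - a)             # 'over' composite
    return out

# Example: a 32-plane MPI at tiny resolution, filled with random content.
D, H, W = 32, 90, 160
rng = np.random.default_rng(0)
img = composite_mpi(rng.random((D, H, W, 3), dtype=np.float32),
                    rng.random((D, H, W, 1), dtype=np.float32))
print(img.shape)  # (90, 160, 3)
```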

Check out what it can enable:

[YouTube]

VR180 Creator, a new video tool from Google

Sounds handy for storytellers embracing new perspectives:

VR180 Creator currently offers two features for VR videos. “Convert for Publishing” takes raw fisheye footage from VR180 cameras like the Lenovo Mirage Camera and converts it into a standardized equirect projection. This can be edited with the video editing software creators already use, like Adobe Premiere and Final Cut Pro. “Prepare for Publishing” re-injects the VR180 metadata after editing so that the footage is viewable on YouTube or Google Photos in 2D or VR.
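The announcement doesn't spell out the lens model, but the "fisheye to equirect" step is conceptually simple: for each output longitude/latitude, look up the corresponding fisheye pixel. Below is a rough sketch assuming an ideal 180° equidistant fisheye; real footage would need the camera's actual calibration (and stereo handling for the two eyes):

```python
import numpy as np

def fisheye_to_equirect(fish: np.ndarray, out_h: int = 1024) -> np.ndarray:
    """Remap a square 180-degree equidistant-fisheye image to equirectangular.

    fish: (S, S, 3) image whose circular fisheye fills the frame.
    Returns an (out_h, out_h, 3) equirect covering 180 deg x 180 deg.
    Assumes an ideal equidistant lens; real cameras need calibration.
    """
    s = fish.shape[0]
    cx = cy = (s - 1) / 2.0
    radius = s / 2.0

    # Longitude/latitude grids spanning [-pi/2, pi/2] for a 180-degree view.
    lon = np.linspace(-np.pi / 2, np.pi / 2, out_h)
    lat = np.linspace(np.pi / 2, -np.pi / 2, out_h)
    lon, lat = np.meshgrid(lon, lat)

    # Unit view directions (z points forward along the lens axis).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # Equidistant model: radial distance is proportional to the off-axis angle.
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from optical axis
    psi = np.arctan2(y, x)                    # azimuth around the axis
    r = theta / (np.pi / 2) * radius          # 90 deg off-axis -> image edge

    u = np.clip(cx + r * np.cos(psi), 0, s - 1).astype(int)
    v = np.clip(cy - r * np.sin(psi), 0, s - 1).astype(int)
    return fish[v, u]

# Example with a dummy frame; real use would pull frames from VR180 footage.
dummy = np.random.randint(0, 256, (2048, 2048, 3), dtype=np.uint8)
print(fisheye_to_equirect(dummy, out_h=512).shape)  # (512, 512, 3)
```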

You can learn more about how to use VR180 Creator here, and you can download it here.

[Via]

ZOMG, Lego Hasselblad!

Oh myyyyyy…

Per PetaPixel (which features a great gallery of images):

In all, the build took Sham about 2 hours and used 1,120 different pieces. Sham says she’s hoping to create a system in which you can create photos using the LEGO camera and a smartphone.

Sham has submitted her Hasselblad build to LEGO Ideas, LEGO’s crowdsourced system for suggesting future LEGO kits. LEGO has already selected Sham’s build as a “Staff Pick.” If Sham’s project attracts 10,000 supporters (it currently has around 500 at the time of this writing), then it will be submitted for LEGO Review, during which LEGO decision makers will hand-pick projects to become new official LEGO Ideas sets.

[YouTube]

The AI will see you now—specifically, right through your wall

Fascinating:

The system works because those radio waves can penetrate objects like a wall, then bounce off a human body—which is mostly water, no friend to radio wave penetration—and travel back through the wall and to the device. “Now the challenge is: How do you interpret it?” Katabi says. That’s where the AI comes into play.

Now maybe we can get it running in your web browser, too. 🙂

[YouTube]

“Bumping the Lamp”: AR storytelling insights from Roger Rabbit (for real!)

If you’re interested in making augmented reality characters feel natural in the real world, it’s well worth spending a few minutes with this tour of some key insights. I’ve heard once-skeptical Google AR artists praising it, saying, “This video is a treasure trove and every artist, designer or anyone working on front-end AR should watch it.” Enjoy, and remember to bump that lamp. 🙂 

[YouTube] [Via Jeremy Cowles]