Monthly Archives: April 2018

Wiral: A clever, lightweight wire rig for your camera

What an amazing time we live in for visual storytellers. In my day, says Cranky Old Man Nack, we used to beg our parents for giant, heavy camcorders that we couldn’t afford. Now it’s possible to mount a phone or action camera to this lightweight, powered cable rig and capture amazing moving shots. Check it out:

Tested by athletes, filmmakers and explorers all over the world, and brands such as Red Bull, GoPro and Nitro Circus. Wiral LITE will bring new angles to filming and make the impossible shots possible. It enables filming where drones might be difficult or even illegal to use, such as the woods, indoors and over people.




Death to the green screen? ARwall promises big things.

Want to “photograph an imagined cyberpunk world in-camera without any post-production”? ARwall promises you can through “a visual effects compositing technology that combines new mixed-reality screens along with the oldest (and maybe best) trick in the filmmaking book.”

This piece delves into the inception and development process, and shows production test footage of the first-ever real-time, in-camera, real-light, real-lens, perspective-adapting, mixed-reality, rear-screen compositing technique.


[Vimeo] [Via Luca Prasso]

NVIDIA’s giving me the Content-Aware Feels

Crazy to think that it’s been 8+ years since Photoshop added Content-Aware Fill—itself derived from earlier PatchMatch technology. Time & tech march onward, and new NVIDIA research promises to raise the game. As PetaPixel notes,

What’s amazing about NVIDIA’s system compared to the current “Content-Aware Fill” in Photoshop is that NVIDIA’s tool doesn’t just glean information from surrounding pixels to figure out what to fill holes with — it understands what the subject should look like.

Check it out:

Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that has holes or is missing pixels. The method can also be used to edit images by removing content and filling in the resulting holes. Learn more about their research paper “Image Inpainting for Irregular Holes Using Partial Convolutions.”
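The paper’s core trick, a “partial convolution” that averages over only the valid (non-hole) pixels and progressively shrinks the hole mask, is simple enough to sketch in NumPy. This is a toy single-channel version for intuition only, not the authors’ implementation:

```python
import numpy as np

def partial_conv2d(x, mask, weight, bias=0.0):
    """One partial-convolution step (toy, single channel).

    x:      H x W image with holes
    mask:   H x W array, 1.0 where pixels are valid, 0.0 inside holes
    weight: k x k kernel, applied only to valid pixels
    """
    k = weight.shape[0]
    pad = k // 2
    xp = np.pad(x * mask, pad)   # zero out hole pixels before convolving
    mp = np.pad(mask, pad)
    out = np.zeros_like(x, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            valid = mp[i:i + k, j:j + k].sum()
            if valid > 0:
                patch = xp[i:i + k, j:j + k]
                # Renormalize by the fraction of valid pixels under the kernel
                out[i, j] = (weight * patch).sum() * (k * k / valid) + bias
                new_mask[i, j] = 1.0  # this output pixel now counts as filled
    return out, new_mask
```

Stacking many such layers lets the network “grow” plausible content inward from the hole’s edges, which is why irregular holes pose no special problem.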


[YouTube] [Via John Lin]

Drone life: Point Reyes from above

I remain a rank amateur when it comes to filming with a drone, but my skills are creeping upward when I get a chance to film. Last week we visited Point Reyes & spent a bit of time exploring the famous (and now sadly half-burnt) Point Reyes shipwreck. Besides taking a few shots with my DSLR, I was able to grab some fly-by footage, below.


A few things I’ve learned:

  • I wish I’d taken a few minutes to learn about Point of Interest Mode, which you can invoke easily via the controller (see another great little tutorial on that). It would’ve made getting these orbiting shots far easier & the results much smoother.
  • They say that “Every unhappy family is unhappy in its own way,” and nearly every 360º stitching attempt with Lightroom or Camera Raw craps out in some uniquely ghoulish manner. (I’m presently gathering materials to share with my Adobe friends.) Having said that, I have a certain affection for the weird result it produced below. ¯\_(ツ)_/¯
  • The PTGui trial seems to handle the images fine, but on principle I don’t feel like paying $125-$250 for a feature that should work in the apps I’m already paying for.
  • Consequently I’m finding it much better to stitch panos directly on-device using the DJI Go app. (Even that doesn’t always work, sometimes stalling out for no discernible reason.) I’m also finding it impossible to load pano images back onto the SD card and stitch them in the app, so take the opportunity while you can.
  • The stitched results often (always?) fail to register as panos in Facebook & Google Photos, so I use the free Exif Fixer utility to tweak their metadata. It’s all kind of an elaborate pain in the A, but I’m sticking with this flow until I find a smoother one.
Tips, tricks, and feedback are most welcome!
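For the curious, the fix Exif Fixer applies boils down to declaring the pano’s projection in GPano XMP metadata, which is what Facebook & Google Photos look for. Here’s a bare-bones sketch of injecting that packet by hand (tag names are from Google’s photo-sphere metadata spec; a real tool handles many edge cases this toy version ignores):

```python
import struct

XMP_HEADER = b"http://ns.adobe.com/xap/1.0/\x00"

def gpano_xmp_packet(width, height):
    # Minimal XMP declaring an equirectangular projection.
    xmp = f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:GPano="http://ns.google.com/photos/1.0/panorama/"
   GPano:ProjectionType="equirectangular"
   GPano:FullPanoWidthPixels="{width}"
   GPano:FullPanoHeightPixels="{height}"/>
 </rdf:RDF>
</x:xmpmeta>"""
    return xmp.encode("utf-8")

def add_gpano(jpeg_bytes, width, height):
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    payload = XMP_HEADER + gpano_xmp_packet(width, height)
    # APP1 segment: marker, big-endian length (includes the 2 length bytes)
    segment = b"\xff\xe1" + struct.pack(">H", len(payload) + 2) + payload
    # Insert the XMP segment immediately after the SOI marker.
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]
```

In practice something like exiftool (or Exif Fixer itself) is the safer route; this just shows there’s no magic in the “fix.”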


Update: Here’s the boat from above (fullscreen):


Tangentially related: I shot this 360º amidst the redwoods where we camped. I stitched it in-app, then uploaded it to Google Maps so that I could embed the interactive pano (see fullscreen).


Creating a 3D model of you via a simple video

Check out this neat technique in action:

Science notes,

The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it had an average accuracy within 5 millimeters, they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.



Google puts tiny Street View cars into Hamburg’s Miniatur Wunderland

Heh—this crazy project provides an extreme close-up on the incredible craftsmanship & patience of the creators of Germany’s famous miniature world. The team writes,

Street View cameras have floated on gondolas in Venice, ridden on camels in the Liwa Desert and soared on snowmobiles on Canadian slopes. But to capture the nooks and crannies in Miniatur Wunderland, we worked with our partner at Ubilabs to build an entirely new—and much smaller—device. Tiny cameras were mounted on tiny vehicles that were able to drive the roads and over the train tracks, weaving through the Wunderland’s little worlds to capture their hidden treasures.

Check out the results.




Design: SNL nails the real use of “athletic wear”

Heh—it’s often fun to see the gap between what products are designed (or at least marketed) to do, and how they’re actually used. Saturday Night Live delightfully skewered yoga pants in this pitch for “Pro-Chiller Leggings,” which can be worn as “pants, pajamas, and a napkin” (reminding me of “a floor wax and a dessert topping!”). Now “just sit the hell down and chill”:



New AR stickers in Motion Stills

My team has just added some fun new characters to Motion Stills for Android. 9to5Google writes:

A dog (clear favorite), UFO, heart, basketball, and spider join the dinosaur, chicken, alien, gingerbread man, planet, and robot. The latter six stickers have been slightly rearranged, while the new ones are at the beginning of the carousel.

Enjoy! And let us know what else you’d like to see.


Beautiful projection mapping, with robots!

Another from the “Awesome Past Lives I Never Knew My Colleagues Had” Files: I just learned that Tarik Abdel-Gawad, with whom I’ve been collaborating on AR stuff, programmed & performed the amazing “Box” projection-mapping robot demo with Bot & Dolly before Google acquired that company. It’s now a few years old but no less stunning:


Bot & Dolly produced this work to serve as both an artistic statement and technical demonstration. It is the culmination of multiple technologies, including large scale robotics, projection mapping, and software engineering. We believe this methodology has tremendous potential to radically transform theatrical presentations, and define new genres of expression.

Check out this peek behind the scenes:

[YouTube 1 & 2]

“Not Hot Dog”… but ramen, or hummus?

Google had some dorky fun recently with an April Fool’s announcement of a Cloud Hummus API:

Silly, yes—but apparently not as far-fetched as you’d think. Check out computer vision being trained to identify ramen by shop:

Recently, data scientist Kenji Doi used machine learning models and AutoML Vision to classify bowls of ramen and identify the exact shop each bowl is made at, out of 41 ramen shops, with 95 percent accuracy. Sounds crazy (also delicious), especially when you see what these bowls look like. […]

You don’t have to be a data scientist to know how to use it—all you need to do is upload well-labeled images and then click a button. In Kenji’s case, he compiled a set of 48,000 photos of bowls of soup from Ramen Jiro locations, along with labels for each shop, and uploaded them to AutoML Vision.
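The “well-labeled images” part is less mysterious than it sounds: AutoML Vision ingests a CSV mapping Cloud Storage URIs to labels. Assuming a folder-per-shop layout on disk (the bucket name and paths here are hypothetical), generating that CSV is a tiny script:

```python
import csv
import pathlib

def build_labels_csv(root_dir, bucket, out_csv):
    # Expects root_dir/<shop_name>/<photo>.jpg and writes the
    # "gs://bucket/path,label" rows that AutoML Vision ingests.
    root = pathlib.Path(root_dir)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        for img in sorted(root.glob("*/*.jpg")):
            uri = f"gs://{bucket}/{img.relative_to(root).as_posix()}"
            writer.writerow([uri, img.parent.name])
```

After that it really is upload-and-click-a-button, with the training itself handled for you.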

Days of miracles and wonder… and ramen.



Adobe’s “Project Puppetron” is now in beta

This super fun combo of style transfer & performance capture (see video below in case you missed the sneak peek last fall) is now accepting applications for beta testers:

Project Puppetron lets you capture your own face via webcam and, through a simple setup process, create a puppet of yourself in the style of a piece of referenced art.

[Y]ou perform various facial expressions and mouth shapes for lip sync, and then select the reference art and the level of stylization you want to apply to create a fully-realized, animated puppet.

Once Project Puppetron has created your puppet, you can perform your character or modify your puppet as you would any other puppet in Character Animator. Then, bring further dimension to your character’s performance with rigging, triggerable artwork, layer cycles, etc., through the broad array of tools offered in Character Animator.


[YouTube] [Via Margot Nack]

Play Where’s Waldo in Google Maps (for real!)

Heh—my 8yo Mini-Me Henry just crushed me at this game.

Starting today, you can use Google Maps to join in my amazing adventures for April Fools this week. Are you prepared for a perplexing pursuit? I’ve shared my location with you on Android, iOS and desktop (rolling out now). To start the search, simply update your app or visit on desktop. Then press play when you see me waving at you from the side of your screen. You can even ask the Google Assistant on your phone, Chromebook or Home device, “Hey Google, Where’s Waldo?” to start.