Monthly Archives: April 2018

Wiral: A clever, lightweight wire rig for your camera

What an amazing time we live in for visual storytellers. In my day, says Cranky Old Man Nack, we used to beg our parents for giant, heavy camcorders that we couldn’t afford. Now it’s possible to mount a phone or action camera to this lightweight, powered cable rig and capture amazing moving shots. Check it out:

Tested by athletes, filmmakers, and explorers all over the world, and by brands such as Red Bull, GoPro, and Nitro Circus. Wiral LITE will bring new angles to filming and make the impossible shots possible. It enables filming where drones might be difficult or even illegal to use, such as in the woods, indoors, and over people.




Death to the green screen? ARwall promises big things.

Want to “photograph an imagined cyberpunk world in-camera without any post-production”? ARwall promises you can through “a visual effects compositing technology that combines new mixed-reality screens along with the oldest (and maybe best) trick in the filmmaking book.”

This piece delves into the inception and development process, and shows production test footage of the first-ever real-time, in-camera, real-light, real-lens, perspective-adapting, mixed-reality, rear-screen compositing technique.


[Vimeo] [Via Luca Prasso]

NVIDIA’s giving me the Content-Aware Feels

Crazy to think that it’s been 8+ years since Photoshop added Content-Aware Fill—itself derived from earlier PatchMatch technology. Time & tech march onward, and new NVIDIA research promises to raise the game. As PetaPixel notes,

What sets NVIDIA’s system apart from the current “Content-Aware Fill” in Photoshop is that NVIDIA’s tool doesn’t just glean information from surrounding pixels to figure out what to fill holes with — it understands what the subject should look like.

Check it out:

Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that has holes or is missing pixels. The method can also be used to edit images by removing content and filling in the resulting holes. Learn more about their research paper “Image Inpainting for Irregular Holes Using Partial Convolutions.”
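The core idea of the paper’s “partial convolutions” is that each convolution looks only at valid (non-hole) pixels and renormalizes by how much of its window was valid, then marks the output pixel as filled. Here’s a rough single-channel sketch of that mechanic in plain Python — a toy illustration of the concept, not NVIDIA’s actual implementation (which uses learned multi-channel kernels in a deep network):

```python
def partial_conv(image, mask, kernel):
    """One simplified partial-convolution step (after Liu et al.):
    single channel, zero padding, no bias, no learned weights.
    `mask` is 1 for valid pixels, 0 for holes.
    Returns (output, updated_mask)."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    window = kh * kw
    out = [[0.0] * w for _ in range(h)]
    new_mask = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc, valid = 0.0, 0
            for di in range(kh):
                for dj in range(kw):
                    y, x = i + di - ph, j + dj - pw
                    if 0 <= y < h and 0 <= x < w and mask[y][x]:
                        acc += kernel[di][dj] * image[y][x]
                        valid += 1
            if valid:
                # Renormalize by the fraction of the window that was
                # valid, and mark this pixel as filled going forward.
                out[i][j] = acc * (window / valid)
                new_mask[i][j] = 1
    return out, new_mask
```

Stacking such layers shrinks the hole from its edges inward: each pass fills pixels whose neighborhood contains at least one valid pixel, so even large irregular holes eventually disappear from the mask.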


[YouTube] [Via John Lin]

Drone life: Point Reyes from above

I remain a rank amateur when it comes to filming with a drone, but my skills are creeping upward when I get a chance to film. Last week we visited Point Reyes & spent a bit of time exploring the famous (and now sadly half-burnt) Point Reyes shipwreck. Besides taking a few shots with my DSLR, I was able to grab some fly-by footage, below.


A few things I’ve learned:

  • I wish I’d taken a few minutes to learn about Point of Interest Mode, which you can invoke easily via the controller (see another great little tutorial on that). It would’ve made getting these orbiting shots far easier & the results much smoother.
  • They say that “Every unhappy family is unhappy in its own way,” and nearly every 360º stitching attempt with Lightroom or Camera Raw craps out in some uniquely ghoulish manner. (I’m presently gathering materials to share with my Adobe friends.) Having said that, I have a certain affection for the weird result it produced below. ¯\_(ツ)_/¯
  • The PTGui trial seems to handle the images fine, but on principle I don’t feel like paying $125–$250 for a feature that should work in the apps I’m already paying for.
  • Consequently I’m finding it much better to stitch panos directly on-device using the DJI Go app. (Even that doesn’t always work, sometimes stalling out for no discernible reason.) I’m also finding it impossible to load pano images back onto the SD card and stitch them in the app—so take the opportunity while you can.
  • The stitched results often (always?) fail to register as panos in Facebook & Google Photos, so I use the free Exif Fixer utility to tweak their metadata. It’s all kind of an elaborate pain in the A, but I’m sticking with this flow until I find a smoother one.
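What tools like Exif Fixer do, as I understand it, is add Google’s Photo Sphere (GPano) XMP tags so viewers recognize the file as a 360º pano. For the curious, here’s a rough sketch of doing that by hand — splicing an APP1/XMP segment after the JPEG SOI marker. The tag set and splicing approach are my assumptions about what such utilities write, not Exif Fixer’s actual internals, and the exact tags each service checks may differ:

```python
import struct

# Minimal GPano XMP packet (Google Photo Sphere metadata namespace).
GPANO_XMP = (
    '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
    '<rdf:Description rdf:about=""'
    ' xmlns:GPano="http://ns.google.com/photos/1.0/panorama/"'
    ' GPano:ProjectionType="equirectangular"'
    ' GPano:FullPanoWidthPixels="{w}"'
    ' GPano:FullPanoHeightPixels="{h}"'
    ' GPano:CroppedAreaImageWidthPixels="{w}"'
    ' GPano:CroppedAreaImageHeightPixels="{h}"'
    ' GPano:CroppedAreaLeftPixels="0"'
    ' GPano:CroppedAreaTopPixels="0"/>'
    '</rdf:RDF></x:xmpmeta>'
)

def add_gpano_xmp(jpeg_bytes, width, height):
    """Splice an APP1 segment carrying GPano XMP tags right after the
    JPEG SOI marker, so pano-aware viewers treat the file as 360."""
    if jpeg_bytes[:2] != b'\xff\xd8':
        raise ValueError("not a JPEG")
    payload = ('http://ns.adobe.com/xap/1.0/\x00'
               + GPANO_XMP.format(w=width, h=height)).encode('utf-8')
    # APP1 marker (0xFFE1) + big-endian length (includes the 2 length bytes).
    app1 = b'\xff\xe1' + struct.pack('>H', len(payload) + 2) + payload
    return jpeg_bytes[:2] + app1 + jpeg_bytes[2:]
```

In practice you’d read the stitched JPEG, pass its pixel dimensions, and write the result back out; the point is just that “fixing” the pano is a small, mechanical metadata edit.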
Tips, tricks, and feedback are most welcome!


Update: Here’s the boat from above (fullscreen):


Tangentially related: I shot this 360º amidst the redwoods where we camped. I stitched it in-app, then uploaded it to Google Maps so that I could embed the interactive pano (see fullscreen).


Creating a 3D model of you via a simple video

Check out this neat technique in action:

Science notes,

The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it had an average accuracy within 5 millimeters, they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.