Taiwan’s Alishan Forest Railway celebrates its 106th birthday this year. Check out the behind-the-scenes look at how the Google Street View team worked with a local university to help the Street View Trekker hitch a ride on the century-old railroad.
What an amazing time we live in for visual storytellers. In my day, says Cranky Old Man Nack, we used to beg our parents for giant, heavy camcorders that we couldn’t afford. Now it’s possible to mount a phone or action camera to this lightweight, powered cable rig and capture amazing moving shots. Check it out:
Tested by athletes, filmmakers and explorers all over the world, and by brands such as Red Bull, GoPro and Nitro Circus. Wiral LITE will bring new angles to filming and make the impossible shots possible. It enables filming where drones might be difficult or even illegal to use, such as the woods, indoors and over people.
Well worth the three minutes if you’re one of the billion+ users of this thing & might care about un-sending, tasks, snoozing, and more.
Coincidentally, I just got this Timehop reminder of a Gmail pillow I found at work right after I started. Apparently Gmail’s smarts are powered by advanced PBR technology. 🍻😌
Want to “photograph an imagined cyberpunk world in-camera without any post-production”? ARwall promises you can through “a visual effects compositing technology that combines new mixed-reality screens along with the oldest (and maybe best) trick in the filmmaking book.”
This piece delves into the inception and development process, and shows production test footage of the first-ever real-time, in-camera, real-light, real-lens, perspective-adapting, mixed-reality, rear-screen compositing technique.
[Vimeo] [Via Luca Prasso]
Crazy to think that it’s been 8+ years since Photoshop added Content-Aware Fill—itself derived from earlier PatchMatch technology. Time & tech march onward, and new NVIDIA research promises to raise the game. As PetaPixel notes,
What’s amazing about NVIDIA’s system, compared to the current “Content-Aware Fill” in Photoshop, is that NVIDIA’s tool doesn’t just glean information from surrounding pixels to figure out what to fill holes with — it understands what the subject should look like.
Check it out:
Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that has holes or is missing pixels. The method can also be used to edit images by removing content and filling in the resulting holes. Learn more about their research paper “Image Inpainting for Irregular Holes Using Partial Convolutions.”
[YouTube] [Via John Lin]
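For the curious, here’s roughly what “partial convolutions” means in practice. This is just a minimal PyTorch sketch of the core idea as I read it from the paper — not NVIDIA’s actual code: the convolution only looks at valid (unmasked) pixels, its output gets re-normalized by how many valid pixels sat under the kernel, and the hole mask shrinks a little with every layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Toy partial convolution in the spirit of Liu et al. 2018 (sketch only)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel that counts valid pixels under each window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: (N, 1, H, W), 1 = valid pixel, 0 = hole
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones, stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)                      # convolve valid pixels only
        bias = self.conv.bias.view(1, -1, 1, 1)
        scale = self.ones.numel() / valid.clamp(min=1)
        out = (out - bias) * scale + bias              # re-weight by valid-pixel count
        new_mask = (valid > 0).float()                 # holes shrink each layer
        return out * new_mask, new_mask

# Toy usage: a 3-channel image with a square hole punched in the middle.
img = torch.randn(1, 3, 64, 64)
mask = torch.ones(1, 1, 64, 64)
mask[:, :, 24:40, 24:40] = 0
out, new_mask = PartialConv2d(3, 16)(img, mask)
```

Stack enough of these in an encoder-decoder (the paper uses a U-Net-style network) and the holes have effectively filled themselves in by the deeper layers.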
Let’s face it, a Scottish or Irish brogue objectively makes everything more awesome. (As Ron Burgundy would say, “It’s just science.”) With that in mind, I found this short guide from Captain Cornelius (what a name!) both charming & useful:
[YouTube] [Via Guy Einy]
“What would great book covers from the past look like when set in motion?” asks animator Henning M. Lederer. “Here we go…”
I remain a rank amateur when it comes to filming with a drone, but my skills creep upward whenever I get a chance to fly. Last week we visited Point Reyes & spent a bit of time exploring the famous (and now sadly half-burnt) Point Reyes shipwreck. Besides taking a few shots with my DSLR, I was able to grab some fly-by footage, below.
A few things I’ve learned:
- I wish I’d taken a few minutes to learn about Point of Interest Mode, which you can invoke easily via the controller (see another great little tutorial on that). It would’ve made getting these orbiting shots far easier & the results much smoother.
- They say that “Every unhappy family is unhappy in its own way,” and nearly every 360º stitching attempt with Lightroom or Camera Raw craps out in some uniquely ghoulish manner. (I’m presently gathering materials to share with my Adobe friends.) Having said that, I have a certain affection for the weird result it produced below. ¯\_(ツ)_/¯
- The PTGui trial seems to handle the images fine, but on principle I don’t feel like paying $125-$250 for a feature that ought to work in the apps I’m already paying for.
- Consequently I’m finding it much better to stitch panos directly on-device using the DJI Go app. (Even that doesn’t always work, sometimes stalling out for no discernible reason.) I’m also finding it impossible to load pano images back onto the SD card and stitch them in the app—so seize the opportunity while you can.
- The stitched results often (always?) fail to register as panos in Facebook & Google Photos, so I use the free Exif Fixer utility to tweak their metadata (see a rough sketch of that fix after this list). It’s all kind of an elaborate pain in the A, but I’m sticking with this flow until I find a smoother one.
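For reference, here’s roughly what that metadata tweak boils down to. This is just a hedged sketch (in practice I use the Exif Fixer GUI): it shells out to ExifTool to write the XMP-GPano tags that Facebook & Google Photos look for when deciding whether a JPEG is an equirectangular 360º pano. The file name is made up, and the exact tag set a given service wants may vary.

```python
import subprocess
from PIL import Image

def tag_as_360(path):
    """Stamp GPano XMP tags onto a stitched equirectangular pano (sketch).

    Assumes ExifTool is installed and the image covers the full 360º x 180º
    sphere; adjust the cropped-area values if yours only covers part of it.
    """
    width, height = Image.open(path).size
    subprocess.run([
        "exiftool",
        "-XMP-GPano:ProjectionType=equirectangular",
        "-XMP-GPano:UsePanoramaViewer=true",
        f"-XMP-GPano:FullPanoWidthPixels={width}",
        f"-XMP-GPano:FullPanoHeightPixels={height}",
        f"-XMP-GPano:CroppedAreaImageWidthPixels={width}",
        f"-XMP-GPano:CroppedAreaImageHeightPixels={height}",
        "-XMP-GPano:CroppedAreaLeftPixels=0",
        "-XMP-GPano:CroppedAreaTopPixels=0",
        "-overwrite_original",
        path,
    ], check=True)

tag_as_360("pano.jpg")  # hypothetical file name
```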
Tips, tricks, and feedback are most welcome!
Update: Here’s the boat from above (fullscreen):
Tangentially related: I shot this 360º amidst the redwoods where we camped. I stitched it in-app, then uploaded it to Google Maps so that I could embed the interactive pano (see fullscreen).
Check out this neat technique in action:
The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it had an average accuracy within 5 millimeters, they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.
Check out this impressive little video from Japan. “Drone pilot Katsu FPV,” writes PetaPixel, “says the footage was shot with a 1.6-inch drone and the $80 RunCam Split Mini FPV camera, and that stabilization was applied in post.”