Crazy to think that it’s been 8+ years since Photoshop added Content-Aware Fill—itself derived from earlier PatchMatch technology. Time & tech march onward, and new NVIDIA research promises to raise the game. As PetaPixel notes,
What separates NVIDIA’s system from the current “Content-Aware Fill” in Photoshop is that NVIDIA’s tool doesn’t just glean information from surrounding pixels to figure out what to fill holes with — it understands what the subject should look like.
Check it out:
Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can reconstruct a corrupted image — one that has holes or is missing pixels. The method can also be used to edit images by removing content and filling in the resulting holes. Learn more in their research paper, “Image Inpainting for Irregular Holes Using Partial Convolutions.”
[YouTube] [Via John Lin]
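The core trick in the paper — the “partial convolution” — is simple to sketch: convolve only over valid (unmasked) pixels, renormalize by the fraction of valid pixels under each window, and shrink the hole in the mask as you go, so successive layers fill inward. Here’s a rough single-channel NumPy illustration (my own simplification for intuition, not NVIDIA’s code):

```python
import numpy as np

def partial_conv2d(x, mask, w, b=0.0):
    """One single-channel partial convolution (valid padding, stride 1).

    x, mask: (H, W) arrays; mask is 1 for valid pixels, 0 for holes.
    w: (kh, kw) kernel; b: scalar bias.

    Hole pixels are zeroed out of each window, the result is scaled by
    (window size / number of valid pixels) to compensate, and the output
    mask marks any position that saw at least one valid pixel — this is
    how the hole shrinks layer by layer.
    """
    kh, kw = w.shape
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            m = mask[i:i + kh, j:j + kw]
            valid = m.sum()
            if valid > 0:
                patch = x[i:i + kh, j:j + kw] * m
                out[i, j] = (w * patch).sum() * (kh * kw / valid) + b
                new_mask[i, j] = 1.0
    return out, new_mask
```

On a constant image with a small hole, the renormalization means the output matches the hole-free result exactly — which is the whole point: the layer ignores missing pixels instead of averaging in garbage zeros.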
Let’s face it, a Scottish or Irish brogue objectively makes everything more awesome. (As Ron Burgundy would say, “It’s just science.”) With that in mind, I found this short guide from Captain Cornelius (what a name!) both charming & useful:
[YouTube] [Via Guy Einy]
I remain a rank amateur when it comes to filming with a drone, but my skills are creeping upward when I get a chance to film. Last week we visited Point Reyes & spent a bit of time exploring the famous (and now sadly half-burnt) Point Reyes shipwreck. Besides taking a few shots with my DSLR, I was able to grab some fly-by footage, below.
A few things I’ve learned:
- I wish I’d taken a few minutes to learn about Point of Interest Mode, which you can invoke easily via the controller (see another great little tutorial on that). It would’ve made getting these orbiting shots far easier & the results much smoother.
- They say that “Every unhappy family is unhappy in its own way,” and nearly every 360º stitching attempt with Lightroom or Camera Raw craps out in some uniquely ghoulish manner. (I’m presently gathering materials to share with my Adobe friends.) Having said that, I have a certain affection for the weird result it produced below. ¯\_(ツ)_/¯
- The PTGui trial seems to handle the images fine, but on principle I don’t feel like paying $125-$250 for a feature that should work in the apps I’m already paying for.
- Consequently I’m finding it much better to stitch panos directly on-device using the DJI Go app. (Even that doesn’t always work, sometimes stalling out for no discernible reason.) I’m also finding it impossible to load pano images back onto the SD card and stitch them in the app—so seize the opportunity while you can.
- The stitched results often (always?) fail to register as panos in Facebook & Google Photos, so I use the free Exif Fixer utility to tweak their metadata. It’s all kind of an elaborate pain in the A, but I’m sticking with this flow until I find a smoother one.
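For the curious, the fix Exif Fixer applies amounts to injecting Google’s GPano XMP metadata into the JPEG, which is what tells Facebook & Google Photos to treat the file as an equirectangular 360º pano. A rough Python sketch of that idea (the function names and minimal XMP packet are mine; a real tool would also merge with any existing XMP segment rather than blindly adding one):

```python
import struct

# XMP payloads live in a JPEG APP1 segment prefixed with this namespace ID.
XMP_HEADER = b"http://ns.adobe.com/xap/1.0/\x00"

def gpano_xmp(width, height):
    """Minimal XMP packet declaring a full equirectangular 360 pano."""
    return f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:GPano="http://ns.google.com/photos/1.0/panorama/"
    GPano:ProjectionType="equirectangular"
    GPano:FullPanoWidthPixels="{width}"
    GPano:FullPanoHeightPixels="{height}"
    GPano:CroppedAreaImageWidthPixels="{width}"
    GPano:CroppedAreaImageHeightPixels="{height}"
    GPano:CroppedAreaLeftPixels="0"
    GPano:CroppedAreaTopPixels="0"/>
 </rdf:RDF>
</x:xmpmeta>""".encode("utf-8")

def inject_xmp(jpeg_bytes, xmp_payload):
    """Insert an APP1/XMP segment immediately after the SOI marker."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG"
    body = XMP_HEADER + xmp_payload
    # Segment length field counts itself (2 bytes) plus the body.
    segment = b"\xff\xe1" + struct.pack(">H", len(body) + 2) + body
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]
```

In practice the one-liner `exiftool -XMP-GPano:ProjectionType=equirectangular pano.jpg` does the same job if you’re comfortable on the command line.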
Tips, tricks, and feedback are most welcome!
Update: Here’s the boat from above (fullscreen):
Tangentially related: I shot this 360º amidst the redwoods where we camped. I stitched it in-app, then uploaded it to Google Maps so that I could embed the interactive pano (see fullscreen).
Check out this impressive little video from Japan. “Drone pilot Katsu FPV,” writes PetaPixel, “says the footage was shot with a 1.6-inch drone and the $80 RunCam Split Mini FPV camera, and that stabilization was applied in post.”
Rad—and I love that we can still see the Apollo 17 lander & rover on the surface!
As the visualization moves around the near side, far side, north and south poles, we highlight interesting features, sites, and information gathered on the lunar terrain.
Heh—this crazy project provides an extreme close-up on the incredible craftsmanship & patience of the creators of Germany’s famous miniature world. The team writes,
Street View cameras have floated on gondolas in Venice, ridden on camels in the Liwa Desert and soared on snowmobiles on Canadian slopes. But to capture the nooks and crannies in Miniatur Wunderland, we worked with our partner at Ubilabs to build an entirely new—and much smaller—device. Tiny cameras were mounted on tiny vehicles that were able to drive the roads and over the train tracks, weaving through the Wunderland’s little worlds to capture their hidden treasures.
Check out the results.
As I mentioned the other day, Moment is Kickstarting efforts to create an anamorphic lens for phones like Pixel & iPhone. In the quick vid below, they explain its charms—cool lens flares, oval bokeh, and more:
“The network is the computer,” and maybe the image, too: check out this sneak peek of Adobe tech that fills large holes in images by querying a database of images to find suitable matches:
Photographer Páraic McGloughlin took Google Earth satellite photos and strung them together into “Arena,” an extremely fast-paced, bird’s-eye-view animation. (Seriously, don’t watch this if you’re sensitive to strobing.)
Well, that escalated quickly: For this new set of mobile filmmaking tools (lens, battery, gimbal) Moment hit their $50k funding goal in just over half an hour, and as of this writing they’ve easily cleared the $750k mark. Check ‘em out: