
NVIDIA AI promises superhuman noise reduction & watermark removal

Back in the way back, the Adobe User Ed team got in trouble for publishing a Healing Brush tutorial that demonstrated how to remove watermarks (sorry, photographers!). Now bots promise to do the same, only radically faster & better:

“Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images,” NVIDIA writes. “The AI then learns how to make up the difference. This method differs because it only requires two input images with the noise or grain.

“Without ever being shown what a noise-free image looks like, this AI can remove artifacts, noise, grain, and automatically enhance your photos.”
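If you're curious how that noisy-to-noisy training trick works in practice, here's a minimal sketch of the idea in PyTorch (a toy denoiser and synthetic data of my own invention, not NVIDIA's actual model or code): the network is only ever asked to map one noisy rendering of a scene to a second, independently noisy rendering of the same scene, and the noise averages itself out.

```python
# Minimal Noise2Noise-style training sketch (toy example, not NVIDIA's code).
# The key idea: the target is itself a *noisy* copy of the image, never a clean one.
import torch
import torch.nn as nn

# A deliberately tiny convolutional denoiser (stand-in for a real U-Net).
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    # "Clean" scenes exist here only to synthesize the two noisy views;
    # the network itself never sees them.
    clean = torch.rand(8, 1, 64, 64)
    noisy_input = clean + 0.1 * torch.randn_like(clean)
    noisy_target = clean + 0.1 * torch.randn_like(clean)  # independent noise

    optimizer.zero_grad()
    loss = loss_fn(model(noisy_input), noisy_target)  # noisy compared to noisy
    loss.backward()
    optimizer.step()

# Because the target's noise is zero-mean and uncorrelated with the input's noise,
# minimizing MSE nudges the network toward predicting the underlying clean signal.
```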

See many more examples over on PetaPixel.


[YouTube]

Photography: So *that’s* how they got that drone shot

You know what’s really hard? Flying steadily in one direction while smoothly sweeping the camera around to keep a subject in frame, maybe climbing or descending, and maybe tilting the camera along the way. Yeah, just kidding: it’s nearly impossible.

But maybe now*, through the use of Course Lock mode & with this guidance from Drone Film Guide, I can pull it off.

In a nutshell:

  • Pick a heading & speed
  • Start flying back & forth along this fixed path while varying rotation/height/tilt
  • Dial down the sensitivity of your yaw control


In a second installment, Stewart goes into more detail comparing Course Lock to Tap Fly:

*”Now” is relative: Yesterday my luck finally ran out as I flew the Mavic into some telephone wires. At least it’s not at the bottom of Bixby Canyon or Three-Mile Slough, where other power lines threatened to put it on previous (mis)adventures. (“God helps old folks & fools…”) The drone took a hard bounce off the pavement, necessitating a service trip to reset the gimbal (which moves but now doesn’t respond to control inputs), but overall it’s amazingly sturdy. 💪😑 

[YouTube 1 & 2]

Photography: Kilauea puts on a show

Mick Kalber was willing to stick his neck out—literally—to offer a glimpse into Hawaii’s explosive landscape. I’m struck by the visual variety of the flows (seemingly crunchy, creamy, crusted, and more):

The Volcano Goddess Pele is continually erupting hot liquid rock into the channelized rivers leading to the Pacific Ocean. Most of the fountaining activity is still confined within the nearly 200-foot high spatter cone she has built around that eruptive vent. Her fiery fountains send 6-9 million cubic meters of lava downslope every day… a volume difficult to even wrap your mind around!

More flyovers are here.


[Vimeo] [Via]

Adobe’s working to detect Photoshopping

Who better to sell radar detectors than the people who make radar guns?

From DeepFakes (changing faces in photos & videos) to Lyrebird (synthesizing voices) to video puppetry, a host of emerging tech threatens to further undermine trust in what’s recorded & transmitted. With that in mind, the US government’s DARPA has gotten involved:

DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.
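To make “automated assessment of the integrity of an image” a little less abstract, here’s a toy error-level-analysis pass in Python with Pillow. This is my own illustration of one classic, much simpler forensic check, not MediFor’s or Adobe’s approach, and the filename is a placeholder: re-saving a JPEG at a known quality and looking at where the recompression error spikes can hint at regions that were pasted in or re-edited.

```python
# Toy error level analysis (ELA): a classic, simple forensic check.
# Not the MediFor/Adobe approach -- just an illustration of "automated assessment."
from PIL import Image, ImageChops
import io

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")

    # Re-save the image as JPEG at a known quality, entirely in memory.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    # Pixels that recompress very differently from their surroundings
    # (bright areas in the difference image) are worth a closer look.
    diff = ImageChops.difference(original, resaved)
    return diff.point(lambda px: min(255, px * scale))

if __name__ == "__main__":
    ela = error_level_analysis("suspect_photo.jpg")  # placeholder filename
    ela.save("suspect_photo_ela.png")
```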

Against that backdrop, I like seeing that Adobe’s jumping in to detect the work of its own & others’ tools:


[YouTube] [Via]

Ohhhhhh yeaahhhh: ML produces super slow mo

Last year Google’s Aseem Agarwala & team showed off ways to synthesize super creamy slow-motion footage. Citing that work, a team at NVIDIA has managed to improve upon the quality, albeit taking more time to render results. Check it out:

[T]he team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames-per-second. Once trained, the convolutional neural network predicted the extra frames.
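For a rough sense of what “predicted the extra frames” means, here’s a toy PyTorch sketch along the same lines (a tiny network and random stand-in frames of my own, nothing like NVIDIA’s actual Super SloMo architecture): take triplets of consecutive frames from high-frame-rate footage, hide the middle frame, and train the network to reconstruct it from its two neighbors.

```python
# Toy frame-interpolation training sketch (not NVIDIA's Super SloMo model).
# High-fps footage supplies free ground truth: hide the middle frame of each
# triplet and train a network to predict it from the two neighbors.
import torch
import torch.nn as nn

# Tiny CNN that maps two stacked RGB frames (6 channels) to one RGB frame.
interpolator = nn.Sequential(
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(interpolator.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(200):
    # Random tensors stand in for (frame_t, frame_t+1, frame_t+2) triplets
    # that would really come from 240 fps clips.
    frame0 = torch.rand(4, 3, 64, 64)
    middle = torch.rand(4, 3, 64, 64)   # the frame to reconstruct
    frame2 = torch.rand(4, 3, 64, 64)

    prediction = interpolator(torch.cat([frame0, frame2], dim=1))
    loss = loss_fn(prediction, middle)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At playback time, a trained interpolator is run between every pair of real
# frames (possibly several times) to synthesize the slow-motion in-betweens.
```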


[YouTube] [Via]

ZOMG, Lego Hasselblad!

Oh myyyyyy…

Per PetaPixel (which features a great gallery of images):

In all, the build took Sham about 2 hours and used 1,120 different pieces. Sham says she’s hoping to create a system in which you can create photos using the LEGO camera and a smartphone.

Sham has submitted her Hasselblad build to LEGO Ideas, LEGO’s crowdsourced system for suggesting future LEGO kits. LEGO has already selected Sham’s build as a “Staff Pick.” If Sham’s project attracts 10,000 supporters (it currently has around 500 at the time of this writing), then it will be submitted for LEGO Review, during which LEGO decision makers will hand-pick projects to become new official LEGO Ideas sets.


[YouTube]

Demo: Generating realistic 3D faces & skin from ordinary photos

10+ years ago, I really hoped we’d get Photoshop to understand a human face as a 3D structure that one could relight, re-pose, etc. We never got there, sadly. Last year we gave Snapseed the ability to change the orientation of a face (see GIF)—another small step in the right direction. Progress marches forward, and now USC prof. Hao Li & team have demonstrated a method for generating models with realistic skin from just ordinary input images. It’ll be fun to see where this leads (e.g. see previous).


[YouTube]