Back in the way back, the Adobe User Ed team got in trouble for publishing a Healing Brush tutorial that demonstrated how to remove watermarks (sorry, photographers!). Now bots promise to do the same, only radically faster & better:
“Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images,” NVIDIA writes. “The AI then learns how to make up the difference. This method differs because it only requires two input images with the noise or grain.
“Without ever being shown what a noise-free image looks like, this AI can remove artifacts, noise, grain, and automatically enhance your photos.”
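The core trick here (known in the research literature as Noise2Noise) is counterintuitive enough to be worth a toy sketch. This is not NVIDIA’s actual network—just a minimal NumPy illustration, with a one-parameter-per-pixel “model” standing in for the real neural net: if you train only against *noisy* targets, and the noise is zero-mean, the mean-squared-error minimizer still converges toward the clean image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "image": the clean signal the model never sees directly.
clean = rng.uniform(0.2, 0.8, size=(8, 8))

# The simplest possible stand-in for a network: one learnable value per pixel.
pred = np.zeros_like(clean)

lr = 0.1
for step in range(2000):
    # Each step uses a freshly corrupted copy of the scene as the TARGET.
    # The noise is zero-mean, so over many steps the MSE-optimal answer
    # is the clean image -- even though no clean target is ever shown.
    noisy_target = clean + rng.normal(0.0, 0.3, size=clean.shape)
    grad = 2.0 * (pred - noisy_target)   # gradient of per-pixel MSE
    pred -= lr * grad

# pred ends up far closer to `clean` than any single noisy observation is.
```

A real implementation replaces the per-pixel parameters with a convolutional network and the constant scene with a large dataset of noisy image pairs, but the averaging-away-the-noise logic is the same.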
See many more examples over on PetaPixel.
You know what’s really hard? Flying steadily in one direction while smoothly sweeping the camera around to focus on a subject, maybe climbing/descending, and maybe tilting the camera. Yeah, just kidding: it’s nearly impossible.
But maybe now*, using Course Lock mode & this guidance from Drone Film Guide, I can pull it off.
In a nutshell:
- Pick a heading & speed
- Start flying back & forth along this fixed path while varying rotation/height/tilt
- Dial down the sensitivity of your yaw control
In a second installment, Stewart goes into more detail comparing Course Lock to Tap Fly:
*”Now” is relative: Yesterday my luck finally ran out as I flew the Mavic into some telephone wires. At least it’s not at the bottom of Bixby Canyon or Three-Mile Slough, where other power lines threatened to put it on previous (mis)adventures. (“God helps old folks & fools…”) The drone took a hard bounce off the pavement, necessitating a service trip to reset the gimbal (which moves but now doesn’t respond to control inputs), but overall it’s amazingly sturdy. 💪😑
[YouTube 1 & 2]
Mick Kalber was willing to stick his neck out—literally—to offer a glimpse into Hawaii’s explosive landscape. I’m struck by the visual variety of the flows (seemingly crunchy, creamy, crusted, and more):
The Volcano Goddess Pele is continually erupting hot liquid rock into the channelized rivers leading to the Pacific Ocean. Most of the fountaining activity is still confined within the nearly 200-foot high spatter cone she has built around that eruptive vent. Her fiery fountains send 6-9 million cubic meters of lava downslope every day… a volume difficult to even wrap your mind around!
More flyovers are here.
Who better to sell radar detectors than the people who make radar guns?
From DeepFakes (changing faces in photos & videos) to Lyrebird (synthesizing voices) to video puppetry, a host of emerging tech threatens to further undermine trust in what’s recorded & transmitted. With that in mind, the US government’s DARPA has gotten involved:
DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.
With that in mind, I like seeing that Adobe’s jumping in to detect the work of its & others’ tools:
Last year Google’s Aseem Agarwala & team showed off ways to synthesize super creamy slow-motion footage. Citing that work, a team at NVIDIA has managed to improve upon the quality, albeit taking more time to render results. Check it out:
[T]he team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames-per-second. Once trained, the convolutional neural network predicted the extra frames.
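For a sense of what the trained network is competing against: the naive way to invent an in-between frame is a simple cross-fade, sketched below in NumPy. (This is a baseline for illustration, not NVIDIA’s method—their CNN instead predicts per-pixel motion, which avoids the ghosting a blend produces on anything that moves.)

```python
import numpy as np

def interpolate_midframe(frame_a, frame_b, t=0.5):
    """Naive baseline: linearly blend two frames at time t in [0, 1].
    Moving objects appear twice, semi-transparent ("ghosting"), which is
    exactly the artifact motion-aware learned interpolation fixes."""
    return (1.0 - t) * frame_a + t * frame_b

# Toy frames: a dark frame and a bright frame.
a = np.zeros((4, 4))
b = np.ones((4, 4))
mid = interpolate_midframe(a, b)   # halfway blend
```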
Hmm—is there really a big market for specialized photo-editing hardware like this? Apparently so, as Loupedeck is building a new version of its device, expanding app coverage, and promoting it with a glossy launch video:
Drone-launching skybridge, man. Drone-launching skybridge!
The twin towers will also be located in Shenzhen, Guangdong, and will feature giant quadruple-height indoor drone flight testing spaces as well as a sky bridge that will be used for showing off new drones and technologies.
Check out PetaPixel for more details.
Per PetaPixel (which features a great gallery of images):
In all, the build took Sham about 2 hours and used 1,120 different pieces. Sham says she’s hoping to create a system in which you can create photos using the LEGO camera and a smartphone.
Sham has submitted her Hasselblad build to LEGO Ideas, LEGO’s crowdsourced system for suggesting future LEGO kits. LEGO has already selected Sham’s build as a “Staff Pick.” If Sham’s project attracts 10,000 supporters (it currently has around 500 at the time of this writing), then it will be submitted for LEGO Review, during which LEGO decision makers will hand-pick projects to become new official LEGO Ideas sets.
Cool stuff, coming soon: Basically, “upload portrait-mode image, then let Facebook extrude it into a 3D model, fill in the gaps, and display it interactively a la panoramas.”
Here’s the paper.
10+ years ago, I really hoped we’d get Photoshop to understand a human face as a 3D structure that one could relight, re-pose, etc. We never got there, sadly. Last year we gave Snapseed the ability to change the orientation of a face (see GIF)—another small step in the right direction. Progress marches forward, and now USC prof. Hao Li & team have demonstrated a method for generating models with realistic skin from just ordinary input images. It’ll be fun to see where this leads (e.g. see previous).