Do I know what the hell is going on here? No, of course not! (Have you ever met me? 😌) But thankfully my colleagues Noah, Richard, and co. do, and it promises a way to capture & display rich, dimensional photos (see interactive example that lets you play with parallax & see depth; more are on the site). Check it out:
Hmm—I’ve never had occasion to use this solipsistic-but-cool flight mode on my drone, but now I’m tempted to try capturing some epic dronies. (Just gotta figure out where I misplaced my moody Scottish highlands…)
I haven’t yet tried it, but sample results look impressive:
It’s free to download, but it carries a somewhat funky pricing structure. PetaPixel explains:
You’ll need to sign up for an API key through the website and be connected to the Internet while using it. You’ll be able to do 50 background removals in a small size (625×400, or 0.25 megapixels) through the plugin every month for free (and unlimited removals through the website at that size). If you work with larger volumes or higher resolutions (up to 4000×2500, or 10 megapixels), you’ll need to buy credits.
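To make the tiers above concrete, here’s a tiny sketch that classifies an image by the pixel limits quoted: 625×400 (0.25 MP) for the free tier, up to 4000×2500 (10 MP) with purchased credits. The function and tier names are my own invention for illustration; only the pixel limits come from the quote.

```python
# Hypothetical helper illustrating the size tiers described above.
# Only the pixel limits (0.25 MP free, 10 MP max) come from the article.

FREE_MAX_PIXELS = 625 * 400      # 0.25 megapixels
PAID_MAX_PIXELS = 4000 * 2500    # 10 megapixels

def removal_tier(width: int, height: int) -> str:
    """Classify an image by which background-removal tier it falls into."""
    pixels = width * height
    if pixels <= FREE_MAX_PIXELS:
        return "free"       # 50/month via the plugin, unlimited via the website
    elif pixels <= PAID_MAX_PIXELS:
        return "credits"    # larger sizes require purchased credits
    else:
        return "too-large"  # beyond the service's 10 MP ceiling

print(removal_tier(625, 400))    # free
print(removal_tier(1920, 1080))  # credits
```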
The rockstar crew behind Night Sight have created a neural network that takes a standard RGB image from a cellphone & produces a relit image, displaying the subject as though they were illuminated by a different environment map. Check out the results:
I spent years wanting & trying to get capabilities like this into Photoshop—and now it’s close to running in realtime on your telephone (!). Days of miracles and… well, you know.
Our method is trained on a small database of 18 individuals captured under different directional light sources in a controlled light stage setup consisting of a densely sampled sphere of lights. Our proposed technique produces quantitatively superior results on our dataset’s validation set compared to prior works, and produces convincing qualitative relighting results on a dataset of hundreds of real-world cellphone portraits. Because our technique can produce a 640 × 640 image in only 160 milliseconds, it may enable interactive user-facing photographic applications in the future.
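For intuition about what the network is learning, here’s the classic graphics identity it approximates — NOT the paper’s learned model, just toy Lambertian shading: given per-pixel albedo and surface normals (which the network effectively infers from a single photo), re-shade the subject under a new light direction.

```python
# Toy Lambertian relighting sketch -- a hand-rolled illustration, not the
# paper's method. Given per-pixel albedo and unit surface normals, re-shade
# the image under a new directional light: relit = albedo * max(0, n . l).
import numpy as np

def relight(albedo: np.ndarray, normals: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """albedo: (H, W, 3) in [0, 1]; normals: (H, W, 3) unit vectors;
    light_dir: (3,) direction toward the light. Returns the relit image."""
    l = light_dir / np.linalg.norm(light_dir)
    shading = np.clip(normals @ l, 0.0, None)           # (H, W) Lambert term
    return np.clip(albedo * shading[..., None], 0.0, 1.0)

# A flat gray surface facing the camera (+z normals), lit head-on:
albedo = np.full((2, 2, 3), 0.8)
normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0
print(relight(albedo, normals, np.array([0.0, 0.0, 1.0]))[0, 0])  # fully lit
```

The hard part, of course, is that the network never sees albedo or normals explicitly — it learns the whole decompose-and-reshade pipeline end to end from the light-stage data.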
Chinese 360°/VR camera company Kandao is promising 10x interpolation to create super slow-mo effects. I find the results impressive, although there are some (probably unintentionally) charming artifacts visible on the squirrel close-up below. It’d be fun to compare it to work from my teammate Aseem as well as more recent efforts from NVIDIA.
AI Slow-Motion will first appear in Kandao’s Obsidian and QooCam 360/VR cameras, but Kandao is planning to open up the tech to other cameras down the road. For now, if you own a Kandao camera, you can find the new feature in the latest QooCam Studio and in the upcoming Kandao Studio v3.0 (coming April 23rd).
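To see why “AI” interpolation matters, compare it to the trivial baseline below — a simple linear cross-fade between frames. This is NOT Kandao’s method (which presumably estimates motion, like the NVIDIA work mentioned above); naive blending ghosts moving objects instead of tracking them, which is exactly the failure the learned approaches fix.

```python
# Naive 10x "slow motion" by linear cross-fade -- the baseline that
# motion-aware AI interpolation improves upon. Frames are float arrays.
import numpy as np

def interpolate_frames(f0: np.ndarray, f1: np.ndarray, factor: int) -> list:
    """Return factor-1 in-between frames between f0 and f1
    (e.g. factor=10 for the 10x interpolation described above)."""
    return [(1 - t) * f0 + t * f1
            for t in (i / factor for i in range(1, factor))]

f0 = np.zeros((4, 4, 3))           # black frame
f1 = np.full((4, 4, 3), 100.0)     # bright frame
mids = interpolate_frames(f0, f1, 10)
print(len(mids))                    # 9 in-between frames
print(mids[4][0, 0, 0])             # 50.0 -- the halfway blend
```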