Today, Adobe is unveiling new AI innovations across the Lightroom ecosystem — Lightroom, Lightroom Classic, and Lightroom on mobile and web — that make it easy to edit photos like a pro, so everyone can bring their creative visions to life wherever inspiration strikes. New Adobe Sensei AI-powered features enable intuitive editing and seamless workflows. Expanded adaptive presets and Select People masking categories let you adjust fine details — from the color of the sky to the texture of a person’s beard — with a single click. Additionally, new features including Denoise and Curves in masking help you do more with less, saving time so you can focus on getting the perfect shot.
Terry White vanquished a chronic photographic bummer—the blank or boring sky—by asking Firefly to generate a very specific asset (namely, an evening sky at the exact site of the shoot), then using Photoshop’s sky replacement feature to enhance the original. Check it out:
One of the great pleasures of parenting is, of course, getting to see your kids’ interests and knowledge grow, and yesterday my 13yo budding photographer Henry and I were discussing the concept of mise en scène. In looking up a proper explanation for him, I found this great article & video, which Kubrick/Shining lovers in particular will enjoy:
It’s been quiet here for a few days as my 13-year-old budding photographer son Henry & I were off at the Nevada Northern Railway’s Winter Steam Photo Weekend Spectacular. We had a staggeringly good time, and now my poor MacBook is liquefying under the weight of processing our visual haul. 🤪 I plan to share more images & observations soon from the experience (which was somehow the first photo workshop, or even proper photo class, I’ve taken!). Meanwhile, here’s a little Insta gallery of Lego Henry in action:
The ongoing California storms have beaten the hell out of beloved little communities like Capitola, where the pier & cute seaside bungalows have gotten trashed. I found this effort by local artist Brighton Denevan rather moving:
In the wake of the recent devastating storm damage to businesses in Capitola Village, local artist Brighton Denevan spent a few hours Friday on Capitola Beach sculpting the word “persevere” repeatedly in the sand to highlight a message of resilience and toughness that is a hallmark of our community. “The idea came spontaneously a few hours before low tide,” Denevan said. “After seeing all the destruction, it seemed like the right message for the moment.” Denevan has been drawing on paper since the age of 5; he picked up the rake and took to the beach canvas in 2020, and each year since he has done more projects. Last year, he created more than 200 works in the sand locally and across the globe.
Obsessive (in a good way) photographer & animator Brett Foxwell has gathered & sequenced thousands of individual leaves into a mesmerizing sequence:
This is the complete leaf sequence used in the accompanying short film LeafPresser. While collecting leaves, I conceived that the leaf shape of every single plant type I could find would fit somewhere into a continuous animated sequence of leaves if that sequence were expansive enough. If I didn’t have the perfect shape, it meant I just had to collect more leaves.
A few weeks ago I shared info on Google’s “Infinite Nature” tech for generating eye-popping fly-throughs from still images. Now that team has shared various interesting tech details on how it all works. And if reading all that isn’t your bag, hey, at least enjoy some beautiful results:
OMG—interactive 3D shadow casting in 2D photos FTW! 🔥
In this sneak, we re-imagine what image editing would look like if we used Adobe Sensei-powered technologies to understand the 3D space of a scene — the geometry of a road, the car on the road, and the surrounding trees; the lighting coming from the sun and the sky; and the interactions between all these objects that lead to occlusions and shadows — all from a single 2D photograph.
The Lightroom team has rolled out a ton of new functionality, from smarter selections to adaptive presets to performance improvements. You should read up on the whole shebang—but for a top-level look, spend a minute with Ben Warde:
And looking a bit more to the future, here’s a glimpse at how generative imaging (in the style of DALL•E, Stable Diffusion, et al) might come into LR. Feedback & ideas welcome!
In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. — each within its single high-resolution natural image provided by the user.
Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object).
“My whole life has been one long ultraviolent hyperkinetic nightmare,” wrote Mark Leyner in “Et Tu, Babe?” That thought comes to mind when glimpsing this short film by Adam Chitayat, stitched together from thousands of Street View images (see Vimeo page for a list of locations).
I love the idea — indeed, back in 2014 I tried to get Google Photos to stitch together visual segues that could interconnect one’s photos — but the pacing here has my old-man brain pulling the e-brake after just a short exposure. YMMV, so here ya go:
Easily my favorite thing at Google was getting to work with stone-cold geniuses like Noah Snavely (one of the minds behind Microsoft’s PhotoSynth) and Richard Tucker. Now they & their teammates have produced some jaw-dropping image synthesis tech:
And “hold onto your papers,” as here’s a look into how it all works: