Check out this eye-popping capture of the World Cup celebration in Buenos Aires, and see pilot Ale Petra’s Instagram feed for lots more FPV goodness:
The ongoing California storms have beaten the hell out of beloved little communities like Capitola, where the pier & cute seaside bungalows have gotten trashed. I found this effort by local artist Brighton Denevan rather moving:
PERSEVERE 💙 • 1-6-2023 • 3-5:30 • 8 Miles
The Santa Cruz Sentinel writes,
In the wake of the recent devastating storm damage to businesses in Capitola Village, local artist Brighton Denevan spent a few hours Friday on Capitola Beach sculpting the word “persevere” repeatedly in the sand to highlight a message of resilience and toughness that is a hallmark of our community. “The idea came spontaneously a few hours before low tide,” Denevan said. “After seeing all the destruction, it seemed like the right message for the moment.” Denevan has been drawing on paper since the age of 5; he picked up the rake and went out to the beach canvas in 2020, and each year he has done more projects. Last year, he created more than 200 works in the sand locally and across the globe.
Obsessive (in a good way) photographer & animator Brett Foxwell has gathered & sequenced thousands of individual leaves into a mesmerizing sequence:
This is the complete leaf sequence used in the accompanying short film LeafPresser. While collecting leaves, I conceived that the leaf shape of every single plant type I could find would fit somewhere into a continuous animated sequence of leaves if that sequence were expansive enough. If I didn’t have the perfect shape, it meant I just had to collect more leaves.
I meant to share this one last month, but there’s just no keeping up with the pace of progress!
My initial results are on the uncanny side, but more skillful practitioners like Paul Trillo have been putting the tech to impressive use:
OMG—interactive 3D shadow casting in 2D photos FTW! 🔥
In this sneak, we re-imagine what image editing would look like if we used Adobe Sensei-powered technologies to understand the 3D space of a scene – the geometry of a road and the car on the road, and the trees surrounding, the lighting coming from the sun and the sky, the interactions between all these objects leading to occlusions and shadows – from a single 2D photograph.
The Lightroom team has rolled out a ton of new functionality, from smarter selections to adaptive presets to performance improvements. You should read up on the whole shebang—but for a top-level look, spend a minute with Ben Warde:
And looking a bit more to the future, here’s a glimpse at how generative imaging (in the style of DALL•E, Stable Diffusion, et al) might come into LR. Feedback & ideas welcome!
It seems almost too good to be true, but Google researchers & their university collaborators have unveiled a way to edit images using just text:
In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. — each within its single high-resolution natural image provided by the user.
Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object).
I can’t wait to see it in action!
Check out Palette:
“My whole life has been one long ultraviolent hyperkinetic nightmare,” wrote Mark Leyner in “Et Tu, Babe?” That thought comes to mind when glimpsing this short film by Adam Chitayat, stitched together from thousands of Street View images (see Vimeo page for a list of locations).
I love the idea—indeed, back in 2014 I tried to get Google Photos to stitch together visual segues that could interconnect one’s photos—but the pacing here has my old man brain pulling the e-brake after just a short exposure. YMMV, so here ya go:
Easily my favorite thing at Google was getting to work with stone-cold geniuses like Noah Snavely (one of the minds behind Microsoft’s PhotoSynth) and Richard Tucker. Now they & their teammates have produced some jaw-dropping image synthesis tech:
And “hold onto your papers,” as here’s a look into how it all works:
Well… kinda? I’m feeling somewhat hoodwinked, though. The new cam promises 72-megapixel captures, compared to 18 from its predecessor. This happens via some kind of 4x upsampling, it appears, and at least right now that’s incompatible with shooting HDR images.
Thus, as you can see via the comparisons below & via these original images, I was able to capture somewhat better detail (e.g. look at text) at the cost of getting worse tonal range (e.g. see the X2 lying on top of the book).
I need to carve out time to watch the tutorial below on how to wring the best out of this new cam.
Who’s got two thumbs & just pulled the trigger? This guuuuuy. 😌
Now, will it be worth it? I sure hope so.
Fortunately I got to try out the much larger & more expensive One R 1″ Edition back in July & concluded that it’s not for me (heavier, lacking Bullet Time, and not producing appreciably better quality results—at least for the kind of things I shoot).
I’m of course hoping the X3 (successor to my much-beloved One X2) will be more up my alley. Here’s some third-party perspective:
Check out ClipDrop’s relighting app, demoed here:
The app allows you to apply professional lights to your portrait images 📸 in real time ⚡
— Onur Tasar (@onurxtasar) September 7, 2022
Fellow nerds might enjoy reading about the implementation details.
Eng manager Barry Young writes,
The latest beta build of Photoshop contains a new feature called Photo Restoration. Whenever I have seen new updates in AI photo restoration over the last few years, I have tried the technology on an old family photo that I have of my great-great-great-grandfather, a Scotsman who lived between 1845 and 1919. I applied the neural filter plus colorize technique to update the image in Photoshop. The restored photo is on the left, the original on the right. It is really astonishing how advanced AI is becoming.
Learn more about accessing the feature in Photoshop here.
This little guy looks like a kick in the pants to fly, and “turtle mode”—which gives the drone the ability to flip itself over & fly again after a collision—seems really smart:
The new open-source Stable Diffusion model is pretty darn compelling. Per PetaPixel:
“Just telling the AI something like ‘landscape photography by Marc Adamus, Glacial lake, sunset, dramatic lighting, mountains, clouds, beautiful’ gives instant pleasant looking photography-like images. It is incredible that technology has got to this point where mere words produce such wonderful images (please check the Facebook group for more).” — photographer Aurel Manea
Somehow I totally missed this announcement a few months back—perhaps because the device apparently isn’t compatible with my Mavic 2 Pro. I previously bought an Insta360 One R (which can split in half) with a drone-mounting cage, but I found the cam so flaky overall that I never took the step of affixing it to a cage that was said to interfere with GPS signals. In any event, this little guy looks fun:
Speaking of 360º vids, Stewart & Alina share a range of great points on “reframing with purpose” (serving the storytelling), plus technical details on relative sharpness (it’s much greater towards the center), color profiles, and more.
At nearly twice the price while lacking features like Bullet Time, the Insta360 One RS 1″ had better produce way better photos and videos than what come out of my trusty One X2. I therefore really appreciate this detailed side-by-side comparison. Having used both together, I don’t see a dramatic difference, but this vid certainly makes a good case that the gains are appreciable.
GANs (generative adversarial networks), like what underpins Smart Portrait in Photoshop, promise all kinds of fine-grained image synthesis and editing. Check out new advances around one’s ‘do:
[Via Davis Brown]
Synthesizing wholly new images is incredible, but as I noted in my recent podcast conversation, it may well be that surgical slices of tech like DALL•E will prove to be just as impactful—a la Content-Aware Fill emerging from a thin slice of the PatchMatch paper. In this case,
To fix the image, [Nicholas Sherlock] erased the blurry area of the ladybug’s body and then gave a text prompt that reads “Ladybug on a leaf, focus stacked high-resolution macro photograph.”
A keen eye will note that the bug’s spot pattern has changed, but it’s still the same bug. Pretty amazing.
I love seeing how Anthony Schmidt, a 13yo photographer with autism, treats his neuroatypicality & resulting hyperfocus as a blessing. It’s a point I try to gently impress upon my own obsessive son about our unusual brains. Check out Anthony’s story & his pretty damn impressive model-car photography!
Cristóbal Valenzuela from Runway ML shared a fun example of what’s possible via video segmentation & overlaying multiple takes of a trick:
- Separate yourself from the background in each clip
- Throw away all backgrounds but one, and stack up all the clips of just you (with the successful take on top).
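The stacking step above can be sketched in a few lines of NumPy. This assumes the per-clip person masks have already been produced by some segmentation model (the step Runway’s tooling handles for you); `stack_takes` and its signature are purely illustrative:

```python
import numpy as np

def stack_takes(background, takes):
    """Composite several takes of a person onto a single background.

    background: HxWx3 float array -- the one background frame we kept.
    takes: list of (frame, mask) pairs in bottom-to-top order, where
           mask is HxW in [0, 1] (1 = person). The last take lands on top.
    """
    out = background.astype(np.float64).copy()
    for frame, mask in takes:
        m = mask[..., None]              # broadcast mask over color channels
        out = frame * m + out * (1.0 - m)  # standard alpha-over composite
    return out
```

Run this per frame of the video and every take of the trick appears simultaneously in the final clip.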
Coincidentally, I just saw Russell Brown posting a fun bonus-limbed image:
Per the team blog (which lists myriad other improvements):
The same edit controls that you already use to make your photography shine can now be used with your videos as well! Not only can you use Lightroom’s editing capabilities to make your video clips look their best, you can also copy and paste edit settings between photos and videos, allowing you to achieve a consistent aesthetic across both your photos and videos. Presets, including Premium Presets and Lightroom’s AI-powered Recommended Presets, can also be used with videos. Lightroom also allows you to trim off the beginning or end of a video clip to highlight the part of the video that is most important.
And here’s a fun detail:
Video: Creative — to go along with Lightroom’s fantastic new video features, these stylish and creative presets, created by Stu Maschwitz, are specially optimized to work well with videos.
I’ll share more details as I see tutorials, etc. arrive.
Well, they do call themselves a camera company… ¯\_(ツ)_/¯ This little contraption looks incredibly lightweight (pocketable, even) and easy to use. Visual quality (particularly stabilization) seems a little borderline, but I dig its person-centric nature, including tracking & AR effects (segmentation, cloning, etc.). Check out a great review—including a man-machine “romantic montage” (!):
Great to see Adobe AI getting some love:
Adobe Super Resolution technology is the best solution I’ve yet found for increasing the resolution of digital images. It doubles the linear resolution of your file, quadrupling the total pixel count while preserving fine detail. Super Resolution is available in both Adobe Camera Raw (ACR) and Lightroom and is accessed via the Enhance command. And because it’s built-in, it’s free for subscribers to the Creative Cloud Photography Plan.
Honestly, from DALL•E innovations to classic mind-blowers like this, I feel like my brain is cooking in my head. 🙃 Take ‘er away, science:
Bonus madness (see thread for details):
My old boss on Photoshop, Kevin Connor, used to talk about the inexorable progression of imaging tools from the very general (e.g. the Clone Stamp) to the more specific (e.g. the Healing Brush). In the process, high-complexity, high-skill operations were rendered far more accessible—arguably to a fault. (I used to joke that, believe it or not, drop shadows were cool before Photoshop made them easy. ¯\_(ツ)_/¯)
I think of that observation when seeing things like the Face Swap tool from Icons8. What once took considerable time & talent in an app like Photoshop is now rendered trivially fast (and free!) to do. “Days of Miracles & Wonder,” though we hardly even wonder now. (How long will it take DALL•E to go from blown minds to shrugged shoulders? But that’s a subject for another day.)
Driving through the Southwest in 2020, we came across this dark & haunting mural showing the nearby Navajo Generating Station:
Now I see that the station has been largely demolished, as shown in this striking drone clip:
A big part of my rationale in going to Google eight (!) years ago was that a lot of creativity & expressivity hinge on having broad, even mind-of-God knowledge of one’s world (everywhere you’ve been, who’s most important to you, etc.). Given access to one’s whole photo corpus, a robot assistant could thus do amazing things on one’s behalf.
In that vein, MyStyle proposes to do smarter face editing (adjusting expressions, filling in gaps, upscaling) by being trained on 100+ images of an individual face. Check it out:
I’m not sure who captured this image (conservationist Beverly Joubert, maybe?), or whether it’s indeed the National Geographic Picture of The Year, but it’s stunning no matter what. Take a close look:
Elsewhere I love this compilation of work from “Shadowologist & filmmaker” Vincent Bal:
I know only what I’ve seen here, but this combination wireless charger & DSLR-style camera grip seems very thoughtfully designed. Its ability to function as a phone stand (e.g. for use while videoconferencing) while charging puts it over the top.
Nice to see my old team’s segmentation tech roll out more widely.
The Verge writes,
Google Photos’ portrait blur feature on Android will soon be able to blur backgrounds in a wider range of photos, including pictures of pets, food, and — my personal favorite — plants… Google Photos has previously been able to blur the background in photos of people. But with this update, Pixel owners and Google One subscribers will be able to use it on more subjects. Portrait blur can also be applied to existing photos as a post-processing effect.
Finnish photographer Juha Tanhua has been shooting an unusual series of “space photos.” While the work may look like astrophotography images of stars, galaxies, and nebulae, they were actually captured with a camera pointed down, not up. Tanhua created the images by capturing gasoline puddles found on the asphalt of parking lots.
Check out the post for more images & making-of info.
Waaaay back in the way back, we had fun enabling “Safe, humane tourist-zapping in Photoshop Extended,” using special multi-frame processing techniques to remove transient elements in images. Those techniques have remained obscure yet powerful. In this short tutorial, Julieanne Kost puts ’em to good use:
In this video (Combining Video Frames to Create Still Images), we’re going to learn how to use Smart Objects in combination with Stack Modes to combine multiple frames from a video (exported as an image sequence) into a single still image that appears to be a long exposure, yet still freezes motion.
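For the curious, the heart of the technique Julieanne demonstrates (Stack Mode: Median applied across aligned frames) boils down to a per-pixel median. Here’s a rough NumPy sketch of that idea, not Photoshop’s actual implementation: as long as any transient element (tourist, car) occupies a given pixel in only a minority of frames, the median converges on the static scene behind it.

```python
import numpy as np

def median_stack(frames):
    """Collapse an aligned image sequence into a single still.

    frames: list of HxWx3 uint8 arrays (e.g. a video exported as an
    image sequence). Objects that appear at a pixel in fewer than half
    the frames fall out of the per-pixel median, leaving the static scene.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return np.median(stack, axis=0).astype(np.uint8)
```

With enough frames this yields the long-exposure-like, motion-frozen result described above; the tripod (or an alignment pass) matters, since the median is computed per pixel.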
Roughly forever ago, when I was pushing the idea of extending the Photoshop compositing pipeline to include plug-in modules (which we did kinda succeed with in the form of 3D layers—now sadly ripped out), I loved the idea of layers that could emit & control light. We didn’t get there, of course, but I’m happy to see folks like Boris FX offering some cool new controls:
In Death Valley a couple of weeks ago, my 12yo Mini Me Henry & I had fun creating little narratives in the sand. I have to say, it’s pretty cool how far a kid can get these days with a telephone & handful of plastic bricks! Here’s a little gallery we made together.
Elsewhere, I’m perpetually amazed at what folks can do with enough time, talent, and willpower:
My son Henry & I were super hyped to join Russell Brown & his merry band last Monday at Nevada’s deeply weird International Car Forest of the Last Church for some fire photography featuring pyrotechnic artist Joseph Kerr. As luck would have it, I had to send Henry on ahead with little notice, pressing my DSLR on him before he left. Happily, I think he did a great job capturing the action!
Russell of course caught some amazing moments (see his recent posts), and you might enjoy this behind-the-scenes footage from Rocky Montez-Carr (aka Henry’s kindly chauffeur 😌🙏):
Lately I’ve been drawn to bold lighting of the face & body, both for black & white and stunning color:
Therefore I really dig this music video from Lusine, which leans into the myriad possibilities inherent in moving lights around a face:
[Via Cameron Smith]
Some 20+ years ago (cripes…), 405: The Movie became a viral smash, in part thanks to the DIY filmmakers’ trick of compositing multiple images of the busy LA freeway in order to make it look deserted.
Now (er, 8 years ago; double cripes…) Russell Houghten has used what I imagine to be similar but more modern techniques to remove car traffic from the streets, freeing up the concrete rivers for some lovely skateboarding reveries:
I’m headed out to Death Valley on Friday for some quick photographic adventures with Russell Brown & friends, and I’m really excited to try photographing with burning steel wool for the first time. I’m inspired by this tutorial from Insta360 to try shooting with my little 360º cam:
“Just don’t be horrendously disappointed if it doesn’t turn out quite like this,” advises Henry, my 12yo aspiring assistant. Fair enough, dude—but let’s take it for a spin!
If you’ve ever shot this way & have any suggestions for us, please add ’em in the comments. TIA!
I was really pleased to see Google showcase the new Magic Eraser feature in Pixel 6 marketing. Here’s a peek at how it works:
I had to chuckle & remember how, just after he’d been instrumental in shipping Content-Aware Fill in Photoshop in 2010, my teammate Iván Cavero Belaunde created a tablet version he dubbed “Trotsky,” in mock honor of the Soviet practice of “disappearing” people from photos. I still wish we’d gotten to ship it—especially with that name!
Update: Somehow Iván still has the icon after all these years:
Wait for it…
I’ve long envied friends like Adobe design director Matthew Richmond & principal scientist Marc Levoy who have the time, equipment, and energy to rig up high-end cameras for videoconferencing. Now Opal promises similar quality for the low (?) price of $299. Check out The Verge’s review, available in robo-spoken form here if you’d prefer:
PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”
None of the photos are of people who actually exist.
The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:
Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:
A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:
“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”