Category Archives: Photography
I have to admit, I don’t know Erwitt’s photography nearly as well as I know his name, but this largely humorous new collection makes me want to change that:
Hypersonics throttle up
Here’s just a beautiful little bit of filmmaking (starting at 2:06, in case the link below fails to cue up the right spot). Let’s go Stratolaunch!
New USAF Thunderbirds documentary looks amazing
Having really enjoyed shooting the Thunderbirds over the years, I’m eager to check this out:
From a recent show we saw in Salinas:
VSCO introduces Canvas
Another day, another ~infinite canvas for ideation & synthesis. This time, somewhat to my surprise, the surface comes from VSCO—a company whose users I’d have expected to be precious & doctrinaire in their opposition to any kind of AI-powered image generation. But who knows, “you can just do things.” ¯\_(ツ)_/¯
Sigma BF: Clean AF
Is it for me? Dunno: lately the only thing that justifies shooting with something other than my phone is a big, fast zoom lens, and I don’t know whether pairing such a thing with this slim beauty would kinda defeat the purpose. Still, I must know more…
Here’s a nice early look at the cam plus a couple of newly announced lenses:
SynthLight promises state-of-the-art relighting
Here’s a nice write-up covering this paper. It’ll be interesting to dig into the details of how it compares to previous work (see category). [Update: The work comes in part from Adobe Research—I knew those names looked familiar :-)—so here’s hoping we see it in Photoshop & other tools soon.]
this is wild..
this new AI relighting tool can detect the light source in the 3D environment of your image and relight your character, the shadows look so realistic..
it’s especially helpful for AI images
10 examples: pic.twitter.com/sxNR39YTeT
— el.cine (@EHuanglu) January 18, 2025
New AI-powered upscalers arrive
Check out the latest from Topaz:
Topaz really cooked with their new upscaling model called “redefine” — basically every CSI “enhance” meme you’ve seen IRL.
Settings:
– 4x Upscale
– Creativity: 2
– Texture: 3
– No prompt
It’s basically the Topaz take on the Magnific style of “creative upscaling” where you use… pic.twitter.com/T7dLoAjFJt
— Bilawal Sidhu (@bilawalsidhu) December 17, 2024
Alternately, you can run InvSR via Gradio:
Image super-resolution model just dropped! Superior results even with a single sampling step.
InvSR: Arbitrary-steps Image Super-resolution via Diffusion Inversion. pic.twitter.com/gS7uoGwnQ8
— Gradio (@Gradio) December 16, 2024
Thunder & The Deep Blue Sea
Everybody needs a good wingman, and when it comes to celebrating the beauty of aviation, I’ve got a great one in my son Henry. Much as we’ve done the last couple of years, this month we first took in the air show in Salinas, featuring the USAF Thunderbirds…

…followed by the Blue Angels buzzing Alcatraz & the Golden Gate at Fleet Week in San Francisco.

In both cases we were treated to some jaw-dropping performances—from a hovering F-35 to choreographed walls of fire—from some of the best aviators in the world. Check ’em out:
And thanks for the nice shootin’, MiniMe!

iPhone goes on safari
Austin Mann puts the new gear through its paces in Kenya:
Last week at the Apple keynote event, the iPhone camera features that stood out the most to me were the new Camera Control button, upgraded 48-megapixel Ultra Wide sensor, improved audio recording features (wind reduction and Audio Mix), and Photographic Styles. […]
Over the past week we’ve traveled over a thousand kilometers across Kenya, capturing more than 10,000 photos and logging over 3TB of ProRes footage with the new iPhone 16 Pro and iPhone 16 Pro Max cameras. Along the way, we’ve gained valuable insights into these camera systems and their features.
iPhone 16 + AI: Quick helpful summaries
Check out my friend Bilawal’s summary thread, which pairs quick demos from Apple with bits of useful context:
Caught the Apple keynote? I’ve distilled down the most intriguing highlights for AI and spatial computing creators and builders—no need to sift through it yourself. Thread: pic.twitter.com/hiLM7iMzi4
— Bilawal Sidhu (@bilawalsidhu) September 10, 2024
There are some great additional details in this thread from Halide Camera as well:
There’s a lot of info to digest from the keynote, so here’s our summary of all the changes and new features of iPhone 16 and 16 Pro cameras in this quick thread pic.twitter.com/z7xB0aekLi
— Halide + Kino (@halidecamera) September 9, 2024
“I Accidentally Became A Meme: Hide The Pain Harold”
Heh: András István Arató—aka Hide The Pain Harold, the wincing king of stock photography—seems like a genuinely good dude. Here he narrates his story in brief:
Photography: Chasing Shreveport Steam
(And no, I’m not just talking oppressive humidity—though after living in California so long, that was quite a handful.) My 14yo MiniMe Henry & I had a ball over the weekend on our first trip to Louisiana, chasing the Empress steam engine as it made its way from Canada down to Mexico City. I’ll try to share a proper photo album soon, but in the meantime here are some great shots from Henry (enhanced with the now-indispensable Generative Fill), plus a bit of fun drone footage:
GenFill comes to Lightroom!
When I surveyed thousands of Photoshop customers waaaaaay back in the Before Times—y’know, summer 2022—I was struck by the fact that beyond wanting to insert things into images, and far beyond wanting to create images from scratch, just about everyone wanted better ways to remove things.
Happily, that capability has now come to Lightroom. It’s a deceptively simple change that, I believe, required a lot of work to evolve Lr’s non-destructive editing pipeline. Traditionally, all edits were expressed as simple parameters, and then masks were added—but as far as I know, this is the first time Lr has ventured into transforming pixels in an additive way (that is, modifying one set of pixels, then making subsequent edits that depend on those results). That’s a big deal, and a big step forward for the team.
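For the curious, here’s a tiny, purely hypothetical Python sketch (my own illustration, emphatically not Lightroom’s actual code) of the distinction: parametric edits are just stored settings replayed against the source on every render, while a pixel-transforming step like Generative Remove produces new pixels that every later edit then depends on.

```python
# Hypothetical sketch only: contrasting a parametric edit list with an
# "additive" pixel-transforming step in a non-destructive pipeline.
from dataclasses import dataclass
from typing import Callable, List, Union

Image = List[float]  # toy 1-D "image" of pixel values

@dataclass
class ParametricEdit:
    name: str
    params: dict  # e.g. {"exposure": +0.2}

@dataclass
class PixelEdit:
    name: str
    transform: Callable[[Image], Image]  # e.g. an inpainting/remove operation

def render(original: Image, stack: List[Union[ParametricEdit, PixelEdit]]) -> Image:
    """Re-render from the untouched original each time, keeping the workflow non-destructive."""
    img = list(original)
    for edit in stack:
        if isinstance(edit, ParametricEdit):
            img = [p + edit.params.get("exposure", 0.0) for p in img]  # toy parametric op
        else:
            img = edit.transform(img)  # new pixels become the input for all later edits
    return img

# Example: an exposure tweak, then a "remove" that rewrites pixels, then another tweak
stack = [
    ParametricEdit("brighten", {"exposure": 0.2}),
    PixelEdit("generative_remove", lambda im: im[:2] + [0.5] * (len(im) - 2)),
    ParametricEdit("brighten_again", {"exposure": 0.1}),
]
print(render([0.1, 0.4, 0.9, 0.9], stack))
```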
A few more examples courtesy of Howard Pinsky:
Removing distracting objects just got that much more powerful in @Lightroom. Generative Remove has arrived! pic.twitter.com/CrZ6A3AKOF
— Howard Pinsky (@Pinsky) May 21, 2024
Distraction removal headed to Photoshop, Lightroom
Adobe friends like Eli Shechtman have been publishing distraction-removal research for several years, and Creative Bloq reports that the functionality is due to make its way to the flagship imaging apps in the near future. Check out their post for details.
Automatic selection:

Cleaned-up results:

Object removal in Lightroom:

Lego + GenFill = Yosemite Magic
Or… something like that. Whatever the case, I had fun popping our little Lego family photo (captured this weekend at Yosemite Valley’s iconic Tunnel View viewpoint) into Photoshop, selecting part of the excessively large rock wall, and letting Generative Fill give me some more nature. Click or tap (if needed) to see the before/after animation:
Generative Fill, remaining awesome for family photos. From Yosemite yesterday: pic.twitter.com/GtRP0UCaV6
— John Nack (@jnack) April 1, 2024
Google Research promises better image compositing
Speaking of folks with whom I’ve somehow had the honor of working, some of my old teammates from Google have unveiled ObjectDrop. Check out this video & thread:
Google presents ObjectDrop
Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion
Diffusion models have revolutionized image editing but often generate images that violate physical laws, particularly the effects of objects on the scene, e.g., pic.twitter.com/j7TMadRhxo
— AK (@_akhaliq) March 28, 2024
A bit more detail, from the project site:
Diffusion models have revolutionized image editing but often generate images that violate physical laws, particularly the effects of objects on the scene, e.g., occlusions, shadows, and reflections. By analyzing the limitations of self-supervised approaches, we propose a practical solution centered on a counterfactual dataset.
Our method involves capturing a scene before and after removing a single object, while minimizing other changes. By fine-tuning a diffusion model on this dataset, we are able to not only remove objects but also their effects on the scene. However, we find that applying this approach for photorealistic object insertion requires an impractically large dataset. To tackle this challenge, we propose bootstrap supervision; leveraging our object removal model trained on a small counterfactual dataset, we synthetically expand this dataset considerably.
Our approach significantly outperforms prior methods in photorealistic object removal and insertion, particularly at modeling the effects of objects on the scene.
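If it helps to picture the bootstrap-supervision idea, here’s a loose Python sketch of my own (every name below is a placeholder, not ObjectDrop’s actual code): train a removal model on the small counterfactual set, then use it to manufacture training pairs for the harder insertion task.

```python
# Hedged, schematic sketch of bootstrap supervision as described above.
def train_removal_model(counterfactual_pairs):
    """Fine-tune a diffusion model on a small set of real (with_object, without_object) photo pairs."""
    ...

def bootstrap_insertion_dataset(removal_model, unlabeled_photos, segment_object):
    """Use the removal model to synthetically expand the dataset for object insertion."""
    pairs = []
    for photo in unlabeled_photos:
        mask = segment_object(photo)          # pick an object in an ordinary photo
        clean = removal_model(photo, mask)    # synthesize the scene without it (and without its shadow/reflection)
        pairs.append((clean, mask, photo))    # "insert object into clean scene" now has a ground-truth target
    return pairs
```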
Irish blessings
Hey gang—I hope you’ve had a safe & festive St. Patrick’s Day. To mark the occasion, I figured I’d reshare a couple of the videos I captured in the old country with my dad back in August.
Here’s Co. Clare’s wild burren (“rocky district,” hence the choice of Chieftains/Stones banger)…
…my dad’s grandparents’ medieval town in Galway…
…and my mom’s mother’s farm in Mayo:
Creating the creepy infrared world of Dune
I really enjoyed Dolby’s recent podcast on Greig Fraser and the Cinematography of Dune: Part Two, as well as this deep dive with Denis Villeneuve on how they modified an ARRI Alexa LF IMAX camera to create the Harkonnens’ alienating home world.
I love this idea, and I tried it for Giedi Prime, the home world of the Harkonnens. There’s less information in the book, and it’s a world that is disconnected from nature—a plastic world. So I thought it could be interesting if the light, the sunlight, could give us some insight into their psyche. What if, instead of revealing colors, the sunlight was killing them, creating a very eerie black-and-white world? That would give us information about how these people perceive reality, about their political system, about that primitive, brutalist culture—and it was in the screenplay.

Happy birthday to Photoshop, Lightroom, and Camera Raw!
I’m a day late saying it here, but happy birthday to three technologies that changed my life (all our lives, maybe), and to which I’ll be forever grateful to have gotten to contribute. As Jeff Schewe noted:
Happy Birthday Digital Imaging…aka Photoshop, Camera Raw & Lightroom. Photoshop shipped February 19th, 1990. Camera Raw shipped February 19th, 2003 and Lightroom shipped February 19th, 2007. Coincidence? Hum, I wonder…but ya never know when Thomas Knoll is involved…

Check out Jeff’s excellent overview, written for Photoshop’s 30th, as well as his demo of PS 1.0 (which “cost a paltry $895 and could run on home computers like the Macintosh IIfx for under $10,000”—i.e. ~$2,000 & $24,000 today!).
Podcast: The Cinematography of Oppenheimer
I really enjoyed getting some behind-the-scenes info on The Cinematography of Oppenheimer, courtesy of director of photography Hoyte Van Hoytema & Dolby’s Sound + Image Lab podcast. It’s packed with interesting details (e.g. hacking loud & bulky IMAX hardware, dealing with new film emulsions, the impact on color perception of cutting from color to B&W, etc.). I think you’ll dig it.
Six Ways to Spice Up Your Photos in Lightroom
Adding glows & fog, making bokeh balls, and more—lots of nice, bite-sized demos, conveniently navigable by chapter (hover over the progress bar):
FPV Miniatur Wunderland
Insta360 takes us down & not so dirty around Hamburg’s Miniatur Wunderland in this fun 2-minute tour:
Deeply chill photography
(Cue Metallica’s Trapped Under Ice!)
Russell Brown & some of my old Photoshop teammates recently ventured into -40° (!!) weather in Canada, pushing themselves & their gear to the limits to witness & capture the Northern Lights:
Perhaps on future trips they can team up with these folks:
To film an ice hockey match from this new angle of action, Axis Communications used a discreet modular camera — commonly seen in ATM machines, onboard vehicles, and other small spaces where a tiny camera needs to fit — and froze it inside the ice.
Check out the results:
Behind—and under—the scenes:
To the moon! Insta360 makes a satellite
What if your tiny planet—a visual genre I’ve enjoyed beating halfway into the ground—were our actual planet? Insta360, on whom I’ve spent crazy amounts of money buying brilliant-if-maddening gear, has now sent their devices to the edge of space:

“We’re on a mission from God…”
On the off chance you missed me over the last week or so, it’s due to my being off in Illinois with the fam, having fun making silliness like this:
Reflect on this: Project See Through burns through glare
Marc Levoy (professor emeritus at Stanford) was instrumental in delivering the revolutionary Night Sight mode on Pixel 3 phones—and by extension on all the phones that quickly copied their published techniques. After leaving Google for Adobe, he’s been leading a research team that’s just shown off the reflection-zapping Project See Through:
Today, it’s difficult or impossible to manually remove reflections. Project See Through simplifies the process of cleaning up reflections by using artificial intelligence. Reflections are automatically removed, and optionally saved as separate images for editing purposes. This gives users more control over when and how reflections appear in their photos.
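As a back-of-the-envelope way to picture it (my assumption, not Adobe’s implementation), you can treat a photo shot through glass as roughly transmission plus reflection; once a model estimates the reflection layer, removing it or saving it out separately is simple arithmetic:

```python
# Toy sketch of the transmission + reflection idea (not Adobe's code).
import numpy as np

def split_reflection(photo: np.ndarray, estimate_reflection) -> tuple[np.ndarray, np.ndarray]:
    """Return (clean_transmission, reflection_layer) given some reflection-estimation model."""
    reflection = estimate_reflection(photo)            # e.g. a neural network's output
    transmission = np.clip(photo - reflection, 0.0, 1.0)
    return transmission, reflection

def recomposite(transmission: np.ndarray, reflection: np.ndarray, strength: float = 0.3) -> np.ndarray:
    """Bring the reflection back at a user-chosen strength, for creative control."""
    return np.clip(transmission + strength * reflection, 0.0, 1.0)

# Example with a fake "model" that just returns a constant haze
photo = np.random.rand(4, 4, 3)
clean, refl = split_reflection(photo, lambda p: np.full_like(p, 0.1))
print(recomposite(clean, refl, strength=0.5).shape)
```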
New features come to Lightroom Classic
Lens Blur, HDR, Point Color, and more: Katrin Eismann breaks down the update in this overview, and Matt Kloskowski shows the features in action here:
☘️ The Fields of Athenry ☘️
During our Ireland trip a few weeks back, I captured some aerial views of the town from which my great-grandfather emigrated.
As it often does, Luma generated a really nice 3D model from my orbiting footage:
☘️ Air Mayo ☘️
Just me, my dad, our Irish cousins, and 900 of their closest sheep. ☘️😌☘️
Irish panos 🏰🚁☘️
Just a wee bit o’ the droning for ya, overflying our cousins’ ancient neighbors (such show-offs!):
DJI “Spotlight Mode” looks rad
I’ve been flying a bunch here in Ireland this week & can’t wait to share some good stuff soon. (Weren’t transatlantic plane rides meant for video editing?) In the meantime, I’m only now learning of a really promising-looking way to have the drone focus on a subject of interest, leaving the operator free to vary other aspects of flight (height, rotation, etc.). Check it out:
“Photos Of Hollywood’s Biggest Stars Hanging With Their Younger Selves”
Fool me thrice? Insta360 GO 3 arrives
Having really enjoyed my Insta360 One X, X2, and X3 cams over the years, I’ve bought—and been burned by—the tiny GO & GO2:
- In 2019 I wrote “The tiny Insta360 GO looks clever.” Sadly, I found it far more glitchy than clever.
- In 2021 I wrote “Insta360 GO 2: Finally a wearable cam that doesn’t suck?” It was better, but I still can’t count on it to actually record. Thus it’s largely gathered dust.
And yet… I still believe that having an unobtrusive, AI-powered “wearable photographer” (as Google Clips sought to be) is a worthy and potentially game-changing north star. (See the second link above for some interesting history & perspective). So, damn if I’m not looking at the new GO 3 and thinking, “Maybe this time Lucy won’t pull away the football…”
Here’s Casey Neistat’s perspective:
A stunning timelapse from Insta360
“Don’t give up on the real world”: A great new campaign from Nikon
I’m really enjoying this new campaign from Nikon Peru:
“This obsession with the artificial is making us forget that our world is full of amazing natural places that are often stranger than fiction.
“We created a campaign with real unbelievable natural images taken with our cameras, with keywords like those used with Artificial Intelligence.”
Check out the resulting 2-minute piece:
And here are some of the stills, courtesy of PetaPixel:



New Lightroom updates enhance noise removal & more
Check ’em out:
From the team blog:
Today, Adobe is unveiling new AI innovations in the Lightroom ecosystem — Lightroom, Lightroom Classic, Lightroom Mobile and Web — that make it easy to edit photos like a pro, so everyone can bring their creative visions to life wherever inspiration strikes. New Adobe Sensei AI-powered features empower intuitive editing and seamless workflows. Expanded adaptive presets and Masking categories for Select People make it easy to adjust fine details from the color of the sky to the texture of a person’s beard with a single click. Additionally, new features including Denoise and Curves in masking help you do more with less to save time and focus on getting the perfect shot.
Firefly + Sky = Superfly Firesky?
Terry White vanquished a chronic photographic bummer—the blank or boring sky—by asking Firefly to generate a very specific asset (namely, an evening sky at the exact site of the shoot), then using Photoshop’s sky replacement feature to enhance the original. Check it out:
“What is Mise en Scène?”
One of the great pleasures of parenting is, of course, getting to see your kids’ interests and knowledge grow, and yesterday my 13yo budding photographer Henry and I were discussing the concept of mise en scène. In looking up a proper explanation for him, I found this great article & video, which Kubrick/Shining lovers in particular will enjoy:
Back from the land of steam & snow 🚂
It’s been quiet here for a few days as my 13-year-old budding photographer son Henry & I were off at the Nevada Northern Railway’s Winter Steam Photo Weekend Spectacular. We had a staggeringly good time, and now my poor MacBook is liquefying under the weight of processing our visual haul. 🤪 I plan to share more images & observations soon from the experience (which was somehow the first photo workshop, or even proper photo class, I’ve taken!). Meanwhile, here’s a little Insta gallery of Lego Henry in action:
For a taste of how the workshop works, check out this overview from past events:
Amazing drone footage from Argentina
Check out this eye-popping capture of the World Cup celebration in Buenos Aires, and see pilot Ale Petra’s Instagram feed for lots more FPV goodness:
“PERSEVERE”: A giant statement of encouragement
The ongoing California storms have beaten the hell out of beloved little communities like Capitola, where the pier & cute seaside bungalows have gotten trashed. I found this effort by local artist Brighton Denevan rather moving:
@brighton.denevan PERSEVERE 💙 • 1-6-2023 • 3-5:30 • 8 Miles
The Santa Cruz Sentinel writes,
In the wake of the recent devastating storm damage to businesses in Capitola Village, local artist Brighton Denevan spent a few hours Friday on Capitola Beach sculpting the word “persevere” repeatedly in the sand to highlight a message of resilience and toughness that is a hallmark of our community. “The idea came spontaneously a few hours before low tide,” Denevan said. “After seeing all the destruction, it seemed like the right message for the moment.” Denevan has been drawing on paper since the age of 5; he picked up the rake and went out to the beach canvas in 2020, and each year since he’s done more projects. Last year, he created more than 200 works in the sand locally and across the globe.

“The Book of Leaves”
Obsessive (in a good way) photographer & animator Brett Foxwell has gathered & sequenced thousands of individual leaves into a mesmerizing sequence:
This is the complete leaf sequence used in the accompanying short film LeafPresser. While collecting leaves, I conceived that the leaf shape of every single plant type I could find would fit somewhere into a continuous animated sequence of leaves if that sequence were expansive enough. If I didn’t have the perfect shape, it meant I just had to collect more leaves.
[Via]
Photography: The beauty of cavitation @ 82,000fps
Happy Friday. 🫧
Check out frame interpolation from Runway
I meant to share this one last month, but there’s just no keeping up with the pace of progress!
My initial results are on the uncanny side, but more skillful practitioners like Paul Trillo have been putting the tech to impressive use:
AI photography: A deeper dive into “Infinite Nature”
A few weeks ago I shared info on Google’s “Infinite Nature” tech for generating eye-popping fly-throughs from still images. Now that team has shared various interesting tech details on how it all works. And if reading all that isn’t your bag, hey, at least enjoy some beautiful results:

Adobe “Made In The Shade” sneak is 😎
OMG—interactive 3D shadow casting in 2D photos FTW! 🔥
In this sneak, we re-imagine what image editing would look like if we used Adobe Sensei-powered technologies to understand the 3D space of a scene – the geometry of a road and the car on the road, and the trees surrounding, the lighting coming from the sun and the sky, the interactions between all these objects leading to occlusions and shadows – from a single 2D photograph.
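To give a flavor of just one ingredient—and this is a simplified illustration of mine, not Adobe’s method—once you’ve estimated a ground plane and a sun direction, the hard shadow of a 3D point is simply where the light ray through that point hits the plane:

```python
# Classic planar shadow projection, shown here only as a toy illustration.
import numpy as np

def project_to_ground(point: np.ndarray, light_dir: np.ndarray, ground_y: float = 0.0) -> np.ndarray:
    """Cast `point` along `light_dir` (pointing from the sun toward the scene) onto the plane y = ground_y."""
    t = (ground_y - point[1]) / light_dir[1]   # solve point.y + t * dir.y == ground_y
    return point + t * light_dir

# Example: a point 2 units above the ground, sun coming in at a slant
shadow = project_to_ground(np.array([1.0, 2.0, 0.0]), np.array([0.5, -1.0, 0.2]))
print(shadow)  # the shadow lands offset from the point's footprint, along the slant direction
```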
New Lightroom features: A 1-minute tour, plus a glimpse of the future
The Lightroom team has rolled out a ton of new functionality, from smarter selections to adaptive presets to performance improvements. You should read up on the whole shebang—but for a top-level look, spend a minute with Ben Warde:
And looking a bit more to the future, here’s a glimpse at how generative imaging (in the style of DALL•E, Stable Diffusion, et al) might come into LR. Feedback & ideas welcome!
“Imagic”: Text-based editing of photos
It seems almost too good to be true, but Google Researchers & their university collaborators have unveiled a way to edit images using just text:

In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. — each within its single high-resolution natural image provided by the user.

Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object).

I can’t wait to see it in action!
A new, free AI colorization tool
Check out Palette:
Zooming around the world through Google Street View
“My whole life has been one long ultraviolent hyperkinetic nightmare,” wrote Mark Leyner in “Et Tu, Babe?” That thought comes to mind when glimpsing this short film by Adam Chitayat, stitched together from thousands of Street View images (see Vimeo page for a list of locations).
I love the idea—indeed, back in 2014 I tried to get Google Photos to stitch together visual segues that could interconnect one’s photos—but the pacing here has my old-man brain pulling the e-brake after just a short exposure. YMMV, so here ya go:
[Via]