I love seeing how Anthony Schmidt, a 13yo photographer with autism, treats his neuroatypicality & resulting hyperfocus as a blessing. It’s a point I try to gently impress upon my own obsessive son about our unusual brains. Check out Anthony’s story & his pretty damn impressive model-car photography!
Cristóbal Valenzuela from Runway ML shared a fun example of what’s possible via video segmentation & overlaying multiple takes of a trick:
- Separate yourself from the background in each clip
- Throw away all backgrounds but one, and stack up all the clips of just you (with the success on top).
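The steps above can be sketched in a few lines of numpy. This is my own illustration, not Runway's actual pipeline — it assumes you've already exported each take as frames and run a segmenter to get a per-take subject mask:

```python
import numpy as np

def stack_takes(takes, masks, background):
    """Composite several takes of a subject onto one shared background.

    takes:      list of H x W x 3 uint8 frames, one per take
    masks:      list of H x W boolean arrays (True where the subject is)
    background: H x W x 3 uint8 frame whose backdrop you keep
    """
    out = background.copy()
    # Later takes in the list are pasted on top of earlier ones,
    # so put the successful attempt last.
    for frame, mask in zip(takes, masks):
        out[mask] = frame[mask]
    return out
```

Run per output frame and you get every attempt on screen at once, with the landed trick on top.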
Coincidentally, I just saw Russell Brown posting a fun bonus-limbed image:
Per the team blog (which lists myriad other improvements):
The same edit controls that you already use to make your photography shine can now be used with your videos as well! Not only can you use Lightroom’s editing capabilities to make your video clips look their best, you can also copy and paste edit settings between photos and videos, allowing you to achieve a consistent aesthetic across both your photos and videos. Presets, including Premium Presets and Lightroom’s AI-powered Recommended Presets, can also be used with videos. Lightroom also allows you to trim off the beginning or end of a video clip to highlight the part of the video that is most important.
And here’s a fun detail:
Video: Creative — to go along with Lightroom’s fantastic new video features, these stylish and creative presets, created by Stu Maschwitz, are specially optimized to work well with videos.
I’ll share more details as I see tutorials, etc. arrive.
Well, they do call themselves a camera company… ¯\_(ツ)_/¯ This little contraption looks incredibly lightweight (pocketable, even) and easy to use. Visual quality (particularly stabilization) seems a little borderline, but I dig its person-centric nature, including tracking & AR effects (segmentation, cloning, etc.). Check out a great review—including a man-machine “romantic montage” (!):
Great to see Adobe AI getting some love:
Adobe Super Resolution technology is the best solution I’ve yet found for increasing the resolution of digital images. It doubles the linear resolution of your file, quadrupling the total pixel count while preserving fine detail. Super Resolution is available in both Adobe Camera Raw (ACR) and Lightroom and is accessed via the Enhance command. And because it’s built-in, it’s free for subscribers to the Creative Cloud Photography Plan.
Honestly, from DALL•E innovations to classic mind-blowers like this, I feel like my brain is cooking in my head. 🙃 Take ‘er away, science:
Bonus madness (see thread for details):
My old boss on Photoshop, Kevin Connor, used to talk about the inexorable progression of imaging tools from the very general (e.g. the Clone Stamp) to the more specific (e.g. the Healing Brush). In the process, high-complexity, high-skill operations were rendered far more accessible—arguably to a fault. (I used to joke that, believe it or not, drop shadows were cool before Photoshop made them easy. ¯\_(ツ)_/¯)
I think of that observation when seeing things like the Face Swap tool from Icons8. What once took considerable time & talent in an app like Photoshop is now rendered trivially fast (and free!) to do. “Days of Miracles & Wonder,” though we hardly even wonder now. (How long will it take DALL•E to go from blown minds to shrugged shoulders? But that’s a subject for another day.)
Driving through the Southwest in 2020, we came across this dark & haunting mural showing the nearby Navajo Generation Station:
Now I see that the station has been largely demolished, as shown in this striking drone clip:
A big part of my rationale in going to Google eight (!) years ago was that a lot of creativity & expressivity hinge on having broad, even mind-of-God knowledge of one’s world (everywhere you’ve been, who’s most important to you, etc.). Given access to one’s whole photo corpus, a robot assistant could thus do amazing things on one’s behalf.
In that vein, MyStyle proposes to do smarter face editing (adjusting expressions, filling in gaps, upscaling) by being trained on 100+ images of an individual face. Check it out:
I’m not sure who captured this image (conservationist Beverly Joubert, maybe?), or whether it’s indeed the National Geographic Picture of the Year, but it’s stunning no matter what. Take a close look:
Elsewhere I love this compilation of work from “Shadowologist & filmmaker” Vincent Bal:
I know only what I’ve seen here, but this combination wireless charger & DSLR-style camera grip seems very thoughtfully designed. Its ability to function as a phone stand (e.g. for use while videoconferencing) while charging puts it over the top.
Nice to see my old team’s segmentation tech roll out more widely.
The Verge writes,
Google Photos’ portrait blur feature on Android will soon be able to blur backgrounds in a wider range of photos, including pictures of pets, food, and — my personal favorite — plants… Google Photos has previously been able to blur the background in photos of people. But with this update, Pixel owners and Google One subscribers will be able to use it on more subjects. Portrait blur can also be applied to existing photos as a post-processing effect.
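For the curious, here's a rough sketch of how segmentation-driven background blur works in general — not Google's actual pipeline. A crude separable box blur stands in for true lens-style bokeh, and the `np.roll` shifts wrap around at the image edges, which a real implementation would handle properly:

```python
import numpy as np

def portrait_blur(image, subject_mask, radius=2):
    """Blur everything outside the subject mask; keep the subject sharp.

    image:        H x W x 3 uint8 array
    subject_mask: H x W boolean array (True where the subject is)
    radius:       half-width of the box blur kernel
    """
    img = image.astype(np.float32)
    blurred = img.copy()
    for axis in (0, 1):  # separable box blur: one pass per axis
        acc = np.zeros_like(blurred)
        for shift in range(-radius, radius + 1):
            acc += np.roll(blurred, shift, axis=axis)
        blurred = acc / (2 * radius + 1)
    # Composite: sharp pixels where the mask is True, blurred elsewhere.
    out = np.where(subject_mask[..., None], img, blurred)
    return out.astype(np.uint8)
```

The interesting part in the shipping feature is, of course, getting that `subject_mask` right for pets, food, and plants — the compositing itself is the easy bit.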
Finnish photographer Juha Tanhua has been shooting an unusual series of “space photos.” While the work may look like astrophotography images of stars, galaxies, and nebulae, they were actually captured with a camera pointed down, not up. Tanhua created the images by capturing gasoline puddles found on the asphalt of parking lots.
Check out the post for more images & making-of info.
Waaaay back in the way back, we had fun enabling “Safe, humane tourist-zapping in Photoshop Extended,” using special multi-frame processing techniques to remove transient elements in images. Those techniques have remained obscure yet powerful. In this short tutorial, Julieanne Kost puts ’em to good use:
In this video (Combining Video Frames to Create Still Images), we’re going to learn how to use Smart Objects in combination with Stack Modes to combine multiple frames from a video (exported as an image sequence) into a single still image that appears to be a long exposure, yet still freezes motion.
Roughly forever ago, when I was pushing the idea of extending the Photoshop compositing pipeline to include plug-in modules (which we did kinda succeed with in the form of 3D layers—now sadly ripped out), I loved the idea of layers that could emit & control light. We didn’t get there, of course, but I’m happy to see folks like Boris FX offering some cool new controls:
In Death Valley a couple of weeks ago, my 12yo Mini Me Henry & I had fun creating little narratives in the sand. I have to say, it’s pretty cool how far a kid can get these days with a telephone & handful of plastic bricks! Here’s a little gallery we made together.
Elsewhere, I’m perpetually amazed at what folks can do with enough time, talent, and willpower:
My son Henry & I were super hyped to join Russell Brown & his merry band last Monday at Nevada’s deeply weird International Car Forest of the Last Church for some fire photography featuring pyrotechnic artist Joseph Kerr. As luck would have it, I had to send Henry on ahead with little notice, pressing my DSLR on him before he left. Happily, I think he did a great job capturing the action!
Russell of course caught some amazing moments (see his recent posts), and you might enjoy this behind-the-scenes footage from Rocky Montez-Carr (aka Henry’s kindly chauffeur 😌🙏):
Lately I’ve been drawn to bold lighting of the face & body, both for black & white and stunning color:
Therefore I really dig this music video from Lusine, which leans into the myriad possibilities inherent in moving lights around a face:
[Via Cameron Smith]
Some 20+ years ago (cripes…), 405: The Movie became a viral smash, in part thanks to the DIY filmmakers’ trick of compositing multiple images of the busy LA freeway in order to make it look deserted.
Now (er, 8 years ago; double cripes…) Russell Houghten has used what I imagine to be similar but more modern techniques to remove car traffic from the streets, freeing up the concrete rivers for some lovely skateboarding reveries:
I’m headed out to Death Valley on Friday for some quick photographic adventures with Russell Brown & friends, and I’m really excited to try photographing with burning steel wool for the first time. I’m inspired by this tutorial from Insta360 to try shooting with my little 360º cam:
“Just don’t be horrendously disappointed if it doesn’t turn out quite like this,” advises Henry, my 12yo aspiring assistant. Fair enough, dude—but let’s take it for a spin!
If you’ve ever shot this way & have any suggestions for us, please add ’em in the comments. TIA!
I was really pleased to see Google showcase the new Magic Eraser feature in Pixel 6 marketing. Here’s a peek at how it works:
I had to chuckle & remember how, just after he’d been instrumental in shipping Content-Aware Fill in Photoshop in 2010, my teammate Iván Cavero Belaunde created a tablet version he dubbed “Trotsky,” in mock honor of the Soviet practice of “disappearing” people from photos. I still wish we’d gotten to ship it—especially with that name!
Update: Somehow Iván still has the icon after all these years:
Wait for it…
I’ve long envied friends like Adobe design director Matthew Richmond & principal scientist Marc Levoy who have the time, equipment, and energy to rig up high-end cameras for videoconferencing. Now Opal promises similar quality for the low (?) price of $299. Check out The Verge’s review, available in robo-spoken form here if you’d prefer:
PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”
None of the photos are of people who actually exist.
The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:
Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:
A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:
“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”
File under, “OMG, Duh, Why Didn’t We Think Of/Get This Sooner?” The Verge writes,
With Skydio’s self-flying drone, you don’t need to sketch or photograph those still frames, of course. You simply fly there. You fly the drone to a point in 3D space, press a button when the drone’s camera is lined up with what you want to see in the video, then fly to the next, virtually storyboarding your shot with every press.
Here’s some example output:
Check out the aforementioned Verge article for details on how the mode works (and sometimes doesn’t). Now I just need to get Russell Brown or someone (but let’s be honest, it’s Russell 😉) to expense one of these things so I can try it out.
Earlier this week I was amazed to see the 3D scan that Polycam founder Chris Heinrich was able to achieve by flying around LA & capturing ~100 photos of a neighborhood, then generating 3D results via the new Web version of Polycam:
I must try to replicate this myself!
You can take the results for a (literal) spin here, though note that they didn’t load properly on my iPhone.
As you may have seen in Google Earth & elsewhere, scanning & replicating amorphous organic shapes like trees remains really challenging:
It’s therefore all the more amazing to see the incredible results these exacting artists are able to deliver when creating free-to-use (!) assets for Unreal Engine:
[Via Michael Klynstra]
Nearly 16 (!) years ago I noted the passing of “novelist, self-taught pianist, semi-pro basketball player, composer, director of Shaft—who somehow still found time to be a groundbreaking photojournalist at Life for more than 20 years” Gordon Parks. Now HBO is streaming “A Choice of Weapons: Inspired By Gordon Parks,” covering his work & that of those he’s inspired to bear witness:
It’s wild what computers can now do—and wild how we just take much of it for granted.
Oh, global warming, you old scamp…
Illinois stayed largely snow-free during our recent visit, but I had some fun screwing around with Photoshop’s new Landscape Mixer Neural Filter, giving the place a dusting of magic:
Just for the lulz, I tried applying the filter to a 360º panorama I’d captured via my drone. The results don’t entirely withstand a lot of scrutiny (try showing the pano below in full-screen mode & examine the buildings), but they’re fun—and good grief, we can now do all this in literally one click!
For the sake of comparison, here’s the unmodified original:
Liam Neeson narrates:
“Ireland invites giant screen audiences on a joyful adventure into the Emerald Isle’s immense natural beauty, rich history, language, music and arts. Amid such awe-inspiring locations as Giant’s Causeway, Skellig Michael and the Cliffs of Moher, the film follows Irish writer Manchán Magan on his quest to reconnect Irish people from around the world with their land, language, and heritage.”
Of course, my wry Irishness compels me to share Conan O’Brien’s classic counterpoint from the blustery Cliffs of Moher…
…and Neeson’s Irish home-makeover show on SNL. 😛
I like this concise 6-minute list from Shutterstock, though I wish they touched on how to avoid the dreaded scourge of propeller shadows. (Somehow I’ve yet to look this up & get my head conclusively around it.)
I think you might dig this insightful thread from Vancouver-based DOP/director/colorist Devan Scott, just as I did:
I love seeing the team revisiting & polishing the fundamentals!
The Object Selection Tool has been significantly improved. Now, just hover over the object you want to select in the image and a single click will select it.
Choose Layer>Mask All Objects to easily generate masks for all the objects detected within your layer with just a single click.
Talk about an amazing gig that comes around once in ~forever:
Lightroom is the world’s top photography ecosystem with everything needed to edit, manage, store and share your images across a connected system on any device and the web! The Digital Imaging group at Adobe includes the Photoshop and Lightroom ecosystems.
We are looking for the next leader of the Lightroom ecosystem to build and execute the vision and strategy to accelerate the growth of Lightroom’s mobile, desktop, web and cloud products into the future. This person should have a proven track record as a great product management leader, have deep empathy for customers and a passion for photography. This is a high visibility role on one of Adobe’s most important projects.
Jump in or tell a promising friend!
Per Daring Fireball:
Devan Scott put together a wonderful, richly illustrated thread on Twitter contrasting the use of color grading in Skyfall and Spectre. Both of those films were directed by Sam Mendes, but they had different cinematographers — Roger Deakins for Skyfall, and Hoyte van Hoytema for Spectre. Scott graciously and politely makes the case that Skyfall is more interesting and fully-realized because each new location gets a color palette of its own, whereas the entirety of Spectre is in a consistent color space.
Click or tap on through to the thread; I think you’ll enjoy it.
Congrats to Eric Chan & the whole crew for making Time’s list:
Most of the photos we take these days look great on the small screen of a phone. But blow them up, and the flaws are unmistakable. So how do you clean up your snaps to make them poster-worthy? Adobe’s new Super Resolution feature, part of its Lightroom and Photoshop software, uses machine learning to boost an image’s resolution up to four times its original pixel count. It works by looking at its database of photos similar to the one it’s upscaling, analyzing millions of pairs of high- and low-resolution photos (including their raw image data) to fill in the missing data. The result? Massive printed smartphone photos worthy of a primo spot on your living-room wall. —Jesse Will
[Via Barry Young]
In traditional graphics work, vectorizing a bitmap image produces a bunch of points & lines that the computer then renders as pixels, producing something that approximates the original. Generally there’s a trade-off between editability (relatively few points, requiring a lot of visual simplification, but easy to see & manipulate) and fidelity (tons of points, high fidelity, but heavy & hard to edit).
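That trade-off can be made concrete with a classic line-simplification routine — Ramer–Douglas–Peucker, sketched here in plain Python as my own illustration: a large epsilon yields few, easy-to-edit points; a small epsilon preserves fidelity at the cost of point count.

```python
def simplify(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a polyline.

    points:  list of (x, y) tuples
    epsilon: max allowed deviation; bigger = fewer points, less fidelity
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the chord between endpoints.
        x0, y0 = p
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5 or 1.0
        return num / den

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) > epsilon:
        # Keep the most-deviant point and recurse on both halves.
        left = simplify(points[:idx + 1], epsilon)
        right = simplify(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

Crank epsilon up and a detailed contour collapses toward a straight line; crank it down and nearly every point survives — exactly the editability-vs-fidelity dial described above.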
Importing images into a generative adversarial network (GAN) works in a similar way: pixels are converted into vectors which are then re-rendered as pixels—and guess what, it’s a generally lossy process where fidelity & editability often conflict. When the importer tries to come up with a reasonable set of vectors that fit the entire face, it’s easy to end up with weird-looking results. Additionally, changing one attribute (e.g. eyebrows) may cause changes to others (e.g. hairline). I saw a case once where making someone look in another direction caused them to grow a goatee (!).
My teammates’ FaceStudio effort proposes to address this problem by sidestepping the challenge of fitting the entire face, instead letting you broadly select a region and edit just that. Check it out:
Turning bursts of what would have been outtakes into compelling little animations: that’s the promise of Project In-Between.
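The dumbest possible version of in-betweening is a linear cross-fade between neighboring burst shots — sketched below as a stand-in for the learned motion-aware interpolation a project like this actually does (the `inbetween` helper is my own illustration):

```python
import numpy as np

def inbetween(frame_a, frame_b, steps=3):
    """Generate intermediate frames between two burst shots via
    naive linear blending (no motion estimation)."""
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    # Includes the two originals plus `steps` blended frames between them.
    return [((1 - t) * a + t * b).astype(np.uint8)
            for t in np.linspace(0.0, 1.0, steps + 2)]
```

Real frame interpolation estimates optical flow and warps pixels along it, which is why its results move instead of ghosting — but the blend above shows the basic shape of the problem: synthesizing frames that were never captured.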
“Folded optics” & computational zoom FTW! The ability to apply segmentation and selective blur (e.g. to the background behind a moving cyclist) strikes me as especially smart.
On a random personal note, it’s funny to see demo files for features like Magic Eraser and think, “Hey, I know that guy!” much like I did with Content-Aware Fill eleven (!) years ago. And it’s fun that some of the big brains I got to work with at Google have independently come over to collaborate at Adobe. It’s a small, weird world.
I know I posted about it just the other day, but the design of this system is legit interesting. I thought it was especially cool that one can remove the side grips, attach them to the monitor, and control the whole rig from literally miles away (!).
Built-in gimbal, 8K rez, LIDAR rangefinder for low-light focusing—let’s go!
It commands a pro price tag, too. Per The Verge:
The 6K version costs $7,199, the 8K version is $11,499, and both come with a decent kit: the gimbal, camera, LIDAR range finder, a monitor and hand grips / top handle, a carrying case, and a battery (the 8K camera also comes with a 1TB SSD). In the realm of production cameras and stabilization systems, that’s actually on the lower end (DJI’s cinema-focused Ronin 2 stabilizer costs over $8,000 without any camera attached, and Sony’s FX9 6K camera costs $11,000 for just the body), but if you were hoping to use the LIDAR focus system to absolutely nail focus in your vlogs, you may want to rethink that.
I was so excited to build an AR stack for Google Lens, aiming to bring realtime magic to billions of phones’ default camera. Sadly, after AR Playground went out the door three years ago & the world shrugged, Google lost interest.
At least they’re letting others like Snap grab the mic.
Dubbed “Quick Tap to Snap,” the new feature will enable users to tap the back of the device twice to open the Snapchat camera directly from the lock screen. Users will have to authenticate before sending photos or videos to a friend or their personal Stories page.
I wish Apple would offer similar access to third-party camera apps like Halide Camera, etc. Its absence has entirely killed my use of those apps, no matter how nice they may be.
Take a moment, won’t you, to enjoy some ethereal undersea beauty with me?
Many, many years ago, en route home from Legoland, we spied a crazy-looking photography rig atop a car on the freeway, so naturally the boys had to recreate it in Lego when we got home:
Nine years ago, Google spent a tremendous amount of money buying Nik Software, in part to get a mobile raw converter—which, as they were repeatedly told, didn’t actually exist. (“Still, a man hears what he wants to hear and disregards the rest…”)
If all that hadn’t happened, I likely never would have gone there, and had the acquisition not been so ill-advised & ill-fitting, I probably wouldn’t have come back to Adobe. Ah, life’s rich pageant… ¯\_(ツ)_/¯
Anyway, back in 2021, take ‘er away, Ryan Dumlao: