Well… kinda? I’m feeling somewhat hoodwinked, though. The new cam promises 72-megapixel captures, compared to 18 from its predecessor. This happens via some kind of 4x upsampling, it appears, and at least right now that’s incompatible with shooting HDR images.
Thus, as you can see via the comparisons below & via these original images, I was able to capture somewhat better detail (e.g. look at text) at the cost of getting worse tonal range (e.g. see the X2 lying on top of the book).
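For the record, a 4x jump in pixel count (18 → 72 MP) works out to just 2x per axis. I don't know what Insta360's upsampling actually does under the hood; here's a minimal NumPy sketch of the crudest possible approach (nearest-neighbor), purely to illustrate the arithmetic:

```python
import numpy as np

# Tiny stand-in for an 18 MP sensor readout (values are arbitrary)
low_res = np.arange(12, dtype=np.uint8).reshape(3, 4)

# Nearest-neighbor upsample: 2x per axis = 4x total pixel count
high_res = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)

print(low_res.size, high_res.size)  # pixel count quadruples
```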
I need to carve out time to watch the tutorial below on how to wring the best out of this new cam.
Who’s got two thumbs & just pulled the trigger? This guuuuuy. 😌
Now, will it be worth it? I sure hope so.
Fortunately I got to try out the much larger & more expensive One R 1″ Edition back in July & concluded that it’s not for me (heavier, lacking Bullet Time, and not producing appreciably better quality results—at least for the kind of things I shoot).
I’m of course hoping the X3 (successor to my much-beloved One X2) will be more up my alley. Here’s some third-party perspective:
The latest beta build of Photoshop contains a new feature called Photo Restoration. Whenever I have seen new updates in AI photo restoration over the last few years, I have tried the technology on an old family photo that I have of my great-great-great-grandfather, a Scotsman who lived from 1845 to 1919. I applied the neural filter plus colorize technique to update the image in Photoshop. The restored photo is on the left, the original on the right. It is really astonishing how advanced AI is becoming.
Learn more about accessing the feature in Photoshop here.
The new open-source Stable Diffusion model is pretty darn compelling. Per PetaPixel:
“Just telling the AI something like ‘landscape photography by Marc Adamus, Glacial lake, sunset, dramatic lighting, mountains, clouds, beautiful’ gives instant pleasant looking photography-like images. It is incredible that technology has got to this point where mere words produce such wonderful images (please check the Facebook group for more).” — photographer Aurel Manea
Somehow I totally missed this announcement a few months back—perhaps because the device apparently isn’t compatible with my Mavic 2 Pro. I previously bought an Insta360 One R (which can split in half) with a drone-mounting cage, but I found the cam so flaky overall that I never took the step of affixing it to a cage that was said to interfere with GPS signals. In any event, this little guy looks fun:
Speaking of 360º vids, Stewart & Alina share a range of great points on “reframing with purpose” (serving the storytelling), plus technical details on relative sharpness (it’s much greater towards the center), color profiles, and more.
At nearly twice the price while lacking features like Bullet Time, the Insta360 One RS 1″ had better produce way better photos and videos than what come out of my trusty One X2. I therefore really appreciate this detailed side-by-side comparison. Having used both together, I don’t see a dramatic difference, but this vid certainly makes a good case that the gains are appreciable.
Synthesizing wholly new images is incredible, but as I noted in my recent podcast conversation, it may well be that surgical slices of tech like DALL•E will prove to be just as impactful—à la Content-Aware Fill emerging from a thin slice of the PatchMatch paper. In this case,
To fix the image, [Nicholas Sherlock] erased the blurry area of the ladybug’s body and then gave a text prompt that reads “Ladybug on a leaf, focus stacked high-resolution macro photograph.”
A keen eye will note that the bug’s spot pattern has changed, but it’s still the same bug. Pretty amazing.
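For what it's worth, edits like this are mask-guided: the model regenerates only the erased pixels and leaves the rest of the frame alone. The generative part is the magic, of course, but the final compositing step is simple enough to sketch in NumPy (function name & toy values are mine):

```python
import numpy as np

def composite_inpaint(original, generated, mask):
    """Blend model output into the erased region only.

    mask: float array in [0, 1]; 1 = erased (regenerate), 0 = keep original.
    """
    return original * (1 - mask) + generated * mask

# Toy 2x2 grayscale example: only the top-left pixel was erased
original = np.array([[10.0, 20.0], [30.0, 40.0]])
generated = np.array([[99.0, 99.0], [99.0, 99.0]])
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
result = composite_inpaint(original, generated, mask)
```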
I love seeing how Anthony Schmidt, a 13yo photographer with autism, treats his neuroatypicality & resulting hyperfocus as a blessing. It’s a point I try to gently impress upon my own obsessive son about our unusual brains. Check out Anthony’s story & his pretty damn impressive model-car photography!
The same edit controls that you already use to make your photography shine can now be used with your videos as well! Not only can you use Lightroom’s editing capabilities to make your video clips look their best, you can also copy and paste edit settings between photos and videos, allowing you to achieve a consistent aesthetic across both your photos and videos. Presets, including Premium Presets and Lightroom’s AI-powered Recommended Presets, can also be used with videos. Lightroom also allows you to trim off the beginning or end of a video clip to highlight the part of the video that is most important.
And here’s a fun detail:
Video: Creative — to go along with Lightroom’s fantastic new video features, these stylish and creative presets, created by Stu Maschwitz, are specially optimized to work well with videos.
I’ll share more details as I see tutorials, etc. arrive.
Well, they do call themselves a camera company… ¯\_(ツ)_/¯ This little contraption looks incredibly lightweight (pocketable, even) and easy to use. Visual quality (particularly stabilization) seems a little borderline, but I dig its person-centric nature, including tracking & AR effects (segmentation, cloning, etc.). Check out a great review—including a man-machine “romantic montage” (!):
Adobe Super Resolution technology is the best solution I’ve yet found for increasing the resolution of digital images. It doubles the linear resolution of your file, quadrupling the total pixel count while preserving fine detail. Super Resolution is available in both Adobe Camera Raw (ACR) and Lightroom and is accessed via the Enhance command. And because it’s built-in, it’s free for subscribers to the Creative Cloud Photography Plan.
My old boss on Photoshop, Kevin Connor, used to talk about the inexorable progression of imaging tools from the very general (e.g. the Clone Stamp) to the more specific (e.g. the Healing Brush). In the process, high-complexity, high-skill operations were rendered far more accessible—arguably to a fault. (I used to joke that believe it or not, drop shadows were cool before Photoshop made them easy. ¯\_(ツ)_/¯)
I think of that observation when seeing things like the Face Swap tool from Icons8. What once took considerable time & talent in an app like Photoshop is now rendered trivially fast (and free!) to do. “Days of Miracles & Wonder,” though we hardly even wonder now. (How long will it take DALL•E to go from blown minds to shrugged shoulders? But that’s a subject for another day.)
A big part of my rationale in going to Google eight (!) years ago was that a lot of creativity & expressivity hinge on having broad, even mind-of-God knowledge of one’s world (everywhere you’ve been, who’s most important to you, etc.). Given access to one’s whole photo corpus, a robot assistant could thus do amazing things on one’s behalf.
In that vein, MyStyle proposes to do smarter face editing (adjusting expressions, filling in gaps, upscaling) by being trained on 100+ images of an individual face. Check it out:
Google Photos’ portrait blur feature on Android will soon be able to blur backgrounds in a wider range of photos, including pictures of pets, food, and — my personal favorite — plants… Google Photos has previously been able to blur the background in photos of people. But with this update, Pixel owners and Google One subscribers will be able to use it on more subjects. Portrait blur can also be applied to existing photos as a post-processing effect.
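Conceptually, portrait blur is segmentation plus selective blurring: estimate a subject mask, then soften only the pixels outside it. Here's a toy NumPy sketch, with a crude box blur standing in for the real depth-aware rendering (all names & values are mine):

```python
import numpy as np

def box_blur(img, k=3):
    # Naive box blur: average k*k shifted copies. Edges wrap via np.roll,
    # which is fine for a sketch but not for production.
    pad = k // 2
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (k * k)

def portrait_blur(img, subject_mask, k=3):
    # Keep masked (subject) pixels sharp; blur everything else.
    return np.where(subject_mask, img, box_blur(img, k))

# Toy scene: a bright background detail that should get softened
img = np.zeros((6, 6))
img[4, 4] = 90.0                       # background detail
subject_mask = np.zeros((6, 6), bool)
subject_mask[:2, :] = True             # pretend rows 0-1 are the subject
result = portrait_blur(img, subject_mask)
```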
Finnish photographer Juha Tanhua has been shooting an unusual series of “space photos.” While the work may look like astrophotography images of stars, galaxies, and nebulae, they were actually captured with a camera pointed down, not up. Tanhua created the images by capturing gasoline puddles found on the asphalt of parking lots.
Check out the post for more images & making-of info.
Waaaay back in the way back, we had fun enabling “Safe, humane tourist-zapping in Photoshop Extended,” using special multi-frame processing techniques to remove transient elements in images. Those techniques have remained obscure yet powerful. In this short tutorial, Julieanne Kost puts ’em to good use:
In this video (Combining Video Frames to Create Still Images), we’re going to learn how to use Smart Objects in combination with Stack Modes to combine multiple frames from a video (exported as an image sequence) into a single still image that appears to be a long exposure, yet still freezes motion.
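The trick underlying these stack modes is simple to state: align the frames, then take a per-pixel statistic (typically the median) across the stack, so anything occupying a given pixel in fewer than half the frames—a tourist, a passing car—simply vanishes. A bare-bones NumPy sketch, assuming the frames are already aligned:

```python
import numpy as np

def remove_transients(frames):
    """Per-pixel median across a list of aligned, same-sized frames.

    Any object present at a pixel in fewer than half the frames
    gets voted out by the median.
    """
    stack = np.stack(frames, axis=0)
    return np.median(stack, axis=0)

# Toy example: five frames of a static scene (value 5), with a
# "tourist" (value 255) walking through one frame
frames = [np.full((2, 2), 5.0) for _ in range(5)]
frames[2][0, 0] = 255.0
clean = remove_transients(frames)
```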
Roughly forever ago, when I was pushing the idea of extending the Photoshop compositing pipeline to include plug-in modules (which we did kinda succeed with in the form of 3D layers—now sadly ripped out), I loved the idea of layers that could emit & control light. We didn’t get there, of course, but I’m happy to see folks like Boris FX offering some cool new controls:
In Death Valley a couple of weeks ago, my 12yo Mini Me Henry & I had fun creating little narratives in the sand. I have to say, it’s pretty cool how far a kid can get these days with a telephone & handful of plastic bricks! Here’s a little gallery we made together.
Elsewhere, I’m perpetually amazed at what folks can do with enough time, talent, and willpower:
My son Henry & I were super hyped to join Russell Brown & his merry band last Monday at Nevada’s deeply weird International Car Forest of the Last Church for some fire photography featuring pyrotechnic artist Joseph Kerr. As luck would have it, I had to send Henry on ahead with little notice, pressing my DSLR on him before he left. Happily, I think he did a great job capturing the action!
Some 20+ years ago (cripes…), 405: The Movie became a viral smash, in part thanks to the DIY filmmakers’ trick of compositing multiple images of the busy LA freeway in order to make it look deserted.
Now (er, 8 years ago; double cripes…) Russell Houghten has used what I imagine to be similar but more modern techniques to remove car traffic from the streets, freeing up the concrete rivers for some lovely skateboarding reveries:
I’m headed out to Death Valley on Friday for some quick photographic adventures with Russell Brown & friends, and I’m really excited to try photographing with burning steel wool for the first time. I’m inspired by this tutorial from Insta360 to try shooting with my little 360º cam:
“Just don’t be horrendously disappointed if it doesn’t turn out quite like this,” advises Henry, my 12yo aspiring assistant. Fair enough, dude—but let’s take it for a spin!
If you’ve ever shot this way & have any suggestions for us, please add ’em in the comments. TIA!
I was really pleased to see Google showcase the new Magic Eraser feature in Pixel 6 marketing. Here’s a peek at how it works:
I had to chuckle & remember how, just after he’d been instrumental in shipping Content-Aware Fill in Photoshop in 2010, my teammate Iván Cavero Belaunde created a tablet version he dubbed “Trotsky,” in mock honor of the Soviet practice of “disappearing” people from photos. I still wish we’d gotten to ship it—especially with that name!
Update: Somehow Iván still has the icon after all these years:
I’ve long envied friends like Adobe design director Matthew Richmond & principal scientist Marc Levoy who have the time, equipment, and energy to rig up high-end cameras for videoconferencing. Now Opal promises similar quality for the low (?) price of $299. Check out The Verge’s review, available in robo-spoken form here if you’d prefer:
PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”
None of the photos are of people who actually exist.
The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:
Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:
A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:
“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”
File under, “OMG, Duh, Why Didn’t We Think Of/Get This Sooner?” The Verge writes,
With Skydio’s self-flying drone, you don’t need to sketch or photograph those still frames, of course. You simply fly there. You fly the drone to a point in 3D space, press a button when the drone’s camera is lined up with what you want to see in the video, then fly to the next, virtually storyboarding your shot with every press.
Here’s some example output:
Check out the aforementioned Verge article for details on how the mode works (and sometimes doesn’t). Now I just need to get Russell Brown or someone (but let’s be honest, it’s Russell 😉) to expense one of these things so I can try it out.
Earlier this week I was amazed to see the 3D scan that Polycam founder Chris Heinrich was able to achieve by flying around LA & capturing ~100 photos of a neighborhood, then generating 3D results via the new Web version of Polycam:
I must try to replicate this myself!
You can take the results for a (literal) spin here, though note that they didn’t load properly on my iPhone.
As you may have seen in Google Earth & elsewhere, scanning & replicating amorphous organic shapes like trees remains really challenging:
It’s therefore all the more amazing to see the incredible results these exacting artists are able to deliver when creating free-to-use (!) assets for Unreal Engine:
Nearly 16 (!) years ago I noted the passing of “novelist, self-taught pianist, semi-pro basketball player, composer, director of Shaft–who somehow still found time to be a groundbreaking photojournalist at Life for more than 20 years” Gordon Parks. Now HBO is streaming “A Choice of Weapons: Inspired By Gordon Parks,” covering his work & that of those he’s inspired to bear witness:
Illinois stayed largely snow-free during our recent visit, but I had some fun screwing around with Photoshop’s new Landscape Mixer Neural Filter, giving the place a dusting of magic:
Just for the lulz, I tried applying the filter to a 360º panorama I’d captured via my drone. The results don’t entirely hold up to scrutiny (try showing the pano below in full-screen mode & examine the buildings), but they’re fun—and good grief, we can now do all this in literally one click!
For the sake of comparison, here’s the unmodified original:
“Ireland invites giant screen audiences on a joyful adventure into the Emerald Isle’s immense natural beauty, rich history, language, music and arts. Amid such awe-inspiring locations as Giant’s Causeway, Skellig Michael and the Cliffs of Moher, the film follows Irish writer Manchán Magan on his quest to reconnect Irish people from around the world with their land, language, and heritage.”
Of course, my wry Irishness compels me to share Conan O’Brien’s classic counterpoint from the blustery Cliffs of Moher…
I like this concise 6-minute list from Shutterstock, though I wish they touched on how to avoid the dreaded scourge of propeller shadows. (Somehow I’ve yet to look this up & get my head conclusively around it.)
Lightroom is the world’s top photography ecosystem with everything needed to edit, manage, store and share your images across a connected system on any device and the web! The Digital Imaging group at Adobe includes the Photoshop and Lightroom ecosystems.
We are looking for the next leader of the Lightroom ecosystem to build and execute the vision and strategy to accelerate the growth of Lightroom’s mobile, desktop, web and cloud products into the future. This person should have a proven track record as a great product management leader, have deep empathy for customers and a passion for photography. This is a high visibility role on one of Adobe’s most important projects.