Category Archives: Photography

“PERSEVERE”: A giant statement of encouragement

The ongoing California storms have beaten the hell out of beloved little communities like Capitola, where the pier & cute seaside bungalows have gotten trashed. I found this effort by local artist Brighton Denevan rather moving:

[TikTok embed from @brighton.denevan: “PERSEVERE 💙 • 1-6-2023 • 3-5:30 • 8 Miles”]

The Santa Cruz Sentinel writes,

In the wake of the recent devastating storm damage to businesses in Capitola Village, local artist Brighton Denevan spent a few hours Friday on Capitola Beach sculpting the word “persevere” repeatedly in the sand to highlight a message of resilience and toughness that is a hallmark of our community. “The idea came spontaneously a few hours before low tide,” Denevan said. “After seeing all the destruction, it seemed like the right message for the moment.” Denevan has been drawing on paper since the age of 5; he picked up the rake and took to the beach canvas in 2020, and each year he has done more projects. Last year, he created more than 200 works in the sand locally and across the globe.

“The Book of Leaves”

Obsessive (in a good way) photographer & animator Brett Foxwell has gathered thousands of individual leaves & sequenced them into a mesmerizing animation:

This is the complete leaf sequence used in the accompanying short film LeafPresser. While collecting leaves, I conceived that the leaf shape of every single plant type I could find would fit somewhere into a continuous animated sequence of leaves if that sequence were expansive enough. If I didn’t have the perfect shape, it meant I just had to collect more leaves.

[Via]

Check out frame interpolation from Runway

I meant to share this one last month, but there’s just no keeping up with the pace of progress!

My initial results are on the uncanny side, but more skillful practitioners like Paul Trillo have been putting the tech to impressive use:

Adobe “Made In The Shade” sneak is 😎

OMG—interactive 3D shadow casting in 2D photos FTW! 🔥

In this sneak, we re-imagine what image editing would look like if we used Adobe Sensei-powered technologies to understand the 3D space of a scene – the geometry of a road and the car on the road, and the trees surrounding, the lighting coming from the sun and the sky, the interactions between all these objects leading to occlusions and shadows – from a single 2D photograph.

New Lightroom features: A 1-minute tour, plus a glimpse of the future

The Lightroom team has rolled out a ton of new functionality, from smarter selections to adaptive presets to performance improvements. You should read up on the whole shebang—but for a top-level look, spend a minute with Ben Warde:

And looking a bit more to the future, here’s a glimpse at how generative imaging (in the style of DALL•E, Stable Diffusion, et al) might come into LR. Feedback & ideas welcome!

“Imagic”: Text-based editing of photos

It seems almost too good to be true, but Google researchers & their university collaborators have unveiled a way to edit images using just text:

In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. — each within its single high-resolution natural image provided by the user.

Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object).

I can’t wait to see it in action!

Zooming around the world through Google Street View

“My whole life has been one long ultraviolent hyperkinetic nightmare,” wrote Mark Leyner in “Et Tu, Babe?” That thought comes to mind when glimpsing this short film by Adam Chitayat, stitched together from thousands of Street View images (see Vimeo page for a list of locations).

I love the idea—indeed, back in 2014 I tried to get Google Photos to stitch together visual segues that could interconnect one’s photos—but the pacing here has my old man brain pulling the e-brake after just a short exposure. YMMV, so here ya go:

[Via]

Are high-res shots from the Insta360 X3 any good?

Well… kinda? I’m feeling somewhat hoodwinked, though. The new cam promises 72-megapixel captures, compared to 18 from its predecessor. It appears this happens via some kind of 4x upsampling, and at least for now that’s incompatible with shooting HDR images.

Thus, as you can see via the comparisons below & via these original images, I was able to capture somewhat better detail (e.g. look at text) at the cost of getting worse tonal range (e.g. see the X2 lying on top of the book).

I need to carve out time to watch the tutorial below on how to wring the best out of this new cam.

Insta360 announces the X3

Who’s got two thumbs & just pulled the trigger? This guuuuuy. 😌

Now, will it be worth it? I sure hope so.

Fortunately I got to try out the much larger & more expensive One R 1″ Edition back in July & concluded that it’s not for me (heavier, lacking Bullet Time, and not producing appreciably better quality results—at least for the kind of things I shoot).

I’m of course hoping the X3 (successor to my much-beloved One X2) will be more up my alley. Here’s some third-party perspective:

Relight faces via a slick little web app

Check out ClipDrop’s relighting app, demoed here:

Fellow nerds might enjoy reading about the implementation details.

Photoshop previews new AI-powered photo restoration

Eng manager Barry Young writes,

The latest beta build of Photoshop contains a new feature called Photo Restoration. Whenever I have seen new updates in AI photo restoration over the last few years, I have tried the technology on an old family photo that I have of my great-great-great-grandfather, a Scotsman who lived from 1845 to 1919. I applied the neural filter plus colorize technique to update the image in Photoshop. The restored photo is on the left, the original on the right. It is really astonishing how advanced AI is becoming.

Learn more about accessing the feature in Photoshop here.

Stunning landscape photos conjured by robots

The new open-source Stable Diffusion model is pretty darn compelling. Per PetaPixel:

“Just telling the AI something like ‘landscape photography by Marc Adamus, Glacial lake, sunset, dramatic lighting, mountains, clouds, beautiful’ gives instant pleasant looking photography-like images. It is incredible that technology has got to this point where mere words produce such wonderful images (please check the Facebook group for more).” — photographer Aurel Manea

Insta360 Sphere promises epic aerial shots

Somehow I totally missed this announcement a few months back—perhaps because the device apparently isn’t compatible with my Mavic 2 Pro. I previously bought an Insta360 One R (which can split in half) with a drone-mounting cage, but I found the cam so flaky overall that I never took the step of affixing it to a cage that was said to interfere with GPS signals. In any event, this little guy looks fun:

Head to head: Insta360 One RS 1″ vs. X2

At nearly twice the price while lacking features like Bullet Time, the Insta360 One RS 1″ had better produce substantially better photos and videos than what come out of my trusty One X2. I therefore really appreciate this detailed side-by-side comparison. Having used both together, I don’t see a dramatic difference, but this vid certainly makes a good case that the gains are appreciable.

Using DALL•E to sharpen macro photography 👀

Synthesizing wholly new images is incredible, but as I noted in my recent podcast conversation, it may well be that surgical slices of tech like DALL•E will prove to be just as impactful—à la Content-Aware Fill emerging from a thin slice of the PatchMatch paper. In this case,

To fix the image, [Nicholas Sherlock] erased the blurry area of the ladybug’s body and then gave a text prompt that reads “Ladybug on a leaf, focus stacked high-resolution macro photograph.”

A keen eye will note that the bug’s spot pattern has changed, but it’s still the same bug. Pretty amazing.

Fun, these clone wars are

Cristóbal Valenzuela from Runway ML shared a fun example of what’s possible via video segmentation & overlaying multiple takes of a trick:

As a commenter noted, the process (shown in this project file) goes like this:

  1. Separate yourself from the background in each clip
  2. Throw away all backgrounds but one, and stack up all the clips of just you (with the successful attempt on top).
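Under the hood, that two-step recipe is just alpha compositing with segmentation masks. Here’s a minimal NumPy sketch—the tiny synthetic frames and hand-made masks are purely illustrative, not Runway’s actual pipeline:

```python
import numpy as np

def composite_takes(background, takes):
    """Follow the two steps above: keep one background plate, then
    alpha-composite the segmented performer from each take on top.

    background: (H, W, 3) float array -- the single kept background.
    takes: list of (frame, mask) pairs; mask is (H, W) in [0, 1],
           1 where the performer is (a segmentation model's output,
           assumed precomputed here). Later takes land on top, so
           put the successful attempt last.
    """
    out = background.copy()
    for frame, mask in takes:
        alpha = mask[..., None]          # broadcast mask over RGB
        out = frame * alpha + out * (1.0 - alpha)
    return out

# Tiny synthetic check: 2x2 frames, one masked pixel per take.
bg = np.zeros((2, 2, 3))
take1 = (np.ones((2, 2, 3)), np.array([[1.0, 0.0], [0.0, 0.0]]))
take2 = (np.full((2, 2, 3), 0.5), np.array([[0.0, 1.0], [0.0, 0.0]]))
result = composite_takes(bg, [take1, take2])
```

The ordering matters only where masks overlap; elsewhere each take simply stamps its performer onto the shared plate.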

Coincidentally, I just saw Russell Brown posting a fun bonus-limbed image:

Lightroom now supports video editing

At last…!

Per the team blog (which lists myriad other improvements):

The same edit controls that you already use to make your photography shine can now be used with your videos as well! Not only can you use Lightroom’s editing capabilities to make your video clips look their best, you can also copy and paste edit settings between photos and videos, allowing you to achieve a consistent aesthetic across both your photos and videos. Presets, including Premium Presets and Lightroom’s AI-powered Recommended Presets, can also be used with videos. Lightroom also allows you to trim off the beginning or end of a video clip to highlight the part of the video that is most important.

And here’s a fun detail:

Video: Creative — to go along with Lightroom’s fantastic new video features, these stylish and creative presets, created by Stu Maschwitz, are specially optimized to work well with videos.

I’ll share more details as I see tutorials, etc. arrive.

Outdoor Photographer reviews Adobe Super Resolution

Great to see Adobe AI getting some love:

Adobe Super Resolution technology is the best solution I’ve yet found for increasing the resolution of digital images. It doubles the linear resolution of your file, quadrupling the total pixel count while preserving fine detail. Super Resolution is available in both Adobe Camera Raw (ACR) and Lightroom and is accessed via the Enhance command. And because it’s built-in, it’s free for subscribers to the Creative Cloud Photography Plan.
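The resolution arithmetic in that quote is easy to sanity-check: doubling each linear dimension multiplies the total pixel count by four. A trivial sketch in plain Python (nothing Adobe-specific—the real feature uses machine learning to invent plausible detail, which a size calculation obviously can’t capture):

```python
def super_resolution_size(width, height, linear_factor=2):
    """Doubling the linear resolution quadruples the pixel count:
    new pixels = (w * f) * (h * f) = w * h * f**2."""
    new_w, new_h = width * linear_factor, height * linear_factor
    return new_w, new_h, (new_w * new_h) // (width * height)

# A 6000x4000 (24 MP) image becomes 12000x8000 -- 96 MP, 4x the pixels.
print(super_resolution_size(6000, 4000))  # (12000, 8000, 4)
```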

Check out the whole article for details.

NASA celebrates Hubble’s 32nd birthday with a lovely photo of five clustered galaxies

Honestly, from DALL•E innovations to classic mind-blowers like this, I feel like my brain is cooking in my head. 🙃 Take ‘er away, science:

Bonus madness (see thread for details):

A free online face-swapping tool

My old boss on Photoshop, Kevin Connor, used to talk about the inexorable progression of imaging tools from the very general (e.g. the Clone Stamp) to the more specific (e.g. the Healing Brush). In the process, high-complexity, high-skill operations were rendered far more accessible—arguably to a fault. (I used to joke that, believe it or not, drop shadows were cool before Photoshop made them easy. ¯\_(ツ)_/¯)

I think of that observation when seeing things like the Face Swap tool from Icons8. What once took considerable time & talent in an app like Photoshop is now rendered trivially fast (and free!) to do. “Days of Miracles & Wonder,” though we hardly even wonder now. (How long will it take DALL•E to go from blown minds to shrugged shoulders? But that’s a subject for another day.)

MyStyle promises smarter facial editing based on knowing you well

A big part of my rationale in going to Google eight (!) years ago was that a lot of creativity & expressivity hinge on having broad, even mind-of-God knowledge of one’s world (everywhere you’ve been, who’s most important to you, etc.). Given access to one’s whole photo corpus, a robot assistant could thus do amazing things on one’s behalf.

In that vein, MyStyle proposes to do smarter face editing (adjusting expressions, filling in gaps, upscaling) by being trained on 100+ images of an individual face. Check it out:

Fantastic Shadow Beasts (and Where To Find Them)

I’m not sure who captured this image (conservationist Beverly Joubert, maybe?), or whether it’s indeed the National Geographic Picture of The Year, but it’s stunning no matter what. Take a close look:

Elsewhere I love this compilation of work from “Shadowologist & filmmaker” Vincent Bal:

[Instagram embed: a post shared by WELCOME TO THE UNIVERSE OF ART (@artistsuniversum)]

Google Photos is bringing Portrait Blur to Android subscribers

Nice to see my old team’s segmentation tech roll out more widely.

The Verge writes,

Google Photos’ portrait blur feature on Android will soon be able to blur backgrounds in a wider range of photos, including pictures of pets, food, and — my personal favorite — plants… Google Photos has previously been able to blur the background in photos of people. But with this update, Pixel owners and Google One subscribers will be able to use it on more subjects. Portrait blur can also be applied to existing photos as a post-processing effect.
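At its core, portrait blur is a blend of a sharp image with a blurred copy, steered by a subject mask. A toy sketch, assuming the segmentation mask is already computed (a crude box blur stands in for the lens-style bokeh a real implementation would use):

```python
import numpy as np

def portrait_blur(image, mask, radius=2):
    """Blur the background of `image`; keep masked (subject) pixels sharp.

    image: (H, W) grayscale float array (color channels work the same way).
    mask: (H, W) array in [0, 1], 1 on the subject -- assumed to come
          from a segmentation model, precomputed here.
    """
    h, w = image.shape
    blurred = np.empty_like(image)
    for y in range(h):                   # naive box blur for illustration
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            blurred[y, x] = image[y0:y1, x0:x1].mean()
    # Subject stays sharp; everything else gets the blurred copy.
    return mask * image + (1.0 - mask) * blurred

# One bright "subject" pixel, masked so it survives the blur.
img = np.zeros((5, 5)); img[2, 2] = 1.0
subject = np.zeros((5, 5)); subject[2, 2] = 1.0
out = portrait_blur(img, subject)
```

The interesting (and hard) part in production is of course the mask itself—that’s what the segmentation tech mentioned above provides.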

Asphalt Galaxies

PetaPixel writes,

Finnish photographer Juha Tanhua has been shooting an unusual series of “space photos.” While the work may look like astrophotography images of stars, galaxies, and nebulae, they were actually captured with a camera pointed down, not up. Tanhua created the images by capturing gasoline puddles found on the asphalt of parking lots.

Check out the post for more images & making-of info.

Tutorial: Combining video frames to make long exposures

Waaaay back in the way back, we had fun enabling “Safe, humane tourist-zapping in Photoshop Extended,” using special multi-frame processing techniques to remove transient elements in images. Those techniques have remained obscure yet powerful. In this short tutorial, Julieanne Kost puts ’em to good use:

In this video (Combining Video Frames to Create Still Images), we’re going to learn how to use Smart Objects in combination with Stack Modes to combine multiple frames from a video (exported as an image sequence) into a single still image that appears to be a long exposure, yet still freezes motion.
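The stack-mode trick can be approximated outside Photoshop too. A minimal NumPy sketch (my own illustration, not Julieanne’s workflow): Mean mimics a long exposure by smearing motion into trails, while Median is the old “tourist-zapping” remover, discarding anything that isn’t present in most frames:

```python
import numpy as np

def stack_frames(frames, mode="mean"):
    """Collapse a stack of video frames into one still image.

    frames: iterable of same-shaped arrays from an exported image sequence.
    "mean" mimics a long exposure; "median" removes transient elements.
    """
    stack = np.stack(list(frames), axis=0)
    if mode == "mean":
        return stack.mean(axis=0)
    if mode == "median":
        return np.median(stack, axis=0)
    raise ValueError(f"unknown mode: {mode}")

# A bright pixel moving across three frames of dark background:
frames = [np.array([[1.0, 0.0, 0.0]]),
          np.array([[0.0, 1.0, 0.0]]),
          np.array([[0.0, 0.0, 1.0]])]
trail = stack_frames(frames, "mean")     # faint trail across all pixels
clean = stack_frames(frames, "median")   # moving element vanishes
```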

Valley of Fire

My son Henry & I were super hyped to join Russell Brown & his merry band last Monday at Nevada’s deeply weird International Car Forest of the Last Church for some fire photography featuring pyrotechnic artist Joseph Kerr. As luck would have it, I had to send Henry on ahead with little notice, pressing my DSLR on him before he left. Happily, I think he did a great job capturing the action!

Russell of course caught some amazing moments (see his recent posts), and you might enjoy this behind-the-scenes footage from Rocky Montez-Carr (aka Henry’s kindly chauffeur 😌🙏):

“Nobody walks in LA”—but with Content-Aware Fill, they can skate there

Some 20+ years ago (cripes…), 405: The Movie became a viral smash, in part thanks to the DIY filmmakers’ trick of compositing multiple images of the busy LA freeway in order to make it look deserted.

Now (er, 8 years ago; double cripes…) Russell Houghten has used what I imagine to be similar but more modern techniques to remove car traffic from the streets, freeing up the concrete rivers for some lovely skateboarding reveries:

Some sparkling aspirations

I’m headed out to Death Valley on Friday for some quick photographic adventures with Russell Brown & friends, and I’m really excited to try photographing with burning steel wool for the first time. I’m inspired by this tutorial from Insta360 to try shooting with my little 360º cam:

 

“Just don’t be horrendously disappointed if it doesn’t turn out quite like this,” advises Henry, my 12yo aspiring assistant. Fair enough, dude—but let’s take it for a spin!

If you’ve ever shot this way & have any suggestions for us, please add ’em in the comments. TIA!

“Adobe Trotsky”

I was really pleased to see Google showcase the new Magic Eraser feature in Pixel 6 marketing. Here’s a peek at how it works:

I had to chuckle & remember how, just after he’d been instrumental in shipping Content-Aware Fill in Photoshop in 2010, my teammate Iván Cavero Belaunde created a tablet version he dubbed “Trotsky,” in mock honor of the Soviet practice of “disappearing” people from photos. I still wish we’d gotten to ship it—especially with that name!

Update: Somehow Iván still has the icon after all these years:

New stock photos are 100% AI-generated

PetaPixel reports,

PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”

None of the photos are of people who actually exist.

The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:

Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:

Milky Way Bridge

A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:

PetaPixel writes,

“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”