Category Archives: Photography

New stock photos are 100% AI-generated

PetaPixel reports,

PantherMedia, the first microstock agency in Germany, […] partnered with VAIsual, a technology company that pioneers algorithms and solutions to generate synthetic licensed stock media. The two have come together to offer the first set of 100% AI-generated, licensable stock photos of “people.”

None of the photos are of people who actually exist.

The “first” claim seems odd to me, as Generated.photos has been around for quite some time—albeit not producing torsos. That site offers an Anonymizer service that can take in your image, then generate multiple faces that vaguely approximate your characteristics. Here’s what it made for me:

Now I’m thinking of robots replacing humans in really crummy stock-photo modeling jobs, bringing to mind Mr. “Rob Ott” sliding in front of the camera:

Milky Way Bridge

A year ago, I was shivering out amidst the Trona Pinnacles with Russell Brown, working to capture some beautiful celestial images at oh-dark-early. I’m wholly unsurprised to learn that he knows photographer Michael Shainblum, who went to even more extraordinary lengths to capture this image of the Milky Way together with the Golden Gate Bridge:

PetaPixel writes,

“I think this was the perfect balance of a few different things,” he explains. “The fog was thick and low enough to really block out most of the light pollution from the city, but the fog had also traveled so far inland that it covered most of the eastern bay as well. The clouds above just the eastern side around the cities may have also helped. The last thing is the time of evening and time of the season. I was photographing the Milky Way late at night as it started to glide across the western sky, away from the city.”

Photography: Keyframe mode on Skydio 2 looks clever & fun

File under, “OMG, Duh, Why Didn’t We Think Of/Get This Sooner?” The Verge writes,

With Skydio’s self-flying drone, you don’t need to sketch or photograph those still frames, of course. You simply fly there. You fly the drone to a point in 3D space, press a button when the drone’s camera is lined up with what you want to see in the video, then fly to the next, virtually storyboarding your shot with every press.

Here’s some example output:

Check out the aforementioned Verge article for details on how the mode works (and sometimes doesn’t). Now I just need to get Russell Brown or someone (but let’s be honest, it’s Russell 😉) to expense one of these things so I can try it out.

Rad scans: Drones & trees

Earlier this week I was amazed to see the 3D scan that Polycam founder Chris Heinrich was able to achieve by flying around LA & capturing ~100 photos of a neighborhood, then generating 3D results via the new Web version of Polycam:

I must try to replicate this myself!

You can take the results for a (literal) spin here, though note that they didn’t load properly on my iPhone.

As you may have seen in Google Earth & elsewhere, scanning & replicating amorphous organic shapes like trees remains really challenging:

It’s therefore all the more amazing to see the incredible results these exacting artists are able to deliver when creating free-to-use (!) assets for Unreal Engine:

[Via Michael Klynstra]

Photography: “A Choice of Weapons”

Nearly 16 (!) years ago I noted the passing of “novelist, self-taught pianist, semi-pro basketball player, composer, director of Shaft–who somehow still found time to be a groundbreaking photojournalist at Life for more than 20 years” Gordon Parks. Now HBO is streaming “A Choice of Weapons: Inspired By Gordon Parks,” covering his work & that of those he’s inspired to bear witness:

Dreaming of a Neural Christmas

Oh, global warming, you old scamp…

Illinois stayed largely snow-free during our recent visit, but I had some fun screwing around with Photoshop’s new Landscape Mixer Neural Filter, giving the place a dusting of magic:

Just for the lulz, I tried applying the filter to a 360º panorama I’d captured via my drone. The results don’t entirely withstand scrutiny (try showing the pano below in full-screen mode & examining the buildings), but they’re fun—and good grief, we can now do all this in literally one click!

For the sake of comparison, here’s the unmodified original:

Cinematography: Ireland in IMAX ☘️

Liam Neeson narrates

Ireland invites giant screen audiences on a joyful adventure into the Emerald Isle’s immense natural beauty, rich history, language, music and arts. Amid such awe-inspiring locations as Giant’s Causeway, Skellig Michael and the Cliffs of Moher, the film follows Irish writer Manchán Magan on his quest to reconnect Irish people from around the world with their land, language, and heritage.

Of course, my wry Irishness compels me to share Conan O’Brien’s classic counterpoint from the blustery Cliffs of Moher…

…and Neeson’s Irish home-makeover show on SNL. 😛

Lightroom is looking for a new PM Director

Talk about an amazing gig that comes around once in ~forever:

Lightroom is the world’s top photography ecosystem with everything needed to edit, manage, store and share your images across a connected system on any device and the web! The Digital Imaging group at Adobe includes the Photoshop and Lightroom ecosystems.

We are looking for the next leader of the Lightroom ecosystem to build and execute the vision and strategy to accelerate the growth of Lightroom’s mobile, desktop, web and cloud products into the future. This person should have a proven track record as a great product management leader, have deep empathy for customers and a passion for photography. This is a high visibility role on one of Adobe’s most important projects.

Jump in or tell a promising friend!

Adobe releases Arm version of Lightroom for Windows and macOS - The Verge

Color in Skyfall

Per Daring Fireball:

Devan Scott put together a wonderful, richly illustrated thread on Twitter contrasting the use of color grading in Skyfall and Spectre. Both of those films were directed by Sam Mendes, but they had different cinematographers — Roger Deakins for Skyfall, and Hoyte van Hoytema for Spectre. Scott graciously and politely makes the case that Skyfall is more interesting and fully-realized because each new location gets a color palette of its own, whereas the entirety of Spectre is in a consistent color space.

Click or tap on through to the thread; I think you’ll enjoy it.

Best Inventions of 2021: Adobe Super Resolution

Congrats to Eric Chan & the whole crew for making Time’s list:

Most of the photos we take these days look great on the small screen of a phone. But blow them up, and the flaws are unmistakable. So how do you clean up your snaps to make them poster-worthy? Adobe’s new Super Resolution feature, part of its Lightroom and Photoshop software, uses machine learning to boost an image’s resolution up to four times its original pixel count. It works by looking at its database of photos similar to the one it’s upscaling, analyzing millions of pairs of high- and low-resolution photos (including their raw image data) to fill in the missing data. The result? Massive printed smartphone photos worthy of a primo spot on your living-room wall. —Jesse Will

[Via Barry Young]
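For anyone curious what that “fill in the missing data” step looks like in practice, here’s a toy PyTorch sketch of learned super-resolution in general: a small network trained on pairs of low- and high-resolution crops that predicts a 2x upscale. It’s purely illustrative (the `TinySR` name and architecture are mine) and is not Adobe’s Super Resolution model.

```python
import torch
import torch.nn as nn

# Toy sketch of learned super-resolution in general (NOT Adobe's model):
# a network trained on low-/high-resolution pairs learns to predict the
# detail that plain upsampling can't recover.
class TinySR(nn.Module):
    def __init__(self, channels=3, factor=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels * factor**2, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(factor)  # rearranges channels into a larger image

    def forward(self, low_res):
        return self.shuffle(self.body(low_res))

model = TinySR()
upscaled = model(torch.rand(1, 3, 128, 128))  # -> (1, 3, 256, 256) once trained
```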

FaceStudio enables feature-by-feature editing via GANs

In traditional graphics work, vectorizing a bitmap image produces a bunch of points & lines that the computer then renders as pixels, producing something that approximates the original. Generally there’s a trade-off between editability (relatively few points, requiring a lot of visual simplification, but easy to see & manipulate) and fidelity (tons of points, high fidelity, but heavy & hard to edit).

Importing images into a generative adversarial network (GAN) works in a similar way: pixels are converted into vectors which are then re-rendered as pixels—and guess what, it’s a generally lossy process where fidelity & editability often conflict. When the importer tries to come up with a reasonable set of vectors that fit the entire face, it’s easy to end up with weird-looking results. Additionally, changing one attribute (e.g. eyebrows) may cause changes to others (e.g. hairline). I saw a case once where making someone look another direction caused them to grow a goatee (!).
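To make that trade-off concrete, here’s a minimal sketch of optimization-based GAN inversion, the usual way a real photo gets pulled into a generator’s latent space. It’s illustrative only: `generator` stands in for some pretrained StyleGAN-style model, and this is not FaceStudio’s code.

```python
import torch
import torch.nn.functional as F

def invert(generator, target, steps=500, lr=0.05):
    """Fit a latent whose rendering approximates `target` (a 1x3xHxW photo)."""
    latent = torch.zeros(1, 512, requires_grad=True)  # start from a neutral latent
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        recon = generator(latent)             # latent -> rendered pixels
        loss = F.mse_loss(recon, target)      # fidelity: how well we match the photo
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The result lives in the GAN's latent space (editable), but it only
    # approximates the original pixels, and nudging one attribute can drag
    # others along with it -- the entanglement described above.
    return latent.detach()
```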

My teammates’ FaceStudio effort proposes to address this problem by sidestepping the challenge of fitting the entire face, instead letting you broadly select a region and edit just that. Check it out:

Inside the 50-megapixel Pixel 6 camera

“Folded optics” & computational zoom FTW! The ability to apply segmentation and selective blur (e.g. to the background behind a moving cyclist) strikes me as especially smart.

On a random personal note, it’s funny to see demo files for features like Magic Eraser and think, “Hey, I know that guy!” much like I did with Content-Aware Fill eleven (!) years ago. And it’s fun that some of the big brains I got to work with at Google have independently come over to collaborate at Adobe. It’s a small, weird world.

DJI Ronin 4D looks amazing

Built-in gimbal, 8K rez, LIDAR rangefinder for low-light focusing—let’s go!

It commands a pro price tag, too. Per The Verge:

The 6K version costs $7,199, the 8K version is $11,499, and both come with a decent kit: the gimbal, camera, LIDAR range finder, a monitor and hand grips / top handle, a carrying case, and a battery (the 8K camera also comes with a 1TB SSD). In the realm of production cameras and stabilization systems, that’s actually on the lower end (DJI’s cinema-focused Ronin 2 stabilizer costs over $8,000 without any camera attached, and Sony’s FX9 6K camera costs $11,000 for just the body), but if you were hoping to use the LIDAR focus system to absolutely nail focus in your vlogs, you may want to rethink that.

Google enables Pixel -> Snap in two taps

I was so excited to build an AR stack for Google Lens, aiming to bring realtime magic to billions of phones’ default camera. Sadly, after AR Playground went out the door three years ago & the world shrugged, Google lost interest.

At least they’re letting others like Snap grab the mic.

Dubbed “Quick Tap to Snap,” the new feature will enable users to tap the back of the device twice to open the Snapchat camera directly from the lock screen. Users will have to authenticate before sending photos or videos to a friend or their personal Stories page. 

Snapchat’s Pixel service will also include extra augmented-reality lenses and integrate some Google features, like live translation in the chat feature, according to the company.

I wish Apple would offer similar access to third-party camera apps like Halide Camera, etc. Its absence has entirely killed my use of those apps, no matter how nice they may be.

Demo: Camera Raw is coming to Photoshop for iPad

Nine years ago, Google spent a tremendous amount of money buying Nik Software, in part to get a mobile raw converter—which, as they were repeatedly told, didn’t actually exist. (“Still, a man hears what he wants to hear and disregards the rest…”)

If all that hadn’t happened, I likely never would have gone there, and had the acquisition not been so ill-advised & ill-fitting, I probably wouldn’t have come back to Adobe. Ah, life’s rich pageant… ¯\_(ツ)_/¯

Anyway, back in 2021, take ‘er away, Ryan Dumlao:

Online seminar tomorrow: Russell Brown discusses his latest photographic explorations

I had a ball schlepping all around Death Valley & freezing my butt off while working with Russell back in January, and this seminar sounds fun:

Oct 12, 2021; 7:00 – 8:30pm Eastern

Russell Preston Brown is the senior creative director at Adobe, as well as an Emmy Award-winning instructor. His ability to bring together the world of design and software development is a perfect match for Adobe products. In Russell’s 32 years of creative experience at Adobe, he has contributed to the evolution of Adobe Photoshop with feature enhancements, advanced scripts and development. He has helped the world’s leading photographers, publishers, art directors and artists to master the software tools that have made Adobe’s applications the standard by which all others are measured.

New facial-puppeteering tech from the team behind Deep Nostalgia

The creative alchemists at D-ID have introduced “Speaking Portrait.” Per PetaPixel,

These can be made with any still photo and will animate the head while other parts stay static and can’t have replaced backgrounds. Still, the result below shows how movements and facial expressions performed by the real person are seamlessly added to a still photograph. The human can act as a sort of puppeteer of the still photo image.

What do you think?

“Float,” a beautiful short film

It’s odd to say “no spoilers” about a story that unfolds in less than three minutes, but I don’t want to say anything that would interfere with your experience. Just do yourself a favor and watch.

The fact that this was shot entirely on iPhone is perhaps the least interesting thing about it, but that’s not to say it’s unremarkable: seeing images of my own young kids pop up, shot on iPhones 10+ years ago, the difference is staggering—and yet taken wholly for granted. Heck, even the difference four years makes is night & day.

Come try editing your face using just text

A few months back, I mentioned that my teammates had connected some machine learning models to create StyleCLIP, a way of editing photos using natural language. People have been putting it to interesting, if ethically complicated, use:

Now you can try it out for yourself. Obviously it’s a work in progress, but I’m very interested in hearing what you think of both the idea & what you’re able to create.

And just because my kids love to make fun of my childhood bowl cut, here’s Less-Old Man Nack featuring a similar look, as envisioned by robots:

Photography: A rather amazing sunflower time lapse

This is glorious, if occasionally a bit xenomorph-looking. Happy Friday.

PetaPixel writes,

The plants featured in Neil Bromhall’s timelapses are grown in a blackened, window-less studio with a grow light serving as artificial sunlight.

“Plants require periods of day and night for photosynthesis and to stimulate the flowers and leaves to open,” the photographer tells PetaPixel. “I use heaters or coolers and humidifiers to control the studio condition for humidity and temperature. You basically want to recreate the growing conditions where the plants naturally thrive.”

Lighting-wise, Bromhall uses a studio flash to precisely control his exposure regardless of the time of day it is. The grow light grows the plants while the flash illuminates the photos.

Anonymize your photo automatically

Hmm—I’m not sure what to think about this & would welcome your thoughts. Promising to “Give people an idea of your appearance, while still protecting your true identity,” this Anonymizer service will take in your image, then generate multiple faces that vaguely approximate your characteristics:

Here’s what it made for me:

I find the results impressive but a touch eerie, and as I say, I’m not sure how to feel. Is this something you’d find useful (vs., say, just using something other than a photograph as your avatar)?

How Google’s new “Total Relighting” tech works

As I mentioned back in May,

You might remember the portrait relighting features that launched on Google Pixel devices last year, leveraging some earlier research. Now a number of my former Google colleagues have created a new method for figuring out how a portrait is lit, then imposing new light sources in order to help it blend into new environments.

Two-Minute Papers has put together a nice, accessible summary of how it works:

https://youtu.be/SEsYo9L5lOo

Amazing colorization coming? Check out AI-Powered “Time-Travel Rephotography”

A bunch of my former Google colleagues, including one with whom I’m now working now that she’s joined Adobe, have introduced new techniques that promise amazing colorization of old photos.

By characterizing the quirks & limitations of old cameras and film, then creating and manipulating a “digital sibling,” the team is able to achieve some really lifelike results:

These academic videos are often kinda dry, but I promise that this one is pretty intriguing:

Flying with a Big Train

Dropping by the Tehachapi Loop (“the Eighth Wonder of the railroading world”) last year en route to Colorado was a highlight of the journey and one of my son Henry’s greatest railfanning experiences ever—which is really saying something!

This year Hen & I set out for some sunset trainspotting. We were thrilled to see multiple trains passing each other & looping over one another via the corkscrew tracks. Giant hat tip to the great Wynton Marsalis & co. for the “Big Train” accompaniment here:

As a papa-razzo, I was especially pleased to have the little chap rocking my DSLR while I flew, capturing some fun shots:

Tangential bonus(-ish): Here’s a little zoom around Red Rocks outside Vegas the next day:

Photography: “Jay Myself” is terrific

Many years ago I had the chance to drop by Jay Maisel‘s iconic converted bank building in the Bowery. (This must’ve been before phone cameras got good, as otherwise I’d have shot the bejesus out of the place.) It was everything you’d hope it to be.

As luck would have it, my father-in-law (having no idea about the visit) dialed up the documentary “Jay Myself” last night, and the whole family (down to my 12yo budding photographer son) loved it. I think you would, too!

Animation: Gmunk & Light

I’ve admired the motion graphics of Bradley Munkowitz since my design days in the ’90s (!), and I enjoyed this insight into one of his most recent creations:

What I didn’t know until now is that he collaborated with the folks at Bot & Dolly—who created the brilliant work below before getting acquired by Google and, as best I can tell, having their talent completely wasted there 😭.

Gone Fishin’, 2021 edition

Hey all—greetings from somewhere in the great American west, which I’m happily exploring with my wife, kids, and dog. Being an obviously crazy person, I can’t just, y’know, relax and stop posting for a while, but you may notice that my cadence here drops for a few days.

In the meantime, I’ll try to gather up some good stuff to share. Here’s a shot I captured while flying over the Tehachapi Loop on Friday (best when viewed full screen).


Just for fun, here’s a different rendering of the same file (courtesy of running the Mavic Pro’s 360º stitch through Insta360 Studio):

And, why not, here’s another shot of the trains in action. I can’t wait to get some time to edit & share the footage.


Google Pixel brings video to astrophotography

Psst, hey, Russell Brown, tell me again when we’re taking our Pixels to the desert… 😌✨

Pixel owners love using astrophotography in Night Sight to take incredible photos of the night sky, and now it’s getting even better. You can now create videos of the stars moving across the sky all during the same exposure. Once you take a photo in Night Sight, both the photo and video will be saved in your camera roll. Try waiting longer to capture even more of the stars in your video. This feature is available on Pixel 4 and newer phones and you can learn more at g.co/pixel/astrophotography.

Google makes strides on equitable imaging

“I’m real black, like won’t show up on your camera phone,” sang Childish Gambino. It remains a good joke, but ten years later, it’s long past time for devices to be far fairer in how they capture and represent the world. I’m really happy to see my old teammates at Google focusing on just this area:

Apply for the Adobe Stock Artist Development Fund

I’m really happy to see Adobe putting skin in the game to increase diversity & inclusion in stock imagery:

Introducing the Artist Development Fund, a new $500,000 creative commission program from Adobe Stock. As an expression of our commitment to inclusion we’re looking for artists who self-identify with and expertly depict diverse communities within their work.

Here’s how it works:

The fund also ensures artists are compensated for their work. We will be awarding funding of $12,500 each to a total of 40 global artists on a rolling basis during 2021. Artist Development Fund recipients will also gain unique opportunities, including having their work and stories featured across Adobe social and editorial channels to help promote accurate and inclusive cultural representation within the creative industry.

VFX & photography: Fireside chat tonight with Paul Debevec

If you liked yesterday’s news about Total Relighting, or pretty much anything else related to HDR capture over the last 20 years, you might dig this SIGGRAPH LA session, happening tonight at 7pm Pacific:

Paul Debevec is one of the most recognized researchers in the field of CG today. LA ACM SIGGRAPH’s “fireside chat” with Paul and Carolyn Giardina, of the Hollywood Reporter, will allow us a glimpse at the person behind all the innovative scientific work. This event promises to be one of our most popular, as Paul always draws a crowd and is constantly in demand to speak at conferences around the world.

“Total Relighting” promises to teleport(rait) you into new vistas

This stuff makes my head spin around—and not just because the demo depicts heads spinning around!

You might remember the portrait relighting features that launched on Google Pixel devices last year, leveraging some earlier research. Now a number of my former Google colleagues have created a new method for figuring out how a portrait is lit, then imposing new light sources in order to help it blend into new environments. Check it out:

Interesting, interactive mash-ups powered by AI

Check out how StyleMapGAN (paper, PDF, code) enables combinations of human & animal faces, vehicles, buildings, and more. Unlike simple copy-paste-blend, this technique permits interactive morphing between source & target pixels:

From the authors, a bit about what’s going on here:

Generative adversarial networks (GANs) synthesize realistic images from random latent vectors. Although manipulating the latent vectors controls the synthesized outputs, editing real images with GANs suffers from i) time-consuming optimization for projecting real images to the latent vectors, ii) or inaccurate embedding through an encoder. We propose StyleMapGAN: the intermediate latent space has spatial dimensions, and a spatially variant modulation replaces AdaIN. It makes the embedding through an encoder more accurate than existing optimization-based methods while maintaining the properties of GANs. Experimental results demonstrate that our method significantly outperforms state-of-the-art models in various image manipulation tasks such as local editing and image interpolation. Last but not least, conventional editing methods on GANs are still valid on our StyleMapGAN. Source code is available at https://github.com/naver-ai/StyleMapGAN​.
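For a rough feel of the core idea, here’s a toy sketch contrasting per-channel AdaIN modulation with a spatially variant version in the spirit of the “stylemap” described above. Shapes and function names are my own illustration, not the authors’ reference code.

```python
import torch
import torch.nn.functional as F

def adain(feat, scale, shift):
    # Classic AdaIN: one scale/shift per channel (N, C, 1, 1),
    # applied uniformly across every spatial location.
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.std(dim=(2, 3), keepdim=True) + 1e-6
    return scale * (feat - mean) / std + shift

def spatial_modulation(feat, stylemap_scale, stylemap_shift):
    # StyleMapGAN-flavored idea: the "style" has spatial dimensions (N, C, h, w),
    # so each image region gets its own modulation -- which is what makes
    # local edits like pasting one region into another possible.
    size = feat.shape[2:]
    scale = F.interpolate(stylemap_scale, size=size, mode="bilinear", align_corners=False)
    shift = F.interpolate(stylemap_shift, size=size, mode="bilinear", align_corners=False)
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.std(dim=(2, 3), keepdim=True) + 1e-6
    return scale * (feat - mean) / std + shift
```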