Monthly Archives: February 2026

Creative technologist needed on the Flux team

I’ve really enjoyed collaborating with Black Forest Labs, the brain-geniuses behind Flux (and before that, Stable Diffusion). They’re looking for a creative technologist to join their team. Here’s a bit of the job listing in case the ideal candidate might be you or someone you know:

BFL’s models need someone who knows them inside out – not just what they can do today, but what nobody’s tried yet. This role sits at the intersection of creative excellence, deep model knowledge, and go-to-market impact. You’ll create the work that makes people realize what’s possible with generative media – original pieces, experiments, and creative assets that set the standard for what FLUX can do and show it to the world.

Create original creative work that pushes FLUX to its limits – experiments, visual explorations, and pieces that show what’s possible before anyone else figures it out.

Collaborate with the research and product teams from the start of training/product development to understand the core strengths of each new model/product and create assets that amplify and showcase these. You will also provide feedback to those teams throughout the development process on what needs to improve.

UI: Realtime generation & the undiscovered country

Former Apple designer Tuhin Kumar, who recently logged three years at Luma AI, makes a great point here:

To the extent I give Adobe gentle but unending grief about their near-total absence from the world of UI innovation, this is the kind of thing I have in mind. What if any layer in Photoshop—or any shape in Illustrator—could have realtime-rendering generative parameters attached?

Like, where are they? Don’t they want to lead? (It’s a genuine question: maybe the strategy is just to let everyone else try things, and then to finally follow along at scale.) And who knows, maybe certain folks are presently beavering away on secret awesome things. I’ll keep hoping so!

Shooting up a storm

Supporting my MiniMe Henry’s burgeoning interest in photography remains a great joy. Having recently captured the Super Bowl flyover with him (see previous), I prayed that Monday’s torrential downpour in LA just might give us some spectacular skies—and, what do you know, it did! Check out our gallery (selects below), featuring one seriously exuberant kid!

I’ve also been enjoying Hen’s great eye for reflections, put to good use during our recent visit to the USS Hornet:

Nano Banana goes to the Super Bowl

It’s hard to believe that when I dropped by Google in 2022, arguing vociferously that we should work together to put Imagen into Photoshop, they yawned & said, “Can you show up with nine figures?”—and now they’re spending eight figures on a 60-second ad to promote the evolved version of that tech. Funny ol’ world…

Bad To The B-ONE

MiniMe on the lens + Dad in Lightroom/Photoshop, making the dream work. 🙂

Check out our gallery for full-res shots plus a few behind-the-scenes pics. BTW: Can you tell which clouds were really there and which ones came via Photoshop’s Sky Replacement feature? If not, then the feature and I have done our jobs!

And peep this incredibly smooth camerawork that paired the flyover with the home of the brave:

[Embedded Instagram video: a post shared by ESPN (@espn)]

BTS: Choreographing today’s giant Super Bowl flyover

Through insanely good timing, I caught Friday’s practice flyover as the jets headed up to Levi’s Stadium:

Right now my MiniMe & I are getting set to head up to the Bayshore Trail with proper cameras, hoping to catch the real event at 3:30 local time.

Meanwhile, I’ve been enjoying this deep dive video (courtesy of our Photoshop teammate Sagar Pathak, who’s gotten just insane access in past years). It features interviews with multiple pilots, producers, and more as they explain the challenges of safely putting eight cross-service aircraft into a tight formation over hundreds of thousands of people—and in front of a hundred+ million viewers. I think you’ll dig it.

Interactive relighting control for Qwen image creation

A couple of weeks ago I mentioned a cool, simple UI for changing camera angles using the Qwen imaging model. Along related lines, here’s an interface for relighting images:

Martini promises real creative control for filmmakers

This new tool (currently in closed beta, to which one can request access via the site) makes some big promises:

Martini puts you in the director’s chair so you can make the video you see in your head… Get the exact shot you want, not whatever the model gives you. Step into virtual worlds and compose shots with camera position, lenses, and movement… No more juggling disconnected tools. Image generation, video generation, and world models—all in one place, with a built-in timeline.

I can’t wait to try stepping into the set. Beyond filmmaking, think what something like this could mean for image creation & editing…

Adobe vets launch AniStudio

My former colleagues Jue Wang & Chen Fang are making an impressive indie debut:

AniStudio exists because we believe animation deserves a future that’s faster, more accessible, and truly built for the AI era—not as an add-on, but from the ground up. This isn’t a finished story. It’s the first step of a new one, and we want to build it together with the people who care about animation the most.

Check it out: