Adobe’s new generative 3D/vector tech is a real head-turner. I’m impressed that the results look like clean, handmade paths, with colors that match the original—and not like automatic tracing of crummy text-to-3D output. I can’t wait to take it for a… oh man, don’t say it don’t say it… spin.
Project Perfect Blend promises game-changing compositing in Photoshop
Oh man, for years we wanted to build this feature into Photoshop—years! We tried many times (e.g. I wanted this + scribble selection to be the marquee features in Photoshop Touch back in 2011), but the tech just wasn’t ready. But now, maybe, the magic is real—or at least tantalizingly close!
Being a huge nerd, I wonder about how the tech works, and whether it’s substantially the same as what Magnific has been offering (including via a Photoshop panel) for the last several months. Here’s how I used that on my pooch:

But even if it’s all the same, who cares?
Being useful to people right where they live & work, with zero friction, is tremendous. Generative Fill is a perfect example: similar (if lower quality) inpainting was available from DALL•E for a year+ before we shipped GenFill in Photoshop, but the latter has quietly become an indispensable, game-changing piece of the imaging puzzle for millions of people. I’d love to see compositing improvements go the same way.
The ceiling can’t hold us stuffed animals
As I drove the Micronaxx to preschool back in 2013, Macklemore’s “Can’t Hold Us” hit the radio & the boys flipped out, making their stuffed buddies Leo & Ollie go nuts dancing to the tune. I remember musing with Dave Werner (a fellow dad to young kids) about being able to animate said buddies.
Fast forward a decade+, and now Dave is using Adobe’s recently unveiled Firefly Video model to do what we could only dimly imagine back then:
Bringing stuffed animals to life with Adobe Firefly Generate Video. pic.twitter.com/XSbQxaIDiD
— Dave Werner (@okaysamurai) October 16, 2024
Time to unearth Leo & get him on stage at last. :->
Extremely metal “I Voted” sticker
Aw hell yeah, 12yo illustrator Jane!
IM FUCKING CRYING pic.twitter.com/fsSqPxsHhQ
— Casey Shea Enthusiast (@csheaenthusiast) October 3, 2024
AI-flavored vacation pix: Delightful nightmare fuel
Enjoy the latest from Magnific impresario Javi Lopez!
PART 3: Handed my vacation videos to an AI for auto editing, and now I’m pretty sure I’ll have nightmares for life pic.twitter.com/jYX1TZ4rMX
— Javi Lopez (@javilopen) October 12, 2024
Striking visualizations of a storm surge
Amazing, and literally immersive, work by artists at The Weather Channel. Yikes—stay safe out there, everybody.
The 3D artists at the weather channel deserve a raise for this insane visual
Now watch this, and then realize forecasts are now predicting up to 15 ft of storm surge in certain areas on the western coast of Florida pic.twitter.com/HHrCVWNgpg
— wave (@0xWave) October 8, 2024
Flair AI promises brand-consistent video creation
Ever since Google dropped DreamBooth back in 2022, people have been trying—generally without much success—to train generative models that can incorporate the fine details of specific products. Thus far it just hasn’t been possible to meet most brands’ demanding requirements for fidelity.
Now tiny startup Flair AI promises to do just that—and to pair the object definitions with custom styling and even video. Check it out:
You can now generate brand-consistent video advertisements for your products on @flairAI_
1. Train a model on your brand’s aesthetic
2. Train a model on your clothing or product
3. Combine both models in one prompt
4. Animate ✨
In beta – comment/RT for access and free credits pic.twitter.com/88NYLVOFSQ
— Mickey Friedman (@mickeyxfriedman) October 7, 2024
In search of The Something Else
Late last night my wife & I found ourselves in the depths of the Sunday Evening Blues—staring out towards the expanse of yet another week of work & school, without much differentiation from most of those before & after it. I’m keenly aware of the following fact, of course:

And yet, oof… it’s okay to acknowledge the petty creeping of tomorrow & tomorrow & tomorrow. The ennui will pass—as everything always does—but it’s real.
This reminded me of the penguin heroine in what was one of our favorite books to read to the Micronaxx back when they were actually micro, A Penguin Story by Antoinette Portis. Ol’ Edna is always searching for The Something Else—and she finds it! I came across this charming little narration of the story, and just in case you too might need a little avian encouragement—well, enjoy:
Meta AI introduces conversational editing
I was super hyped last year when Meta announced “Emu Edit” tech for selectively editing images using just language:
Now you can try the tech via Meta.ai and in various apps:
Meta has casually released the best AI image editor
You can upload your image to Meta AI and just write the edits you want to make.
Accessible for free in WhatsApp, Instagram, Messenger, Facebook, etc. pic.twitter.com/jJEhMdJadT
— Paul Couvert (@itsPaulAi) October 2, 2024
In my limited experience so far, it’s cool but highly unpredictable. I’ll test it further, and I’d love to know how it works for you. Meanwhile you can try similar techniques via https://playground.com/:
Welcome to the new Playground
Use AI to design logos, t-shirts, social media posts, and more by just texting it like a person.
Watch: pic.twitter.com/eSwJcJUxtB
— Playground (@playground_ai) September 3, 2024
RIP Dikembe Mutombo
[I know this note seems supremely off topic, but bear with me.]
I’m sorry to hear of the passing of larger-than-life NBA star Dikembe Mutombo. He inspired the name of a “Project Mutombo” at Google, which was meant to block unintended sharing of content outside of one’s company. Unrelated (AFAIK he never knew of the project), back in 2015 I happened to see him biking around campus—dwarfing a hapless Google Bike & making its back tire cartoonishly flat.
RIP, big guy. Thanks for the memories, GIFs, and inspiration.
Fun VFX from Runway Turbo
As always, I’m blown away in equal parts by:
- Just how powerful this tech is becoming, and
- Just how blasé we all can be about it all
Days of Miracles & Wonder, amirite?
Wow @runwayml just dropped an updated Gen-3 Alpha Turbo Video-to-Video mode & it’s awesome! It’s super fast & lets you do 9:16 portrait video. Anything is possible! pic.twitter.com/AxeFaJwAPR
— Blaine Brown (@blizaine) September 28, 2024
Zuck talks AR wearables & much more
I quite enjoyed the Verge’s interview with Mark Zuckerberg, discussing how they think about building a whole range of reality-augmenting devices, from no-display Wayfarers to big-ass goggles, and especially to “glasses that look like glasses”—the Holy Grail in between.
Links to some of the wide-ranging topics they covered:
00:00 Orion AR smart glasses
00:27 Platform shift from mobile to AR
02:15 The vision for Orion & AR glasses
03:55 Why people will upgrade to AR glasses
05:20 A range of options for smart glasses
07:32 Consumer ambitions for Orion
11:40 Reality Labs spending & the cost of AR
12:44 Ray-Ban partnership
17:11 Ray-Ban Meta sales & success
18:59 Bringing AI to the Ray-Ban Meta
21:54 Replacing phones with AR glasses
25:18 Influx of AI content on social media
28:32 The vision for AI-filled social media
34:04 Will AI lead to less human interaction?
35:24 Success of Threads
36:41 Competing with X & the role of news
40:04 Why politics can hurt social platforms
41:52 Mark’s shift away from politics
46:00 Cambridge Analytica, in hindsight
49:09 Link between teen mental health and social media
53:52 Disagreeing with EU regulation
56:06 Debate around AI training data & copyright
1:00:07 Responsibility around AR as a platform
Tangentially, I gave myself an unintended chuckle with this:
Fun, unintended juxtaposition at the top of my camera roll. pic.twitter.com/MsNHJUeFvB
— John Nack (@jnack) September 26, 2024
A sobering micro-critique of AI
iPhone goes on safari
Austin Mann puts the new gear through its paces in Kenya:
Last week at the Apple keynote event, the iPhone camera features that stood out the most to me were the new Camera Control button, upgraded 48-megapixel Ultra Wide sensor, improved audio recording features (wind reduction and Audio Mix), and Photographic Styles. […]
Over the past week we’ve traveled over a thousand kilometers across Kenya, capturing more than 10,000 photos and logging over 3TB of ProRes footage with the new iPhone 16 Pro and iPhone 16 Pro Max cameras. Along the way, we’ve gained valuable insights into these camera systems and their features.
A little encouragement from Carl Jung

Or as I said upon launching the first-ever Photoshop public beta, all those years ago:
“Be bold, and mighty forces will come to your aid.” – Goethe
Pillow fight NYC!
Fernando Livschitz, whose amazing work I’ve featured many times over the years, is back with some delightfully pillowy interactions in & over the Big Apple:
Big GoT
So is the expanded Midwest the Midwesteros? 🙂 Whatever the case, enjoy this little mashup before House Killjoy lawyers go full loot train on it.
Guillermo del Toro on AI
Oof. But of course he’s right that a tool is just a tool, not a provider of meaning & value unto itself.
GDT says it all here. pic.twitter.com/pK5WPtDY7l
— Todd Vaziri (@tvaziri) September 17, 2024
“Jurassic Park – 1950’s Super Panavision 70”
Chaos reigns!
I have no idea what AI and other tools were used here, but it’d be fun to get a peek behind the curtain. As a commenter notes,
The meandering strings in the soundtrack. The hard studio lighting of the close-ups. The midtone-heavy Technicolor grading. The macro-lens DOF for animation sequences. This is spot-on 50’s film aesthetic, bravo.
[Via Andy Russell]
Flux goes realtime with Krea
And if that headline makes no sense, it probably just means you’re not terminally AI-pilled, and I’m caught flipping a grunt. 😉 Anyway, the tiny but mighty crew at Krea have brought the new Flux text-to-image model—including its ability to spell—to their realtime creation tool:
Flux now in Realtime.
available in Krea with hundreds of styles included.
free for everyone. pic.twitter.com/4gmMOmcUvg
— KREA AI (@krea_ai) September 12, 2024
Old meets new: Disposable cam + Runway AI
What a fun little project & great NYC vibe-catcher: the folks at Runway captured street scenes with a disposable film camera, then used their model to put the images in motion. Check it out:
Shooting visual effects with a disposable camera and Gen-3 Alpha. pic.twitter.com/QRd3cI4Hqr
— Runway (@runwayml) September 6, 2024
iPhone 16 + AI: Quick helpful summaries
Check out my friend Bilawal’s summary thread, which pairs quick demos from Apple with bits of useful context:
Caught the Apple keynote? I’ve distilled down the most intriguing highlights for AI and spatial computing creators and builders—no need to sift through it yourself. Thread: pic.twitter.com/hiLM7iMzi4
— Bilawal Sidhu (@bilawalsidhu) September 10, 2024
There are some great additional details in this thread from Halide Camera as well:
There’s a lot of info to digest from the keynote, so here’s our summary of all the changes and new features of iPhone 16 and 16 Pro cameras in this quick thread pic.twitter.com/z7xB0aekLi
— Halide + Kino (@halidecamera) September 9, 2024
“Only Murders In The Building” titles
Somehow, despite my wife being a huge fan of the show over the last couple of years, I hadn’t previously seen the delightful titles for Only Murders In The Building:
Salon has a great article that goes behind the scenes with Elastic, which previously created titles for “Game of Thrones,” “Watchmen” and “Captain Marvel,” among others.
“The brief was this idea of a love letter to New York in a way and true crime and true crime podcasts,” Lisa Bolan, a creative director at Elastic, told Salon. “John really wanted to capture this romantic illustrative approach to New York, building on the magic of Hirschfeld and The New Yorker – illustrators who have abstracted New York in a way that’s beautiful and also speaks to these little glimpses of magic in the urban landscape.”

AI joke o’ the day
Behind the scenes: DIY Deadpool
I love seeing how scrappy creators combine tools in new ways, blazing trails that we may come to see as commonplace soon enough. Here Eric Solorio (enigmatic_e) shows how he used Viggle & other tools to create his viral Deadpool animation:
As promised, here is a breakdown of how I did the Deadpool animation I recently posted. pic.twitter.com/F130Skq17U
— enigmatic_e (@8bit_e) August 1, 2024
See also some of his luchador moves, plus more on his various feeds:
Step (and fly) into Spike Jonze’s Kenzo World
If you never see the use of After Effects in this delightfully madcap vid—well, that’s exactly as it should be. Apparently the filmmakers were featured in an Adobe trade show booth after it was released.
In any event, go nuts, Margaret Qualley!
Newton promises fun physics-based animation for After Effects
I haven’t gotten to try it out yet, but Newton looks like a lot of fun:
If different fonts asked you out…
“Frutiger, you old son of a glyph!” :-p
#FILF
The right to unbare arms
…with bears! Courtesy of image references in Photoshop GenFill:
— Anna McNaught (@annamcnaughty) August 28, 2024
Riffing on the world through Ideogram
I’ve been having a ball using the new Ideogram app for iOS to import photos & remix them into new creations. This is possible via their web UI as well, but there’s something extra magical about the immediacy of capture & remix. Check out a couple quick explorations I did while out with the kids, starting from a ballcap & the fuel tank of an old motorcycle:
More examples, riffing on a classic @TriumphAmerica fuel tank: pic.twitter.com/cZ5USqyGFN
— John Nack (@jnack) August 27, 2024
AI news flash: People prefer paying for things that are actually good
I love this level of transparency from the folks behind Photo AI. Developer @levelsio reports,
[Flux] made Photo AI finally good enough overnight to be actually used by people and be satisfied with the results… it’s more expensive [than SD] but worth it because the photos are way way better… Not sure about profitability but with SD it was about 85% profit. With Flux def less maybe 65%… Very unplanned and grateful the foundational models got better.
We’re arguably in something of a trough of disillusionment in the AI-art hype cycle, but this kind of progress gives reason for hope: more quality & more utility do translate into more sustainable value—and there’s every reason to think that things will only improve from here.
Flux, the new AI model, changes businesses (and lives)
It made https://t.co/1vEawpI5vb finally good enough overnight to be actually used by people and be satisfied with the results
All my improvements before helped but now it’s accelerating with Flux’s photo quality pic.twitter.com/BiAqi5BgnY
— @levelsio (@levelsio) August 21, 2024
Generative AI: Nuance > Sanctimony
Listen, I know that it’s a lot more seductive & cathartic to say “I f*cking hate generative AI,” and you can get 90,000+ likes for doing so, but—believe it or not—thoughtfulness & nuance actually matter. That is, how one uses generative tech can have very different implications for the creative community.
It’s therefore important to evaluate a range of risk/reward scenarios: What’s unambiguously useful & low-risk, vs. what’s an inducement to ripping people off, and what lies in the middle?
I see a continuum like this (click/tap to see larger):

None of this will draw any attention or generate much conversation—at least if my attempts to engage people on Twitter are any indication—but it’s the kind of thing actual toolmakers must engage with if we’re to make progress together. And so, back to work.
PS—This, always this:
Life imitates A(I)rt
This kind of foolishness soothes my soul. :-p
Some Chinese dudes imitating AI videos lol this is next level pic.twitter.com/LqB3O327Kr
— GioM (@theGioM) August 15, 2024
Reinterpreting classic instrument clusters in the age of CarPlay
“Tell me about a product you hate that you use regularly.” I asked this question of hundreds of Google PM candidates I interviewed, and it was always a great bozo detector. Most people don’t have much of an answer—no real passion or perspective. I want to know not just what sucks, but why it sucks.
If I were asked the same question, I’d immediately say “Every car infotainment system ever made.” As Tolstoy might say, “Each one is unhappy in its own way.” The most interesting thing, I think, isn’t just to talk about the crappy mismatched & competing experiences, but rather about why every system I’ve ever used sucks. The answer can’t be “Every person at every company is a moron”—so what is it?
So much comes down to the structure of the industry, with hardware & software being made by a mishmash of corporate frenemies, all contending with a soup of regulations, risk aversion (one recall can destroy the profitability of a whole product line), and surprisingly bargain-bin electronics.
Despite all that, talented folks continue to fight the good fight, and I enjoyed John LePore’s speculative designs that reinterpret the instrument clusters of classic cars (from Corvettes to DeLoreans) through Apple’s latest CarPlay framework:
No, YOU’RE obsessed with instrument clusters pic.twitter.com/deE0YgAhGY
— John LePore (@JohnnyMotion) June 26, 2024
Interesting thoughts from Adobe Research on type composition
My old teammates have done some promising research on how to facilitate more interesting typesetting. Check out this 1-minute overview:
Ahnuld’s Fables
My friend Nathan has fed a mix of Schwarzenegger photos & drawings from Aesop’s Fables into the new open-source Flux model, creating a rad woodcut style. That’s interesting enough on its own—but it’s so 24 hours ago, and thus he’s now taken to animating the results. Check out the thread below for details:
Animating yesterday’s #FLUX woodcut Arnold using one of my favorite clips from the old soundboards
This uses Follow-Your-Emoji / Reference UNet in ComfyUI, which did a better job than LivePortrait.
Some comparison results in thread #aivideo pic.twitter.com/C9pgWgVJS5
— Nathan Shipley (@CitizenPlain) August 15, 2024
Pixel 9 adds on-device image generation
It’s wild that capabilities that blew our minds two years ago—ones for which I & others spent months on the DALL•E waiting list, and which demanded beefy servers to run—are now available (only better) running in your pocket, on your telephone. Check out the latest from Google:
Pixel Studio is a first-of-its-kind image generator. So now you can bring all ideas to life from scratch, right on your phone — a true creative canvas.
It’s powered by combining an on-device diffusion model running on Tensor G4 and our Imagen 3 text-to-image model in the cloud. With a UI optimized for easy prompting, style changes and editing, you can quickly bring your ideas to conversations with friends and family.
3. Pixel Studio
Create anything you imagine with PixelStudio, a groundbreaking image generator powered by an on-device diffusion model. It’s your AI canvas. pic.twitter.com/oDBqkUfqOR
— EyeingAI (@EyeingAI) August 13, 2024
Days of Miracles & Wonder, as always…
Google Pixel introduces an interactive “Add Me” feature
Back when I worked on Google Photos, and especially later when I worked in Research, I really wanted to ship a camera mode that would help ensure great group photos. Prior to the user pressing the capture button, it would observe the incoming video stream, notice when it had at least one instance of each face smiling with their eyes open, and then knit together a single image in which everyone looked good.
Of course, the idea was hardly new: I’d done the same thing manually with my own wedding photos back in 2005, and in 2013 Google+ introduced “AutoAwesome Smile” to select good expressions across images & merge them into a single shot. It was a great feature, though sadly the only time people noticed its existence was when it failed in often-hilarious “AutoAwful” ways (turning your baby or dog into, say, a two-nosed Picasso). My idea was meant to improve on this by not requiring multiple photos, and of course by suppressing unwanted hilarity.
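For the curious, the core mechanic is simple enough to sketch. Below is a hypothetical toy version using OpenCV’s stock Haar cascades: it scores each incoming preview frame by how many faces are simultaneously smiling with eyes open, then keeps the best-scoring frame. The classifiers and thresholds are purely illustrative, and the real pitch went further—compositing the best per-face regions across frames rather than keeping a single shot:

```python
# Toy sketch of the "wait for everyone's best face" idea, using OpenCV's
# bundled Haar cascades. Purely illustrative: thresholds would need real
# tuning, and a production version would composite best per-face regions.
import cv2

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def score_frame(frame) -> int:
    """Count faces that are smiling with at least one eye detected open."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    good = 0
    for (x, y, w, h) in face_cc.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]
        smiling = len(smile_cc.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)) > 0
        eyes_open = len(eye_cc.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=10)) > 0
        if smiling and eyes_open:
            good += 1
    return good

cap = cv2.VideoCapture(0)          # stand-in for the camera preview stream
best_score, best_frame = -1, None
for _ in range(120):               # watch ~4 seconds of preview at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    if (s := score_frame(frame)) > best_score:
        best_score, best_frame = s, frame.copy()
cap.release()

if best_frame is not None:
    cv2.imwrite("group_shot.jpg", best_frame)
```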
Anyway, Googlers gonna Google, and now the Pixel team has introduced an interactive mode that helps you capture & merge two shots—the first one of a group, and the second of the photographer who took the first. Check out Marques Brownlee’s 1-minute demo:
The most interesting AI feature on the new Pixels IMO: “Add Me”
Full video: https://t.co/1jCauLsl2y pic.twitter.com/cWhZNLs4RO
— Marques Brownlee (@MKBHD) August 13, 2024
For more details, check out his full review of Google’s new devices.
That’s all well and good—but wake me when they decide to bring back David Hasselhoff photobombs:

Uizard & the future of AI-assisted design
Uizard (“Wizard”), which was recently acquired by Miro, has rolled out Autodesigner 2.0:
We take the intuitive conversational flow of ChatGPT and merge it with Uizard generative UI capabilities and drag-and-drop editor, to provide you with an intuitive UI design generator. You can turn a couple of ideas into a digital product design concept in a flash!
I’m really curious to see how the application of LLMs & conversational AI reshapes the design process, from ideation & collaboration to execution, deployment, and learning—and I’d love to hear your thoughts! Meanwhile here’s a very concise look at how Autodesigner works:
And if that piques your interest, here’s a more in-depth look:
A little birthday lunacy
I fondly recall Andy Samberg saying years ago that they’d sometimes cook up a sketch that would air at the absolute tail end of Saturday Night Live, be seen by almost no one, and be gotten by far fewer still—and yet for, like, 10,000 kids, it would become their favorite thing ever.
Given that it was just my birthday, I’ve dug up such an old… gem (?). This is why I’ve spent the last ~25 years hearing Jack Black belting out “Ha-ppy Birth-DAYYY!!” Enjoy (?!).
“Top Billing,” huge egos, and the art of title design
99% Invisible is back at it, uncovering hidden but fascinating bits of design in action. This time around it’s concerned with the art of movie title & poster design—specifically with how to deal with actors who insist on being top billed. In the case of the otherwise forgotten movie Outrageous Fortune:
Two different prints of the movie were made, one listing Shelley Long’s name first and the other listing Bette Midler’s name first. Not only that, two different covers to take-home products (LaserDisc and VHS) were also made, with different names first. The art was mirrored, so that the names aligned with the actors’ images.
One interesting pattern that’s emerged is to place one actor’s name in the lower left & another in the upper right—thus deliberately conflicting with normal reading order in English:

Anyway, as always with this show, just trust me—the subject is way more interesting than you might think.
A great little Simone Biles flipbook
Here’s your topical antidote to AI overload—a paper flipbook of some of the world’s greatest flipping (credit to The Flippist):
This is so cool @Simone_Biles pic.twitter.com/xnkpYPcbH9
— Emma Bailey #gymnastalliance (@MoominWhisky) August 3, 2024
Throwback: “Behind the scenes with Olympians & Google’s AR ‘Scan Van'”
I’m old enough to remember 2020, when we sincerely (?) thought that everyone would be excited to put 3D-scanned virtual Olympians onto their coffee tables… or something. (Hey, it was fun while it lasted! And it temporarily kept a bunch of graphics nerds from having to slink back to the sweatshop grind of video game development.)
Anyway, here’s a look back to what Google was doing around augmented reality and the 2020 (’21) Olympics:
I swear I spent half of last summer staring at tiny 3D Naomi Osaka volleying shots on my desktop. I remain jealous of my former teammates who got to work with these athletes (and before them, folks like Donald Glover as Childish Gambino), even though doing so meant dealing with a million Covid safety protocols. Here’s a quick look at how they captured folks flexing & flying through space:
You can play with the content just by searching:

[Via Chikezie Ejiasi]
99% Invisible talks Olympic design history
Man do I ever love these guys. Do yourself a solid and listen to this quick, accessible history covering the design of the ’68 games in Mexico City—one inextricably wrapped up in political conflict & civic design. It’s great.

AI stuff I need to see in Photoshop
…and other creative imaging tools, stat!
Google Research has devised “Alchemist,” a new way to swap object textures:

And people keep doing wonderful things with realtime image synthesis:
Happy mixing of decoder embeddings in real-time! Base prompt is ‘photo of a room, sofa, decor’ and the two knobs are ‘industrial’ and ‘rococo’. If you are wondering what is running there in the background… pic.twitter.com/5svyDy5C4e
— Johannes Stelzer (@j_stelzer) July 30, 2024
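I don’t know what’s actually running under the hood there, but the basic trick—blending text-encoder embeddings before they reach the diffusion model—is easy to sketch with Hugging Face’s diffusers library. Everything below (model choice, prompts, knob weights) is illustrative, and a truly realtime version would swap in a few-step model:

```python
# Sketch: mixing text-encoder embeddings with two "style knobs," in the
# spirit of the demo above. Model, prompts, and weights are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(text: str) -> torch.Tensor:
    """Encode a prompt into CLIP text-encoder embeddings, shape (1, 77, 768)."""
    ids = pipe.tokenizer(
        text, padding="max_length", truncation=True,
        max_length=pipe.tokenizer.model_max_length, return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        return pipe.text_encoder(ids)[0]

base = embed("photo of a room, sofa, decor")
industrial, rococo = embed("industrial"), embed("rococo")

# The two "knobs": pull the base embedding toward each style.
k1, k2 = 0.4, 0.2
mixed = base + k1 * (industrial - base) + k2 * (rococo - base)

image = pipe(
    prompt_embeds=mixed,
    negative_prompt_embeds=embed(""),  # explicit unconditional embedding
    num_inference_steps=20,
).images[0]
image.save("mixed_room.png")
```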
“How To Draw An Owl,” AI edition
Always pushing the limits of expressive tech, Martin Nebelong has paired Photoshop painting with AI rendering, followed by Runway’s new image-to-video model. “Days of Miracles & Wonder,” as always:
Painting with AI in photoshop – And doing magic with Runways new Gen 3 image to video. This stuff is insane.. wow.
Our tools and workflows are at the brink of an incredible renaissance.
In this history books, this clip will be referred to as “Owl and cake” 😛
Seriously though,… pic.twitter.com/mIcJQNL3Ti
— Martin Nebelong (@MartinNebelong) July 30, 2024
Meta releases SAM 2 for fast segmentation
Man, I’m old enough to remember rotoscoping video by hand—a process that quickly made me want to jump right out a window. Years later, when we were working on realtime video segmentation at Google, I was so proud to show the tech to a bunch of high school design students—only to have them shrug and treat it as completely normal.
Ah, but so it goes: “One of history’s few iron laws is that luxuries tend to become necessities and to spawn new obligations. Once people get used to a certain luxury, they take it for granted.” — Yuval Noah Harari
In any case, Meta has just released what looks like a great update to their excellent—and open-source—Segment Anything Model. Check it out:
Introducing Meta Segment Anything Model 2 (SAM 2) — the first unified model for real-time, promptable object segmentation in images & videos.
SAM 2 is available today under Apache 2.0 so that anyone can use it to build their own experiences
Details https://t.co/eTTDpxI60h pic.twitter.com/mOFiF1kZfE
— AI at Meta (@AIatMeta) July 29, 2024
You can play with the demo and learn more on the site:
- Following up on the success of the Meta Segment Anything Model (SAM) for images, we’re releasing SAM 2, a unified model for real-time promptable object segmentation in images and videos that achieves state-of-the-art performance.
- In keeping with our approach to open science, we’re sharing the code and model weights with a permissive Apache 2.0 license.
- We’re also sharing the SA-V dataset, which includes approximately 51,000 real-world videos and more than 600,000 masklets (spatio-temporal masks).
- SAM 2 can segment any object in any video or image—even for objects and visual domains it has not seen previously, enabling a diverse range of use cases without custom adaptation.
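And if you’d rather drive it from code than the web demo, the repo’s README shows an image-predictor API along these lines. Treat this as a sketch: the checkpoint path, config name, and click coordinates below are placeholders from the initial release, so check the repo for current specifics:

```python
# Sketch of SAM 2's image-predictor API, adapted from the project README.
# Checkpoint/config names are from the initial release—check the repo.
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor(
    build_sam2("sam2_hiera_l.yaml", "./checkpoints/sam2_hiera_large.pt")
)

image = np.array(Image.open("dog.jpg").convert("RGB"))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # Prompt with a single foreground click at pixel (x=500, y=375).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),  # 1 = foreground point, 0 = background
    )
print(f"{masks.shape[0]} candidate masks, best score {scores.max():.3f}")
```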
Neural rendering: Neo + Firefly
Back when we launched Firefly (alllll the way back in March 2023), we hinted at the potential of combining 3D geometry with diffusion-based rendering, and I tweeted out a very early sneak peek:
Did you see this mind blowing Adobe ControlNet + 3D Composer Adobe is going to launch! It will really boost creatives’ workflow. Video through @jnack
— Kris Kashtanova (@icreatelife) May 14, 2023
A year+ later, I’m no longer working to integrate the Babylon 3D engine into Adobe tools—and instead I’m working directly with the Babylon team at Microsoft (!). Meanwhile I like seeing how my old teammates are continuing to explore integrations between 3D (in this case, Project Neo) & Firefly.
Here’s a quick exploration from the always-interesting Martin Nebelong:
A very quick first test of Adobe Project Neo.. didn’t realize this was out in open beta by now. Very cool!
I had to try to sculpt a burger and take that through Krea. You know, the usual thing!
There’s some very nice UX in NEO and the list-based SDF editing is awesome.. very… pic.twitter.com/e3ldyPfEDw
— Martin Nebelong (@MartinNebelong) April 26, 2024
And here’s a fun little Neo->Firefly->AI video interpolation test from Kris Kashtanova:
Tutorial: Direct your cartoons with Project Neo + Firefly + ToonCrafter
1) Model your characters in Project Neo
2) Generate first and last frame with Firefly + Structure Reference
3) Use ToonCrafter to make a video interpolation between the first and the last frame
Enjoy! pic.twitter.com/YPy32hoVDR
— Kris Kashtanova (@icreatelife) June 3, 2024
AI in Ai: Illustrator adds Vector GenFill
As I’ve probably mentioned already, when I first surveyed Adobe customers a couple of years ago (right after DALL•E & Midjourney first shipped), it was clear that they wanted selective synthesis—adding things to compositions, and especially removing them—much more strongly than whole-image synthesis.
Thus it’s no surprise that Generative Fill in Photoshop has so clearly delivered Firefly’s strongest product-market fit, and I’m excited to see Illustrator following the same path—but for vectors:
Generative Shape Fill will help you improve your workflow including:
- Create detailed, scalable vectors: After you draw or select your shape, silhouette, or outline in your artboard, use a text prompt to ideate on vector options to fill it.
- Style Reference for brand consistency: Create a wide variety of options that match the color, style, and shape of your artwork to ensure a consistent look and feel.
- Add effects to your creations: Enhance your vector options further by adding styles like 3D, geometric, pixel art or more.

They’re also adding the ability to create vector patterns simply via prompting:
Photoshop’s new Selection Brush helps control GenFill
Soon after Generative Fill shipped last year, people discovered that using a semi-opaque selection could help blend results into an environment (e.g. putting fish under water). The new Selection Brush in Photoshop takes functionality that’s been around for 30+ years (via Quick Mask mode) and brings it more to the surface, which in turn makes it easier to control GenFill behavior:
The Selection Brush has arrived in @Photoshop! ✨
“Okay, but why the heck do I need yet ANOTHER selection tool?!”
Most traditional selection methods offer no control over the opacity of your selections.
Typically this wouldn’t matter, but after Generative Fill dropped, we… pic.twitter.com/C7WHuK4u2R
— Howard Pinsky (@Pinsky) July 23, 2024