…when you start seeing correctly spelled lettering in the real world & thinking that it’s a DALL•E spelling fail. :-p

Fun! You can grab the free browser extension here.
* Right-click-remix any image with tons of amazing AI presets: Style Transfer, ControlNets…
* Build & remix your own workflows with full ComfyUI support
* Local + cloud!
Besides some really great default presets using all sorts of amazing ComfyUI workflows (which you can inspect and remix on http://glif.app), the extension will now also pull your own compatible glifs into it!
The tech, a demo of which you can try here, promises “‘imitative editing,’ allowing users to edit images using reference images without the need for detailed text descriptions.”

Here it is in action:
Folks, this one is amazing.
MimicBrush: automatically performs local edits on a selected region of a target image by imitating a reference image.
In other words, you can select a part of an image (say, a person’s clothing, or the background), then pick a reference image you like.
MimicBrush will automatically modify the selected region to match the look of the reference image. … pic.twitter.com/8wVy2hPgW3
— 小互 (@imxiaohu) June 18, 2024
Good grief, the pace of change makes “AI vertigo” such a real thing. Just last week we were seeing “skeleton underwater” memes made with Runway: a skeleton submerged in a rusty chair. :-p I’m especially excited to see how it handles text (which remains a struggle for text-to-image models, including DALL•E):
Being able to render text has also been incredibly fun to play with pic.twitter.com/Y12ZvLm6I8
— Cristóbal Valenzuela (@c_valenzuelab) June 17, 2024
Hey man, it’s Monday. 🙂 Enjoy some silly but well executed VFX:
Wtf! pic.twitter.com/mdT5YgAn97
— LA CUEVA DEL CINÉFILO (@Tibu696) June 15, 2024
I’m really digging the simple joy in this little experiment, powered by Imagen:
1 Prompt. 26 letters. Any kind of alphabet you can imagine. #GenType empowers you to craft, refine, and download one-of-a-kind AI generated type, building from A-Z with just your imagination.
Watch our Creative Lab teammates @trudypainter and @soybean_gx demo this latest… pic.twitter.com/rr6FIoEg2f
— labs.google (@labsdotgoogle) June 12, 2024
Here’s a bit of fun enabled by “weedy seadragons on PVC pipes in a magical undersea kingdom” (click to see at full res):

I’m super eager to try this one out!
It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is our first step towards building a universal imagination engine, and it is available to everyone now!
Unleash your inner monster maker: Monster Camp by @monster_library pic.twitter.com/HyH56WyvAr
— Luma AI (@LumaLabsAI) June 12, 2024
There’s been a firestorm this week about the terms of service that my old home team put forward, based (as such things have been since time immemorial) on a lot of misunderstanding & fear. Fortunately the company has been working to clarify what’s really going on.
Sorry for delay on this. Info here, including what actually changed in the TOS (not much), as well as what Adobe can / cannot do with your content. https://t.co/LZFkDXrmep
— Mike Chambers (@mesh) June 6, 2024
I did at least find this bit of parody amusing:
Huge if true. https://t.co/AFK8nyhrDg
— John Nack (@jnack) June 6, 2024
My former Google teammates have been cranking out some amazing AI personalization tech, with HyperDreamBooth far surpassing the performance of their original DreamBooth (y’know, from 2022—such a simpler ancient time!). Here they offer a short & pretty accessible overview of how it works:
Using only a single input image, HyperDreamBooth is able to personalize a text-to-image diffusion model 25x faster than DreamBooth, by using (1) a HyperNetwork to generate an initial prediction of a subset of network weights that are then (2) refined using fast finetuning for high fidelity to subject detail. Our method both conserves model integrity and style diversity while closely approximating the subject’s essence and details.
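The two-stage recipe in that overview lends itself to a quick sketch. Everything below is a toy stand-in — the sizes, the linear “hypernetwork,” and the quadratic loss are all hypothetical, not the paper’s actual architecture — just to show the shape of predict-then-refine:

```python
import random

random.seed(0)
EMBED, DELTA = 8, 16  # toy sizes: embedding dim, weight-delta dim

# Stage 1: a tiny stand-in hypernetwork: one linear map from an image
# embedding to an initial prediction of a small weight-delta vector.
W = [[random.gauss(0, 0.1) for _ in range(EMBED)] for _ in range(DELTA)]

def predict_delta(embedding):
    """One cheap forward pass produces the initial weight-delta guess."""
    return [sum(w * e for w, e in zip(row, embedding)) for row in W]

def fast_finetune(delta, target, steps=50, lr=0.1):
    """Stage 2: a few gradient steps refine the prediction. The real
    diffusion loss is replaced here by a quadratic toward a target,
    purely to illustrate the refinement loop."""
    for _ in range(steps):
        delta = [d - lr * 2.0 * (d - t) for d, t in zip(delta, target)]
    return delta
```

The point is the division of labor: one cheap forward pass gets you close, and a handful of gradient steps closes the fidelity gap — which is where the claimed 25x speedup over full DreamBooth finetuning comes from.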
“Maybe the real treasure was the friends we made along the way” is, generally, ironic shorthand for “worthless treasure”—but I’ve also found it to be true. That’s particularly the case for the time I spent at Google, where I met excellent folks like Bilawal Sidhu (a fellow PM veteran of the augmented reality group). I’m delighted that he’s now crushing it as the new host of the TED AI Show podcast.
Check out their episodes so far, including an interview with former OpenAI board member Helen Toner, who discusses the circumstances of firing Sam Altman last year before losing her board position.
I haven’t yet seen Hundreds of Beavers, but it looks gloriously weird:
I particularly enjoyed this Movie Mindset podcast episode, which in part plays as a fantastic tribute to the power of After Effects:
We sit down with Mike Cheslik, the director of the new(ish) silent comedy action farce Hundreds of Beavers. We discuss his Wisconsin influences, ultra-DIY approach to filmmaking, making your film exactly as stupid as it needs to be, and the inherent humor of watching a guy in a mascot costume get wrecked on camera.
My new teammates continue to roll out good stuff. (I can’t yet take credit for anything.) Come take it for a spin!
New feature! Introducing Greeting Cards in Designer!
Transform a simple prompt into a beautiful card for any occasion in 4 easy steps. #MicrosoftDesigner pic.twitter.com/JYfpafuKH8
— Microsoft Designer (@MSFT365Designer) May 29, 2024
(And no, I’m not just talking oppressive humidity—though after living in California so long, that was quite an adjustment.) My 14yo MiniMe Henry & I had a ball over the weekend on our first trip to Louisiana, chasing the Empress steam engine as it made its way from Canada down to Mexico City. I’ll try to share a proper photo album soon, but in the meantime here are some great shots from Henry (enhanced with the now-indispensable Generative Fill), plus a bit of fun drone footage:
Who’d a thunk it? But now everyone is getting into the game:
“Combine your ink strokes with text prompts to generate new images in nearly real time with Cocreator,” Microsoft explains. “As you iterate, so does the artwork, helping you more easily refine, edit and evolve your ideas. Powerful diffusion-based algorithms optimize for the highest quality output over minimum steps to make it feel like you are creating alongside AI.”
The Designer team at Microsoft is working to enable AI-powered creation & editing experiences across a wide range of tools, and I’m delighted that my new teammates are rolling out a new set of integrations. Check out how you can now create images right inside Microsoft Teams:
I really enjoyed this TED talk from Fei-Fei Li on spatial computing & the possible dawning of a Cambrian explosion on how we—and our creations—perceive the world.
In the beginning of the universe, all was darkness — until the first organisms developed sight, which ushered in an explosion of life, learning and progress. AI pioneer Fei-Fei Li says a similar moment is about to happen for computers and robots. She shows how machines are gaining “spatial intelligence” — the ability to process visual data, make predictions and act upon those predictions — and shares how this could enable AI to interact with humans in the real world.
When I surveyed thousands of Photoshop customers waaaaaay back in the Before Times—y’know, summer 2022—I was struck by the fact that beyond wanting to insert things into images, and far beyond wanting to create images from scratch, just about everyone wanted better ways to remove things.
Happily, that capability has now come to Lightroom. It’s a deceptively simple change that, I believe, required a lot of work to evolve Lr’s non-destructive editing pipeline. Traditionally all edits were expressed as simple parameters, and then masks got added—but as far as I know, this is the first time Lr has ventured into transforming pixels in an additive way (that is, modify one bunch, then make subsequent edits that depend on the previous edits). That’s a big deal, and a big step forward for the team.
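To make the parametric-vs.-additive distinction concrete, here’s a minimal sketch of a non-destructive edit stack — my own toy model, not Lightroom’s actual architecture. Parametric edits are pure functions replayed from the source; a pixel edit like Generative Remove synthesizes new pixels that every subsequent edit then depends on:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# An "image" here is just a list of pixel values, for illustration.

@dataclass
class EditStack:
    source: List[float]
    edits: List[Callable[[List[float]], List[float]]] = field(default_factory=list)

    def add(self, edit):
        self.edits.append(edit)

    def render(self):
        img = list(self.source)   # the original is never mutated
        for edit in self.edits:   # replay in order: each edit sees the
            img = edit(img)       # output of everything before it
        return img

# A parametric edit: exposure as a simple gain.
def exposure(gain):
    return lambda img: [p * gain for p in img]

# An additive pixel edit: "remove" a region by synthesizing new pixels,
# which later edits in the stack then operate on.
def remove_region(start, end, fill):
    return lambda img: img[:start] + [fill] * (end - start) + img[end:]
```

So `EditStack([1, 2, 3, 4])` with `remove_region(1, 3, 9)` then `exposure(2)` renders `[2, 18, 18, 8]`: the exposure change applies to the synthesized pixels, while the source stays untouched — the essence of keeping edits non-destructive even after pixels have been rewritten.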
A few more examples courtesy of Howard Pinsky:
Removing distracting objects just got that much more powerful in @Lightroom. Generative Remove has arrived! pic.twitter.com/CrZ6A3AKOF
— Howard Pinsky (@Pinsky) May 21, 2024
Adobe’s CEO (duh :-)) sat down with Nilay Patel for an in-depth interview. Here are some of the key points, as summarized by ChatGPT:
———-
I still can’t believe I was allowed in the building with these giant throbbing brains. 🙂
Create a 3D model from a single image, set of images or a text prompt in < 1 minute
This new AI paper called CAT3D shows us that it’ll keep getting easier to produce 3D models from 2D images — whether it’s a sparser real world 3D scan (a few photos instead of hundreds) or… pic.twitter.com/sOsOBsjC8Q
— Bilawal Sidhu (@bilawalsidhu) May 17, 2024
This kind of evolution should make a lot of people rethink what it means to be an image editor going forward—or even an image.
Amazing work by @RuiqiGao @holynski_ @philipphenzler @rmbrualla @_pratul_ @jon_barron @poolio
The Google crew strike again! Looks better than ReconFusion too. Hope there’s a code release. pic.twitter.com/RArpAZfJJB
— Bilawal Sidhu (@bilawalsidhu) May 17, 2024
I’ve gotta say, this one touches a kinda painful nerve with me.
10 years ago I walked into the Google Photos team expecting normal humans to do things like say, “Show me the best pictures of my grandkids.” I immediately felt like a fool: something like 97% of daily users don’t search, preferring to simply launch the app and scroll scroll scroll forever.
A decade later, the Photos team is talking about using large language models to enable uses like the following:
With Ask Photos, you can ask for what you’re looking for in a natural way, like: “Show me the best photo from each national park I’ve visited.” Google Photos can show you what you need, saving you from all that scrolling.
For example, you can ask: “What themes have we had for Lena’s birthday parties?”. Ask Photos will understand details, like what decorations are in the background or on the birthday cake, to give you the answer.
Will anyone actually do this? It’s really hard for me to imagine, at least as it’s been framed above.
Now, what I can imagine working—in pretty great ways—is a real Assistant experience that suggests a bunch of useful tasks with which it can assist, such as gathering up photos to make birthday or holiday cards. (The latter task falls to me every year, and I wish I could do it better.) Assistant could easily ask whose birthday it is & on what date, then scan one’s library and suggest a nice range of images as well as presentation options (cards, short animations, etc.). That kind of agent could be a joy to interact with.
Never doubt the power of a motivated person or two to do what needs to be done. Stick around to the last section of this short vid to see Stable Diffusion-powered “Find & Replace” (maskless inpainting powered by prompts) in action:
The newest version of the https://t.co/zMFye0YPsP #Photoshop plugin adds support for v2 of the @StabilityAI APIs. You now get all @replicate models, @OpenAI‘s DALL•E 3, and all of Stability’s models (including SD3). And it’s still free! https://t.co/exnJVygz4m pic.twitter.com/t03GQR0Do7
— Christian Cantrell (@cantrell) May 16, 2024
Martin Evening combines Adobe Substance 3D modeler and Krea to go from 3D sketch to burning rubber:
I don’t know about you, but I think this is pretty incredible..
Adobe Substance 3D modeler and Krea mainly here.
Quick, gestural 3d sculpting plus ai =(I think I’ve said that a few times haven’t I?) #ai #art pic.twitter.com/w0ZhBvQ0Pr
— Martin Nebelong (@MartinNebelong) May 13, 2024
Jon Finger combines a whole slew of tools for sketch->AR:
More playing with a Procreate to Simulon pipeline. It’s still a rough pipeline but fun to play with.
Sketched in @Procreate
Style transfer in @Magnific_AI
3d conversion in @fondantai
Motion capture in @MoveAI_
Rigged in @MaxonVFX c4d
Final capture on @Simulon pic.twitter.com/GiI1wJ2IWR
— Jon Finger (@mrjonfinger) May 13, 2024
I came across this post (originally from 2017) just now while looking for other work from Paul Asente. Here’s hoping it can finally see the light of day in Illustrator! —J.
———–
Paul Asente is an OG of the graphics world, having been responsible for (if I recall correctly) everything from Illustrator’s vector meshes & art brushes to variable-width strokes. Now he’s back with new Adobe illustration tech to drop some millefleurs science:
PhysicsPak automatically fills a shape with copies of elements, growing, stretching, and distorting them to fill the space. It uses a physics simulation to do this and to control the amount of distortion.

[YouTube]
Unlike Runway, Pika, Sora, and other generative video models, this approach from Krea (well-known for their realtime, multimodal AI composition tools) is simply keyframing states of image generation—which is a pretty powerful approach unto itself.
Krea Video is here!
this is how it works
–
(sound on) pic.twitter.com/eld5RAoHdO
— KREA AI (@krea_ai) May 9, 2024
Here’s a lovely use of it in action:
Cities, made on @krea_ai‘s new video product pic.twitter.com/gLFFnK5AkX
— Justine Moore (@venturetwins) May 9, 2024
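Under the hood, “keyframing states of image generation” can be as simple as interpolating between latent states. Here’s a hypothetical sketch (the decoder is omitted; in a real system each interpolated latent would be decoded into a frame):

```python
# Linearly interpolate between two keyframed latent states, one result
# per output frame. Latents are plain lists of floats here.

def lerp(a, b, t):
    """Elementwise linear interpolation between latents a and b."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def frames_between(latent_a, latent_b, n_frames):
    """n_frames >= 2; the first and last frames land exactly on the
    two keyframes, with evenly spaced states in between."""
    return [lerp(latent_a, latent_b, i / (n_frames - 1))
            for i in range(n_frames)]
```

Real systems interpolate in a learned latent space (so in-between frames stay plausible rather than ghosting), but the keyframe-and-tween structure is the same.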
Man, who knew that posting the tweet below would get me absolutely dragged by AI haters (“Worst. Dad. Ever.”) who briefly turned me into the Bean Dad of AI art? I should say more about that eye-opening experience, but for now, enjoy (unlike apparently thousands of others!) this innocuous mixing of AI & kid art:
Having fun reinterpreting my son’s old drawings via #AdobeFirefly Structure Reference: pic.twitter.com/ALLBqdyPEc
— John Nack (@jnack) April 16, 2024
Elsewhere, here’s a cool thread showing how even simple sketches can be interpreted in the style of 3D renderings via Magnific:
THIS IS NOT 3D
Did you know you can use AI as kind of pseudo 3d renderer?
In the future, every pixel in a video game will not be RENDERED but GENERATED in real time. But people are already creating insane “AI renders” today.
Here are 18 mind blowing examples + a tutorial: pic.twitter.com/MujuYpJcO3
— Javi Lopez (@javilopen) April 16, 2024
Check out Min Choi’s crossbreeding of Star Wars characters with iconic paintings (click tweet below to see the thread):
3. “The Scream” by Edvard Munch pic.twitter.com/a3FchLr4B4
— Min Choi (@minchoi) May 4, 2024
Here’s a look at his process (also a thread):
Yesterday, I showed wild AI results of famous art in Star Wars style using Midjourney v6.
Here are the comparison to original art.
How did AI do?
1. “The Last Supper” by Leonardo da Vinci pic.twitter.com/7Y4r08TQPJ
— Min Choi (@minchoi) May 5, 2024
I dig this charming little narrative from Apple. Happy Tuesday.
KFC is making a characteristic AI bug into a feature:
KFC celebrates the launch of their most finger-lickin’ product yet, with even more extrAI fingers.
With help from Meta’s new AI experience, KFC is encouraging people to use the new feature and generate images with more than five fingers. This AI idea builds on KFC’s new Saucy Nuggets campaign promoting their new saucy nuggets. To reward their participation, users will unlock a saucy nuggets coupon on the restaurant’s app.
Clever, though I’m reminded of Wint’s remark that “you do not, under any circumstances, ‘gotta hand it to them.'”

I told filmmaker Paul Trillo that I’ve apparently blogged his work here more than a dozen times over the past 10 years—long before AI generation became a thing. That’s because he’s always been eager to explore the boundaries of what’s possible with any given set of tools. In “Notes To My Future Self,” he combines new & traditional methods to make a haunting, melancholy meditation:
And here he provides an illuminating 1-minute peek into the processes that helped him create all this in just over a week’s time:
Can AI create better VFX? Lots of VFX don’t look great because they don’t know what they’re lighting to on set. Using a variety of AI tools, we can now move fluidly between pre and post. BGs made with stable diffusion, Photoshop Gen Fill, Magnific, Krea and Topaz and Runway Gen-2 pic.twitter.com/DLyA60XaUB
— Paul Trillo (@paultrillo) April 26, 2024
I get that it’s all in good fun, but hoo boy, the “Ex-Terminator” feature from PhotoRoom makes me melancholy. Meet me in Montauk…
We are excited to launch our latest AI partnership campaign with @okcupid ! This one was so much fun to build.
More than half of singles want to erase their exes from their photos, so @photoroom_app and @okcupid teamed up to help singles ditch the ex and keep the selfies… pic.twitter.com/n0WRN7zICH
— Matthieu Rouif (@matthieurouif) April 29, 2024
This app looks like a delightful little creation tool that’s just meant for doodling, but I’d love to see this kind of physical creation paired with the world of generative AI rendering. I’m reminded of how “Little Big Planet” years ago made me yearn for Photoshop tools that felt like Sackboy’s particle-emitting jetpack. Someday, maybe…?
A kind of 3D brush
Tiny Glade is going to be just a relaxing castle doodling game. No more, no less. More than enough!
The game seems amazing. But oh my god… Think about what could be done by further abstracting the idea of that “3D brush.” pic.twitter.com/kguZCq5jrb
— Javi Lopez (@javilopen) April 21, 2024
Adobe friends like Eli Shechtman have been publishing research for several years, and Creative Bloq reports that the functionality is due to make its way to the flagship imaging apps in the near future. Check out their post for details.
Automatic selection:

Cleaned-up results:

Object removal in Lightroom:

Check out this nice little tutorial from Howard Pinsky:
You love to see it—available now via the beta (which you can download via that little “CC” icon you generally ignore in your menubar :-)):
Just released! Don’t just edit images in #Photoshop. Now Ps can make them with #adobefirefly integrated! #adobexcommunity pic.twitter.com/VL33b58QY0
— Paul Trani (@paultrani) April 23, 2024
Also, props to Paul on his HELVETICA shirt, which reminds me of my old METADATA beauty.
You can try this now at Meta.ai. I’m very curious to see how much people favor speed vs. output quality.
Meta’s new AI does crazy realtime image generation…feels like improv or a tool for karaoke pic.twitter.com/uzzJ3g4lxD
— Scott Stein (@jetscott) April 19, 2024
I keep meaning to try out this new capability, but there are so many tools, so few hours! In any case, it promises to be an exciting breakthrough. If you take it for a spin, I’d love to hear what you think of the results.
We’re thrilled to unveil #Transparency— another new https://t.co/LlErGl3jwe feature that enables true native transparent PNG generation!
Transparency is more than background removal—this is native image diffusion with clean edges.
Read on! pic.twitter.com/JCEGas8H8z
— Leonardo.Ai (@LeonardoAi_) March 19, 2024
Highlighted use cases:
Hard to keep a good font down; even harder with a bad one! :-p Enjoy:
Sure, all this stuff—including what’s now my career’s work—will likely make it semi-impossible to reason together about any shared conception of reality, thereby calling into question the viability of democracy… but on the upside, moar dank memes!
Here’s how to create a dancing character using just an image + an existing video clip:
Viggle is the new hottest AI Creative Tool That is forever changing Memes and the future of AI Video.@aiwarper created a meme with the joker and Lil Yachty that caused a hilarious explosion.
In this video I’ll show you:
1. What Viggle is and how it works
2. Why its more… pic.twitter.com/dl2XSyQ0oT
— Riley Brown (@rileybrown_ai) April 16, 2024
Removing objects will be huge, and Generative Extend—which can add a couple of seconds to clips to ease transitions—seems handy. Check out what’s in the works:
Many, many years ago, I delighted in experimenting with vector copies of famous logos I could download from the, um, copyright-agnostic Logotypes.ru. That site seems to be gone now, but this quick vid highlights some others you might find useful:
Check out the latest work (downloadable for free here) from longtime Adobe veteran (and former VP of product at Stability AI) Christian Cantrell:
The new version of the Concept Art #photoshop plugin is here! Create your own AI-powered workflows by combining hundreds of different imaging models from @replicate — as well as DALL•E 2 and 3 — without leaving @Photoshop. This is a complete rewrite with tons of new features coming (including local inference).

Not content to let Adobe & ChatGPT have all the fun, Google is now making its Imagen available to developers for image synthesis, including inserting items & expanding images:
We’re also adding advanced photo editing features, including inpainting and outpainting.
These features make it easy to remove unwanted elements, include new ones, and expand the borders of images to create a wider field of view. → https://t.co/Cbz4Pajkch #GoogleCloudNext pic.twitter.com/SRsEQjHWD5
— Google DeepMind (@GoogleDeepMind) April 9, 2024
Imagen, Google’s text-to-image model, can now create live images from text, in preview. Just imagine generating animated images such as GIFs from a simple text prompt… Imagen also gets advanced photo editing features, including inpainting and outpainting, and a digital watermarking feature powered by Google DeepMind’s SynthID.
I’m eager to learn more about the last bit re: content provenance. Adobe has talked a bunch about image watermarking, but has not (as far as I know) shipped any support.
Meanwhile Google is also challenging Runway, Pika, & others in the creation of short video clips:
Our generative technology Imagen 2 can now create short, 4-second live images from a single prompt.
It’s available to use in @GoogleCloud’s #VertexAI platform. → https://t.co/CLMN3wNmeP #GoogleCloudNext pic.twitter.com/B4RocdDXrk
— Google DeepMind (@GoogleDeepMind) April 9, 2024
Given that my wife is the one responsible enough to chase the eclipse today & not roast her eyeballs, I’m left at home digging up a classic Dana Carvey bit about the eclipse (30 seconds, starts at 2:04). Enjoy! :-p
For 10 years or so I’ve been posting admiringly about the work of Paul Trillo (16 times so far; 17 now, good Lord), so I was excited to hear his conversation with the NYT Hard Fork crew—especially as he’s recently been pushing the limits with OpenAI’s Sora model. I think you’ll really enjoy this thoughtful, candid, and in-depth discussion about the possibilities & pitfalls of our new AI-infused creative world:
Some companies spend three months just on wringing their hands about whether to let you load a style reference image; others spend three people and go way beyond that, in realtime ¯\_(ツ)_/¯ :
These guys are doing such a good job creating intuitive visual interfaces for prompting
This is the new real-time image blending interface from @krea_ai
Such a smart design pic.twitter.com/qgzC86DNm7
— Nick St. Pierre (@nickfloats) April 4, 2024
When DALL•E first dropped, it wasn’t full-image creation that captured my attention so much as inpainting, i.e. creating/removing objects in designated regions. Over the years (all two of ’em ;-)) I’ve lost track of whether DALL•E’s Web interface has remained available (’cause who’s needed it after Generative Fill?), but I’m very happy to see this sort of selective synthesis emerge in the ChatGPT-DALL•E environment:
You can now edit DALL·E images in ChatGPT across web, iOS, and Android. pic.twitter.com/AJvHh5ftKB
— OpenAI (@OpenAI) April 3, 2024
It’s also nice to see more visual suggestions appearing there:
You can also get inspiration on styles when creating images in the DALL·E GPT. pic.twitter.com/mRrkwJKHyq
— OpenAI (@OpenAI) April 3, 2024
Or… something like that. Whatever the case, I had fun popping our little Lego family photo (captured this weekend at Yosemite Valley’s iconic Tunnel View viewpoint) into Photoshop, selecting part of the excessively large rock wall, and letting Generative Fill give me some more nature. Click or tap (if needed) to see the before/after animation:
Generative Fill, remaining awesome for family photos. From Yosemite yesterday: pic.twitter.com/GtRP0UCaV6
— John Nack (@jnack) April 1, 2024
Hey, I know what you know (or quite possibly less :-)), but this demo (which for some reason includes Shaq) looks pretty cool:
From the description:
Elevate your data storytelling with #ProjectInfographIt, a game-changing solution leveraging Adobe Firefly generative AI. Simplify the infographic creation process by instantly generating design elements tailored to your key messages and data. With intuitive features for color palettes, chart types, graphics, and animations, effortlessly transform complex insights into visually stunning infographics.
Man, I can’t tell you how long I wanted folks to get this tech into their hands, and I’m excited that you can finally take it for a spin. Here are some great examples (from a thread by Min Choi, which contains more) showing how people are putting it into action:
Reinterpreted kids’ drawings:
Adobe Firefly structure reference:
I created these images using my kid’s art as reference + text prompts like these:
– red aeroplane toy made with felt, appliqué stitch, clouds, blue background
– broken ship, flowing paint from a palette of yellow and green colors
Kept the… https://t.co/TMofxYx8E8 pic.twitter.com/nZpG3MnnZg
— Anu Aakash (@anukaakash) March 30, 2024
More demanding sketch-to-image:
Honestly, #AdobeFirefly ‘s new structure reference feature is super useful for going from a sketch to a realistic rendering. pic.twitter.com/v0HCCsTmZY
— Pierrick Chevallier | IA (@CharaspowerAI) March 29, 2024
Stylized Bitmoji:
Teachers!
You can also customize your @Bitmoji with @Adobe Firefly! #ai #aiforeducation #AdobeFirefly pic.twitter.com/WGV6oNvrwS
— Andrew Davies, M.Ed. (@EduTechWizard) March 30, 2024