My son recently noticed the sly, clever syncing of graffiti with the lyrics playing in the intro to Baby Driver. Check it out:
And although it’s tangential, this gave me an excuse to show him the great animated text in Stranger than Fiction:
AI-powered relighting & restyling is a hell of a drug!
Elias Artista (Senior Environment Artist at Bethesda Game Studios) will guide you step by step in this video: pic.twitter.com/vDCDooimXc
— Javi Lopez (@javilopen) July 19, 2024

Oh man, I wish I could say that my high school art career didn’t involve a whole bunch of these things, but OMG it sure did. :-p
I’m delighted to see that Magnific is now available as a free Photoshop panel!
WE HAVE LISTENED TO YOU.
Magnific plugin for Photoshop
We are launching one of the most requested features by professionals: the ability to use Magnific from within Photoshop!
LET’S GO! Step by step tutorial pic.twitter.com/Drhk99NcAt
— Javi Lopez (@javilopen) July 8, 2024
For now the functionality is limited to upscaling, but I have to think that they’ll soon turn on the super cool relighting & restyling tech that enables fun like transforming my dog using just different prompts (click to see larger):

I wish Adobe hadn’t given up (at least for the last couple of years and foreseeable future) on the Smart Portrait tech we were developing. It’s been stuck at 1.0 since 2020 and could be so much better. Maybe someday!
In the meantime, check out LivePortrait:
Some impressive early results coming out of LivePortrait, a new model for face animation.
Upload a photo + a reference video and combine them!
(these clips are from u/Choidonhyeon) pic.twitter.com/ZXJdI0sRqt
— Justine Moore (@venturetwins) July 6, 2024
And now you can try it out for yourself:
Realtime Live Portrait is live on @fal!
Play with demo here: https://t.co/0N14KGtaAw pic.twitter.com/eZl8WsWVMY
— Jonathan Fischoff (@jfischoff) July 16, 2024
And how & why did he create the font to begin with? Here’s a rather charming little look behind the scenes:
Being able to declare what you want, instead of having to painstakingly set up parameters for materials, lighting, etc., may prove to be an incredible unlock for visual expressivity, particularly around the generally intimidating realm of 3D. Check out what tyFlow is bringing to the table:

You can see a bit more about how it works in this vid…
…or a lot more in this one:
Years ago Adobe experimented with a real-time prototype of Photoshop’s Landscape Mixer Neural Filter, and the resulting responsiveness made one feel like a deity—fluidly changing summer to winter & back again. I was reminded of using Google Earth VR, where grabbing & dragging the sun across the sky delivers a similar feeling of power.
Nothing came of it, but in the time since then, realtime diffusion rendering (see amazing examples from Krea & others) and image-to-image restyling have opened some amazing new doors. I wish I could attach filters to any layer in Photoshop (text, 3D, shape, image) and have it reinterpreted like this:
New way to navigate latent space. It preserves the underlying image structure and feels a bit like a powerful style-transfer that can be applied to anything. The trick is to… pic.twitter.com/orFBysBpkT
— Johannes Stelzer (@j_stelzer) July 15, 2024
Pretty cool! I’d love to see Illustrator support model import & rendering of this sort, such that models could be re-posed in one’s .Ai doc, but this still looks like a solid approach:
3D meets 2D!
With the Expressive or Pixel Art styles in Project Neo, you can export your designs as SVGs to edit in Illustrator or use on your websites. pic.twitter.com/vOsjb2S2Un
— Howard Pinsky (@Pinsky) July 11, 2024
Heh: András István Arató—aka Hide The Pain Harold, the wincing king of stock photography—seems like a genuinely good dude. Here he narrates his story in brief:
New tech from my old Google teammates makes some exciting claims:
Using Magic Insert we are, for the first time, able to drag-and-drop a subject from an image with an arbitrary style onto another target image with a vastly different style and achieve a style-aware and realistic insertion of the subject into the target image.
Here is a demo that you can access on the desktop version of the website. We’re excited by the options Magic Insert opens up for artistic creation, content creation and for the overall expansion of GenAI controllability. pic.twitter.com/HhbfrEfXZH
— Nataniel Ruiz (@natanielruizg) July 3, 2024
Of course, much of the challenge here—where art meets science—is around identity preservation: to what extent can & should the output resemble the input? Here it’s subject to some interpretation. In other applications one wants an exact copy of a given person or thing, but optionally transformed in just certain ways (e.g. pose & lighting).
When we launched Firefly last year, we showed off some of Adobe’s then-new ObjectStitch tech for making realistic composites. It didn’t ship while I was there due to challenges around identity preservation. As far as I know those challenges remain only partially solved, so I’ll continue holding out hope—as I have for probably 30 years now!—for future tech breakthroughs that get us all the way across that line.


Check out this striking application of AI-powered relighting: a single rendering is deeply & realistically transformed via one AI tool, and the results are then animated & extended by another.
Style Transfer + Relight + Upscale + Luma (key frames) = pic.twitter.com/i7FujiZ5P1
— Javi Lopez (@javilopen) June 29, 2024
Meanwhile Krea has just jumped into the game with similar-looking relighting tech. I’m off to check it out!
announcing Scene Transfer.
create new scenes in seconds with perfect light and color consistency.
free for everyone. pic.twitter.com/JxYff4NZrP
— KREA AI (@krea_ai) July 5, 2024
I love the rich detail that Steve Cutts packs into every frame of this bleak rendering of our screen-addicted world:
And if that’s not quite enough of a mood for ya, try “Happiness”:
Honestly I’ll be kinda sad when this kind of madness gets “fixed”:
Gymnastics is the Turing test of video generation models pic.twitter.com/cOhmUJjI2m
— Deedy (@deedydas) July 2, 2024
May we live in interesting times…
Dall-E on biblically accurate gymnastics — pic.twitter.com/mKKxdS0HGv
— Deedy (@deedydas) July 2, 2024
Wandering alone around the campus of my alma mater this past weekend had me in a deeply wistful, reflective mood. I reached out across time & space to some long-separated friends, and I thought you might enjoy this beautiful tune that’s been in my head the whole while.
Man, what I wouldn’t have given years ago, when we were putting 3D support into Photoshop, for the ability to compute meshes from objects (e.g. a photo of a soda can or a shirt) in order to facilitate object placement like this.
Days of Miracles & Wonder, as always…
Infinite seamless mega meme mashup
Keyframes were used to seamlessly transition between 20 memes w/ audio @LumaLabsAI Audio on pic.twitter.com/9jzbMDUDp2
— Blaine Brown (@blizaine) June 29, 2024
Here’s a micro tutorial on how to create similar effects:
Here’s how to morph memes using Dream Machine’s new Keyframe feature. Simply upload two of your favorite memes, write a prompt that describes how you’d like to transition between them, and we’ll dream up the rest. https://t.co/G3HUEBEAcO #LumaDreamMachine pic.twitter.com/yNaRhERutn
— Luma AI (@LumaLabsAI) June 29, 2024
Heh—we’re way beyond Not Hotdog now. Alexander Reben writes,
“Silly AI Label Maker” [is] a mode of the “Conceptual Camera” developed as part of my artist residency at @openai
Tired: Using generative video to animate memes.
Wired: Using it to insert phantom “friends” into your wedding memories!
I threw our 15-year old wedding photo into Luma’s Dream Machine just to see what happens. I’m thoroughly amused. pic.twitter.com/TIEVK6gCuf
— Howard Pinsky (@Pinsky) June 15, 2024
Well, it doesn’t create animated results, but it can work perhaps surprisingly well on regions in static shots:
Generative Fill isn’t available for moving videos just yet, but Photoshop can handle stationary clips quite well pic.twitter.com/e8GGGdomrC
— Howard Pinsky (@Pinsky) June 19, 2024
It can also be used to expand the canvas of similar shots:
“But can videos be EXPANDED in @Photoshop?!”
Sound up! pic.twitter.com/hyeoJs9Bse
— Howard Pinsky (@Pinsky) June 20, 2024
Much amaze, wowo wowo:
This Lego machine can easily create a beautiful pixelart of anything you want! It is programmed in Python, and, with help of OpenAI’s DALL-E 3, it can make anything!
DesignBoom writes,
Sten of the YouTube channel Creative Mindstorms demonstrates his very own robot printer named Pixelbot 3000, made of LEGO bricks, that can produce pixel art with the help of OpenAI’s DALL-E 3 and AI images. Using a 32 x 32 plate and numerous round LEGO bricks, the robot printer automatically pins the pieces onto their designated positions until it forms the pixel art version of the image. He uses Python as his main programming language, and to create pixel art of anything, he employs AI, specifically OpenAI’s DALL-E 3.
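The software half of that pipeline is simple enough to sketch in Python. Here’s a minimal, hypothetical version (my reconstruction, not Sten’s actual code): generate an image with DALL-E 3 via the OpenAI API, then reduce it to a 32 x 32 grid of palette colors for the brick-placing hardware to print.

```python
# Hypothetical sketch of the Pixelbot-style pipeline (not Sten's code):
# 1) generate an image with DALL-E 3; 2) downsample it to a 32x32 grid of
# palette colors, one entry per LEGO stud position.
import io
import urllib.request

from openai import OpenAI
from PIL import Image

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def pixel_art_grid(prompt: str, grid: int = 32) -> list[list[tuple]]:
    """Generate an image and reduce it to a grid x grid array of colors."""
    result = client.images.generate(
        model="dall-e-3",
        prompt=f"{prompt}, simple flat cartoon, bold colors, centered subject",
        size="1024x1024",
        n=1,
    )
    data = urllib.request.urlopen(result.data[0].url).read()
    img = Image.open(io.BytesIO(data)).convert("RGB")

    small = img.resize((grid, grid), Image.LANCZOS)   # one pixel per stud
    small = small.quantize(colors=16).convert("RGB")  # approximate brick palette
    return [[small.getpixel((x, y)) for x in range(grid)] for y in range(grid)]

grid = pixel_art_grid("a friendly robot")
# Each (R, G, B) entry would then be mapped to the nearest available brick color.
```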
Fun! You can grab the free browser extension here.
* right-click-remix any image w/ tons of amazing AI presets: Style Transfer, Controlnets…
* build & remix your own workflows with full comfyUI support
* local + cloud!
besides some really great default presets using all sorts of amazing ComfyUI workflows (which you can inspect and remix on http://glif.app), the extension will now also pull your own compatible glifs into it!
The tech, a demo of which you can try here, promises “‘imitative editing,’ allowing users to edit images using reference images without the need for detailed text descriptions.”

Here it is in action:
Guys, this one is seriously impressive.
MimicBrush: automatically performs local edits on selected regions of a target image by imitating a reference image.
In other words, you can select some part of a picture (say, a person’s clothing, or the background), then choose a reference image you like.
MimicBrush will automatically modify the selected region to match the look of the reference image. … pic.twitter.com/8wVy2hPgW3
— 小互 (@imxiaohu) June 18, 2024
Good grief, the pace of change makes “AI vertigo” such a real thing. Just last week we were seeing “skeleton underwater” memes from Runway—a skeleton submerged in a rusty chair. :-p I’m especially excited to see how it handles text (which remains a struggle for text-to-image models, including DALL•E):
Being able to render text has also been incredibly fun to play with pic.twitter.com/Y12ZvLm6I8
— Cristóbal Valenzuela (@c_valenzuelab) June 17, 2024
Hey man, it’s Monday. 🙂 Enjoy some silly but well executed VFX:
Wtf! pic.twitter.com/mdT5YgAn97
— LA CUEVA DEL CINÉFILO (@Tibu696) June 15, 2024
I’m really digging the simple joy in this little experiment, powered by Imagen:
1 Prompt. 26 letters. Any kind of alphabet you can imagine. #GenType empowers you to craft, refine, and download one-of-a-kind AI generated type, building from A-Z with just your imagination.
Watch our Creative Lab teammates @trudypainter and @soybean_gx demo this latest… pic.twitter.com/rr6FIoEg2f
— labs.google (@labsdotgoogle) June 12, 2024
Here’s a bit of fun enabled by “weedy seadragons on PVC pipes in a magical undersea kingdom” (click to see at full res):

I’m super eager to try this one out!
It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent and eventful shots. Dream Machine is our first step towards building a universal imagination engine and it is available to everyone now!
Unleash your inner monster maker: Monster Camp by @monster_library pic.twitter.com/HyH56WyvAr
— Luma AI (@LumaLabsAI) June 12, 2024
There’s been a firestorm this week about the terms of service that my old home team put forward, based (as such things have been since time immemorial) on a lot of misunderstanding & fear. Fortunately the company has been working to clarify what’s really going on.
Sorry for delay on this. Info here, including what actually changed in the TOS (not much), as well as what Adobe can / cannot do with your content. https://t.co/LZFkDXrmep
— Mike Chambers (@mesh) June 6, 2024
I did at least find this bit of parody amusing:
Huge if true. https://t.co/AFK8nyhrDg
— John Nack (@jnack) June 6, 2024
My former Google teammates have been cranking out some amazing AI personalization tech, with HyperDreamBooth far surpassing the performance of their original DreamBooth (y’know, from 2022—such a simpler ancient time!). Here they offer a short & pretty accessible overview of how it works:
Using only a single input image, HyperDreamBooth is able to personalize a text-to-image diffusion model 25x faster than DreamBooth, by using (1) a HyperNetwork to generate an initial prediction of a subset of network weights that are then (2) refined using fast finetuning for high fidelity to subject detail. Our method both conserves model integrity and style diversity while closely approximating the subject’s essence and details.
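The two-stage structure is compact enough to sketch. Here’s a toy PyTorch rendition of the shape of the idea—my paraphrase, not the team’s code—where a hypernetwork predicts low-rank weight deltas from a subject embedding, and a brief finetuning pass then refines them (the loss below is a placeholder for the real diffusion objective):

```python
# Toy sketch of the HyperDreamBooth idea (illustrative only, not the paper's code).
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    """Maps a subject-image embedding to low-rank deltas (A, B) for one layer."""
    def __init__(self, embed_dim: int, out_f: int, in_f: int, rank: int = 4):
        super().__init__()
        self.to_a = nn.Linear(embed_dim, out_f * rank)
        self.to_b = nn.Linear(embed_dim, rank * in_f)
        self.shape_a, self.shape_b = (out_f, rank), (rank, in_f)

    def forward(self, embedding: torch.Tensor):
        return (self.to_a(embedding).view(self.shape_a),
                self.to_b(embedding).view(self.shape_b))

# Stage 1: a single forward pass yields an initial personalization.
embed = torch.randn(512)  # stand-in for a face/subject embedding
a, b = HyperNetwork(embed_dim=512, out_f=320, in_f=320)(embed)

# Stage 2: "fast finetuning" -- refine only the tiny delta, not the base model.
a, b = a.detach().requires_grad_(), b.detach().requires_grad_()
opt = torch.optim.Adam([a, b], lr=1e-3)
for _ in range(50):               # far fewer steps than full DreamBooth
    delta = a @ b                 # low-rank update for the layer's weights
    loss = delta.square().mean()  # placeholder for the diffusion loss
    opt.zero_grad(); loss.backward(); opt.step()
```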
“Maybe the real treasure was the friends we made along the way” is, generally, ironic shorthand for “worthless treasure”—but I’ve also found it to be true. That’s particularly the case for the time I spent at Google, where I met excellent folks like Bilawal Sidhu (a fellow PM veteran of the augmented reality group). I’m delighted that he’s now crushing it as the new host of the TED AI Show podcast.
Check out their episodes so far, including an interview with former OpenAI board member Helen Toner, who discusses the circumstances of firing Sam Altman last year before losing her board position.
I haven’t yet seen Hundreds of Beavers, but it looks gloriously weird:
I particularly enjoyed this Movie Mindset podcast episode, which in part plays as a fantastic tribute to the power of After Effects:
We sit down with Mike Cheslik, the director of the new(ish) silent comedy action farce Hundreds of Beavers. We discuss his Wisconsin influences, ultra-DIY approach to filmmaking, making your film exactly as stupid as it needs to be, and the inherent humor of watching a guy in a mascot costume get wrecked on camera.
My new teammates continue to roll out good stuff. (I can’t yet take credit for anything.) Come take it for a spin!
New feature! Introducing Greeting Cards in Designer!
Transform a simple prompt into a beautiful card for any occasion in 4 easy steps. #MicrosoftDesigner pic.twitter.com/JYfpafuKH8
— Microsoft Designer (@MSFT365Designer) May 29, 2024
(And no, I’m not just talking oppressive humidity—though after living in California so long, that was quite a handful.) My 14yo MiniMe Henry & I had a ball over the weekend on our first trip to Louisiana, chasing the Empress steam engine as it made its way from Canada down to Mexico City. I’ll try to share a proper photo album soon, but in the meantime here are some great shots from Henry (enhanced with the now-indispensable Generative Fill), plus a bit of fun drone footage:
Who’d a thunk it? But now everyone is getting into the game:
“Combine your ink strokes with text prompts to generate new images in nearly real time with Cocreator,” Microsoft explains. “As you iterate, so does the artwork, helping you more easily refine, edit and evolve your ideas. Powerful diffusion-based algorithms optimize for the highest quality output over minimum steps to make it feel like you are creating alongside AI.”
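I don’t know what Cocreator runs internally, but the general technique—few-step image-to-image diffusion over a live sketch—is easy to approximate with open models. A minimal sketch using Hugging Face diffusers, with SDXL-Turbo as my stand-in:

```python
# Rough open-source approximation of sketch + prompt -> image in near-realtime.
# SDXL-Turbo is a stand-in model; this is not Microsoft's actual pipeline.
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

sketch = Image.open("ink_strokes.png").convert("RGB").resize((512, 512))

# Very few denoising steps keep latency low enough to rerun on every stroke.
result = pipe(
    prompt="a lighthouse on a cliff at sunset, watercolor",
    image=sketch,
    strength=0.6,            # how far to depart from the ink strokes
    num_inference_steps=2,   # turbo models are distilled for 1-4 steps
    guidance_scale=0.0,      # turbo models are trained without CFG
).images[0]
result.save("cocreated.png")
```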
The Designer team at Microsoft is working to enable AI-powered creation & editing experiences across a wide range of tools, and I’m delighted that my new teammates are rolling out a new set of integrations. Check out how you can now create images right inside Microsoft Teams:
I really enjoyed this TED talk from Fei-Fei Li on spatial computing & the possible dawning of a Cambrian explosion in how we—and our creations—perceive the world.
In the beginning of the universe, all was darkness — until the first organisms developed sight, which ushered in an explosion of life, learning and progress. AI pioneer Fei-Fei Li says a similar moment is about to happen for computers and robots. She shows how machines are gaining “spatial intelligence” — the ability to process visual data, make predictions and act upon those predictions — and shares how this could enable AI to interact with humans in the real world.
When I surveyed thousands of Photoshop customers waaaaaay back in the Before Times—y’know, summer 2022—I was struck by the fact that beyond wanting to insert things into images, and far beyond wanting to create images from scratch, just about everyone wanted better ways to remove things.
Happily, that capability has now come to Lightroom. It’s a deceptively simple change that, I believe, required a lot of work to evolve Lr’s non-destructive editing pipeline. Traditionally all edits were expressed as simple parameters, and then masks got added—but as far as I know, this is the first time Lr has ventured into transforming pixels in an additive way (that is, modifying one set of pixels, then making subsequent edits that depend on those modifications). That’s a big deal, and a big step forward for the team.
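To make that distinction concrete, here’s a toy model (mine, not Adobe’s actual architecture) of why additive pixel edits complicate a purely parametric pipeline: parametric edits can always be re-run from the original, while a generative removal injects synthesized pixels that every later edit must treat as a new source.

```python
# Toy non-destructive edit pipeline (my sketch, not Adobe's architecture).
from dataclasses import dataclass
from typing import Callable, List, Union

Pixels = bytes  # stand-in for a real image buffer

@dataclass
class ParametricEdit:
    name: str                          # e.g. "exposure +0.5"
    apply: Callable[[Pixels], Pixels]  # cheap, deterministic, re-runnable

@dataclass
class GenerativeEdit:
    name: str       # e.g. "remove power lines"
    cached: Pixels  # synthesized pixels; not recomputable from parameters

Edit = Union[ParametricEdit, GenerativeEdit]

def render(original: Pixels, stack: List[Edit]) -> Pixels:
    """Re-render by replaying the edit stack from the original photo."""
    img = original
    for edit in stack:
        if isinstance(edit, ParametricEdit):
            img = edit.apply(img)  # recompute from whatever came before
        else:
            img = edit.cached      # checkpoint: later edits build on these pixels
    return img

# The hard part: change an upstream parameter and every cached generative
# result downstream goes stale -- it must be regenerated or recomposited.
```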
A few more examples courtesy of Howard Pinsky:
Removing distracting objects just got that much more powerful in @Lightroom. Generative Remove has arrived! pic.twitter.com/CrZ6A3AKOF
— Howard Pinsky (@Pinsky) May 21, 2024
Adobe’s CEO (duh :-)) sat down with Nilay Patel for an in-depth interview. Here are some of the key points, as summarized by ChatGPT:
———-
I still can’t believe I was allowed in the building with these giant throbbing brains. 🙂
Create a 3D model from a single image, set of images or a text prompt in < 1 minute
This new AI paper called CAT3D shows us that it’ll keep getting easier to produce 3D models from 2D images — whether it’s a sparser real world 3D scan (a few photos instead of hundreds) or… pic.twitter.com/sOsOBsjC8Q
— Bilawal Sidhu (@bilawalsidhu) May 17, 2024
This kind of evolution should make a lot of people rethink what it means to be an image editor going forward—or even an image.
Amazing work by @RuiqiGao @holynski_ @philipphenzler @rmbrualla @_pratul_ @jon_barron @poolio
The Google crew strike again! Looks better than ReconFusion too. Hope there’s a code release.pic.twitter.com/RArpAZfJJB
— Bilawal Sidhu (@bilawalsidhu) May 17, 2024
I’ve gotta say, this one touches a kinda painful nerve with me.
10 years ago I walked into the Google Photos team expecting normal humans to do things like say, “Show me the best pictures of my grandkids.” I immediately felt like a fool: something like 97% of daily users don’t search, preferring to simply launch the app and scroll scroll scroll forever.
A decade later, the Photos team is talking about using large language models to enable uses like the following:
With Ask Photos, you can ask for what you’re looking for in a natural way, like: “Show me the best photo from each national park I’ve visited.” Google Photos can show you what you need, saving you from all that scrolling.
For example, you can ask: “What themes have we had for Lena’s birthday parties?” Ask Photos will understand details, like what decorations are in the background or on the birthday cake, to give you the answer.
Will anyone actually do this? It’s really hard for me to imagine, at least as it’s been framed above.
Now, what I can imagine working—in pretty great ways—is a real Assistant experience that suggests a bunch of useful tasks with which it can assist, such as gathering up photos to make birthday or holiday cards. (The latter task falls to me every year, and I wish I could do it better.) Assistant could easily ask whose birthday it is & on what date, then scan one’s library and suggest a nice range of images as well as presentation options (cards, short animations, etc.). That kind of agent could be a joy to interact with.
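Setting my UX quibbles aside, the retrieval mechanics behind this kind of natural-language search are well understood: embed photos and queries into a joint image-text space, then rank by similarity. Here’s a minimal sketch using open-source CLIP—my stand-in, with no claim about what Google actually runs:

```python
# Minimal natural-language photo search via joint image-text embeddings.
# Open-source CLIP is a stand-in; Google's actual stack surely does far more.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # your photo library
images = [Image.open(p) for p in paths]

inputs = processor(
    text=["photos from a national park trip"],
    images=images, return_tensors="pt", padding=True,
)
with torch.no_grad():
    out = model(**inputs)

# Rank the library by similarity to the query and print the best matches.
scores = out.logits_per_text.squeeze(0)
for path, score in sorted(zip(paths, scores.tolist()), key=lambda t: -t[1]):
    print(f"{score:6.2f}  {path}")
```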
Never doubt the power of a motivated person or two to do what needs to be done. Stick around to the last section of this short vid to see Stable Diffusion-powered “Find & Replace” (maskless inpainting powered by prompts) in action:
The newest version of the https://t.co/zMFye0YPsP #Photoshop plugin adds support for v2 of the @StabilityAI APIs. You now get all @replicate models, @OpenAI‘s DALL•E 3, and all of Stability’s models (including SD3). And it’s still free! https://t.co/exnJVygz4m pic.twitter.com/t03GQR0Do7
— Christian Cantrell (@cantrell) May 16, 2024
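If you’d rather skip the plugin and hit the API directly, Stability’s v2 “search and replace” endpoint is that maskless find-&-replace operation. A minimal sketch (field names per Stability’s public v2beta docs; error handling omitted):

```python
# Prompt-driven, maskless object replacement via Stability's v2beta REST API.
import os
import requests

resp = requests.post(
    "https://api.stability.ai/v2beta/stable-image/edit/search-and-replace",
    headers={
        "authorization": f"Bearer {os.environ['STABILITY_API_KEY']}",
        "accept": "image/*",
    },
    files={"image": open("photo.png", "rb")},
    data={
        "search_prompt": "red sports car",        # what to find (no mask needed)
        "prompt": "vintage yellow pickup truck",  # what to put in its place
        "output_format": "png",
    },
)
resp.raise_for_status()
open("edited.png", "wb").write(resp.content)
```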
Martin Nebelong combines Adobe Substance 3D Modeler and Krea to go from 3D sketch to burning rubber:
I don’t know about you, but I think this is pretty incredible..
Adobe Substance 3D modeler and Krea mainly here.
Quick, gestural 3d sculpting plus ai =(I think I’ve said that a few times haven’t I?) #ai #art pic.twitter.com/w0ZhBvQ0Pr
— Martin Nebelong (@MartinNebelong) May 13, 2024
Jon Finger combines a whole slew of tools for sketch->AR:
More playing with a Procreate to Simulon pipeline. It’s still a rough pipeline but fun to play with.
Sketched in @Procreate
Style transfer in @Magnific_AI
3d conversion in @fondantai
Motion capture in @MoveAI_
Rigged in @MaxonVFX c4d
Final capture on @Simulon pic.twitter.com/GiI1wJ2IWR
— Jon Finger (@mrjonfinger) May 13, 2024
I came across this post (originally from 2017) just now while looking for other work from Paul Asente. Here’s hoping it can finally see the light of day in Illustrator! —J.
———–
Paul Asente is an OG of the graphics world, having been responsible for (if I recall correctly) everything from Illustrator’s vector meshes & art brushes to variable-width strokes. Now he’s back with new Adobe illustration tech to drop some millefleurs science:
PhysicsPak automatically fills a shape with copies of elements, growing, stretching, and distorting them to fill the space. It uses a physics simulation to do this and to control the amount of distortion.

[YouTube]
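The paper’s simulation is far more sophisticated, but the core loop—seed elements inside a container, grow them each step, and let collision resolution push them apart—fits in a few lines. A toy circle-packing version (my illustration, not Asente’s algorithm):

```python
# Toy circle packing in the spirit of PhysicsPak (illustration only).
import math
import random

R = 100.0  # radius of the circular container to fill
circles = [[random.uniform(-50, 50), random.uniform(-50, 50), 1.0]
           for _ in range(60)]  # each element is [x, y, radius]

for _ in range(500):
    for c in circles:
        c[2] += 0.05  # grow every element a little each step
    for i, a in enumerate(circles):  # resolve pairwise overlaps
        for b in circles[i + 1:]:
            dx, dy = b[0] - a[0], b[1] - a[1]
            d = math.hypot(dx, dy) or 1e-6
            overlap = a[2] + b[2] - d
            if overlap > 0:  # push the overlapping pair apart equally
                ux, uy = dx / d, dy / d
                a[0] -= ux * overlap / 2; a[1] -= uy * overlap / 2
                b[0] += ux * overlap / 2; b[1] += uy * overlap / 2
    for c in circles:  # keep every element inside the container
        d = math.hypot(c[0], c[1])
        if d + c[2] > R:
            s = (R - c[2]) / d
            c[0] *= s; c[1] *= s
```

Real PhysicsPak elements are arbitrary shapes with distortion limits, but the grow-and-relax dynamic is the same basic trick.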
Unlike Runway, Pika, Sora, and other generative video models, this approach from Krea (well-known for their realtime, multimodal AI composition tools) is simply keyframing states of image generation—which is a pretty powerful approach unto itself.
Krea Video is here!
this is how it works
–
(sound on) pic.twitter.com/eld5RAoHdO
— KREA AI (@krea_ai) May 9, 2024
Here’s a lovely use of it in action:
Cities, made on @krea_ai‘s new video product pic.twitter.com/gLFFnK5AkX
— Justine Moore (@venturetwins) May 9, 2024
Man, who knew that posting the tweet below would get me absolutely dragged by AI haters (“Worst. Dad. Ever.”) who briefly turned me into the Bean Dad of AI art? I should say more about that eye-opening experience, but for now, enjoy (unlike apparently thousands of others!) this innocuous mixing of AI & kid art:
Having fun reinterpreting my son’s old drawings via #AdobeFirefly Structure Reference: pic.twitter.com/ALLBqdyPEc
— John Nack (@jnack) April 16, 2024
Elsewhere, here’s a cool thread showing how even simple sketches can be interpreted in the style of 3D renderings via Magnific:
THIS IS NOT 3D
Did you know you can use AI as kind of pseudo 3d renderer?
In the future, every pixel in a video game will not be RENDERED but GENERATED in real time. But people are already creating insane “AI renders” today.
Here are 18 mind blowing examples + a tutorial: pic.twitter.com/MujuYpJcO3
— Javi Lopez (@javilopen) April 16, 2024
Check out Min Choi’s crossbreeding of Star Wars characters with iconic paintings (click tweet below to see the thread):
3. “The Scream” by Edvard Munch pic.twitter.com/a3FchLr4B4
— Min Choi (@minchoi) May 4, 2024
Here’s a look at his process (also a thread):
Yesterday, I showed wild AI results of famous art in Star Wars style using Midjourney v6.
Here are the comparisons to the original art.
How did AI do?
1. “The Last Supper” by Leonardo da Vinci pic.twitter.com/7Y4r08TQPJ
— Min Choi (@minchoi) May 5, 2024
I dig this charming little narrative from Apple. Happy Tuesday.
KFC is making a characteristic AI bug into a feature:
KFC celebrates the launch of their most finger-lickin’ product yet, with even more extrAI fingers.
With help from Meta’s new AI experience, KFC is encouraging people to use the new feature and generate images with more than five fingers. The idea builds on KFC’s new Saucy Nuggets campaign. To reward their participation, users will unlock a saucy nuggets coupon on the restaurant’s app.
Clever, though I’m reminded of Wint’s remark that “you do not, under any circumstances, ‘gotta hand it to them.'”

I told filmmaker Paul Trillo that I’ve apparently blogged his work here more than a dozen times over the past 10 years—long before AI generation became a thing. That’s because he’s always been eager to explore the boundaries of what’s possible with any given set of tools. In “Notes To My Future Self,” he combines new & traditional methods to make a haunting, melancholy meditation:
And here he provides an illuminating 1-minute peek into the processes that helped him create all this in just over a week’s time:
Can AI create better VFX? Lots of VFX don’t look great because they don’t know what they’re lighting to on set. Using a variety of AI tools, we can now move fluidly between pre and post. BGs made with stable diffusion, Photoshop Gen Fill, Magnific, Krea and Topaz and Runway Gen-2 pic.twitter.com/DLyA60XaUB
— Paul Trillo (@paultrillo) April 26, 2024