Our friend Christian Cantrell (20-year Adobe vet, now VP of Product at Stability.ai) continues his invaluable work of plugging the world of generative imaging directly into Photoshop. Check out the latest, available for free here:
Category Archives: DALL•E
Dalí meets DALL•E! 👨🏻‍🎨🤖
Among the great pleasures of this year’s revolutions in AI imaging has been the chance to discover & connect with myriad amazing artists & technologists. I’ve admired the work of Nathan Shipley, so I was delighted to connect him with my self-described “grand-mentee” Joanne Jang, PM for DALL•E. Nathan & his team collaborated with the Dalí Museum & OpenAI to launch Dream Tapestry, a collaborative realtime art-making experience.
The Dream Tapestry allows visitors to create original, realistic Dream Paintings from a text description. Then, it stitches a visitor’s Dream Painting together with five other visitors’ paintings, filling in the spaces between them to generate one collective Dream Tapestry. The result is an ever-growing series of entirely original Dream Tapestries, exhibited on the walls of the museum.
Check it out:
DALL•E arrives in Photoshop via the Flying Dog panel
I haven’t yet gotten to try this integration, but I’m excited to see it arrive.
🌿 Eat your heart out, Homer Simpson 🌿
Here’s another beautiful, DALL•E-infused collaboration between VFX whiz Paul Trillo & Shyama Golden:
DALL•E is now available to everyone
Whew—no more wheedling my “grand-mentee” Joanne on behalf of colleagues wanting access. 😅
Starting today, we are removing the waitlist for the DALL·E beta so users can sign up and start using it immediately. More than 1.5M users are now actively creating over 2M images a day with DALL·E—from artists and creative directors to authors and architects—with over 100K users sharing their creations and feedback in our Discord community.
You can sign up here. Also exciting:
We are currently testing a DALL·E API with several customers and are excited to soon offer it more broadly to developers and businesses so they can build apps on this powerful system.
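For developers curious what calling such an API might look like, here's a hedged sketch of the request shape. The endpoint path and field names (`prompt`, `n`, `size`) follow OpenAI's public image-generation documentation, but the placeholder key and example values are purely illustrative:

```python
# Sketch of a request body for OpenAI's image-generation endpoint
# (POST https://api.openai.com/v1/images/generations). Illustrative only;
# consult the official API reference before relying on field names.
payload = {
    "prompt": "an impressionist painting of a ketchup bottle",  # text description
    "n": 1,                # number of images to generate
    "size": "1024x1024",   # documented sizes: 256x256, 512x512, 1024x1024
}

headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # placeholder, not a real key
    "Content-Type": "application/json",
}
```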
It’s hard to overstate just how much this groundbreaking technology has rocked our whole industry—all since publicly debuting less than 6 months ago! Congrats to the whole team. I can’t wait to see what they’re cooking up next.
AR: Stepping inside famous paintings with a boost from DALL•E
Karen X. Cheng & pals (including my friend August Kamp) went to work extending famous works by Vermeer, Da Vinci, and Magritte, then placing them into an AR filter (which you can launch from the post) that lets you walk right into the scenes. Wild!
“Little Simple Creatures”: Family & game art-making with DALL•E
Creative director Wes Phelan shared this charming little summary of how he creates kids’ books & games using DALL•E, including its newly launched outpainting support:

John Oliver gets DALL•E-pilled
Judi Dench fighting a centaur on the moon!
Goose Pilates!
Happy Friday. 😅
DALL•E outpainting arrives
Let the canvases extend in every direction! The thoughtfully designed new tiling UI makes it easy to synthesize adjacent chunks in sequence, partly overcoming current resolution limits in generative imaging:
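The tiling idea itself is simple: cover a large target canvas with fixed-size generation windows that overlap, so each new tile can be conditioned on already-generated pixels before the results are stitched together. Here's a minimal sketch of the window math — the function names, 1024-px tile size, and 256-px overlap are my assumptions for illustration, not DALL•E's actual parameters:

```python
def tile_offsets(length, tile=1024, overlap=256):
    """Offsets along one axis so that overlapping tiles cover `length` pixels."""
    step = tile - overlap
    offsets = list(range(0, max(length - tile, 0) + 1, step))
    # If the regular stride leaves a sliver uncovered, add one final
    # tile flush against the far edge.
    if offsets[-1] + tile < length:
        offsets.append(length - tile)
    return offsets

def outpaint_grid(width, height, tile=1024, overlap=256):
    """(x, y) offsets for overlapping tiles covering a width x height canvas."""
    return [(x, y)
            for y in tile_offsets(height, tile, overlap)
            for x in tile_offsets(width, tile, overlap)]
```

Each tile overlaps its neighbor by `overlap` pixels, which is what lets the model blend new content seamlessly into what's already on the canvas.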
Here’s a nice little demo from our designer Davis Brown, who takes his dad Russell’s surreal desert explorations to totally new levels:
Using DALL•E for generative fashion design
Amazing work from the always clever Karen X. Cheng, collaborating with Paul Trillo & others:
Speaking of Paul here’s a fun new little VFX creation made using DALL•E:
AI is going to change VFX. This is a silly little experiment but it shows how powerful dall-e 2 is in generating elements into a pre existing video. These tools will become easier to use so when spectacle becomes cheap, ideas will prevail#aiart #dalle #ufo @openaidalle #dalle2 pic.twitter.com/XGHy9uY09H
— Paul Trillo (@paultrillo) August 30, 2022
CLIP interrogator reveals what your robo-artist assistant sees
Ever since DALL•E hit the scene, I’ve been wanting to know what words its model for language-image pairing would use to describe images:
Now the somewhat scarily named CLIP Interrogator promises exactly that kind of insight:
What do the different OpenAI CLIP models see in an image? What might be a good text prompt to create similar images using CLIP guided diffusion or another text to image model? The CLIP Interrogator is here to get you answers!
Here’s hoping it helps us get some interesting image -> text -> image flywheels spinning.
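Under the hood, this kind of interrogation boils down to ranking candidate text prompts by cosine similarity between an image embedding and text embeddings in CLIP's shared space. Here's a toy numpy illustration of just the ranking step — the function name is mine, the vectors are fake, and real embeddings would come from CLIP's image and text encoders (the actual CLIP Interrogator does considerably more):

```python
import numpy as np

def best_prompts(image_vec, prompt_vecs, prompts, k=3):
    """Rank candidate prompts by cosine similarity to an image embedding.

    image_vec:   1-D array, the (stand-in) image embedding
    prompt_vecs: 2-D array, one (stand-in) text embedding per row
    prompts:     the candidate prompt strings, one per row
    """
    img = image_vec / np.linalg.norm(image_vec)
    txt = prompt_vecs / np.linalg.norm(prompt_vecs, axis=1, keepdims=True)
    sims = txt @ img                      # cosine similarity per prompt
    order = np.argsort(-sims)[:k]         # highest similarity first
    return [(prompts[i], float(sims[i])) for i in order]
```

In the real tool, the candidate list comes from large banks of artist names, styles, and modifiers, and the top matches get assembled into a usable prompt.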
AE + DALL•E = Concept car madness
More wildly impressive inpainting & animation from Paul Trillo:
DALL•E + Snapchat = Clothing synthesis + try-on
Though we don’t (yet?) have the ability to use 3D meshes (e.g. those generated from a photo of a person) to guide text-based synthesis through systems like DALL•E, here’s a pretty compelling example of making 2D art, then wrapping it onto a body in real time:
Asked #dalle2 to generate some jeans look in a style of Gustav Klimt, then put it on cloth template from the latest workshop from @SnapAR ✨👖 pic.twitter.com/lUH0YSqB1t
— Maxiм (@maximkuzlin) August 3, 2022
Ketchup goes AI…? Heinz puts DALL•E to work
Interesting, and of course inevitable:
“This emerging tech isn’t perfect yet, so we got some weird results along with ones that looked like Heinz—but that was part of the fun. We then started plugging in ketchup combination phrases like ‘impressionist painting of a ketchup bottle’ or ‘ketchup tarot card’ and the results still largely resembled Heinz. We ultimately found that no matter how we were asking, we were still seeing results that looked like Heinz.”
Pass the Kemp!

[Via Aaron Hertzmann]
More DALL•E + After Effects magic
Creator Paul Trillo (see previous) is back at it. Here’s new work + a peek into how it’s made:
Kids swoon as DALL•E brings their ideas into view
Nicely done; can’t wait to see more experiences like this.
Animated magic made via DALL•E + After Effects
😮