Category Archives: Illustration
Some great demos of Recolor Vectors
Veteran author Deke McClelland has posted a fun 1-minute tour of the new Recolor Vectors module:
And for a deeper dive, check out his 20-minute version:
Meanwhile my color-loving colleague Hep (who also manages the venerable color.adobe.com) joined me for a live stream on Discord last Friday. It’s fun to see her spin on how best to apply various color harmonies and other techniques, including to her own beautiful illustrations:
Check out Firefly’s new Recolor Vectors module
Our first new module has just arrived 🎉, so grab your SVGs & make a path (oh my God) to the site.
From the team post:
Vector recoloring in the Firefly beta now enables you to:
- Enter detailed text descriptions to generate colors and color palette variations in seconds
- Use a drop-down menu to generate different vector styles that fit your creative needs
- Gain creative assistance and inspiration by quickly generating color options that bring your visions to life in an instant
As always, we’d love to hear what you think of the tools & what you’d like to see next!


Animated Drawings tech + Firefly = 🍬🌽🕺🏻
Meta Research has introduced Animated Drawings, “A Method for Automatically Animating Children’s Drawings of the Human Figure” (as their forthcoming paper is titled).
You can try it out via their Web interface, and/or take a slightly more technical dive here:
I’m of course delighted to see folks starting to use it to bring their Adobe Firefly creations to life:
Demo: Creating cute characters in Adobe Firefly
Adobe prototyper Lee Brimelow has been happily distracting himself by creating delightful little creatures using Firefly, like this:
Today he joined us for a live stream on Discord (below), sharing details about his explorations so far. He also shared a Google Doc that contains details, including a number of links you can click in order to kick off the creation process. Enjoy, and please let me know what kinds of things you’d like to see us cover in future sessions.
Animation: “Grand Canons”
Enjoy, if you will, this “visual symphony of everyday objects”:
A brush makes watercolors appear on a white sheet of paper. An everyday object takes shape, drawn with precision by an artist’s hand. Then two, then three, then four… Superimposed, condensed, multiplied, thousands of documentary drawings in successive series come to life on the screen, composing a veritable visual symphony of everyday objects. The accumulation, both fascinating and dizzying, takes us on a trip through time.
Kottke notes, “More of Biet’s work can be found on his website or on Instagram.”
Use Stable Diffusion ControlNet in Photoshop
Check out this integration of sketch-to-image tech—and if you have ideas/requests on how you’d like to see capabilities like these get more deeply integrated into Adobe tools, lay ’em on me!
Also, it’s not in Photoshop, but as it made me think of the Photo Restoration Neural Filter in PS, check out this use of ControlNet to revive an old family photo:
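If you’re curious what’s going on under the hood: ControlNet conditions a Stable Diffusion model on a guide image (a scribble, edge map, depth map, etc.), so the drawing dictates composition while the prompt dictates style. Here’s a rough sketch of that workflow using the open-source diffusers library; it’s not the plugin’s actual code, and the model names & file paths are just illustrative:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a scribble-conditioned ControlNet and pair it with a Stable Diffusion base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The guide image (a rough sketch) constrains composition; the prompt drives style & content.
sketch = load_image("my_sketch.png")  # hypothetical local file: white scribbles on black
image = pipe(
    "a cozy lakeside cabin at sunset, watercolor",
    image=sketch,
    num_inference_steps=20,
).images[0]
image.save("cabin.png")
```

That separation of structure (from the sketch) and style (from the text) is what makes sketch-to-image feel so responsive compared to plain prompting.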
Animation: Celebrating pioneering members of the Negro Leagues
Motionographer offers a behind-the-scenes look at the creation of these great animated vignettes:
ControlNet is wild
NVIDIA Canvas goes 360º
The company has announced a new mode for their Canvas painting app that turns simple brushstrokes into 360º environment maps for use in 3D apps or Omniverse. Check out this quick preview:
A vector high-wire act: Illustrator icon speed run
Heh—I can’t quite say why I found this quick demo from developer & illustrator Marc Edwards both gripping & slightly nerve-racking, but his accuracy is amazing:
“Twitter ads through the ages”
Check out this fun thread of Midjourney-made illustrations:
Generate scripts and faces, then animate them with Creative Reality Studio
Creative Reality Studio from D-ID (the folks behind the MyHeritage Deep Nostalgia tech that blew up a couple of years ago) can generate faces & scripts, then animate them. I find the results… interesting?

AI-made super babies
Artist & musician Ben Morin has been making some impressive pop-culture mashups, turning well-known characters into babies (using, I believe, Midjourney to combine a reference image with a prompt). Check out the results.

AI-made avatars for LinkedIn, Tinder, and more
As I say, another day, another specialized application of algorithmic fine-tuning. Per Vice:
For $19, a service called PhotoAI will use 12-20 of your mediocre, poorly-lit selfies to generate a batch of fake photos specially tailored to the style or platform of your choosing. The results speak to an AI trend that seems to regularly jump the shark: A “LinkedIn” package will generate photos of you wearing a suit or business attire…

…while the “Tinder” setting promises to make you “the best you’ve ever looked”—which apparently means making you into an algorithmically beefed-up dudebro with sunglasses.
Meanwhile, the quality of generated faces continues to improve at a blistering pace:
✨ Trained my own model for https://t.co/ll0YGEo53Z for more photorealistic renders called
`people-diffusion`
I think by this week I can deploy it!
🤖 These are all 100% AI-generated people
Skin finally has pores now but don’t look at the hands yet please 😂 pic.twitter.com/Y6wbPz3BSS
— @levelsio (@levelsio) November 21, 2022
Crowdsourced AI Snoop Doggs (is a real headline you can now read)
The Doggfather recently shared a picture of himself (rendered presumably via some Stable Diffusion/DreamBooth personalization instance)…
…thus inducing fans to reply with their own variations (click tweet above to see the thread). Among the many fun Snoop Doggs (or is it Snoops Dogg?), I’m partial to Cyberpunk…
Cyberpunk Snoop Dogg, 1,2,3 or 4? pic.twitter.com/w8BgeJBx86
— Techietree.eth/tez (@techietree_eth) November 29, 2022
…and Yodogg:
Yodogg pic.twitter.com/9qqbluoCyt
— NIDO (@OfficialNID0) November 28, 2022
MyHeritage introduces “AI Time Machine”
Another day, another special-purpose variant of AI image generation.
A couple of years ago, MyHeritage struck a chord with the world via Deep Nostalgia, an online app that could animate the faces of one’s long-lost ancestors. In reality it could animate just about any face in a photo, but I give them tons of credit for framing the tech in a really emotionally resonant way. It offered not a random capability, but rather a magical window into one’s roots.
Now the company is licensing tech from Astria, which itself builds on Stable Diffusion & Google Research’s DreamBooth paper. Check it out:

Interestingly (perhaps only to me), it’s been hard for MyHeritage to sustain the kind of buzz generated by Deep Nostalgia. They later introduced the much more ambitious DeepStory, which lets you literally put words in your ancestors’ mouths. That seems not to have moved the overall needle on awareness, at least not in the way that the earlier offering did. Let’s see how portrait generation fares.

Charming one’s mom with AI 🐶
Speaking of Bilawal, and in the vein of the PetPortrait.ai service I mentioned last week, here’s a fun little video in which he’s trained an AI model to create images of his mom’s dog. “Oreo lookin’ FESTIVE in that sweater, yo!” 🥰 I can only imagine that this kind of thing will become mainstream quickly.
An illustrated wedding album made with DreamBooth
I’m not sure whom to credit with this impressive work (found here), nor how exactly they made it, but—like the bespoke pet portraits site I shared yesterday—I expect to see an explosion in such purpose-oriented applications of AI imaging:

PetPortrait.ai promises bespoke images of animals
We’re at just the start of what I expect to be an explosion of hyper-specific offerings powered by AI.
For $24, PetPortrait.ai offers “40 high resolution, beautiful, one-of-a-kind portraits of your pets in a variety of styles.” They say it takes 4-6 hours and requires the following input:
- ~10 portrait photos of their face
- ~5 photos from different angles of their head and chest
- ~5 full-body photos

It’ll be interesting to see what kind of traction this gets. The service Turn Me Royal offers more human-made offerings in a similar vein, and we delighted our son by commissioning this doge-as-Venetian-doge portrait (via an artist on Etsy) a couple of years ago:

Try Adobe’s new “Animate from audio” tool
Check out the explanation below, then start creating right here.
Runway “Infinite Canvas” enables outpainting
I’ve tried it & it’s pretty slick. These guys are cooking with gas! (Also, how utterly insane would this have been to see even six months ago?! What a year, what a world.)
A fistful of generative imaging news
Man, I can’t keep up with this stuff—and that’s a great problem to have. Here are some interesting finds from just the last few days:
- Custom SD models are coming to Photoshop via Christian Cantrell’s plugin panel.
- Christian has trained a model on Rivians & says (ambitiously, but not without some justification) that “This is how all advertising and marketing collateral will be made sooner than most of the world realizes.”
- On a related note, here’s a thread (from an engineer at Shopify) on fine-tuning models to generate images of specific products (showing strengths/limitations).
- I see numerous custom models emerging that enable creation of art in the style of Spider-Man, Pixar, and more.
- Stability has rolled out new fine-tuned decoders. (See thread for what that means.)
- Here’s another multiplayer SD-powered creation experience.
- VR: Sketch Diffusion runs live in Gravity Sketch on Quest Pro headsets.
- Enabling text-based image editing via SD.
- Astria is another site promising custom image generation via uploaded pics.
Adobe Character Animator introduces Motion Library
Motion Library allows you to easily add premade animated motions like fighting, dancing, and running to your characters. Choose from a collection of over 350 motions and watch your puppets come to life in new and exciting ways!
Wayback machine: When “AI” was “Adobe Illustrator”
Check out a fun historical find from Adobe evangelist Paul Trani:
The video below shipped on VHS with the very first version of Adobe Illustrator. Adobe CEO & Illustrator developer John Warnock demonstrated the new product in a single one-hour take. He was certainly qualified, being one of the four developers whose names were listed on the splash screen!
How lucky it was for the world that a brilliant graphics engineer (John) married a graphic designer (Marva Warnock) who could provide constant input as this groundbreaking app took shape.
If you’re interested in more of the app’s rich history, check out The Adobe Illustrator Story:
Demo: Generating an illustrated narrative with DreamBooth
The Corridor Crew has been banging on Stable Diffusion & Google’s new DreamBooth tech (see previous) that enables training the model to understand a specific concept—e.g. one person’s face. Here they’ve trained it using a few photos of team member Sam Gorski, then inserted him into various genres:

From there they trained up models for various guys at the shop, then created an illustrated fantasy narrative. Just totally incredible, and their sheer exuberance makes the making-of pretty entertaining:
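For the technically curious: DreamBooth fine-tunes the model on a handful of photos bound to a rare placeholder token, after which that token can be dropped into any prompt. I don’t know the Corridor team’s exact pipeline, but with the open-source diffusers library the generation step looks roughly like this (the checkpoint path, token, and prompt are made up):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint that's been DreamBooth-fine-tuned on a dozen or so
# photos of one person, bound to a rare placeholder token ("sks person"; all names made up).
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-sam-model", torch_dtype=torch.float16
).to("cuda")

# Drop the placeholder token into any prompt to render that person in a new genre.
image = pipe(
    "a portrait of sks person as a 1980s fantasy-movie hero, detailed oil painting"
).images[0]
image.save("fantasy_hero.png")
```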
Lexica: Search for AI-made art, with prompts
The makers of this new search engine say they’re already serving more than 200,000 images/day & growing rapidly. Per this article, “It’s a massive collection of over 5 million Stable Diffusion images including its text prompts.” Just get ready to see some… interesting art (?). 🙃


AI + James Joyce = Poetry in motion
Lovely work from Glenn Marshall & friends:
AI art -> “Bullet Hell” & Sirenhead
“Shoon is a recently released side scrolling shmup,” says Vice, “that is fairly unremarkable, except for one quirk: it’s made entirely with art created by Midjourney, an AI system that generates images from text prompts written by users.” Check out the results:
Meanwhile my friend Bilawal is putting generative imaging to work in creating viral VFX:
“Dreamcatching”: Generative AI for music vids
Trippy!
Magdalena Bay has shared a new Felix Geen-directed video for “Dreamcatching.” The clip explores multiple dimensions through cutting-edge AI technology and GAN artwork, created with VQGAN+CLIP, a technique that utilizes a collection of neural networks working in unison to generate images based on input text and/or images.
“Little Simple Creatures”: Family & game art-making with DALL•E
Creative director Wes Phelan shared this charming little summary of how he creates kids’ books & games using DALL•E, including their newly launched outpainting support:

John Oliver gets DALL•E-pilled
Judi Dench fighting a centaur on the moon!
Goose Pilates!
Happy Friday. 😅
Alpaca brings Stable Diffusion to Photoshop 🔥
I don’t know much about these folks, but I’m excited to see that they’re working to integrate Stable Diffusion into Photoshop:

You can add your name to the waitlist via their site. Meanwhile here’s another exploration of SD + Photoshop:
🤘Death Metal Furby!🤘
See, isn’t that a more seductive title than “Personalizing Text-to-Image Generation using Textual Inversion”? 😌 But the so-titled paper seems really important in helping generative models like DALL•E to become much more precise. The team writes:
We ask: how can we use language-guided models to turn our cat into a painting, or imagine a new product based on our favorite toy? Here we present a simple approach that allows such creative freedom.
Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new “words” in the embedding space of a frozen text-to-image model. These “words” can be composed into natural language sentences, guiding personalized creation in an intuitive way.
Check out the kind of thing it yields:

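If you want to play with the idea yourself, the same trick is exposed in the open-source diffusers library: load the learned embedding as a new token, then use it in prompts. A minimal sketch (the embedding path and token name are hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load an embedding learned from 3-5 reference images; it becomes one new "word"
# in the model's vocabulary. (The file path and token name here are hypothetical.)
pipe.load_textual_inversion("./learned_embeds", token="<death-metal-furby>")

# Use the new token like any other word in a prompt.
image = pipe("a concert poster of <death-metal-furby> shredding a guitar on stage").images[0]
image.save("furby_poster.png")
```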
AI art: “…Y’know, for kids!”
Many years ago (nearly 10!), when I was in the thick of making up bedtime stories every night, I wished aloud for an app that would help do the following:
- Record you telling your kids bedtime stories (maybe after prompting you just before bedtime)
- Transcribe the text
- Organize the sound & text files (into a book, journal, and/or timeline layout)
- Add photos, illustrations, and links.
- Share from the journal to a blog, Tumblr, etc.
I was never in a position to build it, but seeing this fusion of kid art + AI makes me hope again:
With #stablediffusion img2img, I can help bring my 4yr old’s sketches to life.
Baby and daddy ice cream robot monsters having a fun day at the beach. 😍#AiArtwork pic.twitter.com/I7NDxIfWBF
— PH AI (@fofrAI) August 23, 2022
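If you want to try the same trick at home, the img2img step is pretty approachable via the open-source diffusers library. Here’s a minimal sketch (file names, strength, and prompt are just illustrative, not the exact settings from the tweet):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from the kid's drawing; "strength" controls how far the model may drift from it
# (lower = stays closer to the sketch, higher = takes more liberties).
drawing = Image.open("kids_drawing.png").convert("RGB").resize((512, 512))
image = pipe(
    prompt="baby and daddy ice cream robot monsters having a fun day at the beach",
    image=drawing,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
image.save("beach_robots.png")
```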
So here’s my tweet-length PRD:
- Record parents’/kids’ voices.
- Transcribe as a journal.
- Enable scribbling.
- Synthesize images on demand.
On behalf of parents & caregivers everywhere, come on, world—LFG! 😛
“Hyperlapse vs. AI,” + AR fashion
Malick Lombion & friends used “more than 1,200 AI-generated art pieces combined with around 1,400 photographs” to create this trippy tour:
Elsewhere, After Effects ninja Paul Trillo is back at it with some amazing video-meets-DALL•E-inpainting work:
I’m eager to see all the ways people might combine generation & fashion—e.g. pre-rendering fabric for this kind of use in AR:
“Mr. Blue Sky, but every lyric is an AI generated image”
Happy Monday. 😌
[Via Dave Dobish]
A quick tour of Make-A-Scene
I mentioned Meta Research’s DALL•E-like Make-A-Scene tech when it debuted recently, but I couldn’t directly share their short overview vid. Here’s a quick look at how various artists have been putting the system to work, notably via hand-drawn cues that guide image synthesis:
“Make-A-Scene” promises generative imaging cued via sketching
This new tech from Meta one-ups DALL•E et al by offering more localized control over where elements are placed:
The team writes,
We found that the image generated from both text and sketch was almost always (99.54 percent of the time) rated as better aligned with the original sketch. It was often (66.3 percent of the time) more aligned with the text prompt too. This demonstrates that Make-A-Scene generations are indeed faithful to a person’s vision communicated via the sketch.


Kids swoon as DALL•E brings their ideas into view
Nicely done; can’t wait to see more experiences like this.
Now available (and free!): Adobe Character Animator Starter Mode
Check out my friend Dave’s quick intro to this easy-to-use (and free-to-download) tech:
For more detail, here’s a deeper dive:
“Content-Aware Fill… cubed”: DALL•E inpainting is nuts
The technology’s ability not only to synthesize new content, but to match it to context, blows my mind. Check out this thread showing the results of filling in the gap in a simple cat drawing via various prompts. Some of my favorites are below:

Also, look at what it can build out around just a small sample image plus a text prompt (a chef in a sushi restaurant); just look at it!

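DALL•E itself isn’t something you can run locally, but the same inpainting idea is available in open-source form. Here’s a hedged sketch using Stable Diffusion’s inpainting pipeline via the diffusers library (file names and prompt are illustrative):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# White pixels in the mask mark the gap to fill; everything else is preserved,
# and the new content is synthesized to match the surrounding context.
init = Image.open("cat_drawing.png").convert("RGB").resize((512, 512))
mask = Image.open("gap_mask.png").convert("RGB").resize((512, 512))
image = pipe(prompt="a chef in a sushi restaurant", image=init, mask_image=mask).images[0]
image.save("filled_in.png")
```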
Meet “Imagen,” Google’s new AI image synthesizer
What a time to be alive…
Hard on the heels of OpenAI revealing DALL•E 2 last month, Google has announced Imagen, promising “unprecedented photorealism × deep level of language understanding.” Unlike DALL•E, it’s not yet available via a demo, but the sample images (below) are impressive.
I’m slightly amused to see Google flexing on DALL•E by highlighting Imagen’s strengths in figuring out spatial arrangements & coherent text (places where DALL•E sometimes currently struggles). The site claims that human evaluators rate Imagen output more highly than what comes from competitors (e.g. Midjourney).
I couldn’t be more excited about these developments—most particularly to figure out how such systems can enable amazing things in concert with Adobe tools & users.
What a time to be alive…


Painting: Behind the scenes with Kehinde Wiley
I’ve long admired President Obama’s official portrait, but I haven’t known much about Kehinde Wiley. I enjoyed this brief peek into his painting process:
Here he shares more insights with Trevor Noah:
Adobe Fresco adds Liquify, Magic Wand, and more
Check out all the new goods!
It’s not new to this release, but I’d somehow missed it: support for perspective lines looks very cool.

A charming Route 66 doodle from Google
Last year I took my then-11yo son Henry (aka my astromech droid) on a 2000-mile “Miodyssey” down Route 66 in my dad’s vintage Miata. It was a great way to see the country (see more pics & posts than you might ever want), and despite the tight quarters we managed not to kill one another—or to get slain by Anton Chigurh in an especially murdery Texas town (but that’s another story!).
In any event, we were especially charmed to see the Goog celebrate the Mother Road in this doodle:
Famous logos recreated in Grotesque Middle Ages style
Heh—I love this kind of silly mashup. (And now I want to see what kind of things DALL•E would dream up for prompts like “medieval grotesque Burger King logo.”)

Things Fall Apart
Driving through the Southwest in 2020, we came across this dark & haunting mural showing the nearby Navajo Generating Station:

Now I see that the station has been largely demolished, as shown in this striking drone clip:
DALL•E 2 looks too amazing to be true
There’s no way this is real, is there?! I think it must use NFW technology (No F’ing Way), augmented with a side of LOL WTAF. 😛
Here’s an NYT video showing the system in action:

The NYT article offers a concise, approachable description of how the approach works:
A neural network learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of avocado photos, for example, it can learn to recognize an avocado. DALL-E looks for patterns as it analyzes millions of digital images as well as text captions that describe what each image depicts. In this way, it learns to recognize the links between the images and the words.
When someone describes an image for DALL-E, it generates a set of key features that this image might include. One feature might be the line at the edge of a trumpet. Another might be the curve at the top of a teddy bear’s ear.
Then, a second neural network, called a diffusion model, creates the image and generates the pixels needed to realize these features. The latest version of DALL-E, unveiled on Wednesday with a new research paper describing the system, generates high-resolution images that in many cases look like photos.
Though DALL-E often fails to understand what someone has described and sometimes mangles the image it produces, OpenAI continues to improve the technology. Researchers can often refine the skills of a neural network by feeding it even larger amounts of data.
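To make that two-stage idea a bit more concrete, here’s what an analogous open-source text-to-image workflow looks like via Stable Diffusion & the diffusers library; it’s not DALL•E itself (which has no public local API), and the model name is just one common choice:

```python
import torch
from diffusers import StableDiffusionPipeline

# An open-source text-to-image diffusion pipeline: the prompt is encoded,
# then a diffusion model denoises random noise into pixels that match it.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("an avocado armchair in a sunlit living room, studio photo").images[0]
image.save("avocado_armchair.png")
```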
I can’t wait to try it out.
Illustration: Things-Could-Be-Worse Mugs
“Lost your keys? Lost your job?” asks illustrator Don Moyer. “Look at the bright side. At least you’re not plagued by pterodactyls, pursued by giant robots, or pestered by zombie poodles. Life is good!”
I find this project (Kickstarting now) pretty charming:
[Via]