Category Archives: Illustration
Generative artist Glenn Marshall has used CLIP + VQGAN to send Radiohead down a rather Lovecraftian rabbit hole:
Terrific “Retro Modern” movie posters
I love these posters from Concepcion Studios (also viewable on Instagram and available for purchase on Etsy). [Via]


Charming “Viewfinder” animation
This is the super chill content I needed right now. 😌
Colossal writes,
“Viewfinder” is a charming animation about exploring the outdoors from the Seoul-based studio VCRWORKS. The second episode in the recently launched Rhythmens series, the peaceful short follows a central character on a hike in a springtime forest and frames their whimsically rendered finds through the lens of a camera.
You can find another installment on their Vimeo page.
NVIDIA Canvas paints with AI, exports to Photoshop
“A nuclear-powered pencil”: that’s how someone recently described ArtBreeder, and the phrase comes to mind for NVIDIA Canvas, a new prototype app you can download (provided you have Windows & a beefy GPU) and use to draw in some trippy new ways:
Paint simple shapes and lines with a palette of real world materials, like grass or clouds. Then, in real-time, our revolutionary AI model fills the screen with show-stopping results.
Don’t like what you see? Swap a material, changing snow to grass, and watch as the entire image changes from a winter wonderland to a tropical paradise. The creative possibilities are endless.
[Via]
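Nerd note: Canvas grows out of NVIDIA’s GauGAN research, whose central trick is spatially-adaptive normalization (SPADE): the label map you paint literally modulates the generator’s activations at every layer, which is why swapping “snow” for “grass” reworks the whole scene. Here’s a deliberately simplified PyTorch sketch of that one layer. It’s not NVIDIA’s code, and among other shortcuts it uses instance norm where the paper uses synced batch norm:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Simplified spatially-adaptive normalization: the painted label map,
    not a learned constant, supplies the per-pixel scale and bias."""
    def __init__(self, feature_channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feature_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, features, segmap):
        # Resize the label map to the feature resolution, then predict
        # per-pixel modulation parameters from it.
        segmap = F.interpolate(segmap, size=features.shape[2:], mode='nearest')
        actv = self.shared(segmap)
        return self.norm(features) * (1 + self.gamma(actv)) + self.beta(actv)

# Toy "painting": two materials (channel 0 = sky, channel 1 = grass) as a one-hot map.
labels = torch.zeros(1, 2, 64, 64)
labels[:, 0, :32, :] = 1.0   # top half sky
labels[:, 1, 32:, :] = 1.0   # bottom half grass
features = torch.randn(1, 256, 16, 16)        # activations inside the generator
out = SPADE(256, 2)(features, labels)          # modulated by what you "painted"
print(out.shape)                               # torch.Size([1, 256, 16, 16])
```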
Illustrator 2021 adds canvas rotation
Extremely old, never-say-die (but good-natured) sibling rivalry: “Hey, 2008 called, and it wants its Photoshop feature back!” 🙃 I kid, though, and I’m happy to see illustrators getting this nice, smoothly rendering feature. Here’s a 1-minute tour:
Stop motion—via embroidery!
What an incredible labor of love this must have been to stitch & animate:
Our most ridiculously labor-intensive animation ever! The traditional Passover folk song rendered in embroidermation by Nina Paley and Theodore Gray. These very same embroidered matzoh covers are available for purchase here.
[Via Christa Mrgan]
Trippy Adobe brushes
As I noted last year,
I’ve always been part of that weird little slice of the Adobe user population that gets really hyped about offbeat painting tools—from stretching vectors along splines & spraying out fish in Illustrator (yes, they’re both in your copy right now; no, you’ve never used them).
In that vein, I dig what Erik Natzke & co. have explored:
This one’s even trippier:
Here’s a quick tutorial on how to make your own brush via Adobe Capture:
And here are the multicolor brushes added to Adobe Fresco last year:
Illustrator & InDesign get big boosts on Apple Silicon
On an epic dog walk this morning, Old Man Nack™ took his son through the long & winding history of Intel vs. Motorola, x86 vs. PPC, CISC vs. RISC, toasted bunny suits, the shock of Apple’s move to Intel (Marklar!), and my lasting pride in delivering the Photoshop CS3 public beta to give Mac users native performance six months early.
As luck would have it, Adobe has some happy news to share about the latest hardware evolution:
Today, we’re thrilled to announce that Illustrator and InDesign will run natively on Apple Silicon devices. While users have been able to continue to use the tool on M1 Macs during this period, today’s development means a considerable boost in speed and performance. Overall, Illustrator users will see a 65 percent increase in performance on an M1 Mac, versus Intel builds — InDesign users will see similar gains, with a 59 percent improvement on overall performance on Apple Silicon. […]
These releases will start to roll out to customers starting today and will be available to all customers across the globe soon.
Check out the post for full details.

Automatic caricature creation gets better & better
A few weeks ago I mentioned Toonify, an online app that can render your picture in a variety of cartoon styles. Researchers are busily cranking away to improve upon it, and the new AgileGAN promises better results & the ability to train models via just a few inputs:
Our approach provides greater agility in creating high quality and high resolution (1024×1024) portrait stylization models, requiring only a limited number of style exemplars (∼100) and short training time (∼1 hour).
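For the curious, the transfer-learning recipe behind results like this is conceptually simple: start from a generator that already knows faces, then briefly fine-tune it against a small pile of style exemplars. Here’s a toy PyTorch sketch of that loop, with tiny stand-in networks and random tensors where the real thing would load pretrained StyleGAN weights and ~100 cartoon portraits; treat it as a diagram, not the authors’ code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for a pretrained generator/discriminator (the real setting would
# load StyleGAN2 checkpoints here instead of initializing tiny nets from scratch).
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32), nn.Tanh())
D = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

# Stand-in for ~100 style exemplars (flattened 32x32 RGB images in [-1, 1]).
style_exemplars = torch.rand(100, 3 * 32 * 32) * 2 - 1

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# A short schedule: because the real G starts pretrained, fine-tuning takes
# on the order of an hour rather than days.
for step in range(200):
    z = torch.randn(16, 64)
    real = style_exemplars[torch.randint(0, 100, (16,))]

    # Discriminator: tell real exemplars from the generator's current output.
    d_loss = F.softplus(D(G(z).detach())).mean() + F.softplus(-D(real)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: nudge the (pre-trained) weights toward the new style.
    g_loss = F.softplus(-D(G(z))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```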
Illustration: How the Simpsons animation evolved over 30 years
There are lots of fun details here, from the evolution of the “potato-chip lip,” to how lines & shapes evolved to let characters rotate more easily in space, to hundreds of pages of documentation on exactly how hair & eyes should work, and more.
BMW art cars go AI
Generative artist Nathan Shipley has been doing some amazing work with GANs, and he recently collaborated with BMW to use projection mapping to turn a new car into a dynamic work of art:
I’ve long admired the Art Cars series, with a particular soft spot for Jenny Holzer’s masterfully disconcerting PROTECT ME FROM WHAT I WANT:

Here’s a great overview of the project’s decades of heritage, including a dive into how Andy Warhol adorned what may be the most valuable car in the world—painting on it at lightning speed:
AI art: GANimated flowers
Years ago my friend Matthew Richmond (Chopping Block founder, now at Adobe) would speak admiringly of “math-rock kids” who could tinker with code to expand the bounds of the creative world. That phrase came to mind seeing this lovely little exploration from Derrick Schultz:
Here it is in high res:
Body Movin’: Adobe Character Animator introduces body tracking (beta)
You’ll scream, you’ll cry, promises designer Dave Werner—and maybe not just due to “my questionable dance moves.”
Live-perform 2D character animation using your body. Powered by Adobe Sensei, Body Tracker automatically detects human body movement using a web cam and applies it to your character in real time to create animation. For example, you can track your arms, torso, and legs automatically. View the full release notes.
Check out the demo below & the site for full details.
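Adobe hasn’t published the internals, but if you want to tinker with the underlying idea (tracking body landmarks from a webcam feed and piping them into a rig), here’s a minimal sketch using Google’s MediaPipe Pose as a stand-in for Sensei; the joint choices are just mine:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)  # webcam

with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Normalized (0..1) coordinates for a couple of joints you might
            # wire to a puppet's arms.
            lm = results.pose_landmarks.landmark
            left_wrist = lm[mp_pose.PoseLandmark.LEFT_WRIST]
            right_wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
            print(f"L wrist ({left_wrist.x:.2f}, {left_wrist.y:.2f})  "
                  f"R wrist ({right_wrist.x:.2f}, {right_wrist.y:.2f})")
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
```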
A beautiful collage mural celebrates Dallas’s Deep Ellum
Syncopated AI nightmare fuel
I’ve obviously been talking a ton about the crazy-powerful, sometimes eerie StyleGAN2 technology. Here’s a case of generative artist Mario Klingemann wiring visuals to characteristics of music:
Watch it at 1/4 speed if you really want to freak yourself out.
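Mario hasn’t shared this particular patch, but the usual recipe for beats-to-visuals work is to map some audio feature onto a path through the GAN’s latent space. Here’s a rough sketch of just that mapping step; the file name, latent size, and two endpoint latents are placeholders, and you’d still need to render each latent through your GAN of choice:

```python
import librosa
import numpy as np

# Drive a latent-space walk with the music's onset strength (how hard each
# beat hits). "track.wav" is a placeholder path.
audio, sr = librosa.load("track.wav", sr=22050)
onset_env = librosa.onset.onset_strength(y=audio, sr=sr)
onset_env = onset_env / (onset_env.max() + 1e-8)   # normalize to 0..1

n_frames = len(onset_env)                # one latent per analysis frame
z_calm = np.random.randn(512)            # latent the visuals rest at
z_excited = np.random.randn(512)         # latent the visuals lunge toward on loud hits

frame_latents = np.stack([
    (1 - a) * z_calm + a * z_excited for a in onset_env
])
# Each row is a 512-d latent; feed frame_latents[i] to whichever generator
# you're driving (StyleGAN2, BigGAN, ...) and write the outputs out as video frames.
print(frame_latents.shape)               # (n_frames, 512)
```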
Beats-to-visuals gives me an excuse to dig up & reshare Michel Gondry’s brilliant old Chemical Brothers video that associated elements like bridges, posts, and train cars with the various instruments at play:
Back to Mario: he’s also been making weirdly bleak image descriptions using CLIP (the same model we’ve explored using to generate faces via text). I congratulated him on making a robot sound like Werner Herzog. 🙃
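A side note for tinkerers: CLIP doesn’t write captions by itself; it scores how well a piece of text matches an image. So one simple way to get unsettling “descriptions” is to let it rank a pool of candidate phrases you supply. A quick sketch (the image path and candidate phrases are mine, not Mario’s method):

```python
import torch
import clip  # OpenAI's CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)  # placeholder path
candidates = [
    "a joyful family picnic",
    "an abandoned carnival at dusk",
    "the last photograph ever taken of them",
]
text = clip.tokenize(candidates).to(device)

with torch.no_grad():
    # CLIP returns image-text similarity logits; softmax turns them into a ranking.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

for caption, p in sorted(zip(candidates, probs.tolist()), key=lambda t: -t[1]):
    print(f"{p:.2f}  {caption}")
```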

A painterly, AI-hallucinated music video
The Avalanches used ArtBreeder (see previous) to make this trippy, ever-morphing music video.
This provides me a periodic reminder that I’ve never seen What Dreams May Come & should someday fix that.


Artbreeder is wild
Artbreeder is a trippy project that lets you “simply keep selecting the most interesting image to discover totally new images. Infinitely new random ‘children’ are made from each image. Artbreeder turns the simple act of exploration into creativity.” Check out interactive remixing:
Artbreeder is a nuclear powered pencil.
have a spin remixing some of mine here: https://t.co/pjeX7PNcgC #artbreeder #ai #ganbreeder #conceptart #comics pic.twitter.com/zxGLculJtA
— Bay Raitt (@bayraitt) September 17, 2019
Here’s an overview of how it works:
Generative Adversarial Networks are the main technology enabling Artbreeder. Artbreeder uses BigGAN and StyleGAN models. There is a minimal open source version available that uses BigGAN.
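If you want to poke at the “breeding” mechanic outside the site, here’s a rough sketch using the open pytorch-pretrained-biggan port rather than Artbreeder’s own stack; the class names, truncation value, and output path are just my choices. The “child” is simply a blend of its parents’ class and latent vectors:

```python
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, convert_to_images)

# Load a pretrained BigGAN generator (the same family of model Artbreeder uses).
model = BigGAN.from_pretrained('biggan-deep-256')

# Two "parents": a class vector and a truncated latent for each.
class_vec = torch.from_numpy(one_hot_from_names(['goldfish', 'daisy'], batch_size=2))
noise = torch.from_numpy(truncated_noise_sample(truncation=0.4, batch_size=2))

# The "child" sits halfway between its parents in both class and latent space.
child_class = class_vec.mean(dim=0, keepdim=True)
child_noise = noise.mean(dim=0, keepdim=True)

with torch.no_grad():
    out = model(child_noise, child_class, truncation=0.4)

convert_to_images(out)[0].save('child.png')
```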
Using AI to create Disney- & Pixar-style caricatures
I find this emerging space so fascinating. Check out how Toonify.photos (which you can use for free, or at high quality for a very modest fee) can turn one’s image into a cartoon character. It leverages training data based on iconic illustration styles:
I also chuckled at this illustration from the video above, as it endeavors to show how two networks (the “adversaries” in “Generative Adversarial Network”) attempt, respectively, to fool the other with their output & to avoid being fooled. Check out more details in the accompanying article.
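Since that tug-of-war is the whole trick, here’s about the smallest GAN I can write that plays it out: a generator learning to pass off fake samples as draws from a 1-D Gaussian while a discriminator learns not to be fooled. A toy sketch, obviously, not anything from the video:

```python
import torch
import torch.nn as nn

# G maps 8-d noise to a single number; D guesses whether a number is "real".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # the "real data": N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # D learns to avoid being fooled: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # G learns to fool D: it wants D(fake) to come out near 1.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward ~3.0
```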

Of 3D Pets & Molten Pups
The Epic team behind the hyper-realistic, Web-hosted MetaHuman Creator—which is now available for early access—rolled out the tongue-in-cheek “MetaPet Creator” for April Fool’s. Artist Jelena Jovanovic offers a peek behind the scenes.
Elsewhere I put my pal Seamus (who’s presently sawing logs on the couch next to me) through NVIDIA’s somewhat wacky GANimal prototype app, attempting to mutate him into various breeds—with semi-Brundlefly results. 👀

A Man, A Plan, A StyleGAN…
On Monday I mentioned my new team’s mind-blowing work to enable image synthesis through typing, and I noted that it builds on NVIDIA’s StyleGAN research. If you’re interested in the latter, check out this two-minute demo of how it enables amazing interactive generation of stylized imagery:
This new project, StyleGAN2, developed by NVIDIA Research and presented at CVPR 2020, uses transfer learning to produce a seemingly infinite number of portraits in an infinite variety of painting styles. The work builds on the team’s previously published StyleGAN project. Learn more here.
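If you’re wondering what makes StyleGAN “style”-GAN: a mapping network turns random noise into a style vector, and every layer of the synthesis network is modulated by that vector, which is why you can cross two faces by giving the coarse layers one image’s styles and the fine layers another’s. A purely conceptual sketch of that bookkeeping (my toy mapping network, not NVIDIA’s code):

```python
import torch
import torch.nn as nn

# Toy mapping network: z (random noise) -> w (a "style" vector).
mapping = nn.Sequential(nn.Linear(512, 512), nn.LeakyReLU(0.2),
                        nn.Linear(512, 512), nn.LeakyReLU(0.2))

z_a, z_b = torch.randn(1, 512), torch.randn(1, 512)
w_a, w_b = mapping(z_a), mapping(z_b)

num_layers = 14   # e.g. a 256x256 generator has 14 style inputs
crossover = 6     # coarse layers (pose, layout) come from A; fine layers (texture, color) from B

# Style mixing: hand each synthesis layer one parent's style or the other's.
per_layer_styles = [w_a if i < crossover else w_b for i in range(num_layers)]

# A real synthesis network would consume one style per layer; here we just
# show which parent each layer would inherit from.
print(["A" if s is w_a else "B" for s in per_layer_styles])
```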
