Right at the start of my career, I had the chance to draw some simple Peanuts animations for MetLife banner ads. The cool thing is that back then, Charles Schulz himself had to approve each use of his characters—and I’m happy to say he approved mine. 😌 (For the record, as I recall it featured Linus’s hair flying up as he was surprised.)
In any event, here’s a fun tutorial commissioned by Apple:
As Kottke notes, “They’ve even included a PDF of drawing references to make it easier.” Fortunately you don’t have to do the whole thing in 35 seconds, à la Schulz himself:
“Viewfinder” is a charming animation about exploring the outdoors from the Seoul-based studio VCRWORKS. The second episode in the recently launched Rhythmens series, the peaceful short follows a central character on a hike in a springtime forest and frames their whimsically rendered finds through the lens of a camera.
“A nuclear-powered pencil”: that’s how someone recently described Artbreeder, and the phrase comes to mind for NVIDIA Canvas, a new prototype app you can download (provided you have Windows & a beefy GPU) and use to draw in some trippy new ways:
Paint simple shapes and lines with a palette of real world materials, like grass or clouds. Then, in real-time, our revolutionary AI model fills the screen with show-stopping results.
Don’t like what you see? Swap a material, changing snow to grass, and watch as the entire image changes from a winter wonderland to a tropical paradise. The creative possibilities are endless.
What an incredible labor of love this must have been to stitch & animate:
Our most ridiculously labor-intensive animation ever! The traditional Passover folk song rendered in embroidermation by Nina Paley and Theodore Gray. These very same embroidered matzoh covers are available for purchase here.
I’ve always been part of that weird little slice of the Adobe user population that gets really hyped about offbeat painting tools—from stretching vectors along splines & spraying out fish in Illustrator (yes, they’re both in your copy right now; no, you’ve never used them).
In that vein, I dig what Erik Natzke & co. have explored:
This one’s even trippier:
Here’s a quick tutorial on how to make your own brush via Adobe Capture:
And here are the multicolor brushes added to Adobe Fresco last year:
On an epic dog walk this morning, Old Man Nack™ took his son through the long & winding history of Intel vs. Motorola, x86 vs. PPC, CISC vs. RISC, toasted bunny suits, the shock of Apple’s move to Intel (Marklar!), and my lasting pride in delivering the Photoshop CS3 public beta to give Mac users native performance six months early.
As luck would have it, Adobe has some happy news to share about the latest hardware evolution:
Today, we’re thrilled to announce that Illustrator and InDesign will run natively on Apple Silicon devices. While users have been able to continue to use the tools on M1 Macs during this period, today’s development means a considerable boost in speed and performance. Overall, Illustrator users will see a 65 percent increase in performance on an M1 Mac, versus Intel builds — InDesign users will see similar gains, with a 59 percent improvement on overall performance on Apple Silicon. […]
These releases will start to roll out to customers starting today and will be available to all customers across the globe soon.
A few weeks ago I mentioned Toonify, an online app that can render your picture in a variety of cartoon styles. Researchers are busily cranking away to improve upon it, and the new AgileGAN promises better results & the ability to train models via just a few inputs:
Our approach provides greater agility in creating high quality and high resolution (1024×1024) portrait stylization models, requiring only a limited number of style exemplars (∼100) and short training time (∼1 hour).
There are lots of fun details here, from the evolution of the “potato-chip lip,” to how lines & shapes evolved to let characters rotate more easily in space, to hundreds of pages of documentation on exactly how hair & eyes should work, and more.
Years ago my friend Matthew Richmond (Chopping Block founder, now at Adobe) would speak admiringly of “math-rock kids” who could tinker with code to expand the bounds of the creative world. That phrase came to mind seeing this lovely little exploration from Derrick Schultz:
You’ll scream, you’ll cry, promises designer Dave Werner—and maybe not due just to “my questionable dance moves.”
Live-perform 2D character animation using your body. Powered by Adobe Sensei, Body Tracker automatically detects human body movement using a web cam and applies it to your character in real time to create animation. For example, you can track your arms, torso, and legs automatically. View the full release notes.
I’ve obviously been talking a ton about the crazy-powerful, sometimes eerie StyleGAN2 technology. Here’s a case of generative artist Mario Klingemann wiring visuals to characteristics of music:
Watch it at 1/4 speed if you really want to freak yourself out.
Beats-to-visuals gives me an excuse to dig up & reshare Michel Gondry’s brilliant old Chemical Brothers video that associated elements like bridges, posts, and train cars with the various instruments at play:
Back to Mario: he’s also been making weirdly bleak image descriptions using CLIP (the same model we’ve explored using to generate faces via text). I congratulated him on making a robot sound like Werner Herzog. 🙃
Artbreeder is a trippy project that lets you “simply keep selecting the most interesting image to discover totally new images. Infinitely new random ‘children’ are made from each image. Artbreeder turns the simple act of exploration into creativity.” Check out interactive remixing:
I find this emerging space so fascinating. Check out how Toonify.photos (which you can use for free, or at high quality for a very modest fee) can turn one’s image into a cartoon character. It leverages training data based on iconic illustration styles:
I also chuckled at this illustration from the video above, as it endeavors to show how two networks (the “adversaries” in “Generative Adversarial Network”) attempt, respectively, to fool the other with generated output & to avoid being fooled. Check out more details in the accompanying article.
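If you’re curious about the mechanics behind that fooling-vs.-not-being-fooled tug-of-war, it boils down to two opposing loss functions. Here’s a minimal plain-Python sketch of the classic GAN objective — my own illustration with made-up probability values, not code from any of the projects above — where `d_real` and `d_fake` are the discriminator’s estimates that a real and a generated image, respectively, are real:

```python
import math

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real images scored near 1 (d_real -> 1)
    # and generated images scored near 0 (d_fake -> 0).
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants the discriminator fooled (d_fake -> 1).
    return -math.log(d_fake)

# Early in training the discriminator easily spots fakes (d_fake = 0.1),
# so the generator's loss is high; as the generator improves (d_fake = 0.9),
# its loss shrinks while the discriminator's grows -- the adversarial dance.
assert generator_loss(0.9) < generator_loss(0.1)
assert discriminator_loss(0.95, 0.1) < discriminator_loss(0.95, 0.9)
```

Each network’s win is the other’s loss, which is what pushes both to keep improving until the fakes get eerily convincing.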
Elsewhere I put my pal Seamus (who’s presently sawing logs on the couch next to me) through NVIDIA’s somewhat wacky GANimal prototype app, attempting to mutate him into various breeds—with semi-Brundlefly results. 👀
On Monday I mentioned my new team’s mind-blowing work to enable image synthesis through typing, and I noted that it builds on NVIDIA’s StyleGAN research. If you’re interested in the latter, check out this two-minute demo of how it enables amazing interactive generation of stylized imagery:
This new project, called StyleGAN2, developed by NVIDIA Research and presented at CVPR 2020, uses transfer learning to produce seemingly infinite numbers of portraits in an infinite variety of painting styles. The work builds on the team’s previously published StyleGAN project. Learn more here.
It’s cool to see these mobile creativity apps Voltron-ing together via the new Adobe Design Mobile Bundle, which includes the company’s best design apps for the iPad at 50% off when purchased together. Per the site:
Photoshop: Edit, composite, and create beautiful images, graphics, and art.
Illustrator: Create beautiful vector art and illustrations.
Fresco: Draw and paint with thousands of natural brushes.
Spark Post: Make stunning social graphics — in seconds.
Creative Cloud: Mobile access to your Creative Cloud assets, livestreams, and learn content.
Then, there are live oil brushes in Fresco that you just don’t get in any other app. In Fresco, today, you can replicate the look of natural media like oils, watercolors and charcoal — soon you’ll be able to add motion as well! We showed a sneak peek at the workshop, and it blew people’s minds.
Apropos of nothing, check out 60 lovingly rendered seconds commissioned by YouTube:
Maciej Kuciara writes,
MECHA – the love letter to our youth. Watching anime classics as kids left a mark that stayed with us to this day. So we felt it’s due to time to celebrate our love to mecha and pay proper homage with this piece we did for YouTube.
What happens if you train an ML model from hundreds of thousands of 2D renders of 3D creature models? Glad you asked!
Today, we present Chimera Painter, a trained machine learning (ML) model that automatically creates a fully fleshed out rendering from a user-supplied creature outline. Employed as a demo application, Chimera Painter adds features and textures to a creature outline segmented with body part labels, such as “wings” or “claws”, when the user clicks the “transform” button.
My longstanding dream (dating back to the Bush Administration!) to have face relighting in Photoshop has finally come true—and then some. In case you missed it last week, check out Conan O’Brien meeting machine learning via Photoshop:
On PetaPixel, Allen Murabayashi from PhotoShelter shows what it can do on a portrait of Joe Biden—presenting this power as a potential cautionary tale:
Here’s a more in-depth look (starting around 1:46) at controlling the feature, courtesy of NVIDIA, whose StyleGAN tech powers it:
I love the fact that the Neural Filters plug-in provides a playground within Photoshop for integrating experimental new tech. Who knows what else might spring from Adobe-NVIDIA collaboration—maybe scribbling to create a realistic landscape, or even swapping expressions among pets (!?):
Man, these are stunning—and they’re all done in camera:
First coated in black, the anonymous subjects in Tim Tadder’s portraits are cloaked with hypnotic swirls and thick drips of bright paint. To create the mesmerizing images, the Encinitas, California-based photographer and artist pours a mix of colors over his sitters and snaps a precisely-timed shot to capture each drop as it runs down their necks or splashes from their chins.
Seems like the Illustrator feature sneak-peeked last year is about to be released. Of course, I wouldn’t be a salty B 🙃 if I didn’t slip in some reference to all this going back ~15 years to Adobe Kuler & Illustrator Live Color & whatnot. (Okay, there—personal brand promise kept!)
Okay, not wars—how about enamel pins? Color me a little skeptical that the augmented reality portion of these pins will get much use, but hey, if it’s just a nice little bonus on something people already wanted, what the heck?
Loath as I am to have Pepe the Frog appearing on my blog, this new documentary—which muses on everything from meme culture & nihilism to artistic ownership & meaning—sounds pretty interesting, and some of the animation is beautiful. The trailer’s worth a look:
They still exist, quarantine notwithstanding, and Worldgrapher + Visual Suspect have made some trippy, beautiful footage showing some:
Made by simply cropping and duplicating real footage, the dizzying video twists and turns through complex interchanges that are repeated in patterns and emblazoned with headlights and the city’s glow. Many of the shots descend into the center of the transportation systems, glimpsing the moving cars and traffic lights.
Shapr3D is an iPad drawing app that lets you create 3D drawings without having to use a desktop computer or CAD software. Designs created in this “pro-level” tool are compatible with major CAD file formats and support instant exports for 3D printing.
They’ve achieved this by treating each facial feature locally first, and then the face as a whole, basically assigning a probability to each feature. That way you don’t need a professional sketch to generate a realistic-looking image, but the better the sketch, the better and more accurate the results become. What’s more, the software can work in near-real-time.
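As a toy illustration of that local-first idea (my own sketch, not the researchers’ actual code; the `face_score` function and feature names are hypothetical): suppose the matching stage yields a confidence between 0 and 1 for each facial feature. One simple way to roll those up into a whole-face score is a geometric mean, so every feature contributes and a single badly drawn feature drags the result down without zeroing it out:

```python
import math

def face_score(feature_confidences):
    """Combine per-feature confidences (values in 0..1) into one
    whole-face plausibility score via a geometric mean, so a weak
    feature lowers the result without dominating it entirely."""
    logs = [math.log(max(c, 1e-9)) for c in feature_confidences.values()]
    return math.exp(sum(logs) / len(logs))

# A cleaner sketch yields higher per-feature confidences,
# and therefore a higher overall score:
rough = {"eyes": 0.5, "nose": 0.6, "mouth": 0.4, "jaw": 0.5}
clean = {"eyes": 0.9, "nose": 0.9, "mouth": 0.8, "jaw": 0.9}
assert face_score(clean) > face_score(rough)
```

That “better sketch in, better face out” gradient is exactly what makes the tool forgiving for amateurs while still rewarding skilled artists.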
TBH I’m a little nonplussed about the specific effects shown here, but I remain intrigued by the idea of a highly accessible, results-oriented app that could also generate layered imagery for further tweaking in Photoshop and other more flexible tools.
The main goal of apps like this might simply be to introduce more people to the Adobe ecosystem. Adobe CTO Abhay Parasnis said as much in an interview with The Verge, in which he calls Photoshop Camera “the next one in that journey for us.” Photoshop Camera could act as the “gateway drug” to a Creative Cloud subscription for anybody who discovers a dormant love of photo editing.
With the exception of a single yellow chair, it appears as though every visual shown during the performance was generated in post. What really sells the performance, however, is the choreography. Throughout the entirety of the performance, Perry reacts and responds to every visual element shown “on-stage”.