Literally! I love this kind of minimal yet visually rich work.
Okay, I still don’t understand the math here—but I feel closer now! Freya Holmér has done a beautiful job of visualizing the core workings of what’s a core ingredient in myriad creative applications:
FaceMix offers a rather cool way to create a face by mixing together up to four individually editable images, which you can upload or select from a set of presets. The 30-second tour:
Here’s a more detailed look into how it works:
The New York Public Library has shared some astronomical drawings by E.L. Trouvelot done in the 1870s, comparing them to contemporary NASA images. They write,
Trouvelot was a French immigrant to the US in the 1800s, and his job was to create sketches of astronomical observations at Harvard College’s observatory. Building off of this sketch work, Trouvelot decided to do large pastel drawings of “the celestial phenomena as they appear…through the great modern telescopes.”
Ever since working on a Halloween face-painting feature for Google Photos some seven years ago (sort of ur-AR), I’ve been occasionally updating a Pinterest board full of interesting augmentations done to human faces. I’ve particularly admired the work of Yulia Brodskaya, a master of paper quilling. Here’s a quick look into her world:
Heh—my Adobe video eng teammate Eric Sanders passed along this fun poster (artist unknown):
It reminds me of a silly thing I made years ago when our then-little kids had a weird fixation on light fixtures. Oddly enough, this remains the one & presumably only piece of art I’ll ever get to show Matt Groening, as I got to meet him at dinner with Lynda Weinman back then. (Forgive the name drop; I have so few!)
I’m a huge & longtime fan of Chop Shop’s beautiful space-tech illustrations, so I’m excited to see them kicking off a new Kickstarter campaign:
Right at the start of my career, I had the chance to draw some simple Peanuts animations for MetLife banner ads. The cool thing is that back then, Charles Schulz himself had to approve each use of his characters—and I’m happy to say he approved mine. 😌 (For the record, as I recall, it featured Linus’s hair flying up as he was surprised.)
In any event, here’s a fun tutorial commissioned by Apple:
As Kottke notes, “They’ve even included a PDF of drawing references to make it easier.” Fortunately you don’t have to do the whole thing in 35 seconds, à la Schulz himself:
Generative artist Glenn Marshall has used CLIP + VQGAN to send Radiohead down a rather Lovecraftian rabbit hole:
This is the super chill content I needed right now. 😌
“Viewfinder” is a charming animation about exploring the outdoors from the Seoul-based studio VCRWORKS. The second episode in the recently launched Rhythmens series, the peaceful short follows a central character on a hike in a springtime forest and frames their whimsically rendered finds through the lens of a camera.
You can find another installment on their Vimeo page.
“A nuclear-powered pencil”: that’s how someone recently described ArtBreeder, and the phrase comes to mind for NVIDIA Canvas, a new prototype app you can download (provided you have Windows & a beefy GPU) and use to draw in some trippy new ways:
Paint simple shapes and lines with a palette of real world materials, like grass or clouds. Then, in real-time, our revolutionary AI model fills the screen with show-stopping results.
Don’t like what you see? Swap a material, changing snow to grass, and watch as the entire image changes from a winter wonderland to a tropical paradise. The creative possibilities are endless.
What an incredible labor of love this must have been to stitch & animate:
Our most ridiculously labor-intensive animation ever! The traditional Passover folk song rendered in embroidermation by Nina Paley and Theodore Gray. These very same embroidered matzoh covers are available for purchase here.
[Via Christa Mrgan]
As I noted last year,
I’ve always been part of that weird little slice of the Adobe user population that gets really hyped about offbeat painting tools—from stretching vectors along splines & spraying out fish in Illustrator (yes, they’re both in your copy right now; no, you’ve never used them).
In that vein, I dig what Erik Natzke & co. have explored:
This one’s even trippier:
Here’s a quick tutorial on how to make your own brush via Adobe Capture:
And here are the multicolor brushes added to Adobe Fresco last year:
On an epic dog walk this morning, Old Man Nack™ took his son through the long & winding history of Intel vs. Motorola, x86 vs. PPC, CISC vs. RISC, toasted bunny suits, the shock of Apple’s move to Intel (Marklar!), and my lasting pride in delivering the Photoshop CS3 public beta to give Mac users native performance six months early.
As luck would have it, Adobe has some happy news to share about the latest hardware evolution:
Today, we’re thrilled to announce that Illustrator and InDesign will run natively on Apple Silicon devices. While users have been able to continue to use the tool on M1 Macs during this period, today’s development means a considerable boost in speed and performance. Overall, Illustrator users will see a 65 percent increase in performance on an M1 Mac, versus Intel builds — InDesign users will see similar gains, with a 59 percent improvement on overall performance on Apple Silicon. […]
These releases will start to roll out to customers starting today and will be available to all customers across the globe soon.
Check out the post for full details.
A few weeks ago I mentioned Toonify, an online app that can render your picture in a variety of cartoon styles. Researchers are busily cranking away to improve upon it, and the new AgileGAN promises better results & the ability to train models via just a few inputs:
Our approach provides greater agility in creating high quality and high resolution (1024×1024) portrait stylization models, requiring only a limited number of style exemplars (∼100) and short training time (∼1 hour).
There are lots of fun details here, from the evolution of the “potato-chip lip,” to how lines & shapes evolved to let characters rotate more easily in space, to hundreds of pages of documentation on exactly how hair & eyes should work, and more.
Generative artist Nathan Shipley has been doing some amazing work with GANs, and he recently collaborated with BMW to use projection mapping to turn a new car into a dynamic work of art:
I’ve long admired the Art Cars series, with a particular soft spot for Jenny Holzer’s masterfully disconcerting PROTECT ME FROM WHAT I WANT:
Here’s a great overview of the project’s decades of heritage, including a dive into how Andy Warhol adorned what may be the most valuable car in the world—painting on it at lightning speed:
Years ago my friend Matthew Richmond (Chopping Block founder, now at Adobe) would speak admiringly of “math-rock kids” who could tinker with code to expand the bounds of the creative world. That phrase came to mind seeing this lovely little exploration from Derrick Schultz:
Here it is in high res:
You’ll scream, you’ll cry, promises designer Dave Werner—and maybe not just due to “my questionable dance moves.”
Live-perform 2D character animation using your body. Powered by Adobe Sensei, Body Tracker automatically detects human body movement using a web cam and applies it to your character in real time to create animation. For example, you can track your arms, torso, and legs automatically. View the full release notes.
Check out the demo below & the site for full details.
I love this piece from artist Tristan Eaton, celebrating Dallas’s historic Deep Ellum neighborhood:
I’ve obviously been talking a ton about the crazy-powerful, sometimes eerie StyleGAN2 technology. Here’s a case of generative artist Mario Klingemann wiring visuals to characteristics of music:
Watch it at 1/4 speed if you really want to freak yourself out.
Beats-to-visuals gives me an excuse to dig up & reshare Michel Gondry’s brilliant old Chemical Brothers video that associated elements like bridges, posts, and train cars with the various instruments at play:
Back to Mario: he’s also been making weirdly bleak image descriptions using CLIP (the same model we’ve explored using to generate faces via text). I congratulated him on making a robot sound like Werner Herzog. 🙃
Artbreeder is a trippy project that lets you “simply keep selecting the most interesting image to discover totally new images. Infinitely new random ‘children’ are made from each image. Artbreeder turns the simple act of exploration into creativity.” Check out interactive remixing:
Artbreeder is a nuclear powered pencil.
— Bay Raitt (@bayraitt) September 17, 2019
Here’s an overview of how it works:
I find this emerging space so fascinating. Check out how Toonify.photos (which you can use for free, or at high quality for a very modest fee) can turn one’s image into a cartoon character. It leverages training data based on iconic illustration styles:
I also chuckled at this illustration from the video above, as it endeavors to show how the two networks (the “adversaries” in “Generative Adversarial Network”) attempt, respectively, to fool the other with their output & to avoid being fooled. Check out more details in the accompanying article.
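If you’re curious what that cat-and-mouse game looks like in code, here’s a deliberately tiny sketch of the adversarial training loop—a 1-D toy, not anything from Toonify or the article. The setup (a scalar “generator,” a logistic “discriminator,” the data mean of 4.0, and all learning rates) is invented purely to illustrate the alternating fool/detect updates:

```python
# Toy 1-D GAN: a generator learns to mimic "real" data centered at 4.0
# while a logistic discriminator tries to tell real samples from fakes.
# All numbers here are arbitrary choices for illustration only.
import math
import random

def sigmoid(u):
    u = max(-60.0, min(60.0, u))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-u))

random.seed(0)

REAL_MEAN = 4.0     # the "true" data distribution: N(4, 0.5)
theta = 0.0         # generator parameter: fakes are theta + noise
w, b = 0.0, 0.0     # discriminator: D(x) = sigmoid(w*x + b)
lr_d, lr_g = 0.05, 0.05

for step in range(5000):
    real = REAL_MEAN + random.gauss(0, 0.5)
    fake = theta + random.gauss(0, 0.5)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient descent on -log D(real) - log(1 - D(fake))).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w -= lr_d * (-(1 - d_real) * real + d_fake * fake)
    b -= lr_d * (-(1 - d_real) + d_fake)

    # Generator step: try to fool D by pushing D(fake) toward 1
    # (gradient descent on the non-saturating loss -log D(fake)).
    fake = theta + random.gauss(0, 0.5)
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * (1 - d_fake) * w

print(round(theta, 2))  # theta drifts from 0 toward the real mean of 4
```

The two updates pull in opposite directions—the discriminator sharpens its decision boundary, and the generator chases it—which is exactly the dynamic the illustration plays for laughs.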
The Epic team behind the hyper-realistic, Web-hosted MetaHuman Creator—which is now available for early access—rolled out the tongue-in-cheek “MetaPet Creator” for April Fools’ Day. Artist Jelena Jovanovic offers a peek behind the scenes.
Elsewhere I put my pal Seamus (who’s presently sawing logs on the couch next to me) through NVIDIA’s somewhat wacky GANimal prototype app, attempting to mutate him into various breeds—with semi-Brundlefly results. 👀
On Monday I mentioned my new team’s mind-blowing work to enable image synthesis through typing, and I noted that it builds on NVIDIA’s StyleGAN research. If you’re interested in the latter, check out this two-minute demo of how it enables amazing interactive generation of stylized imagery:
This new project, called StyleGAN2, was developed by NVIDIA Research and presented at CVPR 2020; it uses transfer learning to produce seemingly infinite numbers of portraits in an infinite variety of painting styles. The work builds on the team’s previously published StyleGAN project. Learn more here.
It’s cool to see these mobile creativity apps Voltron-ing together via the new Adobe Design Mobile Bundle, which includes the company’s best design apps for the iPad at 50% off when purchased together. Per the site:
- Photoshop: Edit, composite, and create beautiful images, graphics, and art.
- Illustrator: Create beautiful vector art and illustrations.
- Fresco: Draw and paint with thousands of natural brushes.
- Spark Post: Make stunning social graphics — in seconds.
- Creative Cloud: Mobile access to your Creative Cloud assets, livestreams, and learn content.
More good stuff is coming to Fresco soon, too:
Then, there are live oil brushes in Fresco that you just don’t get in any other app. In Fresco, today, you can replicate the look of natural media like oils, watercolors and charcoal — soon you’ll be able to add motion as well! We showed a sneak peek at the workshop, and it blew people’s minds.
Dan Harvey and Heather Ackroyd make insanely large portraits using a giant darkroom that governs where light falls & therefore how grass grows.
With the right care & feeding, the portraits can, in principle, last indefinitely.
I’m excited to see my Adobe friends continuing to make on-the-go sketching & illustrating richer & more delightful. Check out a brief demo of some of the latest:
Grainy & surreal, this film demanded “elaborate but pointless effort” (in the words of creator Soetkin Verstegen) to animate puppets encased in transient ice. I found it mesmerizing.
Stay cozy, friends.
Apropos of nothing, check out 60 lovingly rendered seconds commissioned by YouTube:
Maciej Kuciara writes,
MECHA – the love letter to our youth. Watching anime classics as kids left a mark that stayed with us to this day. So we felt it’s due to time to celebrate our love to mecha and pay proper homage with this piece we did for YouTube.
What happens if you train an ML model on hundreds of thousands of 2D renders of 3D creature models? Glad you asked!
Today, we present Chimera Painter, a trained machine learning (ML) model that automatically creates a fully fleshed out rendering from a user-supplied creature outline. Employed as a demo application, Chimera Painter adds features and textures to a creature outline segmented with body part labels, such as “wings” or “claws”, when the user clicks the “transform” button.
Enjoy some mouthwatering Uncanny Valley Ranch:
Using an AI-based framework called Pixel2Style2Pixel and searching for faces in a dataset harvested from Flickr, Nathan Shipley made some more photorealistic faces for Pixar characters.
Kottke goes on to say,
In response to a reader suggestion, Shipley fed the generated image for Dash back into the system and this happened:
My longstanding dream (dating back to the Bush Administration!) to have face relighting in Photoshop has finally come true—and then some. In case you missed it last week, check out Conan O’Brien meeting machine learning via Photoshop:
— scott belsky (@scottbelsky) October 21, 2020
On PetaPixel, Allen Murabayashi from PhotoShelter shows what it can do on a portrait of Joe Biden—presenting this power as a potential cautionary tale:
I love the fact that the Neural Filters plug-in provides a playground within Photoshop for integrating experimental new tech. Who knows what else might spring from Adobe-NVIDIA collaboration—maybe scribbling to create a realistic landscape, or even swapping expressions among pets (!?):
New York magazine commissioned dozens of artists to create “I Voted” stickers, and you can see them in their Oct. 26 issue. I like the Shepard Fairey one enough to make it my Twitter avatar.
Man, these are stunning—and they’re all done in camera:
First coated in black, the anonymous subjects in Tim Tadder’s portraits are cloaked with hypnotic swirls and thick drips of bright paint. To create the mesmerizing images, the Encinitas, California-based photographer and artist pours a mix of colors over his sitters and snaps a precisely-timed shot to capture each drop as it runs down their necks or splashes from their chins.
I’m excited to see the tech my team has built into YouTube, Duo, and other apps land in Arts & Culture, powering five new fun experiences:
Snap a video or image of yourself to become Van Gogh or Frida Kahlo’s self-portraits, or the famous Girl with a Pearl Earring. You can also step deep into history with a traditional Samurai helmet or a remarkable Ancient Egyptian necklace.
Okay, not wars—how about enamel pins? Color me a little skeptical that the augmented reality portion of these pins will get much use, but hey, if it’s just a nice little bonus on something people already wanted, what the heck?
This video showing the relative size of stars & the reasons for these phenomena is just so entirely charming.
And how great that it’s from an animation shop named Kurzgesagt (German for “in a nutshell”). I must know more of these peeps.
Offering me a cookie, are you, German website? “Akzeptieren,” jawohl!
Loath as I am to have Pepe the Frog appearing on my blog, this new documentary—which muses on everything from meme culture & nihilism to artistic ownership & meaning—sounds pretty interesting, and some of the animation is beautiful. The trailer’s worth a look:
I dig it, though everything depicted is much more AR than what I think of as Photoshop—and I’d love to live in a world where AR & spatial effects are this delightfully easy to create.
Hey, remember highways?
Made by simply cropping and duplicating real footage, the dizzying video twists and turns through complex interchanges that are repeated in patterns and emblazoned with headlights and the city’s glow. Many of the shots descend into the center of the transportation systems, glimpsing the moving cars and traffic lights.