One nice, cheeky quirk of Google is the ability to write one’s own epitaph upon departing, slapping a few words of sometimes salty wisdom on the out door. My former colleague Hodie Meyers bugged out just ahead of me & dropped a sarcastic fistful of Despair.com-worthy gems:
Do things because they are possible
Do many things at once and try to spread yourself thin
Build the complete system before evaluating the idea. Call it MVP anyways
Never let client feedback or user research distract you from your intuition
And remember: It’s always more important that you launch something than that you create true value for your users and customers
Hey all—greetings from somewhere in the great American west, which I’m happily exploring with my wife, kids, and dog. Being an obviously crazy person, I can’t just, y’know, relax and stop posting for a while, but you may notice that my cadence here drops for a few days.
In the meantime, I’ll try to gather up some good stuff to share. Here’s a shot I captured while flying over the Tehachapi Loop on Friday (best when viewed full screen).
Just for fun, here’s a different rendering of the same file (courtesy of running the Mavic Pro’s 360º stitch through Insta360 Studio):
And, why not, here’s another shot of the trains in action. I can’t wait to get some time to edit & share the footage.
“A nuclear-powered pencil”: that’s how someone recently described ArtBreeder, and the phrase comes to mind for NVIDIA Canvas, a new prototype app you can download (provided you have Windows & a beefy GPU) and use to draw in some trippy new ways:
Paint simple shapes and lines with a palette of real world materials, like grass or clouds. Then, in real-time, our revolutionary AI model fills the screen with show-stopping results.
Don’t like what you see? Swap a material, changing snow to grass, and watch as the entire image changes from a winter wonderland to a tropical paradise. The creative possibilities are endless.
Psst, hey, Russell Brown, tell me again when we’re taking our Pixels to the desert… 😌✨
Pixel owners love using astrophotography in Night Sight to take incredible photos of the night sky, and now it’s getting even better. You can now create videos of the stars moving across the sky all during the same exposure. Once you take a photo in Night Sight, both the photo and video will be saved in your camera roll. Try waiting longer to capture even more of the stars in your video. This feature is available on Pixel 4 and newer phones and you can learn more at g.co/pixel/astrophotography.
Extremely old, never-say-die (but good-natured) sibling rivalry: “Hey, 2008 called, and it wants its Photoshop feature back!” 🙃 I kid, though, and I’m happy to see illustrators getting this nice, smoothly rendering feature. Here’s a 1-minute tour:
Even if I weren’t, to my surprise, watching the Netflix series The Punisher and liking it way more than I expected, I’d be a sucker for this kind of beautiful title sequence:
I have the show to thank for introducing me to this brutal Tom Waits banger, which comes equipped with its own surrealist nightmare of a video:
What an incredible labor of love this must have been to stitch & animate:
Our most ridiculously labor-intensive animation ever! The traditional Passover folk song rendered in embroidermation by Nina Paley and Theodore Gray. These very same embroidered matzoh covers are available for purchase here.
Netflix and Adobe are partnering to introduce The Great Untold, a short film competition meets a road trip across America. The next generation of creators are invited to submit their story idea in the form of a movie trailer via TikTok, for a chance to win a cash prize and have their work produced in their hometown with the help of Hollywood experts. Submit now: WhatsYourGreatUntold.com
I’m not sure whether the demo animation does the idea justice, as you might reasonably think “Why would I want to scarify a face & then make a computer fill in the gaps?,” but the underlying idea (that the computer can smartly fill holes based on understanding the real-world structure of a scene) seems super compelling.
“Write it in the sky in gossamer teardrops!” as Patton Oswalt might say: Firefly Drone Shows form incredible, ephemeral images via flying freakin’ robots:
LEGO has officially announced the new LEGO adidas Originals Superstar (10282) which will be available starting on July 1. The shoe has 731 pieces and will retail for $79.99. In the ongoing collaboration with adidas, LEGO has recreated the iconic Superstar sneaker in brick form. Instead of the regular LEGO packaging, the set will actually come in a shoebox for authenticity and even the laces on it are real.
I’ve always said that when—not if—I die in a fiery crash alongside Moffett Field, it’ll be because I was rubbernecking at some cool plane or other (e.g. the immense Antonov An-124), and you’ll remember this and say, “Well, he did at least call his shot.”
Suffice it to say I’m a huge plane nerd with a special soft spot for exotic (to me) ex-Soviet aircraft. I therefore especially enjoyed this revealing look into the Tu-22, whose alcohol-based air conditioning system made it a huge hit with aircrews (that is, when it wasn’t killing them via things like its downward-firing ejection seats!). Even if planes aren’t your jam, I think you’ll find the segment on how the alcohol became currency really interesting.
I had a long & interesting talk this week with Erik Natzke, whose multi-disciplinary art (ranging from code to textiles) has inspired me for years. As we were talking through the paths by which one can find a creative solution, he shared this quote from painter Chuck Close:
I thought that using a palette was like shooting an arrow directly at a bull’s-eye. You hope that you make the right decision out of context. But when you shoot it at the bull’s-eye, you hit what you were aiming at. And I thought, as a sports metaphor, golf was a much more interesting way to think about it.
If you think about golf, it’s the only sport—and it’s a little iffy if it’s a sport, although Tiger made it into a sport—in which you move from general to specific in an ideal number of correcting moves. The first stroke is just a leap of faith, you hit it out there; you hope you’re on the fairway. Second one corrects that, the third one corrects that. By the third or fourth you hope that you’re on the green. And at one or two putts, you place that ball in a very specific three-and-a-half inch diameter circle, which you couldn’t even see from the tee. How did you do it? You found it moving through the landscape, making mid-course corrections.
I thought, “This is exactly how I paint.” I tee off in the wrong direction to make it more interesting, now I’ve got to correct like crazy, then I’ve got to correct again. What’s it need? I need some of that. And then four or five or six strokes, I hopefully have found the color world that I want. Then I can sort of celebrate, you know, put that in the scorecard, and move on to the next one.
Bonus: “Is that a face made of meat??” — my 11yo Henry, walking by just now & seeing this image from afar 😛
Photoshop Neural Filters are insanely cool, but right now adjusting any parameter generally takes several seconds of calculation. To make things more interactive, some of my teammates are collaborating with university researchers on an approach that couples cheap-n’-cheerful quality for interactive preview with nicer-but-slower calculation of final results. This is all a work in progress, and I can’t say if/when these techniques will ship in real products, but I’m very glad to see the progress.
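For the curious, the general pattern here is an old one in imaging apps: run a cheap, low-resolution pass while the user drags a slider, then a slow, full-quality pass once they commit. Here’s a minimal sketch in Python — the filter, function names, and downsample factor are all illustrative assumptions, not Adobe’s actual approach:

```python
# Hypothetical sketch of the "fast preview, slow final" pattern.
# A naive 1-D box blur stands in for an expensive neural filter.

def blur(values, radius):
    """Expensive full-quality filter (placeholder for the real model)."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

def preview(values, radius, factor=4):
    """Cheap interactive preview: downsample, filter, upsample."""
    small = values[::factor]                      # crude downsample
    filtered = blur(small, max(1, radius // factor))
    up = []                                       # nearest-neighbor upsample
    for v in filtered:
        up.extend([v] * factor)
    return up[:len(values)]

def final(values, radius):
    """Slow full-resolution pass, run once the user commits a setting."""
    return blur(values, radius)
```

While a slider moves you’d call `preview` on every change (roughly `factor²` less work for the blur), then swap in `final` when the interaction ends — trading momentary accuracy for responsiveness.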
I’ve always been part of that weird little slice of the Adobe user population that gets really hyped about offbeat painting tools—from stretching vectors along splines to spraying out fish in Illustrator (yes, they’re both in your copy right now; no, you’ve never used them).
In that vein, I dig what Erik Natzke & co. have explored:
This one’s even trippier:
Here’s a quick tutorial on how to make your own brush via Adobe Capture:
And here are the multicolor brushes added to Adobe Fresco last year:
On an epic dog walk this morning, Old Man Nack™ took his son through the long & winding history of Intel vs. Motorola, x86 vs. PPC, CISC vs. RISC, toasted bunny suits, the shock of Apple’s move to Intel (Marklar!), and my lasting pride in delivering the Photoshop CS3 public beta to give Mac users native performance six months early.
As luck would have it, Adobe has some happy news to share about the latest hardware evolution:
Today, we’re thrilled to announce that Illustrator and InDesign will run natively on Apple Silicon devices. While users have been able to continue to use the tool on M1 Macs during this period, today’s development means a considerable boost in speed and performance. Overall, Illustrator users will see a 65 percent increase in performance on an M1 Mac, versus Intel builds — InDesign users will see similar gains, with a 59 percent improvement on overall performance on Apple Silicon. […]
These releases will start to roll out to customers starting today and will be available to all customers across the globe soon.
Watch how this new tech is able to move & blend just parts of an image (e.g. hair) while preserving others:
We propose a novel latent space for image blending which is better at preserving detail and encoding spatial information, and propose a new GAN-embedding algorithm which is able to slightly modify images to conform to a common segmentation mask.
Our novel representation enables the transfer of the visual properties from multiple reference images including specific details such as moles and wrinkles, and because we do image blending in a latent-space we are able to synthesize images that are coherent.
A few weeks ago I mentioned Toonify, an online app that can render your picture in a variety of cartoon styles. Researchers are busily cranking away to improve upon it, and the new AgileGAN promises better results & the ability to train models via just a few inputs:
Our approach provides greater agility in creating high quality and high resolution (1024×1024) portrait stylization models, requiring only a limited number of style exemplars (∼100) and short training time (∼1 hour).
This Adobe Research collaboration with Stanford & Brown Universities aims to make sense of people moving in space, despite having just 2D video as an input:
We introduce HuMoR: a 3D Human Motion Model for Robust Estimation of temporal pose and shape. Though substantial progress has been made in estimating 3D human motion and shape from dynamic observations, recovering plausible pose sequences in the presence of noise and occlusions remains a challenge. For this purpose, we propose an expressive generative model in the form of a conditional variational autoencoder, which learns a distribution of the change in pose at each step of a motion sequence. Furthermore, we introduce a flexible optimization-based approach that leverages HuMoR as a motion prior to robustly estimate plausible pose and shape from ambiguous observations. Through extensive evaluations, we demonstrate that our model generalizes to diverse motions and body shapes after training on a large motion capture dataset, and enables motion reconstruction from multiple input modalities including 3D keypoints and RGB(-D) videos.
For what seems like forever, Adam Lisagor’s Sandwich crew has been lovingly adding more great visual jokes & well-crafted copy than just about anybody in the game. Their recent work for the Mighty app is just as delightful as you’d expect:
There are lots of fun details here, from the evolution of the “potato-chip lip,” to how lines & shapes evolved to let characters rotate more easily in space, to hundreds of pages of documentation on exactly how hair & eyes should work, and more.
“I’m real black, like won’t show up on your camera phone,” sang Childish Gambino. It remains a good joke, but ten years later, it’s long past time for devices to be far fairer in how they capture and represent the world. I’m really happy to see my old teammates at Google focusing on just this area:
Hmm—this looks slick, but I’m not sure that I want to have a big plastic box swinging around my face while I’m trying to get fit. As a commenter notes, “That’s just Beat Saber with someone saying ‘good job’ once in a while”—but a friend of mine says it’s great. ¯\_(ツ)_/¯
This vid (same poster frame but different content) shows more of the actual gameplay:
What happens when you pair a couple of super swole dudes who can actually do pretty good Ahnuld/Sly impressions with deepfakes tech? Why, charming silliness like this, of course: