As I’m on a kick sharing recent work from Ira Kemelmacher-Shlizerman & team, here’s another banger:
Given an “in-the-wild” video, we train a deep network on the video frames to produce an animatable human representation.
This can be rendered from any camera view in any body pose, enabling applications such as motion re-targeting and bullet-time rendering without the need for rigged 3D meshes.
I look forward (?) to the not-so-distant day when a 3D-extracted Trevor Lawrence hucks a touchdown to Cleatus the Fox Sports Robot. Grand slam!!
“VOGUE: Try-On by StyleGAN,” from my former Google colleague Ira Kemelmacher-Shlizerman & her team, promises to synthesize photorealistic clothing & automatically apply it to a range of body shapes (leveraging the same StyleGAN foundation that my new teammates are using to generate images from text):
Artbreeder is a trippy project that lets you “simply keep selecting the most interesting image to discover totally new images. Infinitely new random ‘children’ are made from each image. Artbreeder turns the simple act of exploration into creativity.” Check out interactive remixing:
Generative Adversarial Networks (GANs) are the main technology enabling Artbreeder, which relies on BigGAN and StyleGAN models; a minimal open-source version built on BigGAN is also available.
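If you’re curious what “breeding” actually means, here’s a minimal sketch of the idea, assuming the community pytorch-pretrained-biggan package (the real Artbreeder code surely differs): children are just random blends of two parents’ latent & class vectors.

```python
# Sketch of Artbreeder-style "breeding" with BigGAN (pip install
# pytorch-pretrained-biggan); Artbreeder itself differs in the details.
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

model = BigGAN.from_pretrained("biggan-deep-256")
truncation = 0.4

# Two "parents": each is a (noise vector, class vector) pair.
noise_a = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))
noise_b = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))
class_a = torch.from_numpy(one_hot_from_names(["goldfish"], batch_size=1))
class_b = torch.from_numpy(one_hot_from_names(["castle"], batch_size=1))

# "Children" are random blends of the parents' vectors.
children = []
for _ in range(4):
    t = torch.rand(1)  # random mixing weight per child
    with torch.no_grad():
        children.append(model(t * noise_a + (1 - t) * noise_b,
                              t * class_a + (1 - t) * class_b, truncation))

save_as_images(torch.cat(children))  # writes output_0.png, output_1.png, ...
```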
I’ve long loved the weird mechanical purring of those flappy-letter signs one sees (or at least used to see) in train stations & similar venues, but I haven’t felt like throwing down the better part of three grand to own a Vestaboard. Now maker Scott Bezek is working on an open-source project for making such signs at home, combining simple materials and code. In case you’d never peeked inside such a mechanism (and really, why would you have?) and are curious, here’s how they work:
And here, for some reason, are six oddly satisfying minutes of a sign spelling out four-letter words:
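If you’d rather noodle on the control logic than the hardware, the core trick is simple: every character sits at a fixed slot on the flap drum, and the drum only spins one way, so “go to letter X” reduces to “step forward N flaps.” Here’s a toy Python sketch of that idea (Bezek’s actual splitflap firmware is C++ driving stepper motors; all the names below are my own):

```python
# Toy model of one split-flap unit: the drum can only advance, so reaching a
# target character means stepping forward, wrapping past the end if needed.
FLAPS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789."  # character order on the drum

class SplitFlapUnit:
    def __init__(self) -> None:
        self.position = 0  # index of the flap currently showing (0 = home)

    def steps_to(self, char: str) -> int:
        """Forward distance to a character; you can never step backward."""
        return (FLAPS.index(char.upper()) - self.position) % len(FLAPS)

    def display(self, char: str) -> None:
        for _ in range(self.steps_to(char)):
            self.position = (self.position + 1) % len(FLAPS)  # one flap per "step"
        # (real units also re-home periodically via a sensor, since open-loop
        # steppers drift out of calibration over time)

sign = [SplitFlapUnit() for _ in range(4)]
for unit, letter in zip(sign, "NACK"):
    unit.display(letter)
print("".join(FLAPS[u.position] for u in sign))  # -> NACK
```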
I remain fascinated by what Snap & Facebook are doing with their respective AR platforms, putting highly programmable camera stacks into the hands of hundreds of millions of consumers & hundreds of thousands of creators. If you have thoughts on the subject & want to nerd out some time, drop me a note.
A few months back I wanted to dive into the engine that’s inside Instagram, and I came across the Spark AR masterclass put together & presented by filter creator Eddy Adams. I found it engaging & informative, if a bit fast for my aging brain 🙃. If you’re tempted to get your feet wet in this emerging space, I recommend giving it a shot.
“Boys,” I DM’d the lads (because somehow that’s a thing now), “I hope you someday find spouses (speece?) cool enough to send you things like Mom just sent me.” And Crom-willing, they will. 😌 Happy Friday.
I find this emerging space so fascinating. Check out how Toonify.photos (which you can use for free, or at high quality for a very modest fee) can turn one’s image into a cartoon character. It leverages training data based on iconic illustration styles:
I also chuckled at this illustration from the video above, as it endeavors to show how the two networks (the “adversaries” in “Generative Adversarial Network”) attempt, respectively, to fool the other with generated output & to avoid being fooled. Check out more details in the accompanying article.
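For the code-inclined, that tug-of-war is easy to see in a bare-bones training loop. This is a generic, minimal PyTorch sketch of adversarial training (toy data, made-up sizes), not Toonify’s actual code:

```python
# Minimal GAN training loop: D learns to tell real from fake ("avoid being
# fooled"); G learns to make D call its fakes real ("fool the other").
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # generator
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(8, 32) * 0.5 + 1.0  # stand-in for real images
    fake = G(torch.randn(8, 16))           # the generator's forgeries

    # Discriminator step: score real as 1, fake as 0.
    loss_d = (bce(D(real), torch.ones(8, 1)) +
              bce(D(fake.detach()), torch.zeros(8, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push D's score on fakes toward 1.
    loss_g = bce(D(fake), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```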
It’s really cool to see the Goog leveraging its immense corpus of not just 2D or 3D but actually 4D (time-based) data to depict our planetary home.
In the biggest update to Google Earth since 2017, you can now see our planet in an entirely new dimension — time. With Timelapse in Google Earth, 24 million satellite photos from the past 37 years have been compiled into an interactive 4D experience. Now anyone can watch time unfold and witness nearly four decades of planetary change. […]
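If you want to roll your own miniature timelapse, the underlying Landsat archive is queryable via the Earth Engine API; the gist is one cloud-reduced composite per year, played back as frames. A hedged sketch (requires an Earth Engine account; the dataset ID is real, but the region and visualization ranges are just illustrative):

```python
# Sketch: one median Landsat 8 composite per year over a point of interest,
# fetched as thumbnails you could stitch into a timelapse (e.g. with ffmpeg).
import ee

ee.Initialize()  # assumes you've authenticated with Earth Engine already
poi = ee.Geometry.Point(-122.3, 47.6).buffer(20_000)  # ~Seattle, 20 km radius

for year in range(2014, 2021):
    composite = (
        ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")  # Landsat 8, Collection 2
        .filterBounds(poi)
        .filterDate(f"{year}-01-01", f"{year}-12-31")
        .median()                                     # crude cloud suppression
        .select(["SR_B4", "SR_B3", "SR_B2"])          # RGB bands
    )
    url = composite.getThumbURL({
        "region": poi, "dimensions": 512,
        "min": 7000, "max": 30000,  # rough display stretch; tune to taste
    })
    print(year, url)  # one frame per year
```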
As the dad of a train-obsessed 11yo son who enjoys exclaiming things like “Hey, that’s Cooper Black!”, I found this tour of railroad typography 💯 up our family’s alley. (Tangential, but as it’s already on my clipboard: we’re keeping a running album of our train-related explorations along Route 66, and Henry’s been adding things like an atomic train tour to his YouTube channel.)
From the typesetting video description:
Ever since the first train services, a wide variety of guides have helped passengers understand the railways; supplementing the text with timetables, maps, views, and diagrams. Typographically speaking, the linear nature of railways and the modular nature of trains meant that successful diagrams could be designed economically by using typographic sorts. Various typographic trains and railways from the 1830s to present-day will be evaluated in terms of data visualization, decoration, and the economics of reproduction. Bringing things up to date, techniques for typesetting emoji and CSS trains are explored, and a railway-inspired layout model will be proposed for wider application in the typography of data visualization and ornamentation.
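That modular idea fits in a few lines of anything; here’s a throwaway Python toy (my own, not from the talk) that “typesets” a train by repeating a single sort, which is exactly why printers could set them so economically:

```python
# The talk's point in miniature: trains are modular, so "typesetting" one is
# just repeating a sort (here, an emoji car) behind an engine.
def typeset_train(cars: int, engine: str = "🚂", car: str = "🚃") -> str:
    return engine + car * cars

print(typeset_train(5))  # 🚂🚃🚃🚃🚃🚃
```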
“Imagine what you can create. Create what you can imagine.”
So said the first Adobe video I ever saw, back in 1993 when I’d just started college & attended the Notre Dame Mad Macs user group. I saw it just that once, 20+ years ago, but the memory is vivid: an unfolding hand with an eye in the palm encircled by the words “Imagine what you can create. Create what you can imagine.” I was instantly hooked.
I got to mention this memory to Adobe founders Chuck Geschke & John Warnock at a dinner some 15 years later. Over that whole time (through my college, Web agency, and ultimately Adobe roles) the company they started has fully bent the arc of my career, as it continues to do today. I wish I’d had the chance to talk more with Chuck, who passed away on Friday. Outside of presenting to him & John at occasional board meetings, however, that’s all the time we had. Still, I’m glad I had the chance to share that one core memory.
I’ll always envy my wife Margot for getting to spend what she says was a terrific afternoon with him & various Adobe women leaders a few years back:
“Everyone sweeps the floor around here”
I can’t tell you how many times I’ve cited this story (source) from Adobe’s early history, as it’s such a beautiful distillation of the key cultural duality that Chuck & John instilled from the start:
The hands-on nature of the startup was communicated to everyone the company brought onboard. For years, Warnock and Geschke hand-delivered a bottle of champagne or cognac and a dozen roses to a new hire’s house. The employee arrived at work to find hammer, ruler, and screwdriver on a desk, which were to be used for hanging up shelves, pictures, and so on.
“From the start we wanted them to have the mentality that everyone sweeps the floor around here,” says Geschke, adding that while the hand tools may be gone, the ethic persists today.
“Charlie, you finally did it.”
I’m inspired reading all the little anecdotes & stories of inspiration that my colleagues are sharing, and I thought I’d cite one in particular—from Adobe’s 35th anniversary celebration—that made me smile. Take it away, Chuck:
I have one very special moment that meant a tremendous amount to me. Both my grandfather and my father were letterpress photoengravers — the people who made color plates to go into high-quality, high-volume publications such as Time magazine and all the other kinds of publishing that was done back then.
As we were trying to take that very mechanical chemical process and convert it into something digital, I would bring home samples of halftones and show them to my father. He’d say, “Hmm, let me look at that with my loupe,” because engravers always had loupes. He’d say, “You know, Charles, that doesn’t look very good.” Now, when my dad said, “Charles,” it was bad news.
About six months later, I brought him home something that I knew was spot on. All the rosettes were perfect. It was a gorgeous halftone. I showed it to my dad and he took his loupe out and he looked at it, and he smiled and said, “Charlie, you finally did it.” And, to me, that was probably one of the biggest high points of the early part of my career here.
And a final word, which I’ll share with my profound thanks:
“An engineer lives to have his idea embodied in a product that impacts the world,” Mr. Geschke said. “I consider myself the luckiest man on Earth.”
The Epic team behind the hyper-realistic, Web-hosted MetaHuman Creator—which is now available for early access—rolled out the tongue-in-cheek “MetaPet Creator” for April Fool’s. Artist Jelena Jovanovic offers a peek behind the scenes.
Elsewhere I put my pal Seamus (who’s presently sawing logs on the couch next to me) through NVIDIA’s somewhat wacky GANimal prototype app, attempting to mutate him into various breeds—with semi-Brundlefly results. 👀