All posts by jnack

“How to Draw Yourself as a Peanuts Character”

Right at the start of my career, I had the chance to draw some simple Peanuts animations for MetLife banner ads. The cool thing is that back then, Charles Schulz himself had to approve each use of his characters—and I’m happy to say he approved mine. 😌 (For the record, as I recall, it featured Linus’s hair flying up as he was surprised.)

In any event, here’s a fun tutorial commissioned by Apple:

As Kottke notes, “They’ve even included a PDF of drawing references to make it easier.” Fortunately you don’t have to do the whole thing in 35 seconds, a la Schulz himself:

different faces drawn for Peanuts comic strip characters

[Via]

Photography: “Jay Myself” is terrific

Many years ago I had the chance to drop by Jay Maisel’s iconic converted bank building in the Bowery. (This must’ve been before phone cameras got good, as otherwise I’d have shot the bejesus out of the place.) It was everything you’d hope it to be.

As luck would have it, my father-in-law (having no idea about the visit) dialed up the documentary “Jay Myself” last night, and the whole family (down to my 12yo budding photographer son) loved it. I think you would, too!

PixARface: Scarface goes Pixar

One, it’s insane what AR can do in real time.
Two, this kind of creative misuse of tech is right up my alley.

Update/bonus: Nobody effs with the AR Jesus:

The Story Behind the Theme Song to ‘Seinfeld’

It’s a little OT for this blog, but I really enjoyed this article as a discussion of design—of using art to solve problems.

I told Jerry, “It sounds more like a sound design issue than a music assignment. So, how about this? We treat the Seinfeld theme song as if your voice telling jokes is the melody, the jokes you tell are the lyrics, and my job is to accompany you in a musical way that does not interfere with the audio of you telling jokes.”

Also great:

Warren Littlefield had the unfortunate job of telling Larry, “I don’t like the music. It’s distracting, it’s weird, it’s annoying!” And as soon as he said the word annoying, Larry David just lit up. Like, “Really? Annoying? Cool!” Because if you know Larry, if you watch Curb Your Enthusiasm, that’s what he loves most, to annoy you! That’s his brand of comedy. 

Now, enjoy (?) Seinfeld meeting Kendrick Lamar:

Disney Research shows off new style-transfer tech

Okay, this one is a little “inside baseball,” but I’m glad to see more progress using GANs to transfer visual styles among images. Check it out:

The current state-of-the-art in neural style transfer uses a technique called Adaptive Instance Normalization (AdaIN), which transfers the statistical properties of style features to a content image, and can transfer an infinite number of styles in real time. However, AdaIN is a global operation, and thus local geometric structures in the style image are often ignored during the transfer. We propose Adaptive Convolutions, a generic extension of AdaIN that allows for the simultaneous transfer of both statistical and structural styles in real time.
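
If you’re wondering what “transfers the statistical properties” means in practice, here’s a minimal PyTorch sketch of vanilla AdaIN, the baseline the paper extends (this is the standard Huang & Belongie formulation, not Disney’s new adaptive convolutions):

```python
import torch

def adain(content, style, eps=1e-5):
    """Vanilla AdaIN: re-scale each channel of the content feature map
    so its mean & std match those of the style feature map. Inputs are
    (N, C, H, W) features from a pretrained encoder (e.g. VGG)."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    # Strip the content's statistics, then adopt the style's
    return s_std * (content - c_mean) / c_std + s_mean
```

Since it boils down to one mean & one standard deviation per channel for the whole image, you can see why local geometric structure gets lost, which is exactly the gap the adaptive-convolutions work aims to close.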

Design: New Lego T2 VW bus

Greetings from Leadville, Colorado, which on weekends is transformed into an open-air rolling showroom for Sprinter vans. (Aside: I generally feel like I’m doing fine financially, but then I think, “Who are these armies of people dropping 200 g’s on tarted-up delivery vans?!”) They’re super cool, but we’re kicking it old-/small-school in our VW Westy. Thus you know I’m thrilled to see this little beauty rolling out of Lego factories soon:

VFX: An oral history of Terminator 2

One of my favorite flexes while working on Google Photos was to say, “Hey, you remember the liquid-metal guy in Terminator 2? You know who wrote that? This guy,” while pointing to my ex-Adobe teammate John Schlag. I’d continue to go down the list—e.g. “You know who won an Oscar for rigging at DreamWorks? This guy [points at Alex Powell].” I did this largely to illustrate how insane it was to have such a murderers’ row of talent working on whatever small-bore project Photos had in mind. (Sorry, it was a very creatively disappointing time.)

Anyway, John S., along with Michael Natkin (who went on to spend a decade+ making After Effects rock), contributed to this great oral history of the making of Terminator 2. It’s loaded with insights & behind-the-scenes media I’d never seen before. Enjoy!

A generative artist joins Adobe

I’m delighted that Ryan Murdock (Adverb on Twitter) is joining our group:

I returned to Adobe specifically to help cutting-edge creators like this bring their magic to as many people as possible, and I’m really excited to see what we can do together. (Suggestions are welcome. 😌🔥)

Derek DelGaudio’s “In & Of Itself” is mesmerizing

Oh my God… what an amazing film! I’d heard my friends rave, and I don’t know what took me so long to watch it. I bounced between slack-jawed & openly weeping. Here’s just a taste:

Prior to watching, I’d really enjoyed Derek’s appearance on Fresh Air:

And totally tangentially (as it’s not at all related to Derek’s style of showmanship), there’s SNL’s hilarious “So You’re Willing to Date a Magician”:

Adobe joins the new Open 3D Foundation

Back in the 90’s I pleaded with Macromedia to enable a “Flash Interchange Format” that would allow me to combine multiple apps in making great animated content. They paid this no attention, and that’s part of why I joined Adobe & started working on things like integrating After Effects with LiveMotion—a code path that helps connect AE with other apps even two+ decades (!) later.

Point is, I’ve always loved aligning tools in ways that help creators combine apps & reach an audience. While at Google I worked with Adobe folks on 3D data exchange, and now I’m happy to see that Adobe is joining the new Open 3D Foundation, meant to “accelerate developer collaboration on 3D engine development for AAA-games and high-fidelity simulations.”

Amazon… is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms.

As for Adobe’s role,

“Adobe is proud to champion the Open 3D Foundation as a founding member. Open source technologies are critical to advance sustainability across 3D industries and beyond. We believe collaborative and agnostic toolsets are the key to not only more healthy and innovative ecosystems but also to furthering the democratization of 3D on a global scale.” — Sebastien Deguy, VP of 3D & Immersive at Adobe.

Animation: Gmunk & Light

I’ve admired the motion graphics of Bradley Munkowitz since my design days in the 90’s (!), and I enjoyed this insight into one of his most recent creations:

What I didn’t know until now is that he collaborated with the folks at Bot & Dolly—who created the brilliant work below before getting acquired by Google and, as best I can tell, having their talent completely wasted there 😭.

Charming “Viewfinder” animation

This is the super chill content I needed right now. 😌

Colossal writes,

“Viewfinder” is a charming animation about exploring the outdoors from the Seoul-based studio VCRWORKS. The second episode in the recently launched Rhythmens series, the peaceful short follows a central character on a hike in a springtime forest and frames their whimsically rendered finds through the lens of a camera.

You can find another installment on their Vimeo page.

Brickit scans Legos & suggests creations 🤯

OMG—I’m away from our brick piles & thus can’t yet try this myself, but I can’t wait to take it for a spin. As PetaPixel explains:

If you have a giant pile of LEGO bricks and are in need of ideas on what to build, Brickit is an amazing app that was made just for you. It uses a powerful AI camera to rapidly scan your LEGO bricks and then suggest fun little projects you can build with what you have.

Here’s a short 30-second demo showing how the app works — prepare to have your mind blown:

AI: An amazing Adobe PM opportunity

When I saw what Adobe was doing to harness machine learning to deliver new creative superpowers, I knew I had to be part of it. If you’re a seasoned product manager & if this mission sounds up your alley, consider joining me via this new Principal PM role:

Neural Filters is a new ML/GAN-based set of creative features that recently launched in Photoshop and will eventually expand to the entire suite of Creative Cloud apps, helping to establish the foundations of AI-powered creative tools. The applications of these ML-backed technologies range from imaginative portrait edits, like adjusting the age of a subject, to colorizing B/W images and restoring old photos. As the technology evolves, so too will its applicability to other media like illustrations, video, 3D, and more.

The Principal PM will contribute to strategy definition in terms of investments in new editing paradigms and training models, and will broaden the applicability of Neural Filters in apps like Photoshop, Fresco, After Effects, and Aero!

Tell me more, you say? But of course! The mission, per the listing:

  • In this hands-on role, you will help define a comprehensive product roadmap for Neural Filters.
  • Work with PMs on app teams to prioritize filters and models that will have the largest impact on targeted user bases and, ultimately, create the most business value.
  • Collaborate with PMM counterparts to build and execute GTM strategies, establish Neural Filters as an industry-leading ML tool, and drive awareness and adoption.
  • Develop an understanding of business impact, and define and be accountable for OKRs and measures of success for the Neural Filters platform and ecosystem.
  • Develop a prioritization framework that considers user feedback and research along with business objectives. Use this framework to guide the backlogs and work done by partner teams.
  • Guide the efforts for new explorations, keeping abreast of the latest developments in pixel-generation AI.
  • Partner with product innovators to spec out POC implementations of new features.
  • Develop the strategy to expand Neural Filters to other surfaces (web, mobile, headless, and more CC apps), focusing on core business metrics of conversion, retention, and monetization.
  • Guide the team’s efforts in bias-testing frameworks and integration with authenticity and ethical-AI initiatives. This technology can be incredibly powerful, but it can also introduce tremendous ethical and legal implications. It’s imperative that this person be cognizant of the risks and consistently operate with high integrity.

If this sounds like your jam, or if you know of someone who’d be a great fit, please check out the listing & get in touch!

Five Golden Rules For Building Unsuccessful Products

One nice, cheeky quirk of Google is the ability to write one’s own epitaph upon departing, slapping a few words of sometimes salty wisdom on the way out the door. My former colleague Hodie Meyers bugged out just ahead of me & dropped a sarcastic fistful of Despair.com-worthy gems:

  1. Do things because they are possible
  2. Do many things at once and try to spread yourself thin
  3. Build the complete system before evaluating the idea. Call it MVP anyways
  4. Never let client feedback or user research distract you from your intuition
  5. And remember: It’s always more important that you launch something than that you create true value for your users and customers

Gone Fishin’, 2021 edition

Hey all—greetings from somewhere in the great American west, which I’m happily exploring with my wife, kids, and dog. Being an obviously crazy person, I can’t just, y’know, relax and stop posting for a while, but you may notice that my cadence here drops for a few days.

In the meantime, I’ll try to gather up some good stuff to share. Here’s a shot I captured while flying over the Tehachapi Loop on Friday (best when viewed full screen).


Just for fun, here’s a different rendering of the same file (courtesy of running the Mavic Pro’s 360º stitch through Insta360 Studio):

And, why not, here’s another shot of the trains in action. I can’t wait to get some time to edit & share the footage.


NVIDIA Canvas paints with AI, exports to Photoshop

“A nuclear-powered pencil”: that’s how someone recently described ArtBreeder, and the phrase comes to mind for NVIDIA Canvas, a new prototype app you can download (provided you have Windows & a beefy GPU) and use to draw in some trippy new ways:

Paint simple shapes and lines with a palette of real world materials, like grass or clouds. Then, in real-time, our revolutionary AI model fills the screen with show-stopping results.

Don’t like what you see? Swap a material, changing snow to grass, and watch as the entire image changes from a winter wonderland to a tropical paradise. The creative possibilities are endless.

[Via]

Google Pixel brings video to astrophotography

Psst, hey, Russell Brown, tell me again when we’re taking our Pixels to the desert… 😌✨

Pixel owners love using astrophotography in Night Sight to take incredible photos of the night sky, and now it’s getting even better. You can now create videos of the stars moving across the sky all during the same exposure. Once you take a photo in Night Sight, both the photo and video will be saved in your camera roll. Try waiting longer to capture even more of the stars in your video. This feature is available on Pixel 4 and newer phones and you can learn more at g.co/pixel/astrophotography.

Titles: “The Punisher”

Even if I weren’t, to my surprise, watching the Netflix series The Punisher and liking it way more than I expected, I’d be a sucker for this kind of beautiful title sequence:

I have the show to thank for introducing me to this brutal Tom Waits banger, which comes equipped with its own surrealist nightmare of a video:

Stop motion—via embroidery!

What an incredible labor of love this must have been to stitch & animate:

Our most ridiculously labor-intensive animation ever! The traditional Passover folk song rendered in embroidermation by Nina Paley and Theodore Gray. These very same embroidered matzoh covers are available for purchase here.

[Via Christa Mrgan]

What if Content-Aware Fill started hallucinating?

Man, I’m not even the first to imagine a tripping-out Content-Aware Phil…

…cue the vemödalen. ¯\_(ツ)_/¯

Anyway, “Large Scale Image Completion via Co-Modulated Generative Adversarial Networks” (and you thought “Content-Aware Fill” was a mouthful), which you can try out right in your browser, promises next-level abilities to fill in gaps by using GANs that understand specific domains like human faces & landscapes.

I’m not sure whether the demo animation does the idea justice, as you might reasonably think “Why would I want to scarify a face & then make a computer fill in the gaps?,” but the underlying idea (that the computer can smartly fill holes based on understanding the real-world structure of a scene) seems super compelling.

Lego introduces Adidas shelltoes

Oh my God.

LEGO has officially announced the new LEGO adidas Originals Superstar (10282) which will be available starting on July 1. The shoe has 731 pieces and will retail for $79.99. In the ongoing collaboration with adidas, LEGO has recreated the iconic Superstar sneaker in brick form. Instead of the regular LEGO packaging, the set will actually come in a shoebox for authenticity and even the laces on it are real.

Design: The “Supersonic Booze Carrier”

I’ve always said that when—not if—I die in a fiery crash alongside Moffett Field, it’ll be because I was rubbernecking at some cool plane or other (e.g. the immense Antonov An-124), and you’ll remember this and say, “Well, he did at least call his shot.”

Suffice it to say I’m a huge plane nerd with a special soft spot for exotic (to me) ex-Soviet aircraft. I therefore especially enjoyed this revealing look into the Tu-22, whose alcohol-based air conditioning system made it a huge hit with aircrews (that is, when it wasn’t killing them via things like its downward-firing ejection seats!). Even if planes aren’t your jam, I think you’ll find the segment on how the alcohol became currency really interesting.

Chuck Close compares golf & creativity

I had a long & interesting talk this week with Erik Natzke, whose multi-disciplinary art (ranging from code to textiles) has inspired me for years. As we were talking through the paths by which one can find a creative solution, he shared this quote from painter Chuck Close:

Chuck Close: I thought that using a palette was like shooting an arrow directly at a bull’s-eye. You hope that you make the right decision out of context. But when you shoot it at the bull’s-eye, you hit what you were aiming at. And I thought, as a sports metaphor, golf was a much more interesting way to think about it.

If you think about golf, it’s the only sport—and it’s a little iffy if it’s a sport, although Tiger made it into a sport—in which you move from general to specific in an ideal number of correcting moves. The first stroke is just a leap of faith, you hit it out there; you hope you’re on the fairway. Second one corrects that, the third one corrects that. By the third or fourth you hope that you’re on the green. And at one or two putts, you place that ball in a very specific three-and-a-half inch diameter circle, which you couldn’t even see from the tee. How did you do it? You found it moving through the landscape, making mid-course corrections.

I thought, “This is exactly how I paint.” I tee off in the wrong direction to make it more interesting, now I’ve got to correct like crazy, then I’ve got to correct again. What’s it need? I need some of that. And then four or five or six strokes, I hopefully have found the color world that I want. Then I can sort of celebrate, you know, put that in the scorecard, and move on to the next one.

Bonus: “Is that a face made of meat??” — my 11yo Henry, walking by just now & seeing this image from afar 😛

“Anycost GAN” promises interactive editing using AI

Photoshop Neural Filters are insanely cool, but right now adjusting any parameter generally takes a number of seconds of calculation. To make things more interactive, some of my teammates are collaborating with university researchers on an approach that couples cheap-n’-cheerful quality for interactive preview with nicer-but-slower calculation of final results. This is all a work in progress, and I can’t say if/when these techniques will ship in real products, but I’m very glad to see the progress.
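
To make the idea concrete, here’s a hypothetical sketch of that preview/final pattern. Note that the generator call and its channel_ratio/resolution knobs are invented here for illustration; they’re not the actual Anycost GAN API.

```python
import torch

def render_edit(generator, w, final=False):
    """Hypothetical sketch of the Anycost idea: a single generator whose
    sub-networks can run at reduced channel width and resolution while
    the user scrubs a slider, with the full network reserved for the
    final render. The channel_ratio/resolution arguments are invented
    for illustration; consult the project's code for the real interface."""
    with torch.no_grad():
        if not final:
            # Cheap, fast preview during interactive editing
            return generator(w, channel_ratio=0.5, resolution=256)
        # Slow, full-quality render once the edit is committed
        return generator(w)
```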

Trippy Adobe brushes

As I noted last year,

I’ve always been part of that weird little slice of the Adobe user population that gets really hyped about offbeat painting tools—from stretching vectors along splines & spraying out fish in Illustrator (yes, they’re both in your copy right now; no, you’ve never used them).

In that vein, I dig what Erik Natzke & co. have explored:

This one’s even trippier:

Here’s a quick tutorial on how to make your own brush via Adobe Capture:

And here are the multicolor brushes added to Adobe Fresco last year:

Illustrator & InDesign get big boosts on Apple Silicon

On an epic dog walk this morning, Old Man Nack™ took his son through the long & winding history of Intel vs. Motorola, x86 vs. PPC, CISC vs. RISC, toasted bunny suits, the shock of Apple’s move to Intel (Marklar!), and my lasting pride in delivering the Photoshop CS3 public beta to give Mac users native performance six months early.

As luck would have it, Adobe has some happy news to share about the latest hardware evolution:

Today, we’re thrilled to announce that Illustrator and InDesign will run natively on Apple Silicon devices. While users have been able to continue to use the tool on M1 Macs during this period, today’s development means a considerable boost in speed and performance. Overall, Illustrator users will see a 65 percent increase in performance on an M1 Mac, versus Intel builds — InDesign users will see similar gains, with a 59 percent improvement on overall performance on Apple Silicon. […]

These releases will start to roll out to customers starting today and will be available to all customers across the globe soon.

Check out the post for full details.

“Barbershop” uses GANs to flip your wig

Watch how this new tech is able to move & blend just parts of an image (e.g. hair) while preserving others:

We propose a novel latent space for image blending which is better at preserving detail and encoding spatial information, and propose a new GAN-embedding algorithm which is able to slightly modify images to conform to a common segmentation mask.

Our novel representation enables the transfer of the visual properties from multiple reference images including specific details such as moles and wrinkles, and because we do image blending in a latent-space we are able to synthesize images that are coherent.

Automatic caricature creation gets better & better

A few weeks ago I mentioned Toonify, an online app that can render your picture in a variety of cartoon styles. Researchers are busily cranking away to improve upon it, and the new AgileGAN promises better results & the ability to train models via just a few inputs:

Our approach provides greater agility in creating high quality and high resolution (1024×1024) portrait stylization models, requiring only a limited number of style exemplars (∼100) and short training time (∼1 hour).


Adobe “HuMoR” estimates 3D human movements from 2D inputs

This Adobe Research collaboration with Stanford & Brown Universities aims to make sense of people moving in space, despite having just 2D video as an input:

We introduce HuMoR: a 3D Human Motion Model for Robust Estimation of temporal pose and shape. Though substantial progress has been made in estimating 3D human motion and shape from dynamic observations, recovering plausible pose sequences in the presence of noise and occlusions remains a challenge. For this purpose, we propose an expressive generative model in the form of a conditional variational autoencoder, which learns a distribution of the change in pose at each step of a motion sequence. Furthermore, we introduce a flexible optimization-based approach that leverages HuMoR as a motion prior to robustly estimate plausible pose and shape from ambiguous observations. Through extensive evaluations, we demonstrate that our model generalizes to diverse motions and body shapes after training on a large motion capture dataset, and enables motion reconstruction from multiple input modalities including 3D keypoints and RGB(-D) videos.
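
For the ML-curious, the core mechanism described above (a conditional VAE over the change in pose at each step) looks roughly like this toy PyTorch sketch. All dimensions, layer sizes, and names here are invented for illustration; the paper’s actual architecture is far more elaborate.

```python
import torch
import torch.nn as nn

class PoseTransitionCVAE(nn.Module):
    """Toy conditional VAE in the spirit of the HuMoR abstract: it learns
    a latent distribution over the *change* in pose between consecutive
    frames, conditioned on the previous pose. Purely illustrative."""

    def __init__(self, pose_dim=69, latent_dim=32, hidden=256):
        super().__init__()
        # Encoder sees (previous pose, actual next pose) and emits
        # the mean and log-variance of the latent transition code.
        self.encoder = nn.Sequential(
            nn.Linear(pose_dim * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim * 2))
        # Decoder predicts the pose *delta*, conditioned on the previous pose.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim))

    def forward(self, prev_pose, next_pose):
        stats = self.encoder(torch.cat([prev_pose, next_pose], dim=-1))
        mean, log_var = stats.chunk(2, dim=-1)
        z = mean + torch.randn_like(mean) * (0.5 * log_var).exp()  # reparameterize
        delta = self.decoder(torch.cat([z, prev_pose], dim=-1))
        # Training would combine a reconstruction loss on the predicted
        # pose with a KL term on (mean, log_var); at test time the prior
        # over z serves as the "motion prior" used during optimization.
        return prev_pose + delta, mean, log_var
```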

Google makes strides on equitable imaging

“I’m real black, like won’t show up on your camera phone,” sang Childish Gambino. It remains a good joke, but ten years later, it’s long past time for devices to be far fairer in how they capture and represent the world. I’m really happy to see my old teammates at Google focusing on just this area:

“Supernatural” offers home workouts in VR

Hmm—this looks slick, but I’m not sure that I want to have a big plastic box swinging around my face while I’m trying to get fit. As a commenter notes, “That’s just Beat Saber with someone saying ‘good job’ once in a while”—but a friend of mine says it’s great. ¯\_(ツ)_/¯

This vid (same poster frame but different content) shows more of the actual gameplay: