Monthly Archives: June 2021

What if Content-Aware Fill started hallucinating?

Man, I’m not even the first to imagine a tripping-out Content-Aware Phil…

…cue the vemödalen. ¯\_(ツ)_/¯

Anyway, “Large Scale Image Completion via Co-Modulated Generative Adversarial Networks” (and you thought “Content-Aware Fill” was a mouthful), which you can try out right in your browser, promises next-level abilities to fill in gaps by using GANs that understand specific domains like human faces & landscapes.

I’m not sure whether the demo animation does the idea justice, as you might reasonably think “Why would I want to scarify a face & then make a computer fill in the gaps?,” but the underlying idea (that the computer can smartly fill holes based on understanding the real-world structure of a scene) seems super compelling.

Lego introduces Adidas shelltoes

Oh my God.

LEGO has officially announced the new LEGO adidas Originals Superstar (10282) which will be available starting on July 1. The shoe has 731 pieces and will retail for $79.99. In the ongoing collaboration with adidas, LEGO has recreated the iconic Superstar sneaker in brick form. Instead of the regular LEGO packaging, the set will actually come in a shoebox for authenticity and even the laces on it are real.

Design: The “Supersonic Booze Carrier”

I’ve always said that when—not if—I die in a fiery crash alongside Moffett Field, it’ll be because I was rubbernecking at some cool plane or other (e.g. the immense Antonov An-124), and you’ll remember this and say, “Well, he did at least call his shot.”

Suffice it to say I’m a huge plane nerd with a special soft spot for exotic (to me) ex-Soviet aircraft. I therefore especially enjoyed this revealing look into the Tu-22, whose alcohol-based air conditioning system made it a huge hit with aircrews (that is, when it wasn’t killing them via things like its downward-firing ejection seats!). Even if planes aren’t your jam, I think you’ll find the segment on how the alcohol became currency really interesting.

Chuck Close compares golf & creativity

I had a long & interesting talk this week with Erik Natzke, whose multi-disciplinary art (ranging from code to textiles) has inspired me for years. As we were talking through the paths by which one can find a creative solution, he shared this quote from painter Chuck Close:

Chuck Close: I thought that using a palette was like shooting an arrow directly at a bull’s-eye. You hope that you make the right decision out of context. But when you shoot it at the bull’s-eye, you hit what you were aiming at. And I thought, as a sports metaphor, golf was a much more interesting way to think about it.

If you think about golf, it’s the only sport—and it’s a little iffy if it’s a sport, although Tiger made it into a sport—in which you move from general to specific in an ideal number of correcting moves. The first stroke is just a leap of faith, you hit it out there; you hope you’re on the fairway. Second one corrects that, the third one corrects that. By the third or fourth you hope that you’re on the green. And at one or two putts, you place that ball in a very specific three-and-a-half inch diameter circle, which you couldn’t even see from the tee. How did you do it? You found it moving through the landscape, making mid-course corrections.

I thought, “This is exactly how I paint.” I tee off in the wrong direction to make it more interesting, now I’ve got to correct like crazy, then I’ve got to correct again. What’s it need? I need some of that. And then four or five or six strokes, I hopefully have found the color world that I want. Then I can sort of celebrate, you know, put that in the scorecard, and move on to the next one.

Bonus: “Is that a face made of meat??” — my 11yo Henry, walking by just now & seeing this image from afar 😛

“Anycost GAN” promises interactive editing using AI

Photoshop Neural Filters are insanely cool, but right now adjusting any parameter generally takes several seconds of calculation. To make things more interactive, some of my teammates are collaborating with university researchers on an approach that couples cheap-n’-cheerful quality for interactive preview with nicer-but-slower calculation of final results. This is all a work in progress, and I can’t say if/when these techniques will ship in real products, but I’m very glad to see the progress.
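The general pattern here — a cheap path for live preview plus a slower full-quality path for the final render — can be sketched roughly like this. Everything below is a toy stand-in (the function, shapes, and cost model are all invented for illustration, not the actual Anycost GAN API):

```python
import numpy as np

def run_generator(latent, channel_fraction=1.0, resolution=256):
    """Stand-in for a GAN generator that can trade quality for speed
    by running with fewer channels and a lower output resolution."""
    rng = np.random.default_rng(0)
    cost = channel_fraction * (resolution ** 2)  # rough compute proxy
    image = rng.random((resolution, resolution, 3))
    return image, cost

latent = np.ones(512)

# Interactive preview: a quarter of the channels at low resolution -> cheap.
preview, preview_cost = run_generator(latent, channel_fraction=0.25, resolution=64)

# Final result: full channels and resolution -> slower but higher quality.
final, final_cost = run_generator(latent, channel_fraction=1.0, resolution=256)

print(preview.shape, final.shape)
print(final_cost / preview_cost)  # the preview path is far cheaper
```

The point of the research is that both passes come from the *same* trained generator, so the preview stays visually consistent with the final output as you drag sliders.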

Trippy Adobe brushes

As I noted last year,

I’ve always been part of that weird little slice of the Adobe user population that gets really hyped about offbeat painting tools—from stretching vectors along splines & spraying out fish in Illustrator (yes, they’re both in your copy right now; no, you’ve never used them).

In that vein, I dig what Erik Natzke & co. have explored:

This one’s even trippier:

Here’s a quick tutorial on how to make your own brush via Adobe Capture:

And here are the multicolor brushes added to Adobe Fresco last year:

Illustrator & InDesign get big boosts on Apple Silicon

On an epic dog walk this morning, Old Man Nack™ took his son through the long & winding history of Intel vs. Motorola, x86 vs. PPC, CISC vs. RISC, toasted bunny suits, the shock of Apple’s move to Intel (Marklar!), and my lasting pride in delivering the Photoshop CS3 public beta to give Mac users native performance six months early.

As luck would have it, Adobe has some happy news to share about the latest hardware evolution:

Today, we’re thrilled to announce that Illustrator and InDesign will run natively on Apple Silicon devices. While users have been able to continue to use the tool on M1 Macs during this period, today’s development means a considerable boost in speed and performance. Overall, Illustrator users will see a 65 percent increase in performance on an M1 Mac, versus Intel builds — InDesign users will see similar gains, with a 59 percent improvement on overall performance on Apple Silicon. […]

These releases will start to roll out to customers starting today and will be available to all customers across the globe soon.

Check out the post for full details.

“Barbershop” uses GANs to flip your wig

Watch how this new tech is able to move & blend just parts of an image (e.g. hair) while preserving others:

We propose a novel latent space for image blending which is better at preserving detail and encoding spatial information, and propose a new GAN-embedding algorithm which is able to slightly modify images to conform to a common segmentation mask.

Our novel representation enables the transfer of the visual properties from multiple reference images including specific details such as moles and wrinkles, and because we do image blending in a latent-space we are able to synthesize images that are coherent.
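The core move — blending per-region in a spatial latent space rather than in raw pixels — can be sketched like so. This is a toy illustration with made-up shapes (not the paper’s actual code): two feature maps are combined under a shared segmentation mask, and a decoder would then synthesize a coherent image from the blend:

```python
import numpy as np

# Two "latent" feature maps: one encoding the target face, one encoding a
# reference hairstyle. Shapes are invented: (channels, height, width),
# in the spirit of a StyleGAN-style intermediate activation.
face_latent = np.zeros((8, 16, 16))
hair_latent = np.ones((8, 16, 16))

# A common segmentation mask: 1 where the hair region is, 0 elsewhere.
hair_mask = np.zeros((16, 16))
hair_mask[:6, :] = 1.0  # pretend the top rows of the image are hair

# Blend in latent space: hair features from the reference, everything
# else from the target. Because this happens on features, not pixels,
# the decoder can smooth the seam into a coherent result.
blended = hair_mask * hair_latent + (1.0 - hair_mask) * face_latent

print(blended[0, 0, 0], blended[0, 15, 0])  # 1.0 from hair, 0.0 from face
```

Blending pixels directly would leave a visible seam; blending latents and re-synthesizing is what lets the result stay coherent.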

Automatic caricature creation gets better & better

A few weeks ago I mentioned Toonify, an online app that can render your picture in a variety of cartoon styles. Researchers are busily cranking away to improve upon it, and the new AgileGAN promises better results & the ability to train models via just a few inputs:

Our approach provides greater agility in creating high quality and high resolution (1024×1024) portrait stylization models, requiring only a limited number of style exemplars (∼100) and short training time (∼1 hour).


Adobe “HuMoR” estimates 3D human movements from 2D inputs

This Adobe Research collaboration with Stanford & Brown Universities aims to make sense of people moving in space, despite having just 2D video as an input:

We introduce HuMoR: a 3D Human Motion Model for Robust Estimation of temporal pose and shape. Though substantial progress has been made in estimating 3D human motion and shape from dynamic observations, recovering plausible pose sequences in the presence of noise and occlusions remains a challenge. For this purpose, we propose an expressive generative model in the form of a conditional variational autoencoder, which learns a distribution of the change in pose at each step of a motion sequence. Furthermore, we introduce a flexible optimization-based approach that leverages HuMoR as a motion prior to robustly estimate plausible pose and shape from ambiguous observations. Through extensive evaluations, we demonstrate that our model generalizes to diverse motions and body shapes after training on a large motion capture dataset, and enables motion reconstruction from multiple input modalities including 3D keypoints and RGB(-D) videos.
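In spirit — heavily simplified, with invented dynamics and shapes, since the real model is a learned conditional VAE over full body pose and shape — the prior predicts a distribution over the *change* in pose at each step, which can be sampled to roll out a plausible motion:

```python
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM = 4  # toy pose vector; the real model uses full-body pose + shape

def motion_prior(prev_pose):
    """Stand-in for HuMoR's learned decoder: given the previous pose,
    return the mean and std of a Gaussian over the pose *change*."""
    mean_delta = 0.1 * np.tanh(prev_pose)   # invented toy dynamics
    std_delta = 0.05 * np.ones(POSE_DIM)
    return mean_delta, std_delta

def rollout(initial_pose, steps):
    """Sample a pose sequence by chaining per-step pose changes."""
    poses = [initial_pose]
    for _ in range(steps):
        mean_delta, std_delta = motion_prior(poses[-1])
        delta = rng.normal(mean_delta, std_delta)
        poses.append(poses[-1] + delta)
    return np.stack(poses)

sequence = rollout(np.zeros(POSE_DIM), steps=10)
print(sequence.shape)  # initial pose plus ten sampled steps
```

In the actual system this prior isn’t just sampled; it’s used as a regularizer inside an optimization, nudging the estimated sequence toward motions the model considers plausible even when the 2D observations are noisy or occluded.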

Google makes strides on equitable imaging

“I’m real black, like won’t show up on your camera phone,” sang Childish Gambino. It remains a good joke, but ten years later, it’s long past time for devices to be far fairer in how they capture and represent the world. I’m really happy to see my old teammates at Google focusing on just this area:

“Supernatural” offers home workouts in VR

Hmm—this looks slick, but I’m not sure that I want to have a big plastic box swinging around my face while I’m trying to get fit. As a commenter notes, “That’s just Beat Saber with someone saying ‘good job’ once in a while”—but a friend of mine says it’s great. ¯\_(ツ)_/¯

This vid (same poster frame but different content) shows more of the actual gameplay: