All posts by jnack

Design details of the Blackbird

I know it’s a little OT for this blog, but as I’m always fascinated with clever little design solutions, I really enjoyed this detailed look at the iconic SR-71 Blackbird. I had no idea about things like it having a little periscope, or that its turn radius is so great that pivoting 180º at speed would necessitate covering the distance between Dayton, Ohio, and Chicago (!). Enjoy:

AI: Cats n’ Cages

Things the internet loves:
Nicolas Cage
Cats
Mashups

Let’s do this:

Elsewhere, I told my son that I finally agree with his strong view that the live-action Lion King (which I haven’t seen) does look pretty effed up. 🙃

Demo: Camera Raw is coming to Photoshop for iPad

Nine years ago, Google spent a tremendous amount of money buying Nik Software, in part to get a mobile raw converter—which, as they were repeatedly told, didn’t actually exist. (“Still, a man hears what he wants to hear and disregards the rest…”)

If all that hadn’t happened, I likely never would have gone there, and had the acquisition not been so ill-advised & ill-fitting, I probably wouldn’t have come back to Adobe. Ah, life’s rich pageant… ¯\_(ツ)_/¯

Anyway, back in 2021, take ‘er away, Ryan Dumlao:

“How To Animate Your Head” in Character Animator

Let’s say you dig AR but want to, y’know, actually create instead of just painting by numbers (yielding whatever some filter maker deigns to provide). In that case, my friend, you’ll want to check out this guidance from animator/designer/musician/Renaissance man Dave Werner.

0:00 Intro
1:27 Character Animator Setup
7:38 After Effects Motion Tracking
14:14 After Effects Color Matching
17:35 Outro (w/ surprise cameo)

Online seminar tomorrow: Russell Brown discusses his latest photographic explorations

I had a ball schlepping all around Death Valley & freezing my butt off while working with Russell back in January, and this seminar sounds fun:

Oct 12, 2021; 7:00 – 8:30pm Eastern

Russell Preston Brown is the senior creative director at Adobe, as well as an Emmy Award-winning instructor. His ability to bring together the world of design and software development is a perfect match for Adobe products. In Russell’s 32 years of creative experience at Adobe, he has contributed to the evolution of Adobe Photoshop with feature enhancements, advanced scripts and development. He has helped the world’s leading photographers, publishers, art directors and artists to master the software tools that have made Adobe’s applications the standard by which all others are measured.

Strike a pose with Adobe AI

My colleagues Jingwan, Jimei, Zhixin, and Eli have devised new tech for re-posing bodies & applying virtual clothing:

Our work enables applications of posed-guided synthesis and virtual try-on. Thanks to spatial modulation, our result preserves the texture details of the source image better than prior work.

Check out some results (below), see the details of how it works, and stay tuned for more.

Snapchat embraces deepfakes

They’re using deepfakes for scripted micro-storytelling:

The new 10-episode Snap original series “The Me and You Show” taps into Snapchat’s Cameos — a feature that uses a kind of deepfake technology to insert someone’s face into a scene. Using Cameos, the show makes you the lead actor in comedy skits alongside one of your best friends by uploading a couple of selfies. […]

The Cameos feature is based on tech developed by AI Factory, a startup developing image and video recognition, analysis and processing technology that Snap acquired in 2019.  […]

According to Snap, more than 44 million Snapchat users engage with Cameos on a weekly basis and more than 16 million share Cameos with their friends.

I dunno—to my eye the results look like a less charming version of the old JibJab templates that were hot 20 years ago, but I’m 30 years older than the Snapchat core demographic, so what do I know?

New facial-puppeteering tech from the team behind Deep Nostalgia

The creative alchemists at D-ID have introduced “Speaking Portrait.” Per PetaPixel,

These can be made with any still photo and will animate the head while other parts stay static; backgrounds can’t be replaced. Still, the result below shows how movements and facial expressions performed by the real person are seamlessly added to a still photograph. The human can act as a sort of puppeteer of the still image.

What do you think?

AI-enhanced masking is coming to Lightroom, Camera Raw

I used to relish just how much Lightroom kicked Aperture’s butt when it came to making selective adjustments (for which, if I remember correctly, Aperture needed to rely on generating a “destructive” bitmap rendition to send to an outside editor). There’s no point in my mentioning this, I just like to live in the glorious past. 😉

But here’s some glorious future: Lightroom (both new & classic) plus Camera Raw are getting all kinds of AI-enhanced smart masking in the near future. Check out the team’s post for details, or just watch these 90 seconds:

[Via]

Come help me design The Future!

I’m incredibly excited to say that my team has just opened a really rare role to design AI-first experiences. From the job listing:

Together, we are working to inspire and empower the next generation of creatives. You will play an integral part, designing and prototyping exciting new product experiences that take full advantage of the latest AI technology from Adobe research. We’ll work iteratively to design, prototype, and test novel creative experiences, develop a deep understanding of user needs and craft new AI-first creative tools that empower users in entirely new and unimagined ways.

Your challenge is to help us pioneer AI-first creation experiences by creating novel experiences that are intuitive, empowering, and first of their kind.

By necessity that’s a little vague, but trust me, this stuff is wild (check out some of what I’ve been posting in the AI/ML category here), and I need a badass fellow explorer. I really want a partner who’s excited to have a full seat at the table alongside product & eng (i.e. you’re in the opposite of a service relationship where we just chuck things over the wall and say “make this pretty!”), and who’s excited to rapidly visualize a lot of ideas that we’ll test together.

We are at a fascinating inflection point, where computers learn to see more like people & can thus deliver new expressive superpowers. There will be many dead ends & many challenging ethical questions that need your careful consideration—but as Larry Page might say, it’s all “uncomfortably exciting.” 🔥

If you might be the partner we need, please get in touch via the form above, and feel free to share this opportunity with anyone who might be a great fit. Thanks!

“Float,” a beautiful short film

It’s odd to say “no spoilers” about a story that unfolds in less than three minutes, but I don’t want to say anything that would interfere with your experience. Just do yourself a favor and watch.

The fact that this was all shot entirely on iPhone is perhaps the least interesting part, but that’s not to say it’s unremarkable: when images of my own young kids pop up, shot on iPhones 10+ years ago, the difference is staggering—and yet taken wholly for granted. Heck, even the difference four years makes is night & day.

Google taps (heh) Project Jacquard to improve accessibility

It’s always cool to see people using tech to help make the world more accessible to everyone:

This research inspired us to use Jacquard technology to create a soft, interactive patch or sleeve that allows people to access digital, health and security services with simple gestures. This woven technology can be worn or positioned on a variety of surfaces and locations, adjusting to the needs of each individual. 

We teamed up with Garrison Redd, a Para powerlifter and advocate in the disability community, to test this new idea. 

Behind the scenes: Mandalorian & deepfakes

I hadn’t heard of Disney’s Gallery: The Mandalorian, but evidently it revealed more details about the Luke Skywalker scene. In response, according to Screen Rant,

VFX team Corridor Crew took the time to share their thoughts on the show’s process. From what they determined, Hamill was merely on set to provide some reference points for the creative team and the stand-in actor, Max Lloyd-Jones. The Mandalorian used deepfake technology to pull together Hamill’s likeness, and they combed through countless hours of Star Wars footage to find the best expressions.

I found the 6-minute segment pretty entertaining & enlightening. Check it out:

Adobe researchers show off new depth-estimation tech for regular images

I keep meaning to pour one out for my nearly-dead homie, Photoshop 3D (post to follow, maybe). We launched it back in 2007 thinking that widespread depth capture was right around the corner. But “Being early is the same as being wrong,” as Marc Andreessen says, and we were off by a decade (before iPhones started putting depth maps into images).

Now, though, the world is evolving further, and researchers are enabling apps to perceive depth even in traditional 2D images—no special capture required. Check out what my colleagues have been doing together with university collaborators:

[Via]

AR: How the giant Carolina Panther was made

By now you’ve probably seen this big gato bounding around:

https://twitter.com/Panthers/status/1437103615634726916?s=20

I’ve been wondering how it was done (e.g. was it something from Snap, using the landmarker tech that’s enabled things like Game of Thrones dragons to scale the Flatiron Building?). Fortunately, The Verge provides some insights:

In short, what’s going on is that an animation of the virtual panther, which was made in Unreal Engine, is being rendered within a live feed of the real world. That means camera operators have to track and follow the animations of the panther in real time as it moves around the stadium, like camera operators would with an actual living animal. To give the panther virtual objects to climb on and interact with, the stadium is also modeled virtually but is invisible.

This tech isn’t baked into an app, meaning you won’t be pointing your phone’s camera in the stadium to get another angle on the panther if you’re attending a game. The animations are intended to air live. In Sunday’s case, the video was broadcast live on the big screens at the stadium.

I look forward to the day when this post is quaint, given how frequently we’re all able to glimpse things like this via AR glasses. I give it 5 years, or maybe closer to 10—but let’s see.

More great roles open at Adobe: Lightroom & Camera PMs, 3D artist

Check ’em out!

Principal Product Manager – Photoshop Camera

Adobe is looking for a product manager to help build a world-class mobile camera app for Adobe—powered by machine learning, computer vision, and computational photography, and available on all platforms. This effort, led by Adobe VP and Fellow Marc Levoy, who is a pioneer in computational photography, will begin as part of our Photoshop Camera app. It will expand its core photographic capture capabilities, adding new computational features, with broad appeal to consumers, hobbyists, influencers, and pros. If you are passionate about mobile photography, this is your opportunity to work with a great team that will be changing the camera industry.

Product Manager, Lightroom Community & Education

Adobe is looking for a product manager to help build a world-class community and education experience within the Lightroom ecosystem of applications! We’re looking for someone to help create an engaging, rewarding, and inspiring community to help photographers connect with each other and increase customer satisfaction and retention, as well as create a fulfilling in-app learning experience. If you are passionate about photography, building community, and driving customer success, this is your opportunity to work with a great team that is driving the future of photography!

QA technical artist

Adobe is looking to hire a QA Technical Artist (contract role) to work with the Product Management team for Adobe Stager, our 3D staging and rendering application. The QA Technical Artist will analyze and contribute to the quality of the application through daily art production and involvement with product feedback processes. We are looking for a candidate interested in working on state-of-the-art 3D software while revolutionizing how it can be approachable for new generations of creators.

What English sounds like to non-speakers

Kinda OT, I know, but I was intrigued by this attempt to use gibberish to let English speakers hear what the language sounds like to non-speakers. All right!

Of it the New Yorker writes:

The song lyrics are in neither Italian nor English, though at first they sound like the latter. It turns out that Celentano’s words are in no language—they are gibberish, except for the phrase “all right!” In a television clip filmed several years later, Celentano explains (in Italian) to a “student” why he wrote a song that “means nothing.” He says that the song is about “our inability to communicate in the modern world,” and that the word “prisencolinensinainciusol” means “universal love.” […]

“Prisencolinensinainciusol” is such a loving presentation of silliness. Would any grown performer allow themselves this level of playfulness now? Wouldn’t a contemporary artist feel obliged to add a tinge of irony or innuendo to make it clear that they were “knowing” and “sophisticated”? It’s not clear what would be gained by darkening this piece of cotton candy, or what more you could know about it: it is perfect as is.

Register for Adobe Developers Live

Sounds like an interesting opportunity to nerd out (in the best sense) on October 4–5:

Adobe Developers Live brings together Adobe developers and experience builders with diverse backgrounds and a singular purpose – to create incredible end-to-end experiences. This two-day conference will feature important developer updates, technical sessions and community networking opportunities. 

There’s also a planned hackathon:

The hackathon brings together Adobe developers from across the global Adobe Experience Cloud community and Adobe engineering teams to connect, collaborate, contribute, and create solutions using the latest Experience Cloud products and tooling.

Come try editing your face using just text

A few months back, I mentioned that my teammates had connected some machine learning models to create StyleCLIP, a way of editing photos using natural language. People have been putting it to interesting, if ethically complicated, use:

Now you can try it out for yourself. Obviously it’s a work in progress, but I’m very interested in hearing what you think of both the idea & what you’re able to create.

And just because my kids love to make fun of my childhood bowl cut, here’s Less-Old Man Nack featuring a similar look, as envisioned by robots:

Photography: A rather amazing sunflower time lapse

This is glorious, if occasionally a bit xenomorph-looking. Happy Friday.

PetaPixel writes,

The plants featured in Neil Bromhall’s timelapses are grown in a blackened, window-less studio with a grow light serving as artificial sunlight.

“Plants require periods of day and night for photosynthesis and to stimulate the flowers and leaves to open,” the photographer tells PetaPixel. “I use heaters or coolers and humidifiers to control the studio condition for humidity and temperature. You basically want to recreate the growing conditions where the plants naturally thrive.”

Lighting-wise, Bromhall uses a studio flash to precisely control his exposure regardless of the time of day it is. The grow light grows the plants while the flash illuminates the photos.

Adobe 3D & Immersive is Hiring

Lots of cool-sounding roles are now accepting applications:


CURRENT OPEN POSITIONS

Sr. 3D Graphics Software Engineer – Research and Development

Seeking an experienced software engineer with expertise in 3D graphics research and engineering, a passion for interdisciplinary collaboration, and a deep sense of software craftsmanship to participate in the design and implementation of our next-generation 3D graphics software.

Senior 3D Graphics Software Engineer, 3D&I

Seeking an experienced Senior Software Engineer with a deep understanding of 3D graphics application engineering, familiarity with CPU and GPU architectures, and a deep sense of software craftsmanship to participate in the design and implementation of our next-generation collaborative 3D graphics software.

Senior 3D Artist

We’re hiring a Senior 3D Artist to work closely with an important strategic partner. You will act as the conduit between the partner and our internal product development teams. You have a deep desire to experiment with new technologies and design new and efficient workflows. The role is full-time and based in Portland or San Francisco. We’re also open to other West Coast cities such as Seattle and Los Angeles.

Principal Designer, 3DI

We’re looking for a Principal Designer to join Adobe Design and help drive the evolution of our Substance 3D and Augmented Reality ecosystem for creative users.

Contract Position – Performance Software Engineer

Click on the above links to see full job descriptions and apply online. Don’t see what you’re looking for? Send us your profile or portfolio. We are always looking for talented engineers and other experts in the 3D field, and we may have a future need for contractors or special projects.

“How video game rocks get made”

Last year I was delighted to help launch ultra-detailed 3D vehicles & environments, rendered in the cloud, right in Google Search:

Although we didn’t get to do so on my watch, I was looking forward to leveraging Unreal’s amazing Quixel library of photo-scanned 3D environmental assets. Here’s a look at how they’re made:

2-minute papers: How facial editing with GANs works

On the reasonable chance that you’re interested in my work, you might want to bookmark (or at least watch) this one. Two-Minute Papers shows how NVIDIA’s StyleGAN research (which underlies Photoshop’s Smart Portrait Neural Filter) has been evolving, recently being upgraded to Alias-Free GAN, which very nicely reduces funky artifacts such as “sticky beards” and “boiling” regions (hair, etc.):

https://youtu.be/0zaGYLPj4Kk

Side note: I continue to find the presenter’s enthusiasm utterly infectious: “Imagine saying that to someone 20 years ago. You would end up in a madhouse!” and “Holy mother of papers!”

1800s Astronomical Drawings vs. Modern NASA Images

The New York Public Library has shared some astronomical drawings by E.L. Trouvelot done in the 1870s, comparing them to contemporary NASA images. They write,

Trouvelot was a French immigrant to the US in the 1800s, and his job was to create sketches of astronomical observations at Harvard College’s observatory. Building off of this sketch work, Trouvelot decided to do large pastel drawings of “the celestial phenomena as they appear…through the great modern telescopes.”

[Via]

UI Faces enables easy avatar insertion

As I obviously have synthetic faces on my mind, here’s a rather cool tool for finding diverse images of people and adding them to design layouts:

UI Faces aggregates thousands of avatars which you can carefully filter to create your perfect personas or just generate random avatars.

Each avatar is tagged with age, gender, emotion, and hair color using Microsoft’s Face API, providing easier filtering and sorting.

Here’s how it integrates into Adobe XD:

Anonymize your photo automatically

Hmm—I’m not sure what to think about this & would welcome your thoughts. Promising to “Give people an idea of your appearance, while still protecting your true identity,” this Anonymizer service will take in your image, then generate multiple faces that vaguely approximate your characteristics:

Here’s what it made for me:

I find the results impressive but a touch eerie, and as I say, I’m not sure how to feel. Is this something you’d find useful (vs., say, just using something other than a photograph as your avatar)?

How Google’s new “Total Relighting” tech works

As I mentioned back in May,

You might remember the portrait relighting features that launched on Google Pixel devices last year, leveraging some earlier research. Now a number of my former Google colleagues have created a new method for figuring out how a portrait is lit, then imposing new light sources in order to help it blend into new environments.

Two-Minute Papers has put together a nice, accessible summary of how it works:

https://youtu.be/SEsYo9L5lOo

3D: A Rube Goldberg “exquisite corpse”

This fruit of a collaborative creation process, all keyed off of a single scene file, is something to behold, especially when viewed on a phone (where it approximates scrolling through a magical world):

For Dynamic Machines, I challenged 3D artists to guide a chrome ball from point A to point B in the most creative way possible. Nearly 2,000 artists entered, and in this video, the Top 100 renders are featured from an incredible community of 3D artists!

Little Photoshop of Horrors

Heh—my Adobe video eng teammate Eric Sanders passed along this fun poster (artist unknown):

It reminds me of a silly thing I made years ago when our then-little kids had a weird fixation on light fixtures. Oddly enough, this remains the one & presumably only piece of art I’ll ever get to show Matt Groening, as I got to meet him at dinner with Lynda Weinman back then. (Forgive the name drop; I have so few!)

Adobe makes a billion-dollar bet on cloud video collaboration

Back in 1999, before I worked at Adobe, a PM there called me to inquire about my design agency’s needs as we worked across teams and offices spread over multiple time zones. In the intervening years the company has tried many approaches, some more successful than others (what up, Version Cue! yeah, now who feels old…), but now they’re making the biggest bet I’ve seen:

With over a million users across media and entertainment companies, agencies, and global brands, Frame.io streamlines the video production process by enabling video editors and key project stakeholders to seamlessly collaborate using cloud-first workflows.

Creative Cloud customers, from video editors, to producers, to marketers, will benefit from seamless collaboration on video projects with Frame.io workflow functionality built natively in Adobe Creative Cloud applications like Adobe Premiere Pro, Adobe After Effects, and Adobe Photoshop.

I can’t wait to see how all this plays out—and if you’re looking for the ear of a PM on point who’d like to hear your thoughts, well, there’s one who lives in my house. 🙂

Come guide Photoshop by joining its new Beta program

“Be bold, and mighty forces will come to your aid.” – Goethe

So I said nearly 15 (!) years ago (cripes…) when we launched the first Photoshop public beta. Back then the effort required moving heaven and earth, whereas now it’s a matter of “oh hai, click that little icon that you probably neglect in your toolbar; here be goodies.” Such is progress, as the extraordinary becomes the ordinary. Anyhoo:

Photoshop Beta is debuting this month. It is a new way Creative Cloud members can give feedback to the Photoshop team. Photoshop Beta is an exciting opportunity to test and provide feedback about stability, performance, and occasionally new features by using a version of Photoshop before it is released.

To get Photoshop Beta, Creative Cloud members can install it from the Beta section of the Creative Cloud desktop app. Look for Photoshop Beta and simply click Install.

To provide feedback, head over to the Photoshop Ecosystem Adobe Community and create a new post using the “Beta” topic. Stay tuned for a brand-new forum experience for the Photoshop Beta coming soon.

Design: Why Monorails almost never work out

I was such a die-hard Apple dead-ender in the ’90s that I’d often fruitlessly pitch Macs to anyone who’d listen (and many who wouldn’t). My roommate would listen to my rants about the vile inelegance of Windows, then gently shake his head and say, “Look, I get it. But the Mac is like a monorail: it’s sleek, it’s beautiful, and it’s just stuck on some little loop.” Then off he went to buy a new gaming PC.

This funny, informative video lays out the actual mechanics & economics of why such “futuristic” designs have rarely made sense in the real world. Check it out.