All posts by jnack

Lightroom is looking for a new PM Director

Talk about an amazing gig that comes around once in ~forever:

Lightroom is the world’s top photography ecosystem with everything needed to edit, manage, store and share your images across a connected system on any device and the web! The Digital Imaging group at Adobe includes the Photoshop and Lightroom ecosystems.

We are looking for the next leader of the Lightroom ecosystem to build and execute the vision and strategy to accelerate the growth of Lightroom’s mobile, desktop, web and cloud products into the future. This person should have a proven track record as a great product management leader, deep empathy for customers, and a passion for photography. This is a high-visibility role on one of Adobe’s most important projects.

Jump in or tell a promising friend!

Adobe releases Arm version of Lightroom for Windows and macOS - The Verge

NVIDIA GauGAN enables photo creation through words

“Days of Miracles & Wonder,” part ∞:

Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple trees in the foreground, or clouds in the sky.

It doesn’t just create realistic images — artists can also use the demo to depict otherworldly landscapes.

Here’s a 30-second demo:

And here’s a glimpse at Tatooine:

An awesome Adobe PM opportunity: AI for video

Check out this chance to work with some of my favorite engineering collaborators:

We believe AI will revolutionize the next generation of creative tools, automating repetitive tasks while leaving creative agency with the user. This is your opportunity to join a world-class team of researchers, engineers, and software developers who are inventing the next generation of creative tools powered by machine learning. In this role you will help Adobe Research launch new products and experiences that bring AI to video creative tools.

Specific responsibilities:

— Launch products enhanced by machine learning.

— Interact with customers as often as possible—through user interviews, user testing, social media, and wherever else you can find them—to understand unspoken, unmet needs. Advocate for the customer.

— Identify and quantify market fit and opportunities across user segments and platforms. 

— Guide our scientists and engineers with your deep understanding of market and technology trends, and the competitive landscape.

— Define a comprehensive product roadmap and strategy.

— Communicate our product strategy to executives.

— Form and test data-driven hypotheses about which choices will increase market impact. 

— Prioritize what needs to get done versus what could be done.

If you might be a good fit, please throw your hat in the ring, or tell a friend who might want to jump in!

Design: Google’s “Dragonscale” solar roofs

For the last few years I’ve been curiously watching what I affectionately call “nerd terrariums” being erected on Google’s main campus. Now the team behind their unique roof designs is providing some insight into how they work:

These panels coupled with the pavilion-like rooflines let us capture the power of the sun from multiple angles. Unlike a flat roof, which generates peak power at the same time of the day, our dragonscale solar skin will generate power during an extended amount of daylight hours… When up-and-running, Charleston East and Bay View will have about 7 megawatts of installed renewable power—generating roughly 40% of their energy needs.

Four construction team members install BIPV (building-integrated photovoltaics) at Google’s Bay View office development.

Check out a quick overview—literally:

“Kurt Vonnegut: Unstuck In Time”

I was such a happy dad recently when my 12yo Henry (who, being an ADD guy like me, often finds long texts to be a slog) got completely engrossed in the graphic novel version of Slaughterhouse-Five and read it in an evening. Meanwhile his older brother was working his way through Cat’s Cradle—one of my all-time faves.

Now I’m pleased to see the arrival of Unstuck In Time, a new documentary covering Vonnegut’s life & work:

Tangentially (natch), this brought to mind the Vonnegut scenes in Back To School—where I first heard the phrase “F me?!”

Oh, and then there’s one of my favorite encapsulations of life wisdom—a commencement address wrongly attributed to Vonnegut, and tonally right in his wheelhouse.

Color in Skyfall

Per Daring Fireball:

Devan Scott put together a wonderful, richly illustrated thread on Twitter contrasting the use of color grading in Skyfall and Spectre. Both of those films were directed by Sam Mendes, but they had different cinematographers — Roger Deakins for Skyfall, and Hoyte van Hoytema for Spectre. Scott graciously and politely makes the case that Skyfall is more interesting and fully-realized because each new location gets a color palette of its own, whereas the entirety of Spectre is in a consistent color space.

Click or tap on through to the thread; I think you’ll enjoy it.

Best Inventions of 2021: Adobe Super Resolution

Congrats to Eric Chan & the whole crew for making Time’s list:

Most of the photos we take these days look great on the small screen of a phone. But blow them up, and the flaws are unmistakable. So how do you clean up your snaps to make them poster-worthy? Adobe’s new Super Resolution feature, part of its Lightroom and Photoshop software, uses machine learning to boost an image’s resolution up to four times its original pixel count. It works by looking at its database of photos similar to the one it’s upscaling, analyzing millions of pairs of high- and low-resolution photos (including their raw image data) to fill in the missing data. The result? Massive printed smartphone photos worthy of a primo spot on your living-room wall. —Jesse Will
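For intuition only (this is emphatically not Adobe’s model, and every name below is mine), here’s a minimal sketch of the sub-pixel-convolution trick many learned upscalers use: a small network, trained on pairs of low- and high-resolution photos like those the article describes, predicts extra channels that get rearranged into spatial detail, turning an H × W image into 2H × 2W (four times the pixels):

```python
# A toy learned-upscaler skeleton (not Adobe Super Resolution) showing how
# sub-pixel convolution quadruples pixel count: predict 4x the channels,
# then rearrange them into a 2x-wider, 2x-taller image.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 3 * scale**2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into spatial detail
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

x = torch.rand(1, 3, 256, 256)  # a small photo (batch, RGB, H, W)
y = TinyUpscaler()(x)           # torch.Size([1, 3, 512, 512]): 4x the pixels
```

Untrained, this just produces mush; the “machine learning” part is fitting those conv weights on the millions of high/low-resolution pairs the article mentions.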

[Via Barry Young]

ProsePainter enables painting via descriptions

Type the name of something (e.g. “beautiful flowers”), then use a brush to specify where you want it applied. Here, just watch this demo:

The project is open source, compliments of the creators of ArtBreeder.

Google “Pet Portraits” finds doppelgängers in art

Super fun:

Today we are introducing Pet Portraits, a way for your dog, cat, fish, bird, reptile, horse, or rabbit to discover their very own art doubles among tens of thousands of works from partner institutions around the world. Your animal companion could be matched with ancient Egyptian figurines, vibrant Mexican street art, serene Chinese watercolors, and more. Just open the rainbow camera tab in the free Google Arts & Culture app for Android and iOS to get started and find out if your pet’s look-alikes are as fun as some of our favorite animal companions and their matches.

Check out my man Seamus:


Niantic introduces the Lightship AR dev platform

Hmm—I want to get excited here, but as I’ve previously detailed, I’m finding it tough.

Pokémon Go remains the one-hit wonder of the location-based content/gaming space. That’s still true 5+ years after its launch, during which time Niantic has launched & killed Harry Potter: Wizards Unite, Microsoft has done the same with Minecraft Earth, and Google has (AFAIK) followed suit with its location-based gaming API. Given all that, I’m not sure we’ll turn a corner until real AR glasses arrive.

Still & all, here it is:

The Niantic Lightship Augmented Reality Developer Kit, or ARDK, is now available for all AR developers around the world at Lightship.dev. To celebrate the launch, we’re sharing a glimpse of the earliest AR applications and demo experiences from global brand partners and developer studios from across the world.

We’re also announcing the formation of Niantic Ventures to invest in and partner with companies building the future of AR. With an initial $20 million fund, Niantic Ventures will invest in companies building applications that share our vision for the Real-World Metaverse and contribute to the global ecosystem we are building. To learn more about Niantic Ventures, go to Lightship.dev.

It’s cool that “The Multiplayer API is free for apps with fewer than 50,000 monthly active users,” and even above that number, it’s free to everyone for the first six months.

FaceStudio enables feature-by-feature editing via GANs

In traditional graphics work, vectorizing a bitmap image produces a bunch of points & lines that the computer then renders as pixels, producing something that approximates the original. Generally there’s a trade-off between editability (relatively few points, requiring a lot of visual simplification, but easy to see & manipulate) and fidelity (tons of points, faithful to the original, but heavy & hard to edit).

Importing images into a generative adversarial network (GAN) works in a similar way: pixels are converted into latent vectors which are then re-rendered as pixels—and guess what, it’s a generally lossy process where fidelity & editability often conflict. When the importer tries to come up with a reasonable set of vectors that fits the entire face, it’s easy to end up with weird-looking results. Additionally, changing one attribute (e.g. eyebrows) may cause changes to others (e.g. hairline). I saw a case once where making someone look in another direction caused them to grow a goatee (!).
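To make that “pixels to vectors” step concrete, here’s a generic sketch of GAN inversion by latent optimization (the broad technique described above, not FaceStudio’s actual method; “generator” stands in for any pretrained GAN, and all the names here are mine):

```python
# Illustrative only: generic GAN inversion via latent optimization.
# Not FaceStudio's method; "generator" stands in for any pretrained GAN.
import torch
import torch.nn.functional as F

def invert(generator, target: torch.Tensor, latent_dim: int = 512,
           steps: int = 500, lr: float = 0.01) -> torch.Tensor:
    """Search for a latent vector whose rendering approximates `target`."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = generator(z)              # vectors -> pixels
        loss = F.mse_loss(rendered, target)  # reconstruction error
        loss.backward()                      # pixels -> gradient on z
        optimizer.step()
    # The whole face now rides on this one vector, which is why editing
    # one attribute can drag others (hairline, goatee...) along with it.
    return z.detach()
```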

My teammates’ FaceStudio effort proposes to address this problem by sidestepping the challenge of fitting the entire face, instead letting you broadly select a region and edit just that. Check it out:

George Orwell on your self-delusional product metrics

Okay, not directly, but generally dead-on:

“We are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on a battlefield.” – George Orwell, 1946.

“…or at reorg time.” — JNack

Mental Canvas enables 3D drawing

10 years ago we put a totally gratuitous (but fun!) 3D view of the layers stack into Photoshop Touch. You couldn’t actually edit in that mode, but people loved seeing their 2D layers with 3D parallax.

More recently, apps are endeavoring to turn 2D photos into 3D canvases via depth analysis (see recent Adobe research), object segmentation, etc. That is, of course, an extension of what we had in mind when adding 3D to Photoshop back in 2007 (!)—but depth capture & extrapolation weren’t widely available, and it proved too difficult to shoehorn everything into the PS editing model.

Now Mental Canvas promises to enable some truly deep expressivity:

I do wonder how many people could put it to good use. (Drawing well is hard; drawing well in 3D…?) I Want To Believe… It’ll be cool to see where this goes.

MAX Sneak: Smarter vectorization through “Make It Pop”

Semantic segmentation + tracing FTW!

By using machine learning to understand the scene, Project Make it Pop makes it easy to create and customize an illustration by distinguishing between the background and the foreground as well as recognizing connected shapes and structures.

And you’ve gotta stick around for the whole thing, or just jump to around 2:52 where I literally started saying “WTF…?”

MAX Sneak: Morpheus edits faces in video

What if Photoshop’s breakthrough Smart Portrait, which debuted at MAX last year, could work over time?

One may think this is an easy task, as all that is needed is to apply Smart Portrait to every frame in the video. Not only is this tedious, but it’s also visually unappealing due to a lack of temporal consistency.

In Project Morpheus, we are building a powerful video face editing technology that can modify someone’s appearance in an automated manner, with smooth and consistent results. 

Check it out:

Come try Photoshop Web!

I kinda can’t believe it, but the team has gotten the old gal (plus Illustrator) running right in web browsers!

VP of design Eric Snowden writes,

Extending Illustrator and Photoshop to the web (beta) will help you share creative work from the Illustrator and Photoshop desktop and iPad apps for commenting. Your collaborators can open and view your work in the browser and provide feedback. You’ll also be able to make basic edits without having to download or launch the apps.

Creative Cloud Spaces (beta) are a shared place that brings content and context together, where everyone on your team can access and organize files, libraries, and links in a centralized location.

Creative Cloud Canvas (beta) is a new surface where you and your team can display and visualize creative work to review with collaborators and explore ideas together, all in real-time and in the browser.

From the FAQ:

Adobe extends Photoshop to the web for sharing, reviewing, and light editing of Photoshop cloud documents (.psdc). Collaborators can open and view your work in the browser, provide feedback, and make basic edits without downloading the app.

Photoshop on the web beta features are now available for testing and feedback. For help, please visit the Adobe Photoshop beta community.

So, what do you think?

Inside the 50-megapixel Pixel 6 camera

“Folded optics” & computational zoom FTW! The ability to apply segmentation and selective blur (e.g. to the background behind a moving cyclist) strikes me as especially smart.

On a random personal note, it’s funny to see demo files for features like Magic Eraser and think, “Hey, I know that guy!” much like I did with Content-Aware Fill eleven (!) years ago. And it’s fun that some of the big brains I got to work with at Google have independently come over to collaborate at Adobe. It’s a small, weird world.

DJI Ronin 4D looks amazing

Built-in gimbal, 8K rez, LIDAR rangefinder for low-light focusing—let’s go!

It commands a pro price tag, too. Per The Verge:

The 6K version costs $7,199, the 8K version is $11,499, and both come with a decent kit: the gimbal, camera, LIDAR range finder, a monitor and hand grips / top handle, a carrying case, and a battery (the 8K camera also comes with a 1TB SSD). In the realm of production cameras and stabilization systems, that’s actually on the lower end (DJI’s cinema-focused Ronin 2 stabilizer costs over $8,000 without any camera attached, and Sony’s FX9 6K camera costs $11,000 for just the body), but if you were hoping to use the LIDAR focus system to absolutely nail focus in your vlogs, you may want to rethink that.

“Dogs & cats morphing together—mass hysteria!”

It’s that thing where you wake up, see some exciting research, tab over to Slack to share it with your team—and then notice that the work is from your teammates. 😝

Check out StyleAlign from my teammate Eli Shechtman & collaborators. Among other things, they’ve discovered interesting, useful correspondences in ML models for very different kinds of objects:

We find that the child model’s latent spaces are semantically aligned with those of the parent, inheriting incredibly rich semantics, even for distant data domains such as human faces and churches. Second, equipped with this better understanding, we leverage aligned models to solve a diverse set of tasks. In addition to image translation, we demonstrate fully automatic cross-domain image morphing…

Here’s a little taste of what it enables:

And to save you the trouble of looking up the afore-referenced Ghostbusters line, here ya go. 👻

Google enables Pixel -> Snap in two taps

I was so excited to build an AR stack for Google Lens, aiming to bring realtime magic to the default camera on billions of phones. Sadly, after AR Playground went out the door three years ago & the world shrugged, Google lost interest.

At least they’re letting others like Snap grab the mic.

Dubbed “Quick Tap to Snap,” the new feature will enable users to tap the back of the device twice to open the Snapchat camera directly from the lock screen. Users will have to authenticate before sending photos or videos to a friend or their personal Stories page. 

Snapchat’s Pixel service will also include extra augmented-reality lenses and integrate some Google features, like live translation in the chat feature, according to the company.

I wish Apple would offer similar access to third-party camera apps like Halide Camera, etc. Its absence has entirely killed my use of those apps, no matter how nice they may be.

Plus Code addresses make the world more navigable

Finding my grandmother’s home in Ireland was one of the weirder adventures I’ve experienced. Directions were literally “Go to the post office and ask for directions.” This worked in 1984, but when we visited again in 2007, the P.O. was defunct, so we literally had to ask some random neighbor on the road—who of course knew the way!

Much of the world similarly operates without the kind of street names & addresses most of us take for granted, and Google and others are working to enable Plus Code addresses to help people get around. Check out how it works:

Google writes,

Previously, creating addresses for an entire town or village could take years. Address Maker shortens this time to as little as a few weeks — helping under-addressed communities get on the map quickly, while also reducing costs. Address Maker allows organizations to easily assign addresses and add missing roads, all while making sure they work seamlessly in Google Maps and Maps APIs. Governments and NGOs in The Gambia, Kenya, India, South Africa and the U.S. are already using Address Maker, with more partners on the way. If you’re part of a local government or NGO and think Address Maker could help your community, reach out to us here: g.co/maps/addressmaker.
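If you’re curious how a Plus Code actually gets built, the scheme is simple enough to sketch from the published Open Location Code spec (this toy version is mine; the official open-source libraries at github.com/google/open-location-code are what you’d use in practice):

```python
# Toy encoder for standard 10-digit Plus Codes, written from the published
# Open Location Code spec; use Google's official libraries in practice.
OLC_ALPHABET = "23456789CFGHJMPQRVWX"  # base-20, avoids look-alike characters

def encode_plus_code(lat: float, lng: float) -> str:
    """Encode lat/lng into a 10-digit Plus Code (cells roughly 14 m square)."""
    # Work in integer units of 1/8000 of a degree (the 10-digit resolution)
    # so repeated division stays exact.
    lat_units = min(int((min(max(lat, -90.0), 90.0) + 90.0) * 8000),
                    180 * 8000 - 1)
    lng_units = int(((lng + 180.0) % 360.0) * 8000)
    code = []
    for i in range(4, -1, -1):  # most significant base-20 digit pair first
        code.append(OLC_ALPHABET[(lat_units // 20**i) % 20])
        code.append(OLC_ALPHABET[(lng_units // 20**i) % 20])
    # Each character pair is one latitude digit plus one longitude digit,
    # so every pair shrinks the cell by 20x per axis; '+' follows digit 8.
    return "".join(code[:8]) + "+" + "".join(code[8:])

print(encode_plus_code(37.4220, -122.0841))  # 849VCWC8+R9 (the Googleplex)
```

Because each pair refines the cell by a factor of 20 per axis, truncating a code simply gives you a bigger box around the same spot, which is handy for sharing approximate locations.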

Design details of the Blackbird

I know it’s a little OT for this blog, but as I’m always fascinated by clever little design solutions, I really enjoyed this detailed look at the iconic SR-71 Blackbird. I had no idea about things like it having a little periscope, or that its turn radius is so great that pivoting 180º at speed would necessitate covering the distance between Dayton, Ohio and Chicago (!). Enjoy:

AI: Cats n’ Cages

Things the internet loves:
Nicolas Cage
Cats
Mashups

Let’s do this:

Elsewhere, I told my son that I finally agree with his strong view that the live-action Lion King (which I haven’t seen) does look pretty effed up. 🙃

Demo: Camera Raw is coming to Photoshop for iPad

Nine years ago, Google spent a tremendous amount of money buying Nik Software, in part to get a mobile raw converter—which, as they were repeatedly told, didn’t actually exist. (“Still, a man hears what he wants to hear and disregards the rest…”)

If all that hadn’t happened, I likely never would have gone there, and had the acquisition not been so ill-advised & ill-fitting, I probably wouldn’t have come back to Adobe. Ah, life’s rich pageant… ¯\_(ツ)_/¯

Anyway, back in 2021, take ‘er away, Ryan Dumlao:

“How To Animate Your Head” in Character Animator

Let’s say you dig AR but want to, y’know, actually create instead of just painting by numbers (accepting whatever some filter maker deigns to provide). In that case, my friend, you’ll want to check out this guidance from animator/designer/musician/Renaissance man Dave Werner.

0:00 Intro
1:27 Character Animator Setup
7:38 After Effects Motion Tracking
14:14 After Effects Color Matching
17:35 Outro (w/ surprise cameo)

Online seminar tomorrow: Russell Brown discusses his latest photographic explorations

I had a ball schlepping all around Death Valley & freezing my butt off while working with Russell back in January, and this seminar sounds fun:

Oct 12, 2021; 7:00 – 8:30pm Eastern

Russell Preston Brown is the senior creative director at Adobe, as well as an Emmy Award-winning instructor. His ability to bring together the world of design and software development is a perfect match for Adobe products. In Russell’s 32 years of creative experience at Adobe, he has contributed to the evolution of Adobe Photoshop with feature enhancements, advanced scripts and development. He has helped the world’s leading photographers, publishers, art directors and artists to master the software tools that have made Adobe’s applications the standard by which all others are measured.