Category Archives: Miscellaneous

More great roles open at Adobe: Lightroom & Camera PMs, 3D artist

Check ’em out!

Principal Product Manager – Photoshop Camera

Adobe is looking for a product manager to help build a world-class mobile camera app for Adobe—powered by machine learning, computer vision, and computational photography, and available on all platforms. This effort, led by Adobe VP and Fellow Marc Levoy, who is a pioneer in computational photography, will begin as part of our Photoshop Camera app. It will expand its core photographic capture capabilities, adding new computational features, with broad appeal to consumers, hobbyists, influencers, and pros. If you are passionate about mobile photography, this is your opportunity to work with a great team that will be changing the camera industry.

Product Manager, Lightroom Community & Education

Adobe is looking for a product manager to help build a world-class community and education experience within the Lightroom ecosystem of applications! We’re looking for someone to help create an engaging, rewarding, and inspiring community to help photographers connect with each other and increase customer satisfaction and retention, as well as create a fulfilling in-app learning experience. If you are passionate about photography, building community, and driving customer success, this is your opportunity to work with a great team that is driving the future of photography!

QA technical artist

Adobe is looking to hire a QA Technical Artist (contract role) to work with the Product Management team for Adobe Stager, our 3D staging and rendering application. The QA Technical Artist will analyze and contribute to the quality of the application through daily art production and involvement with product feedback processes. We are looking for a candidate interested in working on state-of-the-art 3D software while helping make it approachable for new generations of creators.

What English sounds like to non-speakers

Kinda OT, I know, but I was intrigued by this attempt to use gibberish to let English speakers hear what the language sounds like to non-speakers. All right!

Of it the New Yorker writes:

The song lyrics are in neither Italian nor English, though at first they sound like the latter. It turns out that Celentano’s words are in no language—they are gibberish, except for the phrase “all right!” In a television clip filmed several years later, Celentano explains (in Italian) to a “student” why he wrote a song that “means nothing.” He says that the song is about “our inability to communicate in the modern world,” and that the word “prisencolinensinainciusol” means “universal love.” […]

“Prisencolinensinainciusol” is such a loving presentation of silliness. Would any grown performer allow themselves this level of playfulness now? Wouldn’t a contemporary artist feel obliged to add a tinge of irony or innuendo to make it clear that they were “knowing” and “sophisticated”? It’s not clear what would be gained by darkening this piece of cotton candy, or what more you could know about it: it is perfect as is.

Adobe 3D & Immersive is Hiring

Lots of cool-sounding roles are now accepting applications:


CURRENT OPEN POSITIONS

Sr. 3D Graphics Software Engineer – Research and Development

Seeking an experienced software engineer with expertise in 3D graphics research and engineering, a passion for interdisciplinary collaboration, and a deep sense of software craftsmanship to participate in the design and implementation of our next-generation 3D graphics software.

Senior 3D Graphics Software Engineer, 3D&I

Seeking an experienced Senior Software Engineer with a deep understanding of 3D graphics application engineering, familiarity with CPU and GPU architectures, and a deep sense of software craftsmanship to participate in the design and implementation of our next-generation collaborative 3D graphics software.

Senior 3D Artist

We’re hiring a Senior 3D Artist to work closely with an important strategic partner. You will act as the conduit between the partner and our internal product development teams. You have a deep desire to experiment with new technologies and design new and efficient workflows. The role is full-time and based in Portland or San Francisco. Also open to other West Coast cities such as Seattle and Los Angeles.

Principal Designer, 3DI

We’re looking for a Principal Designer to join Adobe Design and help drive the evolution of our Substance 3D and Augmented Reality ecosystem for creative users.

Contract Position – Performance Software Engineer

Click on the above links to see full job descriptions and apply online. Don’t see what you’re looking for? Send us your profile or portfolio. We are always looking for talented engineers and other experts in the 3D field, and we may have a future need for contractors or special projects.

3D: A Rube Goldberg “exquisite corpse”

This fruit of a collaborative creation process, all keyed off of a single scene file, is something to behold, especially when viewed on a phone (where it approximates scrolling through a magical world):

For Dynamic Machines, I challenged 3D artists to guide a chrome ball from point A to point B in the most creative way possible. Nearly 2,000 artists entered, and in this video, the Top 100 renders are featured from an incredible community of 3D artists!

Adobe makes a billion-dollar bet on cloud video collaboration

Back in 1999, before I worked at Adobe, a PM there called me to inquire about my design agency’s needs as we worked across teams and offices spread over multiple time zones. In the intervening years the company has tried many approaches, some more successful than others (what up, Version Cue! yeah, now who feels old…), but now they’re making the biggest bet I’ve seen:

With over a million users across media and entertainment companies, agencies, and global brands, Frame.io streamlines the video production process by enabling video editors and key project stakeholders to seamlessly collaborate using cloud-first workflows.

Creative Cloud customers, from video editors, to producers, to marketers, will benefit from seamless collaboration on video projects with Frame.io workflow functionality built natively in Adobe Creative Cloud applications like Adobe Premiere Pro, Adobe After Effects, and Adobe Photoshop.

I can’t wait to see how all this plays out—and if you’re looking for the ear of a PM on point who’d like to hear your thoughts, well, there’s one who lives in my house. 🙂

Come guide Photoshop by joining its new Beta program

“Be bold, and mighty forces will come to your aid.” – Goethe

So I said nearly 15 (!) years ago (cripes…) when we launched the first Photoshop public beta. Back then the effort required moving heaven and earth, whereas now it’s a matter of “oh hai, click that little icon that you probably neglect in your toolbar; here be goodies.” Such is progress, as the extraordinary becomes the ordinary. Anyhoo:

Photoshop Beta is debuting this month. It is a new way Creative Cloud members can give feedback to the Photoshop team. Photoshop Beta is an exciting opportunity to test and provide feedback about stability, performance, and occasionally new features by using a version of Photoshop before it is released.

To get Photoshop Beta, Creative Cloud members can install it from the Beta section of the Creative Cloud desktop app. Look for Photoshop Beta and simply click Install.

To provide feedback, head over to the Photoshop Ecosystem Adobe Community and create a new post using the “Beta” topic. Stay tuned for a brand-new forum experience for the Photoshop Beta coming soon.

“The Art of Logic”

I quite enjoyed this Talk at Google by mathematician & concert pianist (what a slouch!) Eugenia Cheng. Wait, wait, don’t go—I swear it’s infinitely more down-to-earth & charming than one would think. Among other things she uses extremely accessible math (er, “maths” 🙄) to illuminate touchy subjects like societal privilege, diet, and exercise. It’s also available in podcast form.

Emotions are powerful. In newspaper headlines and on social media, they have become the primary way of understanding the world. With her new book “The Art of Logic: How to Make Sense in a World that Doesn’t”, Eugenia has set out to show how mathematical logic can help us see things more clearly – and know when politicians and companies are trying to mislead us. This talk, like the book, is filled with useful real-life examples of logic and illogic at work and an essential guide to decoding modern life.

“For Madmen Only”

“I will give you Del’s body, and it’s a great body, because you can study the effects of smoking, alcohol, cocaine, and heroin on the brain. All I need is the skull.”

So said Charna Halpern, the longtime creative partner of improv legend Del Close, who insisted that his skull be donated for use on stage (e.g. in Hamlet). To say that he sounds like a character would be an incredible understatement, and this new documentary about his life & work looks rather amazing:

Adobe/Google contributor named “2021 Significant New Researcher”

It’s been cool to watch my Adobe & Google colleagues (who sometimes hop back & forth over that fence) collaborating on the imaging-savvy Halide language, and now one of the contributors is getting recognized by ACM SIGGRAPH:

ACM SIGGRAPH is pleased to present the 2021 Significant New Researcher Award to Jonathan Ragan-Kelley for his outstanding contributions to systems and compilers in rendering and computational photography. 

Jonathan is best known for his work on the language and compiler Halide, which has become the industry standard for computational photography and image processing. Performance has always been at the heart of computer graphics. At a time when we can’t rely on Moore’s law alone, efficiently leveraging modern hardware such as CPUs and GPUs is extremely challenging because of different levels of parallelism and differing memory hierarchies. By cleanly separating an algorithm from how it is optimized, Halide provides a new set of abstractions that make it much easier to achieve high performance. Code written in Halide tends to be much more concise than C code (2x-10x shorter) and runs much more efficiently (2x-20x faster) across a range of different processors. The compiler is open source and has had significant impact in industry, including powering much of the Google Android Camera app and playing a critical role in making the Adobe Photoshop iPad app possible.

[Via Sylvain Paris]
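If you're curious what "separating an algorithm from how it is optimized" means in practice, here's a toy sketch. It's plain Python, not Halide (whose front end is an embedded C++ DSL), and the function names are my own; the point is just that the algorithm (what each pixel equals) stays fixed while the schedule (how the loops are organized) varies, and every schedule must produce identical pixels:

```python
# Toy illustration of Halide's core idea: one algorithm, many schedules.

def blur_algorithm(img, x):
    """Algorithm: 1-D 3-tap box blur with clamped edges."""
    n = len(img)
    return (img[max(x - 1, 0)] + img[x] + img[min(x + 1, n - 1)]) // 3

def schedule_root(img):
    """Schedule A: compute every pixel in one flat pass."""
    return [blur_algorithm(img, x) for x in range(len(img))]

def schedule_tiled(img, tile=4):
    """Schedule B: compute in tiles, as a real schedule might for cache locality."""
    out = []
    for start in range(0, len(img), tile):
        for x in range(start, min(start + tile, len(img))):
            out.append(blur_algorithm(img, x))
    return out

img = [0, 30, 60, 90, 120, 90, 60, 30, 0, 0]
assert schedule_root(img) == schedule_tiled(img)  # same pixels, different loop nests
```

In Halide itself, swapping schedules is a one-line change (e.g. adding a tiling directive) rather than a rewrite, which is what makes exploring the performance space so much cheaper than in hand-tuned C.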

Premiere Pro joins the roster of M1-native Adobe apps

Happy news, per Digital Trends:

Adobe has just announced Premiere Pro runs natively on the new chip architecture, joining a stable of the company’s other apps in making the switch.

Premiere Pro has actually been available on M1-enabled Macs since December 2020, but ever since then, it has only been offered as a beta. Now, though, the full version has been launched to the public. […]

It is not the first app Adobe has migrated to Apple’s new platform, though. Lightroom made the leap in December 2020, Photoshop followed in March 2021, then Lightroom Classic, Illustrator, and InDesign arrived in June.

Design: New Lego T2 VW bus

Greetings from Leadville, Colorado, which on weekends is transformed to an open-air rolling showroom for Sprinter vans. (Aside: I generally feel like I’m doing fine financially, but then I think, “Who are these armies of people dropping 200g’s on tarted-up delivery vans?!”) They’re super cool, but we’re kicking it old-/small-school in our VW Westy. Thus you know I’m thrilled to see this little beauty rolling out of Lego factories soon:

Derek DelGaudio’s “In & Of Itself” is mesmerizing

Oh my God… what an amazing film! I’d heard my friends rave, and I don’t know what took me so long to watch it. I bounced between slack-jawed & openly weeping. Here’s just a taste:

Prior to watching, I’d really enjoyed Derek’s appearance on Fresh Air:

And totally tangentially (as it’s not at all related to Derek’s style of showmanship), there’s SNL’s hilarious So You’re Willing to Date a Magician:

AI: An amazing Adobe PM opportunity

When I saw what Adobe was doing to harness machine learning to deliver new creative superpowers, I knew I had to be part of it. If you’re a seasoned product manager & if this mission sounds up your alley, consider joining me via this new Principal PM role:

Neural Filters is a new ML/GAN based set of creative features that recently launched in Photoshop and will eventually expand to the entire suite of Creative Cloud apps, helping to establish the foundations of AI-powered creative tools. The applications of these ML-backed technologies range from imaginative portrait edits, like adjusting the age of a subject, to colorizing B/W images to restoring old photos. As the technology evolves so too will its applicability to other media like illustrations, video, 3D, and more.

The Principal PM will contribute to the strategy definition in terms of investments in new editing paradigms, training models, and broadening the applicability of Neural Filters in apps like Photoshop, Fresco, After Effects, and Aero!

Tell me more, you say? But of course! The mission, per the listing:

  • In this hands-on role, you will help define a comprehensive product roadmap for Neural filters.
  • Work with PMs on app teams to prioritize filters and models that will have the largest impact to targeted user bases and, ultimately, create the most business value.
  • Collaborate with PMM counterparts to build and execute GTM strategies, establish Neural Filters as an industry-leading ML tool, and drive awareness and adoption.
  • Develop an understanding of business impact and define and be accountable for OKRs and measures of success for the Neural Filters platform and ecosystem.
  • Develop a prioritization framework that considers user feedback and research along with business objectives. Use this framework to guide the backlogs and work done by partner teams.
  • Guide the efforts for new explorations, keeping abreast of the latest developments in pixel-generation AI.
  • Partner with product innovators to spec out POC implementations of new features.
  • Develop the strategy to expand Neural Filters to other surfaces like web, mobile, headless and more CC apps focusing on core business metrics of conversion, retention and monetization.
  • Guide the team’s efforts in bias testing frameworks and integration with authenticity and ethical AI initiatives. This technology can be incredibly powerful, but can also introduce tremendous ethical and legal implications. It’s imperative that this person is cognizant of the risks and consistently operates with high integrity.

If this sounds like your jam, or if you know of someone who’d be a great fit, please check out the listing & get in touch!

A thoughtful conversation about race

I know it’s not a subject that draws folks to this blog, but I wanted to share a really interesting talk I got to attend recently at Google. Broadcaster & former NFL player Emmanuel Acho hosts “Uncomfortable Conversations With A Black Man,” and I was glad that he shared his time and perspective with us. If you stick around to the end, I pop in with a question. The conversation is also available in podcast form.

This episode is with Emmanuel Acho, who discusses his book and YouTube Channel series of the same name: “Uncomfortable Conversations with a Black Man”, which offers conversations about race in an effort to drive open dialogue.

Emmanuel is a Fox Sports analyst and co-host of “Speak for Yourself”. After earning his undergraduate degree in sports management in 2012, Emmanuel was drafted by the Cleveland Browns. He was then traded to the Philadelphia Eagles in 2013, where he spent most of his career. While in the NFL, Emmanuel spent off seasons at the University of Texas to earn his master’s degree in Sports Psychology. Emmanuel left the football field and picked up the microphone to begin his broadcast career. He served as the youngest national football analyst and was named a 2019 Forbes 30 Under 30 Selection. Due to the success of his web series, with over 70 million views across social media platforms, he wrote the book “Uncomfortable Conversations with a Black Man”, and it became an instant New York Times Best Seller.

Character is Destiny: In Fond Appreciation of Chuck Geschke

“Imagine what you can create.
Create what you can imagine.”

So said the first Adobe video I ever saw, back in 1993 when I’d just started college & attended the Notre Dame Mad Macs user group. I saw it just that once, 20+ years ago, but the memory is vivid: an unfolding hand with an eye in the palm encircled by the words “Imagine what you can create. Create what you can imagine.” I was instantly hooked.

I got to mention this memory to Adobe founders Chuck Geschke & John Warnock at a dinner some 15 years later. Over that whole time—through my college, Web agency, and ultimately Adobe roles—the company they started had fully bent the arc of my career, as it continues to do today. I wish I’d had the chance to talk more with Chuck, who passed away on Friday. Outside of presenting to him & John at occasional board meetings, however, that’s all the time we had. Still, I’m glad I had the chance to share that one core memory.

I’ll always envy my wife Margot for getting to spend what she says was a terrific afternoon with him & various Adobe women leaders a few years back:

“Everyone sweeps the floor around here”

I can’t tell you how many times I’ve cited this story (source) from Adobe’s early history, as it’s such a beautiful distillation of the key cultural duality that Chuck & John instilled from the start:

The hands-on nature of the startup was communicated to everyone the company brought onboard. For years, Warnock and Geschke hand-delivered a bottle of champagne or cognac and a dozen roses to a new hire’s house. The employee arrived at work to find hammer, ruler, and screwdriver on a desk, which were to be used for hanging up shelves, pictures, and so on.

“From the start we wanted them to have the mentality that everyone sweeps the floor around here,” says Geschke, adding that while the hand tools may be gone, the ethic persists today.

“Charlie, you finally did it.”

I’m inspired reading all the little anecdotes & stories of inspiration that my colleagues are sharing, and I thought I’d cite one in particular—from Adobe’s 35th anniversary celebration—that made me smile. Take it away, Chuck:

I have one very special moment that meant a tremendous amount to me. Both my grandfather and my father were letterpress photoengravers — the people who made color plates to go into high-quality, high-volume publications such as Time magazine and all the other kinds of publishing that was done back then.

As we were trying to take that very mechanical chemical process and convert it into something digital, I would bring home samples of halftones and show them to my father. He’d say, “Hmm, let me look at that with my loupe,” because engravers always had loupes. He’d say, “You know, Charles, that doesn’t look very good.” Now, when my dad said, “Charles,” it was bad news.

About six months later, I brought him home something that I knew was spot on. All the rosettes were perfect. It was a gorgeous halftone. I showed it to my dad and he took his loupe out and he looked at it, and he smiled and said, “Charlie, you finally did it.” And, to me, that was probably one of the biggest high points of the early part of my career here.

And a final word, which I’ll share with my profound thanks:

“An engineer lives to have his idea embodied in a product that impacts the world,” Mr. Geschke said. “I consider myself the luckiest man on Earth.”

TinyElvis has (re)entered the building

…at least virtually.

Well gang, it’s official: I’m back at Adobe! Through the magic of technology, I found myself going through orientation yesterday in a desert motel room on Route 66 while my son/co-pilot/astromech droid attended 6th grade next to me. I was reminded of a dog walking on its hind legs: it doesn’t work well, but one is impressed that it works at all. 😌 Afterwards we powered through the last six hours of our epic drive down 66 & its successors from Illinois to CA.

The blog may remain somewhat quiet for a bit as I find my sea legs, catch up with old friends, meet new folks, and realize how much I have to learn. It should be a great* journey, however, and I’m grateful to have you along for the ride!

*Mostly 😉:

Adobe’s looking for a Neural Filters PM

My excitement about what’s been going on here at the Big Red A is what drew me to reach out & eventually return (scheduled for Monday!). If you are (or know) a seasoned product manager who loves machine learning, check out this kickass listing:

Neural Filters is a new ML/GAN based set of creative features that recently launched in Photoshop and will eventually expand to the entire suite of Creative Cloud apps, helping to establish the foundations of AI-powered creative tools. The applications of these ML-backed technologies range from imaginative portrait edits, like adjusting the age of a subject, to colorizing B/W images to restoring old photos. As the technology evolves so too will its applicability to other media like illustrations, video, 3D, and more.

The Principal PM will contribute to the strategy definition in terms of investments in new editing paradigms, training models, and broadening the applicability of Neural Filters in apps like Photoshop, Fresco, After Effects, and Aero!

For some context, here’s an overview of the feature, courtesy of NVIDIA, whose StyleGAN tech powers the feature:

And check out Neural Filters working on Conan O’Brien back at Adobe MAX:

https://twitter.com/scottbelsky/status/1318997330008395776

War robot + paintball gun + Internet = Art (?)

Welcome to late capitalism, MF’s.

From the site:

We’ve put a Spot in an art gallery, mounted it with a .68cal paintball gun, and given the internet the ability to control it. We’re livestreaming Spot as it frolics and destroys the gallery around it. Spot’s Rampage is piloted by YOU! Spot is remote-controlled over the internet, and we will select random viewers to take the wheel.

[Via Rajat Paharia]

Rhyolite star trails

A week ago I found myself shivering in the ghost town of Rhyolite, Nevada, alongside Adobe’s Russell Brown as we explored the possibilities of shooting 360º & traditional images at night. I’d totally struck out days earlier at the Trona Pinnacles as I tried to capture 360º star trails via either the Ricoh Theta Z or the Insta360 One X2, but this time Russell kindly showed me how to set up the Theta for interval shooting & additive exposure. I’m kinda pleased with the results:
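For the curious, the usual way star trails get built from interval shots (in post, at least) is a per-pixel "lighten" blend: take the max of every frame at each pixel, so a star's bright dot persists wherever it has been and traces a trail. The Theta's in-camera additive exposure is conceptually similar, though this little Python sketch is just an illustration of the stacking math, not the camera's actual processing:

```python
# Star-trail stacking as a per-pixel lighten (max) blend across interval frames.
# Frames are equal-length lists of grayscale values (0-255) for simplicity.

def lighten_stack(frames):
    """Combine frames by keeping the brightest value seen at each pixel."""
    stacked = list(frames[0])
    for frame in frames[1:]:
        stacked = [max(a, b) for a, b in zip(stacked, frame)]
    return stacked

# A single "star" drifting one pixel per frame across a dark sky:
frames = [
    [0, 200, 0, 0],
    [0, 0, 200, 0],
    [0, 0, 0, 200],
]
print(lighten_stack(frames))  # → [0, 200, 200, 200]: the trail
```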


Stellar times chilling (literally!) with Russell Preston Brown. 💫

Posted by John Nack on Thursday, February 4, 2021


Photoshop is hiring

I’m excited to see this great team growing, especially as they’ve expanded the Photoshop imaging franchise to mobile & Web platforms. Check out some of the open roles:

———-

Photoshop Developers

Photoshop Quality Engineers

Full list of all Adobe opportunities.

Track.AI helps fight blindness in children

On a day of new hope & new vision, I’m delighted to see Google, Huawei, and the medical community using ML to help spot visual disorders in kids around the world:

This machine learning framework performs classification and regression tasks for early identification of patterns, revealing different types of visual deficiencies in children. This AI-powered solution reduces diagnosis time from months to just days, and trials are available across 5 countries (China, UAE, Spain, Vietnam and Mexico).

Google talk tonight about deepfakes & combating disinfo

7:30pm Pacific time, streaming free via YouTube:

In this talk, we’ll discuss the current state of AI-generated imagery, including Deepfakes and GANs: how they work, their capabilities, and what the future may hold. We’ll try to separate the hype from reality, and examine the social consequences of these technologies with a special focus on the effect that the idea of Deepfakes has had on the public. We’ll consider the visual misinformation landscape more broadly, including so-called “shallowfakes” and “cheapfakes” like Photoshop. Finally, we’ll review the challenges and promise of the global research community that has emerged around detecting visual misinformation.

New tech creates flowing cinemagraphs from single images

Researchers at Google, Facebook, and the University of Washington have devised “a fully automatic method for converting a still image into a realistic animated looping video.”

We target scenes with continuous fluid motion, such as flowing water and billowing smoke. Our method relies on the observation that this type of natural motion can be convincingly reproduced from a static Eulerian motion description… We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.

The results are rather amazing.
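The forward/backward blending the authors describe can be sketched in miniature. This toy Python treats each frame as a single scalar "feature" rather than the paper's full Eulerian flow field, and all the names are my own; only the crossfade arithmetic is illustrated, showing how ramping the blend weight makes the last frame wrap cleanly back to the first:

```python
# Toy sketch of symmetric looping: advect a feature forward from the loop's
# start and backward from its end, then crossfade so frame n_frames wraps to
# frame 0 without a visible seam.

def seamless_loop(start_val, end_val, n_frames):
    """Blend a forward stream (from start_val) with a backward stream (toward end_val)."""
    frames = []
    for t in range(n_frames):
        alpha = t / n_frames                  # blend weight ramps over the loop
        forward = start_val + t               # feature advected forward in time
        backward = end_val - (n_frames - t)   # feature advected backward in time
        frames.append((1 - alpha) * forward + alpha * backward)
    return frames

# With consistent motion (end_val reachable from start_val), the blend is exact:
print(seamless_loop(0, 4, 4))  # → [0.0, 1.0, 2.0, 3.0], then loops back to 0
```

When the two streams disagree (the interesting real-world case), the ramped weights quietly distribute the mismatch across the whole loop instead of dumping it at the seam, which is the intuition behind the paper's two-direction approach.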