Category Archives: Miscellaneous

Design: New Lego T2 VW bus

Greetings from Leadville, Colorado, which on weekends is transformed into an open-air rolling showroom for Sprinter vans. (Aside: I generally feel like I’m doing fine financially, but then I think, “Who are these armies of people dropping 200g’s on tarted-up delivery vans?!”) They’re super cool, but we’re kicking it old-/small-school in our VW Westy. Thus you know I’m thrilled to see this little beauty rolling out of Lego factories soon:

Derek DelGaudio’s “In & Of Itself” is mesmerizing

Oh my God… what an amazing film! I’d heard my friends rave, and I don’t know what took me so long to watch it. I bounced between slack-jawed & openly weeping. Here’s just a taste:

Prior to watching, I’d really enjoyed Derek’s appearance on Fresh Air:

And totally tangentially (as it’s not at all related to Derek’s style of showmanship), there’s SNL’s hilarious So You’re Willing to Date a Magician:

AI: An amazing Adobe PM opportunity

When I saw what Adobe was doing to harness machine learning to deliver new creative superpowers, I knew I had to be part of it. If you’re a seasoned product manager & if this mission sounds up your alley, consider joining me via this new Principal PM role:

Neural Filters is a new ML/GAN-based set of creative features that recently launched in Photoshop and will eventually expand to the entire suite of Creative Cloud apps, helping to establish the foundations of AI-powered creative tools. The applications of these ML-backed technologies range from imaginative portrait edits, like adjusting the age of a subject, to colorizing B/W images and restoring old photos. As the technology evolves, so too will its applicability to other media like illustrations, video, 3D, and more.

The Principal PM will contribute to strategy definition around investments in new editing paradigms and training models, and will broaden the applicability of Neural Filters in apps like Photoshop, Fresco, After Effects, and Aero!

Tell me more, you say? But of course! The mission, per the listing:

  • In this hands-on role, you will help define a comprehensive product roadmap for Neural filters.
  • Work with PMs on app teams to prioritize filters and models that will have the largest impact on targeted user bases and, ultimately, create the most business value.
  • Collaborate with PMM counterparts to build and execute GTM strategies, establish Neural Filters as an industry-leading ML tool, and drive awareness and adoption.
  • Develop an understanding of business impact and define and be accountable for OKRs and measures of success for the Neural Filters platform and ecosystem.
  • Develop a prioritization framework that considers user feedback and research along with business objectives. Use this framework to guide the backlogs and work done by partner teams.
  • Guide efforts for new explorations, keeping abreast of the latest developments in pixel-generation AI.
  • Partner with product innovators to spec out POC implementations of new features.
  • Develop the strategy to expand Neural Filters to other surfaces like web, mobile, headless, and more CC apps, focusing on the core business metrics of conversion, retention, and monetization.
  • Guide the team’s efforts in bias testing frameworks and integration with authenticity and ethical AI initiatives. This technology can be incredibly powerful, but can also introduce tremendous ethical and legal implications. It’s imperative that this person is cognizant of the risks and consistently operates with high integrity.

If this sounds like your jam, or if you know of someone who’d be a great fit, please check out the listing & get in touch!

A thoughtful conversation about race

I know it’s not a subject that draws folks to this blog, but I wanted to share a really interesting talk I got to attend recently at Google. Broadcaster & former NFL player Emmanuel Acho hosts “Uncomfortable Conversations With A Black Man,” and I was glad that he shared his time and perspective with us. If you stick around to the end, I pop in with a question. The conversation is also available in podcast form.

This episode features Emmanuel Acho, who discusses his book and YouTube series of the same name, “Uncomfortable Conversations with a Black Man,” which offers conversations about race in an effort to drive open dialogue.

Emmanuel is a Fox Sports analyst and co-host of “Speak for Yourself”. After earning his undergraduate degree in sports management in 2012, Emmanuel was drafted by the Cleveland Browns. He was then traded to the Philadelphia Eagles in 2013, where he spent most of his career. While in the NFL, Emmanuel spent off-seasons at the University of Texas to earn his master’s degree in Sports Psychology. Emmanuel left the football field and picked up the microphone to begin his broadcast career. He served as the youngest national football analyst and was named a 2019 Forbes 30 Under 30 selection. Due to the success of his web series, with over 70 million views across social media platforms, he wrote the book “Uncomfortable Conversations with a Black Man,” and it became an instant New York Times Best Seller.

Character is Destiny: In Fond Appreciation of Chuck Geschke

“Imagine what you can create.
Create what you can imagine.”

So said the first Adobe video I ever saw, back in 1993 when I’d just started college & attended the Notre Dame Mad Macs user group. I saw it just that once, 20+ years ago, but the memory is vivid: an unfolding hand with an eye in the palm encircled by the words “Imagine what you can create. Create what you can imagine.” I was instantly hooked.

I got to mention this memory to Adobe founders Chuck Geschke & John Warnock at a dinner some 15 years later. Over that whole time—through my college, Web agency, and ultimately Adobe roles—the company they started had fully bent the arc of my career, as it continues to do today. I wish I’d had the chance to talk more with Chuck, who passed away on Friday. Outside of presenting to him & John at occasional board meetings, however, that’s all the time we had. Still, I’m glad I had the chance to share that one core memory.

I’ll always envy my wife Margot for getting to spend what she says was a terrific afternoon with him & various Adobe women leaders a few years back:

“Everyone sweeps the floor around here”

I can’t tell you how many times I’ve cited this story (source) from Adobe’s early history, as it’s such a beautiful distillation of the key cultural duality that Chuck & John instilled from the start:

The hands-on nature of the startup was communicated to everyone the company brought onboard. For years, Warnock and Geschke hand-delivered a bottle of champagne or cognac and a dozen roses to a new hire’s house. The employee arrived at work to find a hammer, ruler, and screwdriver on the desk, which were to be used for hanging up shelves, pictures, and so on.

“From the start we wanted them to have the mentality that everyone sweeps the floor around here,” says Geschke, adding that while the hand tools may be gone, the ethic persists today.

“Charlie, you finally did it.”

I’m inspired reading all the little anecdotes & stories of inspiration that my colleagues are sharing, and I thought I’d cite one in particular—from Adobe’s 35th anniversary celebration—that made me smile. Take it away, Chuck:

I have one very special moment that meant a tremendous amount to me. Both my grandfather and my father were letterpress photoengravers — the people who made color plates to go into high-quality, high-volume publications such as Time magazine and all the other kinds of publishing that was done back then.

As we were trying to take that very mechanical chemical process and convert it into something digital, I would bring home samples of halftones and show them to my father. He’d say, “Hmm, let me look at that with my loupe,” because engravers always had loupes. He’d say, “You know, Charles, that doesn’t look very good.” Now, when my dad said, “Charles,” it was bad news.

About six months later, I brought him home something that I knew was spot on. All the rosettes were perfect. It was a gorgeous halftone. I showed it to my dad and he took his loupe out and he looked at it, and he smiled and said, “Charlie, you finally did it.” And, to me, that was probably one of the biggest high points of the early part of my career here.

And a final word, which I’ll share with my profound thanks:

“An engineer lives to have his idea embodied in a product that impacts the world,” Mr. Geschke said. “I consider myself the luckiest man on Earth.”

TinyElvis has (re)entered the building

…at least virtually.

Well gang, it’s official: I’m back at Adobe! Through the magic of technology, I found myself going through orientation yesterday in a desert motel room on Route 66 while my son/co-pilot/astromech droid attended 6th grade next to me. I was reminded of a dog walking on its hind legs: it doesn’t work well, but one is impressed that it works at all. 😌 Afterwards we powered through the last six hours of our epic drive down 66 & its successors from Illinois to CA.

The blog may remain somewhat quiet for a bit as I find my sea legs, catch up with old friends, meet new folks, and realize how much I have to learn. It should be a great* journey, however, and I’m grateful to have you along for the ride!

*Mostly 😉:

Adobe’s looking for a Neural Filters PM

My excitement about what’s been going on here at the Big Red A is what drew me to reach out & eventually return (scheduled for Monday!). If you are (or know) a seasoned product manager who loves machine learning, check out this kickass listing:

Neural Filters is a new ML/GAN-based set of creative features that recently launched in Photoshop and will eventually expand to the entire suite of Creative Cloud apps, helping to establish the foundations of AI-powered creative tools. The applications of these ML-backed technologies range from imaginative portrait edits, like adjusting the age of a subject, to colorizing B/W images and restoring old photos. As the technology evolves, so too will its applicability to other media like illustrations, video, 3D, and more.

The Principal PM will contribute to strategy definition around investments in new editing paradigms and training models, and will broaden the applicability of Neural Filters in apps like Photoshop, Fresco, After Effects, and Aero!
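For a rough sense of how GAN-powered edits like “adjusting the age of a subject” typically work under the hood, the common recipe is to project a portrait into a generator’s latent space, nudge that latent code along a learned semantic direction, and re-synthesize. Here’s a toy Python sketch of that latent-editing step (a generic illustration only, not Adobe’s implementation); the generator stub, latent size, and “age” direction are all placeholders:

```python
import numpy as np

LATENT_DIM = 512  # StyleGAN-style latent size (assumed for illustration)

def toy_generator(w):
    """Stand-in for a pretrained GAN generator: latent vector -> RGB image."""
    rng = np.random.default_rng(abs(int(w.sum() * 1e6)) % (2**32))
    return rng.random((256, 256, 3))

def edit_latent(w, direction, strength):
    """Move a latent code along a semantic direction (e.g. an 'age' axis)."""
    direction = direction / np.linalg.norm(direction)
    return w + strength * direction

# In practice, w comes from projecting the user's photo into latent space,
# and the direction is learned offline; both are random stand-ins here.
w = np.random.default_rng(0).standard_normal(LATENT_DIM)
age_direction = np.random.default_rng(1).standard_normal(LATENT_DIM)

older = toy_generator(edit_latent(w, age_direction, +3.0))
younger = toy_generator(edit_latent(w, age_direction, -3.0))
```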

For some context, here’s an overview of the feature, courtesy of NVIDIA, whose StyleGAN tech powers the feature:

And check out Neural Filters working on Conan O’Brien back at Adobe MAX:

https://twitter.com/scottbelsky/status/1318997330008395776

War robot + paintball gun + Internet = Art (?)

Welcome to late capitalism, MF’s.

From the site:

We’ve put a Spot in an art gallery, mounted it with a .68cal paintball gun, and given the internet the ability to control it. We’re livestreaming Spot as it frolics and destroys the gallery around it. Spot’s Rampage is piloted by YOU! Spot is remote-controlled over the internet, and we will select random viewers to take the wheel.

[Via Rajat Paharia]

Rhyolite star trails

A week ago I found myself shivering in the ghost town of Rhyolite, Nevada, alongside Adobe’s Russell Brown as we explored the possibilities of shooting 360° & traditional images at night. I’d totally struck out days earlier at the Trona Pinnacles as I tried to capture 360° star trails via either the Ricoh Theta Z1 or the Insta360 One X2, but this time Russell kindly showed me how to set up the Theta for interval shooting & additive exposure. I’m kinda pleased with the results:

 

Stellar times chilling (literally!) with Russell Preston Brown. 💫

Posted by John Nack on Thursday, February 4, 2021


Photoshop is hiring

I’m excited to see this great team growing, especially as they’ve expanded the Photoshop imaging franchise to mobile & Web platforms. Check out some of the open roles:

———-

Photoshop Developers

Photoshop Quality Engineers

Full list of all Adobe opportunities.

Track.AI helps fight blindness in children

On a day of new hope & new vision, I’m delighted to see Google, Huawei, and the medical community using ML to help spot visual disorders in kids around the world:

This machine learning framework performs classification and regression tasks for early identification of patterns, revealing different types of visual deficiencies in children. This AI-powered solution reduces diagnosis time from months to just days, and trials are available across 5 countries (China, UAE, Spain, Vietnam and Mexico).

Google talk tonight about deepfakes & combating disinfo

7:30pm Pacific time, streaming free via YouTube:

In this talk, we’ll discuss the current state of AI-generated imagery, including Deepfakes and GANs: how they work, their capabilities, and what the future may hold. We’ll try to separate the hype from reality, and examine the social consequences of these technologies with a special focus on the effect that the idea of Deepfakes has had on the public. We’ll consider the visual misinformation landscape more broadly, including so-called “shallowfakes” and “cheapfakes” like Photoshop. Finally, we’ll review the challenges and promise of the global research community that has emerged around detecting visual misinformation.

New tech creates flowing cinemagraphs from single images

Researchers at Google, Facebook, and the University of Washington have devised “a fully automatic method for converting a still image into a realistic animated looping video.”

We target scenes with continuous fluid motion, such as flowing water and billowing smoke. Our method relies on the observation that this type of natural motion can be convincingly reproduced from a static Eulerian motion description… We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.

The results are rather amazing.
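To make that forward/backward blending idea concrete, here’s a minimal sketch of the looping trick, assuming a constant per-pixel Eulerian motion field; it’s my own illustration rather than the researchers’ code, and the toy image, flow field, and loop length are placeholders:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(frame, flow, t):
    """Warp a grayscale frame by t steps along a constant Eulerian flow field.

    frame: (H, W) array; flow: (2, H, W) per-pixel (dy, dx) motion per step.
    """
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.array([yy - t * flow[0], xx - t * flow[1]])
    return map_coordinates(frame, coords, order=1, mode="reflect")

def make_loop(frame, flow, n_frames=60):
    """Crossfade forward- and backward-warped copies so the last frame wraps to the first."""
    out = []
    for i in range(n_frames):
        alpha = i / n_frames                    # crossfade weight
        fwd = warp(frame, flow, i)              # flowed forward from the loop start
        bwd = warp(frame, flow, i - n_frames)   # flowed backward from the loop end
        out.append((1 - alpha) * fwd + alpha * bwd)
    return out

# Toy example: a horizontal gradient drifting half a pixel per frame.
img = np.linspace(0, 1, 128)[None, :] * np.ones((128, 1))
motion = np.full((2, 128, 128), 0.5)
loop_frames = make_loop(img, motion)
```

The crossfade is what hides the seam: frame 0 is the untouched image, and the final frame is almost entirely the image warped one step “backward,” so wrapping around to frame 0 looks continuous.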

Check out “Light Fields, Light Stages, and the Future of Virtual Production”

“Holy shit, you’re actually Paul Debevec!”

That’s what I said—or at least what I thought—upon seeing Paul next to me in line for coffee at Google. I’d known his name & work for decades, especially via my time PM’ing features related to HDR imaging—a field in which Paul is a pioneer.

Anyway, Paul & his team have been at Google for the last couple of years, and he’ll be giving a keynote talk at VIEW 2020 on Oct 18th. “You can now register for free access to the VIEW Conference Online Edition,” he notes, “to livestream its excellent slate of animation and visual effects presentations.”

In this talk I’ll describe the latest work we’ve done at Google and the USC Institute for Creative Technologies to bridge the real and virtual worlds through photography, lighting, and machine learning.  I’ll begin by describing our new DeepView solution for Light Field Video: Immersive Motion Pictures that you can move around in after they have been recorded.  Our latest light field video techniques record six-degrees-of-freedom virtual reality where subjects can come close enough to be within arm’s reach.  I’ll also present how Google’s new Light Stage system paired with Machine Learning techniques is enabling new techniques for lighting estimation from faces for AR and interactive portrait relighting on mobile phone hardware.  I will finally talk about how both of these techniques may enable the next advances in virtual production filmmaking, infusing both light fields and relighting into the real-time image-based lighting techniques now revolutionizing how movies and television are made.

NASA brings the in sound from way out

Nothing can stop us now
We are all playing stars…

A new project using sonification turns astronomical images from NASA’s Chandra X-Ray Observatory and other telescopes into sound. This allows users to “listen” to the center of the Milky Way as observed in X-ray, optical, and infrared light. As the cursor moves across the image, sounds represent the position and brightness of the sources.
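As a rough illustration of that position-and-brightness mapping (my own sketch, not NASA’s pipeline), here’s a small Python snippet that scans a grayscale image column by column, turning each pixel’s row into a pitch and its brightness into loudness; the sample rate, pitch range, and toy “sky” image are all arbitrary choices:

```python
import numpy as np

SAMPLE_RATE = 44_100
COLUMN_SECONDS = 0.05          # how long the "cursor" dwells on each column
F_LOW, F_HIGH = 220.0, 1760.0  # pitch range (assumed): top of image = highest pitch

def sonify(image):
    """Turn a 2-D grayscale array (values 0..1) into a mono waveform."""
    h, w = image.shape
    t = np.arange(int(SAMPLE_RATE * COLUMN_SECONDS)) / SAMPLE_RATE
    freqs = np.geomspace(F_HIGH, F_LOW, h)   # one frequency per image row
    chunks = []
    for col in range(w):
        weights = image[:, col]              # brightness of each source in this column
        tone = (weights[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        chunks.append(tone / max(weights.sum(), 1e-6))
    return np.concatenate(chunks)

# Toy "sky": a dark field with three bright point sources.
sky = np.zeros((64, 64))
sky[10, 5] = sky[40, 30] = sky[20, 55] = 1.0
waveform = sonify(sky)  # write out with e.g. scipy.io.wavfile.write("sky.wav", SAMPLE_RATE, waveform)
```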

Google Meet adds background blur—with a twist

“But what about the Web??”

I’d endlessly ask this of my old teammates, and I kept pushing to bring Google’s ML infrastructure (TensorFlow Lite, MediaPipe, etc.) and ML models (e.g. background segmentation) to everyone via browsers. Happily that work continues to bear fruit, and now the tech has come to the Web in Google Meet. This is something I haven’t seen from competitors (which rely on native apps for segmentation).

Background blur works directly within your browser and does not require an extension or any additional software. At launch, it will work on the Chrome browser on Windows and Mac desktop devices. Support for ChromeOS and the Meet mobile apps will be coming soon; we’ll announce on the G Suite Updates blog when it’s available on those devices.
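The underlying recipe is simple to sketch: run a person-segmentation model on each frame, then composite the sharp foreground over a blurred copy of the frame. Here’s a rough Python/OpenCV illustration of that compositing step; it’s not Meet’s actual code (Meet runs its segmentation in the browser via TensorFlow Lite and MediaPipe), and the `segment_person` call is a stand-in for whatever segmentation model you have on hand:

```python
import cv2
import numpy as np

def blur_background(frame_bgr, person_mask, blur_ksize=31):
    """Composite the sharp person over a blurred version of the frame.

    frame_bgr: (H, W, 3) uint8 frame; person_mask: (H, W) floats in [0, 1],
    1.0 where the segmentation model believes a person is.
    """
    blurred = cv2.GaussianBlur(frame_bgr, (blur_ksize, blur_ksize), 0)
    # Feather the mask edge so hair and shoulders don't get a hard cutout.
    mask = cv2.GaussianBlur(person_mask.astype(np.float32), (15, 15), 0)[..., None]
    out = mask * frame_bgr.astype(np.float32) + (1 - mask) * blurred.astype(np.float32)
    return out.astype(np.uint8)

# Hypothetical usage with a webcam frame and some segmentation model:
# mask = segment_person(frame)             # e.g. a selfie-segmentation model's output
# composited = blur_background(frame, mask)
```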

This Photoshop panel is a joke. (Seriously!)

Years ago we laughed about creating an in-app Photoshop assistant called Brushy the Talking Airbrush who would, in a slack-jawed Gomer Pyle voice, intone things like “Hey, it look like yer tryin’ to retouch a photo!”

Along somewhat similarly cheeky lines comes Infinite Jokes:

“What if Photoshop could verbally judge your decisions while you were working, or tell you the best photo-related puns and jokes? Infinite Jokes is just that!” reads the description. “The free, lighthearted panel is the perfect retouching buddy that will provide humor for hours on end.”

Yuk it up:

Google adds new tools for kids learning from home

Just in time for our boys as they level up their math skills:

When they’re stuck on a homework problem, students and parents can use Socratic and soon can use Google Lens to take a photo of a problem or equation they need help with. Socratic and Lens provide quick access to helpful results, such as step-by-step guides to solve the problem and detailed explainers to help you better understand key concepts.

Meanwhile, 3D in Search now covers a bunch of STEM-related topics:

[S]tudents can see 3D content on Search for nearly 100 STEM concepts across biology, chemistry and more using compatible Android and iOS devices. If students search for “Quantum mechanical model,” they can view a 3D atom up close and use augmented reality (AR) to bring it into their space. Check out how to use 3D for STEM concepts. 

Here’s a list of topics to try:

Glittering cutouts get their groove on

Longtime VFX stud Fernando Livschitz (see previous) has turned to 2D, making spray-painted cutouts derived from a real dancer in order to create this delightful little animation. It’s only 30s long, but the subsequent making-of minute is just as cool:

The stop-motion dancers remind me of the brilliant MacPaint animations (e.g. of Childish Gambino) from Pinot Ichwandardi, who happened to say this about lo-fi tech:

[Via]

Two guys recreated “The Fast And The Furious” for less than $100 🔥

Looks… amazing.

As The AV Club helpfully notes,

The Fast And The Furious (On A Budget) isn’t quite as glossy a Hollywood production as the movie it’s based on. There’s background music that still has an audio watermark repeating at regular intervals, the cars are all little plastic models that get blown up with firecrackers and canned VFX explosions, and the cast is limited to the duo of Yoshimura and Fairy alone, the two of them playing more than a dozen characters between them.

A deep dive into Lego UIs (seriously!)

Oh man… if some lab were tasked with conjuring peak delicious nerdery right up my & my son’s alleys, they’d stop here & declare victory.

Piloting an ocean exploration ship or Martian research shuttle is serious business. Let’s hope the control panel is up to scratch. Two studs wide and angled at 45°, the ubiquitous “2×2 decorated slope” is a LEGO minifigure’s interface to the world.

These iconic, low-resolution designs are the perfect tool to learn the basics of physical interface design. Armed with 52 different bricks, let’s see what they can teach us about the design, layout and organisation of complex interfaces.

Welcome to the world of LEGO UX design.

Enjoy! [Via Ben Jones, whom I deeply blame for taking me down this rabbit hole]