A few years ago I found myself wasting my life in the bowels of Google’s enterprise apps group. (How & why that happened is a long, salty story—but like everything good & bad, the chapter passed.) In the course of that we found ourselves talking with IT folks at Ocado, a company that’s transformed itself from an online grocer into a provider of really interesting robotics. Check out this rather eye-popping demonstration of how their bots fulfill orders at crazy speed:
Last summer my former teammates got all kinds of clever in working around Covid restrictions—and the constraints of physics and 3D capture—to digitize top Olympic athletes performing their signature moves. I wish they’d share the behind-the-scenes footage, as it’s legit fascinating. (Also great: seeing Donald Glover, covered in mocap ping pong balls for the making of Pixel Childish Gambino AR content, sneaking up behind my colleague like some weird-ass phantom. 😝)
Anyway, after so much delay and uncertainty, I’m happy to see those efforts now paying off in the form of 3D/AR search results. Check it out:
It’s been cool to watch my Adobe & Google colleagues (who sometimes hop back & forth over that fence) collaborating on the imaging-savvy Halide language, and now one of the contributors is getting recognized by ACM SIGGRAPH:
ACM SIGGRAPH is pleased to present the 2021 Significant New Researcher Award to Jonathan Ragan-Kelley for his outstanding contributions to systems and compilers in rendering and computational photography.
Jonathan is best known for his work on the language and compiler Halide, which has become the industry standard for computational photography and image processing. Performance has always been at the heart of computer graphics. At a time when we can’t rely on Moore’s law alone, efficiently leveraging modern hardware such as CPUs and GPUs is extremely challenging because of different levels of parallelism and differing memory hierarchies. By cleanly separating an algorithm from how it is optimized, Halide provides a new set of abstractions that make it much easier to achieve high performance. Code written in Halide tends to be much more concise than C code (2x-10x shorter) and runs much more efficiently (2x-20x faster) across a range of different processors. The compiler is open source and has had significant impact in industry, including powering much of the Google Android Camera app and playing a critical role in making the Adobe Photoshop iPad app possible.
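Halide’s core idea—defining the algorithm once and varying only how it’s executed—can be illustrated even outside Halide itself. Below is a conceptual sketch in plain NumPy (this is not Halide syntax, and the function names are mine): the same separable 3×3 box blur computed two ways, with the “schedule” (whole-image vs. tiled loops) changing while the algorithm and results stay identical.

```python
import numpy as np

# Conceptual sketch of Halide's algorithm/schedule separation, in plain
# NumPy (NOT Halide syntax). The *algorithm*—a separable 3-tap box blur—
# is defined once; the *schedule* (loop structure, tiling) varies.

def blur_algorithm(img):
    # Algorithm: horizontal 3-tap average, then vertical 3-tap average.
    h = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    return (h[:-2, :] + h[1:-1, :] + h[2:, :]) / 3.0

def blur_tiled(img, tile=32):
    # Same algorithm, "scheduled" differently: computed tile by tile
    # (each tile reads a 2-pixel halo) for better cache locality.
    # The output is bit-identical to the whole-image version.
    H, W = img.shape
    out = np.empty((H - 2, W - 2))
    for y in range(0, H - 2, tile):
        for x in range(0, W - 2, tile):
            y1 = min(y + tile, H - 2)
            x1 = min(x + tile, W - 2)
            out[y:y1, x:x1] = blur_algorithm(img[y:y1 + 2, x:x1 + 2])
    return out
```

In real Halide that scheduling choice is a one-line annotation rather than a rewrite—which is exactly why Halide code stays so much shorter than hand-optimized C while matching or beating its performance.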
Adobe has just announced that Premiere Pro runs natively on Apple’s M1 chip architecture, joining a stable of the company’s other apps in making the switch.
Premiere Pro has actually been available on M1-enabled Macs since December 2020, but ever since then, it has only been offered as a beta. Now, though, the full version has been launched to the public. […]
It is not the first app Adobe has migrated to Apple’s new platform, though. Lightroom made the leap in December 2020, Photoshop followed in March 2021, then Lightroom Classic, Illustrator, and InDesign arrived in June.
My wife’s teammates have been busy cooking up what looks like some great functionality, going beyond the basics to do things like differentiate among speakers & generate stylized on-screen text, across thirteen languages.
Right at the start of my career, I had the chance to draw some simple Peanuts animations for MetLife banner ads. The cool thing is that back then, Charles Schulz himself had to approve each use of his characters—and I’m happy to say he approved mine. 😌 (For the record, as I recall it featured Linus’s hair flying up as he was surprised.)
In any event, here’s a fun tutorial commissioned by Apple:
As Kottke notes, “They’ve even included a PDF of drawing references to make it easier.” Fortunately you don’t have to do the whole thing in 35 seconds, a la Schulz himself:
Many years ago I had the chance to drop by Jay Maisel’s iconic converted bank building in the Bowery. (This must’ve been before phone cameras got good, as otherwise I’d have shot the bejesus out of the place.) It was everything you’d hope it to be.
As luck would have it, my father-in-law (having no idea about the visit) dialed up the documentary “Jay Myself” last night, and the whole family (down to my 12yo budding photographer son) loved it. I think you would, too!
It’s a little OT for this blog, but I really enjoyed this article as a discussion of design—of using art to solve problems.
I told Jerry, “It sounds more like a sound design issue than a music assignment. So, how about this? We treat the Seinfeld theme song as if your voice telling jokes is the melody, the jokes you tell are the lyrics and my job is to accompany you in a musical way that does not interfere with the audio of you telling jokes.”
Warren Littlefield had the unfortunate job of telling Larry, “I don’t like the music. It’s distracting, it’s weird, it’s annoying!” And as soon as he said the word annoying, Larry David just lit up. Like, “Really? Annoying? Cool!” Because if you know Larry, if you watch Curb Your Enthusiasm, that’s what he loves most, to annoy you! That’s his brand of comedy.
Okay, this one is a little “inside baseball,” but I’m glad to see more progress using GANs to transfer visual styles among images. Check it out:
The current state-of-the-art in neural style transfer uses a technique called Adaptive Instance Normalization (AdaIN), which transfers the statistical properties of style features to a content image, and can transfer an infinite number of styles in real time. However, AdaIN is a global operation, and thus local geometric structures in the style image are often ignored during the transfer. We propose Adaptive Convolutions, a generic extension of AdaIN, which allows for the simultaneous transfer of both statistical and structural styles in real time.
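For the curious, the AdaIN operation the abstract describes is remarkably simple: re-normalize the content features so that each channel’s mean and standard deviation match those of the style features. Here’s a minimal NumPy sketch (the `adain` function name and shapes are mine; in a real network this runs on encoder feature maps, not raw pixels):

```python
import numpy as np

# Minimal sketch of Adaptive Instance Normalization (AdaIN): match the
# per-channel mean/std of content features to those of style features.
# Arrays are shaped (channels, height, width).

def adain(content, style, eps=1e-5):
    # Per-channel statistics over the spatial dimensions.
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Whiten the content statistics, then re-color with the style's.
    return s_std * (content - c_mean) / c_std + s_mean
```

Because these statistics are computed globally per channel, AdaIN can’t preserve *where* things are in the style image—which is exactly the gap the paper’s learned, spatially varying convolutions aim to close.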
Greetings from Leadville, Colorado, which on weekends is transformed into an open-air rolling showroom for Sprinter vans. (Aside: I generally feel like I’m doing fine financially, but then I think, “Who are these armies of people dropping 200g’s on tarted-up delivery vans?!”) They’re super cool, but we’re kicking it old-/small-school in our VW Westy. Thus you know I’m thrilled to see this little beauty rolling out of Lego factories soon:
One of my favorite flexes while working on Google Photos was to say, “Hey, you remember the liquid-metal guy in Terminator 2? You know who wrote that? This guy,” while pointing to my ex-Adobe teammate John Schlag. I’d continue to go down the list—e.g. “You know who won an Oscar for rigging at DreamWorks? This guy [points at Alex Powell].” I did this largely to illustrate how insane it was to have such a murderers’ row of talent working on whatever small-bore project Photos had in mind. (Sorry, it was a very creatively disappointing time.)
Anyway, John S., along with Michael Natkin (who went on to spend a decade+ making After Effects rock), contributed to this great oral history of the making of Terminator 2. It’s loaded with insights & behind-the-scenes media I’d never seen before. Enjoy!
I returned to Adobe specifically to help cutting-edge creators like this bring their magic to as many people as possible, and I’m really excited to see what we can do together. (Suggestions are welcome. 😌🔥)
Back in the 90’s I pleaded with Macromedia to enable a “Flash Interchange Format” that would allow me to combine multiple apps in making great animated content. They paid this no attention, and that’s part of why I joined Adobe & started working on things like integrating After Effects with LiveMotion—a code path that helps connect AE with other apps even two+ decades (!) later.
Point is, I’ve always loved aligning tools in ways that help creators combine apps & reach an audience. While at Google I worked with Adobe folks on 3D data exchange, and now I’m happy to see that Adobe is joining the new Open 3D Foundation, meant to “accelerate developer collaboration on 3D engine development for AAA-games and high-fidelity simulations.”
Amazon… is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms.
As for Adobe’s role,
“Adobe is proud to champion the Open 3D Foundation as a founding member. Open source technologies are critical to advance sustainability across 3D industries and beyond. We believe collaborative and agnostic toolsets are the key to not only more healthy and innovative ecosystems but also to furthering the democratization of 3D on a global scale.” — Sebastien Deguy, VP of 3D & Immersive at Adobe.
What I didn’t know until now is that he collaborated with the folks at Bot & Dolly—who created the brilliant work below before getting acquired by Google and, as best I can tell, having their talent completely wasted there 😭.
“Viewfinder” is a charming animation about exploring the outdoors from the Seoul-based studio VCRWORKS. The second episode in the recently launched Rhythmens series, the peaceful short follows a central character on a hike in a springtime forest and frames their whimsically rendered finds through the lens of a camera.
OMG—I’m away from our brick piles & thus can’t yet try this myself, but I can’t wait to take it for a spin. As PetaPixel explains:
If you have a giant pile of LEGO bricks and are in need of ideas on what to build, Brickit is an amazing app that was made just for you. It uses a powerful AI camera to rapidly scan your LEGO bricks and then suggest fun little projects you can build with what you have.
Here’s a short 30-second demo showing how the app works — prepare to have your mind blown:
When I saw what Adobe was doing to harness machine learning to deliver new creative superpowers, I knew I had to be part of it. If you’re a seasoned product manager & this mission sounds up your alley, consider joining me via this new Principal PM role:
Neural Filters is a new ML/GAN-based set of creative features that recently launched in Photoshop and will eventually expand to the entire suite of Creative Cloud apps, helping to establish the foundations of AI-powered creative tools. The applications of these ML-backed technologies range from imaginative portrait edits, like adjusting the age of a subject, to colorizing B/W images to restoring old photos. As the technology evolves, so too will its applicability to other media like illustrations, video, 3D, and more.
The Principal PM will contribute to strategy definition around investments in new editing paradigms and training models, and will broaden the applicability of Neural Filters in apps like Photoshop, Fresco, After Effects, and Aero!
Tell me more, you say? But of course! The mission, per the listing:
In this hands-on role, you will help define a comprehensive product roadmap for Neural Filters.
Work with PMs on app teams to prioritize filters and models that will have the largest impact to targeted user bases and, ultimately, create the most business value.
Collaborate with PMM counterparts to build and execute GTM strategies, establish Neural Filters as an industry-leading ML tool, and drive awareness and adoption.
Develop an understanding of business impact and define and be accountable for OKRs and measures of success for the Neural Filters platform and ecosystem.
Develop a prioritization framework that considers user feedback and research along with business objectives. Use this framework to guide the backlogs and work done by partner teams.
Guide the efforts for new explorations, keeping abreast of the latest developments in pixel-generation AI.
Partner with product innovators to spec out POC implementations of new features.
Develop the strategy to expand Neural Filters to other surfaces like web, mobile, headless, and more CC apps, focusing on the core business metrics of conversion, retention, and monetization.
Guide the team’s efforts in bias testing frameworks and integration with authenticity and ethical AI initiatives. This technology can be incredibly powerful, but can also introduce tremendous ethical and legal implications. It’s imperative that this person is cognizant of the risks and consistently operates with high integrity.