Monthly Archives: January 2024

My panel discussion at the AI User Conference

Thanks to Jackson Beaman & crew for putting together a great event yesterday in SF. I joined him, KD Deshpande (founder of Simplified), and Sofiia Shvets (founder of Let’s Enhance & Claid.ai) for a 20-minute panel discussion (which starts at 3:32:03 or so, in case the embedded version doesn’t jump you to the proper spot) about creating production-ready imagery using AI. Enjoy, and please let me know if you have any comments or questions!

The Founding Fathers talk AI art

Well, not exactly—but T-Paine’s words about how we value things still resonate today:

“What we obtain too cheap, we esteem too lightly; it is dearness only that gives every thing its value.”

We humans are fairly good at pricing effort (notably in dollars paid per hour worked), but we struggle much more with pricing value. Cue the possibly apocryphal story of Picasso asking $10,000 for a drawing he sketched in a matter of seconds, the ability to create it having taken him a lifetime to develop.

A couple of related thoughts:

  • My artist friend is a former Olympic athlete who talks about how people bond through shared struggle, particularly in athletics. For him, someone using AI-powered tools is similar to a guy showing up at the gym with a forklift, using it to move a bunch of weight, and then wanting to bond afterwards with the actual weightlifters.
  • I see ostensible thought leaders crowing about the importance of “taste,” but I wonder how they think that taste is or will be developed in the absence of effort.
  • As was said of—and by?—Steve Jobs, “The journey is the reward.”

[Via Louis DeScioli]

After Effects + Midjourney + Runway = Harry Potter magic

It’s bonkers what one person can now create—bonkers!

I edited out ziplines to make a Harry Potter flying video, added something special at the end
by u/moviemaker887 in r/AfterEffects

I took a video of a guy zip lining in full Harry Potter costume and edited out the zip lines to make it look like he was flying. I mainly used Content Aware Fill and the free Redgiant/Maxon script 3D Plane Stamp to achieve this.

For the surprise bit at the end, I used Midjourney and Runway’s Motion Brush to generate and animate the clothing.

Trapcode Particular was used for the rain in the final shot.

I also did a full sky replacement in each shot and used assets from ProductionCrate for the lighting and magic wand blast.

[Via Victoria Nece]

Krea upgrades its realtime generation

I had the pleasure of hanging out with these crazy-fast-moving guys last week, and I remain amazed at their shipping velocity. Check out the latest updates to their realtime canvas:

And see how trailblazing artist Martin Nebelong is putting it to use:

Google introduces Lumiere for video generation & editing

Man, not a day goes by without the arrival of some new & mind-blowing magic—not a day!

We introduce Lumiere — a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion — a pivotal challenge in video synthesis. To this end, we introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution — an approach that inherently makes global temporal consistency difficult to achieve. […]

We demonstrate state-of-the-art text-to-video generation results, and show that our design easily facilitates a wide range of content creation tasks and video editing applications, including image-to-video, video inpainting, and stylized generation.
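
For intuition, here’s a minimal sketch in Python/PyTorch of the core idea (the module and its names are my own invention for illustration, not Lumiere’s actual code): a single strided 3D convolution compresses the clip along time as well as space, so the network reasons about the whole duration in one pass rather than stitching together sparse keyframes.

    # Toy sketch of "space-time downsampling" (illustrative only; not Lumiere's code).
    # A Space-Time U-Net downsamples the clip in time as well as space, so one
    # forward pass covers the entire temporal duration of the video.
    import torch
    import torch.nn as nn

    class SpaceTimeDownBlock(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            # stride=2 along (time, height, width) compresses the clip jointly
            self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
            self.act = nn.SiLU()

        def forward(self, x):  # x: (batch, channels, time, height, width)
            return self.act(self.conv(x))

    clip = torch.randn(1, 8, 16, 64, 64)          # a 16-frame, 64x64 latent clip
    print(SpaceTimeDownBlock(8, 16)(clip).shape)  # torch.Size([1, 16, 8, 32, 32])

By contrast, a keyframe-plus-temporal-super-resolution cascade never sees the full clip at once, which is why the paper argues that global temporal consistency is harder to achieve with that approach.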

Content credentials are coming to DALL·E

From its first launch, Adobe Firefly has included support for content credentials, providing more transparency around the origin of generated images, and I’m very pleased to see OpenAI moving in the same direction:

Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials—an approach that encodes details about the content’s provenance using cryptography—for images generated by DALL·E 3. 

We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL·E. Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.
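
To make “encodes details about the content’s provenance using cryptography” concrete, here’s a toy Python sketch of the underlying idea (the real C2PA format uses certificate-based signatures and manifests embedded in the file itself; the key and field names below are invented for illustration): hash the image bytes into a signed claim, so that any later edit to the image, or to the claim, breaks verification.

    # Toy provenance manifest (illustrative only; NOT the actual C2PA format,
    # which uses certificate-based signatures and embedded manifests).
    import hashlib, hmac, json

    SIGNING_KEY = b"demo-key-not-a-real-credential"    # hypothetical key

    def make_manifest(image_bytes: bytes, generator: str) -> dict:
        # Bind a provenance claim to the exact image bytes via a hash...
        claim = {"generator": generator,
                 "image_sha256": hashlib.sha256(image_bytes).hexdigest()}
        payload = json.dumps(claim, sort_keys=True).encode()
        # ...then sign the claim so tampering with it is also detectable.
        claim["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        return claim

    def verify(image_bytes: bytes, manifest: dict) -> bool:
        claim = {k: v for k, v in manifest.items() if k != "signature"}
        if claim["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
            return False                               # image was altered
        payload = json.dumps(claim, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        return hmac.compare_digest(manifest["signature"], expected)

    img = b"\x89PNG...pretend image bytes"
    m = make_manifest(img, "DALL-E 3")
    print(verify(img, m), verify(img + b"!", m))       # True False

Note that metadata like this can be stripped from a file, which is presumably part of why OpenAI is also experimenting with a classifier that detects DALL·E images from the pixels alone.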

Adobe Announces Inaugural Film & TV Fund, Committing $6 Million to Support Underrepresented Creators

In her 12+ years in Adobe’s video group, my wife Margot worked to bring more women into the world of editing & filmmaking, participating in efforts supporting all kinds of filmmakers across a diverse range of ages, genders, types of subject matter, experience levels, and backgrounds. I’m delighted to see such efforts continuing & growing:

Adobe and the Adobe Foundation will partner with a cohort of global organizations that are committed to empowering underrepresented communities, including Easterseals, Gold House, Latinx House, Sundance Institute and Yuvaa, funding fellowships and apprenticeships that offer direct, hands-on industry access. The grants will also enable organizations to directly support filmmakers in their communities with funding for short and feature films.

The first fellowship is a collaboration with the NAACP, designed to increase representation in post-production. The NAACP Editing Fellowship is a 14-week program focused on education and training, career growth, and workplace experience, and will include access to Adobe Creative Cloud to further set up emerging creators with the necessary tools. Applications open on Jan. 18, with four fellows selected to participate in the program starting in May.

Premiere Pro ups its audio game

“If you want to make a movie look good, make it sound good.” That’s the spirit in which Adobe is introducing a wide range of enhancements to audio handling in Premiere Pro:

According to the team, the audio workflow changes now available in the beta include:

  • Interactive Fade Handles: Now you can simply click and drag from the edge of a clip to create a variety of custom audio fades in the timeline or drag across two clips to create a crossfade. These visual fades provide more precision and control over audio transitions while making it easy to see where they are applied across your sequence.
  • AI-powered Audio Category Tagging: When you drag clips into the sequence, they’ll automatically be identified and labeled with new icons for dialogue, music, sound effects, or ambience. A single click on the icon provides access to the most relevant tools for that audio type in the Essential Sound panel — such as Loudness Matching or Auto Ducking.
  • Redesigned FX Clip Badges: An updated badge makes it easier for you to see which clips have effects added to them. New effects can be added by right-clicking the badge, and a single click opens the Effect Control panel for even more adjustment without changing the workspace or searching for the panel.
  • Modern, Intelligent Waveforms and Clips: Waveforms now dynamically resize when you change the track height and improved clip colors make it easier for you to see and work with audio on the timeline.

Tutorial: Firefly + Character Animator

Helping discover Dave Werner & bring him into Adobe remains one of my favorite accomplishments at the company. He continues to do great work in designing characters as well as the tools that can bring them to life. Watch how he combines Firefly with Adobe Character Animator to create & animate a stylish tiger:

Adobe Firefly’s text to image feature lets you generate imaginative characters and assets with AI. But what if you want to turn them into animated characters with performance capture and control over elements like arm movements, pupils, talking, and more? In this tutorial, we’ll walk through the process of taking a static Adobe Firefly character and turning it into an animated puppet using Adobe Photoshop or Illustrator plus Character Animator.

“How Adobe is managing the AI copyright dilemma, with general counsel Dana Rao”

Honestly, if you asked, “Hey, wanna spend an hour+ listening to current and former intellectual property attorneys talking about EU antitrust regulation, ethical data sourcing, and digital provenance,” I might say, “Ehmm, I’m good!”—but Nilay Patel & Dana Rao make it work.

I found the conversation surprisingly engrossing & fast-moving, and I was really happy to hear Dana (with whom I’ve gotten to work some regarding AI ethics) share thoughtful insights into how the company forms its perspectives & works to put its values into practice. I think you’ll enjoy it—perhaps more than you’d expect!

Deeply chill photography

(Cue Metallica’s “Trapped Under Ice”!)

Russell Brown & some of my old Photoshop teammates recently ventured into -40° (!!) weather in Canada, pushing themselves & their gear to the limits to witness & capture the Northern Lights:

Perhaps on future trips they can team up with these folks:

To film an ice hockey match from this new angle of action, Axis Communications used a discreet modular camera — commonly seen in ATMs, onboard vehicles, and other small spaces where a tiny camera needs to fit — and froze it inside the ice.

Check out the results:

Behind—and under—the scenes:

Adobe’s hiring a prototyper to explore generative AI

We’re only just beginning to discover the experiential possibilities around generative creation, so I’m excited to see this rare gig open up:

You will build new and innovative user interactions and interfaces geared towards our customers’ unique needs, and test and refine those interfaces in collaboration with academic researchers, user researchers, designers, artists, and product teams.

Check out the listing for the full details.

Two quotes worth reflecting on as we go into the new year

One, I swear I think of this observation from author Sebastian Junger at least once a day:

“Humans don’t mind hardship, in fact they thrive on it; what they mind is not feeling necessary.”

We’d do well to reflect on it in how we treat our colleagues, and especially—in this time of disruptive AI—how we treat the sensitive, hardworking creators who’ve traditionally supported toolmakers like Adobe. Our “empowering” tech can all too easily make people feel devalued, thrown away like an old piece of fruit. And when that happens, we’re next.

Two, this observation hits me where I live:

I’ve joked for years about my “Irish Alzheimer’s,” in which one forgets everything but the grudges. It’s funny ’cause it’s true—but carried any real distance (focusing on failures & futility), it becomes corrosive, “like taking poison and hoping the other guy gets sick.”

Earlier today an old friend observed, “I’ve always had a justice hang-up.” So have I, and that’s part of what made us friends for so long.

But as I told him, “It’s such a double-edged sword: my over-inflamed sense of justice is a lot of what causes me to speak up too sharply and then light my way by all the burning bridges.” Finding the balance—between apathetic acquiescence on one end & alienating militancy on the other—can be hard.

So, for 2024 I’m trying to lead with gratitude. It’s the best antidote, I’m finding, to bitterness & bile. Let’s be glad for our fleeting opportunities to do, as Mother Teresa put it, “small things with great love.”

Here’s to courage, empathy, and wisdom for our year ahead.

Adobe Firefly named “Product of the Year”

Nice props from The Futurum Group:

Here is why: Adobe Firefly is the most commercially successful generative AI product ever launched. Since it was introduced in March in beta and made generally available in June, at last count in October, Firefly users have generated more than 3 billion images. Adobe says Firefly has attracted a significant number of new Adobe users, making it hard to imagine that Firefly is not aiding Adobe’s bottom line.

Happy New Year!

Hey gang—here’s to having a great 2024 of making the world more beautiful & fun. Here’s a little 3D creation (with processing courtesy of Luma Labs) made from some New Year’s Eve drone footage I captured at Gaviota State Beach. (If it’s not loading for some reason, you can see a video version in this tweet).