Category Archives: Adobe Firefly

Happy birthday, Adobe Firefly

The old (hah! but it seems that way) gal turns two today.

The ride has been… interesting, hasn’t it? I remain eager to see what all the smart folks at Adobe have been cooking up. As a user of Photoshop et al. for the last 30+ years, I selfishly hope it’s great!

In the meantime, I’ll admit that watching the video above—which I wrote & then made with the help of Davis Brown (son of Russell)—makes me kinda blue. Everything it depicts was based on real code we had working at the time. (I insisted that we not show anything that we didn’t think we could have shipping within three months’ time.) How much of that has ever gotten into users’ hands?

Yeah.

But as I say, I’m hoping and rooting for the best. My loyalty has never been to Adobe or to any other made-up entity, but rather to the spirit & practice of human creativity. Always will be, until they drag me off this rock. Rock the F on.

Adobe to offer access to non-Firefly models

Man, I’m old enough to remember writing a doc called “Yes, And…” immediately upon the launch of DALL•E in 2022, arguing that of course Adobe should develop its own generative models and of course it should also offer customers a choice of great third-party models—because of course no single model would be the best for every user in every situation.

And I’m old enough to remember being derided for just not Getting It™ about how selling per-use access to Firefly was going to be a goldmine, so of course we wouldn’t offer users a choice. ¯\_(ツ)_/¯

Oh well. Here we are, exactly two years after the launch of Firefly, and Adobe is going to offer access to third-party models. So… yay!

Analog meets AI in the papercraft world of Karen X Cheng

Check out this fun mixed-media romp, commissioned by Adobe:

And here’s a look behind the scenes:

A cool Firefly image->video flow

For the longest time, Firefly users’ #1 request was to use images to guide composition of new images. Now that Firefly Video has arrived, you can use a reference image to guide the creation of video. Here’s a slick little demo from Paul Trani:

The ceiling can’t hold us stuffed animals

As I drove the Micronaxx to preschool back in 2013, Macklemore’s “Can’t Hold Us” hit the radio & the boys flipped out, making their stuffed buddies Leo & Ollie go nuts dancing to the tune. I remember musing with Dave Werner (a fellow dad to young kids) about being able to animate said buddies.

Fast forward a decade+, and now Dave is using Adobe’s recently unveiled Firefly Video model to do what we could only dimly imagine back then:

Time to unearth Leo & get him on stage at last. :->

Neural rendering: Neo + Firefly

Back when we launched Firefly (alllll the way back in March 2023), we hinted at the potential of combining 3D geometry with diffusion-based rendering, and I tweeted out a very early sneak peek:

A year+ later, I’m no longer working to integrate the Babylon 3D engine into Adobe tools—instead I’m working directly with the Babylon team at Microsoft (!). Meanwhile, I like seeing how my old teammates are continuing to explore integrations between 3D tools (in this case, Project Neo) and Firefly. Here’s one quick flow:

Here’s a quick exploration from the always-interesting Martin Nebelong:

And here’s a fun little Neo->Firefly->AI video interpolation test from Kris Kashtanova:

Photoshop’s new Selection Brush helps control GenFill

Soon after Generative Fill shipped last year, people discovered that using a semi-opaque selection could help blend results into an environment (e.g. putting fish under water). The new Selection Brush in Photoshop takes functionality that’s been around for 30+ years (via Quick Mask mode) and brings it more to the surface, which in turn makes it easier to control GenFill behavior:
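If you want intuition for why opacity matters here, think of the selection as an alpha mask: wherever the selection is, say, 50% opaque, you get a 50/50 blend of generated and original pixels. Here’s a minimal Python sketch of that compositing math (my own illustration, not Photoshop’s actual implementation; the file names are placeholders):

    import numpy as np
    from PIL import Image

    # Load the untouched photo and a Generative Fill result of the same size.
    original = np.asarray(Image.open("scene.png").convert("RGB"), dtype=np.float32)
    generated = np.asarray(Image.open("genfill_result.png").convert("RGB"), dtype=np.float32)

    # A semi-opaque selection behaves like a grayscale alpha mask (0.0-1.0).
    alpha = np.asarray(Image.open("selection_mask.png").convert("L"), dtype=np.float32) / 255.0
    alpha = alpha[..., None]  # add a channel axis so it broadcasts over RGB

    # Classic alpha compositing: generated pixels fade into the original
    # wherever the selection was less than fully opaque.
    composite = alpha * generated + (1.0 - alpha) * original
    Image.fromarray(composite.astype(np.uint8)).save("blended.png")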

Adobe TOS = POS? Not so much.

There’s been a firestorm this week about the terms of service that my old home team put forward, based (as such things have been since time immemorial) on a lot of misunderstanding & fear. Fortunately the company has been working to clarify what’s really going on.

I did at least find this bit of parody amusing:

Podcast: Shantanu on The Verge

Adobe’s CEO (duh :-)) sat down with Nilay Patel for an in-depth interview. Here are some of the key points, as summarized by ChatGPT:

———-

  1. AI as a Paradigm Shift: Narayen views AI as a fundamental shift, similar to the transitions to mobile and cloud technologies. He emphasizes that AI, especially generative AI, can automate tasks, enhance creative processes, and democratize access to creative tools. This allows users who might not have traditional artistic skills to create compelling content (GIGAZINE, Stanford Graduate School of Business).
  2. Generative AI in Adobe Products: Adobe’s Firefly, a family of generative AI models, has been integrated into various Adobe products. Firefly enhances creative workflows by enabling users to generate images, text effects, and video content with simple text prompts. This integration aims to accelerate ideation, exploration, and production, making it easier for creators to bring their visions to life (Adobe News, Adobe Blog).
  3. Empowering Creativity: Narayen highlights that Adobe’s approach to AI is centered around augmenting human creativity rather than replacing it. Tools like Generative Fill in Photoshop and new generative AI features in Premiere Pro are designed to streamline tedious tasks, allowing creators to focus on the more creative aspects of their work. This not only improves productivity but also expands creative possibilities (The Print, Adobe News).
  4. Business Model and Innovation: Narayen discusses how Adobe is adapting its business model to leverage AI. By integrating AI across Creative Cloud, Document Cloud, and Experience Cloud, Adobe aims to enhance its products and deliver more value to users. This includes experimenting with new business models and monetizing AI-driven features to stay at the forefront of digital creativity (Stanford Graduate School of Business, The Print).
  5. Content Authenticity and Ethics: Adobe emphasizes transparency and ethical use of AI. Initiatives like Content Credentials help ensure that AI-generated content is properly attributed and distinguishable from human-created content. This approach aims to maintain trust and authenticity in digital media (Adobe News, Adobe Blog).

Drawing-based magic with Firefly & Magnific

Man, who knew that posting the tweet below would get me absolutely dragged by AI haters (“Worst. Dad. Ever.”) who briefly turned me into the Bean Dad of AI art? I should say more about that eye-opening experience, but for now, enjoy (unlike apparently thousands of others!) this innocuous mixing of AI & kid art:


Elsewhere, here’s a cool thread showing how even simple sketches can be interpreted in the style of 3D renderings via Magnific:

Lego + GenFill = Yosemite Magic

Or… something like that. Whatever the case, I had fun popping our little Lego family photo (captured this weekend at Yosemite Valley’s iconic Tunnel View viewpoint) into Photoshop, selecting part of the excessively large rock wall, and letting Generative Fill give me some more nature. Click or tap (if needed) to see the before/after animation:

Infographic magic via Firefly?

Hey, I know what you know (or quite possibly less :-)), but this demo (which for some reason includes Shaq) looks pretty cool:

From the description:

Elevate your data storytelling with #ProjectInfographIt, a game-changing solution leveraging Adobe Firefly generative AI. Simplify the infographic creation process by instantly generating design elements tailored to your key messages and data. With intuitive features for color palettes, chart types, graphics, and animations, effortlessly transform complex insights into visually stunning infographics.

Fun uses of Firefly’s Structure Reference

Man, I can’t tell you how long I’ve wanted to get this tech into folks’ hands, and I’m excited that you can finally take it for a spin. Here are some great examples (from a thread by Min Choi, which contains more) showing how people are putting it into action:

Reinterpreted kids’ drawings:

More demanding sketch-to-image:

Stylized Bitmoji:

Firefly adds Structure Reference

I’m delighted to see that the longstanding #1 user request for Firefly—namely the ability to upload an image to guide the structure of a generated image—has now arrived:

This nicely complements the extremely popular style-matching capability we enabled back in October. You can check out details of how it works, as well as a look at the UI (below)—plus my first creation made using the new tech ;-).

Firefly image creation & Lightroom come to Apple Vision Pro

Not having a spare $3500 burning a hole in my pocket, I’ve yet to take this for a spin myself, but I’m happy to see it. Per the Verge:

The interface of the Firefly visionOS app should be familiar to anyone who’s already used the web-based version of the tool — users just need to enter a text description within the prompt box at the bottom and hit “generate.” This will then spit out four different images that can be dragged out of the main app window and placed around the home like virtual posters or prints. […]

Meanwhile, we also now have a better look at the native Adobe Lightroom photo editing app that was mentioned back when the Apple Vision Pro was announced last June. The visionOS Lightroom experience is similar to that of the iPad version, with a cleaner, simplified interface that should be easier to navigate with hand gestures than the more feature-laden desktop software.

My panel discussion at the AI User Conference

Thanks to Jackson Beaman & crew for putting together a great event yesterday in SF. I joined him, KD Deshpande (founder of Simplified), and Sofiia Shvets (founder of Let’s Enhance & Claid.ai) for a 20-minute panel discussion (which starts at 3:32:03 or so, in case the embedded version doesn’t jump you to the proper spot) about creating production-ready imagery using AI. Enjoy, and please let me know if you have any comments or questions!

Tutorial: Firefly + Character Animator

Helping discover Dave Werner & bring him into Adobe remains one of my favorite accomplishments at the company. He continues to do great work in designing characters as well as the tools that can bring them to life. Watch how he combines Firefly with Adobe Character Animator to create & animate a stylish tiger:

Adobe Firefly’s text to image feature lets you generate imaginative characters and assets with AI. But what if you want to turn them into animated characters with performance capture and control over elements like arm movements, pupils, talking, and more? In this tutorial, we’ll walk through the process of taking a static Adobe Firefly character and turning it into an animated puppet using Adobe Photoshop or Illustrator plus Character Animator.

Adobe Firefly named “Product of the Year”

Nice props from The Futurum Group:

Here is why: Adobe Firefly is the most commercially successful generative AI product ever launched. Since it was introduced in March in beta and made generally available in June, at last count in October, Firefly users have generated more than 3 billion images. Adobe says Firefly has attracted a significant number of new Adobe users, making it hard to imagine that Firefly is not aiding Adobe’s bottom line.

Demos: Using Generative AI in Illustrator

If you’ve been sleeping on Text to Vector, check out this handful of quick how-to vids that’ll get you up to speed:

What’s even better than Generative Fill? GenFill that moves.

Back in the day, I dreaded demoing Photoshop ahead of the After Effects team: we’d do something cool, and they’d make that cool thing move. I hear echoes of that in Project Fast Fill—generative fill for video.

Project Fast Fill harnesses Generative Fill, powered by Adobe Firefly, to bring generative AI technology into video editing applications. This makes it easy for users to use simple text prompts to perform texture replacement in videos, even for complex surfaces and varying light conditions. Users can use this tool to edit an object on a single frame and that edit will automatically propagate into the rest of the video’s frames, saving video editors a significant amount of texture editing time.

Check it out:

Adobe Project Poseable: 3D humans guiding image generation

Roughly 1,000 years ago (i.e. this past April!), I gave an early sneak peek at the 3D-to-image work we’ve been doing around Firefly. Now at MAX, my teammate Yi Zhou has demonstrated some additional ways we could put the core tech to work—by adding posable humans to the scene.

Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.

Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.

Check it out:

Generative Match: It’s Pablos all the way down…

Here’s a fun little tutorial from my teammate Kris on using reference images to style your prompt (in this case, her pet turtle Pablo). And meanwhile, here’s a little gallery of good style reference images (courtesy of my fellow PM Lee) that you’re welcome to download and use in your creations.

Important protections for creators in Generative Match

I’m really happy & proud that Firefly now enables uploading your own images & mixing them into your creations. For months & months, this has been users’ #1 feature request.

But with power comes responsibility, of course, and we’ve spent a lot of time thinking about ways to discourage misuse of the tech (i.e. how do we keep this from becoming a rip-off engine?). I’m glad to say that we’ve invested in some good guidelines & guardrails:

First, we require users to confirm they have the right to use any work that they upload to Generative Match as a reference image.

Second, if an image’s Content Credentials include tags indicating that the image shouldn’t be used as a style reference, users won’t be able to use it with Generative Match. We will be rolling out the ability to add these tags to assets as part of the Content Credentials framework within our flagship products.

Third, when a reference image is used to generate an asset, we save a thumbnail of the image to help ensure that the use of Generative Match meets our terms of service. We also note that a reference image was used in the asset’s Content Credentials. Storing the reference image provides an important dose of accountability.

To be clear, these protections are just first steps, and we plan to do more to strengthen protections. In the meantime, your feedback is most welcome!
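For the curious, here’s a speculative sketch of how that gating might look in code. To be clear, these are not Adobe APIs; the helper below is a hypothetical stand-in for reading an image’s Content Credentials (C2PA) manifest:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ContentCredentials:
        no_style_reference: bool  # hypothetical "don't use as a style reference" tag
        thumbnail: bytes

    def read_credentials(image_path: str) -> Optional[ContentCredentials]:
        # Stub: a real implementation would parse the C2PA manifest in the file.
        return None

    def can_use_style_reference(image_path: str, user_confirmed_rights: bool) -> bool:
        # Step 1: the user must confirm they hold rights to the uploaded work.
        if not user_confirmed_rights:
            return False
        # Step 2: honor an opt-out tag carried in the image's Content Credentials.
        creds = read_credentials(image_path)
        if creds is not None and creds.no_style_reference:
            return False
        # Step 3 (accountability) happens at generation time: store a thumbnail
        # of the reference and note its use in the output's Content Credentials.
        return True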

Introducing Generative Match in Firefly

Hey everyone—I’m just back from Adobe MAX, and hopefully my blog is back from some WordPress database shenanigans that have kept me from posting.

I don’t know what the site will enable right now, so I’ll start by simply pointing to a great 30-second tour of my favorite new feature in Firefly, Generative Match. It enables you to upload your own image as a style reference, or to pick one that Adobe provides, and mix it together with your prompt and other parameters.

You can then optionally share the resulting recipe (via “Copy link” in the Share menu that appears over results), complete with the image ingredient; try this example. This goes well beyond what one can do with just copying/pasting a prompt, and as we introduce more multimodal inputs (3D object, sketching, etc.), it’ll become all the more powerful.

All images below were generated with the following prompt: a studio portrait of a fluffy llama, hyperrealistic, shot on a white cyclorama + various style images:

Firefly summary on The Verge

In case you missed any or all of last week’s news, here’s a quick recap:

Firefly-powered workflows that have so far been limited to the beta versions of Adobe’s apps — like Illustrator’s vector recoloring, Express text-to-image effects, and Photoshop’s Generative Fill tools — are now generally available to most users (though there are some regional restrictions in countries with strict AI laws like China).

Adobe is also launching a standalone Firefly web app that will allow users to explore some of its generative capabilities without subscribing to specific Adobe Creative Suite applications. Adobe Express Premium and the Firefly web app will be included as part of a paid Creative Cloud subscription plan.

Specifically around credits:

To help manage the compute demand (and the costs associated with generative AI), Adobe is also introducing a new credit-based system that users can “cash in” to access the fastest Firefly-powered workflows. The Firefly web app, Express Premium, and Creative Cloud paid plans will include a monthly allocation of Generative Credits starting today, with all-app Creative Cloud subscribers receiving 1,000 credits per month.

Users can still generate Firefly content if they exceed their credit limit, though the experience will be slower. Free plans for supported apps will also include a credit allocation (subject to the app), but this is a hard limit and will require customers to purchase additional credits if they’re used up before the monthly reset. Customers can buy additional Firefly Generative Credit subscription packs starting at $4.99.
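If the tiering is hard to keep straight, here’s a toy model of the behavior described above (my own sketch for illustration, not Adobe’s billing logic):

    from dataclasses import dataclass

    @dataclass
    class Plan:
        monthly_credits: int
        paid: bool

    @dataclass
    class Account:
        plan: Plan
        credits_remaining: int

    def generate(account: Account, cost: int = 1) -> str:
        if account.credits_remaining >= cost:
            account.credits_remaining -= cost
            return "fast generation"
        if account.plan.paid:
            return "slow generation"            # paid plans degrade gracefully
        return "blocked: buy a credit pack"     # free plans hit a hard limit

    # An all-apps Creative Cloud subscriber starts each month with 1,000 credits.
    cc_all_apps = Account(Plan(monthly_credits=1000, paid=True), credits_remaining=1000)
    print(generate(cc_all_apps))  # -> "fast generation"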

How Adobe is compensating Stock creators for their contributions to Firefly

None of this AI magic would be possible without beautiful source materials from creative people, and in a new blog post and FAQ, the Adobe Stock team provides some new info:

All eligible Adobe Stock contributors with photos, vectors or illustrations in the standard and Premium collection, whose content was used to train the first commercial Firefly model will receive a Firefly bonus. This initial bonus, which will be different for each contributor, is based on the all-time total number of approved images submitted to Adobe Stock that were used for Firefly training, and the number of licenses that those images generated in the 12-month period from June 3rd, 2022, to June 2nd, 2023. The bonus is planned to pay out once a year and is currently weighted towards number of licenses issued for an image, which we consider a useful proxy for the demand and usefulness of those images. The next Firefly Bonus is planned for 2024 for new content used for training Firefly.
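Adobe hasn’t published the actual formula or weights, but the description above amounts to something like the following sketch, with placeholder weights chosen only to illustrate “weighted towards number of licenses issued”:

    def firefly_bonus(images_used_in_training: int,
                      licenses_jun2022_jun2023: int,
                      w_images: float = 0.2,
                      w_licenses: float = 0.8) -> float:
        # Hypothetical weighting: licenses count for more than raw image count.
        return w_images * images_used_in_training + w_licenses * licenses_jun2022_jun2023

    # e.g. a contributor with 500 trained images that earned 2,000 licenses:
    print(firefly_bonus(500, 2000))  # 0.2*500 + 0.8*2000 = 1700.0 (arbitrary units)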

They’ve also provided info on what’s permissible around submitting AI-generated content:

With Adobe Firefly now commercially available, Firefly-generated works that meet our generative AI submission guidelines will now be eligible for submission to Adobe Stock. Given the proliferation of generative AI in tools like Photoshop, and many more tools and cameras to come, we anticipate that assets in the future will contain some number of generated pixels and we want to set up Adobe Stock for the future while protecting artists. We are increasing our moderation capabilities and systems to be more effective at preventing the use of creators’ names as prompts with a focus on protecting creators’ IP. Contributors who submit content that infringes or violates the IP rights of other creators will be removed from Adobe Stock.

Firefly: Making a lo-fi animation with Adobe Express

Check out this quick tutorial from Kris Kashtanova:

Firefly site gets faster, adds dark mode support & more

Good stuff just shipped on firefly.adobe.com:

  • New menu options enable sending images from the Text to Image module to Adobe Express.
  • The UI now supports Danish, Dutch, Finnish, Italian, Korean, Norwegian, Swedish, and Chinese. Go to your profile and select preferences to change the UI language.
  • New fonts are available for Korean, Chinese (Traditional), and Chinese (Simplified).
  • Dark mode is here! Go to your profile and select preferences to change the mode.
  • A licensing and indemnification workflow is supported for entitled users.
  • Mobile bug fixes include significant performance improvements.
  • You can now access Firefly from the Web section of CC Desktop.

You may need to perform a hard refresh in your browser to see the changes: Cmd (Ctrl) + Shift + R.

If anything looks amiss, or if there’s more you’d like to see changed, please let us know!

GenFill + old photos = 🥰

Speaking of using Generative Fill to build up areas with missing detail, check out this 30-second demo of old photo restoration:

And though it’s not presently available in Photoshop, check out this use of ControlNet to revive an old family photo:

As the poster (u/prean625 in r/StableDiffusion) put it: “ControlNet did a good job rejuvenating a stained blurry 70 year old photo of my 90 year old grandparents.”
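The poster didn’t share a recipe, but if you want to attempt something similar with open-source tools, one plausible approach is a “tile” ControlNet plus img2img in the diffusers library, which preserves the photo’s structure while regenerating detail. The model IDs below are real; the prompt and strength/step settings are guesses you’d need to tune:

    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    old_photo = Image.open("grandparents.jpg").convert("RGB").resize((512, 512))

    restored = pipe(
        prompt="a clean, sharp vintage portrait photograph, detailed faces",
        image=old_photo,          # img2img starting point
        control_image=old_photo,  # the tile ControlNet anchors the composition
        strength=0.5,             # lower values stay closer to the original
        num_inference_steps=30,
    ).images[0]
    restored.save("restored.png")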

“Where the Fireflies Fly”

I had a ball chatting with members of the Firefly community, including our new evangelist Kris Kashtanova & O.G. designer/evangelist Rufus Deuchler. It was a really energetic & wide-ranging conversation, and if you’d like to check it out, here ya go:

Photoshop introduces Generative Expand

It’s here (in your beta copy of Photoshop, same as Generative Fill), and it works pretty much exactly as I think you’d expect: drag out crop handles, then optionally specify what you want placed into the expanded region.
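Adobe hasn’t detailed what happens under the hood, but the generic technique behind this kind of feature is “outpainting”: grow the canvas, mask the newly exposed pixels, and inpaint them from a prompt. Here’s a rough sketch with open-source tools (the diffusers library); the file names, sizes, and prompt are illustrative:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    src = Image.open("photo.png").convert("RGB")     # assume a 512x512 source
    W, H = src.width + 256, src.height               # "drag the crop handle" to the right

    canvas = Image.new("RGB", (W, H))
    canvas.paste(src, (0, 0))

    mask = Image.new("L", (W, H), 255)               # white = regions to generate
    mask.paste(Image.new("L", src.size, 0), (0, 0))  # black = keep original pixels

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    expanded = pipe(
        prompt="more forest and sky",  # what you want placed in the new region
        image=canvas, mask_image=mask, width=W, height=H,
    ).images[0]
    expanded.save("expanded.png")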

In addition:

Today, we’re excited to announce that Firefly-powered features in Photoshop (beta) will now support text prompts in 100+ languages — enabling users around the world to bring their creative vision to life with text prompts in the language they prefer.