It’s always great to learn from the master—especially when he’s making “spaghetti western” literal!
If you’ve been sleeping on Text to Vector, check out this handful of quick how-to vids that’ll get you up to speed:
- Welcome to Generative AI in Illustrator
- Generate artwork from text with Text to Vector Graphic (Beta)
- Explore creating stunning patterns with Text to Vector Graphics
- Tips for making your best artwork with Text to Vector Graphic (Beta)
- Tips: Take Your Text to Vector Graphic (Beta) patterns to “Wow!”
- Tip: Control your pattern color with Text to Vector Graphic (Beta)
I got to spend time Friday live streaming with the Firefly community, showing off some of the new MAX announcements & talking about some of what might be coming down the line. I hope you enjoy it, and I’d welcome any feedback on this session or on what you’d like to see in the future.
Back in the day, I dreaded demoing Photoshop ahead of the After Effects team: we’d do something cool, and they’d make that cool thing move. I hear echoes of that in Project Fast Fill—generative fill for video.
Project Fast Fill harnesses Generative Fill, powered by Adobe Firefly, to bring generative AI technology into video editing applications. It lets users perform texture replacement in videos with simple text prompts, even on complex surfaces and under varying lighting conditions. An edit made to an object on a single frame automatically propagates to the rest of the video's frames, saving video editors a significant amount of texture-editing time.
Check it out:
Roughly 1,000 years ago (i.e. this past April!), I gave an early sneak peek at the 3D-to-image work we’ve been doing around Firefly. Now at MAX, my teammate Yi Zhou has demonstrated some additional ways we could put the core tech to work—by adding posable humans to the scene.
Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.
Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.
Check it out:
Here’s a fun little tutorial from my teammate Kris on using reference images to style your prompt (in this case, her pet turtle Pablo). And meanwhile, here’s a little gallery of good style reference images (courtesy of my fellow PM Lee) that you’re welcome to download and use in your creations.
Tutorial: How to generate images in different styles using reference image library in Adobe Firefly
— Kris Kashtanova (@icreatelife) October 12, 2023
I’m delighted to say that the first Firefly Vector model is now available (as a beta—feedback welcome!) in Illustrator. Just download your copy to get started. Here’s a quick tour:
And more generally, it’s just one of numerous enhancements now landing in Illustrator. Check ’em out:
Hey everyone—I’m just back from Adobe MAX, and hopefully my blog is back from some WordPress database shenanigans that have kept me from posting.
I don’t know what the site will enable right now, so I’ll start by simply pointing to a great 30-second tour of my favorite new feature in Firefly, Generative Match. It enables you to upload your own image as a style reference, or to pick one that Adobe provides, and mix it together with your prompt and other parameters.
You can then optionally share the resulting recipe (via “Copy link” in the Share menu that appears over results), complete with the image ingredient; try this example. This goes well beyond what one can do with just copying/pasting a prompt, and as we introduce more multimodal inputs (3D object, sketching, etc.), it’ll become all the more powerful.
All images below were generated with the following prompt: a studio portrait of a fluffy llama, hyperrealistic, shot on a white cyclorama + various style images:
Powered by Firefly, in development now:
In case you missed any or all of last week’s news, here’s a quick recap:
Firefly-powered workflows that have so far been limited to the beta versions of Adobe’s apps — like Illustrator’s vector recoloring, Express text-to-image effects, and Photoshop’s Generative Fill tools — are now generally available to most users (though there are some regional restrictions in countries with strict AI laws like China).
Adobe is also launching a standalone Firefly web app that will allow users to explore some of its generative capabilities without subscribing to specific Adobe Creative Suite applications. Adobe Express Premium and the Firefly web app will be included as part of a paid Creative Cloud subscription plan.
Specifically around credits:
To help manage the compute demand (and the costs associated with generative AI), Adobe is also introducing a new credit-based system that users can “cash in” to access the fastest Firefly-powered workflows. The Firefly web app, Express Premium, and Creative Cloud paid plans will include a monthly allocation of Generative Credits starting today, with all-app Creative Cloud subscribers receiving 1,000 credits per month.
Users can still generate Firefly content if they exceed their credit limit, though the experience will be slower. Free plans for supported apps will also include a credit allocation (subject to the app), but this is a hard limit and will require customers to purchase additional credits if they’re used up before the monthly reset. Customers can buy additional Firefly Generative Credit subscription packs starting at $4.99.
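The policy described above has two distinct behaviors once a monthly allocation runs out: paid plans fall back to slower generation, while free plans hit a hard stop until credits are purchased or the month resets. Here's a toy sketch of that logic in Python — the 1,000-credit figure for all-app Creative Cloud subscribers comes from the announcement, but the free-plan number and all names are made up for illustration, not Adobe's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    monthly_credits: int
    hard_limit: bool  # free plans stop at the limit; paid plans just slow down

def generation_mode(plan: Plan, credits_used: int) -> str:
    """Return how the next generation is served under the described policy."""
    if credits_used < plan.monthly_credits:
        return "fast"        # within the monthly allocation
    if plan.hard_limit:
        return "blocked"     # free plan: buy more credits or wait for the reset
    return "slow"            # paid plan: still works, just slower

cc_all_apps = Plan(monthly_credits=1000, hard_limit=False)  # per the announcement
free_plan = Plan(monthly_credits=25, hard_limit=True)       # 25 is a made-up number

print(generation_mode(cc_all_apps, 1000))  # slow
print(generation_mode(free_plan, 25))      # blocked
```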
All eligible Adobe Stock contributors with photos, vectors, or illustrations in the standard and Premium collections whose content was used to train the first commercial Firefly model will receive a Firefly bonus. This initial bonus, which will be different for each contributor, is based on the all-time total number of approved images submitted to Adobe Stock that were used for Firefly training, and the number of licenses those images generated in the 12-month period from June 3rd, 2022, to June 2nd, 2023. The bonus is planned to pay out once a year and is currently weighted toward the number of licenses issued for an image, which we consider a useful proxy for the demand for and usefulness of those images. The next Firefly Bonus is planned for 2024 for new content used for training Firefly.
They’ve also provided info on what’s permissible around submitting AI-generated content:
With Adobe Firefly now commercially available, Firefly-generated works that meet our generative AI submission guidelines will now be eligible for submission to Adobe Stock. Given the proliferation of generative AI in tools like Photoshop, and many more tools and cameras to come, we anticipate that assets in the future will contain some number of generated pixels and we want to set up Adobe Stock for the future while protecting artists. We are increasing our moderation capabilities and systems to be more effective at preventing the use of creators’ names as prompts with a focus on protecting creators’ IP. Contributors who submit content that infringes or violates the IP rights of other creators will be removed from Adobe Stock.
I had fun catching up with folks at the AI Salon (see background) a couple of weeks ago, talking about the past, present, and future of Adobe Firefly. If that’s up your alley, here’s my talk (cued up to my starting point). Note that the content about watermarks & stock contributors predates last week’s “ready for commercial use” announcements.
Check out this quick tutorial from Kris Kashtanova:
Tutorial: How to make a lo-fi animation with new Adobe Express!
Adobe Express is available to everyone today and I made this super short tutorial for you of what’s possible. It has GenAI, background remove, making cool animations and more.
Get it here: https://t.co/PovZvcmDqL
— Kris Kashtanova (@icreatelife) August 16, 2023
Good stuff just shipped on firefly.adobe.com:
- New menu options enable sending images from the Text to Image module to Adobe Express.
- The UI now supports Danish, Dutch, Finnish, Italian, Korean, Norwegian, Swedish, and Chinese. Go to your profile and select preferences to change the UI language.
- New fonts are available for Korean, Chinese (Traditional), and Chinese (Simplified).
- Dark mode is here! Go to your profile and select preferences to change the mode.
- A licensing and indemnification workflow is supported for entitled users.
- Mobile bug fixes include significant performance improvements.
- You can now access Firefly from the Web section of CC Desktop.
You may need to perform a hard refresh in your browser to see the changes: Cmd (Ctrl) + Shift + R.
If anything looks amiss, or if there’s more you’d like to see changed, please let us know!
Speaking of using Generative Fill to build up areas with missing detail, check out this 30-second demo of old photo restoration:
— Howard Pinsky (@Pinsky) August 3, 2023
And though it’s not presently available in Photoshop, check out this use of ControlNet to revive an old family photo:
I found PiXimperfect’s clever use of Quick Mask + GenFill interesting. It’s basically “Select Subject -> Quick Mask -> paint over hair edges -> generate,” filling in areas where the original selection/removal process left something to be desired.
I had a ball chatting with members of the Firefly community, including our new evangelist Kris Kashtanova & O.G. designer/evangelist Rufus Deuchler. It was a really energetic & wide-ranging conversation, and if you’d like to check it out, here ya go:
We’ll definitely do this again. Thank you all for joining us today.
— Kris Kashtanova (@icreatelife) August 3, 2023
I dig this simple, creative application from Nemanja Sekulic:
It’s here (in your beta copy of Photoshop, same as Generative Fill), and it works pretty much exactly as I think you’d expect: drag out crop handles, then optionally specify what you want placed into the expanded region.
Today, we’re excited to announce that Firefly-powered features in Photoshop (beta) will now support text prompts in 100+ languages — enabling users around the world to bring their creative vision to life with text prompts in the language they prefer.
What’s real, and what’s Generative Fill? Watch as photographer Peter McKinnon tries to tell the difference in real time!
What’s a great creative challenge?
What fun games make you feel more connected with friends?
What’s the “Why” (not just the “What” & “How”) at the heart of generative imaging?
These are some of the questions we’ve been asking ourselves as we seek out some delightful, low-friction ways to get folks creating & growing their skills. To that end I had a ball joining my teammates Candice, Beth Anne, and Gus for a Firefly livestream a couple of weeks ago, engaging in a good chat with the audience as we showed off some of the weirder & more experimental ideas we’ve had. I’ve cued up this vid to roughly the part where we get into those ideas, and I’d love to hear your thoughts on those—or really anything in the whole conversation. TIA!
Easing into the week with fun cultural detritus + Photoshop magic… 😌
Check it out!
Details, if you’re interested:
What’s new with Adobe Firefly?
Firefly can now support prompts in over 100 languages. Also, the Firefly website is now available in Japanese, French, German, Spanish, and Brazilian Portuguese, with additional languages to come.
How are the translations of prompts done?
Support for over 100 languages is in beta and uses machine translation to English provided by Microsoft Translator. This means that translations are done by computers and not manually by humans.
What if I see errors in translations or my prompt isn’t accurately translated?
Because Firefly uses machine translation, and given the nuances of each language, it’s possible certain generations based on translated prompts may be inaccurate or unexpected. You can report negative translation results using the Report tool available in every image.
Can I type in a prompt in another language in the Adobe Express, Photoshop, and Illustrator beta apps?
Not at this time, though this capability will be coming to those apps in the future.
Which languages will the Firefly site be in on 7/12?
We are localizing the Firefly website into Japanese, French, German, Spanish, Brazilian Portuguese and expanding to others on a rolling basis.
On the extremely off chance that you’re new to Firefly (or, more likely, that someone you know is), check out this handy intro speed run from evangelist Paul Trani:
I had a ball chatting last week with Farhad & Faraz on the Bad Decisions Podcast. (My worst decision was to so fully embrace vacation that I spaced on when we were supposed to chat, leaving me to scramble from the dog park & go tear-assing home to hop on the chat. Hence my terrible hair, which Farhad more than offset with his. 😌) We had a fast-paced, wide-ranging conversation, and I hope you find it valuable. As always I’d love to hear any & all feedback on what we’re doing & what you need.
If you enjoyed yesterday’s session with Tomasz & Lisa, I think you’ll really dig this one as well:
Join Lisa Carney and Jesús Ramirez as they walk you through their real-world projects and how they use generative AI tools in their workflows. Watch as they show you how they make revisions from client feedback, create different formats from a single piece, and collaborate using Creative Cloud Libraries. Stay tuned to check out some of their work from real-life TV shows!
Interestingly, easily the first half had little if anything to do with AI or other technology per se, and everything to do with the design language of posters (e.g. comedies being set on white, Japanese posters emphasizing text)—which I found just as intriguing.
Check out this great little demo from Rob de Winter:
OK, this makes @Photoshop Generative Fill even more powerful. Make a rough sketch with the brush tool and generate an image based on it.❤️Find all the steps in the thread below #GenerativeAI #photoshop #photoshopai #beta #generativefill #controlnet @Adobe
— Rob de Winter (@robdewinter) June 25, 2023
The steps are, he writes,
- Draw a rough outline with the brush tool and use different colors for all parts.
- Go to Quick Mask Mode (Q).
- Go to Edit > Fill and choose a 70% grey fill. The lower this percentage, the more the end result will resemble your original sketch (i.e., increasingly cartoon-like).
- Exit Quick Mask Mode (Q). You now have a 70% opaque selection.
- Click Generative Fill and type your prompt. Something like: summer grassland landscape with tree (first example) or river landscape with mountains (second example). You can also keep it really simple, just play with it!
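The grey Quick Mask fill is what makes this trick work: it produces a *partially* opaque selection, so the generated pixels blend with your sketch instead of replacing it outright. Conceptually, that’s just an alpha blend. Here’s a minimal NumPy sketch of the idea — purely illustrative, since Photoshop’s actual compositing internals aren’t public:

```python
import numpy as np

def blend_with_partial_selection(original, generated, selection_opacity=0.7):
    """Blend generated pixels over the original through a uniform partial
    selection, analogous to exiting Quick Mask after a grey fill.

    original, generated: float arrays in [0, 1] with the same shape.
    selection_opacity: 0.0 keeps the original; 1.0 fully replaces it.
    """
    alpha = float(selection_opacity)
    return alpha * generated + (1.0 - alpha) * original

# With a 70% selection, 30% of the sketch still shows through.
sketch = np.zeros((2, 2))   # stand-in for the rough brush sketch
render = np.ones((2, 2))    # stand-in for the generated image
out = blend_with_partial_selection(sketch, render, 0.7)
print(out)  # every pixel is 0.7
```

Dropping `selection_opacity` is the code-level analogue of lowering the grey-fill percentage: more of the original sketch survives in the result.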
Salvation arrives, my thinning kings. 😌
And tangentially, see how baldness helped enhance Photoshop masking 15 years ago. 👴🏻
Check out how you can vary intensity in your selections, leading to more realistic rendering & blending:
Note that in the Web module, you can vary intensity directly while painting a selection:
Adobe Inc. said on Thursday it will offer Firefly, its artificial intelligence tool for generating images, to its large business customers, with financial indemnity for copyright challenges involving content made with the tools.
In an effort to give those customers confidence, Adobe said it will offer indemnification for images created with the service, though the company did not give financial or legal details of how the program will work.
“We financially are standing behind all of the content that is produced by Firefly for use either internally or externally by our customers,” Ashley Still, senior vice president of digital media at Adobe, told Reuters.
Check out the quick vid below!
And check out this page for details on getting & using the feature.
I love seeing Michael Tanzillo‘s Illustrator 3D -> Adobe Stager -> Photoshop workflow for making and enhancing the adorable “Little Miss Sparkle Bao Bao”:
My teammates Danielle Morimoto & Tomasz Opasinski are accomplished artists who recently offered a deep dive on creating serious, ambitious work (not just one-and-done prompt generations) using Adobe Firefly. Check it out:
Explore the practical benefits of using Firefly in real-world projects with Danielle & Tomasz. Today, they’ll walk through the poster design process in Photoshop using prompts generated in Firefly. Tune into the live stream and join them as they discuss how presenting more substantial visuals to clients goes beyond simple sketches, and how this creative process could evolve in the future. Get ready to unlock new possibilities of personalization in your work, reinvent yourself as an artist or designer, and achieve what was once unimaginable. Don’t miss this opportunity to level up your creative journey and participate in this inspiring session!
Check out a speed run of fun, practical applications courtesy of PiXimperfect:
When you see only one set of footprints on the sand… that’s when Russell GenFilled you out. 😅
On a chilly morning two years ago, I trekked out to the sand dunes in Death Valley to help (or at least observe) Russell on a dawn photoshoot with some amazing performers and costumes. Here he takes the imagery farther using Generative Fill in Photoshop:
On an adjacent morning, we made our way to Zabriskie Point for another shoot. Here he shows how to remove wrinkles and enhance fabric using the new tech:
And lastly—no anecdote here—he shows some cool non-photographic applications of artwork extension:
I owe a lot of my career to Adobe’s O.G. creative director—one of the four names on the Photoshop 1.0 splash screen—and seeing his starry-eyed exuberance around generative imaging has been one of my absolute favorite things over the past year. Now that Generative Fill has landed in Photoshop, Russell’s doing Russell things, sharing a bunch of great new tutorials. I’ll start by sharing two:
Check out his foundational Introduction to Generative Fill:
And then peep some tips specifically on getting desired shapes using selections:
Stay tuned for more soon!
Check out this new course from longtime Adobe expert Jan Kabili:
Adobe Firefly is an exciting new generative AI imaging tool from Adobe. With Firefly, you can create unique images and text effects by typing text prompts and choosing from a variety of style inputs. In this course, imaging instructor and Adobe trainer Jan Kabili introduces Firefly. She explains what Firefly can offer to your creative workflow, and what makes it unique in the generative AI field. She demonstrates how to generate images from prompts, built-in styles, and reference images, and shares tips for generating one-of-a-kind text effects. Finally, Jan shows you how to use images generated by Firefly to create a unique composite in Photoshop.
“I’m so f***ing sick & tired of the Photoshop” — Kendrick Lamar
And yet we’re back at it with Generative Fill… 😜:
Here’s me, talking fast about anything & everything related to Firefly and possibilities around creative tools. Give ‘er a listen if you’re interested (or, perhaps, are just suffering from insomnia 😌):
Old-school: Content-Aware Phil.
New school: Generative Phil. 😛
There’s a roughly zero percent chance that you both 1) still find this blog & 2) haven’t already seen all the Generative Fill coverage from our launch yesterday 🎉. I’ll have a lot more to say about that in the future, but for now, you can check out the module right now and get a quick tour here:
And here’s a rad little workflow optimization I’m proud we were able to sneak in:
I had a ball catching up with my TikTok-rockin’ Google 3D veteran friend Bilawal Sidhu on Twitter yesterday. We (okay, mostly I) talked for the better part of 2.5 hours (!), which you can check out here if you’d like. I’ll investigate whether there’s a way to download, transcribe, and summarize our chat. 🤖 In the meantime, I’d love to hear any comments it brings to mind.
Note that this is just a first step: favorites are stored locally in your browser, not (yet) synced with the cloud. We want to build from here to enable really easy sharing & discovery of great presets. Stay tuned, and please let us know what you think!