…with bears! Courtesy of image references in Photoshop GenFill:
— Anna McNaught (@annamcnaughty) August 28, 2024
Back when we launched Firefly (alllll the way back in March 2023), we hinted at the potential of combining 3D geometry with diffusion-based rendering, and I tweeted out a very early sneak peek:
Did you see this mind blowing Adobe ControlNet + 3D Composer Adobe is going to launch! It will really boost creatives’ workflow. Video through @jnack
— Kris Kashtanova (@icreatelife) May 14, 2023
A year+ later, I’m no longer working to integrate the Babylon 3D engine into Adobe tools—instead, I’m working directly with the Babylon team at Microsoft (!). Meanwhile, I like seeing how my old teammates are continuing to explore integrations between 3D (in this case, Project Neo) and generative imaging. Here’s one quick flow:
Here’s a quick exploration from the always-interesting Martin Nebelong:
A very quick first test of Adobe Project Neo.. didn’t realize this was out in open beta by now. Very cool!
I had to try to sculpt a burger and take that through Krea. You know, the usual thing!
There’s some very nice UX in NEO and the list-based SDF editing is awesome.. very… pic.twitter.com/e3ldyPfEDw
— Martin Nebelong (@MartinNebelong) April 26, 2024
And here’s a fun little Neo->Firefly->AI video interpolation test from Kris Kashtanova:
Tutorial: Direct your cartoons with Project Neo + Firefly + ToonCrafter
1) Model your characters in Project Neo
2) Generate first and last frame with Firefly + Structure Reference
3) Use ToonCrafter to make a video interpolation between the first and the last frame. Enjoy! pic.twitter.com/YPy32hoVDR
— Kris Kashtanova (@icreatelife) June 3, 2024
Soon after Generative Fill shipped last year, people discovered that using a semi-opaque selection could help blend results into an environment (e.g. putting fish under water). The new Selection Brush in Photoshop takes functionality that’s been around for 30+ years (via Quick Mask mode) and brings it more to the surface, which in turn makes it easier to control GenFill behavior:
The Selection Brush has arrived in @Photoshop! ✨
“Okay, but why the heck do I need yet ANOTHER selection tool?!”
Most traditional selection methods offer no control over the opacity of your selections.
Typically this wouldn’t matter, but after Generative Fill dropped, we… pic.twitter.com/C7WHuK4u2R
— Howard Pinsky (@Pinsky) July 23, 2024
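If you’re curious about the mechanics, here’s a tiny sketch (in Python, using the Pillow library; the filenames are placeholders) of why a semi-opaque selection blends rather than replaces: the generated pixels are composited through the selection’s opacity, so they inherit color and texture from the underlying scene.

```python
# Why a semi-opaque selection softens Generative Fill results: the fill is
# composited through the selection's opacity instead of replacing pixels
# outright. Illustrative Pillow sketch; the file names are placeholders.
from PIL import Image

base = Image.open("reef_photo.png").convert("RGBA")      # original image
fill = Image.open("generated_fish.png").convert("RGBA")  # generated content (same size)

# A 50%-opaque mask (128 of 255) -- the equivalent of painting a selection
# at 50% opacity in Quick Mask mode.
mask = Image.new("L", base.size, 128)

# Everywhere the mask is semi-opaque, half of the original pixels survive,
# so the generated fish pick up the water's color and texture.
blended = Image.composite(fill, base, mask)
blended.save("blended_result.png")
```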
There’s been a firestorm this week about the terms of service that my old home team put forward, fueled (as such things have been since time immemorial) by a lot of misunderstanding & fear. Fortunately the company has been working to clarify what’s really going on.
Sorry for delay on this. Info here, including what actually changed in the TOS (not much), as well as what Adobe can / cannot do with your content. https://t.co/LZFkDXrmep
— Mike Chambers (@mesh) June 6, 2024
I did at least find this bit of parody amusing:
Huge if true. https://t.co/AFK8nyhrDg
— John Nack (@jnack) June 6, 2024
Adobe’s CEO (duh :-)) sat down with Nilay Patel for an in-depth interview. Here are some of the key points, as summarized by ChatGPT:
Check out this nice little tutorial from Howard Pinsky:
You love to see it—available now via the beta (which you can download via that little “CC” icon you generally ignore in your menubar :-)):
Just released! Don’t just edit images in #Photoshop. Now Ps can make them with #adobefirefly integrated! #adobexcommunity pic.twitter.com/VL33b58QY0
— Paul Trani (@paultrani) April 23, 2024
Also, props to Paul on his HELVETICA shirt, which reminds me of my old METADATA beauty.
Removing objects will be huge, and Generative Extend—which can add a couple of seconds to clips to ease transitions—seems handy. Check out what’s in the works:
Or… something like that. Whatever the case, I had fun popping our little Lego family photo (captured this weekend at Yosemite Valley’s iconic Tunnel View viewpoint) into Photoshop, selecting part of the excessively large rock wall, and letting Generative Fill give me some more nature. Click or tap (if needed) to see the before/after animation:
Generative Fill, remaining awesome for family photos. From Yosemite yesterday: pic.twitter.com/GtRP0UCaV6
— John Nack (@jnack) April 1, 2024
Hey, I know what you know (or quite possibly less :-)), but this demo (which for some reason includes Shaq) looks pretty cool:
From the description:
Elevate your data storytelling with #ProjectInfographIt, a game-changing solution leveraging Adobe Firefly generative AI. Simplify the infographic creation process by instantly generating design elements tailored to your key messages and data. With intuitive features for color palettes, chart types, graphics, and animations, effortlessly transform complex insights into visually stunning infographics.
Man, I can’t tell you how long I wanted folks to get this tech into their hands, and I’m excited that you can finally take it for a spin. Here are some great examples (from a thread by Min Choi, which contains more) showing how people are putting it into action:
Reinterpreted kids’ drawings:
Adobe Firefly structure reference:
I created these images using my kid’s art as reference + text prompts like these:
– red aeroplane toy made with felt, appliqué stitch, clouds, blue background
– broken ship, flowing paint from a palette of yellow and green colors. Kept the… https://t.co/TMofxYx8E8 pic.twitter.com/nZpG3MnnZg
— Anu Aakash (@anukaakash) March 30, 2024
More demanding sketch-to-image:
Honestly, #AdobeFirefly ‘s new structure reference feature is super useful for going from a sketch to a realistic rendering. pic.twitter.com/v0HCCsTmZY
— Pierrick Chevallier | IA (@CharaspowerAI) March 29, 2024
Stylized Bitmoji:
Teachers!
You can also customize your @Bitmoji with @Adobe Firefly! #ai #aiforeducation #AdobeFirefly pic.twitter.com/WGV6oNvrwS
— Andrew Davies, M.Ed. (@EduTechWizard) March 30, 2024
I’m delighted to see that the longstanding #1 user request for Firefly—namely the ability to upload an image to guide the structure of a generated image—has now arrived:
Good morning!
I’m excited to share with you a new tool on Adobe Firefly website called Structure Reference. I spent whole weekend creating art with it and find this new feature the most inspiring for my art.You can draw a form (or use a photo or your sketch) and reach… pic.twitter.com/9icx1iJoVJ
— Kris Kashtanova (@icreatelife) March 26, 2024
This nicely complements the extremely popular style-matching capability we enabled back in October. You can check out details of how it works, as well as a look at the UI (below)—plus my first creation made using the new tech ;-).
Not having a spare $3500 burning a hole in my pocket, I’ve yet to take this for a spin myself, but I’m happy to see it. Per the Verge:
The interface of the Firefly visionOS app should be familiar to anyone who’s already used the web-based version of the tool — users just need to enter a text description within the prompt box at the bottom and hit “generate.” This will then spit out four different images that can be dragged out of the main app window and placed around the home like virtual posters or prints. […]
Meanwhile, we also now have a better look at the native Adobe Lightroom photo editing app that was mentioned back when the Apple Vision Pro was announced last June. The visionOS Lightroom experience is similar to that of the iPad version, with a cleaner, simplified interface that should be easier to navigate with hand gestures than the more feature-laden desktop software.
I’m delighted to say that firefly.adobe.com now supports a live stream of community-created generative recipes. You can share your own simply by creating images via the Text to Image module, then clicking the share button. I’m especially pleased that if you use Generative Match to choose a stylization guide image, that image will be included in the recipe for anyone to use.
Thanks to Jackson Beaman & crew for putting together a great event yesterday in SF. I joined him, KD Deshpande (founder of Simplified), and Sofiia Shvets (founder of Let’s Enhance & Claid.ai) for a 20-minute panel discussion (which starts at 3:32:03 or so, in case the embedded version doesn’t jump you to the proper spot) about creating production-ready imagery using AI. Enjoy, and please let me know if you have any comments or questions!
Helping discover Dave Werner & bring him into Adobe remains one of my favorite accomplishments at the company. He continues to do great work in designing characters as well as the tools that can bring them to life. Watch how he combines Firefly with Adobe Character Animator to create & animate a stylish tiger:
Adobe Firefly’s text to image feature lets you generate imaginative characters and assets with AI. But what if you want to turn them into animated characters with performance capture and control over elements like arm movements, pupils, talking, and more? In this tutorial, we’ll walk through the process of taking a static Adobe Firefly character and turning it into an animated puppet using Adobe Photoshop or Illustrator plus Character Animator.
Nice props from The Futurum Group:
Here is why: Adobe Firefly is the most commercially successful generative AI product ever launched. Since it was introduced in March in beta and made generally available in June, at last count in October, Firefly users have generated more than 3 billion images. Adobe says Firefly has attracted a significant number of new Adobe users, making it hard to imagine that Firefly is not aiding Adobe’s bottom line.
It’s always great to learn from the master—especially when he’s making “spaghetti western” literal!
If you’ve been sleeping on Text to Vector, check out this handful of quick how-to vids that’ll get you up to speed:
Check out this quick demo of Illustrator’s new text-to-vector & mockup tools working together:
AI generated Logos onto any surface. pic.twitter.com/qY4tEkVK0Q
— Riley Brown (@rileybrown_ai) October 29, 2023
Matthew Vandeputte used Generative Fill, Content-Aware Fill, or a mix of both to make these rad little animations in After Effects:
[Via Tom Hightower]
I got to spend time Friday live streaming with the Firefly community, showing off some of the new MAX announcements & talking about some of what might be coming down the line. I hope you enjoy it, and I’d welcome any feedback on this session or on what you’d like to see in the future.
Back in the day, I dreaded demoing Photoshop ahead of the After Effects team: we’d do something cool, and they’d make that cool thing move. I hear echoes of that in Project Fast Fill—generative fill for video.
Project Fast Fill harnesses Generative Fill, powered by Adobe Firefly, to bring generative AI technology into video editing applications. This makes it easy for users to use simple text prompts to perform texture replacement in videos, even for complex surfaces and varying light conditions. Users can use this tool to edit an object on a single frame and that edit will automatically propagate into the rest of the video’s frames, saving video editors a significant amount of texture editing time.
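Adobe hasn’t detailed how the propagation works under the hood, but here’s a rough, purely illustrative sketch of the general idea (track motion between frames, then carry the edited pixels along) using OpenCV optical flow. It’s a stand-in for explanation’s sake, not Fast Fill’s actual implementation:

```python
# Illustrative sketch of single-frame edit propagation: estimate dense optical
# flow between consecutive frames, then warp the edited patch along with the
# motion. Not Adobe's Fast Fill implementation -- just the general concept.
import cv2
import numpy as np

def propagate_edit(frames, edited_first_frame):
    """frames: list of BGR images; edited_first_frame: frames[0] with the edit applied."""
    results = [edited_first_frame]
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    # Pixels that differ between the original and edited first frame = the edit.
    edit_mask = (cv2.absdiff(frames[0], edited_first_frame).sum(axis=2) > 0).astype(np.float32)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Approximate backward warp: sample the previous result where the
        # flow field says each new pixel came from.
        map_x = (grid_x - flow[..., 0]).astype(np.float32)
        map_y = (grid_y - flow[..., 1]).astype(np.float32)
        warped_edit = cv2.remap(results[-1], map_x, map_y, cv2.INTER_LINEAR)
        warped_mask = cv2.remap(edit_mask, map_x, map_y, cv2.INTER_LINEAR)[..., None]
        results.append((warped_mask * warped_edit + (1 - warped_mask) * frame).astype(np.uint8))
        prev_gray = gray
    return results
```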
Check it out:
Roughly 1,000 years ago (i.e. this past April!), I gave an early sneak peek at the 3D-to-image work we’ve been doing around Firefly. Now at MAX, my teammate Yi Zhou has demonstrated some additional ways we could put the core tech to work—by adding posable humans to the scene.
Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.
Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.
Check it out:
Here’s a fun little tutorial from my teammate Kris on using reference images to style your prompt (in this case, her pet turtle Pablo). And meanwhile, here’s a little gallery of good style reference images (courtesy of my fellow PM Lee) that you’re welcome to download and use in your creations.
Tutorial: How to generate images in different styles using reference image library in Adobe Firefly
It’s a new feature! Write a prompt and experiment with styles from the library or upload your own as a reference.
#AdobeMAX #CommunityxAdobeTry here:https://t.co/c9g7CGiuBU pic.twitter.com/cSt3cmMNR3
— Kris Kashtanova (@icreatelife) October 12, 2023
I’m delighted to say that the first Firefly Vector model is now available (as a beta—feedback welcome!) in Illustrator. Just download your copy to get started. Here’s a quick tour:
And more generally, it’s just one of numerous enhancements now landing in Illustrator. Check ’em out:
Hey everyone—I’m just back from Adobe MAX, and hopefully my blog is back from some WordPress database shenanigans that have kept me from posting.
I don’t know what the site will enable right now, so I’ll start by simply pointing to a great 30-second tour of my favorite new feature in Firefly, Generative Match. It enables you to upload your own image as a style reference, or to pick one that Adobe provides, and mix it together with your prompt and other parameters.
You can then optionally share the resulting recipe (via “Copy link” in the Share menu that appears over results), complete with the image ingredient; try this example. This goes well beyond what one can do with just copying/pasting a prompt, and as we introduce more multimodal inputs (3D object, sketching, etc.), it’ll become all the more powerful.
All images below were generated with the following prompt: a studio portrait of a fluffy llama, hyperrealistic, shot on a white cyclorama + various style images:
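Adobe hasn’t published Generative Match’s internals, but if you’d like to play with the same idea in open-source land, image-prompt adapters are a rough analogue. Here’s a hedged sketch using the IP-Adapter support in Hugging Face diffusers (the checkpoints are public; the style-image path is a placeholder):

```python
# Rough open-source analogue of mixing a text prompt with a style image,
# via IP-Adapter in Hugging Face diffusers. Not Adobe's implementation.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # how strongly the style image steers the result

style = load_image("style_reference.png")  # placeholder path
image = pipe(
    prompt="a studio portrait of a fluffy llama, hyperrealistic, "
           "shot on a white cyclorama",
    ip_adapter_image=style,
    num_inference_steps=30,
).images[0]
image.save("llama_styled.png")
```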
Powered by Firefly, in development now:
In case you missed any or all of last week’s news, here’s a quick recap:
Firefly-powered workflows that have so far been limited to the beta versions of Adobe’s apps — like Illustrator’s vector recoloring, Express text-to-image effects, and Photoshop’s Generative Fill tools — are now generally available to most users (though there are some regional restrictions in countries with strict AI laws like China).
Adobe is also launching a standalone Firefly web app that will allow users to explore some of its generative capabilities without subscribing to specific Adobe Creative Suite applications. Adobe Express Premium and the Firefly web app will be included as part of a paid Creative Cloud subscription plan.
Specifically around credits:
To help manage the compute demand (and the costs associated with generative AI), Adobe is also introducing a new credit-based system that users can “cash in” to access the fastest Firefly-powered workflows. The Firefly web app, Express Premium, and Creative Cloud paid plans will include a monthly allocation of Generative Credits starting today, with all-app Creative Cloud subscribers receiving 1,000 credits per month.
Users can still generate Firefly content if they exceed their credit limit, though the experience will be slower. Free plans for supported apps will also include a credit allocation (subject to the app), but this is a hard limit and will require customers to purchase additional credits if they’re used up before the monthly reset. Customers can buy additional Firefly Generative Credit subscription packs starting at $4.99.
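In pseudocode terms, the policy described above boils down to something like this toy model (purely illustrative; the numbers come from the quoted article):

```python
# Toy model of the generative-credit behavior described above. Purely
# illustrative; numbers and policy details are from the quoted article.
from dataclasses import dataclass

@dataclass
class Plan:
    monthly_credits: int
    hard_limit: bool  # free plans: True (blocked when out); paid plans: False (slower)

def generate(plan: Plan, credits_used: int) -> str:
    if credits_used < plan.monthly_credits:
        return "fast generation"
    if plan.hard_limit:
        return "blocked: buy a credit pack ($4.99+) or wait for the monthly reset"
    return "slower generation (paid plans keep working past their allocation)"

all_apps = Plan(monthly_credits=1000, hard_limit=False)
print(generate(all_apps, credits_used=1000))  # -> slower generation ...
```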
None of this AI magic would be possible without beautiful source materials from creative people, and in a new blog post and FAQ, the Adobe Stock team provides some new info:
All eligible Adobe Stock contributors with photos, vectors or illustrations in the standard and Premium collection, whose content was used to train the first commercial Firefly model will receive a Firefly bonus. This initial bonus, which will be different for each contributor, is based on the all-time total number of approved images submitted to Adobe Stock that were used for Firefly training, and the number of licenses that those images generated in the 12-month period from June 3rd, 2022, to June 2nd, 2023. The bonus is planned to pay out once a year and is currently weighted towards number of licenses issued for an image, which we consider a useful proxy for the demand and usefulness of those images. The next Firefly Bonus is planned for 2024 for new content used for training Firefly.
They’ve also provided info on what’s permissible around submitting AI-generated content:
With Adobe Firefly now commercially available, Firefly-generated works that meet our generative AI submission guidelines will now be eligible for submission to Adobe Stock. Given the proliferation of generative AI in tools like Photoshop, and many more tools and cameras to come, we anticipate that assets in the future will contain some number of generated pixels and we want to set up Adobe Stock for the future while protecting artists. We are increasing our moderation capabilities and systems to be more effective at preventing the use of creators’ names as prompts with a focus on protecting creators’ IP. Contributors who submit content that infringes or violates the IP rights of other creators will be removed from Adobe Stock.
I had fun catching up with folks at the AI Salon (see background) a couple of weeks ago, talking about the past, present, and future of Adobe Firefly. If that’s up your alley, here’s my talk (cued up to my starting point). Note that the content about watermarks & stock contributors predates last week’s “ready for commercial use” announcements.
I’m so pleased that we’ve now shipped a feature I’ve been nurturing since the launch of Firefly six years—er, months 🤪—ago.
It enables all kinds of fun visual ping-pong, like riffing on sloth politicians:
Check out this quick tutorial from Kris Kashtanova:
Tutorial: How to make a lo-fi animation with new Adobe Express!
Adobe Express is available to everyone today and I made this super short tutorial for you of what’s possible. It has GenAI, background remove, making cool animations and more.
Get it here: https://t.co/PovZvcmDqL pic.twitter.com/jG3hAYoGKk
— Kris Kashtanova (@icreatelife) August 16, 2023
You know you’ve entered the cultural conversation when things like this happen. I’m reminded of the first Snapchat filters inspiring real-world Halloween costumes showing puking rainbows & more.
Who will move on to the FINAL?! 🤩 pic.twitter.com/22dHaiefyl
— FOX Soccer (@FOXSoccer) August 15, 2023
Good stuff just shipped on firefly.adobe.com:
You may need to perform a hard refresh on your browser to see the changes. Cmd (Ctrl) + Shift + R.
If anything looks amiss, or if there’s more you’d like to see changed, please let us know!
Speaking of using Generative Fill to build up areas with missing detail, check out this 30-second demo of old photo restoration:
Restoring old photos using Generative Fill in @Photoshop?! 🤯 pic.twitter.com/UlXj5paDTD
— Howard Pinsky (@Pinsky) August 3, 2023
And though it’s not presently available in Photoshop, check out this use of ControlNet to revive an old family photo:
ControlNet did a good job rejuvenating a stained blurry 70 year old photo of my 90 year old grandparents.
by u/prean625 in StableDiffusion
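That result came from a Stable Diffusion + ControlNet workflow rather than anything in Photoshop. If you’d like to try something similar, here’s a minimal sketch using Hugging Face diffusers: edges extracted from the damaged scan constrain the composition while the model re-renders clean detail. (The checkpoints are public; the scan path is a placeholder.)

```python
# Minimal ControlNet photo-revival sketch with Hugging Face diffusers: Canny
# edges from the old scan pin down the layout; the model regenerates detail.
# Illustrative only -- not the exact workflow from the Reddit post.
import cv2
import numpy as np
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image
from PIL import Image

old_photo = load_image("grandparents_scan.png")  # placeholder path
gray = cv2.cvtColor(np.array(old_photo), cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

result = pipe(
    "restored vintage portrait of an elderly couple, sharp focus, "
    "natural skin tones, rich photographic detail",
    image=control, num_inference_steps=30,
).images[0]
result.save("restored.png")
```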
I found PiXimperfect’s clever use of Quick Mask + GenFill interesting. It’s basically “Select Subject -> Quick Mask -> paint over hair edges -> generate,” filling in areas where the original selection/removal process left something to be desired.
I had a ball chatting with members of the Firefly community, including our new evangelist Kris Kashtanova & O.G. designer/evangelist Rufus Deuchler. It was a really energetic & wide-ranging conversation, and if you’d like to check it out, here ya go:
Here’s a recording of today’s Space with @rufusd and @jnack from @Adobe
We’ll definitely do this again. Thank you all for joining us today.
— Kris Kashtanova (@icreatelife) August 3, 2023
I dig this simple, creative application from Nemanja Sekulic:
It’s here (in your beta copy of Photoshop, same as Generative Fill), and it works pretty much exactly as I think you’d expect: drag out crop handles, then optionally specify what you want placed into the expanded region.
In addition:
Today, we’re excited to announce that Firefly-powered features in Photoshop (beta) will now support text prompts in 100+ languages — enabling users around the world to bring their creative vision to life with text prompts in the language they prefer.
What’s real, and what’s Generative Fill? Watch as photographer Peter McKinnon tries to tell the difference in real time!
What’s a great creative challenge?
What fun games make you feel more connected with friends?
What’s the “Why” (not just the “What” & “How”) at the heart of generative imaging?
These are some of the questions we’ve been asking ourselves as we seek out some delightful, low-friction ways to get folks creating & growing their skills. To that end I had a ball joining my teammates Candice, Beth Anne, and Gus for a Firefly livestream a couple of weeks ago, engaging in a good chat with the audience as we showed off some of the weirder & more experimental ideas we’ve had. I’ve cued up this vid to roughly the part where we get into those ideas, and I’d love to hear your thoughts on those—or really anything in the whole conversation. TIA!
Check it out!
Details, if you’re interested:
What’s new with Adobe Firefly?
Firefly can now support prompts in over 100 languages. Also, the Firefly website is now available in Japanese, French, German, Spanish, and Brazilian Portuguese, with additional languages to come.
How are the translations of prompts done?
Support for over 100 languages is in beta and uses machine translation to English provided by Microsoft Translator. This means that translations are done by computers and not manually by humans.
What if I see errors in translations or my prompt isn’t accurately translated?
Because Firefly uses machine translation, and given the nuances of each language, it’s possible certain generations based on translated prompts may be inaccurate or unexpected. You can report negative translation results using the Report tool available in every image.
Can I type in a prompt in another language in the Adobe Express, Photoshop, and Illustrator beta apps?
Not at this time, though this capability will be coming to those apps in the future.
Which languages will the Firefly site be in on 7/12?
We are localizing the Firefly website into Japanese, French, German, Spanish, Brazilian Portuguese and expanding to others on a rolling basis.
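For the technically curious: Microsoft Translator (the service named in the FAQ above) has a public REST API, and a prompt-translation call looks roughly like the sketch below. The key and region are placeholders, and Adobe’s actual server-side integration surely differs.

```python
# Illustrative call to the public Microsoft Translator v3 REST API, the
# service the FAQ says powers prompt translation. Key and region are
# placeholders; Adobe's real server-side integration surely differs.
import requests

def translate_prompt_to_english(prompt: str, source_lang: str) -> str:
    resp = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "from": source_lang, "to": "en"},
        headers={
            "Ocp-Apim-Subscription-Key": "YOUR_KEY_HERE",        # placeholder
            "Ocp-Apim-Subscription-Region": "YOUR_REGION_HERE",  # placeholder
            "Content-Type": "application/json",
        },
        json=[{"Text": prompt}],
    )
    resp.raise_for_status()
    return resp.json()[0]["translations"][0]["text"]

print(translate_prompt_to_english("un lama moelleux en studio", "fr"))
# -> roughly "a fluffy llama in a studio"
```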
On the extremely off chance that you (or more likely, someone you know) is new to Firefly, check out this handy intro speed run from evangelist Paul Trani:
I had a ball chatting last week with Farhad & Faraz on the Bad Decisions Podcast. (My worst decision was to so fully embrace vacation that I spaced on when we were supposed to chat, leaving me to scramble from the dog park & go tear-assing home to hop on the chat. Hence my terrible hair, which Farhad more than offset with his. 😌) We had a fast-paced, wide-ranging conversation, and I hope you find it valuable. As always I’d love to hear any & all feedback on what we’re doing & what you need.
If you enjoyed yesterday’s session with Tomasz & Lisa, I think you’ll really dig this one as well:
Join Lisa Carney and Jesús Ramirez as they walk you through their real world projects and how they use Generative AI tools to help their workflow. Join them as they show you how they make revisions from client feedback, create different formats from a single piece, and collaborate together using Creative Cloud Libraries. Stay tuned to check out some of their work from real life TV shows!
Guest Lisa Carney is a photographer and photo retoucher based in LA. Host Jesús Ramirez is a San Francisco Bay Area Graphic Designer and the founder of the Photoshop Training Channel on YouTube.
Tomasz Opasinski & Lisa Carney are *legit* Hollywood photo compositors. In Friday’s Adobe Live session, they showed how they use Firefly to design movie posters.
Interestingly, easily the first half had little if anything to do with AI or other technology per se, and everything to do with the design language of posters (e.g. comedies being set on white, Japanese posters emphasizing text)—which I found just as intriguing.