I had a ball chatting with members of the Firefly community, including our new evangelist Kris Kashtanova & O.G. designer/evangelist Rufus Deuchler. It was a really energetic & wide-ranging conversation, and if you’d like to check it out, here ya go:
It’s here (in your beta copy of Photoshop, same as Generative Fill), and it works pretty much exactly as I think you’d expect: drag out crop handles, then optionally specify what you want placed into the expanded region.
Today, we are evolving Photoshop's Generative AI capabilities with Generative Expand. There is no need for selections; use the crop tool to expand the canvas, hit enter, and #AdobeFirefly will do the rest. Update or download the Photoshop (beta) now: https://t.co/PxGLa7J1Fq pic.twitter.com/eUbwhOOa2c
Today, we’re excited to announce that Firefly-powered features in Photoshop (beta) will now support text prompts in 100+ languages — enabling users around the world to bring their creative vision to life with text prompts in the language they prefer.
It’s 2023, and you can make all of this with your GD telephone. And just as amazingly, a year or two from now, we’ll look back on feeling this way & find it quaint.
GN 💫 I got too excited and edited a first video draft on my phone in my trusty splice app 👀
What’s a great creative challenge? What fun games make you feel more connected with friends? What’s the “Why” (not just the “What” & “How”) at the heart of generative imaging?
These are some of the questions we’ve been asking ourselves as we seek out some delightful, low-friction ways to get folks creating & growing their skills. To that end I had a ball joining my teammates Candice, Beth Anne, and Gus for a Firefly livestream a couple of weeks ago, engaging in a good chat with the audience as we showed off some of the weirder & more experimental ideas we’ve had. I’ve cued up this vid to roughly the part where we get into those ideas, and I’d love to hear your thoughts on those—or really anything in the whole conversation. TIA!
10,000x smaller & 25x faster? My old Google teammates & their collaborators, who changed the generative game last year by enabling custom model training, are now proposing to upend things further by enabling training via a single image—and massively faster, too. Check out this thread:
In the corner of my eye, our very stable (golden)doodle is happily sawing logs while I share this link to Stable Doodle, a fun little drawing tool from Stability AI, the folks behind Stable Diffusion:
Firefly can now support prompts in over 100 languages. Also, the Firefly website is now available in Japanese, French, German, Spanish, and Brazilian Portuguese, with additional languages to come.
How are the translations of prompts done?
Support for over 100 languages is in beta and uses machine translation to English provided by Microsoft Translator. This means that translations are done by computers and not manually by humans.
What if I see errors in translations or my prompt isn’t accurately translated?
Because Firefly uses machine translation, and given the nuances of each language, it’s possible certain generations based on translated prompts may be inaccurate or unexpected. You can report negative translation results using the Report tool available in every image.
Can I type in a prompt in another language in the Adobe Express, Photoshop, and Illustrator beta apps?
Not at this time, though this capability will be coming to those apps in the future.
Which languages will the Firefly site be in on 7/12?
We are localizing the Firefly website into Japanese, French, German, Spanish, Brazilian Portuguese and expanding to others on a rolling basis.
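For the curious, the machine-translation step described in the FAQ above is conceptually simple. Here's a minimal sketch (my own illustration, not Adobe's implementation) of what translating a prompt to English with the Microsoft Translator REST API can look like; the key and region values are placeholders you'd supply from your own Azure resource:

```python
import requests

TRANSLATOR_KEY = "<your-azure-translator-key>"   # placeholder
TRANSLATOR_REGION = "<your-resource-region>"     # placeholder

def translate_prompt_to_english(prompt: str) -> str:
    """Machine-translate a prompt to English via Microsoft Translator v3."""
    resp = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "to": "en"},
        headers={
            "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
            "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": prompt}],
    )
    resp.raise_for_status()
    return resp.json()[0]["translations"][0]["text"]

# e.g. a French prompt becomes English text before image generation:
# translate_prompt_to_english("un phare au bord d'une mer orageuse")
```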
On the extremely off chance that you're new to Firefly (or, more likely, someone you know is), check out this handy intro speed run from evangelist Paul Trani:
I had a ball chatting last week with Farhad & Faraz on the Bad Decisions Podcast. (My worst decision was to so fully embrace vacation that I spaced on when we were supposed to chat, leaving me to scramble from the dog park & go tear-assing home to hop on the chat. Hence my terrible hair, which Farhad more than offset with his. 😌) We had a fast-paced, wide-ranging conversation, and I hope you find it valuable. As always I’d love to hear any & all feedback on what we’re doing & what you need.
Join Lisa Carney and Jesús Ramirez as they walk you through their real-world projects and how they use Generative AI tools to help their workflow. Join them as they show you how they make revisions from client feedback, create different formats from a single piece, and collaborate using Creative Cloud Libraries. Stay tuned to check out some of their work from real-life TV shows!
Guest Lisa Carney is a photographer and photo retoucher based in LA. Host Jesús Ramirez is a San Francisco Bay Area Graphic Designer and the founder of the Photoshop Training Channel on YouTube.
Tomasz Opasinski & Lisa Carney are *legit* Hollywood photo compositors, and in Friday's Adobe Live session they showed how they use Firefly to design movie posters.
Interestingly, easily the first half had little if anything to do with AI or other technology per se, and everything to do with the design language of posters (e.g. comedies being set on white, Japanese posters emphasizing text)—which I found just as intriguing.
Draw a rough outline with the brush tool and use different colors for all parts.
Go to Quick Mask Mode (Q).
Go to Edit > Fill and choose a 70% grey fill. The lower this percentage, the more the end result will resemble your original sketch (i.e., increasingly cartoon-like; there's a conceptual blending example after these steps).
Exit Quick Mask Mode (Q). You now have a 70% opaque selection.
Click Generative Fill and type your prompt. Something like: summer grassland landscape with tree (first example) or river landscape with mountains (second example). You can also keep it really simple; just play with it!
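To make that opacity behavior concrete, here's a tiny conceptual example (my own illustration, not Photoshop's actual internals): a partially opaque selection acts roughly like an alpha mask, so the generated pixels get blended with your original sketch in proportion to the selection's opacity.

```python
import numpy as np

def blend(sketch: np.ndarray, generated: np.ndarray, opacity: float) -> np.ndarray:
    """Conceptual stand-in for filling through a partially opaque selection.

    opacity=1.0 -> fully generated result; opacity=0.3 -> mostly your sketch.
    """
    return opacity * generated + (1.0 - opacity) * sketch

# Toy example: a flat grey "sketch" and a bright "generated" patch.
sketch = np.full((4, 4, 3), 0.5)
generated = np.full((4, 4, 3), 1.0)

print(blend(sketch, generated, opacity=0.7)[0, 0])  # [0.85 0.85 0.85] -> mostly generated
print(blend(sketch, generated, opacity=0.3)[0, 0])  # [0.65 0.65 0.65] -> closer to the sketch
```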
In my time at Google, we tried and failed a lot to make virtual try-on happen using AR. It’s extremely hard to…
measure bodies (to make buying decisions based on fit)
render virtual clothing accurately (placing virtual garments over real clothing, or getting people to disrobe, which is even harder, plus simulating materials in real time)
get a sizable corpus of 3D assets (in a high-volume, low-margin industry)
Outside of a few limited pockets (trying on makeup, glasses, and shoes—all for style, not for fit), I haven’t seen anyone (Amazon, Snap, etc.) crack the code here. Researcher Ira Kemelmacher-Shlizerman (who last I heard was working on virtual mirrors, possibly leveraging Google’s Stargate tech) acknowledges this:
Current techniques like geometric warping can cut-and-paste and then deform a clothing image to fit a silhouette. Even so, the final images never quite hit the mark: Clothes don’t realistically adapt to the body, and they have visual defects like misplaced folds that make garments look misshapen and unnatural.
So, it’s interesting to see Google trying again (“Try on clothes with generative AI”):
This week we introduced an AI-powered virtual try-on feature that uses the Google Shopping Graph to show you how clothing will look on a diverse set of real models.
Our new guided refinements can help U.S. shoppers fine-tune products until you find the perfect piece. Thanks to machine learning and new visual matching algorithms, you can refine using inputs like color, style and pattern.
Inspired by Imagen, we decided to tackle VTO using diffusion — but with a twist. Instead of using text as input during diffusion, we use a pair of images: one of a garment and another of a person. Each image is sent to its own neural network (a U-net) and shares information with each other in a process called “cross-attention” to generate the output: a photorealistic image of the person wearing the garment. This combination of image-based diffusion and cross-attention make up our new AI model.
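To make the "two U-Nets sharing information via cross-attention" idea a bit more concrete, here's a rough, self-contained PyTorch sketch. It's my own illustration under simplified assumptions (flattened feature tokens, a single attention block), not Google's actual model:

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Toy cross-attention: person features attend to garment features."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, person_tokens: torch.Tensor, garment_tokens: torch.Tensor) -> torch.Tensor:
        # The person branch queries the garment branch, pulling in information
        # about how the garment should appear on the body.
        attended, _ = self.attn(
            query=person_tokens, key=garment_tokens, value=garment_tokens
        )
        return self.norm(person_tokens + attended)

# Toy usage: 1 image, 64 spatial tokens per branch, 256-dim features
# (stand-ins for intermediate U-Net feature maps).
person = torch.randn(1, 64, 256)
garment = torch.randn(1, 64, 256)
fused = CrossAttentionBlock()(person, garment)
print(fused.shape)  # torch.Size([1, 64, 256])
```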
They note that “We don’t promise fit and for now focus only on visualization of the try on. Finally, this work focused on upper body clothing.”
It’s a bit hard to find exactly where one can try out the experience. They write:
Starting today, U.S. shoppers can virtually try on women’s tops from brands across Google, including Anthropologie, Everlane, H&M and LOFT. Just tap products with the “Try On” badge on Search and select the model that resonates most with you.
Paul is continuing to explore what’s possible by generating short clips using Runway’s Gen-2 text-to-video model. Check out the melancholy existential musings of “Thank You For Not Answering”:
And then, to entirely clear your mental palate, there’s the just deeply insane TRUCX!
TRUCX! DO YOU LIKE TRUX! Do you like to see trucks smash together like a boom boom?
Adobe Inc. said on Thursday it will offer Firefly, its artificial intelligence tool for generating images, to its large business customers, with financial indemnity for copyright challenges involving content made with the tools.
In an effort to give those customers confidence, Adobe said it will offer indemnification for images created with the service, though the company did not give financial or legal details of how the program will work.
“We financially are standing behind all of the content that is produced by Firefly for use either internally or externally by our customers,” Ashley Still, senior vice president of digital media at Adobe, told Reuters.
I love seeing Michael Tanzillo‘s Illustrator 3D -> Adobe Stager -> Photoshop workflow for making and enhancing the adorable “Little Miss Sparkle Bao Bao”:
My teammates Danielle Morimoto & Tomasz Opasinski are accomplished artists who recently offered a deep dive on creating serious, ambitious work (not just one-and-done prompt generations) using Adobe Firefly. Check it out:
Explore the practical benefits of using Firefly in real-world projects with Danielle & Tomasz. Today, they’ll walk through the poster design process in Photoshop using prompts generated in Firefly. Tune into the live stream and join them as they discuss how presenting more substantial visuals to clients goes beyond simple sketches, and how this creative process could evolve in the future. Get ready to unlock new possibilities of personalization in your work, reinvent yourself as an artist or designer, and achieve what was once unimaginable. Don’t miss this opportunity to level up your creative journey and participate in this inspiring session!
When you see only one set of footprints on the sand… that’s when Russell GenFilled you out. 😅
On a chilly morning two years ago, I trekked out to the sand dunes in Death Valley to help (or at least observe) Russell on a dawn photoshoot with some amazing performers and costumes. Here he takes the imagery farther using Generative Fill in Photoshop:
On an adjacent morning, we made our way to Zabriskie Point for another shoot. Here he shows how to remove wrinkles and enhance fabric using the new tech:
And lastly—no anecdote here—he shows some cool non-photographic applications of artwork extension:
I owe a lot of my career to Adobe’s O.G. creative director—one of the four names on the Photoshop 1.0 splash screen—and seeing his starry-eyed exuberance around generative imaging has been one of my absolute favorite things over the past year. Now that Generative Fill has landed in Photoshop, Russell’s doing Russell things, sharing a bunch of great new tutorials. I’ll start by sharing two:
Check out his foundational Introduction to Generative Fill:
And then peep some tips specifically on getting desired shapes using selections:
Check out this new course from longtime Adobe expert Jan Kabili:
Adobe Firefly is an exciting new generative AI imaging tool from Adobe. With Firefly, you can create unique images and text effects by typing text prompts and choosing from a variety of style inputs. In this course, imaging instructor and Adobe trainer Jan Kabili introduces Firefly. She explains what Firefly can offer to your creative workflow, and what makes it unique in the generative AI field. She demonstrates how to generate images from prompts, built-in styles, and reference images, and shares tips for generating one-of-a-kind text effects. Finally, Jan shows you how to use images generated by Firefly to create a unique composite in Photoshop.
Break-A-Scene seems incredibly cool, promising to extract objects from scenes, then remix them into other scenes while keeping them editable and preserving their appearances. Check out the 2-minute overview video:
Here’s me, talking fast about anything & everything related to Firefly and possibilities around creative tools. Give ‘er a listen if you’re interested (or, perhaps, are just suffering from insomnia 😌):
Had an awesome time talking #AdobeFirefly & the future of creative tools with @altryne & friends. My section starts just after the 1-hour mark. I'd love to hear what you think! https://t.co/YnPNrMe1UM
There’s a roughly zero percent chance that you both 1) still find this blog & 2) haven’t already seen all the Generative Fill coverage from our launch yesterday 🎉. I’ll have a lot more to say about that in the future, but for now, you can check out the module right now and get a quick tour here:
Welcome to AI Filmmaking from Curious Refuge. This is the world's first online course showing you how to use AI to create films. Our training will cover various aspects of the production process from prompt engineering to animation and movement. We'd love for you to join our course and unlock your inner artist. $399 (regularly $499) per artist – Pre-Sale Special
With Sketch mode, we’re introducing a new palette of tools and guides that let you start taking control of your skybox generations. Want a castle in the distance? Sketch it out, specify a castle in your prompt and hit generate to watch as your scribbles influence your skybox. If you don’t get what you want the first time, your sketch sticks around to try a new style or prompt from – or switch to Remix mode to give that castle a new look!
I had a ball catching up with my TikTok-rockin’ Google 3D veteran friend Bilawal Sidhu on Twitter yesterday. We (okay, mostly I) talked for the better part of 2.5 hours (!), which you can check out here if you’d like. I’ll investigate whether there’s a way to download, transcribe, and summarize our chat. 🤖 In the meantime, I’d love to hear any comments it brings to mind.
Note that this is just a first step: favorites are stored locally in your browser, not (yet) synced with the cloud. We want to build from here to enable really easy sharing & discovery of great presets. Stay tuned, and please let us know what you think!
Deke kindly & wildly overstates the scope of my role at Adobe.
But hey, what the hell, I’ll take it!
I had a lot of fun chatting with my old friend Deke McClelland, getting to show off a new possible module (stylizing vectors), demoing 3D-to-image, and more. Here, have at it if you must. 😅
Longtime Adobe vet Christian Cantrell continues to build out his Concept.art startup while extending Photoshop via GPT and generative imaging. I can't keep up with his daily progress on Twitter (recommendation: just go follow him there!), but check out some quick recent demos:
Now this is exactly the kind of thing I want to help bring into the world—not just because it’s delightful unto itself, but because it shows how AI-enabled tools can make the impossible possible, rather than displacing or diminishing artists’ work. It’s not like in some earlier world a talented team would’ve made this all by hand: 99% likely, it simply wouldn’t exist at all.
The 1% exception is exemplified by SNL’s brilliant Anderson parody from a few years back—all written, scouted, shot, and edited in ~3 days, but all requiring the intensive efforts of an incredibly skilled crew. (Oh, and it too features a terrific Owen Wilson spoof.)