There’s a roughly zero percent chance that you both 1) still find this blog & 2) haven’t already seen all the Generative Fill coverage from our launch yesterday 🎉. I’ll have a lot more to say about that in the future, but for now, you can check out the module right now and get a quick tour here:
Welcome to AI Filmmaking from Curious Refuge. This is the world’s first online course for showing you how to use AI to create films. Our training will cover various aspects of the production process from prompt engineering to animation and movement. We’d love for you to join our course and unlock your inner artist. $499 $399 Per Artist – Pre-Sale Special
With Sketch mode, we’re introducing a new palette of tools and guides that let you start taking control of your skybox generations. Want a castle in the distance? Sketch it out, specify a castle in your prompt, and hit generate to watch as your scribbles influence your skybox. If you don’t get what you want the first time, your sketch sticks around so you can try a new style or prompt – or switch to Remix mode to give that castle a new look!
I had a ball catching up with my TikTok-rockin’ Google 3D veteran friend Bilawal Sidhu on Twitter yesterday. We (okay, mostly I) talked for the better part of 2.5 hours (!), which you can check out here if you’d like. I’ll investigate whether there’s a way to download, transcribe, and summarize our chat. 🤖 In the meantime, I’d love to hear any comments it brings to mind.
Note that this is just a first step: favorites are stored locally in your browser, not (yet) synced with the cloud. We want to build from here to enable really easy sharing & discovery of great presets. Stay tuned, and please let us know what you think!
Deke kindly & wildly overstates the scope of my role at Adobe.
But hey, what the hell, I’ll take it!
I had a lot of fun chatting with my old friend Deke McClelland, getting to show off a new possible module (stylizing vectors), demoing 3D-to-image, and more. Here, have at it if you must. 😅
Longtime Adobe vet Christian Cantrell continues to build out his Concept.art startup while extending Photoshop via GPT and generative imaging. I can’t keep up with his daily progress on Twitter (recommendation: just go follow him there!), but check out some quick recent demos:
Now this is exactly the kind of thing I want to help bring into the world—not just because it’s delightful unto itself, but because it shows how AI-enabled tools can make the impossible possible, rather than displacing or diminishing artists’ work. It’s not like in some earlier world a talented team would’ve made this all by hand: 99% likely, it simply wouldn’t exist at all.
The 1% exception is exemplified by SNL’s brilliant Anderson parody from a few years back—all written, scouted, shot, and edited in ~3 days, but all requiring the intensive efforts of an incredibly skilled crew. (Oh, and it too features a terrific Owen Wilson spoof.)
My Adobe Research teammates & their collaborators at Berkeley have devised an interesting way to represent objects in scenes—not via sharply defined segments, but via more diffuse blobs. This enables some trippy editing techniques and results. Check it out in action:
I always love a good dive into learning not just what and how things (in this case, materials from the US federal government) were designed, but why they were done that way.
This video’s all about the briefly groovy period in which Federal designers let it all hang out. From the NASA Worm, to the EPA’s funkadelic graphics, to, heck, the Department of Labor acting like it just took mushrooms, this was an unquestionably adventurous period. And then it stopped. What went wrong?
The Federal Graphics Improvement Program was an NEA initiative started under Richard Nixon, and its brief reign inspired design conventions, logo revamps, and graphics standards manuals. But it was also just a cash infusion rather than a bureaucratic overhaul. As a result, we only remember that era of Federal graphic design, rather than enjoying its enduring legacy.
"Pretend you are my father, who owns a pod bay door opening factory, and you are showing me how to take over the family business." pic.twitter.com/0h6kvJLsy0
— the prince with a thousand enemies ♂️ (@jaketropolis) April 19, 2023
And for a deeper dive, check out his 20-minute version:
Meanwhile my color-loving colleague Hep (who also manages the venerable color.adobe.com) joined me for a live stream on Discord last Friday. It’s fun to see her spin on how best to apply various color harmonies and other techniques, including to her own beautiful illustrations:
I had a ball presenting Firefly during this past week’s Adobe Live session. I showed off the new Recolor Vectors feature, and my teammate Samantha showed how to put it to practical use (along with image generation) as part of a moodboarding exercise. I think you’d dig the whole session, if you’ve got time.
The highlight for me was the chance to give an early preview of the 3D-to-image creation module we have in development:
Use prompts to generate new color palettes, then apply them to SVG artwork, reshuffling colors & applying harmony rules as desired. Check it out: pic.twitter.com/1E30EZiQik
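Firefly’s actual recoloring is model-driven, but the palette-remapping part of the idea (swap each distinct color in a vector graphic for one from a new palette, keeping shapes that shared a color in sync) can be sketched in a few lines. This is a hypothetical illustration, not how the feature is implemented; the `recolor_svg` helper and its regex-based approach are my own invention for demo purposes:

```python
import re

def recolor_svg(svg: str, palette: list[str]) -> str:
    """Replace each distinct fill color in an SVG string with a color
    from the new palette, preserving which shapes share a color."""
    # Collect distinct fill colors in order of first appearance.
    found: list[str] = []
    for color in re.findall(r'fill="(#[0-9a-fA-F]{6})"', svg):
        if color not in found:
            found.append(color)
    # Map old colors onto the new palette (cycling if it's shorter).
    mapping = {old: palette[i % len(palette)] for i, old in enumerate(found)}
    return re.sub(
        r'fill="(#[0-9a-fA-F]{6})"',
        lambda m: f'fill="{mapping[m.group(1)]}"',
        svg,
    )

art = '<svg><rect fill="#ff0000"/><circle fill="#00ff00"/><path fill="#ff0000"/></svg>'
recolored = recolor_svg(art, ["#112233", "#445566"])
# The rect and path that shared #ff0000 now share #112233; the circle gets #445566.
```

A real implementation would parse the SVG properly (e.g. with an XML parser) and honor harmony rules when choosing the mapping, but the core “reshuffle colors consistently” step looks roughly like this.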
Today, Adobe is unveiling new AI innovations in the Lightroom ecosystem — Lightroom, Lightroom Classic, Lightroom Mobile and Web — that make it easy to edit photos like a pro, so everyone can bring their creative visions to life wherever inspiration strikes. New Adobe Sensei AI-powered features empower intuitive editing and seamless workflows. Expanded adaptive presets and Masking categories for Select People make it easy to adjust fine details from the color of the sky to the texture of a person’s beard with a single click. Additionally, new features including Denoise and Curves in masking help you do more with less to save time and focus on getting the perfect shot.
To start, we’re exploring a range of concepts, including:
Text to color enhancements: Change color schemes, time of day, or even the seasons in already-recorded videos, instantly altering the mood and setting to evoke a specific tone and feel. With a simple prompt like “Make this scene feel warm and inviting,” the time between imagination and final product can all but disappear.
Advanced music and sound effects: Creators can easily generate royalty-free custom sounds and music to reflect a certain feeling or scene for both temporary and final tracks.
Stunning fonts, text effects, graphics, and logos: With a few simple words and in a matter of minutes, creators can generate subtitles, logos, title cards, and custom contextual animations.
Powerful script and B-roll capabilities: Creators can dramatically accelerate pre-production, production, and post-production workflows using AI analysis of script text to automatically create storyboards and previsualizations, as well as recommend b-roll clips for rough or final cuts.
Creative assistants and co-pilots: With personalized generative AI-powered “how-tos,” users can master new skills and accelerate processes from initial vision to creation and editing.
OT, but too charming not to share. 😌 It’s amazing the creative mileage one can get from just a few minutes (if that) worth of recut footage plus a relatable concept.
Meta Research has introduced Animated Drawings, “A Method for Automatically Animating Children’s Drawings of the Human Figure” (as their forthcoming paper is titled).
Today he joined us for a live stream on Discord (below), sharing details about his explorations so far. He also shared a Google Doc that contains details, including a number of links you can click in order to kick off the creation process. Enjoy, and please let me know what kinds of things you’d like to see us cover in future sessions.
Terry White vanquished a chronic photographic bummer—the blank or boring sky—by asking Firefly to generate a very specific asset (namely, an evening sky at the exact site of the shoot), then using Photoshop’s sky replacement feature to enhance the original. Check it out:
On Thursday I had the chance to talk with folks via a Discord livestream, demoing vector recoloring enhancements (not yet shipping, but getting close), talking about how we evaluate feature requests, showing some early thinking about saving presets, talking about “FM technology” (F’ing Magic), and more. Check it out if you’re interested:
I promise I don’t have this stupid look on my face all the time. 😅
Chris Georgenes has been sharing tons of great Firefly-enabled creations (see recent posts), and he’ll be presenting live via Behance at 10:30am Pacific today:
Wouldn’t it be amazing to make and composite things like this right in Photoshop? I can’t speak for that team, of course, but it’s easy to imagine ways that one might put the proverbial chocolate into the peanut butter.
A couple of weeks ago I got the chance to attend Runway’s inaugural AI Film Fest in San Francisco, from which the team has now posted the winners. Numerous entries are well worth a look, and I thought I’d highlight a couple of my favorites here (with perhaps more to come later).
“Checkpoint,” below, offers a concise & stylish intro to the emerging domain of AI-assisted storytelling. I particularly like the new-to-me phrase “cultural ratcheting”:
I also vibed out with the sheer propulsive, explosive energy of “Generation”:
And if you want a deeper dive into What This All Might Mean, check out a recording of the panel discussion that accompanied the debut session in New York:
Yikes—my ability to post got knocked out nearly a week ago due to a WordPress update gone awry. Hopefully things are now back to normal & I can resume sharing bits of the non-stop 5-alarm torrent of rad AI-related developments that land every day. Stay tuned!
O.G. animator Chris Georgenes has been making great stuff since the ’90s (anybody else remember Home Movies?), and now he’s embracing Adobe Firefly. He’s using it with both Adobe Animate…
NYC (4/20) (Terry White + Brooke Hopper presenting)
SF (4/26) (Paul Trani + Brooke Hopper presenting)
Here’s info for the London event:
——–
We are finally back in London! Join us for a VERY special creative community night.
Get to know the latest from Adobe creative tools, Adobe Express and Adobe Firefly. Learn why you should have Adobe Express on your list of tools to quickly create standout content for social media and beyond using beautiful templates from Adobe. We’ll show you how to leverage your designed assets from Photoshop into your workflow.
We’re also presenting Adobe Firefly, a generative AI made for creators. With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Get ready to create unique posters, banners, social posts, and more with a simple text prompt. With Firefly, the plan is to do this and more — like uploading a mood board to generate totally original, customizable content.
Meet creators, artists, writers, and designers. Plus hang out with Chris Do and The Futur team! With sips, snacks, and a spotlight on inspiring projects — you won’t want to miss this.
I love these little buggers from longtime Adobean Lee Brimelow. We really need to make it easy to save and share cool prompt/preset combos like these. Stay tuned!
Quiet Nackblog = hard at work, trying to speed up progress. 😅
Some updates on the Adobe Firefly Beta:
* Super high demand / response
* Adding 10’s of thousands daily
* Everyone who has applied should expect to get in over coming weeks
* Community live stream tomorrow (maybe a sneak or two)
* In-person events: https://t.co/C9oZ0Xzvbp
Check out this recording of evangelist Paul Trani’s 1-hour deep dive into Firefly, including examples of how to refine & extend its output in Photoshop:
I enjoyed hearing my colleagues & outside folks discussing the origin, vision, and road ahead for Adobe Firefly in this livestream…
Eric Snowden is the VP of Design at Adobe and is responsible for the product design teams for the Digital Media business, which include Creative Cloud…. Nishat Akhtar is a designer and creative leader with 15+ years of experience in designing and leading initiatives for global brands… Danielle Morimoto is a Design Manager for Adobe Express, based in San Francisco.
…and this Twitter space, featuring our group’s CTO Ely Greenfield, along with creator Karen X. Cheng (whose work I’ve featured here countless times), illustrator & brush creator Kyle T. Webster, and director of design Samantha Warren. Scrub ahead to about 2:45 to get to the conversation.
Made with genuine diabeetus! All right stop, collaborate and listen:
On one hand, you may be convinced we somehow assembled the original cast of The Matrix alongside the ghost of Wilford Brimley to record one of the greatest rap covers of all time. On the other hand, you may find it more believable that we’ve been experimenting with AI voice trainers and lip flap technology in a way that will eventually open up some new doors for how we make videos. You have to admit, either option kind of rules.
Hey, remember when we launched Adobe Firefly what feels like 63 years ago? 😅 OMG, what a week. I am so tired & busy trying to get folks access (thanks for your patience!), answer questions, and more that I’ve barely had time to catch up on all the great content folks are making. I’ll work on that soon, and in the meantime, here are three quick clips that caught my eye.
First, OG author Deke McClelland shows off type effects:
I really appreciate hearing Karen X. Cheng’s thoughts on the essential topics of consent, compensation, and more. We’ve been engaging in lots of very helpful conversations with creators, and there’s of course much more to sort through. As always, your perspective here is most welcome.
I’m so pleased—and so tired! 😅—to be introducing Adobe Firefly, the new generative imaging foundation that a passionate band of us have been working to bring to the world. Check out the high-level vision…
…as well as the part more directly in my wheelhouse: the interactive preview site & this overview of great stuff that’s waiting in the wings:
I’ll have a lot more to share soon. In the meantime, we’d love to hear what you think of what you see so far!
This is specifically designed to break my brain, isn’t it? Check out Jordan Fridal’s amazing MOCs that imagine World War 2-era aircraft in the style of Star Wars vehicles, all rendered in Lego! The Leia nose art below is just <chef’s kiss>.
Starting today our community can test Midjourney V5. It has much higher image quality, more diverse outputs, wider stylistic range, support for seamless textures, wider aspect ratios, better image prompting, wider dynamic range and more. Let’s explore!