Adobe friends like Eli Shechtman have been publishing research for several years, and Creative Bloq reports that the functionality is due to make its way to the flagship imaging apps in the near future. Check out their post for details.
I keep meaning to try out this new capability, but there are so many tools, so few hours! In any case, it promises to be an exciting breakthrough. If you take it for a spin, I’d love to hear what you think of the results.
Sure, all this stuff—including what’s now my career’s work—will likely make it semi-impossible to reason together about any shared conception of reality, thereby calling into question the viability of democracy… but on the upside, moar dank memes!
Here’s how to create a dancing character using just an image + an existing video clip:
Viggle is the hottest new AI creative tool, and it's forever changing memes and the future of AI video. @aiwarper created a meme with the Joker and Lil Yachty that caused a hilarious explosion.
Removing objects will be huge, and Generative Extend—which can add a couple of seconds to clips to ease transitions—seems handy. Check out what’s in the works:
Check out the latest work (downloadable for free here) from longtime Adobe veteran (and former VP of product at Stability AI) Christian Cantrell:
The new version of the Concept Art #photoshop plugin is here! Create your own AI-powered workflows by combining hundreds of different imaging models from @replicate — as well as DALL•E 2 and 3 — without leaving @Photoshop. This is a complete rewrite with tons of new features coming (including local inference).
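If you're curious what the plumbing for this kind of workflow looks like, here's a minimal sketch of calling a Replicate-hosted image model from Python—illustrative scaffolding on my part, not Christian's plugin code, and the model name is just an arbitrary example:

```python
# A minimal sketch of calling a Replicate-hosted image model from Python
# (illustrative plumbing only, not the Concept Art plugin's actual code).
# Assumes `pip install replicate` and REPLICATE_API_TOKEN set in the environment;
# the model name is an example and may require a pinned version tag.
import replicate

output = replicate.run(
    "stability-ai/sdxl",
    input={"prompt": "an astronaut riding a horse, studio lighting"},
)
print(output)  # typically a list of URLs (or file objects) for the generated images
```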
Not content to let Adobe & ChatGPT have all the fun, Google is now making its Imagen available to developers for image synthesis, including inserting items & expanding images:
We’re also adding advanced photo editing features, including inpainting and outpainting.
Imagen, Google’s text-to-image model, can now create live images from text, in preview. Just imagine generating animated images such as GIFs from a simple text prompt… Imagen also gets advanced photo editing features, including inpainting and outpainting, and a digital watermarking feature powered by Google DeepMind’s SynthID.
I’m eager to learn more about the last bit re: content provenance. Adobe has talked a bunch about image watermarking, but has not (as far as I know) shipped any support.
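As for the developer-facing editing bits above, mask-based edits generally boil down to “image + mask + prompt.” Here’s a rough sketch of that request shape; the endpoint URL and field names are hypothetical placeholders for illustration, not Google’s documented API:

```python
# Rough sketch of the "image + mask + prompt" shape of a mask-based (inpainting)
# edit request. The endpoint URL and field names are hypothetical placeholders for
# illustration only -- consult Google's Imagen API docs for the real interface.
import base64
import requests


def _b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")


def request_inpaint(image_path: str, mask_path: str, prompt: str) -> dict:
    payload = {
        "prompt": prompt,            # what to synthesize inside the masked region
        "image": _b64(image_path),   # the source photo
        "mask": _b64(mask_path),     # white pixels = region to regenerate
    }
    # Hypothetical endpoint; substitute the real URL and auth from the provider's docs.
    resp = requests.post("https://example.invalid/v1/images/edit", json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

# e.g. request_inpaint("photo.png", "mask.png", "fill the masked area with wildflowers")
```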
Meanwhile Google is also challenging Runway, Pika, & others in the creation of short video clips:
Our generative technology Imagen 2 can now create short, 4-second live images from a single prompt.
For 10 years or so I’ve been posting admiringly about the work of Paul Trillo (16 times so far; 17 now, good Lord), so I was excited to hear his conversation with the NYT Hard Fork crew—especially as he’s recently been pushing the limits with OpenAI’s Sora model. I think you’ll really enjoy this thoughtful, candid, and in-depth discussion about the possibilities & pitfalls of our new AI-infused creative world:
Some companies spend three months just wringing their hands about whether to let you load a style reference image; others put three people on it and go way beyond that, in realtime ¯\_(ツ)_/¯ :
These guys are doing such a good job creating intuitive visual interfaces for prompting
This is the new real-time image blending interface from @krea_ai
When DALL•E first dropped, it wasn’t full-image creation that captured my attention so much as inpainting, i.e. creating/removing objects in designated regions. Over the years (all two of ’em ;-)) I’ve lost track of whether DALL•E’s Web interface has remained available (’cause who’s needed it after Generative Fill?), but I’m very happy to see this sort of selective synthesis emerge in the ChatGPT-DALL•E environment:
Or… something like that. Whatever the case, I had fun popping our little Lego family photo (captured this weekend at Yosemite Valley’s iconic Tunnel View viewpoint) into Photoshop, selecting part of the excessively large rock wall, and letting Generative Fill give me some more nature. Click or tap (if needed) to see the before/after animation:
Generative Fill, remaining awesome for family photos. From Yosemite yesterday: pic.twitter.com/GtRP0UCaV6
Hey, I know what you know (or quite possibly less :-)), but this demo (which for some reason includes Shaq) looks pretty cool:
From the description:
Elevate your data storytelling with #ProjectInfographIt, a game-changing solution leveraging Adobe Firefly generative AI. Simplify the infographic creation process by instantly generating design elements tailored to your key messages and data. With intuitive features for color palettes, chart types, graphics, and animations, effortlessly transform complex insights into visually stunning infographics.
Man, I can’t tell you how long I wanted folks to get this tech into their hands, and I’m excited that you can finally take it for a spin. Here are some great examples (from a thread by Min Choi, which contains more) showing how people are putting it into action:
Reinterpreted kids’ drawings:
Adobe Firefly structure reference:
I created these images using my kid’s art as reference + text prompts like these:
– red aeroplane toy made with felt, appliqué stitch, clouds, blue background
– broken ship, flowing paint from a palette of yellow and green colors
Speaking of folks with whom I’ve somehow had the honor of working, some of my old teammates from Google have unveiled ObjectDrop. Check out this video & thread:
Google presents ObjectDrop
Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion
Diffusion models have revolutionized image editing but often generate images that violate physical laws, particularly the effects of objects on the scene, e.g., occlusions, shadows, and reflections. By analyzing the limitations of self-supervised approaches, we propose a practical solution centered on a counterfactual dataset.
Our method involves capturing a scene before and after removing a single object, while minimizing other changes. By fine-tuning a diffusion model on this dataset, we are able to not only remove objects but also their effects on the scene. However, we find that applying this approach for photorealistic object insertion requires an impractically large dataset. To tackle this challenge, we propose bootstrap supervision; leveraging our object removal model trained on a small counterfactual dataset, we synthetically expand this dataset considerably.
Our approach significantly outperforms prior methods in photorealistic object removal and insertion, particularly at modeling the effects of objects on the scene.
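To make the “bootstrap supervision” bit concrete, here’s my own rough sketch of the idea as I read the abstract (an illustration, not the authors’ code): a removal model trained on a small set of real before/after photo pairs is used to manufacture a much larger synthetic dataset for the harder insertion task.

```python
# A rough sketch of ObjectDrop's bootstrap-supervision idea as I read the abstract
# (my own illustration, not the authors' code). The counterfactual set holds real
# (scene_with_object, scene_without_object, mask) triples captured by physically
# removing an object; the removal model fine-tuned on that small set is then used
# to manufacture many synthetic triples for training object *insertion*.
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np


@dataclass
class CounterfactualTriple:
    with_object: np.ndarray     # photo of the scene containing the object
    without_object: np.ndarray  # same scene with the object (and its effects) gone
    mask: np.ndarray            # region covering the object, shadows, reflections

# Stand-in type: in practice this would be a fine-tuned diffusion model.
RemovalModel = Callable[[np.ndarray, np.ndarray], np.ndarray]  # (image, mask) -> clean image


def bootstrap_insertion_data(
    unlabeled_images: List[Tuple[np.ndarray, np.ndarray]],  # (image, object-mask) pairs
    remove_object: RemovalModel,
) -> List[CounterfactualTriple]:
    """Use the removal model (trained on the small real counterfactual set) to
    synthesize a much larger dataset: its output plays the role of the
    'object removed' photo, and the original image is the insertion target."""
    synthetic = []
    for image, mask in unlabeled_images:
        cleaned = remove_object(image, mask)
        synthetic.append(
            CounterfactualTriple(with_object=image, without_object=cleaned, mask=mask)
        )
    return synthetic
```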
“Why would you go work at Microsoft? What do they know or care about creative imaging…?” 🙂
I’m delighted to say that my new teammates have been busy working on some promising techniques for performing a range of image edits, from erasing to swapping, zooming, and more:
Microsoft presents DesignEdit!
It’s an image editing method that can remove objects, edit typography, swap, relocate, resize, add and flip multiple objects, pan and zoom images, remove decorations from images, and edit posters. https://t.co/1DGNiNAFw1 pic.twitter.com/2N5n6MNkqf
I’m delighted to see that the longstanding #1 user request for Firefly—namely the ability to upload an image to guide the structure of a generated image—has now arrived:
Good morning! I’m excited to share with you a new tool on the Adobe Firefly website called Structure Reference. I spent the whole weekend creating art with it and find this new feature the most inspiring for my art.
This nicely complements the extremely popular style-matching capability we enabled back in October. You can check out details of how it works, as well as a look at the UI (below)—plus my first creation made using the new tech ;-).
Last year I posted about Imaginary Forces’ beautiful, eerie title sequence for Amazon’s Jack Ryan series, and now School of Motion has sat down for an in-depth discussion with creative director Karin Fong. They talk about a wide range of topics, including AI & its possible impacts, starting around the 1:09 mark.
Here’s a look behind the scenes of the Jack Ryan sequence:
Given just the latest news, the company’s name sounds ironic, but I love seeing them offer capabilities that we previewed in the Firefly teaser video now more than a year ago. (Here’s hoping Adobe announces some progress on that front at Adobe Summit this coming week.)
It’s amazing to see what two people (?!) are able to do. Check out this video & the linked thread, as well as the tool itself.
IT’S FINALLY HERE!
Magnific Style Transfer!
Transform any image, controlling the amount of style transferred and the structural integrity. Infinite use cases! 3D, video games, interior design, for fun…
I cannot tell you how deeply I hope that the Photoshop team is paying attention to developments like this…
My Photoshop is more fun than yours :-p With a bit of help from Krea ai.
It’s a crazy feeling to see brushstrokes transformed like this in realtime… And the feeling of control is magnitudes better than with text prompts. #ai #art pic.twitter.com/Rd8zSxGfqD
So, @StabilityAI has this new experimental imageTo3D model, and I just painted a moon buggy in SageBrush, dropped it into their Huggingface space, converted it in Reality Converter, and AirDropped it onto the moon – all on #AppleVisionPro pic.twitter.com/pj3TTcy5zt
Heh—these are obviously silly but well done, and they speak to the creative importance of being specific—i.e. representing particular famous faces. I sometimes note that a joke about a singer & a football player is one thing, whereas a joke about Taylor Swift & Travis Kelce is a whole other thing, precisely because it’s specific. Thus, for an AI toolmaker, it isn’t always clear exactly where to draw the line (e.g. disallowing celebrity likenesses).
It’s a great question, and I think it’s really thoughtful that the day before I joined, the company was generous enough to run a Superb Owl—er, Super Bowl—commercial, just to help me explain the mission to my parents. 😀
But seriously, this ad provides a brief peek into the world of how Copilot can already generate beautiful, interesting things based on your needs—and that’s a core part of the mission I’ve come here to tackle.
Founded by ex-Google Imagen engineers, Ideogram has just launched version 1.0 widely. It’s said to offer new levels of fidelity in the traditionally challenging domain of type rendering:
Introducing Ideogram 1.0: the most advanced text-to-image model, now available on https://t.co/Xtv2rRbQXI!
This offers state-of-the-art text rendering, unprecedented photorealism, exceptional prompt adherence, and a new feature called Magic Prompt to help with prompting. pic.twitter.com/VOjjulOAJU
Historically, AI-generated text within images has been inaccurate. Ideogram 1.0 addresses this with reliable text rendering capabilities, making it possible to effortlessly create personalized messages, memes, posters, T-shirt designs, birthday cards, logos and more. Our systematic evaluation shows that Ideogram 1.0 is the state-of-the-art in the accuracy of rendered text, reducing error rates by almost 2x compared to existing models.
So, it’s true: After nearly three great years back at Adobe, I’ve moved to just the third place I’ve worked since the Clinton Administration: Microsoft!
I’ve signed on with a great group of folks to bring generative imaging magic to as many people as possible, leveraging the power of DALL•E, ChatGPT, Copilot, and other emerging tech to help make fun, beautiful, meaningful things. And yes, they have a very good sense of humor about Clippy, so go ahead and get those jokes out now. :->
It really is a small world: The beautiful new campus (see below) is just two blocks from my old Google office (where I reported to the same VP who’s now in charge of my new group), which itself is just down the road from the original Adobe HQ; see map. (Maybe I should get out more!)
And it’s a small world in a much more meaningful sense: I remain in a very rare & fortunate spot, getting to help guide brilliant engineers’ efforts in service of human creativity, all during what feels like one of the most significant inflection points in decades. I’m filled with gratitude, curiosity, and a strong sense of responsibility to make the most of this moment.
Thank you to my amazing Adobe colleagues for your hard & inspiring work, and especially for the chance to build Firefly over the last year. It’s just getting started, and there’s so much we can do together.
Thank you to my new team for opening this door for us. And thank you to the friends & colleagues reading these words. I’ll continue to rely on your thoughtful, passionate perspectives as we navigate these opportunities together.
My friend Nathan Shipley has been deeply exploring AnimateDiff for the last several months, and he’s just collaborated with the always entertaining Karen X. Cheng to make this little papercraft-styled video:
While we’re all waiting for access to Sora…
Here’s our test using open source tools. You can get a decent level of creative control with AnimateDiff
Just in case you’ll be around San Jose this Friday, check out this panel discussion featuring our old Photoshop designer Julie Meridian & other artists discussing their relationship with AI:
Panel discussion: Friday, February 23rd 7pm–9pm. Free admission
Featuring Artists: Julie Meridian, James Morgan, and Steve Cooley
Moderator: Cherri Lakey
KALEID Gallery is proud to host this panel with three talented artists who are using various AI tools in their artistic practice while navigating all the ethical and creative dilemmas that arise with them. With all the controversy around AI collaborative / generated art, we’re looking forward to hearing from these avant-garde artists, who are exploring the possibilities of a positive outcome for artists and creatives in this as-yet-undefined new territory.
Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. https://t.co/7j2JN27M3W
“The company says watermarks from C2PA will appear in images generated on the ChatGPT website and the API for the DALL-E 3 model. Mobile users will get the watermarks by February 12th. They’ll include both an invisible metadata component and a visible CR symbol, which will appear in the top left corner of each image.”
“Meta will employ various techniques to differentiate AI-generated images from other images. These include visible markers, invisible watermarks, and metadata embedded in the image files… Additionally, Meta is implementing new policies requiring users to disclose when media is generated by artificial intelligence, with consequences for failing to comply.”
“When you create a design in Designer you can also decide if you’d like to include basic, trustworthy facts about the origin of the design or the digital content you’ve used in the design with the file.”
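If you want to check whether an image actually carries these credentials, the Content Authenticity Initiative’s open-source c2patool will dump a file’s C2PA manifest. Here’s a minimal sketch that shells out to it (assuming the tool is installed and on your PATH, and that the file exists):

```python
# Minimal sketch: inspecting a file's C2PA content credentials by shelling out to
# the Content Authenticity Initiative's open-source `c2patool` CLI.
# Assumes c2patool is installed and on PATH, and that "image.png" exists locally.
import subprocess

result = subprocess.run(
    ["c2patool", "image.png"],  # prints the manifest report if credentials are present
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print(result.stdout)        # manifest details: claim generator, assertions, signature
else:
    print("No content credentials found (or c2patool reported an error):", result.stderr)
```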
Not having a spare $3500 burning a hole in my pocket, I’ve yet to take this for a spin myself, but I’m happy to see it. Per the Verge:
The interface of the Firefly visionOS app should be familiar to anyone who’s already used the web-based version of the tool — users just need to enter a text description within the prompt box at the bottom and hit “generate.” This will then spit out four different images that can be dragged out of the main app window and placed around the home like virtual posters or prints. […]
Meanwhile, we also now have a better look at the native Adobe Lightroom photo editing app that was mentioned back when the Apple Vision Pro was announced last June. The visionOS Lightroom experience is similar to that of the iPad version, with a cleaner, simplified interface that should be easier to navigate with hand gestures than the more feature-laden desktop software.
I’m delighted to say that firefly.adobe.com now supports a live stream of community-created generative recipes. You can share your own simply by creating images via the Text to Image module, then clicking the share button. I’m especially pleased that if you use Generative Match to choose a stylization guide image, that image will be included in the recipe for anyone to use.
I had a chance to sit down for an interesting & wide-ranging chat with folks from the Wharton Tech Club:
Tune into the latest episode of the Wharton Tech Toks podcast! Leon Zhang and Stephanie Kim chat with John Nack, Principal Product Manager at Adobe with 20+ years of PM experience across Adobe and Google, about GenAI for creators, AI ethics, and more. He also reflects on his career journey. This episode is great if you’re recruiting for tech, PM, or Adobe.
Daring Fireball’s Mac 40th anniversary post contained a couple of quotes that made me think about the current state of interaction with AI tools, particularly around imaging. First, there’s this line from Steven Levy’s review of the original Mac:
[W]hat you might expect to see is some sort of opaque code, called a “prompt,” consisting of phosphorescent green or white letters on a murky background.
Think about how revolutionarily different & better (DOS-head haters’ gripes notwithstanding) this was.
What you see with Macintosh is the Finder. On a pleasant, light background, little pictures called “icons” appear, representing choices available to you.
And then there’s this kicker:
“When you show Mac to an absolute novice,” says Chris Espinosa, the twenty-two-year-old head of publications for the Mac team, “he assumes that’s the way all computers work. That’s our highest achievement. We’ve made almost every computer that’s ever been made look completely absurd.”
I don’t know quite what will make today’s prompt-heavy approach to generation feel equivalently quaint, but think how far we’ve come in less than two years since DALL•E’s public debut—from swapping long, arcane codes to having more conversational, iterative creation flows (esp. via ChatGPT) and creating through direct, realtime UIs like those offered via Krea & Leonardo. Throw in a dash of spatial computing, perhaps via “glasses that look like glasses,” and who knows where we’ll be!
But it sure as heck won’t mainly be knowing “some sort of opaque code, called a ‘prompt.'”
Thanks to Jackson Beaman & crew for putting together a great event yesterday in SF. I joined him, KD Deshpande (founder of Simplified), and Sofiia Shvets (founder of Let’s Enhance & Claid.ai) for a 20-minute panel discussion (which starts at 3:32:03 or so, in case the embedded version doesn’t jump you to the proper spot) about creating production-ready imagery using AI. Enjoy, and please let me know if you have any comments or questions!
Well, not exactly—but T-Paine’s words about how we value things still resonate today:
We humans are fairly good at pricing effort (notably in dollars paid per hour worked), but we struggle much more with pricing value. Cue the possibly apocryphal story of Picasso asking $10,000 for a drawing he sketched in a matter of seconds, because the ability to create it had taken him a lifetime.
A couple of related thoughts:
My artist friend is a former Olympic athlete who talks about how people bond through shared struggle, particularly in athletics. For him, someone using AI-powered tools is similar to a guy showing up at the gym with a forklift, using it to move a bunch of weight, and then wanting to bond afterwards with the actual weightlifters.
I see ostensible thought leaders crowing about the importance of “taste,” but I wonder how they think that taste is or will be developed in the absence of effort.
As was said of—and by?—Steve Jobs, “The journey is the reward.”
I took a video of a guy zip lining in full Harry Potter costume and edited out the zip lines to make it look like he was flying. I mainly used Content Aware Fill and the free Redgiant/Maxon script 3D Plane Stamp to achieve this.
For the surprise bit at the end, I used Midjourney and Runway’s Motion Brush to generate and animate the clothing.
Trapcode Particular was used for the rain in the final shot.
I also did a full sky replacement in each shot and used assets from ProductionCrate for the lighting and magic wand blast.
I had the pleasure of hanging out with these crazy-fast-moving guys last week, and I remain amazed at their shipping velocity. Check out the latest updates to their realtime canvas:
We introduce Lumiere — a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion — a pivotal challenge in video synthesis. To this end, we introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution — an approach that inherently makes global temporal consistency difficult to achieve. […]
We demonstrate state-of-the-art text-to-video generation results, and show that our design easily facilitates a wide range of content creation tasks and video editing applications, including image-to-video, video inpainting, and stylized generation.
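The “entire temporal duration in a single pass” part is the interesting architectural move. As a toy illustration (my own sketch, not Lumiere’s code), a space-time block downsamples the time axis along with the spatial axes, so the model reasons over every frame at once rather than interpolating between sparse keyframes:

```python
# Toy illustration of joint space-time downsampling (my own sketch, not Lumiere's code).
# Many video U-Nets keep the time axis at full length; here a single Conv3d block
# shrinks time and space together, so one forward pass sees the whole clip at a
# coarser temporal resolution.
import torch
import torch.nn as nn


class SpaceTimeDownBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # stride of 2 on (time, height, width) halves all three dimensions
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        return self.act(self.conv(x))


video = torch.randn(1, 8, 16, 64, 64)   # a 16-frame toy clip
down = SpaceTimeDownBlock(8, 16)(video)
print(down.shape)                        # torch.Size([1, 16, 8, 32, 32])
```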
From its first launch, Adobe Firefly has included support for content credentials, providing more transparency around the origin of generated images, and I’m very pleased to see OpenAI moving in the same direction:
Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials—an approach that encodes details about the content’s provenance using cryptography—for images generated by DALL·E 3.
We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL·E. Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.
Helping discover Dave Werner & bring him into Adobe remains one of my favorite accomplishments at the company. He continues to do great work in designing characters as well as the tools that can bring them to life. Watch how he combines Firefly with Adobe Character Animator to create & animate a stylish tiger:
Adobe Firefly’s text to image feature lets you generate imaginative characters and assets with AI. But what if you want to turn them into animated characters with performance capture and control over elements like arm movements, pupils, talking, and more? In this tutorial, we’ll walk through the process of taking a static Adobe Firefly character and turning it into an animated puppet using Adobe Photoshop or Illustrator plus Character Animator.
Honestly, if you asked, “Hey, wanna spend an hour+ listening to current and former intellectual property attorneys talking about EU antitrust regulation, ethical data sourcing, and digital provenance,” I might say, “Ehmm, I’m good!”—but Nilay Patel & Dana Rao make it work.
I found the conversation surprisingly engrossing & fast-moving, and I was really happy to hear Dana (with whom I’ve gotten to work some regarding AI ethics) share thoughtful insights into how the company forms its perspectives & works to put its values into practice. I think you’ll enjoy it—perhaps more than you’d expect!
We’re only just beginning to discover the experiential possibilities around generative creation, so I’m excited to see this rare gig open up:
You will build new and innovative user interactions and interfaces geared towards our customers’ unique needs, and test and refine those interfaces in collaboration with academic researchers, user researchers, designers, artists, and product teams.
Here is why: Adobe Firefly is the most commercially successful generative AI product ever launched. Since being introduced in beta in March and made generally available in June, Firefly has been used to generate more than 3 billion images (as of the last count, in October). Adobe says Firefly has attracted a significant number of new Adobe users, making it hard to imagine that Firefly is not aiding Adobe’s bottom line.
Paint Anything 3D with Lighting-Less Texture Diffusion Models: “Paint3D is a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs.”
Retro-futuristic alphabet rendered with Midjourney V6: “Just swapped out the letter and kept everything else the same. Prompt: Letter “A”, cyberpunk style, metal, retro-futuristic, star wars, intrinsic details, plain black background. Just change the letter only. Not all renders are perfect, some I had to do a few times to get a good match. Try this strategy for any type of cool alphabet!”
As many others have noted, Midjourney is now good at type. Find more here.
Using a simple kids’ drawing tablet to create art: “I used @Vizcom_ai to transform the initial sketch. This tool has gotten soo good by now. I then used @LeonardoAi_’s image to image to enhance the initial image a bit, and then used their new motion feature to make it move. I also used @Magnific_AI to add additional details to a few of the images and Decohere AI’s video feature.”
Latte art: “Photoshop paint sent to @freepik’s live canvas. The first few seconds of the video are real-time to show you how responsive it is. The music was made with @suno_ai_. Animation with Runway’s Gen-2.”
Photo editing:
Google Photos gets a generative upgrade: “Magic Eraser now uses gen AI to fill in detail when users remove unwanted objects from photos. Google Research worked on the MaskGIT generative image transformer for inpainting, and improved segmentation to include shadows and objects attached to people.”
AnyDoor is “a diffusion-based image generator with the power to teleport target objects to new scenes at user-specified locations in a harmonious way.”
SDXL Auto FaceSwap lets you create new images using the face from a source image (example attached).
Oh look, I’m George Clooney! Kinda. You can be, too. FAL AI promises “AI inference faster than you can type.”
“100ms image generation at 1024×1024. Announcing Segmind-Vega and Segmind-VegaRT, the fastest and smallest, open source models for image generation at the highest resolution.”
Krea has announced their open beta, “free for everyone.”
Instagram has enabled image generation inside chat (pretty “meh,” in my experience so far), and in stories creation, “it allows you to replace the background of an image with whatever AI-generated image you’d like.”
“Did you know that you can train an AI Art model and get paid every time someone uses it? That’s Generaitiv’s Model Royalties System for you.”