Monthly Archives: October 2023

Demos: Using Generative AI in Illustrator

If you’ve been sleeping on Text to Vector, check out this handful of quick how-to vids that’ll get you up to speed:

Sneak peek: Project Glyph Ease

Easy as ABC, 123?

Project Glyph Ease uses generative AI to create stylized, customized letters in vector format, which can later be used and edited. All a designer needs to do is create three reference letters in a chosen style, either from existing vector shapes or ones they hand-draw on paper, and the technology automatically creates the remaining letters in a consistent style. Once the letters are created, designers have the flexibility to edit the new font, since the letters appear as live text that can be scaled, rotated, or moved in the project.

Project Primrose: Animated fabric (!) from Adobe

The week before MAX, my teammate Christine had a bit of a cough, and folks were suddenly concerned about the Project Primrose sneak: it’d be awfully hard to swap out presenters when the demo surface is a bespoke dress made by & for exactly one person. Thankfully good health prevailed, and she was able to showcase Project Primrose:

Here’s a bit more info about the tech:

We propose reflective light-diffuser modules for non-emissive flexible display systems. Our system leverages reflective-backed polymer-dispersed liquid crystal (PDLC), an electroactive material commonly used in smart window applications. This low-power non-emissive material can be cut to any shape, and dynamically diffuses light. We present the design & fabrication of two exemplar artifacts, a canvas and a handbag, that use the reflective light-diffuser modules. 
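The paper is about materials and fabrication, but to picture the display side, here’s a speculative little sketch (mine, not from the paper) that treats a grid of PDLC modules as a 1-bit non-emissive display: each module either scatters ambient light (appearing bright over its reflective backing) or is driven clear.

```python
# Speculative sketch: thresholding an image into per-module PDLC drive states.
import numpy as np

def frame_to_drive_states(image: np.ndarray, grid=(16, 16), threshold=0.5) -> np.ndarray:
    """Downsample a grayscale image in [0, 1] to boolean per-module states."""
    gh, gw = grid
    h, w = image.shape
    blocks = image[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return blocks.mean(axis=(1, 3)) > threshold  # True -> undriven, diffusing (bright)

img = np.random.rand(128, 128)
print(frame_to_drive_states(img).astype(int))
```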

Reflect on this: Project See Through burns through glare

Marc Levoy (professor emeritus at Stanford) was instrumental in delivering the revolutionary Night Sight mode on Pixel 3 phones—and by extension on all the phones that quickly copied their published techniques. After leaving Google for Adobe, he’s been leading a research team that’s just shown off the reflection-zapping Project See Through:

Today, it’s difficult or impossible to manually remove reflections. Project See Through simplifies the process of cleaning up reflections by using artificial intelligence. Reflections are automatically removed, and optionally saved as separate images for editing purposes. This gives users more control over when and how reflections appear in their photos.
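For intuition, reflection-removal research commonly models a through-glass photo as scene plus reflection. Here’s a toy decomposition in that spirit; the low-frequency “reflection estimate” below is a crude stand-in for a trained network, and nothing here is Adobe’s code.

```python
# Toy additive decomposition: photo ~ scene + reflection (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def split_reflection(photo: np.ndarray, strength: float = 0.3):
    """Return (clean, reflection) under an additive image-formation assumption."""
    reflection = strength * gaussian_filter(photo, sigma=(15, 15, 0))  # crude estimate
    clean = np.clip(photo - reflection, 0.0, 1.0)
    return clean, reflection

photo = np.random.rand(256, 256, 3)  # stand-in for an RGB photo in [0, 1]
clean, reflection = split_reflection(photo)  # either layer can be saved for editing
```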

Check out my MAX talk on the potential of Generative AI in education

I got to spend 30 minutes chatting with educator & author Matt Miller last week, riffing on some tough but important questions around weighty, fascinating stuff like what makes us human, what we value around creativity, and how we can all navigate the creative disruptions that surround us.

Hear how Adobe generative AI solutions are designed to continually evolve, develop, and empower educators and students from kindergarten to university level. Generative AI is expected to have a significant impact on the creativity of students. It has the potential to act as a powerful tool that can inspire and enhance the creative process by generating new and unique ideas. Join Matt Miller, author and educator, and John Nack, principal product manager at Adobe, for this exciting discussion.

In this session, you’ll:

  • Learn how Adobe approaches generative AI
  • Hear experts discuss how AI affects teaching and learning
  • Discover how AI can make learning more personalized and accessible

What if 3D were actually approachable?

That’s the promise of Adobe’s Project Neo—which you can sign up to test & use now! Check out the awesome sneak peek they presented at MAX:

Incorporating 3D elements into 2D designs (infographics, posters, logos or even websites) can be difficult to master, and often requires designers to learn new workflows or technical skills.

Project Neo enables designers to create 2D content using 3D shapes, without having to learn traditional 3D creation tools and methods. The technology leverages the best of 3D principles so designers can quickly and easily create 2D shapes with one-, two-, or three-point perspective. Designers using the technology can also collaborate with stakeholders and edit mockups at the vector level, so changes to a project happen quickly.
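If you’re rusty on what one-, two-, and three-point perspective mean geometrically, here’s a minimal sketch of the underlying pinhole-projection math (mine, not Adobe’s): a cube facing the camera straight-on produces one vanishing point, rotating it about the vertical axis produces two, and tilting it as well produces three.

```python
# Minimal pinhole-perspective sketch (illustrative, not Project Neo's code).
import numpy as np

def rotation(yaw: float, pitch: float) -> np.ndarray:
    """Rotate about the vertical axis (yaw), then the horizontal axis (pitch)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return r_pitch @ r_yaw

def project_cube(yaw=0.0, pitch=0.0, depth=5.0, focal=2.0) -> np.ndarray:
    """Project a unit cube's 8 vertices to 2D image coordinates."""
    verts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
    cam = verts @ rotation(yaw, pitch).T + np.array([0.0, 0.0, depth])
    return focal * cam[:, :2] / cam[:, 2:3]  # pinhole projection: x' = f*x/z

print(project_cube())                                          # one-point perspective
print(project_cube(yaw=np.radians(30)))                        # two-point
print(project_cube(yaw=np.radians(30), pitch=np.radians(20)))  # three-point
```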

What’s even better than Generative Fill? GenFill that moves.

Back in the day, I dreaded demoing Photoshop ahead of the After Effects team: we’d do something cool, and they’d make that cool thing move. I hear echoes of that in Project Fast Fill—generative fill for video.

Project Fast Fill harnesses Generative Fill, powered by Adobe Firefly, to bring generative AI into video editing applications. It lets users perform texture replacement in videos with simple text prompts, even on complex surfaces and under varying lighting conditions. Users can edit an object on a single frame, and that edit automatically propagates to the rest of the video’s frames, saving video editors a significant amount of texture-editing time.
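To make that workflow concrete, here’s a tiny, purely illustrative sketch of the edit-once-propagate-everywhere idea. The fill “model” is a stub and the propagation assumes a locked-off camera; the real system presumably tracks motion, and none of these functions are Adobe APIs.

```python
# Illustrative edit-propagation sketch (stubs only, not Project Fast Fill).
import numpy as np

def generative_fill_stub(frame: np.ndarray, mask: np.ndarray, prompt: str) -> np.ndarray:
    """Stand-in for a text-prompted fill model: flattens the masked region."""
    out = frame.copy()
    out[mask] = out[mask].mean(axis=0)  # placeholder for generated texture
    return out

def fast_fill(frames, mask, prompt):
    """Edit the first frame once, then copy that edit into every other frame."""
    key = generative_fill_stub(frames[0], mask, prompt)
    results = [key]
    for frame in frames[1:]:
        nxt = frame.copy()
        nxt[mask] = key[mask]  # naive static-camera propagation
        results.append(nxt)
    return results

video = [np.random.rand(64, 64, 3) for _ in range(8)]
region = np.zeros((64, 64), dtype=bool)
region[20:40, 20:40] = True
edited = fast_fill(video, region, "replace the wall with weathered brick")
```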

Check it out:

Adobe Project Poseable: 3D humans guiding image generation

Roughly 1,000 years ago (i.e. this past April!), I gave an early sneak peek at the 3D-to-image work we’ve been doing around Firefly. Now at MAX, my teammate Yi Zhou has demonstrated some additional ways we could put the core tech to work—by adding posable humans to the scene.

Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.

Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.

Check it out:

Generative Match: It’s Pablos all the way down…

Here’s a fun little tutorial from my teammate Kris on using reference images to style your prompt (in this case, her pet turtle Pablo). And meanwhile, here’s a little gallery of good style reference images (courtesy of my fellow PM Lee) that you’re welcome to download and use in your creations.

Important protections for creators in Generative Match

I’m really happy & proud that Firefly now enables uploading your own images & mixing them into your creations. For months & months, this has been users’ number 1 feature request.

But with power comes responsibility, of course, and we’ve spent a lot of time thinking about ways to discourage misuse of the tech (i.e. how do we keep this from becoming a rip-off engine?). I’m glad to say that we’ve invested in some good guidelines & guardrails:

First, we require users to confirm they have the right to use any work that they upload to Generative Match as a reference image.

Second, if an image’s Content Credentials include tags indicating that the image shouldn’t be used as a style reference, users won’t be able to use it with Generative Match. We will be rolling out the ability to add these tags to assets as part of the Content Credentials framework within our flagship products.

Third, when a reference image is used to generate an asset, we save a thumbnail of the image to help ensure that the use of Generative Match meets our terms of service. We also note that a reference image was used in the asset’s Content Credentials. Storing the reference image provides an important dose of accountability.
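In pseudo-Python, the opt-out check might look something like the sketch below; the manifest layout and tag name are illustrative guesses, not the actual Content Credentials (C2PA) schema.

```python
# Illustrative guardrail check (hypothetical manifest structure and tag name).
from typing import Optional

def can_use_as_style_reference(manifest: Optional[dict], user_confirmed_rights: bool) -> bool:
    if not user_confirmed_rights:  # step 1: user must confirm rights to the image
        return False
    if manifest is None:  # no Content Credentials attached
        return True
    assertions = manifest.get("assertions", [])
    # step 2: honor an opt-out tag embedded in the credentials
    return not any(a.get("label") == "style_reference.do_not_use" for a in assertions)

manifest = {"assertions": [{"label": "style_reference.do_not_use"}]}
print(can_use_as_style_reference(manifest, user_confirmed_rights=True))  # False
```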

To be clear, these protections are just first steps, and we plan to do more to strengthen protections. In the meantime, your feedback is most welcome!

Introducing Generative Match in Firefly

Hey everyone—I’m just back from Adobe MAX, and hopefully my blog is back from some WordPress database shenanigans that have kept me from posting.

I don’t know what the site will enable right now, so I’ll start by simply pointing to a great 30-second tour of my favorite new feature in Firefly, Generative Match. It enables you to upload your own image as a style reference, or to pick one that Adobe provides, and mix it together with your prompt and other parameters.

You can then optionally share the resulting recipe (via “Copy link” in the Share menu that appears over results), complete with the image ingredient; try this example. This goes well beyond what one can do with just copying/pasting a prompt, and as we introduce more multimodal inputs (3D object, sketching, etc.), it’ll become all the more powerful.
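Conceptually, the shared recipe bundles everything needed to reproduce a result, image ingredient included. The field names below are illustrative guesses, not Firefly’s actual link format:

```python
# Hypothetical shape of a shareable Generative Match "recipe".
recipe = {
    "prompt": "a studio portrait of a fluffy llama, hyperrealistic, shot on a white cyclorama",
    "style_reference": "pablo_the_turtle.jpg",  # uploaded or Adobe-provided image
    "style_strength": 0.7,                      # illustrative parameter
    "aspect_ratio": "1:1",
}
```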

All images below were generated with the following prompt: “a studio portrait of a fluffy llama, hyperrealistic, shot on a white cyclorama” + various style images: