Category Archives: Generative Fill

AI in Ai: Illustrator adds Vector GenFill

As I’ve probably mentioned already, when I first surveyed Adobe customers a couple of years ago (right after DALL•E & Midjourney first shipped), it was clear that they wanted selective synthesis—adding things to compositions, and especially removing them—much more strongly than whole-image synthesis.

Thus it’s no surprise that Generative Fill in Photoshop has so clearly delivered Firefly’s strongest product-market fit, and I’m excited to see Illustrator following the same path—but for vectors:

Generative Shape Fill will help you improve your workflow in several ways:

  • Create detailed, scalable vectors: After you draw or select your shape, silhouette, or outline in your artboard, use a text prompt to ideate on vector options to fill it.
  • Style Reference for brand consistency: Create a wide variety of options that match the color, style, and shape of your artwork to ensure a consistent look and feel.
  • Add effects to your creations: Enhance your vector options further by adding styles like 3D, geometric, pixel art, and more.

They’re also adding the ability to create vector patterns simply via prompting:

Photoshop’s new Selection Brush helps control GenFill

Soon after Generative Fill shipped last year, people discovered that using a semi-opaque selection could help blend results into an environment (e.g. putting fish under water). The new Selection Brush in Photoshop takes functionality that’s been around for 30+ years (via Quick Mask mode) and brings it more to the surface, which in turn makes it easier to control GenFill behavior:
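Here’s a minimal Python sketch of the compositing idea (not Photoshop’s actual API; the filenames and region coordinates are hypothetical, and both images are assumed to be the same size): a generated patch blended through a partial-opacity selection lets some of the underlying scene show through, which is what makes results like fish-under-water read convincingly.

```python
# Minimal sketch of blending a generated patch through a semi-opaque selection.
# Not Photoshop's API; filenames, sizes, and coordinates are hypothetical.
import numpy as np
from PIL import Image

def blend_with_selection(original, generated, selection, opacity=0.6):
    """Composite generated pixels over the original through a selection mask.

    original, generated: HxWx3 float arrays of the same size.
    selection: HxW float array in [0, 1] marking where generation applies.
    opacity: overall strength of the selection (the "semi-opaque" part).
    """
    alpha = (selection * opacity)[..., None]             # HxWx1 blend weights
    blended = original * (1.0 - alpha) + generated * alpha
    return blended.astype(np.uint8)

base = np.asarray(Image.open("reef.jpg").convert("RGB"), dtype=np.float32)
fill = np.asarray(Image.open("generated_fish.png").convert("RGB"), dtype=np.float32)
mask = np.zeros(base.shape[:2], dtype=np.float32)
mask[200:400, 300:600] = 1.0                             # region to fill
Image.fromarray(blend_with_selection(base, fill, mask)).save("blended.jpg")
```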

Can you use Photoshop GenFill on video?

Well, it doesn’t create animated results, but it can work perhaps surprisingly well on regions in static shots:

It can also be used to expand the canvas of similar shots:

GenFill comes to Lightroom!

When I surveyed thousands of Photoshop customers waaaaaay back in the Before Times—y’know, summer 2022—I was struck by the fact that beyond wanting to insert things into images, and far beyond wanting to create images from scratch, just about everyone wanted better ways to remove things.

Happily, that capability has now come to Lightroom. It’s a deceptively simple change that, I believe, required a lot of work to evolve Lr’s non-destructive editing pipeline. Traditionally all edits were expressed as simple parameters, and then masks got added—but as far as I know, this is the first time Lr has ventured into transforming pixels in an additive way (that is, modifying one set of pixels, then making subsequent edits that depend on those changes). That’s a big deal, and a big step forward for the team.
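For intuition, here’s a rough conceptual sketch (my speculation, not Lightroom’s actual engine) of the difference: parametric edits can be re-rendered from the original at any time, while a generative remove produces new pixels that every later edit has to build on.

```python
# Conceptual sketch of a non-destructive edit pipeline (speculative, not
# Lightroom's engine). Parametric edits are cheap functions re-applied from the
# original on every render; a generative remove yields new pixels that get
# cached, because all subsequent edits depend on them.
from dataclasses import dataclass
from typing import Callable, Optional
import numpy as np

@dataclass
class ParametricEdit:
    """An edit expressed purely as parameters (exposure, contrast, ...)."""
    apply: Callable[[np.ndarray], np.ndarray]

@dataclass
class GenerativeRemove:
    """An edit that replaces pixels; its output is cached once computed."""
    apply: Callable[[np.ndarray], np.ndarray]
    cached: Optional[np.ndarray] = None

def render(original: np.ndarray, edits: list) -> np.ndarray:
    img = original
    for edit in edits:
        if isinstance(edit, GenerativeRemove):
            if edit.cached is None:        # expensive synthesis happens once
                edit.cached = edit.apply(img)
            img = edit.cached              # later edits build on the new pixels
        else:
            img = edit.apply(img)          # cheap, recomputed on every render
    return img
```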

A few more examples courtesy of Howard Pinsky:

Tomorrow & tomorrow & tomorrow…

I told filmmaker Paul Trillo that I’ve apparently blogged his work here more than a dozen times over the past 10 years, starting long before AI generation became a thing. That’s because he’s always been eager to explore the boundaries of what’s possible with any given set of tools. In “Notes To My Future Self,” he combines new & traditional methods to make a haunting, melancholy meditation:

And here he provides an illuminating 1-minute peek into the processes that helped him create all this in just over a week’s time:

GenFill: Eternal Sunshine Edition

I get that it’s all in good fun, but hoo boy, the “Ex-Terminator” feature from PhotoRoom makes me melancholy. Meet me in Montauk…

ChatGPT adds image editing

When DALL•E first dropped, it wasn’t full-image creation that captured my attention so much as inpainting, i.e. creating/removing objects in designated regions. Over the years (all two of ’em ;-)) I’ve lost track of whether DALL•E’s Web interface has remained available (’cause who’s needed it after Generative Fill?), but I’m very happy to see this sort of selective synthesis emerge in the ChatGPT-DALL•E environment:

It’s also nice to see more visual suggestions appearing there:

Lego + GenFill = Yosemite Magic

Or… something like that. Whatever the case, I had fun popping our little Lego family photo (captured this weekend at Yosemite Valley’s iconic Tunnel View viewpoint) into Photoshop, selecting part of the excessively large rock wall, and letting Generative Fill give me some more nature. Click or tap (if needed) to see the before/after animation:

GenFill + old photos = 🥰

Speaking of using Generative Fill to build up areas with missing detail, check out this 30-second demo of old photo restoration:

And though it’s not presently available in Photoshop, check out this use of ControlNet to revive an old family photo:

“ControlNet did a good job rejuvenating a stained blurry 70 year old photo of my 90 year old grandparents,” posted by u/prean625 in r/StableDiffusion.

Photoshop introduces Generative Expand

It’s here (in your beta copy of Photoshop, same as Generative Fill), and it works pretty much exactly as I think you’d expect: drag out crop handles, then optionally specify what you want placed into the expanded region.
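If you’re curious how that kind of expansion is typically set up, here’s a rough Python sketch (hypothetical, not Photoshop’s internals; the filename is a placeholder): grow the canvas, then hand an outpainting model the mask of the newly exposed border.

```python
# Rough sketch of what canvas expansion sets up (hypothetical, not Photoshop's
# internals): a larger canvas containing the original image, plus a mask of the
# newly exposed border that an outpainting model would be asked to fill.
import numpy as np
from PIL import Image

def expand_canvas(img, left=0, top=0, right=0, bottom=0):
    w, h = img.size
    canvas = Image.new("RGB", (w + left + right, h + top + bottom), (128, 128, 128))
    canvas.paste(img.convert("RGB"), (left, top))
    mask = np.full((h + top + bottom, w + left + right), 255, dtype=np.uint8)
    mask[top:top + h, left:left + w] = 0     # 0 = keep original pixels
    return canvas, Image.fromarray(mask)     # 255 = region to generate

# e.g. widen a shot by 300 px on each side before prompting for more scenery
canvas, fill_mask = expand_canvas(Image.open("shot.jpg"), left=300, right=300)
```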

In addition:

Today, we’re excited to announce that Firefly-powered features in Photoshop (beta) will now support text prompts in 100+ languages — enabling users around the world to bring their creative vision to life with text prompts in the language they prefer.

Guiding Photoshop’s Generative Fill through simple brushing

Check out this great little demo from Rob de Winter:


The steps, he writes, are as follows (a rough code sketch of the idea appears after the list):

  1. Draw a rough outline with the brush tool and use different colors for all parts.
  2. Go to Quick Mask Mode (Q).
  3. Go to Edit > Fill and choose a 70% grey fill. The lower this percentage, the more the end result will resemble your original sketch (i.e.: increasingly cartoon-like).
  4. Exit Quick Mask Mode (Q). You now have a 70% opaque selection.
  5. Click Generative Fill and type your prompt. Something like: summer grassland landscape with tree (first example) or river landscape with mountains (second example). You can also keep it really simple, just play with it!
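To make the mechanic concrete, here’s a minimal Python sketch of my read on why the grey percentage matters (an interpretation, not Adobe’s implementation): the partial-strength selection means some of your rough brushed colors survive the blend, steering the result toward your layout.

```python
# Minimal sketch of the brushed-sketch technique's underlying idea (my
# interpretation, not Adobe's implementation). The grey fill in Quick Mask acts
# like a partial-opacity selection: lower grey keeps more of your rough sketch
# in the final blend, so the output tracks your layout more literally.
import numpy as np
from PIL import Image

def partial_selection_blend(sketch, generated, grey_fill=0.7):
    """Blend generated pixels over the rough sketch at partial strength.

    sketch, generated: same-size RGB images (PIL Images or arrays).
    grey_fill: roughly the grey percentage used in the Quick Mask fill;
    lower values keep more of the original sketch (more cartoon-like).
    """
    s = np.asarray(sketch, dtype=np.float32)
    g = np.asarray(generated, dtype=np.float32)
    out = s * (1.0 - grey_fill) + g * grey_fill
    return Image.fromarray(out.astype(np.uint8))
```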

Russell + GenFill, Part II

When you see only one set of footprints on the sand… that’s when Russell GenFilled you out. 😅

On a chilly morning two years ago, I trekked out to the sand dunes in Death Valley to help (or at least observe) Russell on a dawn photoshoot with some amazing performers and costumes. Here he takes the imagery farther using Generative Fill in Photoshop:

On an adjacent morning, we made our way to Zabriskie Point for another shoot. Here he shows how to remove wrinkles and enhance fabric using the new tech:

And lastly—no anecdote here—he shows some cool non-photographic applications of artwork extension:

AI: Russell Brown talks Generative Fill

I owe a lot of my career to Adobe’s O.G. creative director—one of the four names on the Photoshop 1.0 splash screen—and seeing his starry-eyed exuberance around generative imaging has been one of my absolute favorite things over the past year. Now that Generative Fill has landed in Photoshop, Russell’s doing Russell things, sharing a bunch of great new tutorials. I’ll start by sharing two:

Check out his foundational Introduction to Generative Fill:

And then peep some tips specifically on getting desired shapes using selections:

Stay tuned for more soon!

Come try Generative Fill on the Web, no wait required!

There’s a roughly zero percent chance that you both 1) still find this blog & 2) haven’t already seen all the Generative Fill coverage from our launch yesterday 🎉. I’ll have a lot more to say about that in the future, but for now, you can check out the module right now and get a quick tour here:

https://twitter.com/jnack/status/1660971909327224832?s=20

And here’s a rad little workflow optimization I’m proud we were able to sneak in: