All posts by jnack

Remembering John Warnock

Like so many folks inside Adobe & far beyond, I’m saddened by the passing of our co-founder & a truly great innovator. I’m traveling this week in Ireland & thus haven’t time to compose a proper remembrance, but I’ve shared a few meaningful bits in this thread (click or tap through to see):

Firefly site gets faster, adds dark mode support & more

Good stuff just shipped on firefly.adobe.com:

  • New menu options enable sending images from the Text to Image module to Adobe Express.
  • The UI now supports Danish, Dutch, Finnish, Italian, Korean, Norwegian, Swedish, and Chinese. Go to your profile and select Preferences to change the UI language.
  • New fonts are available for Korean, Chinese (Traditional), and Chinese (Simplified).
  • Dark mode is here! Go to your profile and select Preferences to change the mode.
  • A licensing and indemnification workflow is supported for entitled users.
  • Mobile bug fixes include significant performance improvements.
  • You can now access Firefly from the Web section of CC Desktop.

You may need to perform a hard refresh (Cmd/Ctrl + Shift + R) in your browser to see the changes.

If anything looks amiss, or if there’s more you’d like to see changed, please let us know!

GenFill + old photos = 🥰

Speaking of using Generative Fill to build up areas with missing detail, check out this 30-second demo of old photo restoration:

And though it’s not presently available in Photoshop, check out this use of ControlNet to revive an old family photo:

“ControlNet did a good job rejuvenating a stained, blurry 70-year-old photo of my 90-year-old grandparents.” (by u/prean625 in r/StableDiffusion)

“Where the Fireflies Fly”

I had a ball chatting with members of the Firefly community, including our new evangelist Kris Kashtanova & O.G. designer/evangelist Rufus Deuchler. It was a really energetic & wide-ranging conversation, and if you’d like to check it out, here ya go:

Photoshop introduces Generative Expand

It’s here (in your beta copy of Photoshop, same as Generative Fill), and it works pretty much exactly as I think you’d expect: drag out crop handles, then optionally specify what you want placed into the expanded region.

In addition:

Today, we’re excited to announce that Firefly-powered features in Photoshop (beta) will now support text prompts in 100+ languages — enabling users around the world to bring their creative vision to life with text prompts in the language they prefer.

AI images → video: ridonkulous

It’s 2023, and you can make all of this with your GD telephone. And just as amazingly, a year or two from now, we’ll look back on feeling this way & find it quaint.

Food for thought: A more playful Firefly?

What’s a great creative challenge?
What fun games make you feel more connected with friends?
What’s the “Why” (not just the “What” & “How”) at the heart of generative imaging?

These are some of the questions we’ve been asking ourselves as we seek out some delightful, low-friction ways to get folks creating & growing their skills. To that end I had a ball joining my teammates Candice, Beth Anne, and Gus for a Firefly livestream a couple of weeks ago, engaging in a good chat with the audience as we showed off some of the weirder & more experimental ideas we’ve had. I’ve cued up this vid to roughly the part where we get into those ideas, and I’d love to hear your thoughts on those—or really anything in the whole conversation. TIA!

Like DreamBooth? Meet HyperDreamBooth.

10,000x smaller & 25x faster? My old Google teammates & their collaborators, who changed the generative game last year by enabling custom model training, are now proposing to upend things further by enabling training via a single image—and massively faster, too. Check out this thread:

https://twitter.com/natanielruizg/status/1679893292618752000?s=20

Hola! Willkommen! Bem-vindo! Firefly goes global

Check it out!

https://twitter.com/Adobe/status/1679114836322680832?s=20

Details, if you’re interested:

What’s new with Adobe Firefly?

Firefly can now support prompts in over 100 languages. Also, the Firefly website is now available in Japanese, French, German, Spanish, and Brazilian Portuguese, with additional languages to come.

How are the translations of prompts done?

Support for over 100 languages is in beta and uses machine translation to English provided by Microsoft Translator. This means that translations are done by computers and not manually by humans.
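
In other words, the flow is translate-first, then generate. Here’s a minimal, hypothetical Python sketch of that pipeline, assuming Microsoft’s public Translator v3 REST endpoint; the generate_image() call and credential values are placeholders, not a real Firefly API:

    import requests

    # Translate a prompt to English via Microsoft Translator (v3 REST API),
    # mirroring the machine-translation step described above.
    def translate_to_english(prompt: str, key: str, region: str) -> str:
        resp = requests.post(
            "https://api.cognitive.microsofttranslator.com/translate",
            params={"api-version": "3.0", "to": "en"},
            headers={
                "Ocp-Apim-Subscription-Key": key,
                "Ocp-Apim-Subscription-Region": region,
            },
            json=[{"Text": prompt}],
        )
        resp.raise_for_status()
        return resp.json()[0]["translations"][0]["text"]

    # english = translate_to_english("paysage de montagne au coucher du soleil", KEY, REGION)
    # image = generate_image(english)  # hypothetical generation call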

What if I see errors in translations or my prompt isn’t accurately translated?

Because Firefly uses machine translation, and given the nuances of each language, it’s possible certain generations based on translated prompts may be inaccurate or unexpected. You can report negative translation results using the Report tool available in every image.

Can I type in a prompt in another language in the Adobe Express, Photoshop, and Illustrator beta apps?

Not at this time, though this capability will be coming to those apps in the future.

Which languages will the Firefly site be in on 7/12?

We are localizing the Firefly website into Japanese, French, German, Spanish, Brazilian Portuguese and expanding to others on a rolling basis.

AI: Talking Firefly & the Future

I had a ball chatting last week with Farhad & Faraz on the Bad Decisions Podcast. (My worst decision was to so fully embrace vacation that I spaced on when we were supposed to chat, leaving me to scramble from the dog park & go tear-assing home to hop on the chat. Hence my terrible hair, which Farhad more than offset with his. 😌) We had a fast-paced, wide-ranging conversation, and I hope you find it valuable. As always I’d love to hear any & all feedback on what we’re doing & what you need.

Firefly livestream: “Using AI in the Real World”

If you enjoyed yesterday’s session with Tomasz & Lisa, I think you’ll really dig this one as well:

Join Lisa Carney and Jesús Ramirez as they walk you through their real-world projects and how they use generative AI tools in their workflows. They’ll show you how they make revisions based on client feedback, create different formats from a single piece, and collaborate using Creative Cloud Libraries. Stay tuned to check out some of their work from real-life TV shows!

Guest Lisa Carney is a photographer and photo retoucher based in LA. Host Jesús Ramirez is a San Francisco Bay Area Graphic Designer and the founder of the Photoshop Training Channel on YouTube.

Firefly livestream: Pro compositors show how they use the tech

Tomasz Opasinski & Lisa Carney are *legit* Hollywood photo compositors; in Friday’s Adobe Live session, they showed how they use Firefly to design movie posters.

Interestingly, easily the first half of the session had little if anything to do with AI or other technology per se, and everything to do with the design language of posters (e.g., comedies being set on white, Japanese posters emphasizing text)—which I found just as intriguing.

Fool me thrice? Insta360 GO 3 arrives

Having really enjoyed my Insta360 One X, X2, and X3 cams over the years, I’ve bought—and been burned by—the tiny GO & GO2:

And yet… I still believe that having an unobtrusive, AI-powered “wearable photographer” (as Google Clips sought to be) is a worthy and potentially game-changing north star. (See the second link above for some interesting history & perspective). So, damn if I’m not looking at the new GO 3 and thinking, “Maybe this time Lucy won’t pull away the football…”

Here’s Casey Neistat’s perspective:

Guiding Photoshop’s Generative Fill through simple brushing

Check out this great little demo from Rob de Winter:


The steps are, he writes,

  1. Draw a rough outline with the brush tool and use different colors for all parts.
  2. Go to Quick Mask Mode (Q).
  3. Go to Edit > Fill and choose a 70% grey fill. The lower this percentage, the more the end result will resemble your original sketch (i.e., increasingly cartoon-like).
  4. Exit Quick Mask Mode (Q). You now have a 70% opaque selection.
  5. Click Generative Fill and type your prompt, something like “summer grassland landscape with tree” (first example) or “river landscape with mountains” (second example). You can also keep it really simple; just play with it!
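
If you’re curious why the grey percentage matters: a partially opaque selection effectively alpha-blends the generated fill with your original sketch. As a rough conceptual analogy only (not Photoshop’s actual compositing), here’s a tiny Pillow sketch with hypothetical filenames:

    from PIL import Image

    # Rough analogy: a 70% grey Quick Mask behaves like a 70%-opaque selection,
    # so the generated fill lands at ~70% strength over the original sketch.
    sketch = Image.open("sketch.png").convert("RGB")        # hypothetical file
    generated = Image.open("generated.png").convert("RGB")  # hypothetical file (same size)

    opacity = 0.70  # the grey-fill percentage from step 3
    # Lower opacity keeps the result closer to the original sketch.
    result = Image.blend(sketch, generated, opacity)
    result.save("blended.png")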

Google uses generative imaging for virtual try-on

In my time at Google, we tried and failed a lot to make virtual try-on happen using AR. It’s extremely hard to…

  • measure bodies (to make buying decisions based on fit)
  • render virtual clothing accurately (placing virtual clothing over real clothing, or getting users to disrobe, which is even harder; simulating materials in real time)
  • get a sizable corpus of 3D assets (in a high-volume, low-margin industry)

Outside of a few limited pockets (trying on makeup, glasses, and shoes—all for style, not for fit), I haven’t seen anyone (Amazon, Snap, etc.) crack the code here. Researcher Ira Kemelmacher-Shlizerman (who last I heard was working on virtual mirrors, possibly leveraging Google’s Stargate tech) acknowledges this:

Current techniques like geometric warping can cut-and-paste and then deform a clothing image to fit a silhouette. Even so, the final images never quite hit the mark: Clothes don’t realistically adapt to the body, and they have visual defects like misplaced folds that make garments look misshapen and unnatural.

So, it’s interesting to see Google trying again (“Try on clothes with generative AI”):

This week we introduced an AI-powered virtual try-on feature that uses the Google Shopping Graph to show you how clothing will look on a diverse set of real models.

Our new guided refinements can help U.S. shoppers fine-tune products until you find the perfect piece. Thanks to machine learning and new visual matching algorithms, you can refine using inputs like color, style and pattern.

They’ve posted a technical overview and a link to their project site:

Inspired by Imagen, we decided to tackle VTO using diffusion — but with a twist. Instead of using text as input during diffusion, we use a pair of images: one of a garment and another of a person. Each image is sent to its own neural network (a U-net) and shares information with each other in a process called “cross-attention” to generate the output: a photorealistic image of the person wearing the garment. This combination of image-based diffusion and cross-attention make up our new AI model.
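
To make that concrete, here’s a minimal, hypothetical PyTorch sketch of the kind of cross-attention described: person-branch features act as queries over garment-branch features. Shapes, dimensions, and names are illustrative assumptions, not Google’s actual model:

    import torch
    import torch.nn as nn

    class CrossAttentionBlock(nn.Module):
        """Person features attend to garment features (illustrative only)."""
        def __init__(self, dim: int = 256, heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, person_feats, garment_feats):
            # Queries come from the person branch; keys/values from the garment
            # branch, letting the person image "pull in" garment appearance.
            attended, _ = self.attn(person_feats, garment_feats, garment_feats)
            return self.norm(person_feats + attended)

    # U-net feature maps flattened to token sequences: (batch, tokens, dim)
    person = torch.randn(1, 64 * 64, 256)
    garment = torch.randn(1, 64 * 64, 256)
    fused = CrossAttentionBlock()(person, garment)  # -> (1, 4096, 256)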

They note that “We don’t promise fit and for now focus only on visualization of the try on. Finally, this work focused on upper body clothing.”

It’s a bit hard to find exactly where one can try out the experience. They write:

Starting today, U.S. shoppers can virtually try on women’s tops from brands across Google, including Anthropologie, Everlane, H&M and LOFT. Just tap products with the “Try On” badge on Search and select the model that resonates most with you.

“Don’t give up on the real world”: A great new campaign from Nikon

I’m really enjoying this new campaign from Nikon Peru:

“This obsession with the artificial is making us forget that our world is full of amazing natural places that are often stranger than fiction.

“We created a campaign with real unbelievable natural images taken with our cameras, with keywords like those used with Artificial Intelligence.”

Check out the resulting 2-minute piece:

And here are some of the stills, courtesy of PetaPixel:

New experimental filmmaking from Paul Trillo

Paul is continuing to explore what’s possible by generating short clips using Runway’s Gen-2 text-to-video model. Check out the melancholy existential musings of “Thank You For Not Answering”:

And then, to entirely clear your mental palate, there’s the just deeply insane TRUCX!

Adobe will offer Firefly indemnification

Per Reuters:

Adobe Inc. said on Thursday it will offer Firefly, its artificial intelligence tool for generating images, to its large business customers, with financial indemnity for copyright challenges involving content made with the tools.

In an effort to give those customers confidence, Adobe said it will offer indemnification for images created with the service, though the company did not give financial or legal details of how the program will work.

“We financially are standing behind all of the content that is produced by Firefly for use either internally or externally by our customers,” Ashley Still, senior vice president of digital media at Adobe, told Reuters.

Demo: Using Firefly for poster creation

My teammates Danielle Morimoto & Tomasz Opasinski are accomplished artists who recently offered a deep dive on creating serious, ambitious work (not just one-and-done prompt generations) using Adobe Firefly. Check it out:

Explore the practical benefits of using Firefly in real-world projects with Danielle & Tomasz. Today, they’ll walk through the poster design process in Photoshop using prompts generated in Firefly. Tune into the live stream and join them as they discuss how presenting more substantial visuals to clients goes beyond simple sketches, and how this creative process could evolve in the future. Get ready to unlock new possibilities of personalization in your work, reinvent yourself as an artist or designer, and achieve what was once unimaginable. Don’t miss this opportunity to level up your creative journey and participate in this inspiring session!

Russell + GenFill, Part II

When you see only one set of footprints on the sand… that’s when Russell GenFilled you out. 😅

On a chilly morning two years ago, I trekked out to the sand dunes in Death Valley to help (or at least observe) Russell on a dawn photoshoot with some amazing performers and costumes. Here he takes the imagery further using Generative Fill in Photoshop:

On an adjacent morning, we made our way to Zabriskie Point for another shoot. Here he shows how to remove wrinkles and enhance fabric using the new tech:

And lastly—no anecdote here—he shows some cool non-photographic applications of artwork extension:

AI: Russell Brown talks Generative Fill

I owe a lot of my career to Adobe’s O.G. creative director—one of the four names on the Photoshop 1.0 splash screen—and seeing his starry-eyed exuberance around generative imaging has been one of my absolute favorite things over the past year. Now that Generative Fill has landed in Photoshop, Russell’s doing Russell things, sharing a bunch of great new tutorials. I’ll start by sharing two:

Check out his foundational Introduction to Generative Fill:

And then peep some tips specifically on getting desired shapes using selections:

Stay tuned for more soon!

LinkedIn Learning tackles Firefly

Check out this new course from longtime Adobe expert Jan Kabili:

Adobe Firefly is an exciting new generative AI imaging tool from Adobe. With Firefly, you can create unique images and text effects by typing text prompts and choosing from a variety of style inputs. In this course, imaging instructor and Adobe trainer Jan Kabili introduces Firefly. She explains what Firefly can offer to your creative workflow, and what makes it unique in the generative AI field. She demonstrates how to generate images from prompts, built-in styles, and reference images, and shares tips for generating one-of-a-kind text effects. Finally, Jan shows you how to use images generated by Firefly to create a unique composite in Photoshop.

A fun conversation: Firefly & the future

Here’s me, talking fast about anything & everything related to Firefly and possibilities around creative tools. Give ‘er a listen if you’re interested (or, perhaps, are just suffering from insomnia 😌):