Fun stuff from Red Giant:
All posts by jnack
Meshy promises AI-driven texturing & more
Among other magic, “Simply input an image, and our AI will automatically turn 2D into 3D in less than 15 minutes.”
Firefly: Making a lo-fi animation with Adobe Express
Check out this quick tutorial from Kris Kashtanova:
Tutorial: How to make a lo-fi animation with new Adobe Express!
Adobe Express is available to everyone today and I made this super short tutorial for you of what’s possible. It has GenAI, background remove, making cool animations and more.
Get it here: https://t.co/PovZvcmDqL pic.twitter.com/jG3hAYoGKk— Kris Kashtanova (@icreatelife) August 16, 2023
Irish panos 🏰🚁☘️
Just a wee bit o’ the droning for ya, overflying our cousins’ ancient neighbors (such show-offs!):
DJI “Spotlight Mode” looks rad
I’ve been flying a bunch here in Ireland this week & can’t wait to share some good stuff soon. (Weren’t transatlantic plane rides meant for video editing?) In the meantime, I’m only now learning of a really promising-looking way to have the drone focus on a subject of interest, leaving the operator free to vary other aspects of flight (height, rotation, etc.). Check it out:
Remembering John Warnock
Like so many folks inside Adobe & far beyond, I’m saddened by the passing of our co-founder & a truly great innovator. I’m traveling this week in Ireland & thus haven’t time to compose a proper remembrance, but I’ve shared a few meaningful bits in this thread (click or tap through to see):
I am so sorry to hear of the passing of Adobe cofounder John Warnock. He changed all our lives, and those of millions more, for the better. God bless and godspeed, sir. 🙏 pic.twitter.com/ZHZDdkqcOO
— John Nack (@jnack) August 20, 2023
Generative Fill stars in a Fox Sports ad
You know you’ve entered the cultural conversation when things like this happen. I’m reminded of the first Snapchat filters inspiring real-world Halloween costumes showing puking rainbows & more.
Who will move on to the FINAL?! 🤩 pic.twitter.com/22dHaiefyl
— FOX Soccer (@FOXSoccer) August 15, 2023
Firefly site gets faster, adds dark mode support & more
Good stuff just shipped on firefly.adobe.com:
- New menu options enable sending images from the Text to Image module to Adobe Express.
- The UI now supports Danish, Dutch, Finnish, Italian, Korean, Norwegian, Swedish, and Chinese. Go to your profile and select preferences to change the UI language.
- New fonts are available for Korean, Chinese (Traditional), and Chinese (Simplified).
- Dark mode is here! Go to your profile and select preferences to change the mode.
- A licensing and indemnification workflow is supported for entitled users.
- Mobile bug fixes include significant performance improvements.
- You can now access Firefly from the Web section of CC Desktop.
You may need to perform a hard refresh in your browser (Cmd/Ctrl + Shift + R) to see the changes.
If anything looks amiss, or if there’s more you’d like to see changed, please let us know!

Quick PSA: Update your Photoshop beta build to keep using GenFill
The title says pretty much everything, but FYI:
Behind the scenes on “Asteroid City”
I’ve yet to see Wes Anderson’s latest, but I enjoyed this brief peek into how it was made:
Bonus: Bird!
Matilda Zombie
This is me, giving my now-48yo brain a birthday break by skipping any attempt at meaningfulness & instead sharing just some pure exuberant mayhem. 🤘😛🤘
Alpaca brings sketch-to-image, more to Photoshop
Super exciting stuff from this new plugin, free while it’s in beta:
1/ Introducing Alpaca’s public beta (goodbye waitlist!) for @Photoshop. So many exciting new features to share with this update! More below. Try it for free here: https://t.co/j2GAxt8VPY pic.twitter.com/gJHD5iv3vd
— Alpaca (@alpacaml) August 3, 2023
GenFill + old photos = 🥰
Speaking of using Generative Fill to build up areas with missing detail, check out this 30-second demo of old photo restoration:
Restoring old photos using Generative Fill in @Photoshop?! 🤯 pic.twitter.com/UlXj5paDTD
— Howard Pinsky (@Pinsky) August 3, 2023
And though it’s not presently available in Photoshop, check out this use of ControlNet to revive an old family photo:
ControlNet did a good job rejuvenating a stained blurry 70 year old photo of my 90 year old grandparents.
by u/prean625 in StableDiffusion
Clever hair selection via Generative Fill
I found PiXimperfect’s clever use of Quick Mask + GenFill interesting. It’s basically “Select Subject -> Quick Mask -> paint over hair edges -> generate,” filling in areas where the original selection/removal process left something to be desired.
“Where the Fireflies Fly”
I had a ball chatting with members of the Firefly community, including our new evangelist Kris Kashtanova & O.G. designer/evangelist Rufus Deuchler. It was a really energetic & wide-ranging conversation, and if you’d like to check it out, here ya go:
Here’s a recording of today’s Space with @rufusd and @jnack from @Adobe
We’ll definitely do this again. Thank you all for joining us today.
— Kris Kashtanova (@icreatelife) August 3, 2023
Using GenFill to make Photoshop brushes
I dig this simple, creative application from Nemanja Sekulic:
Photoshop introduces Generative Expand
It’s here (in your beta copy of Photoshop, same as Generative Fill), and it works pretty much exactly as I think you’d expect: drag out crop handles, then optionally specify what you want placed into the expanded region.
In addition:
Today, we’re excited to announce that Firefly-powered features in Photoshop (beta) will now support text prompts in 100+ languages — enabling users around the world to bring their creative vision to life with text prompts in the language they prefer.

AI images -> video: ridonkulous
It’s 2023, and you can make all of this with your GD telephone. And just as amazingly, a year or two from now, we’ll look back on feeling this way & find it quaint.
GN 💫 I got too excited and edited a first video draft on my phone in my trusty splice app 👀
images generated in #midjourney, edited in #photoshop and animated in #runway gen2 https://t.co/jKBeMevnqM pic.twitter.com/QQyu4FqqpG
— Julie W. Design (@juliewdesign_) July 24, 2023
AE + GPT: Good for you, good for me
Check out my teammate CJ’s exploration around using ChatGPT to produce expression code for use in After Effects:
AI Barbenheimer 🤯
AR: Google takes Space Invaders to the streets

“SPACE INVADERS: World Defense,” a mobile game on Android and iOS, invites players from around the world to get outside and defend the Earth. Space Invaders spawn from buildings and rooftops, hide behind structures and hover in the sky. Through global immersive gameplay, players from all over the world have to work together to save the planet.
“Reacting to YOUR INSANE AI Generated ‘Photos'”
What’s real, and what’s Generative Fill? Watch as photographer Peter McKinnon tries to tell the difference in real time!
Food for thought: A more playful Firefly?
What’s a great creative challenge?
What fun games make you feel more connected with friends?
What’s the “Why” (not just the “What” & “How”) at the heart of generative imaging?
These are some of the questions we’ve been asking ourselves as we seek out some delightful, low-friction ways to get folks creating & growing their skills. To that end I had a ball joining my teammates Candice, Beth Anne, and Gus for a Firefly livestream a couple of weeks ago, engaging in a good chat with the audience as we showed off some of the weirder & more experimental ideas we’ve had. I’ve cued up this vid to roughly the part where we get into those ideas, and I’d love to hear your thoughts on those—or really anything in the whole conversation. TIA!
Like DreamBooth? Meet HyperDreamBooth.
10,000x smaller & 25x faster? My old Google teammates & their collaborators, who changed the generative game last year by enabling custom model training, are now proposing to upend things further by enabling training via a single image—and massively faster, too. Check out this thread:
“Stable Doodle”: simple drawing-to-image
In the corner of my eye, our very stable (golden)doodle is happily sawing logs while I share this link to Stable Doodle, a fun little drawing tool from the folks at Stable Diffusion:
It helps produce sketch-to-image combos like this:

Meme legends meet Generative Fill
Hola! Willkommen! Bem-vindo! Firefly goes global
Check it out!
Details, if you’re interested:
What’s new with Adobe Firefly?
Firefly can now support prompts in over 100 languages. Also, the Firefly website is now available in Japanese, French, German, Spanish, and Brazilian Portuguese, with additional languages to come.
How are the translations of prompts done?
Support for over 100 languages is in beta and uses machine translation to English provided by Microsoft Translator. This means that translations are done by computers and not manually by humans.
What if I see errors in translations or my prompt isn’t accurately translated?
Because Firefly uses machine translation, and given the nuances of each language, it’s possible certain generations based on translated prompts may be inaccurate or unexpected. You can report negative translation results using the Report tool available in every image.
Can I type in a prompt in another language in the Adobe Express, Photoshop, and Illustrator beta apps?
Not at this time, though this capability will be coming to those apps in the future.
Which languages will the Firefly site be in on 7/12?
We are localizing the Firefly website into Japanese, French, German, Spanish, Brazilian Portuguese and expanding to others on a rolling basis.
5-Minute Tutorial: How To Use Firefly
On the extremely off chance that you’re new to Firefly (or, more likely, you know someone who is), check out this handy intro speed run from evangelist Paul Trani:
“Photos Of Hollywood’s Biggest Stars Hanging With Their Younger Selves”
AI: Talking Firefly & the Future
I had a ball chatting last week with Farhad & Faraz on the Bad Decisions Podcast. (My worst decision was to so fully embrace vacation that I spaced on when we were supposed to chat, leaving me to scramble from the dog park & go tear-assing home to hop on the chat. Hence my terrible hair, which Farhad more than offset with his. 😌) We had a fast-paced, wide-ranging conversation, and I hope you find it valuable. As always I’d love to hear any & all feedback on what we’re doing & what you need.
Firefly livestream: “Using AI in the Real World”
If you enjoyed yesterday’s session with Tomasz & Lisa, I think you’ll really dig this one as well:
Join Lisa Carney and Jesús Ramirez as they walk you through their real-world projects and how they use generative AI tools in their workflow. Watch as they make revisions from client feedback, create different formats from a single piece, and collaborate using Creative Cloud Libraries. Stay tuned to check out some of their work from real-life TV shows!
Guest Lisa Carney is a photographer and photo retoucher based in LA. Host Jesús Ramirez is a San Francisco Bay Area Graphic Designer and the founder of the Photoshop Training Channel on YouTube.
Firefly livestream: Pro compositors show how they use the tech
Tomasz Opasinski & Lisa Carney are *legit* Hollywood photo compositors; in Friday’s Adobe Live session, they showed how they use Firefly to design movie posters.
Interestingly, easily the first half had little if anything to do with AI or other technology per se, and everything to do with the design language of posters (e.g. comedies being set on white, Japanese posters emphasizing text)—which I found just as intriguing.
Fool me thrice? Insta360 GO 3 arrives
Having really enjoyed my Insta360 One X, X2, and X3 cams over the years, I’ve bought—and been burned by—the tiny GO & GO2:
- In 2019 I wrote “The tiny Insta360 GO looks clever.” Sadly, I found it far more glitchy than clever.
- In 2021 I wrote “Insta360 GO 2: Finally a wearable cam that doesn’t suck?” It was better, but I still couldn’t count on it to actually record, so it has largely gathered dust.
And yet… I still believe that having an unobtrusive, AI-powered “wearable photographer” (as Google Clips sought to be) is a worthy and potentially game-changing north star. (See the second link above for some interesting history & perspective.) So, damn if I’m not looking at the new GO 3 and thinking, “Maybe this time Lucy won’t pull away the football…”
Here’s Casey Neistat’s perspective:
A stunning timelapse from Insta360
Guiding Photoshop’s Generative Fill through simple brushing
Check out this great little demo from Rob de Winter:
OK, this makes @Photoshop Generative Fill even more powerful. Make a rough sketch with the brush tool and generate an image based on it.❤️Find all the steps in the thread below #GenerativeAI #photoshop #photoshopai #beta #generativefill #controlnet @Adobe pic.twitter.com/wj5dEwhUKd
— Rob de Winter (@robdewinter) June 25, 2023
The steps are, he writes,
- Draw a rough outline with the brush tool and use different colors for all parts.
- Go to Quick Mask Mode (Q).
- Go to Edit > Fill and choose a 70% grey fill. The lower this percentage, the more the end result will resemble your original sketch (i.e., increasingly cartoon-like).
- Exit Quick Mask Mode (Q). You now have a 70% opaque selection.
- Click Generative Fill and type your prompt. Something like: summer grassland landscape with tree (first example) or river landscape with mountains (second example). You can also keep it really simple, just play with it!

Google uses generative imaging for virtual try-on
In my time at Google, we tried and failed a lot to make virtual try-on happen using AR. It’s extremely hard to…
- measure bodies (to make buying decisions based on fit)
- render virtual clothing accurately (placing virtual clothing over real clothing, or getting shoppers to disrobe, which is even harder; plus simulating materials in real time)
- get a sizable corpus of 3D assets (in a high-volume, low-margin industry)
Outside of a few limited pockets (trying on makeup, glasses, and shoes—all for style, not for fit), I haven’t seen anyone (Amazon, Snap, etc.) crack the code here. Researcher Ira Kemelmacher-Shlizerman (who last I heard was working on virtual mirrors, possibly leveraging Google’s Stargate tech) acknowledges this:
Current techniques like geometric warping can cut-and-paste and then deform a clothing image to fit a silhouette. Even so, the final images never quite hit the mark: Clothes don’t realistically adapt to the body, and they have visual defects like misplaced folds that make garments look misshapen and unnatural.
So, it’s interesting to see Google trying again (“Try on clothes with generative AI”):
This week we introduced an AI-powered virtual try-on feature that uses the Google Shopping Graph to show you how clothing will look on a diverse set of real models.
Our new guided refinements can help U.S. shoppers fine-tune products until you find the perfect piece. Thanks to machine learning and new visual matching algorithms, you can refine using inputs like color, style and pattern.

They’ve posted a technical overview and a link to their project site:
Inspired by Imagen, we decided to tackle VTO using diffusion — but with a twist. Instead of using text as input during diffusion, we use a pair of images: one of a garment and another of a person. Each image is sent to its own neural network (a U-net) and shares information with each other in a process called “cross-attention” to generate the output: a photorealistic image of the person wearing the garment. This combination of image-based diffusion and cross-attention make up our new AI model.
They note that “We don’t promise fit and for now focus only on visualization of the try on. Finally, this work focused on upper body clothing.”
It’s a bit hard to find exactly where one can try out the experience. They write:
Starting today, U.S. shoppers can virtually try on women’s tops from brands across Google, including Anthropologie, Everlane, H&M and LOFT. Just tap products with the “Try On” badge on Search and select the model that resonates most with you.
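To make the “two U-Nets sharing information via cross-attention” idea above a bit more concrete, here’s a minimal NumPy sketch of a single cross-attention step, where person-branch feature tokens (queries) attend to garment-branch tokens (keys/values). This is purely my illustration of the general mechanism, not Google’s actual code; the shapes and projection weights are made up for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(person_feats, garment_feats, Wq, Wk, Wv):
    """One cross-attention step: person tokens (queries) attend to
    garment tokens (keys/values), pulling garment detail into the
    person branch. person_feats: (Np, d), garment_feats: (Ng, d)."""
    Q = person_feats @ Wq            # (Np, d)
    K = garment_feats @ Wk           # (Ng, d)
    V = garment_feats @ Wv           # (Ng, d)
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))  # (Np, Ng) attention weights
    return attn @ V                  # (Np, d) garment-informed features

# Toy example: 16 person tokens and 12 garment tokens, 8-dim features
rng = np.random.default_rng(0)
d = 8
person = rng.normal(size=(16, d))
garment = rng.normal(size=(12, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = cross_attention(person, garment, Wq, Wk, Wv)
print(out.shape)  # (16, 8)
```

In the real model, each branch would be a full U-Net denoising a diffusion latent, with cross-attention layers like this one inserted at multiple resolutions so the person branch can “look at” the garment throughout generation.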
Firefly vs. male-pattern baldness
Salvation arrives, my thinning kings. 😌
And tangentially, see how baldness helped enhance Photoshop masking 15 years ago. 👴🏻
Generative Fill gets… intense!
Check out how you can vary intensity in your selections, leading to more realistic rendering & blending:
Note that in the Web module, you can vary intensity directly while painting a selection:

“Don’t give up on the real world”: A great new campaign from Nikon
I’m really enjoying this new campaign from Nikon Peru:
“This obsession with the artificial is making us forget that our world is full of amazing natural places that are often stranger than fiction.
“We created a campaign with real unbelievable natural images taken with our cameras, with keywords like those used with Artificial Intelligence.”
Check out the resulting 2-minute piece:
And here are some of the stills, courtesy of PetaPixel:



Art of the Title: “Jack Ryan” by Imaginary Forces
I’ve been struck by the beautiful, uncanny parallels drawn in the title sequence for Amazon’s “Jack Ryan” series:
It comes as no surprise to learn that they were made by Imaginary Forces, whose work I’ve admired since the 90’s. Check out this great behind-the-scenes piece from creative director Karin Fong & team:
New experimental filmmaking from Paul Trillo
Paul is continuing to explore what’s possible by generating short clips using Runway’s Gen-2 text-to-video model. Check out the melancholy existential musings of “Thank You For Not Answering”:
And then, to entirely clear your mental palate, there’s the just deeply insane TRUCX!
Adobe will offer Firefly indemnification
Per Reuters:
Adobe Inc. said on Thursday it will offer Firefly, its artificial intelligence tool for generating images, to its large business customers, with financial indemnity for copyright challenges involving content made with the tools.
In an effort to give those customers confidence, Adobe said it will offer indemnification for images created with the service, though the company did not give financial or legal details of how the program will work.
“We financially are standing behind all of the content that is produced by Firefly for use either internally or externally by our customers,” Ashley Still, senior vice president of digital media at Adobe, told Reuters.
AI: Illustrator introduces Firefly-powered recoloring
Design history: How those Nagel paintings ended up everywhere
…or at least in what seems like every marginal hair salon. I love bite-sized little cultural insights like this:
AI-made Lego minifigs
I love this kind of silliness—just the right mix of idiom & constraint to bring out folks’ creativity.
Firefly: “Using Generative AI to Enhance Your 3D Workflow”
I love seeing Michael Tanzillo’s Illustrator 3D -> Adobe Stager -> Photoshop workflow for making and enhancing the adorable “Little Miss Sparkle Bao Bao”:
Demo: Using Firefly for poster creation
My teammates Danielle Morimoto & Tomasz Opasinski are accomplished artists who recently offered a deep dive on creating serious, ambitious work (not just one-and-done prompt generations) using Adobe Firefly. Check it out:
Explore the practical benefits of using Firefly in real-world projects with Danielle & Tomasz. Today, they’ll walk through the poster design process in Photoshop using prompts generated in Firefly. Tune into the live stream and join them as they discuss how presenting more substantial visuals to clients goes beyond simple sketches, and how this creative process could evolve in the future. Get ready to unlock new possibilities of personalization in your work, reinvent yourself as an artist or designer, and achieve what was once unimaginable. Don’t miss this opportunity to level up your creative journey and participate in this inspiring session!
“20 Epic Uses of Generative Fill”
Check out a speed run of fun, practical applications courtesy of PiXimperfect:
Russell + GenFill, Part II
When you see only one set of footprints on the sand… that’s when Russell GenFilled you out. 😅
On a chilly morning two years ago, I trekked out to the sand dunes in Death Valley to help (or at least observe) Russell on a dawn photoshoot with some amazing performers and costumes. Here he takes the imagery farther using Generative Fill in Photoshop:
On an adjacent morning, we made our way to Zabriskie Point for another shoot. Here he shows how to remove wrinkles and enhance fabric using the new tech:
And lastly—no anecdote here—he shows some cool non-photographic applications of artwork extension:
AI: Russell Brown talks Generative Fill
I owe a lot of my career to Adobe’s O.G. creative director—one of the four names on the Photoshop 1.0 splash screen—and seeing his starry-eyed exuberance around generative imaging has been one of my absolute favorite things over the past year. Now that Generative Fill has landed in Photoshop, Russell’s doing Russell things, sharing a bunch of great new tutorials. I’ll start by sharing two:
Check out his foundational Introduction to Generative Fill:
And then peep some tips specifically on getting desired shapes using selections:
Stay tuned for more soon!


