Monthly Archives: March 2023

Upcoming Firefly events

Come meet Adobe folks & fellow creators in person!

  • London (4/15) (Rufus Deuchler presenting)
  • NYC (4/20) (Terry White + Brooke Hopper presenting)
  • SF (4/26) (Paul Trani + Brooke Hopper presenting)

Here’s info for the London event:

———

We are finally back in London! Join us for a VERY special creative community night.

Get to know the latest from Adobe creative tools, Adobe Express, and Adobe Firefly. Learn why Adobe Express belongs on your list of tools for quickly creating standout content for social media and beyond, using beautiful templates from Adobe. We’ll show you how to bring your designed assets from Photoshop into your workflow.

We’re also presenting Adobe Firefly, a generative AI made for creators. With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Get ready to create unique posters, banners, social posts, and more with a simple text prompt. With Firefly, the plan is to do this and more — like uploading a mood board to generate totally original, customizable content.

Meet creators, artists, writers, and designers. Plus hang out with Chris Do and The Futur team! With sips, snacks, and a spotlight on inspiring projects — you won’t want to miss this.  


Space is limited, so please register now.

Good Firefly perspective: livestream & space

I enjoyed hearing my colleagues & outside folks discussing the origin, vision, and road ahead for Adobe Firefly in this livestream…

Eric Snowden is the VP of Design at Adobe and is responsible for the product design teams for the Digital Media business, which include Creative Cloud… Nishat Akhtar is a designer and creative leader with 15+ years of experience in designing and leading initiatives for global brands… Danielle Morimoto is a Design Manager for Adobe Express, based in San Francisco.

…and this Twitter space, featuring our group’s CTO Ely Greenfield, along with creator Karen X. Cheng (whose work I’ve featured here countless times), illustrator & brush creator Kyle T. Webster, and director of design Samantha Warren. Scrub ahead to about 2:45 to get to the conversation.

AI does the impossible: making the first actually likable Vanilla Ice song

Made with genuine diabeetus! All right stop, collaborate and listen:

On one hand, you may be convinced we somehow assembled the original cast of The Matrix alongside the ghost of Wilford Brimley to record one of the greatest rap covers of all time. On the other hand, you may find it more believable that we’ve been experimenting with AI voice trainers and lip flap technology in a way that will eventually open up some new doors for how we make videos. You have to admit, either option kind of rules.

Some great Firefly reels

Hey, remember when we launched Adobe Firefly what feels like 63 years ago? 😅 OMG, what a week. I am so tired & busy trying to get folks access (thanks for your patience!), answer questions, and more that I’ve barely had time to catch up on all the great content folks are making. I’ll work on that soon, and in the meantime, here are three quick clips that caught my eye.

First, OG author Deke McClelland shows off type effects:

@dekenow: “Create Type Effects Out of Thin Air with Adobe Firefly”

Next, Kyle Nutt does some light painting, compositing himself into Firefly images:

And here Don Allen Stevenson puts Firefly creations into augmented reality with the help of Adobe Aero:

A creator’s perspective on Firefly & ethics

I really appreciate hearing Karen X. Cheng’s thoughts on the essential topics of consent, compensation, and more. We’ve been engaging in lots of very helpful conversations with creators, and there’s of course much more to sort through. As always, your perspective here is most welcome.

Introducing Adobe Firefly!

I’m so pleased—and so tired! 😅—to be introducing Adobe Firefly, the new generative imaging foundation that a passionate band of us have been working to bring to the world. Check out the high-level vision…

…as well as the part more directly in my wheelhouse: the interactive preview site & this overview of great stuff that’s waiting in the wings:

I’ll have a lot more to share soon. In the meantime, we’d love to hear what you think of what you see so far!

Midjourney v5 arrives

Now I just need some actual time to try it out!

Thread of visual comparisons against the already amazing v4:

https://twitter.com/nickfloats/status/1636116959267004416

Animation: “Grand Canons”

Enjoy, if you will, this “visual symphony of everyday objects”:

A brush makes watercolors appear on a white sheet of paper. An everyday object takes shape, drawn with precision by an artist’s hand. Then two, then three, then four… Superimposed, condensed, multiplied, thousands of documentary drawings in successive series come to life on the screen, composing a veritable visual symphony of everyday objects. The accumulation, both fascinating and dizzying, takes us on a trip through time.

Kottke notes, “More of Biet’s work can be found on his website or on Instagram.”

Stable Diffusion can draw the contents of your brain

“It’s all in your head.” — Gorillaz

I’ve spent the last ~year talking about my brain being “DALL•E-pilled,” where I’ve started seeing just about everything (e.g. a weird truck) as some kind of AI manifestation. But that’s nothing compared to using generative imaging models to literally see your thoughts:

Researchers Yu Takagi and Shinji Nishimoto, from the Graduate School of Frontier Biosciences at Osaka University, recently wrote a paper (PDF) outlining how latent diffusion models can reconstruct high-resolution images from human brain activity recorded via functional magnetic resonance imaging (fMRI), “without the need for training or fine-tuning of complex deep generative models” (via Vice).

Use Stable Diffusion ControlNet in Photoshop

Check out this integration of sketch-to-image tech—and if you have ideas/requests on how you’d like to see capabilities like these get more deeply integrated into Adobe tools, lay ’em on me!

Also, it’s not in Photoshop, but as it made me think of the Photo Restoration Neural Filter in PS, check out this use of ControlNet to revive an old family photo:

“What is Mise en Scène?”

One of the great pleasures of parenting is, of course, getting to see your kids’ interests and knowledge grow, and yesterday my 13yo budding photographer Henry and I were discussing the concept of mise en scène. In looking up a proper explanation for him, I found this great article & video, which Kubrick/Shining lovers in particular will enjoy:

3D + AI: Stable Diffusion comes to Blender

I’m really excited to see what kinds of images, not to mention videos & textured 3D assets, people will now be able to generate via emerging techniques (depth2img, ControlNet, etc.):