Upcoming Firefly events

Come meet Adobe folks & fellow creators in person!

  • London (4/15), with Rufus Deuchler presenting
  • NYC (4/20), with Terry White + Brooke Hopper presenting
  • SF (4/26), with Paul Trani + Brooke Hopper presenting

Here’s info for the London event:

——–

We are finally back in London! Join us for a VERY special creative community night.

Get to know the latest from Adobe's creative tools, Adobe Express and Adobe Firefly. Learn why Adobe Express belongs on your list of tools for quickly creating standout content for social media and beyond, using beautiful templates from Adobe. We'll also show you how to bring assets you've designed in Photoshop into your workflow.

We’re also presenting Adobe Firefly, a generative AI made for creators. With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Get ready to create unique posters, banners, social posts, and more with a simple text prompt. With Firefly, the plan is to do this and more — like uploading a mood board to generate totally original, customizable content.

Meet creators, artists, writers, and designers. Plus hang out with Chris Do and The Futur team! With sips, snacks, and a spotlight on inspiring projects — you won’t want to miss this.  


Space is limited, so please register now.

Some delightful Firefly-made characters

I love these little buggers from longtime Adobean Lee Brimelow. We really need to make it easy to save and share cool prompt/preset combos like these. Stay tuned!

Good Firefly perspective: livestream & space

I enjoyed hearing my colleagues & outside folks discussing the origin, vision, and road ahead for Adobe Firefly in this livestream…

Eric Snowden is the VP of Design at Adobe and is responsible for the product design teams for the Digital Media business, which include Creative Cloud… Nishat Akhtar is a designer and creative leader with 15+ years of experience in designing and leading initiatives for global brands… Danielle Morimoto is a Design Manager for Adobe Express, based in San Francisco.

…and this Twitter space, featuring our group’s CTO Ely Greenfield, along with creator Karen X. Cheng (whose work I’ve featured here countless times), illustrator & brush creator Kyle T. Webster, and director of design Samantha Warren. Scrub ahead to about 2:45 to get to the conversation.

AI does the impossible: making the first actually likable Vanilla Ice song

Made with genuine diabeetus! All right stop, collaborate and listen:

On one hand, you may be convinced we somehow assembled the original cast of The Matrix alongside the ghost of Wilford Brimley to record one of the greatest rap covers of all time. On the other hand, you may find it more believable that we’ve been experimenting with AI voice trainers and lip flap technology in a way that will eventually open up some new doors for how we make videos. You have to admit, either option kind of rules.

Some great Firefly reels

Hey, remember when we launched Adobe Firefly what feels like 63 years ago? 😅 OMG, what a week. I am so tired & busy trying to get folks access (thanks for your patience!), answer questions, and more that I’ve barely had time to catch up on all the great content folks are making. I’ll work on that soon, and in the meantime, here are three quick clips that caught my eye.

First, OG author Deke McClelland shows off type effects:

@dekenow: “Create Type Effects Out of Thin Air with Adobe Firefly”

Next, Kyle Nutt does some light painting, compositing himself into Firefly images:

And here Don Allen Stevenson puts Firefly creations into augmented reality with the help of Adobe Aero:

A creator’s perspective on Firefly & ethics

I really appreciate hearing Karen X. Cheng’s thoughts on the essential topics of consent, compensation, and more. We’ve been engaging in lots of very helpful conversations with creators, and there’s of course much more to sort through. As always, your perspective here is most welcome.

Introducing Adobe Firefly!

I’m so pleased—and so tired! 😅—to be introducing Adobe Firefly, the new generative imaging foundation that a passionate band of us have been working to bring to the world. Check out the high-level vision…

…as well as the part more directly in my wheelhouse: the interactive preview site & this overview of great stuff that’s waiting in the wings:

I’ll have a lot more to share soon. In the meantime, we’d love to hear what you think of what you see so far!

Midjourney v5 arrives

Now I just need some actual time to try it out!

Thread of visual comparisons against the already amazing v4:

Animation: “Grand Canons”

Enjoy, if you will, this “visual symphony of everyday objects”:

A brush makes watercolors appear on a white sheet of paper. An everyday object takes shape, drawn with precision by an artist’s hand. Then two, then three, then four… Superimposed, condensed, multiplied, thousands of documentary drawings in successive series come to life on the screen, composing a veritable visual symphony of everyday objects. The accumulation, both fascinating and dizzying, takes us on a trip through time.

Kottke notes, “More of Biet’s work can be found on his website or on Instagram.”

Stable Diffusion can draw the contents of your brain

“It’s all in your head.” — Gorillaz

I’ve spent the last ~year talking about my brain being “DALL•E-pilled,” where I’ve started seeing just about everything (e.g. a weird truck) as some kind of AI manifestation. But that’s nothing compared to using generative imaging models to literally see your thoughts:

Researchers Yu Takagi and Shinji Nishimoto, from the Graduate School of Frontier Biosciences at Osaka University, recently wrote a paper (PDF) outlining how it’s possible to reconstruct high-resolution images using latent diffusion models by reading human brain activity recorded via functional magnetic resonance imaging (fMRI), “without the need for training or fine-tuning of complex deep generative models” (via Vice).

Use Stable Diffusion ControlNet in Photoshop

Check out this integration of sketch-to-image tech—and if you have ideas/requests on how you’d like to see capabilities like these get more deeply integrated into Adobe tools, lay ’em on me!

Also, it’s not in Photoshop, but as it made me think of the Photo Restoration Neural Filter in PS, check out this use of ControlNet to revive an old family photo:

“What is Mise en Scène?”

One of the great pleasures of parenting is, of course, getting to see your kids’ interests and knowledge grow, and yesterday my 13yo budding photographer Henry and I were discussing the concept of mise en scène. In looking up a proper explanation for him, I found this great article & video, which Kubrick/Shining lovers in particular will enjoy:

3D + AI: Stable Diffusion comes to Blender

I’m really excited to see what kinds of images, not to mention videos & textured 3D assets, people will now be able to generate via emerging techniques (depth2img, ControlNet, etc.):

AI: Running image synthesis in seconds, *on your telephone*

Looks like a bunch of my former teammates have been doing great work to enable Stable Diffusion to synthesize images in ~15s on an Android device:

In a demo video, Qualcomm shows version 1.5 of Stable Diffusion generating a 512 x 512 pixel image in under 15 seconds. Although Qualcomm doesn’t say what the phone is, it does say it’s powered by its flagship Snapdragon 8 Gen 2 chipset (which launched last November and has an AI-centric Hexagon processor). The company’s engineers also did all sorts of custom optimizations on the software side to get Stable Diffusion running optimally.

ControlNet is wild

This new capability in Stable Diffusion (think image-to-image, but far more powerful) produces some real magic. Check out what I got with some simple line art:

And check out this thread of awesome sauce:

Welcome to the meme-predicted future.
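If you’d like to try something similar with your own line art, here’s a minimal sketch using the open-source diffusers library. The model IDs, file names, and prompt below are placeholder assumptions on my part, not the exact setup behind the image above.

```python
import torch
from PIL import Image, ImageOps
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a ControlNet trained on scribbles, plus a base Stable Diffusion model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The scribble model expects white lines on a black background, so invert
# typical black-on-white line art before feeding it in.
line_art = ImageOps.invert(Image.open("line_art.png").convert("RGB"))

image = pipe(
    "a cozy cottage in a pine forest, watercolor and ink",  # placeholder prompt
    image=line_art,
    num_inference_steps=30,
).images[0]
image.save("controlnet_result.png")
```

Swap in a different ControlNet checkpoint (canny edges, depth, pose, etc.) to condition the generation on other kinds of guidance.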

Adobe Substance 3D wins an Academy Award!

Well deserved recognition for this amazing team & tech:

To Sébastien Deguy and Christophe Soum for the concept and original implementation of Substance Engine, and to Sylvain Paris and Nicolas Wirrmann for the design and engineering of Substance Designer.

Adobe Substance 3D Designer provides artists with a flexible and efficient procedural workflow for designing complex textures. Its sophisticated and art-directable pattern generators, intuitive design, and renderer-agnostic architecture have led to widespread adoption in motion picture visual effects and animation.

An entirely generative realtime musical performance

1992 Pink Floyd laser light show in Dubuque, IA—you are back. 😅

Through this AI DJ project, we have been exploring the future of DJ performance with AI. At first, we tried to make an AI-based music selection system as an AI DJ. In the second iteration, we utilized a few AI models on stage to generate real-time symbolic music (i.e., MIDI). In the performance, a human DJ (Tokui) controlled various parameters of the generative AI models and drum machines. This time, we aim to advance one step further and deploy AI models to generate audio on stage in near real-time. Everything you hear during the performance will be pure AI-generation (no synthesizer, no drum machine).

In this performance, Emergent Rhythm, the human DJ will become an AJ or “AI Jockey” instead of a Disk Jockey, and he is expected to tame and ride the AI-generated audio stream in real-time. The distinctive characteristics of AI-based audio generation and “morphing” will provide a unique and even otherworldly sonic experience for the audience.

Live talk Saturday: “An Introduction to AI for Designers”

Sounds like it could be an interesting session:

Introducing the new DigitalFUTURES course of free AI tutorials.

Several of the top AI designers in the world are coming together to offer the world’s first free, comprehensive course in AI for designers. This course starts off at an introductory level and gets progressively more advanced.

Introductory Session: 18 Feb, 10.00 am EST / 4.00 pm CET / 11.00 pm China. What is AI? What are Midjourney, DALL•E, Stable Diffusion, etc.? What is GPT-3? What is ChatGPT? And how are they revolutionizing design?

  • Neil Leach
  • Shael Patel
  • Reem Mosleh
  • Clay Odom

New generative delights

Paul Trillo used Runway’s new Gen-1 experimental model to create a Cubist Simpsons intro:

Meanwhile fabdream.ai salutes the power of love:

Back from the land of steam & snow 🚂

It’s been quiet here for a few days as my 13-year-old budding photographer son Henry & I were off at the Nevada Northern Railway’s Winter Steam Photo Weekend Spectacular. We had a staggeringly good time, and now my poor MacBook is liquefying under the weight of processing our visual haul. 🤪 I plan to share more images & observations soon from the experience (which was somehow the first photo workshop, or even proper photo class, I’ve taken!). Meanwhile, here’s a little Insta gallery of Lego Henry in action:

For a taste of how the workshop works, check out this overview from past events:

Runway introduces “Gen-1” to stylize video

Check out this new generative stylization model. I’m intrigued by the idea of using simple primitives (think dollhouse furniture) to guide synthesis & stylization (e.g. of the buildings shown briefly here).

See this thread from company founder Cristóbal Valenzuela:

“Diffused Reality” lecture this Thursday

Photographer Dan Marcolina has been pushing the limits of digital creation for many years, and on Feb. 9 at 11am Eastern time, he’s scheduled to present a lecture. You can register here & check out details below:

—————————

Dan will demonstrate how to use an AI workflow to create dynamic, personalized imagery using your own photos. Additional information on Augmented Reality and thoughts from Dan’s 35-year design career will also be presented.

What attendees will learn:

  • Tips from Dan’s book iPhone Obsessed, revealing how to best shoot and process photos on your cell for use in the AI re-imagination process.
  • The AI photo re-creation workflow with tips and tricks to get started quickly, showing how a single source image can be crafted to create new meaning.
  • The post-process of upscaling, clean-up, post-manipulation, and color correction to obtain a gallery-ready image.
  • As a bonus he will show a little of how he did the augmented reality aspect of the show.

Anyone interested in image creation, photography, illustration, painting, storytelling, design or who is curious about AI/AR and the future of photography will gain valuable insights from the presentation.

3D capture comes to Adobe Substance 3D Sampler 4.0

Photogrammetrize all the things!!

Unleash the power of photogrammetry in Adobe Substance 3D Sampler 4.0 with the new 3D Capture tool! Create accurate and detailed 3D models of real-world objects with ease. Simply drag and drop a series of photos into Sampler and let it automatically extract the subject from its background and generate a 3D textured model. It’s a fast and handy way to create 3D assets for your next project.

Here’s the workflow in more detail:

And here’s info on capture tools:

“The impossibilities are endless”: Yet more NeRF magic

Last month Paul Trillo shared some wild visualizations he made by walking around Michelangelo’s David, then synthesizing 3D NeRF data. Now he’s upped the ante with captures from the Louvre:

Over in Japan, Tommy Oshima used the tech to fly around, through, and somehow under a playground, recording footage via a DJI Osmo + iPhone:

https://twitter.com/jnack/status/1616981915902554112

As I mentioned last week, Luma Labs has enabled interactive model embedding, and now they’re making the viewer crazy-fast:

Me talk generative imaging one day

I got my professional start at AGENCY.COM, a big dotcom-era startup co-founded by creative whirlwind Kyle Shannon. Kyle has been exploring AI imaging like mad, and recently he’s organized an AI Artists Salon that anyone is welcome to join in person (Denver) or online:

The AI Artists Salon is a collaborative group of creatively minded people, and we welcome anyone curious about the tsunami of inspiring generative technologies already rocking our world. See Community Links & Resources.

On Tuesday evening I had the chance to present some ideas & progress that have inspired me—nothing confidential about Adobe work, of course, but hopefully illuminating nonetheless. If you’re interested, check it out (and pro tip: if you set playback to 1.5x speed or higher, I sound a lot sharper & funnier!).

The world’s first (?) NeRF-powered commercial

Karen X. Cheng, back with another 3D/AI banger:

As luck (?) would have it, the commercial dropped on the third anniversary of my former teammate Jon Barron & collaborators bringing NeRFs into existence: