Category Archives: DALL•E

Content credentials are coming to DALL•E

From its first launch, Adobe Firefly has included support for content credentials, providing more transparency around the origin of generated images, and I’m very pleased to see OpenAI moving in the same direction:

Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials—an approach that encodes details about the content’s provenance using cryptography—for images generated by DALL·E 3. 

We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL·E. Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.
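The core idea behind the C2PA approach is binding cryptographically signed provenance claims (who made the image, with what tool) to a hash of the image content itself, so any tampering with either the pixels or the claims is detectable. As a loose illustration of that idea only — not the actual C2PA manifest format, which uses certificate-based signatures — here's a toy manifest signed and verified with an HMAC:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in; real C2PA uses X.509 certificate signatures

def make_manifest(image_bytes, generator):
    # Bind provenance claims to a hash of the image content.
    claims = {
        "generator": generator,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_manifest(image_bytes, manifest):
    # Recompute the signature, then recheck the content hash.
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claims were tampered with
    return manifest["claims"]["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()

image = b"\x89PNG...fake image bytes"
m = make_manifest(image, "DALL-E 3")
print(verify_manifest(image, m))         # True for the original bytes
print(verify_manifest(image + b"x", m))  # False once the pixels change
```

The key property — which the real standard provides far more robustly — is that neither the image nor its provenance claims can be altered without breaking verification.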

“Sky Dachshunds!” The future of creativity?

Here are four minutes that I promise you won’t regret spending as Nathan Shipley demonstrates DALL•E 3 working inside ChatGPT to build up an entire visual world:

I mean, seriously, the demo runs through creating:

  • Ideas
  • Initial visuals
  • Logos
  • Apparel featuring the logos
  • Game art
  • Box copy
  • Games visualized in multiple styles
  • 3D action figures
  • and more.

Insane. Also charming: its extremely human inability to reliably spell “Dachshund”!

DALL•E/Stable Diffusion Photoshop panel gains new features

Our friend Christian Cantrell (20-year Adobe vet, now VP of Product at Stability.ai) continues his invaluable work to plug the world of generative imaging directly into Photoshop. Check out the latest, available for free here:

Dalí meets DALL•E! 👨🏻‍🎨🤖

Among the great pleasures of this year’s revolutions in AI imaging has been the chance to discover & connect with myriad amazing artists & technologists. I’ve admired the work of Nathan Shipley, so I was delighted to connect him with my self-described “grand-mentee” Joanne Jang, PM for DALL•E. Nathan & his team collaborated with the Dalí Museum & OpenAI to launch Dream Tapestry, a collaborative realtime art-making experience.

The Dream Tapestry allows visitors to create original, realistic Dream Paintings from a text description. Then, it stitches a visitor’s Dream Painting together with five other visitors’ paintings, filling in the spaces between them to generate one collective Dream Tapestry. The result is an ever-growing series of entirely original Dream Tapestries, exhibited on the walls of the museum.

Check it out:

DALL•E is now available to everyone

Whew—no more wheedling my “grand-mentee” Joanne on behalf of colleagues wanting access. 😅

Starting today, we are removing the waitlist for the DALL·E beta so users can sign up and start using it immediately. More than 1.5M users are now actively creating over 2M images a day with DALL·E—from artists and creative directors to authors and architects—with over 100K users sharing their creations and feedback in our Discord community.

You can sign up here. Also exciting:

We are currently testing a DALL·E API with several customers and are excited to soon offer it more broadly to developers and businesses so they can build apps on this powerful system.

It’s hard to overstate just how much this groundbreaking technology has rocked our whole industry—all since publicly debuting less than 6 months ago! Congrats to the whole team. I can’t wait to see what they’re cooking up next.

DALL•E outpainting arrives

Let the canvases extend in every direction! The thoughtfully designed new tiling UI makes it easy to synthesize adjacent chunks in sequence, partly overcoming current resolution limits in generative imaging:

Here’s a nice little demo from our designer Davis Brown, who takes his dad Russell’s surreal desert explorations to totally new levels:

Using DALL•E for generative fashion design

Amazing work from the always clever Karen X. Cheng, collaborating with Paul Trillo & others:

View this post on Instagram

A post shared by Karen X (@karenxcheng)

Speaking of Paul, here’s a fun new little VFX creation made using DALL•E:

CLIP interrogator reveals what your robo-artist assistant sees

Ever since DALL•E hit the scene, I’ve been wanting to know what words its model for language-image pairing would use to describe images:

Now the somewhat scarily named CLIP Interrogator promises exactly that kind of insight:

What do the different OpenAI CLIP models see in an image? What might be a good text prompt to create similar images using CLIP guided diffusion or another text to image model? The CLIP Interrogator is here to get you answers!

Here’s hoping it helps us get some interesting image -> text -> image flywheels spinning.
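Under the hood, interrogator-style tools work by ranking candidate text prompts by the cosine similarity between CLIP’s image embedding and each prompt’s text embedding. Here’s a toy sketch of just that ranking step, with stand-in vectors (the real tool uses actual CLIP embeddings and large banks of candidate phrases):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_prompts(image_embedding, prompt_embeddings):
    # Score each candidate prompt against the image embedding,
    # returning (prompt, score) pairs sorted best-first.
    scored = [(prompt, cosine_similarity(image_embedding, vec))
              for prompt, vec in prompt_embeddings.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Stand-in embeddings; a real interrogator gets these from CLIP's
# image and text encoders.
image_vec = np.array([0.9, 0.1, 0.2])
candidates = {
    "a surreal desert landscape": np.array([0.8, 0.2, 0.1]),
    "a dachshund in the sky": np.array([0.1, 0.9, 0.3]),
}
best_prompt, score = rank_prompts(image_vec, candidates)[0]
print(best_prompt)  # → a surreal desert landscape
```

The winning prompt then becomes the seed text for the image → text → image flywheel described above.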

DALL•E + Snapchat = Clothing synthesis + try-on

Though we don’t (yet?) have the ability to use 3D meshes (e.g. those generated from a photo of a person) to guide text-based synthesis through systems like DALL•E, here’s a pretty compelling example of making 2D art, then wrapping it onto a body in real time:

Ketchup goes AI…? Heinz puts DALL•E to work

Interesting, and of course inevitable:

“This emerging tech isn’t perfect yet, so we got some weird results along with ones that looked like Heinz—but that was part of the fun. We then started plugging in ketchup combination phrases like ‘impressionist painting of a ketchup bottle’ or ‘ketchup tarot card’ and the results still largely resembled Heinz. We ultimately found that no matter how we were asking, we were still seeing results that looked like Heinz.”

Pass the Kemp!

[Via Aaron Hertzmann]

More DALL•E + After Effects magic

Creator Paul Trillo (see previous) is back at it. Here’s new work + a peek into how it’s made: