Monthly Archives: February 2023

AI: Running image synthesis in seconds, *on your telephone*

Looks like a bunch of my former teammates have been doing great work to enable Stable Diffusion to synthesize images in ~15s on an Android device:

In a demo video, Qualcomm shows version 1.5 of Stable Diffusion generating a 512 x 512 pixel image in under 15 seconds. Although Qualcomm doesn’t say what the phone is, it does say it’s powered by its flagship Snapdragon 8 Gen 2 chipset (which launched last November and has an AI-centric Hexagon processor). The company’s engineers also did all sorts of custom optimizations on the software side to get Stable Diffusion running optimally.

ControlNet is wild

This new capability in Stable Diffusion (think image-to-image, but far more powerful) produces some real magic. Check out what I got with some simple line art:
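ControlNet works by conditioning the diffusion model on an auxiliary control image, such as an edge map, depth map, pose skeleton, or line art like the sketch above. As a small illustration of the preprocessing half of that pipeline, here's a NumPy-only sketch that turns grayscale line art into the white-on-black, three-channel control map that edge-conditioned models expect; the function name and threshold are illustrative, not part of any real ControlNet preprocessor:

```python
import numpy as np

def make_control_image(img: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Binarize grayscale line art into a white-on-black control map,
    a crude stand-in for a real edge detector such as Canny."""
    # Dark strokes in the source become white "edges" in the control map.
    edges = (img < threshold).astype(np.uint8) * 255
    # Replicate the single channel to RGB, the shape control inputs use.
    return np.stack([edges] * 3, axis=-1)

# Toy 512x512 "line art": a white canvas with a single dark diagonal stroke.
canvas = np.full((512, 512), 255, dtype=np.uint8)
for i in range(512):
    canvas[i, i] = 0

control = make_control_image(canvas)
print(control.shape)  # (512, 512, 3)
```

In a real workflow, this control map (plus a text prompt) would be fed to a ControlNet-augmented Stable Diffusion pipeline, which then generates imagery that follows the strokes.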

And check out this thread of awesome sauce:

Welcome to the meme-predicted future.

Adobe Substance 3D wins an Academy Award!

Well deserved recognition for this amazing team & tech:

To Sébastien Deguy and Christophe Soum for the concept and original implementation of Substance Engine, and to Sylvain Paris and Nicolas Wirrmann for the design and engineering of Substance Designer.

Adobe Substance 3D Designer provides artists with a flexible and efficient procedural workflow for designing complex textures. Its sophisticated and art-directable pattern generators, intuitive design, and renderer-agnostic architecture have led to widespread adoption in motion picture visual effects and animation.

An entirely generative realtime musical performance

1992 Pink Floyd laser light show in Dubuque, IA—you are back. 😅

Through this AI DJ project, we have been exploring the future of DJ performance with AI. At first, we tried to make an AI-based music selection system as an AI DJ. In the second iteration, we utilized a few AI models on stage to generate real-time symbolic music (i.e., MIDI). In the performance, a human DJ (Tokui) controlled various parameters of the generative AI models and drum machines. This time, we aim to advance one step further and deploy AI models to generate audio on stage in near real-time. Everything you hear during the performance will be pure AI-generation (no synthesizer, no drum machine).

In this performance, Emergent Rhythm, the human DJ will become an AJ or “AI Jockey” instead of a Disk Jockey, and he is expected to tame and ride the AI-generated audio stream in real-time. The distinctive characteristics of AI-based audio generation and “morphing” will provide a unique and even otherworldly sonic experience for the audience.

Live talk Saturday: “An Introduction to AI for Designers”

Sounds like it could be an interesting session:

Introducing the new DigitalFUTURES course of free AI tutorials.

Several of the top AI designers in the world are coming together to offer the world’s first free, comprehensive course in AI for designers. This course starts off at an introductory level and gets progressively more advanced.

18 Feb, Introductory Session: 10.00 am EST / 4.00 pm CET / 11.00 pm China

What is AI? What are Midjourney, DALL•E, Stable Diffusion, etc.? What is GPT-3? What is ChatGPT? And how are they revolutionizing design?

Neil Leach
Shael Patel
Reem Mosleh
Clay Odom

New generative delights

Paul Trillo used Runway’s new Gen-1 experimental model to create a Cubist Simpsons intro:

Meanwhile, this one salutes the power of love:

Back from the land of steam & snow 🚂

It’s been quiet here for a few days as my 13-year-old budding photographer son Henry & I were off at the Nevada Northern Railway’s Winter Steam Photo Weekend Spectacular. We had a staggeringly good time, and now my poor MacBook is liquefying under the weight of processing our visual haul. 🤪 I plan to share more images & observations soon from the experience (which was somehow the first photo workshop, or even proper photo class, I’ve taken!). Meanwhile, here’s a little Insta gallery of Lego Henry in action:

For a taste of how the workshop works, check out this overview from past events:

Runway introduces “Gen-1” to stylize video

Check out this new generative stylization model. I’m intrigued by the idea of using simple primitives (think dollhouse furniture) to guide synthesis & stylization (e.g. of the buildings shown briefly here).

See this thread from company founder Cristóbal Valenzuela:

“Diffused Reality” lecture this Thursday

Photographer Dan Marcolina has been pushing the limits of digital creation for many years, and on Feb. 9 at 11am Eastern time, he’s scheduled to present a lecture. You can register here & check out details below:


Dan will demonstrate how to use an AI workflow to create dynamic, personalized imagery using your own photos. Additional information on Augmented Reality and thoughts from Dan’s 35-year design career will also be presented.

What attendees will learn:

  • Tips from Dan’s book iPhone Obsessed, revealing how to best shoot and process photos on your cell for use in the AI re-imagination process.
  • The AI photo re-creation workflow, with tips and tricks to get started quickly, showing how a single source image can be crafted to create new meaning.
  • The post-processing steps of upscaling, clean-up, manipulation, and color correction needed to obtain a gallery-ready image.
  • As a bonus, he will show a bit of how he created the augmented-reality aspect of the show.

Anyone interested in image creation, photography, illustration, painting, storytelling, design or who is curious about AI/AR and the future of photography will gain valuable insights from the presentation.

3D capture comes to Adobe Substance 3D Sampler 4.0

Photogrammetrize all the things!!

Unleash the power of photogrammetry in Adobe Substance 3D Sampler 4.0 with the new 3D Capture tool! Create accurate and detailed 3D models of real-world objects with ease. Simply drag and drop a series of photos into Sampler and let it automatically extract the subject from its background and generate a 3D textured model. It’s a fast and handy way to create 3D assets for your next project.

Here’s the workflow in more detail:

And here’s info on capture tools: