Monthly Archives: September 2022

Meta introduces text-to-video 👀

OMG, what is even happening?!

Per the site,

The system uses images with descriptions to learn what the world looks like and how it is often described. It also uses unlabeled videos to learn how the world moves. With this data, Make-A-Video lets you bring your imagination to life by generating whimsical, one-of-a-kind videos with just a few words or lines of text.

Completely insane. DesireToKnowMoreIntensifies.gif!

DALL•E is now available to everyone

Whew—no more wheedling my “grand-mentee” Joanne on behalf of colleagues wanting access. 😅

Starting today, we are removing the waitlist for the DALL·E beta so users can sign up and start using it immediately. More than 1.5M users are now actively creating over 2M images a day with DALL·E—from artists and creative directors to authors and architects—with over 100K users sharing their creations and feedback in our Discord community.

You can sign up here. Also exciting:

We are currently testing a DALL·E API with several customers and are excited to soon offer it more broadly to developers and businesses so they can build apps on this powerful system.
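OpenAI hasn’t published details of that API yet, so purely as a thought experiment, here’s what a text-to-image request to such a service might look like. The endpoint, auth scheme, and parameters below are my guesses, not a documented interface:

```python
import os
import requests

# Purely illustrative: the DALL·E API wasn't public when this was written,
# so the endpoint and parameters here are assumptions, not OpenAI's documented interface.
API_URL = "https://api.openai.com/v1/images/generations"  # assumed endpoint

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "prompt": "an astronaut lounging in a tropical resort, vaporwave style",
        "n": 1,               # number of images to request
        "size": "1024x1024",  # assumed size parameter
    },
)
response.raise_for_status()
print(response.json())  # presumably URLs or base64 image data
```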

It’s hard to overstate just how much this groundbreaking technology has rocked our whole industry—all since publicly debuting less than 6 months ago! Congrats to the whole team. I can’t wait to see what they’re cooking up next.

NVIDIA’s GET3D promises text-to-model generation

Depending on how well it works, tech like this could be the greatest unlock in 3D creation the world has ever known.

The company blog post features interesting, promising details:

Though quicker than manual methods, prior 3D generative AI models were limited in the level of detail they could produce. Even recent inverse rendering methods can only generate 3D objects based on 2D images taken from various angles, requiring developers to build one 3D shape at a time.

GET3D can instead churn out some 20 shapes a second when running inference on a single NVIDIA GPU — working like a generative adversarial network for 2D images, while generating 3D objects. […]

GET3D gets its name from its ability to Generate Explicit Textured 3D meshes — meaning that the shapes it creates are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers — and edit them.
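Since the output is an ordinary textured triangle mesh, downstream handling should be as boring as any other asset. Here’s a quick sketch of inspecting and converting one with the trimesh library; the file name is hypothetical, and GET3D’s real export format may differ:

```python
import trimesh

# Hypothetical output file; GET3D's actual export path/format will vary.
mesh = trimesh.load("get3d_output/chair_00.obj", force="mesh")

print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} triangles")
print("watertight:", mesh.is_watertight)

# Because it's a plain triangle mesh, standard exports apply:
mesh.export("chair_00.glb")  # glTF binary, easy to drop into a game engine
```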

See also Dream Fields (mentioned previously) from Google:

Photoshop-Stable Diffusion plugin adds inpainting with masks, layer-based img2img

Christian Cantrell + the Stability devs remain a house on fire:

Here’s a more detailed (3-minute) walk-through of this free plugin:
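The plugin is essentially a friendly front end on Stable Diffusion’s img2img and masked-inpainting modes. For the terminally curious, here’s a rough sketch of those underlying calls via Hugging Face’s diffusers library. This isn’t the plugin’s actual code, and the model IDs, file names, and parameter names (which have shifted across diffusers versions) are illustrative:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Layer-based img2img: start from an existing layer and restyle it.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # illustrative model ID
).to(device)
layer = Image.open("layer.png").convert("RGB").resize((512, 512))
restyled = img2img(
    prompt="the same scene as a moody oil painting",
    image=layer,
    strength=0.6,        # how far to drift from the source layer
    guidance_scale=7.5,
).images[0]

# Inpainting with a mask: white pixels in the mask get regenerated.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"  # illustrative model ID
).to(device)
mask = Image.open("mask.png").convert("RGB").resize((512, 512))
filled = inpaint(
    prompt="a tidy desk with a potted plant",
    image=layer,
    mask_image=mask,
).images[0]

restyled.save("restyled.png")
filled.save("inpainted.png")
```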

Demo: Generating an illustrated narrative with DreamBooth

The Corridor Crew has been banging on Stable Diffusion & Google’s new DreamBooth tech (see previous) that enables training the model to understand a specific concept—e.g. one person’s face. Here they’ve trained it using a few photos of team member Sam Gorski, then inserted him into various genres:

From there they trained up models for various guys at the shop, then created an illustrated fantasy narrative. Just totally incredible, and their sheer exuberance makes the making-of pretty entertaining:
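If you’re wondering what “using” a DreamBooth-trained model looks like, it’s essentially ordinary prompting with the placeholder token that was bound to the new concept during fine-tuning. A minimal sketch with diffusers, assuming a hypothetical local checkpoint and the commonly used “sks” token:

```python
import torch
from diffusers import StableDiffusionPipeline

# "./dreambooth-sam" is a hypothetical local checkpoint produced by a DreamBooth
# fine-tuning run on a handful of photos; "sks person" is the placeholder token
# bound to the new concept during training.
pipe = StableDiffusionPipeline.from_pretrained("./dreambooth-sam").to(
    "cuda" if torch.cuda.is_available() else "cpu"
)

image = pipe(
    "a portrait of sks person as a grizzled fantasy knight, dramatic lighting",
    guidance_scale=7.5,
).images[0]
image.save("sam_knight.png")
```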

Generative dancing about architecture

Paul Trillo is back at it, extending a Chinese restaurant via Stable Diffusion, After Effects, and Runway:

Elsewhere, check out this mutating structure. (Next up: Falling Water made of actual falling water?)

Lexica adds reverse-image search

The Stable Diffusion-centered search engine (see a few posts back) now makes it easy to turn a real-world concept into a Stable Diffusion prompt:

This seems like precisely what I pined for publicly, albeit then about DALL•E:
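Lexica hasn’t said how the feature works, but one plausible recipe for reverse-image search is to embed the query image with CLIP and rank indexed prompts (or their images) by cosine similarity. A toy sketch along those lines, with a made-up prompt list standing in for the index:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Made-up candidate prompts standing in for an indexed prompt database.
prompts = [
    "a cozy ramen shop at night, neon signs, rain, cinematic",
    "portrait of an astronaut, studio lighting, 85mm",
    "isometric voxel art of a tiny island village",
]

image = Image.open("query.jpg")
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)
    image_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    sims = (image_emb @ text_emb.T).squeeze(0)  # cosine similarities

print("closest prompt:", prompts[sims.argmax().item()])
```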

Honoring creators’ wishes: Source+ & “Have I Been Trained”

I’m really excited to see this work from artists Holly Herndon & Mat Dryhurst. From Input Mag:

Dryhurst and Herndon are developing a standard they’re calling Source+, which is designed as a way of allowing artists to opt into — or out of — allowing their work being used as training data for AI. (The standard will cover not just visual artists, but musicians and writers, too.) They hope that AI generator developers will recognize and respect the wishes of artists whose work could be used to train such generative tools.

Source+ (now in beta) is a product of the organization Spawning… [It] also developed Have I Been Trained, a site that lets artists see if their work is among the 5.8 billion images in the Laion-5b dataset, which is used to train the Stable Diffusion and MidJourney AI generators. The team plans to add more training datasets to pore through in the future.

The creators also draw a distinction between the rights of living vs. dead creators:

The project isn’t aimed at stopping people putting, say, “A McDonalds restaurant in the style of Rembrandt” into DALL-E and gazing on the wonder produced. “Rembrandt is dead,” Dryhurst says, “and Rembrandt, you could argue, is so canonized that his work has surpassed the threshold of extreme consequence in generating in their image.” He’s more concerned about AI image generators impinging on the rights of living, mid-career artists who have developed a distinctive style of their own.

And lastly,

“We’re not looking to build tools for DMCA takedowns and copyright hell,” he says. “That’s not what we’re going for, and I don’t even think that would work.”

On a personal note, I’m amused to see what the system thinks constitutes “John Nack”—apparently chubby German-ish old chaps…? 🙃

Google & NASA bring 3D to search

Great to see my old teammates (with whom I was working to enable cloud-rendered as well as locally rendered 3D experiences) continuing their work.

NASA and Google Arts & Culture have partnered to bring more than 60 3D models of planets, moons and NASA spacecraft to Google Search. When you use Google Search to learn about these topics, just click on the View in 3D button to understand the different elements of what you’re looking at even better. These 3D annotations will also be available for cells, biological concepts (like skeletal systems), and other educational models on Search.

Insta360 announces the X3

Who’s got two thumbs & just pulled the trigger? This guuuuuy. 😌

Now, will it be worth it? I sure hope so.

Fortunately I got to try out the much larger & more expensive One R 1″ Edition back in July & concluded that it’s not for me (heavier, lacking Bullet Time, and not producing appreciably better quality results—at least for the kind of things I shoot).

I’m of course hoping the X3 (successor to my much-beloved One X2) will be more up my alley. Here’s some third-party perspective:

Relight faces via a slick little web app

Check out ClipDrop’s relighting app, demoed here:

Fellow nerds might enjoy reading about the implementation details.
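For a taste of one classic ingredient in portrait relighting (estimate surface normals, then shade them against a movable light), here’s a toy Lambertian pass in numpy. It’s a drastic simplification, not ClipDrop’s actual pipeline:

```python
import numpy as np

def relight(albedo, normals, light_dir, ambient=0.15):
    """Toy Lambertian relighting pass.

    albedo:    H x W x 3 array in [0, 1]
    normals:   H x W x 3 array of unit surface normals (e.g. from a normal-estimation model)
    light_dir: 3-vector pointing toward the light
    """
    light = np.asarray(light_dir, dtype=np.float32)
    light /= np.linalg.norm(light)
    # Per-pixel dot product between the normal and the light direction.
    shading = np.clip(np.einsum("hwc,c->hw", normals, light), 0.0, 1.0)
    return np.clip(albedo * (ambient + shading[..., None]), 0.0, 1.0)

# Example with synthetic data: a flat gray image lit from the upper left.
h, w = 4, 4
albedo = np.full((h, w, 3), 0.5, dtype=np.float32)
normals = np.zeros((h, w, 3), dtype=np.float32)
normals[..., 2] = 1.0  # all normals facing the camera
lit = relight(albedo, normals, light_dir=[-1.0, 1.0, 1.0])
print(lit[0, 0])
```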

AI art -> “Bullet Hell” & Sirenhead

“Shoon is a recently released side scrolling shmup,” says Vice, “that is fairly unremarkable, except for one quirk: it’s made entirely with art created by Midjourney, an AI system that generates images from text prompts written by users.” Check out the results:

Meanwhile my friend Bilawal is putting generative imaging to work in creating viral VFX:

DALL•E outpainting arrives

Let the canvases extend in every direction! The thoughtfully designed new tiling UI makes it easy to synthesize adjacent chunks in sequence, partly overcoming current resolution limits in generative imaging:

Here’s a nice little demo from our designer Davis Brown, who takes his dad Russell’s surreal desert explorations to totally new levels:
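Under the hood, an outpainting step like this typically slides the canvas, carries over an overlapping strip of already-generated pixels for context, and asks the model to inpaint the blank remainder. OpenAI hasn’t published the specifics, so treat this little sketch of building that image/mask pair as a generic illustration rather than DALL·E’s actual method:

```python
import numpy as np
from PIL import Image

def outpaint_step_inputs(prev_tile, overlap=256, tile=1024):
    """Build the (image, mask) pair for extending a canvas one tile to the right.

    prev_tile: PIL image of the last generated tile (assumed tile x tile)
    overlap:   how many pixels of existing content to carry over as context
    White pixels in the mask are what the model should fill in.
    """
    canvas = Image.new("RGB", (tile, tile), (0, 0, 0))
    # Paste the rightmost `overlap` columns of the previous tile on the left edge.
    strip = prev_tile.crop((tile - overlap, 0, tile, tile))
    canvas.paste(strip, (0, 0))

    mask = np.full((tile, tile), 255, dtype=np.uint8)  # 255 = regenerate
    mask[:, :overlap] = 0                              # keep the carried-over strip
    return canvas, Image.fromarray(mask)

prev = Image.open("tile_00.png")  # hypothetical previously generated tile
image, mask = outpaint_step_inputs(prev)
image.save("tile_01_init.png")
mask.save("tile_01_mask.png")
# These would then be fed to an inpainting model to synthesize the adjacent chunk.
```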