Category Archives: AI/ML

Neural JNack has entered the chat… 🤖

Last year my friend Bilawal Singh Sidhu, a PM driving 3D experiences for Google Maps/Earth, created an amazing 3D render (also available in galactic core form) of me sitting atop the Trona Pinnacles. At that time he used “traditional” photogrammetry techniques (a funny thing to say about an emerging field that remains new to the world), and this year he tried processing the same footage (a couple of simple orbits from my drone) using new Neural Radiance Field (“NeRF”) tech:

For comparison, here’s the 3D model generated via the photogrammetry approach:

The file is big enough that I’ve had some trouble loading it on my iPhone. If that affects you as well, check out this quick screen recording:

Feedback, please: AI-powered ideation & collaboration?

A new (to me, at least) group called Kive has just introduced AI Canvas.

Here’s a quick demo:

To my eye it’s similar to Prompt.ist, introduced a couple of weeks ago by Facet:

https://twitter.com/josephreisinger/status/1586042022401409024

I’m curious: Have you checked out these tools, and do you intend to put them to use in your creative processes? I have some thoughts that I can share soon, but in the meantime it’d be great to hear yours.

PetPortrait.ai promises bespoke images of animals

We’re at just the start of what I expect to be an explosion of hyper-specific offerings powered by AI.

For $24, PetPortrait.ai offers “40 high resolution, beautiful, one-of-a-kind portraits of your pets in a variety of styles.” They say it takes 4-6 hours and requires the following input:

  • ~10 portrait photos of their face
  • ~5 photos from different angles of their head and chest
  • ~5 full-body photos

It’ll be interesting to see what kind of traction this gets. The service Turn Me Royal offers more human-made offerings in a similar vein, and we delighted our son by commissioning this dog-as-Venetian-doge portrait (via an artist on Etsy) a couple of years ago:

Midjourney can produce stunning type

At Adobe MAX a couple of weeks ago, the company offered a sneak peek of editable type in Adobe Express being rendered via a generative model:

https://twitter.com/jnack/status/1582818166698217472?s=20&t=yI2t5EpbhqVNWb7Ws9DWxQ

That sort of approach could pair amazingly with this sort of Midjourney output:

I’m not working on such efforts & am not making an explicit link between the two—but broadly speaking, I find the intersection of such primitives/techniques to be really promising.

Runway “Infinite Canvas” enables outpainting

I’ve tried it & it’s pretty slick. These guys are cooking with gas! (Also, how utterly insane would this have been to see even six months ago?! What a year, what a world.)

A fistful of generative imaging news

Man, I can’t keep up with this stuff—and that’s a great problem to have. Here are some interesting finds from just the last few days:

Adobe “Made In The Shade” sneak is 😎

OMG—interactive 3D shadow casting in 2D photos FTW! 🔥

In this sneak, we re-imagine what image editing would look like if we used Adobe Sensei-powered technologies to understand the 3D space of a scene – the geometry of a road and the car on the road, and the trees surrounding, the lighting coming from the sun and the sky, the interactions between all these objects leading to occlusions and shadows – from a single 2D photograph.

Check out AI backdrop generation, right in the Photoshop beta today

One of the sleeper features that debuted at Adobe MAX is the new Create Background, found under Neural Filters. (Note that you need to be running the current public beta release of Photoshop, available via the Creative Cloud app—y’know, that little “Cc” icon dealio you ignore in your menu bar. 🙃)

As this quick vid demonstrates, the filter not only generates backgrounds based on text; it also links to a Behance gallery of images and popular prompts. You can use these visuals as inspiration, then use the prompts to produce artwork within the plugin:

https://youtu.be/oMVfxyQbO5c?t=74

Here’s the Behance browser:

Stable Diffusion + Adobe Fonts = 🧙‍♂️🔥

Check out my teammates’ new explorations, demoed here on Adobe Express:

Per the blog post:

Generative AI incorporated into Adobe Express will help less experienced creators achieve their unique goals. Rather than having to find a pre-made template to start a project with, Express users could generate a template through a prompt, and use Generative AI to add an object to the scene, or create a unique font based on their description. But they still will have full control — they can use all of the Adobe Express tools for editing images, changing colors, and adding fonts to create the flyer, poster, or social media post they imagine.

Turn images into usable Stable Diffusion prompts

LatentSpace.dev promises to turn your images into text prompts that can be used in Stable Diffusion to create new artwork. Watch it work:

It interpreted a pic of my old whip as being, among other things, a “5. 1975 pontiac firebird shooting brake wagon estate.” Not entirely bad! 😌
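Under the hood, tools like this typically rank candidate prompt fragments by their similarity to an embedding of the image (CLIP-style). Here's a toy, stdlib-only sketch of that ranking idea — the phrase list and the tiny embedding vectors are entirely made up for illustration; a real system would get both from a trained model:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_phrases(image_vec, phrase_vecs):
    # Sort candidate prompt fragments by similarity to the image embedding.
    return sorted(phrase_vecs, key=lambda p: cosine(image_vec, phrase_vecs[p]), reverse=True)

# Toy embeddings (in reality these would come from a model like CLIP).
phrases = {
    "1975 pontiac firebird": [0.8, 0.2, 0.1],
    "shooting brake wagon":  [0.7, 0.3, 0.1],
    "bird spreading wings":  [0.0, 0.2, 0.9],
}
image = [0.8, 0.2, 0.1]  # pretend embedding of the car photo

best = rank_phrases(image, phrases)
print(", ".join(best[:2]))  # → 1975 pontiac firebird, shooting brake wagon
```

The joined-up output of the top-ranked fragments is what gets returned as the "prompt."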

“Imagic”: Text-based editing of photos

It seems almost too good to be true, but Google researchers & their university collaborators have unveiled a way to edit images using just text:

In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. — each within its single high-resolution natural image provided by the user.

Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object).

I can’t wait to see it in action!

Stable Diffusion meets WebAR

Back at the start of my DALL•E journey, I wished aloud for a diffusion-powered mobile app:

https://twitter.com/jnack/status/1529977613623496704?s=20&t=dlYc1z2m-Cxb61G0KCaiIw

Now, thanks to the openness of Stable Diffusion & WebAR, creators are bringing that vision closer to reality:

https://twitter.com/stspanho/status/1581707753747537920?s=20&t=JPLmD_bV0U4Gkv2-2bJX-g

I can’t wait to see what’s next!

Blender + Stable Diffusion = 🪄

Easy placement & movement of 3D primitives, followed by realistic or illustrative rendering, has long struck me as extremely promising. Using tech like StyleGAN to render from 3D can produce interesting results, but it’s been difficult to bring the level of quality & consistency up to what Adobe users demand.

Now with Stable Diffusion (and, one hopes, other diffusion models in the future) attached to Blender (and, one hopes, other object manipulation tools), the vision is getting closer to reality:

Check out NeRF Studio & some eye-popping results

The power & immersiveness of rendering 3D from images is growing at an extraordinary rate. NeRF Studio promises to make creation much more approachable:

https://twitter.com/akanazawa/status/1577686321119645696?s=20&t=OA61aUUy3A6P1aMQiUIzbA

The kind of results one can generate from just a series of photos or video frames is truly bonkers:

Here’s a tutorial on how to use it:

Meta introduces text to video 👀

OMG, what is even happening?!

Per the site,

The system uses images with descriptions to learn what the world looks like and how it is often described. It also uses unlabeled videos to learn how the world moves. With this data, Make-A-Video lets you bring your imagination to life by generating whimsical, one-of-a-kind videos with just a few words or lines of text.

Completely insane. DesireToKnowMoreIntensifies.gif!

DALL•E is now available to everyone

Whew—no more wheedling my “grand-mentee” Joanne on behalf of colleagues wanting access. 😅

Starting today, we are removing the waitlist for the DALL·E beta so users can sign up and start using it immediately. More than 1.5M users are now actively creating over 2M images a day with DALL·E—from artists and creative directors to authors and architects—with over 100K users sharing their creations and feedback in our Discord community.

You can sign up here. Also exciting:

We are currently testing a DALL·E API with several customers and are excited to soon offer it more broadly to developers and businesses so they can build apps on this powerful system.

It’s hard to overstate just how much this groundbreaking technology has rocked our whole industry—all since publicly debuting less than 6 months ago! Congrats to the whole team. I can’t wait to see what they’re cooking up next.

NVIDIA’s GET3D promises text-to-model generation

Depending on how well it works, tech like this could be the greatest unlock in 3D creation the world has ever known.

The company blog post features interesting, promising details:

Though quicker than manual methods, prior 3D generative AI models were limited in the level of detail they could produce. Even recent inverse rendering methods can only generate 3D objects based on 2D images taken from various angles, requiring developers to build one 3D shape at a time.

GET3D can instead churn out some 20 shapes a second when running inference on a single NVIDIA GPU — working like a generative adversarial network for 2D images, while generating 3D objects. […]

GET3D gets its name from its ability to Generate Explicit Textured 3D meshes — meaning that the shapes it creates are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers — and edit them.
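The "triangle mesh" output format the post describes is what makes the import story easy: game engines and DCC tools all speak formats like Wavefront OBJ, which is just vertices plus triangle faces. As a quick illustration (this is not NVIDIA's code, just a minimal sketch of the format), here's a pure-Python OBJ serializer for a single triangle:

```python
def write_obj(vertices, faces):
    # Serialize a triangle mesh as Wavefront OBJ text.
    # Note: OBJ face indices are 1-based, hence the i + 1 below.
    lines = ["v %f %f %f" % v for v in vertices]
    lines += ["f %d %d %d" % tuple(i + 1 for i in face) for face in faces]
    return "\n".join(lines) + "\n"

# A single triangle: the simplest possible mesh.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
obj_text = write_obj(verts, tris)
print(obj_text)
```

A real GET3D export would also carry texture coordinates and a material reference, but the mesh core is this simple, which is exactly why the objects drop straight into engines and renderers.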

See also Dream Fields (mentioned previously) from Google:

Photoshop-Stable Diffusion plugin adds inpainting with masks, layer-based img2img

Christian Cantrell + the Stability devs remain a house on fire:

Here’s a more detailed (3-minute) walk-through of this free plugin:

Demo: Generating an illustrated narrative with DreamBooth

The Corridor Crew has been banging on Stable Diffusion & Google’s new DreamBooth tech (see previous) that enables training the model to understand a specific concept—e.g. one person’s face. Here they’ve trained it using a few photos of team member Sam Gorski, then inserted him into various genres:

From there they trained up models for various guys at the shop, then created an illustrated fantasy narrative. Just totally incredible, and their sheer exuberance makes the making-of pretty entertaining:

Lexica adds reverse-image search

The Stable Diffusion-centered search engine (see a few posts back) now makes it easy to turn a real-world concept into a Stable Diffusion prompt:

This seems like precisely what I pined for publicly, albeit then about DALL•E:

Honoring creators’ wishes: Source+ & “Have I Been Trained”

I’m really excited to see this work from artists Holly Herndon & Mat Dryhurst. From Input Mag:

Dryhurst and Herndon are developing a standard they’re calling Source+, which is designed as a way of allowing artists to opt into — or out of — allowing their work to be used as training data for AI. (The standard will cover not just visual artists, but musicians and writers, too.) They hope that AI generator developers will recognize and respect the wishes of artists whose work could be used to train such generative tools.

Source+ (now in beta) is a product of the organization Spawning… [It] also developed Have I Been Trained, a site that lets artists see if their work is among the 5.8 billion images in the Laion-5b dataset, which is used to train the Stable Diffusion and MidJourney AI generators. The team plans to add more training datasets to pore through in the future.

The creators also draw a distinction between the rights of living vs. dead creators:

The project isn’t aimed at stopping people putting, say, “A McDonalds restaurant in the style of Rembrandt” into DALL-E and gazing on the wonder produced. “Rembrandt is dead,” Dryhurst says, “and Rembrandt, you could argue, is so canonized that his work has surpassed the threshold of extreme consequence in generating in their image.” He’s more concerned about AI image generators impinging on the rights of living, mid-career artists who have developed a distinctive style of their own.

And lastly,

“We’re not looking to build tools for DMCA takedowns and copyright hell,” he says. “That’s not what we’re going for, and I don’t even think that would work.”

On a personal note, I’m amused to see what the system thinks constitutes “John Nack”—apparently chubby German-ish old chaps…? 🙃

Relight faces via a slick little web app

Check out ClipDrop’s relighting app, demoed here:

Fellow nerds might enjoy reading about the implementation details.

AI art -> “Bullet Hell” & Sirenhead

“Shoon is a recently released side-scrolling shmup,” says Vice, “that is fairly unremarkable, except for one quirk: it’s made entirely with art created by Midjourney, an AI system that generates images from text prompts written by users.” Check out the results:

Meanwhile my friend Bilawal is putting generative imaging to work in creating viral VFX:

DALL•E outpainting arrives

Let the canvases extend in every direction! The thoughtfully designed new tiling UI makes it easy to synthesize adjacent chunks in sequence, partly overcoming current resolution limits in generative imaging:
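The tiling trick is worth spelling out: each new chunk is generated with a deliberate overlap into already-finished pixels, so the model has context to match. Here's a small, hedged sketch (my own illustration, not OpenAI's implementation) of how overlapping tile origins might be laid out along one axis:

```python
def tile_origins(total, tile, overlap):
    # X-origins of tiles needed to cover `total` pixels, each `tile` wide,
    # sharing `overlap` pixels with the previous tile for visual continuity.
    step = tile - overlap
    origins = list(range(0, max(total - tile, 0) + 1, step))
    # Ensure the final tile reaches the right edge of the canvas.
    if origins[-1] + tile < total:
        origins.append(total - tile)
    return origins

# Extend a canvas to 2560 px using 1024-px tiles overlapping by 256 px.
print(tile_origins(2560, 1024, 256))  # → [0, 768, 1536]
```

Each successive tile starts 768 px past the last, leaving a 256-px strip of known pixels for the model to blend against — the same idea the tiling UI makes visible.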

Here’s a nice little demo from our designer Davis Brown, who takes his dad Russell’s surreal desert explorations to totally new levels:

Using DALL•E for generative fashion design

Amazing work from the always clever Karen X. Cheng, collaborating with Paul Trillo & others:

A post shared by Karen X (@karenxcheng)

Speaking of Paul, here’s a fun new little VFX creation made using DALL•E: