Monthly Archives: November 2022

Crowdsourced AI Snoop Doggs (is a real headline you can now read)

The Doggfather recently shared a picture of himself (presumably rendered via some Stable Diffusion/DreamBooth personalization instance)…

…thus inducing fans to reply with their own variations (click tweet above to see the thread). Among the many fun Snoop Doggs (or is it Snoops Dogg?), I’m partial to Cyberpunk…

…and Yodogg:

Some amazing AI->parallax animations

Great work from Guy Parsons, combining Midjourney with CapCut:

And from the replies, here’s another fun set:
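For the technically curious, the core trick behind these 2.5D moves is simple: estimate (or paint) a depth map, then shift each pixel in proportion to its depth as a virtual camera slides around. Here’s a minimal Python/OpenCV sketch of that idea; to be clear, this isn’t what CapCut does internally, the filenames are placeholders, and it assumes you already have a depth map (e.g. from a monocular depth estimator):

```python
import cv2
import numpy as np

img = cv2.imread("midjourney.png")
# Placeholder: a grayscale depth map for the image (brighter = closer),
# e.g. produced by a monocular depth-estimation model.
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

h, w = depth.shape
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

out = cv2.VideoWriter("parallax.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
for t in np.linspace(-1.0, 1.0, 90):  # a 3-second sweep at 30 fps
    # Crudely shift each pixel horizontally in proportion to its depth:
    # near pixels travel farther than distant ones, which reads as parallax.
    map_x = xs + t * 12.0 * depth
    frame = cv2.remap(img, map_x, ys, cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT)
    out.write(frame)
out.release()
```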

Check out frame interpolation from Runway

I meant to share this one last month, but there’s just no keeping up with the pace of progress!

My initial results are on the uncanny side, but more skillful practitioners like Paul Trillo have been putting the tech to impressive use:
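If you’re wondering what “frame interpolation” means at its most basic, here’s a crude classical baseline in Python/OpenCV: estimate optical flow between two frames, then warp one frame halfway along the flow to synthesize the in-between. Runway’s version is a learned model and far more capable; this sketch (with placeholder filenames) just shows the underlying idea:

```python
import cv2
import numpy as np

a = cv2.imread("frame_a.png")
b = cv2.imread("frame_b.png")
gray_a = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)

# Dense optical flow from frame A to frame B (Farneback's classical method).
flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)

h, w = gray_a.shape
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
# Backward-warp frame A halfway along the flow to approximate the midpoint.
# (Crude: it samples the flow at the destination pixel, a shortcut that
# learned interpolators handle far more gracefully.)
map_x = xs - 0.5 * flow[..., 0]
map_y = ys - 0.5 * flow[..., 1]
mid = cv2.remap(a, map_x, map_y, cv2.INTER_LINEAR)
cv2.imwrite("frame_mid.png", mid)
```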

Happy Thanksgiving! Pass the tasty inpainting.

Among the many, many things for which I can give thanks this year, I want to express my still-gobsmacked appreciation of the academic & developer communities that have brought us this year’s revolution in generative imaging. One of those developers is our friend & Adobe veteran Christian Cantrell, and he continues to integrate new tech from his new company (Stability AI) into Photoshop at a breakneck pace. Here’s the latest:

Here he provides a quick comparison between results from the previous Stable Diffusion inpainting model (top) & the latest one:
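For those curious about what’s happening under the hood, here’s a minimal sketch of Stable Diffusion inpainting via Hugging Face’s diffusers library. To be clear, this isn’t Christian’s plugin code; the checkpoint is the dedicated inpainting model available as of this writing, and the filenames & prompt are placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholders: the photo to edit, plus a mask whose white pixels mark
# the region the model should repaint (black pixels are preserved).
image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("L").resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # checkpoint tuned specifically for inpainting
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a roast turkey on a serving platter",  # whatever should fill the masked area
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```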

In any event, wherever you are & however you celebrate (or don’t), I hope you’re well. Thanks for reading, and I wish all the best for the coming year!

Dalí meets DALL•E! 👨🏻‍🎨🤖

Among the great pleasures of this year’s revolutions in AI imaging has been the chance to discover & connect with myriad amazing artists & technologists. I’ve admired the work of Nathan Shipley, so I was delighted to connect him with my self-described “grand-mentee” Joanne Jang, PM for DALL•E. Nathan & his team collaborated with the Dalí Museum & OpenAI to launch Dream Tapestry, a collaborative realtime art-making experience.

The Dream Tapestry allows visitors to create original, realistic Dream Paintings from a text description. Then, it stitches a visitor’s Dream Painting together with five other visitors’ paintings, filling in the spaces between them to generate one collective Dream Tapestry. The result is an ever-growing series of entirely original Dream Tapestries, exhibited on the walls of the museum.

Check it out:

MyHeritage introduces “AI Time Machine”

Another day, another special-purpose variant of AI image generation.

A couple of years ago, MyHeritage struck a chord with the world via Deep Nostalgia, an online app that could animate the faces of one’s long-lost ancestors. In reality it could animate just about any face in a photo, but I give them tons of credit for framing the tech in a really emotionally resonant way. It offered not a random capability, but rather a magical window into one’s roots.

Now the company is licensing tech from Astria, which itself builds on Stable Diffusion & Google Research’s DreamBooth paper. Check it out:

Interestingly (perhaps only to me), it’s been hard for MyHeritage to sustain the kind of buzz generated by Deep Nostalgia. They later introduced the much more ambitious DeepStory, which lets you literally put words in your ancestors’ mouths. That seems not to have moved the overall needle on awareness, at least not in the way the earlier offering did. Let’s see how portrait generation fares.
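For the technically curious, the DreamBooth recipe these services follow fine-tunes a copy of the model on a handful of your photos, binding your likeness to a rare placeholder token; afterward you can prompt for that “person” in any style. Here’s a minimal generation-side sketch using diffusers (the checkpoint path is hypothetical, and “sks” is just the rare-token convention popularized by the paper’s example code):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path: a Stable Diffusion checkpoint that's been DreamBooth-
# fine-tuned on ~20 photos of one person, bound to the rare token "sks".
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-finetuned-model",
    torch_dtype=torch.float16,
).to("cuda")

# The placeholder token now stands in for the person the model was tuned on.
image = pipe("a portrait of sks person as a Victorian aristocrat, oil on canvas").images[0]
image.save("time-machine-style.png")
```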

Neural JNack has entered the chat… 🤖

Last year my friend Bilawal Singh Sidhu, a PM driving 3D experiences for Google Maps/Earth, created an amazing 3D render (also available in galactic core form) of me sitting atop the Trona Pinnacles. At that time he used “traditional” photogrammetry techniques (kind of a funny thing to say about an emerging field that remains new to the world), and this year he tried processing the same footage (comprising a couple of simple orbits from my drone) using new Neural Radiance Field (“NeRF”) tech:

For comparison, here’s the 3D model generated via the photogrammetry approach:

The file is big enough that I’ve had some trouble loading it on my iPhone. If that affects you as well, check out this quick screen recording:

Feedback, please: AI-powered ideation & collaboration?

A new (to me, at least) group called Kive has just introduced AI Canvas.

Here’s a quick demo:

To my eye it’s similar to Prompt.ist, introduced a couple of weeks ago by Facet:

https://twitter.com/josephreisinger/status/1586042022401409024

I’m curious: Have you checked out these tools, and do you intend to put them to use in your creative processes? I have some thoughts that I can share soon, but in the meantime it’d be great to hear yours.

PetPortrait.ai promises bespoke images of animals

We’re at just the start of what I expect to be an explosion of hyper-specific offerings powered by AI.

For $24, PetPortrait.ai offers “40 high resolution, beautiful, one-of-a-kind portraits of your pets in a variety of styles.” They say it takes 4-6 hours and requires the following input:

  • ~10 portrait photos of their face
  • ~5 photos from different angles of their head and chest
  • ~5 full-body photos

It’ll be interesting to see what kind of traction this gets. The service Turn Me Royal offers more human-made offerings in a similar vein, and we delighted our son by commissioning this doge-as-Venetian-doge portrait (via an artist on Etsy) a couple of years ago:

Podcast: “Why Figma is selling to Adobe for $20 billion, with CEO Dylan Field”

I had the chance to grab breakfast with Figma founder & CEO Dylan Field a couple of weeks ago, and I found him to be incredibly modest and down to earth. He reminded me of certain fellow Brown CS majors—the brilliant & gracious founding team of Adobe After Effects. I can’t wait for them all to meet someday soon.

In any case, I really enjoyed the hour-long interview Dylan did with Nilay Patel of The Verge. Here’s hoping that the Adobe deal goes through as planned & that we get to do great things together!

Midjourney can produce stunning type

At Adobe MAX a couple of weeks ago, the company offered a sneak peek of editable type in Adobe Express being rendered via a generative model:

https://twitter.com/jnack/status/1582818166698217472?s=20&t=yI2t5EpbhqVNWb7Ws9DWxQ

That sort of approach could pair amazingly with this sort of Midjourney output:

I’m not working on such efforts & am not making an explicit link between the two—but broadly speaking, I find the intersection of such primitives/techniques to be really promising.

Adobe 3D Design is looking for 2023 interns

These sound like great gigs!

The 3D and Immersive Design Team at Adobe is looking for a design intern who will help envision and build the future of Adobe’s 3D and MR creative tools.

With the Adobe Substance 3D Collection and Adobe Aero, we’re making big moves in 3D, but it is still early days! This is a huge opportunity space to shape the future of 3D and AR at Adobe. We believe that tools shape our world, and by building the tools that power 3D creativity we can have an outsized impact on our world.

Runway “Infinite Canvas” enables outpainting

I’ve tried it & it’s pretty slick. These guys are cooking with gas! (Also, how utterly insane would this have been to see even six months ago?! What a year, what a world.)
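Runway hasn’t (to my knowledge) published how Infinite Canvas works, but conceptually you can approximate outpainting with the same inpainting machinery sketched above: enlarge the canvas, mask the blank strip, and let the model dream up the continuation. A rough diffusers sketch, with placeholder filenames & prompt:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

src = Image.open("scene.png").convert("RGB").resize((512, 512))

# Enlarge the canvas, leaving a 128-pixel blank strip on the right.
canvas = Image.new("RGB", (640, 512))
canvas.paste(src, (0, 0))

# Mask convention for this pipeline: white = repaint, black = preserve.
mask = Image.new("L", (640, 512), 0)
mask.paste(255, (512, 0, 640, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    prompt="a sweeping landscape continuing seamlessly to the right",
    image=canvas,
    mask_image=mask,
    height=512,
    width=640,
).images[0]
out.save("outpainted.png")
```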