I got my professional start at AGENCY.COM, a big dotcom-era startup co-founded by creative whirlwind Kyle Shannon. Kyle has been exploring AI imaging like mad, and recently he’s organized an AI Artists Salon that anyone is welcome to join in person (Denver) or online:
The AI Artists Salon is a collaborative group of creatively-minded people, and we welcome anyone curious about the tsunami of inspiring generative technologies already rocking our world. See Community Links & Resources.
On Tuesday evening I had the chance to present some ideas & progress that have inspired me—nothing confidential about Adobe work, of course, but hopefully illuminating nonetheless. If you’re interested, check it out (and pro tip: if you set playback to 1.5x speed or higher, I sound a lot sharper & funnier!).
💭 Ever wondered if you can embed Luma's interactive NeRFs, panoramas, video renders in your own websites, blogs, etc. or wish you had the ability to customise the UI of the share page? Look no further.
Here’s an example made from a quick capture I did of my friend (nothing special, but amazing what one can get simply by walking in a circle while recording video):
As luck (?) would have it, the commercial dropped on the third anniversary of my former teammate Jon Barron & collaborators bringing NeRFs into existence:
Three years ago today, the project that eventually became NeRF started working (positional encoding was the missing piece that got us from "hmm" to "wow"). Here's a snippet of that email thread between Matt Tancik, @_pratul_, @BenMildenhall, and me. Happy birthday NeRF! pic.twitter.com/UtuQpWsOt4
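For the curious, the positional encoding mentioned above is tiny enough to sketch in a few lines of Python. This is my paraphrase of the formula from the NeRF paper, not the team’s actual code: each input coordinate gets mapped to sines & cosines at exponentially increasing frequencies, which is what lets a small network represent fine, high-frequency detail.

```python
import numpy as np

def positional_encoding(p: np.ndarray, num_freqs: int = 10) -> np.ndarray:
    """NeRF-style encoding: [sin(2^0 * pi * p), cos(2^0 * pi * p), ...,
    sin(2^(L-1) * pi * p), cos(2^(L-1) * pi * p)] for each coordinate."""
    out = []
    for i in range(num_freqs):
        out.append(np.sin((2.0 ** i) * np.pi * p))
        out.append(np.cos((2.0 ** i) * np.pi * p))
    return np.concatenate(out, axis=-1)

# A single 3D point becomes a 60-dimensional feature
# (3 coords x 10 frequencies x {sin, cos}).
xyz = np.array([0.1, -0.4, 0.7])
print(positional_encoding(xyz).shape)  # (60,)
```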
Thank God for the vibrant developer community—esp. Adobe vet Christian Cantrell (who somehow finds time to rev his plugin while serving as VP of product for Stability.ai):
“HEY MAN, you ever drop acid?? No? Well I do, and it looks *just like this*!!” — an excitable Googler when someone wallpapered a big meeting room in giant DeepDream renderings
In a similar vein, have fun tripping balls with AI, courtesy of Remi Molettee:
NVIDIA has announced a new mode for its Canvas painting app that turns simple brushstrokes into 360° environment maps for use in 3D apps or Omniverse. Check out this quick preview:
My teammate CJ Gammon has released a handy new Chrome extension that lets you select any image, then use it as the seed for new image generation. Check it out:
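Under the hood, “select an image, use it as a seed” is the classic img2img recipe. Here’s a minimal local sketch using the open-source diffusers library; this is an assumption for illustration (I don’t know what the extension actually calls behind the scenes), and the model, filenames & prompt are placeholders:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The "seed" image you selected in the browser.
seed_image = Image.open("selected_image.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="the same scene as a watercolor illustration",
    image=seed_image,
    strength=0.6,  # lower values stay closer to the seed image
).images[0]
result.save("variation.png")
```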
In this beautiful work from Paul Trillo & co., AI extends—instead of replaces—human creativity & effort:
Here’s a peek behind the scenes:
This project would have never existed without the use of AI. A variety of tools were used, from #dalle2 and #stablediffusion to generate the background assets, Automatic1111 #img2img and @runwayml to process the video, along with @AdobeAE to create the camera moves and transitions pic.twitter.com/FwqwWto966
1. Take reference photo (you can use any photo – e.g. your real house, it doesn’t have to be dollhouse furniture)
2. Set up Stable Diffusion Depth-to-Image (google “Install Stable Diffusion Depth to Image YouTube”)
3. Upload your photo and then type in your prompts to remix the image
We recommend starting with simple prompts, and then progressively adding extra adjectives to get the desired look and feel. Using this method, @justinlv generated hundreds of options, and then we went through and cherrypicked our favorites for this video
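If you’d rather script step 2 than follow a YouTube install guide, here’s a minimal sketch of the same depth-to-image remix using Hugging Face’s diffusers library. The setup, filenames & prompt are my own assumptions, not necessarily what the team used:

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

# The official Stable Diffusion 2 depth-conditioned model.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("reference_photo.jpg")  # step 1: your own photo

# Step 3: remix with a prompt; depth conditioning preserves the layout.
result = pipe(
    prompt="cozy living room, warm afternoon light, photorealistic",
    image=init_image,
    strength=0.7,  # how far the result may stray from the original
    negative_prompt="blurry, low quality",
).images[0]
result.save("remixed.png")
```

Because the model conditions on an estimated depth map rather than raw pixels, you can push the style hard while the scene’s geometry stays put, which is why the “progressively add adjectives” advice works so well.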
I’m not sure what to say about “The first rap fully written and sung by an AI with the voice of Snoop Dogg,” except that now I really want the ability to drop in collaborations by other well-known voices—e.g. Christopher Walken.
Maybe someone can now lip-sync it with the faces of YoDogg & friends:
The marketers at Heinz had a little fun noticing that an AI image-making app (DALL•E, I’m guessing) tended to interpret requests for “ketchup” in the style of Heinz’s iconic bottle. Check it out:
The whole community of creators, including toolmakers, continues to feel its way forward in the fast-moving world of AI-enabled image generation. For reference, here are some of the statements I’ve been seeing:
“Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.”
Key questions they’ll ask include “Is a project copying or mimicking an artist’s work?” and “Does a project exploit a particular community or put anyone at risk of harm?”
From 3dtotal Publishing:
“3dtotal has four fundamental goals. One of them is to support and help the artistic community, so we cannot support AI art tools as we feel they hurt this community.”
“We oppose the commercial use of Artificially manufactured images and will not allow AI into our annual competitions at all levels.”
“AI was trained using copyrighted images. We will oppose any attempts to weaken copyright protections, as that is the cornerstone of the illustration community.”
This stuff—creating 3D neural models from simple video captures—continues to blow my mind. First up is Paul Trillo visiting the David:
Finally got to see Michaelangelo's David in Florence and rather than just take a photo like normal person, I spent 20 minutes walking around it capturing every angle looking like an insane person. It's hard to look cool when making a #NeRF but damn it looks cool later @LumaLabsAI pic.twitter.com/sLGJ2CKCJy
Numerous apps are promising pure text-to-geometry synthesis, as Luma AI shows here:
✨ Introducing Imagine 3D: a new way to create 3D with text! Our mission is to build the next generation of 3D and Imagine will be a big part of it. Today Imagine is in early access and as we improve we will bring it to everyone https://t.co/VIdilw7kpa pic.twitter.com/v6Yi0mwZsY
On a more immediately applicable front, though, artists are finding ways to create 3D (or at least “two-and-a-half-D”) imagery right from the output of apps like Midjourney. Here’s a quick demo using Blender:
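The basic trick is to estimate a depth map from the single generated image, then use that map to displace geometry. Here’s a rough sketch of the depth-estimation half using the open-source MiDaS model (my choice for illustration; the demo may use a different estimator), with the result saved as a grayscale height map you can feed to a Displace modifier in Blender:

```python
import numpy as np
import torch
from PIL import Image

# Load the MiDaS depth-estimation model & its matching preprocessing.
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

img = np.array(Image.open("midjourney_render.png").convert("RGB"))
with torch.no_grad():
    depth = midas(transform(img)).squeeze().numpy()

# Normalize to 0-255 and save as a displacement map for a subdivided plane.
depth = (255 * (depth - depth.min()) / (np.ptp(depth) + 1e-8)).astype(np.uint8)
Image.fromarray(depth).save("depth_map.png")
```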
In a semi-related vein, I used CapCut to animate a tongue-in-cheek self portrait from my friend Bilawal:
Creative Reality Studio from D-ID (the folks behind the MyHeritage Deep Nostalgia tech that blew up a couple of years ago) can generate faces & scripts, then animate them. I find the results… interesting?
I believe strongly that creative tools must honor the wishes & rights of creative people. Hopefully that sounds thuddingly obvious, but it’s been less obvious how to get to a better state than the one we now inhabit, where a lot of folks are (quite reasonably, IMHO) up in arms about AI models having been trained on their work, without their consent. People broadly agree that we need solutions, but getting to them—especially via big companies—hasn’t been quick.
Thus it’s great to see folks like Mat Dryhurst & Holly Herndon driving things forward, working with Stability.ai and others to define opt-out/-in tools & get buy-in from model trainers. Check out the news:
Excited to announce that @StabilityAI have stepped up to honor artist opt-out requests in advance of the training of Stable Diffusion 3! 🧫🦾🎇
Artist & musician Ben Morin has been making some impressive pop-culture mashups, turning well-known characters into babies (using, I believe, Midjourney to combine a reference image with a prompt). Check out the results.
Our friend Christian Cantrell (20-year Adobe vet, now VP of Product at Stability.ai) continues his invaluable work plugging the world of generative imaging directly into Photoshop. Check out the latest, available for free here:
1) Support for extreme resolutions (up to 1MP). 2) Automatic selection of optimal models. 3) Access to all SD versions (1.4, 1.5, 2.0, and 2.1). 4) Account credits and avatar. https://t.co/gqFWpAkfno pic.twitter.com/DSwbC2xstL
It’s insane to me how much these emerging tools democratize storytelling idioms—and then take them far beyond previous limits. Recently Karen X. Cheng & co. created some wild “drone” footage simply by capturing handheld footage with a smartphone:
Now they’re creating an amazing dolly zoom effect, again using just a phone. (Click through to the thread if you’d like details on how the footage was (very simply) captured.)
NeRF update: Dollyzoom is now possible using @LumaLabsAI I shot this on my phone. NeRF is gonna empower so many people to get cinematic level shots Tutorial below –
Check out the latest magic, as described by Gizmodo:
To make an age-altering AI tool that was ready for the demands of Hollywood and flexible enough to work on moving footage or shots where an actor isn’t always looking directly at the camera, Disney’s researchers, as detailed in a recently published paper, first created a database of thousands of randomly generated synthetic faces. Existing machine learning aging tools were then used to age and de-age these thousands of non-existent test subjects, and those results were then used to train a new neural network called FRAN (face re-aging network).
When FRAN is fed an input headshot, instead of generating an altered headshot, it predicts what parts of the face would be altered by age, such as the addition or removal of wrinkles, and those results are then layered over the original face as an extra channel of added visual information. This approach accurately preserves the performer’s appearance and identity, even when their head is moving, when their face is looking around, or when the lighting conditions in a shot change over time. It also allows the AI generated changes to be adjusted and tweaked by an artist, which is an important part of VFX work: making the alterations perfectly blend back into a shot so the changes are invisible to an audience.
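That “predict the change, not the face” detail is the clever part, and it’s simple enough to illustrate with a toy compositing function (mine, to be clear, not Disney’s code). Because the network outputs only a delta plus a mask, the original performance shows through untouched everywhere else, and an artist can scale the mask to dial the effect up or down:

```python
import numpy as np

def apply_age_delta(frame: np.ndarray, delta: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Composite a predicted re-aging delta over the original frame.

    frame: H x W x 3 floats in [0, 1], the untouched performance
    delta: H x W x 3 floats, predicted per-pixel changes (wrinkles etc.)
    alpha: H x W x 1 floats in [0, 1], where the edit applies
    """
    # Identity & performance are preserved wherever alpha is near zero.
    return np.clip(frame + alpha * delta, 0.0, 1.0)

# An artist could dial the effect to 60% strength for a subtler result:
# aged = apply_age_delta(frame, delta, 0.6 * alpha)
```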
As I say, another day, another specialized application of algorithmic fine-tuning. Per Vice:
For $19, a service called PhotoAI will use 12-20 of your mediocre, poorly-lit selfies to generate a batch of fake photos specially tailored to the style or platform of your choosing. The results speak to an AI trend that seems to regularly jump the shark: A “LinkedIn” package will generate photos of you wearing a suit or business attire…
…while the “Tinder” setting promises to make you “the best you’ve ever looked”—which apparently means making you into an algorithmically beefed-up dudebro with sunglasses.
Meanwhile, the quality of generated faces continues to improve at a blistering pace:
…thus inducing fans to reply with their own variations (click tweet above to see the thread). Among the many fun Snoop Doggs (or is it Snoops Dogg?), I’m partial to Cyberpunk…
Among the many, many things for which I can give thanks this year, I want to express my still-gobsmacked appreciation of the academic & developer communities that have brought us this year’s revolution in generative imaging. One of those developers is our friend & Adobe veteran Christian Cantrell, and he continues to integrate new tech from his new company (Stability AI) into Photoshop at a breakneck pace. Here’s the latest:
Here he provides a quick comparison between results from the previous Stable Diffusion inpainting model (top) & the latest one:
In any event, wherever you are & however you celebrate (or don’t), I hope you’re well. Thanks for reading, and I wish all the best for the coming year!
Among the great pleasures of this year’s revolutions in AI imaging has been the chance to discover & connect with myriad amazing artists & technologists. I’ve admired the work of Nathan Shipley, so I was delighted to connect him with my self-described “grand-mentee” Joanne Jang, PM for DALL•E. Nathan & his team collaborated with the Dalí Museum & OpenAI to launch Dream Tapestry, a collaborative realtime art-making experience.
The Dream Tapestry allows visitors to create original, realistic Dream Paintings from a text description. Then, it stitches a visitor’s Dream Painting together with five other visitors’ paintings, filling in the spaces between them to generate one collective Dream Tapestry. The result is an ever-growing series of entirely original Dream Tapestries, exhibited on the walls of the museum.
Another day, another special-purpose variant of AI image generation.
A couple of years ago, MyHeritage struck a chord with the world via Deep Nostalgia, an online app that could animate the faces of one’s long-lost ancestors. In reality it could animate just about any face in a photo, but I give them tons of credit for framing the tech in a really emotionally resonant way. It offered not a random capability, but rather a magical window into one’s roots.
Now the company is licensing tech from Astria, which itself builds on Stable Diffusion & Google Research’s DreamBooth paper. Check it out:
Interestingly (perhaps only to me), it’s been hard for MyHeritage to sustain the kind of buzz generated by Deep Nostalgia. They later introduced the much more ambitious DeepStory, which lets you literally put words in your ancestors’ mouths. That seems not to have moved the overall needle on awareness, at least in the way that the earlier offering did. Let’s see how portrait generation fares.
Speaking of Bilawal, and in the vein of the PetPortrait.ai service I mentioned last week, here’s a fun little video in which he’s trained an AI model to create images of his mom’s dog. “Oreo lookin’ FESTIVE in that sweater, yo!” 🥰 I can only imagine that this kind of thing will become mainstream quickly.
Last year my friend Bilawal Singh Sidhu, a PM driving 3D experiences for Google Maps/Earth, created an amazing 3D render (also available in galactic core form) of me sitting atop the Trona Pinnacles. At that time he used “traditional” photogrammetry techniques (kind of a funny thing to say about an emerging field that remains new to the world), and this year he tried processing the same footage (comprising a couple of simple orbits from my drone) using new Neural Radiance Field (“NeRF”) tech:
I’m curious: Have you checked out these tools, and do you intend to put them to use in your creative processes? I have some thoughts that I can share soon, but in the meantime it’d be great to hear yours.
I’m not sure whom to credit with this impressive work (found here), nor how exactly they made it, but—like the bespoke pet portraits site I shared yesterday—I expect to see an explosion in such purpose-oriented applications of AI imaging:
We’re at just the start of what I expect to be an explosion of hyper-specific offerings powered by AI.
For $24, PetPortrait.ai offers “40 high resolution, beautiful, one-of-a-kind portraits of your pets in a variety of styles.” They say it takes 4-6 hours and requires the following input:
~10 portrait photos of their face
~5 photos from different angles of their head and chest
~5 full-body photos
It’ll be interesting to see what kind of traction this gets. The service Turn Me Royal offers more human-made offerings in a similar vein, and we delighted our son by commissioning this doge-as-Venetian-doge portrait (via an artist on Etsy) a couple of years ago:
A few weeks ago I shared info on Google’s “Infinite Nature” tech for generating eye-popping fly-throughs from still images. Now that team has shared various interesting tech details on how it all works. And if reading all that isn’t your bag, hey, at least enjoy some beautiful results:
I’m not working on such efforts & am not making an explicit link between the two—but broadly speaking, I find the intersection of such primitives/techniques to be really promising.
He notes, “Custom, fine-tuned models are absolutely game-changing, and in the future will almost certainly represent the majority of diffusion-based creativity.” 👀 Seems like a non-trivial statement coming from the new VP of product at Stability.ai.
I’ve tried it & it’s pretty slick. These guys are cooking with gas! (Also, how utterly insane would this have been to see even six months ago?! What a year, what a world.)
Introducing Infinite Image
Extend any image to infinite possibilities using a text description. A limitless canvas of creativity.
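Under the hood, this kind of “infinite” extension is generally outpainting: park the original image on a bigger canvas, mask the empty region, and let an inpainting model invent the rest from your prompt. Here’s a minimal sketch with an open-source model; the assumptions are mine, as the product’s actual pipeline isn’t public:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

src = Image.open("photo.png").convert("RGB")  # e.g. 512 x 512
w, h = src.size
canvas = Image.new("RGB", (w + 256, h))  # room to grow on the right
canvas.paste(src, (0, 0))

# White marks the pixels the model should invent (the empty right strip).
mask = Image.new("L", canvas.size, 0)
mask.paste(255, (w, 0, w + 256, h))

out = pipe(
    prompt="the scene continues seamlessly to the right",
    image=canvas,
    mask_image=mask,
    width=w + 256,
    height=h,
).images[0]
out.save("extended.png")  # repeat to keep extending the canvas
```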
Christian has trained a model on Rivians & says (ambitiously, but not without some justification) that “This is how all advertising and marketing collateral will be made sooner than most of the world realizes.”
On a related note, here’s a thread (from an engineer at Shopify) on fine-tuning models to generate images of specific products (showing strengths/limitations).
I see numerous custom models emerging that enable creation of art in the style of Spider-Man, Pixar, and more.
OMG—interactive 3D shadow casting in 2D photos FTW! 🔥
In this sneak, we re-imagine what image editing would look like if we used Adobe Sensei-powered technologies to understand the 3D space of a scene – the geometry of a road and the car on the road, and the trees surrounding, the lighting coming from the sun and the sky, the interactions between all these objects leading to occlusions and shadows – from a single 2D photograph.
One of the sleeper features that debuted at Adobe MAX is the new Create Background, found under Neural Filters. (Note that you need to be running the current public beta release of Photoshop, available via the Creative Cloud app—y’know, that little “Cc” icon dealio you ignore in your menu bar. 🙃)
As this quick vid demonstrates, the filter can not only generate backgrounds based on text, it links to a Behance gallery containing images and popular prompts. You can use these visuals as inspiration, then use the prompts to produce artwork within the plugin:
I’m really excited to learn more about this development, which I’ve been eagerly awaiting. More control + more speed will make generative imaging truly, broadly useful. I’d like to understand how it compares to techniques like prompt editing.
Generative AI incorporated into Adobe Express will help less experienced creators achieve their unique goals. Rather than having to find a pre-made template to start a project with, Express users could generate a template through a prompt, and use Generative AI to add an object to the scene, or create a unique font based on their description. But they still will have full control — they can use all of the Adobe Express tools for editing images, changing colors, and adding fonts to create the flyer, poster, or social media post they imagine.