Mayhem in a minivan, LFC (Coexist)!
“The Heist,” conjured entirely in Google Veo
Here’s another interesting snapshot of progress in our collective speedrun towards generative storytelling. It’s easy to pick on the shortcomings, but can you imagine what you’d say upon seeing this in, say, the olden times of 2023?
The creator writes,
Introducing The Heist – Directed by Jason Zada. Every shot of this film was done via text-to-video with Google Veo 2. It took thousands of generations to get the final film, but I am absolutely blown away by the quality, the consistency, and adherence to the original prompt. When I described “gritty NYC in the 80s” it delivered in spades – CONSISTENTLY. While this is still not perfect, it is, hands down, the best video generation model out there, by a long shot. Additionally, it’s important to add that no VFX, no clean up, no color correction has been added. Everything is straight out of Veo 2.
SynthLight promises state-of-the-art relighting
Here’s a nice write-up covering this paper. It’ll be interesting to dig into the details of how it compares to previous work (see category). [Update: The work comes in part from Adobe Research—I knew those names looked familiar :-)—so here’s hoping we see it in Photoshop & other tools soon.]
this is wild..
this new AI relighting tool can detect the light source in the 3D environment of your image and relight your character, the shadows look so realistic..
it’s especially helpful for AI images
10 examples: pic.twitter.com/sxNR39YTeT
— el.cine (@EHuanglu) January 18, 2025
Krea introduces realtime 3D-guided image generation
Part 9,201 of me never getting over the fact we were working on stuff like this 2 years ago at Adobe (modulo the realtime aspect, which is rad) & couldn’t manage to ship it. It’ll be interesting to see whether the Krea guys (and/or others) pair this kind of interactive-quality rendering with a really high-quality pass, as NVIDIA demonstrated last week using Flux.
3D arrived to Krea.
this new feature lets you turn images into 3D objects and use them in our Real-time tool.
free for everyone. pic.twitter.com/b8gQMhUCN9
— KREA AI (@krea_ai) January 16, 2025
RIP David Lynch
Amidst all his groundbreaking masterworks, this clip remains my favorite. 🙂 I’ve been referring to my “f***ing TELEPHONE” for 15 years thanks to him.
Design history: Polish neon, scented movies, and more
Great stuff, as always, from 99 Percent Invisible:
Having taken the kids (and dog!) to the Las Vegas Museum of Neon (photos), now I want to drop by its Warsaw counterpart:
Creating a 3D scene from text
…featuring a dose of Microsoft Trellis!
Here’s how to create this cool 3D scene from a single image!
Midjourney (isometric image generation)
Trellis (Image to 3D Gaussian Splat)
Browser Lab (3D Editor Splat Import) pic.twitter.com/O1vJdaQRbc
— IAN CURTIS (@XRarchitect) January 9, 2025
More about Trellis:
Powered by advanced AI, TRELLIS enables users to create high-quality, customizable 3D objects effortlessly using simple text or image prompts. This innovation promises to improve 3D design workflows, making it accessible to professionals and beginners alike. Here are some examples:
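If you want to kick the tires on the Trellis step of that pipeline yourself, the open-source repo exposes an image-to-3D pipeline. Here’s a minimal sketch, assuming the class, checkpoint, and output names from the microsoft/TRELLIS README at the time of writing (check the repo for current usage):

# Minimal sketch: one Midjourney-style isometric render in, 3D assets out.
# Class and checkpoint names follow the microsoft/TRELLIS README; treat them
# as assumptions and check the repo for current usage.
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline

pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
pipeline.cuda()  # the pipeline expects a CUDA-capable GPU

image = Image.open("isometric_scene.png")  # e.g., a Midjourney isometric render
outputs = pipeline.run(image)

# The pipeline decodes to several representations; Gaussian splats are what
# you'd import into a splat-aware editor for the final scene-assembly step.
outputs["gaussian"][0].save_ply("isometric_scene.ply")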
Coldest animation o’ the year
Happy Friday, y’all. 🙂
Previously: AI does the impossible: making the first actually likable Vanilla Ice song
Adobe demos generation of video with transparency
Exciting!
adobe just released a method that can generate transparent videos from text and images pic.twitter.com/zWKGvDxPxk
— Dreaming Tulpa (@dreamingtulpa) January 8, 2025
From the project page:
Alpha channels are crucial for visual effects (VFX), allowing transparent elements like smoke and reflections to blend seamlessly into scenes. We introduce TransPixar, a method to extend pretrained video models for RGBA generation while retaining the original RGB capabilities. […] Our approach effectively generates diverse and consistent RGBA videos, advancing the possibilities for VFX and interactive content creation.
NVIDIA + Flux = 3D magic
I may never stop being pissed that the Firefly-3D integration we previewed nearly two years ago didn’t yield more fruit, at least on my watch:
Check out a sneak peek of #AdobeFirefly‘s forthcoming 3D module: https://t.co/3OHUqD4ZmI pic.twitter.com/E2ylawcPC1
— John Nack (@jnack) April 22, 2023
The world moves on, and now NVIDIA has teamed up with Black Forest Labs to enable 3D-conditioned image generation. Check out this demo (starting around 1:31:48):
For users interested in integrating the FLUX NIM microservice into their workflows, we have collaborated with NVIDIA to launch the NVIDIA AI Blueprint for 3D-guided generative AI. This packaged workflow allows users to guide image generation by laying out a scene in 3D applications like Blender, and using that composition with the FLUX NIM microservice to generate images that adhere to the scene. This integration simplifies image generation control and showcases what’s possible with FLUX models.
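I haven’t dug into the Blueprint myself, but conceptually a NIM microservice is just a locally hosted HTTP endpoint: you render a guide pass (e.g., depth) from your Blender layout and POST it alongside a prompt. Here’s a hypothetical sketch (the endpoint path and JSON fields are my guesses, not NVIDIA’s documented API):

# Hypothetical sketch of calling a locally hosted FLUX NIM microservice with a
# depth map rendered from a Blender scene. The endpoint path and JSON schema
# below are illustrative assumptions, not NVIDIA's documented API.
import base64
import requests

with open("blender_depth_pass.png", "rb") as f:
    depth_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:8000/v1/infer",  # assumed local NIM endpoint
    json={
        "prompt": "gritty 1980s NYC alley, cinematic lighting",
        "image": depth_b64,            # the 3D scene render used as conditioning
        "mode": "depth",               # assumed name for depth conditioning
    },
    timeout=300,
)
resp.raise_for_status()

# Assumed response shape: base64-encoded image artifacts.
with open("output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["artifacts"][0]["base64"]))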
Skillful Lovecraftian horror
The Former Bird App™ is of course awash in mediocre AI-generated video creations, so it’s refreshing to see what a gifted filmmaker (in this case Ruairi Robinson) can do with emerging tools (in this case Google Veo)—even if that’s some slithering horror I’d frankly rather not behold!
AI vids get sublime & ridiculous
Matan Cohen-Grumi (see previous) asks, “What if music icons walked into their art?”
And on a much more ridiculous tip, there’s the Belt Squared & beyond!
— Awful Taste But Great Execution (@AwfulButGreat) January 4, 2025
Happy New Year!
Happy (very slightly belated) new year, everyone! Thanks for continuing to join me on this wild, sometimes befuddling, often exhilarating journey into our shared creative future. Some good perspective on the path ahead:
Good day to remember just how big that green tree is pic.twitter.com/KIow2bMB70
— Tim Urban (@waitbutwhy) January 1, 2025
Bonus wisdom from F. Scott Fitzgerald:
Wishing you the good cheer of this Lego fireplace
I hope you’ve been able to spend at least some warm times with friends & family this holiday season, and here’s to a great new year of crackling creativity:
New AI-powered upscalers arrive
Check out the latest from Topaz:
Topaz really cooked with their new upscaling model called “redefine” — basically every CSI “enhance” meme you’ve seen IRL.
Settings:
– 4x Upscale
– Creativity: 2
– Texture: 3
– No prompt
It’s basically the Topaz take on the Magnific style of “creative upscaling” where you use… pic.twitter.com/T7dLoAjFJt
— Bilawal Sidhu (@bilawalsidhu) December 17, 2024
Alternately, you can run InvSR via Gradio:
Image super-resolution model just dropped! Superior results even with a single sampling step.
InvSR: Arbitrary-steps Image Super-resolution via Diffusion Inversion. pic.twitter.com/gS7uoGwnQ8
— Gradio (@Gradio) December 16, 2024
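If you’d rather script it than click around, Gradio demos can be driven from Python via the gradio_client library. A sketch, assuming a hosted InvSR Space (the Space name and endpoint below are placeholders; call view_api() to see the real ones):

# Sketch: driving a Gradio demo (here, InvSR) from Python via gradio_client.
# The Space name and predict() arguments are placeholder assumptions; run
# client.view_api() to discover the demo's actual endpoints and parameters.
from gradio_client import Client, handle_file

client = Client("OAOA/InvSR")      # assumed Hugging Face Space name
client.view_api()                  # prints the real parameter names/types

result = client.predict(
    handle_file("low_res.png"),    # input image to super-resolve
    api_name="/predict",           # assumed endpoint name
)
print(result)                      # local path(s) to the upscaled output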
When Hallmark movies go Lego
“From big-city go-getter to small-town goat-getter…” and the obligatory near-beards are in full effect:
Strolling through the latent space in Runway
I’ve long wanted—and advocated for building—this kind of flexible, spatial way to compose & blend among ideas. Here’s to new ideas for using new tools.
Supporting Non-Linear Exploration
Creative exploration rarely follows a straight line. The graph structure naturally affords exploration by allowing users to diverge at various points, creating new forks of possible alternatives. As more exploration occurs, the graph grows… pic.twitter.com/Yq18Caj94T
— Runway (@runwayml) December 2, 2024
Instagram previews AI-generated clothing & environments
It’s a touch odd to me that Meta is investing here while also shutting down the Meta Spark AR lens platform, but I guess interest in lenses has broadly faded, and AI interpretation of images may prove to be more accessible & scalable. (I wonder what’ll be its Dancing Hot Dog moment.)
Cute creatures emerge from Sora
I love seeing exactly how Chad Nelson was able to construct a LittleBigPlanet-inspired game world through some creative prompting & tweening in OpenAI’s new Sora video creation model. Check out his exploratory process:
A rather incredible demo of Pika Scene Ingredients
Director Matan Cohen-Grumi shows off the radical acceleration in VFX-heavy storytelling that’s possible through emerging tools—including Pika’s new Scene Ingredients:
For 10 years, I directed TV commercials, where storytelling was intuitive—casting characters, choosing locations, and directing scenes effortlessly. When I shifted to AI over a year ago, the process felt clunky—hacking together solutions, spending hours generating images, and… pic.twitter.com/pJUamLFgWI
— Matan Cohen-Grumi (@MatanCohenGrumi) December 18, 2024
Google introduces “Whisk,” a fun image remixer
Check out this fun little toy:
Instead of generating images with long, detailed text prompts, Whisk lets you prompt with images. Simply drag in images, and start creating.
Whisk lets you input images for the subject, one for the scene and another image for the style. Then, you can remix them to create something uniquely your own, from a digital plushie to an enamel pin or sticker.
Meet Whisk! Our new experiment that lets you use images as prompts to visualize your ideas and tell your story. Try it now: https://t.co/BR1z7gmDs6 pic.twitter.com/2zrPLQZlga
— labs.google (@labsdotgoogle) December 16, 2024
The blog post gives a bit more of a peek behind the scenes & sets some expectations:
Since Whisk extracts only a few key characteristics from your image, it might generate images that differ from your expectations. For example, the generated subject might have a different height, weight, hairstyle or skin tone. We understand these features may be crucial for your project and Whisk may miss the mark, so we let you view and edit the underlying prompts at any time.
In our early testing with artists and creatives, people have been describing Whisk as a new type of creative tool — not a traditional image editor. We built it for rapid visual exploration, not pixel-perfect edits. It’s about exploring ideas in new and creative ways, allowing you to work through dozens of options and download the ones you love.
And yes, uploading a 19th-century dog illustration to generate a plushie dancing an Irish jig is definitely the most JNack way to squander precious work time (er, do vital market research). 🙂
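Under the hood, Google has described Whisk as using Gemini to caption your input images and then feeding those captions to its image generator, which is why only “a few key characteristics” survive the trip. A toy sketch of that caption-then-recombine pattern (both helpers are hypothetical stand-ins, not Google’s API):

# Toy sketch of Whisk's caption-then-recombine pattern: each input image is
# reduced to a short text description, and the descriptions are fused into a
# single generation prompt. Both helpers are hypothetical stand-ins (Google
# has described using Gemini for captioning and Imagen 3 for generation).

def describe(image_path: str, role: str) -> str:
    # Stand-in for a vision model captioning one input image.
    return f"<{role} extracted from {image_path}>"

def generate(prompt: str) -> str:
    # Stand-in for a text-to-image model; here we just echo the prompt.
    return f"<image generated from: {prompt}>"

subject = describe("victorian_dog.png", role="subject")
scene = describe("irish_pub.png", role="scene")
style = describe("enamel_pin.png", role="style")

# Only these extracted descriptions reach the image model, which is why details
# like height or hairstyle can drift (and why Whisk exposes editable prompts).
print(generate(f"{subject}, placed in {scene}, rendered in the style of {style}"))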
Ideogram AI enables batch creation
I’m a near-daily user of Ideogram to create all manner of images—mainly goofy dad jokes to (ostensibly) entertain my family. Now they’re enabling batch creation to facilitate creation of lots of variations (e.g. versions of a logo):
Sora + scissors = … crazy bird puppetry?
Check out this wild video-to-video demo from Nathan Shipley:
Sora Remix test: Scissors to crane
Prompt was “Close up of a curious crane bird looking around a beautiful nature scene by a pond. The birds head pops into the shot and then out.” pic.twitter.com/CvAkdkmFBQ
— Nathan Shipley (@CitizenPlain) December 10, 2024
The cool generative 3D hits keep coming
Just a taste of the torrent that blows past daily on The Former Bird App:
- Rodin 3D: “Rodin 3D AI can create stunning, high-quality 3D models from just text or image inputs.”
- Trellis 3D: “Iterative prompting/mesh editing. You can now prompt ‘remove X, add Y, Move Z, etc.’… Allows decoding to different output formats: Radiance Fields, 3D Gaussians, and meshes.”
- Blender GPT: “Generating 3D assets has never been easier. Here’s me putting together an entire 3D scene in just over a minute.”
Google demos amazing image editing done purely through voice
This might be the world’s lowest-key demo of what promises to be truly game-changing technology!
I’ve tried a number of other attempts at unlocking this capability (e.g. Meta.ai (see previous), Playground.com, and what Adobe sneak-peeked at the Firefly launch in early 2023), but so far I’ve found them all more unpredictable & frustrating than useful. Could Gemini now have turned the corner? Only hands-on testing (not yet broadly available) will tell!
Microsoft opens 13 new AI + Design roles
If you or folks you know might be a good fit for one or more of these roles, please check ’em out & pass along info. Here’s some context from design director Mike Davidson.
————
These positions are United States only, Redmond-preferred, but we’ll also consider the Bay Area and other locations:
- Principal UX Researcher
- Design Director – News & Interests
- Principal Designer – News & Interests
- Senior Designer – News & Interests
- Principal Designer – Consumer Advertising
- Principal Designer – Growth
- Senior Designer – Growth
- Principal Designer – Rewards
- Principal Designer – Design Systems
These positions are specifically in our lovely Mountain View office:
Behind the scenes of “Senna”
I’m perpetually a sucker for peeks behind the curtain of movie- and TV-making magic like this:
Photoshop’s Object Selection tool gets upgraded
Nice to see this progress. (FWIW Microsoft Designer features similar tech; just putting that out there. :-))
New in the @Photoshop Beta! The Object Selection tool just got supercharged! pic.twitter.com/VrfxCQa84W
— Howard Pinsky (@Pinsky) December 9, 2024
Oil painting in Photoshop with AI
Karen X, back doing crafty Karen X things:
AI painting tutorial
Edited on my Intel AI PC – the DELL XPS 13 powered by Intel Core Ultra #ad pic.twitter.com/wcqpR3RhFk
— Karen X. Cheng (@karenxcheng) December 10, 2024
Times New Dumbass
Listen, if you’re gonna keep trying to fashion your dad-bod into a letter “X”, don’t be surprised if the Internet extrapolates from there.
Shedding new light with LumiNet
Diffusion models are ushering in what feels like a golden(-hour) age in relighting (see previous). Among the latest offerings is LumiNet:
[6/7] Here are a few more random relighting!
How accurate are these results? That’s very hard to answer at the moment. But our tests on the MIT dataset, our user study, plus qualitative results all point to us being on the right track.
It’s like we’ve cracked open a… pic.twitter.com/1FNlz8S9Fk
— Anand Bhattad (@anand_bhattad) December 5, 2024
The world’s most laborious stick-figure animation?
Could be—but that’s what makes it fun! Take it away, Stephen:
Trolling Coca-Cola, AI, and sugar all at once
Zevia cleverly mocks Coke’s use of AI to generate its recent commercial:
Looks like Body Armor had a similar idea a few months back:
I’ve shipped my first feature at Microsoft!
What if your design tool could understand the meaning & importance of words, then help you style them accordingly?
I’m delighted to say that for what I believe is the first time ever, that’s now possible. For the last 40 years of design software, apps have of course provided all kinds of fonts, styles, and tools for manual typesetting. What they’ve lacked is an understanding of what words actually mean, and consequently of how they should be styled in order to map visual emphasis to semantic importance.
In Microsoft Designer, you can now create a new text object, then apply hierarchical styling (primary, secondary, tertiary) based on AI analysis of word importance:
I’d love to hear what you think. You can go to designer.microsoft.com, create a new document, and add some text. Note: The feature hasn’t yet been rolled out to 100% of users, so it may not yet be available to you—but even in that case it’d be great to hear your thoughts on Designer in general.
This feature came about in response to noticing that text-to-image models are not only learning to spell well (check out some examples I’ve gathered on Pinterest), but can also set text with varied size, position, and styling that’s appropriate to the importance of each word. Check out some of my Ideogram creations (which you can click on & remix using the included prompts):
These results are of course incredible (imagine seeing any of this even three years ago!), but they’re just flat images, not editable text. Our new feature, by contrast, leverages semantic understanding and applies it to normal text objects.
What we’ve shipped now is just the absolute tip of the iceberg: to start we’re simply applying preset values based on word hierarchy, but you can readily imagine richer layouts, smart adaptive styling, and much more. Stay tuned—and let us know what you’d like to see!
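To make the “preset values based on word hierarchy” idea concrete, here’s a toy sketch (the tier names and style values are illustrative, not Designer’s actual internals):

# Toy sketch of mapping AI-ranked word importance to preset type styles.
# The tiers and values are illustrative, not Microsoft Designer's internals.

PRESETS = {
    "primary":   {"size": 96, "weight": "bold"},
    "secondary": {"size": 48, "weight": "medium"},
    "tertiary":  {"size": 24, "weight": "regular"},
}

def style_text(words_with_tiers: list[tuple[str, str]]) -> list[dict]:
    # Apply a preset style to each word based on its semantic tier.
    return [{"text": word, **PRESETS[tier]} for word, tier in words_with_tiers]

# In the real feature a language model assigns the tiers; here they're hard-coded.
print(style_text([
    ("HUGE", "primary"),
    ("summer", "secondary"),
    ("sale", "primary"),
    ("this weekend only", "tertiary"),
]))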
Tons of interesting recent 3D/AI developments
I’m still catching up from Thanksgiving, obvs; enjoy these tasty leftovers (all links to demo vids on Twitter—yes, always “Twitter”):
Motion Brush enables more image-to-video control
Speaking of Kling, the new Motion Brush feature enables smart selection, generative fill, and animation all in one go. Check out this example, and click into the thread for more:
Kling AI 1.5 Motion Brush is incredible.
You can give different motions to multiple subjects in the same scene.
Game changing control and quality
6 wild examples: pic.twitter.com/sEDbNC1iPq
— Min Choi (@minchoi) November 30, 2024
Kling AI promises virtual try-ons
Accurately rendering clothing on humans, and especially estimating their dimensions to enable proper fit (and thus reduce costly returns), has remained a seductive yet stubbornly difficult problem. I’ve written previously about challenges I observed at Google, plus possible steps forward.
Now Kling is promising to use generative video to pair real people & real outfits for convincing visualization (but not fit estimation). Check it out:
Kling AI just dropped AI Try-On.
Now anyone can change outfits on anyone.
8 wild examples: pic.twitter.com/EKoYjKxTRd
— Min Choi (@minchoi) November 30, 2024
The product provides pre-set models and clothing.
But you can also upload your own – making anyone model anything.
Here’s me in a tank top and jeans I found online pic.twitter.com/FYX3QvHxP0
— Justine Moore (@venturetwins) November 29, 2024
Celebrating Saul Bass
It’s a real joy to see my 15yo son Henry’s interest in design & photography blossom, and last night he fell asleep perusing the giant book of vintage logos we scored at the Chicago Art Institute. I’m looking forward to acquainting him with the groundbreaking work of Saul Bass & figured we’d start here:
FlipSketch promises text-to-animation
We present FlipSketch, a system that brings back the magic of flip-book animation — just draw your idea and describe how you want it to move! …
Unlike constrained vector animations, our raster frames support dynamic sketch transformations, capturing the expressive freedom of traditional animation. The result is an intuitive system that makes sketch animation as simple as doodling and describing, while maintaining the artistic essence of hand-drawn animation.
Oh, I love this one!
FlipSketch can generate sketch animations from static drawings using text prompts!
Links ⬇️ pic.twitter.com/1XPzkWfaEl
— Dreaming Tulpa (@dreamingtulpa) November 22, 2024
Letter Love: 40 Postcards from the Collection of Letterform Archive
I had the chance to visit this space in SF a couple of months ago & really enjoyed just scratching the surface of their amazing collection. Now they’re offering a book of beautiful postcards drawn from their archives:
BlendBox AI promises fast, interactive compositing
I’m finding the app (which is free to try for a couple of moves, but which quickly runs out of credits) to be pretty wacky, as it continuously regenerates elements & thus struggles with identity preservation. The hero vid looks cool, though:
BlendBox AI: Seamlessly Blend Multiple Images with Ease
It makes blending images effortless and precise.
The real-time previews let us fine-tune edits instantly, and we can generate images with AI or import our own images.
Here is how to use it: pic.twitter.com/9LyVF8x8qN
— el.cine (@EHuanglu) November 19, 2024
8-bit mayhem: Jake Paul’s Senior Punch-Out
I’m deceased—along with everyone else who battled Iron Mike on the NES! Now let’s “Bring the pain to your elders”:
AI fixes (?) The Polar Express
Hmm—”fix” is a strong word for reinterpreting the creative choices & outcomes of an earlier generation of artists, but it’s certainly interesting to see the divisive Christmas movie re-rendered via emerging AI tech (Midjourney Retexturing + Hailuo Minimax). Do you think the results escape the original’s deep uncanny valley? See more discussion here.
Someone fixed Polar Express (Midjourney Retexturing + Hailuo Minimax) pic.twitter.com/6RjrABbAxO
— Angry Tom (@AngryTomtweets) November 12, 2024
NVIDIA promises text-to-3D-mesh
Check out LLaMA-Mesh (demo):
Nvidia presents LLaMA-Mesh
Unifying 3D Mesh Generation with Language Models pic.twitter.com/g8TTaXILMe
— AK (@_akhaliq) November 15, 2024
Incisive points on AI & filmmaking from Ben Affleck
Ignoring the misguided (IMHO) contents of the surrounding tweet, I found these four minutes of commentary to be extremely sharp & well informed:
I wonder whether such statements are psychological defense mechanisms such as repression and denial.
In any case, some people will very soon realize that reality is different from their illusory wishful thinking. pic.twitter.com/Y9mDkAZToI
— Chubby♨️ (@kimmonismus) November 15, 2024
Beautiful animated titles for “La Maison”
Happy Friday, y’all.
Bonus: Speaking of French fashion & technology, check out punch-card tech from 200+ years ago! (Side note: the machine lent its name to Google & Levis’ Project Jacquard smart clothing.)
[Both via fashionista/technologist Margot Nack]
Krea brings custom style training to Flux
Creative control to the people! I can’t wait to try this out:
the wait is over, our new AI trainer is out!
it comes with upgraded quality and hundreds of community styles you can use in your generations.
full tutorial below pic.twitter.com/oFtaiEmCS9
— KREA AI (@krea_ai) November 14, 2024
Typographical license plate o’ the day
10/10, no notes. :->
It’s done. https://t.co/Zbwrcww9ts pic.twitter.com/7KdfbrPCJt
— Eugene Fedorenko (@efedorenko) November 7, 2024
New Google ReCapture tech enables post-capture camera control
Man, I miss working with these guys & gals…
We present ReCapture, a method for generating new videos with novel camera trajectories from a single user-provided video. Our method allows us to re-generate the source video, with all its existing scene motion, from vastly different angles and with cinematic camera motion.
They note that ReCapture is substantially different from prior work: existing methods can control the camera either on images or on generated videos, but not on arbitrary user-provided videos. Check it out: