I’m still catching up from Thanksgiving, obvs; enjoy these tasty leftovers (all links to demo vids on Twitter—yes, always “Twitter”):
Motion Brush enables more image-to-video control
Speaking of Kling, the new Motion Brush feature enables smart selection, generative fill, and animation all in one go. Check out this example, and click into the thread for more:
Kling AI 1.5 Motion Brush is incredible.
You can give different motions to multiple subjects in the same scene.
Game changing control and quality
6 wild examples: pic.twitter.com/sEDbNC1iPq
— Min Choi (@minchoi) November 30, 2024
Kling AI promises virtual try-ons
Accurately rendering clothing on humans, and especially estimating their dimensions to enable proper fit (and thus reduce costly returns), has remained a seductive yet stubbornly difficult problem. I’ve written previously about challenges I observed at Google, plus possible steps forward.
Now Kling is promising to use generative video to pair real people & real outfits for convincing visualization (but not fit estimation). Check it out:
Kling AI just dropped AI Try-On.
Now anyone can change outfits on anyone.
8 wild examples: pic.twitter.com/EKoYjKxTRd
— Min Choi (@minchoi) November 30, 2024
The product provides pre-set models and clothing.
But you can also upload your own – making anyone model anything.
Here’s me in a tank top and jeans I found online pic.twitter.com/FYX3QvHxP0
— Justine Moore (@venturetwins) November 29, 2024
Celebrating Saul Bass
It’s a real joy to see my 15yo son Henry’s interest in design & photography blossom, and last night he fell asleep perusing the giant book of vintage logos we scored at the Chicago Art Institute. I’m looking forward to acquainting him with the groundbreaking work of Saul Bass & figured we’d start here:
FlipSketch promises text-to-animation
We present FlipSketch, a system that brings back the magic of flip-book animation — just draw your idea and describe how you want it to move! …
Unlike constrained vector animations, our raster frames support dynamic sketch transformations, capturing the expressive freedom of traditional animation. The result is an intuitive system that makes sketch animation as simple as doodling and describing, while maintaining the artistic essence of hand-drawn animation.
Oh, I love this one!
FlipSketch can generate sketch animations from static drawings using text prompts!
Links ⬇️ pic.twitter.com/1XPzkWfaEl
— Dreaming Tulpa (@dreamingtulpa) November 22, 2024
Letter Love: 40 Postcards from the Collection of Letterform Archive
I had the chance to visit this space in SF a couple of months ago & really enjoyed just scratching the surface of their amazing collection. Now they’re offering a book of beautiful postcards drawn from their archives:
BlendBox AI promises fast, interactive compositing
I’m finding the app (which is free to try for a couple of moves, but which quickly runs out of credits) to be pretty wacky, as it continuously regenerates elements & thus struggles with identity preservation. The hero vid looks cool, though:
BlendBox AI: Seamlessly Blend Multiple Images with Ease
It makes blending images effortless and precise.
The real-time previews let us fine tune edits instantly, and we can generate images with AI or import our own Images.
Here is how to use it: pic.twitter.com/9LyVF8x8qN
— el.cine (@EHuanglu) November 19, 2024
8-bit mayhem: Jake Paul’s Senior Punch-Out
AI fixes (?) The Polar Express
Hmm—“fix” is a strong word for reinterpreting the creative choices & outcomes of an earlier generation of artists, but it’s certainly interesting to see the divisive Christmas movie re-rendered via emerging AI tech (Midjourney Retexturing + Hailuo Minimax). Do you think the results escape the original’s deep uncanny valley? See more discussion here.
Someone fixed Polar Express (Midjourney Retexturing + Hailuo Minimax) pic.twitter.com/6RjrABbAxO
— Angry Tom (@AngryTomtweets) November 12, 2024
NVIDIA promises text-to-3D-mesh
Check out LLaMA-Mesh (demo):
Nvidia presents LLaMA-Mesh
Unifying 3D Mesh Generation with Language Models pic.twitter.com/g8TTaXILMe
— AK (@_akhaliq) November 15, 2024
Incisive points on AI & filmmaking from Ben Affleck
Ignoring the misguided (IMHO) contents of the surrounding tweet, I found these four minutes of commentary to be extremely sharp & well informed:
I wonder whether such statements are psychological defense mechanisms such as repression and denial.
In any case, some people will very soon realize that reality is different from their illusory wishful thinking. pic.twitter.com/Y9mDkAZToI
— Chubby♨️ (@kimmonismus) November 15, 2024
Beautiful animated titles for “La Maison”
Happy Friday, y’all.
Bonus: Speaking of French fashion & technology, check out punch-card tech from 200+ years ago! (Side note: the machine lent its name to Google & Levi’s Project Jacquard smart clothing.)
[Both via fashionista/technologist Margot Nack]
Krea brings custom style training to Flux
Creative control to the people! I can’t wait to try this out:
the wait is over, our new AI trainer is out!
it comes with upgraded quality and hundreds of community styles you can use in your generations.
full tutorial below pic.twitter.com/oFtaiEmCS9
— KREA AI (@krea_ai) November 14, 2024
Typographical license plate o’ the day
10/10, no notes. :->
It’s done. https://t.co/Zbwrcww9ts pic.twitter.com/7KdfbrPCJt
— Eugene Fedorenko (@efedorenko) November 7, 2024
New Google ReCapture tech enables post-capture camera control
Man, I miss working with these guys & gals…
We present ReCapture, a method for generating new videos with novel camera trajectories from a single user-provided video. Our method allows us to re-generate the source video, with all its existing scene motion, from vastly different angles and with cinematic camera motion.
They note that ReCapture is substantially different from other work: existing methods can control the camera only on images or on generated videos, not on arbitrary user-provided videos. Check it out:
Cheerful nihilism o’ the day
Surreal analog creations from Lola Dupre
“I’m so f*ckin’ sick & tired of the Photoshop” — Kendrick Lamar, and possibly Lola Dupre:
[Via Uri Ar]
Shake your bones
I meant to post this incredibly weird old-ish Chemical Brothers video for Halloween. Seems somehow just as appropriate this morning, imagery+mood-wise.
A love letter to splats
Paul Trillo relentlessly redefines what’s possible in VFX—in this case scanning his back yard to tour a magical tiny world:
Getting my hands dirty with 30 Gaussian splats scanned in my garden. Is this the most splats ever in a single shot?
Made with the support of @Lenovo @Snapdragon and the new Gaussian Splatting plugin by @irrealix pic.twitter.com/ezXo6MMnQi
— Paul Trillo (@paultrillo) October 3, 2024
Here he gives a peek behind the scenes:
How I created the love letter to the garden and bashed together 30 different Gaussian splats into a single scene pic.twitter.com/OKxDFtK8uE
— Paul Trillo (@paultrillo) October 18, 2024
And here’s the After Effects plugin he used:
Lego Brick-o-Lantern
Happy Halloween, y’all!
Stop-motion M4 Mac mini ad
You know I love some stop-motion animation, and the vibe & copywriting here are more cheeky & charming than just about anything I’ve seen in a while:
1980s Reimagined Logos of Popular Brands
Heh—charming vaporwave & chiptune:
Peeps of a certain age & demographic see these & get immediate PBS/WGBH vibes:
Thunder & The Deep Blue Sea
Everybody needs a good wingman, and when it comes to celebrating the beauty of aviation, I’ve got a great one in my son Henry. Much as we’ve done the last couple of years, this month we first took in the air show in Salinas, featuring the USAF Thunderbirds…

…followed by the Blue Angels buzzing Alcatraz & the Golden Gate at Fleet Week in San Francisco.

In both cases we were treated to some jaw-dropping performances—from a hovering F-35 to choreographed walls of fire—from some of the best aviators in the world. Check ’em out:
And thanks for the nice shootin’, MiniMe!

Stop-Motion: Will Smith vs. Pumpkin
Who needs AI Will gobbling spaghetti (which has somehow become the Utah Teapot of generative video!) when you’ve got this tiny, meticulous excellence?
More relighting goodness
DifFRelight can change flat-lit facial captures into high-quality images and dynamic sequences with complex lighting!
It uses a diffusion-based model for precise lighting control, accurately showing effects like eye reflections and skin texture.https://t.co/iXZjpVOxFx pic.twitter.com/Rwj7jTBnRd
— Dreaming Tulpa (@dreamingtulpa) October 23, 2024
Relighting via Midjourney
Check out this impressive use of the new “retexture” feature, which enables image-to-image transformations:
Wow!
Midjourney’s edit and retexture features are incredible!
I retextured my profile image using some of my sref codes and animated it with LumaLabs.
The new images are stunning, and the animated video looks even better.
You can apply this to any image in no time!
Prompt… pic.twitter.com/TYhD7VzZAS
— Umesh (@umesh_ai) October 24, 2024
Here’s a bit more on how the new editing features work:
We’re testing two new features today: our image editor for uploaded images and image re-texturing for exploring materials, surfacing, and lighting. Everything works with all our advanced features, such as style references, character references, and personalized models pic.twitter.com/jl3a1ZDKNg
— Midjourney (@midjourney) October 23, 2024
Ideogram Canvas arrives
I’ve become an Ideogram superfan, using it to create imagery daily, so I’m excited to kick the tires on this new interactive tool—especially around its ability to synthesize new text in the style of a visual reference.
Today, we’re introducing Ideogram Canvas, an infinite creative board for organizing, generating, editing, and combining images.
Bring your face or brand visuals to Ideogram Canvas and use industry-leading Magic Fill and Extend to blend them with creative, AI-generated content. pic.twitter.com/m2yjulvmE2
— Ideogram (@ideogram_ai) October 22, 2024
You can upload your own images or generate new ones within Canvas, then seamlessly edit, extend, or combine them using industry-leading Magic Fill (inpainting) and Extend (outpainting) tools. Use Magic Fill and Extend to bring your face or brand visuals to Ideogram Canvas and blend them with creative, AI-generated elements. Perfect for graphic design, Ideogram Canvas offers advanced text rendering and precise prompt adherence, allowing you to bring your vision to life through a flexible, iterative process.
“Big alpha channel energy”
The hilariously nerdy Elle Cordova (of “Fonts Hanging Out” fame) is back, this time as some of our favorite (and least favorite) file formats.
San Jose derails through the power of AI
Filmmaker & Pika Labs creative director Matan Cohen Grumi makes this town look way more dynamic than usual (than ever?) through the power of his team’s tech:
Took @pika_labs AI effects to the streets of San Jose. It’s crazy what you can create with just a phone, Pika and some basic edits #pikaffects pic.twitter.com/uzN2KyLHnh
— Matan Cohen-Grumi (@MatanCohenGrumi) October 19, 2024
Doom scrolling, the SNL way
Oh man, what a mixture of tiny, relatable jokes (“STOP”) with absolutely incomprehensible gibberish—i.e., TikTok in a nutshell. I think you’ll dig it:
Project Turntable spins me right ’round
Adobe’s new generative 3D/vector tech is a real head-turner. I’m impressed that the results look like clean, handmade paths, with colors that match the original—and not like automatic tracing of crummy text-to-3D output. I can’t wait to take it for a… oh man, don’t say it don’t say it… spin.
Project Perfect Blend promises game-changing compositing in Photoshop
Oh man, for years we wanted to build this feature into Photoshop—years! We tried many times (e.g. I wanted this + scribble selection to be the marquee features in Photoshop Touch back in 2011), but the tech just wasn’t ready. But now, maybe, the magic is real—or at least tantalizingly close!
Being a huge nerd, I wonder about how the tech works, and whether it’s substantially the same as what Magnific has been offering (including via a Photoshop panel) for the last several months. Here’s how I used that on my pooch:

But even if it’s all the same, who cares?
Being useful to people right where they live & work, with zero friction, is tremendous. Generative Fill is a perfect example: similar (if lower-quality) inpainting was available from DALL•E for a year+ before we shipped GenFill in Photoshop, but the latter has quietly become an indispensable, game-changing piece of the imaging puzzle for millions of people. I’d love to see compositing improvements go the same way.
The ceiling can’t hold us stuffed animals
As I drove the Micronaxx to preschool back in 2013, Macklemore’s “Can’t Hold Us” hit the radio & the boys flipped out, making their stuffed buddies Leo & Ollie go nuts dancing to the tune. I remember musing with Dave Werner (a fellow dad to young kids) about being able to animate said buddies.
Fast forward a decade+, and now Dave is using Adobe’s recently unveiled Firefly Video model to do what we could only dimly imagine back then:
Bringing stuffed animals to life with Adobe Firefly Generate Video. pic.twitter.com/XSbQxaIDiD
— Dave Werner (@okaysamurai) October 16, 2024
Time to unearth Leo & get him on stage at last. :->
Extremely metal “I Voted” sticker
Aw hell yeah, 12yo illustrator Jane!
IM FUCKING CRYING pic.twitter.com/fsSqPxsHhQ
— Casey Shea Enthusiast (@csheaenthusiast) October 3, 2024
AI-flavored vacation pix: Delightful nightmare fuel
Enjoy the latest from Magnific impresario Javi Lopez!
PART 3: Handed my vacation videos to an AI for auto editing, and now I’m pretty sure I’ll have nightmares for life pic.twitter.com/jYX1TZ4rMX
— Javi Lopez (@javilopen) October 12, 2024
Striking visualizations of a storm surge
Amazing, and literally immersive, work by artists at The Weather Channel. Yikes—stay safe out there, everybody.
The 3D artists at the weather channel deserve a raise for this insane visual
Now watch this, and then realize forecasts are now predicting up to 15 ft of storm surge in certain areas on the western coast of Florida pic.twitter.com/HHrCVWNgpg
— wave (@0xWave) October 8, 2024
Flair AI promises brand-consistent video creation
Ever since Google dropped DreamBooth back in 2022, people have been trying—generally without much success—to train generative models that can incorporate the fine details of specific products. Thus far it just hasn’t been possible to meet most brands’ demanding requirements for fidelity.
Now tiny startup Flair AI promises to do just that—and to pair the object definitions with custom styling and even video. Check it out:
You can now generate brand-consistent video advertisements for your products on @flairAI_
1. Train a model on your brand’s aesthetic
2. Train a model on your clothing or product
3. Combine both models in one prompt
4. Animate✨In beta – comment/RT for access and free credits pic.twitter.com/88NYLVOFSQ
— Mickey Friedman (@mickeyxfriedman) October 7, 2024
In search of The Something Else
Late last night my wife & I found ourselves in the depths of the Sunday Evening Blues—staring out towards the expanse of yet another week of work & school, without much differentiation from most of those before & after it. I’m keenly aware of the following fact, of course:

And yet, oof… it’s okay to acknowledge the petty creeping of tomorrow & tomorrow & tomorrow. The ennui will pass—as everything always does—but it’s real.
This reminded me of the penguin heroine in what was one of our favorite books to read to the Micronaxx back when they were actually micro, A Penguin Story by Antoinette Portis. Ol’ Edna is always searching for The Something Else—and she finds it! I came across this charming little narration of the story, and just in case you too might need a little avian encouragement—well, enjoy:
Meta AI introduces conversational editing
I was super hyped last year when Meta announced “Emu Edit” tech for selectively editing images using just language:
Now you can try the tech via Meta.ai and in various apps:
Meta has casually released the best AI image editor
You can upload your image to Meta AI and just write the edits you want to make.
Accessible for free in WhatsApp, Instagram, Messenger, Facebook, etc. pic.twitter.com/jJEhMdJadT
— Paul Couvert (@itsPaulAi) October 2, 2024
In my limited experience so far, it’s cool but highly unpredictable. I’ll test it further, and I’d love to know how it works for you. Meanwhile you can try similar techniques via https://playground.com/:
Welcome to the new Playground
Use AI to design logos, t-shirts, social media posts, and more by just texting it like a person.
Watch: pic.twitter.com/eSwJcJUxtB
— Playground (@playground_ai) September 3, 2024
RIP Dikembe Mutombo
[I know this note seems supremely off topic, but bear with me.]
I’m sorry to hear of the passing of larger-than-life NBA star Dikembe Mutombo. He inspired the name of a “Project Mutombo” at Google, which was meant to block unintended sharing of content outside of one’s company. Unrelated (AFAIK he never knew of the project), back in 2015 I happened to see him biking around campus—dwarfing a hapless Google Bike & making its back tire cartoonishly flat.
RIP, big guy. Thanks for the memories, GIFs, and inspiration.
Fun VFX from Runway Turbo
As always, I’m blown away in equal parts by:
- Just how powerful this tech is becoming, and
- Just how blasé we all can be about it all
Days of Miracles & Wonder, amirite?
Wow @runwayml just dropped an updated Gen-3 Alpha Turbo Video-to-Video mode & it’s awesome! It’s super fast & lets you do 9:16 portrait video. Anything is possible! pic.twitter.com/AxeFaJwAPR
— Blaine Brown (@blizaine) September 28, 2024
Zuck talks AR wearables & much more
I quite enjoyed the Verge’s interview with Mark Zuckerberg, discussing how they think about building a whole range of reality-augmenting devices, from no-display Wayfarers to big-ass goggles, and especially to “glasses that look like glasses”—the Holy Grail in between.
Links to some of the wide-ranging topics they covered:
00:00 Orion AR smart glasses
00:27 Platform shift from mobile to AR
02:15 The vision for Orion & AR glasses
03:55 Why people will upgrade to AR glasses
05:20 A range of options for smart glasses
07:32 Consumer ambitions for Orion
11:40 Reality Labs spending & the cost of AR
12:44 Ray-Ban partnership
17:11 Ray-Ban Meta sales & success
18:59 Bringing AI to the Ray-Ban Meta
21:54 Replacing phones with AR glasses
25:18 Influx of AI content on social media
28:32 The vision for AI-filled social media
34:04 Will AI lead to less human interaction?
35:24 Success of Threads
36:41 Competing with X & the role of news
40:04 Why politics can hurt social platforms
41:52 Mark’s shift away from politics
46:00 Cambridge Analytica, in hindsight
49:09 Link between teen mental health and social media
53:52 Disagreeing with EU regulation
56:06 Debate around AI training data & copyright
1:00:07 Responsibility around AR as a platform
Tangentially, I gave myself an unintended chuckle with this:
Fun, unintended juxtaposition at the top of my camera roll. pic.twitter.com/MsNHJUeFvB
— John Nack (@jnack) September 26, 2024
A sobering micro-critique of AI
iPhone goes on safari
Austin Mann puts the new gear through its paces in Kenya:
Last week at the Apple keynote event, the iPhone camera features that stood out the most to me were the new Camera Control button, upgraded 48-megapixel Ultra Wide sensor, improved audio recording features (wind reduction and Audio Mix), and Photographic Styles. […]
Over the past week we’ve traveled over a thousand kilometers across Kenya, capturing more than 10,000 photos and logging over 3TB of ProRes footage with the new iPhone 16 Pro and iPhone 16 Pro Max cameras. Along the way, we’ve gained valuable insights into these camera systems and their features.
A little encouragement from Carl Jung

Or as I said upon launching the first-ever Photoshop public beta, all those years ago:
“Be bold, and mighty forces will come to your aid.” – Goethe
Pillow fight NYC!
Fernando Livschitz, whose amazing work I’ve featured many times over the years, is back with some delightfully pillowy interactions in & over the Big Apple:
Big GoT
So is the expanded Midwest the Midwesteros? 🙂 Whatever the case, enjoy this little mashup before House Killjoy lawyers go full loot train on it.
Guillermo del Toro on AI
Oof. But of course he’s right that a tool is just a tool, not a provider of meaning & value unto itself.
GDT says it all here. pic.twitter.com/pK5WPtDY7l
— Todd Vaziri (@tvaziri) September 17, 2024
“Jurassic Park – 1950’s Super Panavision 70”
Chaos reigns!
I have no idea what AI and other tools were used here, but it’d be fun to get a peek behind the curtain. As a commenter notes,
The meandering strings in the soundtrack. The hard studio lighting of the close-ups. The midtone-heavy Technicolor grading. The macro-lens DOF for animation sequences. This is spot-on 50’s film aesthetic, bravo.
[Via Andy Russell]
Flux goes realtime with Krea
And if that headline makes no sense, it probably just means you’re not terminally AI-pilled, and I’m caught flipping a grunt. 😉 Anyway, the tiny but mighty crew at Krea have brought the new Flux text-to-image model—including its ability to spell—to their realtime creation tool:
Flux now in Realtime.
available in Krea with hundreds of styles included.
free for everyone. pic.twitter.com/4gmMOmcUvg
— KREA AI (@krea_ai) September 12, 2024