Here’s just a beautiful little bit of filmmaking (starting at 2:06, in case the link below fails to cue up the right spot). Let’s go Stratolaunch!
Quick tutorial: Runway References
Identity preservation FTW—though I’ve yet to test this feature with my own face & will reserve judgement a bit until I’ve done so:
[Via Jan Kabili]
Krea introduces “GPT Paint”
Continuing their excellent work to offer more artistic control over image creation, the fast-moving crew at Krea has introduced GPT Paint—essentially a simple canvas for composing image references to guide the generative process. You can sketch directly and/or position reference images, then combine that input with prompts & style references to fine-tune compositions:
introducing GPT Paint.
now you can prompt ChatGPT visually through edit marks, basic shapes, notes, and reference images.
available now on Krea Image. pic.twitter.com/oHiPIedUNz
— KREA AI (@krea_ai) May 1, 2025
Historically, approaches like this have sounded great but—at least in my experience—have fallen short.
Think about what you’d get from just saying “draw a photorealistic beautiful red Ferrari” vs. feeding in a crude sketch + the same prompt.
In my quick tests here, however, providing a simple reference sketch seems helpful—maybe because GPT-4o is smart enough to say, “Okay, make a duck with this rough pose/position—but don’t worry about exactly matching the finger-painted brushstrokes.” The increased sense of intentionality & creative ownership feels very cool. Here’s a quick test:

I’m not quite sure where the spooky skull and, um, lightning-infused martini came from. 🙂

Japanese Safety Signage Supercut
Wild devotion to capture, organization, and alignment:
Bedazzle My MiG
The sheer insanity of this undertaking… At a glance it seems like a product of AI, but evidently it’s entirely real. I’m here for it!
New USAF Thunderbirds documentary looks amazing
Having really enjoyed shooting the Thunderbirds over the years, I’m eager to check this out:
From a recent show we saw in Salinas:
Runway adds References
This looks amazing for character consistency! See thread for more examples.
GPT-4o infographics: Faraway, so close!
Things are night-and-day better than they were just a month ago (in the dark DALL•E days), but would you like your owl with FEAFERS?
Oh, ChatGPT, you are *almost* good at infographics… But what’s with the EATON and FEAFERS? pic.twitter.com/K7vDRjRsdP
— John Nack (@jnack) April 16, 2025
“When Identity Preservation Goes Wrong”
Hah hah oh nooooo… Enjoy some creepy & unintended fun from GPT-4o:
ChatGPT prompted 74 times
“Create the exact replica of this image, do not change a thing”
This is why I say you need to start a new chat after each edit pic.twitter.com/LTFjQebA5e
— A.I.Warper (@AIWarper) April 28, 2025
The explosive titles of “Your Friends & Neighbors”
As Motionographer aptly puts it,
Director John Likens and FX Supervisor Tomas Slancik dissect existential collapse in Your Friends & Neighbors’ haunting opener, blending Jon Hamm’s live-action gravitas with a symphony of digital decay. […]
Shot across two days and polished by world-class VFX artists, the title sequence mirrors Hamm’s crumbling protagonist, juxtaposing his stoic performance against hyper-detailed destruction.
Daft Bricks
Heh—this proposed Lego set would be 1000% up my alley. Click/swipe through the gallery to see fun animations:
GPT-4o image creation is coming to Designer!
Having created 200+ images in just the last month via this still-new image model (see new blog category that gathers some of them), I’m delighted to say that my team is working to bring it to Microsoft Designer, Copilot, and beyond. From the boss himself:
5/ Create: This one is fun. Turn a PowerPoint into an explainer video, or generate an image from a prompt in Copilot with just a few clicks.
We’ve also added new features to make Copilot even more personalized to you, plus a redesigned app built for human-agent collaboration. pic.twitter.com/m1oTf53aai
— Satya Nadella (@satyanadella) April 23, 2025
Fun recent GPT-4o explorations
Just sharing a few things I’ve been trying.
For Easter, my cousin’s sweet pup as sweet treats:
Closing out Easter by turning my cousin’s dog into peeps, Paas, Jelly Belly, and more. pic.twitter.com/OnJhjiM6Z7
— John Nack (@jnack) April 21, 2025
Bespoke felt ornaments FTW:
Getting an early start on Christmas, visualizing friends’ cars as felt ornaments (GPT-4o + @higgsfield_ai): pic.twitter.com/FkAjp8FWdz
— John Nack (@jnack) April 21, 2025
Creating cozy slippers from an A-10 Warthog:
“You’re now the proud owner of the most dangerously cozy footwear in the sky. Plush, cartoon A-10 Warthogs with big doe eyes and turbine engines ready to warm your toes and deliver cuddly close air support. Let me know if you want tiny GAU-8 Gatling gun detailing on the front.”… pic.twitter.com/lKLRJGALaw
— John Nack (@jnack) April 18, 2025
StarVector: Text/Image->SVG Code
Back at Adobe we introduced Firefly text-to-vector creation, but behind the scenes it was really text-to-image-to-tracing. That could be fine, actually, provided that the conversion process did some smart things around segmenting the image, moving objects onto their own layers, filling holes, and then harmoniously vectorizing the results. I’m not sure whether Adobe actually got around to shipping that support.
In any event, StarVector promises actual, direct creation of SVG. The results look simple enough that I haven’t yet been moved to spend time with it, but I’m glad that folks are trying.
StarVector official app is out on Hugging Face
Generating Scalable Vector Graphics Code from Images and Text pic.twitter.com/4nIr0eHJzG
— AK (@_akhaliq) March 24, 2025
“You eat what you are”
There’s a way-higher-than-zero chance that you won’t want to check out this AI rendering; just sayin’. 🙂
AI is getting out of hand pic.twitter.com/4nLya89kyY
— Charly Wargnier (@DataChaz) April 7, 2025
Rive introduces Vector feathering
I really hope that the makers of traditional vector-editing apps are paying attention to rich, modern, GPU-friendly techniques like this one. (If not—and I somewhat cynically expect they’re not—it won’t be for my lack of trying to put it onto their radar. ¯\_(ツ)_/¯)
Introducing Vector Feathering — a new way to create vector glow and shadow effects. Vector Feathering is a technique we invented at Rive that can soften the edge of vector paths without the typical performance impact of traditional blur effects. (Audio on) pic.twitter.com/39kfjmFsTJ
— Rive (@rive_app) February 11, 2025
Vibe-animating with Magic Animator?
I know only what you see below, but Magic Animator (how was that domain name available?) promises to “Animate your designs in seconds with AI,” which sounds right up my alley, and I’ve signed up for their waitlist.
Figma designs, animated with AI
Magic Animator by @LottielabHQ coming soon pic.twitter.com/qvxxIggT0J
— Daryl Patigas (@darel023) April 14, 2025
Another set of designs animated by AI
Magic Animator waitlist is now open → https://t.co/Gc2N18Kk6X ✨
Breakdown in thread https://t.co/D5pq3u9xTs pic.twitter.com/suwceDd4ED
— Daryl Patigas (@darel023) April 17, 2025
That Happy Meal feel
Sure, the environmental impact of this silliness isn’t great, but it’s probably still healthier than actually eating McDonald’s. :-p
Having a ball turning my family into Happy Meal figures. (See prompt in quoted post from @firatbilal) https://t.co/9Az1Om6BBe pic.twitter.com/bB55ebBQ3r
— John Nack (@jnack) April 17, 2025
Tangentially, I continue to have way too much fun applying different genres to amigos:
My love language is turning friends’ family photos into GPT-4o-powered illustrations. pic.twitter.com/5qYzmhtxFq
— John Nack (@jnack) April 8, 2025
AI Logofluff
ChatGPT has famous marks marching to fuzz:
How about using the same prompt to create fluffy logos? https://t.co/SIVCbhmJ1x pic.twitter.com/m4wyF7zADM
— Gizem Akdag (@gizakdag) April 13, 2025
And now Microsoft Designer has me feeling truly warm & fuzzy:

Google, Dolphins, and Ai-i-i-i-i!
Three years ago (seems like an eternity), I remarked regarding generative imaging,
The disruption always makes me think of The Onion’s classic “Dolphins Evolve Opposable Thumbs“: “Holy f*ck, that’s it for us monkeys.” My new friend August replied with the armed dolphin below.

I’m reminded of this seeing Google’s latest AI-powered translation (?!) work. Just don’t tell them about abacuses!
Meet DolphinGemma, an AI helping us dive deeper into the world of dolphin communication. pic.twitter.com/2wYiSSXMnn
— Google DeepMind (@GoogleDeepMind) April 14, 2025
[Via Rick McCawley]
How MCP is like electricity & the REA
Wait, first, WTF is MCP? Check out my old friend (and former Illustrator PM) Mordy’s quick & approachable breakdown of Model Context Protocol and why it promises to be interesting to us (e.g. connecting Claude to the images on one’s hard drive).
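If you’re curious what this looks like in practice, here’s a minimal sketch of an MCP server: my own toy illustration (not from Mordy’s piece), assuming the official MCP Python SDK’s FastMCP helper. The “local-images” server name, folder path, and tool are hypothetical, but the shape is the idea: a tiny local service that a client like Claude Desktop can plug into.

```python
# Toy sketch of an MCP server, assuming the official MCP Python SDK's FastMCP helper.
# The "local-images" name, folder path, and tool are hypothetical illustrations.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-images")

@mcp.tool()
def list_images(folder: str = "~/Pictures") -> list[str]:
    """List the image files in a local folder so an MCP client (e.g. Claude) can reference them."""
    root = Path(folder).expanduser()
    exts = {".jpg", ".jpeg", ".png", ".webp"}
    return sorted(str(p) for p in root.iterdir() if p.suffix.lower() in exts)

if __name__ == "__main__":
    mcp.run()  # serves over stdio so a host app such as Claude Desktop can connect
```

That’s the electricity/REA point in miniature: once there’s a standard plug, any host app can connect to any such server without bespoke wiring.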
Don’t mind me, just turning the dog into a guy, because why not?
The better question may be, what are you waiting for? 😉 Let’s roll, ChatGPT:
Fun! Here’s my guy Seamus as a sleepy dude. pic.twitter.com/JZ7XOT7Kws
— John Nack (@jnack) April 9, 2025
Comma comedian
Elle Cordova, doing her inimitable thing; wait for the ellipsis… 🙂
Draw 3D-styled characters with Google Gemini
Check out this fun toy:
[1/8] Drawing → 3D render with Gemini 2.0 image generation… by @dev_valladares + me
Make your own at the link below https://t.co/sy8poJZYuQ pic.twitter.com/DkGT6DRUsb
— Trudy Painter (@trudypainter) April 4, 2025
Apparently I’m over my quota, so sadly the world will never get to see a Ghiblified rendering of my crudely drawn goldendoodle!

Friday’s Microsoft Copilot event in 9 minutes
The team showed off good new stuff, including—OMG—a demo of how to use Photoshop! (On an extremely personal level, “This is what it’s like when worlds colliiiide!!”)
As it marks its 50th anniversary, Microsoft is updating Copilot with a host of new features that bring it in line with other AI systems like ChatGPT or Claude. We got a look at them during the tech giant’s 50th anniversary event today, including new search capabilities and Copilot Vision, which will be able to analyze real-time video from a mobile camera. Copilot will also now be able to use the web on your behalf. Here’s everything you missed.
Lego Interstellar is stunning
Wow—I marvel at the insane, loving attention to detail in this shot-for-shot re-creation of a scene from Interstellar:
The creator writes,
I started working on this project in mid-2023, and it has been an incredible challenge, from modeling and animation to rendering and post-production. Every detail, from the explosive detachment of the Ranger to the rotation of the Endurance, the space suits of the minifigures, the insides of the Lander, and even the planet in the background, was carefully recreated in 3D.
Interstellar is one of the films that has moved me the most. Its story, visuals, and soundtrack left a lasting impact on me, and this video is my personal love letter to the movie—and to cinema in general. I wanted to capture the intensity and emotion of this scene in LEGO form, as a tribute to the power of storytelling through film.
Side by side:
Rustlin’ up some Russells
2025 marks an unheard-of 40th year in Adobe creative director Russell Brown’s remarkable tenure at the company. I remember first encountering him via the Out Of Office message marking his 15-year (!) sabbatical (off to Burning Man with Rick Smolan, if I recall correctly). If it weren’t for Russell’s last-minute intervention back in 2002, when I was living out my last hours before being laid off from Adobe (interviewing at Microsoft, lol), I’d never have had the career I did, and you wouldn’t be reading this now.
In any event, early in the pandemic Russell kept himself busy & entertained by taking a wild series of self portraits. Having done some 3D printing with him (the output of which still forms my Twitter avatar!), I thought, “Hmm, what would those personas look like as plastic action figures? Let’s see what ChatGPT thinks.” And voila, here they are.
Click through the tweet below if you’re curious about the making-of process (e.g. the app starting to render him very faithfully, then freaking out midway through & insisting on delivering a more stylized, less specific rendition). But forget that—how insane is it that any of this is possible??
Can you show ChatGPT 5 portraits of legendary Adobe creative director Russell Brown and get a whole set of action figures? Yep! pic.twitter.com/gLTIcGqLJ0
— John Nack (@jnack) April 4, 2025

“The Worlds of Riley Harper”
It’s pretty stunning what a single creator can now create in a matter of days! Check out this sequence & accompanying explanation (click on the post) from Martin Gent:
I tried to make this title sequence six months ago, but the AI tools just weren’t up to it. Today it’s a different story. Sound on!
Since the launch of ChatGPT’s 4o image generator last week, I’ve been testing a new workflow to bring my characters – Riley Harper and her dog,… pic.twitter.com/SMgjDnJWH1
— Martin Gent (@martgent) April 3, 2025
Tools used:
- @OpenAI‘s ChatGPT 4o (Images)
- @hedra_labs (Lipsync)
- @elevenlabsio (Voice & Sound Effects)
- @SunoMusic (Music & Lyrics – made with v3 six months ago)
- @Kling_ai (Animation)
- @higgsfield_ai (Animation)
- @ideogram_ai (Title Lockup)
- @topazlabs (Upscaling)
Severance, through the animated lens of ChatGPT
People can talk all the smack they want about “AI slop”—and to be sure, there’s tons of soulless slop going around—but good luck convincing me that there’s no creativity in remixing visual idioms, and in reskinning the world in never-before-possible ways. We’re just now dipping a toe into this new ocean.
ChatGPT 4o’s new image gen is insane. Here’s what Severance would look like in 8 famous animation styles
1/8:
Rankin/Bass – That nostalgic stop-motion look like Rudolph the Red-Nosed Reindeer. Cozy and janky. pic.twitter.com/5rFL8SGttS
— Bennett Waisbren (@BennettWaisbren) March 27, 2025
See the whole thread for a range of fun examples:
4/8:
Pixar – Clean, subtle facial animation, warm lighting, and impeccable shot composition. pic.twitter.com/FNWgPccHcI
— Bennett Waisbren (@BennettWaisbren) March 27, 2025
OMG AI KFC
It’s insane what a single creator—in this case David Blagojević—can do with AI tools; insane.
I’m blown away!
This KFC concept ad is 100% AI generated!
My friend David Blagojevic (he’s not on X) created this ad concept for KFC and it’s incredible!
Tools used: Runway, Pika, Kling AI, Google DeepMind Veo2, Luma AI, OpenAI Sora, upscaled with Topaz Labs and music… pic.twitter.com/u9ics8M51x
— Salma (@Salmaaboukarr) March 31, 2025
It’s worth noting that creative synthesis like this doesn’t “just happen,” much less in some way that replaces or devalues the human perspective & taste at the heart of the process: everything still hinges on having an artistic eye, a wealth of long-cultivated taste, and the willpower to make one’s vision real. It’s just that the distance between that vision & reality is now radically shorter than it’s ever been.
New generative video hotness: Runway + Higgsfield
It’s funny to think of anyone & anything as being an “O.G.” in the generative space—but having been around for the last several years, Runway has as solid a claim as anyone. They’ve just dropped their Gen-4 model. Check out some amazing examples of character consistency & camera control:
Today we’re introducing Gen-4, our new series of state-of-the-art AI models for media generation and world consistency. Gen-4 is a significant step forward for fidelity, dynamic motion and controllability in generative media.
Gen-4 Image-to-Video is rolling out today to all paid… pic.twitter.com/VKnY5pWC8X
— Runway (@runwayml) March 31, 2025
Here’s just one of what I imagine will be a million impressive uses of the tech:
First test with @runwayml‘s Gen-4 early access!
First impressions: I am very impressed! 10 second generations, and this is the only model that could do falling backwards off a cliff. Love it! pic.twitter.com/GZS1B7Wpq0
— Christopher Fryant (@cfryant) March 31, 2025
Meanwhile Higgsfield (of which I hadn’t heard before now) promises “AI video with swagger.” (Note: reel contains occasionally gory edgelord imagery.)
Now, AI video doesn’t have to feel lifeless.
This is Higgsfield AI: cinematic shots with bullet time, super dollies and robo arms — all from a single image.
It’s AI video with swagger.
Built for creators who move culture, not just pixels. pic.twitter.com/dJdQ978Jqd
— Higgsfield AI (@higgsfield_ai) March 31, 2025
Fun with empowering existential dread
It’s so good, it’s bad! 😀
Currently asking ChatGPT for faux-German words like
Überintelligenzchatbotrichtigkeitsahnungsscham: “The bizarre cocktail of joy, panic, and existential dread a product manager experiences when an AI answers a tough product question better than they could.” pic.twitter.com/yVZVdoKZi9
— John Nack (@jnack) March 30, 2025
Virtual product photography in ChatGPT
Seeing this, I truly hope that Adobe isn’t as missing in action as they seem to be; fingers crossed.
In the meantime, simply uploading a pair of images & a simple prompt is more than enough to get some compelling results. See subsequent posts in the thread for details, including notes on some shortcomings I observed.
A quick test of ChatGPT virtual product photography, combining real shoes with a quick render from @krea_ai/@bfl_ml Flux:
“Please put these shoes into the image of the basketball court, held aloft in the foreground by a man’s hand.” pic.twitter.com/k1AhTdHFcs
— John Nack (@jnack) March 28, 2025
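For anyone who’d rather script this than drag files into ChatGPT: OpenAI has since exposed the same image model via its API, and as I understand it the gpt-image-1 edits endpoint accepts multiple input images plus a prompt. Here’s a rough sketch along those lines; the filenames are placeholders, not my actual test assets.

```python
# Rough sketch of the same kind of composite via the API. Assumes the gpt-image-1 model
# on the Images "edit" endpoint, which (as I understand it) accepts a list of input images.
# Filenames are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="gpt-image-1",
    image=[open("shoes.png", "rb"), open("basketball_court.png", "rb")],
    prompt=(
        "Please put these shoes into the image of the basketball court, "
        "held aloft in the foreground by a man's hand."
    ),
)

# gpt-image-1 returns base64-encoded image data
with open("composite.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```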
See also (one of a million tests being done in parallel, I’m sure):
Still experimenting with chatgpt4o
prompt: “model wearing cap provided”
not bad pic.twitter.com/FObSXeyxOS
— Salma (@Salmaaboukarr) March 26, 2025
Depression-era Ghibli
We’re speed-running our way through the novelty->saturation->nausea cycle of Studio Ghibli-style meme creation, but I find this idea fresher: turn Ghibli characters into Dorothea Lange-style photos:
— Sterling Crispin (@sterlingcrispin) March 26, 2025
Ideogram 3.0 is here
In the first three workdays of this week, we saw three new text-to-image models arrive! And now that it’s Thursday, I’m like, “WTF, no new Flux/Runway/etc.?” 🙂
For the last half-year or so, Ideogram has been my go-to model (see some of my more interesting creations), so I’m naturally delighted to see them moving things forward with the new 3.0 model:
I don’t yet quite understand the details of how their style-reference feature will work, but I’m excited to dig in.
Meanwhile, here’s a thread of some really impressive initial creations from the community:
We launched Ideogram 3.0 just three hours ago, and we’ve already seen an incredible wave of striking images. Here are 16 of our favorites so far:
1/ @krampus76 pic.twitter.com/tbwfMfkvg5
— Ideogram (@ideogram_ai) March 26, 2025
LegoGPT
The family that bricks together, sticks together? 🙂
Worked great on this group shot as well—with the exception of disappearing one cousin! pic.twitter.com/pZzXfPurv3
— John Nack (@jnack) March 26, 2025
ChatGPT reimagines family photos
“Dress Your Family in Corduroy and Denim” — David Sedaris
“Turn your fam into Minecraft & GTA” — Bilawal Sidhu
Entire ComfyUI workflows just became a text prompt.
Open an image in GPT-4o and type “turn us into Roblox / GTA-3 /Minecraft / Studio Ghibli characters” pic.twitter.com/rCXclZklq5
— Bilawal Sidhu (@bilawalsidhu) March 26, 2025
And meanwhile, on the server side:
ChatGPT when another Studio Ghibli request comes in pic.twitter.com/NF5sy24GlU
— Justine Moore (@venturetwins) March 26, 2025
RIP to ZIP
Oh man, this vid from Aaron Draplin—stalwart hoarder of obsolete removable media—gave me all the feels, and if you’re a creative of a certain age, it might give them to you, too:
Google’s “Photoshop Killer”?
Nearly twenty years ago (!), I wrote here about how The Killing’s Gotta Stop—ironically, perhaps, about then-new Microsoft apps competing with Adobe. I rejected false, zero-sum framing then, and I reject it now.
Having said that, my buddy Bilawal’s provocative framing in this video gets at something important: if Adobe doesn’t get on its game, actually delivering the conversational editing capabilities we publicly previewed 2+ years ago, things are gonna get bad. I’m reminded of the axiom that “AI will not replace you, but someone using AI just might.” The same goes for venerable old Photoshop competing against AI-infused & AI-first tools.
In any case, if you’re interested in the current state of the art around conversational editing (due to be different within weeks, of course!), I think you’ll enjoy this deep dive into what is—and isn’t—possible via Gemini:
Specific topic sections, if you want to jump right to ’em:
- 00:00 Conversational Editing with Google’s Multimodal AI
- 00:53 Image Generation w/ LLM World Knowledge
- 02:12 Easy Image Editing & Colorization
- 02:46 Advanced Conversational Edits (Chaining Prompts Together)
- 03:37 Long Text Generation (Google Beats OpenAI To The Punch)
- 04:25 Making Spicy Memes (Google AI Studio Safety Settings)
- 05:48 Advanced Prompting (One Shot ComfyUI Workflows)
- 07:19 Re-posing Characters (While Keeping Likeness Intact)
- 08:27 Spatial 3D Understanding (NO ControlNet)
- 10:42 Semantic Editing & In/Out Painting
- 13:46 Sprite Sheets & Animation Keyframes
- 14:40 Using Gemini To Build Image Editing Apps
- 16:37 Making Videos w/ Conversational Editing
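And if you’d rather poke at this from code than from AI Studio, here’s roughly what a single conversational edit looks like via Google’s google-genai Python SDK. Treat it as a sketch: the model name below is the experimental image-output Flash model as of this writing, the prompt and filenames are placeholders, and all of it is liable to change quickly.

```python
# Sketch of a single conversational edit via the google-genai SDK. The model name is the
# experimental image-output Flash model available as of this writing; the prompt and
# filenames are placeholders. Expect details to shift as Gemini evolves.
from io import BytesIO
from PIL import Image
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

photo = Image.open("family_photo.jpg")
response = client.models.generate_content(
    model="gemini-2.0-flash-exp-image-generation",
    contents=["Colorize this photo and add gently falling snow", photo],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# The response interleaves text and image parts; save any returned image(s).
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save(f"edited_{i}.png")
```

To chain edits as in the “Advanced Conversational Edits” chapter above, you’d keep the conversation going and send follow-up instructions in subsequent turns.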
Happy birthday, Adobe Firefly
The old (hah! but it seems that way) gal turns two today.
The ride has been… interesting, hasn’t it? I remain eager to see what all the smart folks at Adobe have been cooking up. As a user of Photoshop et al. for the last 30+ years, I selfishly hope it’s great!
Welcome to the world, #AdobeFirefly! https://t.co/R92lBktZIQ
We have great stuff you can try out right now, plus so much brewing in the lab. Here’s a quick preview: pic.twitter.com/hIaW9EpMor
— John Nack (@jnack) March 21, 2023
In the meantime, I’ll admit that watching the video above—which I wrote & then made with the help of Davis Brown (son of Russell)—makes me kinda blue. Everything it depicts was based on real code we had working at the time. (I insisted that we not show anything that we didn’t think we could have shipping within three months’ time.) How much of that has ever gotten into users’ hands?
Yeah.
But as I say, I’m hoping and rooting for the best. My loyalty has never been to Adobe or to any other made-up entity, but rather to the spirit & practice of human creativity. Always will be, until they drag me off this rock. Rock the F on.
Adobe to offer access to non-Firefly models
Man, I’m old enough to remember writing a doc called “Yes, And…” immediately upon the launch of DALL•E in 2022, arguing that of course Adobe should develop its own generative models and of course it should also offer customers a choice of great third-party models—because of course no single model would be the best for every user in every situation.
And I’m old enough to remember being derided for just not Getting It™ about how selling per-use access to Firefly was going to be a goldmine, so of course we wouldn’t offer users a choice. ¯\_(ツ)_/¯
Oh well. Here we are, exactly two years after the launch of Firefly, and Adobe is going to offer access to third-party models. So… yay!
Even more news today! We are expanding our footprint in the @Adobe ecosystem to offer more choice to their creators pic.twitter.com/A4tHRkb25h
— Black Forest Labs (@bfl_ml) March 19, 2025
Inconvenient versions of everyday objects
Heh—let’s get funcomfortable with Katerina Kamprani:
Roblox lets players create 3D objects simply by describing them
I guarantee you, the TTP for the feature is less than the length of this 45-second promo. :-p
Putting Gemini editing to the test
Here’s a little holiday-appropriate experiment featuring a shot of my dad & me (in Lego form, naturally) at my grandmother’s family farm in County Mayo. Sláinte!
A little St. Paddy’s fun testing Google @GeminiApp‘s conversational editing abilities on Lego pics from Ireland: pic.twitter.com/LPCD0D3igi
— John Nack (@jnack) March 17, 2025
Happy glitter-free St. Pat’s
“When are we gonna start jazzing things down?? St. Patrick’s Day should be shit!” :-p
Wardrobe upgrades courtesy of Gemini
Speaking of reskinning imagery (see last several posts), check out what’s now possible via Google’s Gemini model, below. I’ve been putting it to the test & will share results shortly.
Alright, Google really killed it here.
You can easily swap your garment just by uploading the pieces to Gemini Flash 2.0 and telling it what to do. pic.twitter.com/pNPBkIdRqy
— Halim Alrasihi (@HalimAlrasihi) March 14, 2025
Photoshop gets new background-removal skills
This enhanced capability, which apparently now uses a cloud-hosted model, looks really promising. See before & after:
The Photoshop Beta also has some pretty wild improvements to Remove Background pic.twitter.com/yu7u8ISbMW
— Howard Pinsky (@Pinsky) March 13, 2025
Another example:
https://t.co/VuXQVHMkN1 pic.twitter.com/mcy0nQ3b6m
— Howard Pinsky (@Pinsky) March 14, 2025
Runway reskins rock
Another day, another set of amazing reinterpretations of reality. Take it away Nathan…
3 tests of Runway’s first frame feature. It’s very impressive and temporally coherent. Input is a video and stylized first frame. ✨
First example here is a city aerial to: circuit board, frost, fire, Swiss cheese, Tokyo. #aivideo #VFX pic.twitter.com/Y7HST74uBy
— Nathan Shipley (@CitizenPlain) March 6, 2025
…and Bilawal:
Playing guitar, reskinned with Runway’s restyle feature — pretty epic for digital character replacement.
I’m genuinely impressed by how well the fretting & strumming hands hold up.
Not perfect yet, but pulling this off would basically be impossible with Viggle or even Wonder… pic.twitter.com/UJBS9c8U1a
— Bilawal Sidhu (@bilawalsidhu) March 7, 2025
PikaSwaps nails virtual try-on
This temporally coherent inpainting is utterly bonkers. It’s just the latest—and perhaps the most promising—of the myriad virtual try-on techniques I’ve seen & written about over the years.
This is effortless fashion
Made with @pika_labs Pikaswaps feature pic.twitter.com/BE9LDP8eAR
— Jessie_Ma (@ytjessie_) March 12, 2025
Mystic structure reference: Dracarys!
I love seeing the Magnific team’s continued rapid march in delivering identity-preserving reskinning:
IT’S FINALLY HERE!
Mystic Structure Reference!
Generate any image controlling structural integrity. Infinite use cases! Films, 3D, video games, art, interiors, architecture… From cartoon to real, the opposite, or ANYTHING in between!
Details & 12 tutorials pic.twitter.com/brw4Dx39gz
— Javi Lopez (@javilopen) February 27, 2025
This example makes me wish my boys were, just for a moment, 10 years younger and still up for this kind of father/son play. 🙂
Storyboarding? No clue! But with some toy blocks, my daughter’s wild imagination, and a little help from Magnific Structure Reference, we built a castle attacked by dragons. Her idea coming to life powered up with AI magic.
Just a normal Saturday Morning.
Behold, my daughter’s… pic.twitter.com/52tDZokmIT
— Jesus Plaza (@JesusPlazaX) March 8, 2025
