Director John Likens and FX Supervisor Tomas Slancik dissect existential collapse in Your Friends & Neighbors’ haunting opener, blending Jon Hamm’s live-action gravitas with a symphony of digital decay. […]
Shot across two days and polished by world-class VFX artists, the title sequence mirrors Hamm’s crumbling protagonist, juxtaposing his stoic performance against hyper-detailed destruction.
Having created 200+ images in just the last month via this still-new image model (see the new blog category that gathers some of them), I’m delighted to say that my team is working to bring it to Microsoft Designer, Copilot, and beyond. From the boss himself:
5/ Create: This one is fun. Turn a PowerPoint into an explainer video, or generate an image from a prompt in Copilot with just a few clicks.
We’ve also added new features to make Copilot even more personalized to you, plus a redesigned app built for human-agent collaboration. pic.twitter.com/m1oTf53aai
“You’re now the proud owner of the most dangerously cozy footwear in the sky. Plush, cartoon A-10 Warthogs with big doe eyes and turbine engines ready to warm your toes and deliver cuddly close air support. Let me know if you want tiny GAU-8 Gatling gun detailing on the front.”… pic.twitter.com/lKLRJGALaw
Back at Adobe we introduced Firefly text-to-vector creation, but behind the scenes it was really text-to-image-to-tracing. That could be fine, actually, provided that the conversion process did some smart things around segmenting the image, moving objects onto their own layers, filling holes, and then harmoniously vectorizing the results. I’m not sure whether Adobe actually got around to shipping that support.
In any event, StarVector promises actual, direct creation of SVG. The results look simple enough that it hasn’t yet piqued my interest enough to spend my time with it, but I’m glad that folks are trying.
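To make the distinction concrete, here’s a toy sketch (my own illustration, not Adobe’s or StarVector’s actual code) of why naive image-to-SVG tracing tends to produce bloated output compared to direct SVG generation:

```python
# A deliberately naive "tracer": one <rect> per filled pixel, versus a
# direct generator that could describe the same shape as a single <path>.
# Purely illustrative of the pipeline tradeoff, not any shipping product.

def trace_bitmap_to_svg(bitmap):
    """Emit an SVG with one 1x1 rect per filled cell in a 2D bitmap."""
    rects = [
        f'<rect x="{x}" y="{y}" width="1" height="1"/>'
        for y, row in enumerate(bitmap)
        for x, cell in enumerate(row)
        if cell
    ]
    w, h = len(bitmap[0]), len(bitmap)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 {w} {h}">'
        + "".join(rects)
        + "</svg>"
    )

# A 3x3 plus sign: tracing yields 5 rects where one compact <path> would do.
plus = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
svg = trace_bitmap_to_svg(plus)
print(svg.count("<rect"))  # 5
```

Real tracers (Potrace, Illustrator’s Image Trace) are far smarter than this, but the underlying point stands: tracing recovers geometry from pixels after the fact, while a model that emits SVG directly can work in vector terms from the start.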
I really hope that the makers of traditional vector-editing apps are paying attention to rich, modern, GPU-friendly techniques like this one. (If not—and I somewhat cynically suspect they’re not—it won’t be for my lack of trying to put it onto their radar. ¯\_(ツ)_/¯)
Introducing Vector Feathering — a new way to create vector glow and shadow effects. Vector Feathering is a technique we invented at Rive that can soften the edge of vector paths without the typical performance impact of traditional blur effects. (Audio on) pic.twitter.com/39kfjmFsTJ
I know only what you see below, but Magic Animator (how was that domain name available?) promises to “Animate your designs in seconds with AI,” which sounds right up my alley, and I’ve signed up for their waitlist.
Three years ago (seems like an eternity), I remarked regarding generative imaging:
The disruption always makes me think of The Onion’s classic “Dolphins Evolve Opposable Thumbs”: “Holy f*ck, that’s it for us monkeys.” My new friend August replied with the armed dolphin below.
I’m reminded of this seeing Google’s latest AI-powered translation (?!) work. Just don’t tell them about abacuses!
Meet DolphinGemma, an AI helping us dive deeper into the world of dolphin communication. pic.twitter.com/2wYiSSXMnn
Wait, first, WTF is MCP? Check out my old friend (and former Illustrator PM) Mordy’s quick & approachable breakdown of Model Context Protocol and why it promises to be interesting to us (e.g. connecting Claude to the images on one’s hard drive).
The team showed off good new stuff, including—OMG—how to use Photoshop! (On an extremely personal level, “This is what it’s like when worlds colliiiide!!”)
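For the curious, MCP is built on JSON-RPC 2.0. Here’s a rough sketch, simplified from the public spec (not a working client), of a client asking a server which local resources, say images on disk, it exposes; the file path shown is purely illustrative:

```python
import json

# MCP messages are JSON-RPC 2.0. A client asks a server to enumerate
# the resources it exposes with a "resources/list" request:
request = {"jsonrpc": "2.0", "id": 1, "method": "resources/list"}

# ...and the server answers with URIs the model can then ask to read.
# (Illustrative response shape; real servers return richer metadata.)
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "resources": [
            {
                "uri": "file:///Users/me/Pictures/photo.png",
                "name": "photo.png",
                "mimeType": "image/png",
            }
        ]
    },
}
print(json.dumps(request))
```

That resource-listing handshake is the mechanism behind examples like “connecting Claude to the images on one’s hard drive”: the server advertises what it has, and the model requests what it needs.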
As it marks its 50th anniversary, Microsoft is updating Copilot with a host of new features that bring it in line with other AI systems like ChatGPT or Claude. We got a look at them during the tech giant’s 50th anniversary event today, including new search capabilities and Copilot Vision, which will be able to analyze real-time video from a mobile camera. Copilot will also now be able to use the web on your behalf. Here’s everything you missed.
Wow—I marvel at the insane, loving attention to detail in this shot-for-shot re-creation of a scene from Interstellar:
The creator writes,
I started working on this project in mid-2023, and it has been an incredible challenge, from modeling and animation to rendering and post-production. Every detail, from the explosive detachment of the Ranger to the rotation of the Endurance, the space suits of the minifigures, the insides of the Lander, and even the planet in the background, was carefully recreated in 3D.
Interstellar is one of the films that has moved me the most. Its story, visuals, and soundtrack left a lasting impact on me, and this video is my personal love letter to the movie—and to cinema in general. I wanted to capture the intensity and emotion of this scene in LEGO form, as a tribute to the power of storytelling through film.
2025 marks an unheard-of 40th year in Adobe creative director Russell Brown’s remarkable tenure at the company. I remember first encountering him via the Out Of Office message marking his 15-year (!) sabbatical (off to Burning Man with Rick Smolan, if I recall correctly). If it weren’t for Russell’s last-minute intervention back in 2002, when I was living out my last hours before being laid off from Adobe (interviewing at Microsoft, lol), I’d never have had the career I did, and you wouldn’t be reading this now.
In any event, early in the pandemic Russell kept himself busy & entertained by taking a wild series of self portraits. Having done some 3D printing with him (the output of which still forms my Twitter avatar!), I thought, “Hmm, what would those personas look like as plastic action figures? Let’s see what ChatGPT thinks.” And voila, here they are.
Click through the tweet below if you’re curious about the making-of process (e.g. the app starting to render him very faithfully, then freaking out midway through & insisting on delivering a more stylized, less specific rendition). But forget that—how insane is it that any of this is possible??
Can you show ChatGPT 5 portraits of legendary Adobe creative director Russell Brown and get a whole set of action figures? Yep! pic.twitter.com/gLTIcGqLJ0
It’s pretty stunning what a single creator can now produce in a matter of days! Check out this sequence & accompanying explanation (click on the post) from Martin Gent:
I tried to make this title sequence six months ago, but the AI tools just weren’t up to it. Today it’s a different story. Sound on!
Since the launch of ChatGPT’s 4o image generator last week, I’ve been testing a new workflow to bring my characters – Riley Harper and her dog,… pic.twitter.com/SMgjDnJWH1
People can talk all the smack they want about “AI slop”—and to be sure, there’s tons of soulless slop going around—but good luck convincing me that there’s no creativity in remixing visual idioms, and in reskinning the world in never-before-possible ways. We’re just now dipping a toe into this new ocean.
ChatGPT 4o’s new image gen is insane. Here’s what Severance would look like in 8 famous animation styles
1/8: Rankin/Bass – That nostalgic stop-motion look like Rudolph the Red-Nosed Reindeer. Cozy and janky. pic.twitter.com/5rFL8SGttS
It’s worth noting that creative synthesis like this doesn’t “just happen,” much less in some way that replaces or devalues the human perspective & taste at the heart of the process: everything still hinges on having an artistic eye, a wealth of long-cultivated taste, and the willpower to make one’s vision real. It’s just that the distance between that vision & reality is now radically shorter than it’s ever been.