It’s hard to believe that when I dropped by Google in 2022, arguing vociferously that we work together to put Imagen into Photoshop, they yawned & said, “Can you show up with nine figures?”—and now they’re spending eight figures on a 60-second ad to promote the evolved version of that tech. Funny ol’ world…
A couple of weeks ago I mentioned a cool, simple UI for changing camera angles using the Qwen imaging model. Along related lines, here’s an interface for relighting images:
Qwen-Image-Edit-3D-Lighting-Control app, featuring 8× horizontal and 3× elevational positions for precise 3D multi-angle lighting control. It enables studio-level lighting with fast Qwen Image Edit inference, paired with Multi-Angle-Lighting adapter. Try it now on @huggingface.
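To make that control space concrete, here's a rough sketch (my own, not the app's actual code) of how those 8 horizontal × 3 elevational positions might be enumerated as relighting presets:

```python
# Rough illustration only: enumerating the 8 x 3 grid of lighting directions
# such a control surface exposes, expressed as relighting prompt presets.
AZIMUTHS = ["front", "front-right", "right", "back-right",
            "back", "back-left", "left", "front-left"]   # 8 horizontal positions
ELEVATIONS = ["low", "eye-level", "overhead"]            # 3 elevational positions

presets = [
    f"relight the scene with the key light {elev}, coming from the {az}"
    for az in AZIMUTHS
    for elev in ELEVATIONS
]
print(len(presets))  # 24 discrete lighting positions
```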
AniStudio exists because we believe animation deserves a future that’s faster, more accessible, and truly built for the AI era—not as an add-on, but from the ground up. This isn’t a finished story. It’s the first step of a new one, and we want to build it together with the people who care about animation the most.
This is a subtle but sneakily transformative development, potentially enabling layer-by-layer creation of editable elements:
Awesome! I’ve been asking this of Ideogram & other image creators forever.
Transparency is a *huge* unlock for generative creation & editing in design tools (Photoshop, After Effects, Canva, PPT, and beyond). https://t.co/UGJQVDuet5
This new tech from Meta promises to create geometry from video frames. You can try feeding it up to 16 frames via this demo site—or just check out this quick vid:
Huge drop by Meta: ActionMesh turns any video into an animated 3D mesh.
I’m excited to learn more about GenLit, about which its creators say,
Given a single image and the 5D lighting signal, GenLit creates a video of a moving light source that is inside the scene. It moves around and behind scene objects, producing effects such as shading, cast shadows, specularities, and interreflections with a realism that is hard to obtain with traditional inverse rendering methods.
Video diffusion models have strong implicit representations of 3D shape, material, and lighting, but controlling them with language is cumbersome, and control is critical for artists and animators.
I stumbled across some compelling teaser videos for this product, about which only a bit of info seems to be public:
A Photoshop plugin that brings truly photorealistic, prompt-free relighting into existing workflows. Instead of describing what you want in text, control lighting through visual adjustments. Change direction, intensity, and mood with precision… Modify lighting while preserving the structure and integrity of the original image. No more destructive edits or starting over.
Identity preservation—that is, exactly maintaining the shape & character of faces, products, and other objects—has been the lingering downfall of generative approaches to date, so I’m eager to take this for a spin & see how it compares to other approaches.
This stuff of course looks amazing—but not wholly new. Krea debuted realtime generation more than two years ago, leading to cool integrations with various apps, including Photoshop:
My photoshop is more fun than yours. With a bit of help from Krea ai.
It’s a crazy feeling to see brushstrokes transformed like this in realtime… And the feeling of control is magnitudes better than with text prompts. #ai #art
The interactive paradigm is brilliant, but comparatively low quality has always kept this approach from wide adoption. Compare these high-FPS renders to ChatGPT’s Studio Ghibli moment: the latter could require multiple minutes to produce a single image, but almost no one mentioned its slowness. “Fast is good, but good is better.”
I hope that Krea (and others) are quietly beavering away on a hybrid approach that combines this sort of addictive interactivity with a slower but higher-quality render (think realtime output fed into Nano Banana or similar for a final pass). I’d love to compare the results against unguided renders from the slower models. Perhaps we shall see!
Apple’s new 2D-to-3D tech looks like another great step in creating editable representations of the world that capture not just what a camera sensor saw, but what we humans would experience in real life:
Excited to release our first public AI model web app, powered by Apple’s open-source ML SHARP.
Turn a single image into a navigable 3D Gaussian Splat with depth understanding in seconds.
Almost exactly 19 years ago (!), I blogged about some eye-popping tech that promised interactive control over portrait lighting:
I was of course incredibly eager to get it into Photoshop—but alas, it’d take years to iron out the details. Numerous projects have reached the market (see the whole big category I’ve devoted to them here), and now with “Light Touch,” Adobe is promising even more impressive & intuitive control:
This generative AI tool lets you reshape light sources after capture — turning day to night, adding drama, or adjusting focus and emotion without reshoots. It’s like having total control over the sun and studio lights, all in post.
Check it out:
If nothing else, make sure you see the pumpkin part, which rightfully causes the audience to go nuts. 🙂
Less prompting, more direct physicality: that’s what we need to see in Photoshop & beyond.
As an example, developer apolinario writes, “I’ve built a custom camera control @gradio component for camera control LoRAs for image models. Here’s a demo of @fal’s Qwen-Image-Edit-2511-Multiple-Angles-LoRA using the interactive camera component”:
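For the curious, here's a minimal sketch of what such an interface might look like in plain Gradio. This is my own approximation, not apolinario's custom component, and `run_camera_lora` is a hypothetical stand-in for whatever backend applies the multi-angle LoRA:

```python
import gradio as gr

def run_camera_lora(image, azimuth, elevation):
    # Hypothetical: call your Qwen-Image-Edit + multi-angle LoRA backend here,
    # e.g. with a prompt built from the azimuth/elevation values.
    return image  # placeholder: echo the input until a backend is wired up

with gr.Blocks() as demo:
    with gr.Row():
        inp = gr.Image(label="Source image", type="pil")
        out = gr.Image(label="New angle")
    azimuth = gr.Slider(-180, 180, value=0, step=15, label="Azimuth (degrees)")
    elevation = gr.Slider(-30, 60, value=0, step=15, label="Elevation (degrees)")
    gr.Button("Render").click(run_camera_lora, [inp, azimuth, elevation], out)

demo.launch()
```

The point isn't the widgets themselves, it's the interaction model: direct spatial controls driving the model instead of yet another text box.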
As AI continues to infuse itself more deeply into our world, I feel like I’ll often think of Paul Graham’s observation here:
Paul Graham on why you shouldn’t write with AI:
“In preindustrial times most people’s jobs made them strong. Now if you want to be strong, you work out. So there are still strong people, but only those who choose to be. It will be the same with writing. There will…
I initially mistook this tech for text->layers, but it’s actually image->layers. Having said that, if it works well, it might be functionally similar to direct layer output. I need to take it for a spin!
We’re finally getting layers in AI images.
The new Qwen Image Layered LoRA allows you to decompose any image into layers – which means you can move, resize, or replace an object / background.
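If you want to poke at something like this locally, loading an edit-model LoRA via diffusers is roughly this shape. Treat it as a hedged sketch only: the repo IDs below are placeholders, and the exact pipeline arguments should be checked against the model card.

```python
# Hedged sketch: loading an image-edit LoRA with diffusers.
# Repo IDs are placeholders/assumptions, not verified names.
import torch
from PIL import Image
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit",        # assumption: base edit model in diffusers format
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("your-org/qwen-image-layered-lora")  # hypothetical LoRA repo

result = pipe(
    image=Image.open("photo.png"),
    prompt="decompose this image: isolate the foreground subject from the background",
    num_inference_steps=30,
).images[0]
result.save("layer_candidate.png")
```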
Hey gang—thanks for being part of a wild 2025, and here’s to a creative year ahead. Happy New Year especially from Seamus, Ziggy, and our friendly neighborhood pooch. 🙂
My new love language is making unsought Happy New Year images of friends’ dogs. (HT to @NanoBanana, @ChatGPTapp, and @bfl_ml Flux.)
For the latter, I used Photoshop to remove a couple of artifacts from the initial Scarface-to-puppy Nano Banana generation, and to resize the image to fit onto a canvas—but geez, there’s almost no world where I’d now think to start in PS, as I would’ve for the last three decades.
Back in 2002, just after Photoshop godfather Mark Hamburg left the project in order to start what became Lightroom, he talked about how listening too closely to existing customers could backfire: they’ll always give you an endless list of nerdy feature requests, but in addressing those, you’ll get sucked up the complexity curve & end up focusing on increasingly niche value.
Meanwhile disruptive competitors will simply discard “must-have” features (in the case of Lightroom, layers), as those had often proved to be irreducibly complex. iOS did this to macOS not by making the file system easier to navigate, but by simply omitting normal file system access—and only later grudgingly allowing some of it.
Steve Jobs famously talked about personal computers vs. mobile devices in terms of cars vs. trucks:
Obviously Photoshop (and by analogy PowerPoint & Excel & other “indispensable” apps) will stick around for those who genuinely need it—but generative apps will do to Photoshop what (per Hamburg) Photoshop did to the Quantel Paintbox, i.e. shove it up into the tip of the complexity/usage pyramid.
Adobe will continue to gamely resist this by trying to make PS easier to use, which is fine (except of course where clumsy new affordances get in pros’ way, necessitating a whole new “quiet mode” just to STFU!). And—more excitingly to guys like me—they’ll keep incorporating genuinely transformative new AI tech, from image transformation to interactive lighting control & more.
Still, everyone sees what’s unfolding, and “You cannot stop it, you can only hope to contain it.” Where we’re going, we won’t need roads.
“Please create a funny infographic showing a cutaway diagram for the world’s most dangerous hospital cuisine: chicken pot pie. It should show an illustration of me (attached) gazing in fear…”
This seems like the kind of specific, repeatable workflow that’ll scale & create a lot of real-world value (for home owners, contractors, decorators, paint companies, and more). In this thread Justine Moore talks about how to do it (before, y’know, someone utterly streamlines it ~3 min from now!):
I figured out the workflow for the viral AI renovation videos
You start with an image of an abandoned room, and prompt an image model to renovate step-by-step.
Then use a video model for transitions between each frame.
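In rough code, the workflow Justine describes looks something like this. The two helpers are hypothetical stand-ins for whichever image-edit and video models you actually wire up:

```python
# Rough sketch of the renovation-video workflow: edit the room step by step,
# then bridge consecutive frames with a video model.

RENOVATION_STEPS = [
    "clear out the debris and sweep the floors",
    "repair the walls and repaint them warm white",
    "install new flooring and light fixtures",
    "furnish the room in a cozy modern style",
]

def edit_image(image_path: str, prompt: str) -> str:
    """Hypothetical: send the image + prompt to an image-edit model; return new image path."""
    ...

def interpolate_clip(frame_a: str, frame_b: str) -> str:
    """Hypothetical: ask a video model for a smooth transition between two frames."""
    ...

frames = ["abandoned_room.png"]
for step in RENOVATION_STEPS:
    frames.append(edit_image(frames[-1], f"Renovate this room: {step}. Keep the camera angle fixed."))

clips = [interpolate_clip(a, b) for a, b in zip(frames, frames[1:])]
# Finally, concatenate the clips into one video with your editor of choice (e.g. ffmpeg).
```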
Well, after years and years of trying to make it happen, Google has now shipped the ability to upload a selfie & see yourself in a variety of outfits. You can try it here.
U.S. shoppers, say goodbye to bad dressing room lighting. You can now use Nano Banana (our Gemini 2.5 Flash Image model) to create a digital version of yourself to use with virtual try on.
As I’m fond of noting, the only thing more incredible than witchcraft like this is just how little notice people now take of it. ¯\_(ツ)_/¯ But Imma keep noticing!
Two years ago (i.e. an AI eternity, obvs), I was duly impressed when, walking around a model train show with my son, DALL•E was able to create art kinda-sorta in the style of vintage boxes we beheld:
Seeing a vintage model train display, I asked it to create a logo in that style. It started poorly, then got good.
I still think that’s amazing—and it is!—but check out how far we’ve come. At a similar gathering yesterday, I took the photo below…
…and then uploaded it to Gemini with the following prompt: “Please create a stack of vintage toy car boxes using the style shown in the attached picture. The cars should be a silver 1990 Mazda Miata, a red 2003 Volkswagen Eurovan, a blue 2024 Volvo XC90, and a gray 2023 BMW 330.” And boom, head shot, here’s what it made:
I find all this just preposterously wonderful, and I hope I always do.
As Einstein is said to have remarked, “There are only two ways to live your life: one is as though nothing is a miracle, the other is as though everything is.”
Me: “What is the most ridiculous question I asked this year?” Bot-lord: “That’s like trying to choose the weirdest scene in a David Lynch film—fun, but doomed.”
Jesús Ramirez has forgotten, as the saying goes, more about Photoshop than most people will ever know. So, encountering some hilarious & annoying Remove Tool fails…
.@Photoshop AI fail: trying to remove my sons’ heads (to enable further compositing), I get back… whatever the F these are.
This season my alma mater has been rolling out sport-specific versions of the classic leprechaun logo, and when the new basketball version dropped today, I decided to have a little fun seeing how well Nano Banana could riff on the theme.
My quick take: It’s pretty great, though applying sequential turns may cause the style to drift farther from the original (more testing needed).
Interesting—if not wholly unexpected—finding: People dig what generative systems create, but only if they don’t know how the pixel-sausage was made. ¯\_(ツ)_/¯
AI-created visual ads got 20% more clicks than ads created by human experts as part of their jobs… unless people knew the ads were AI-created, in which case click-throughs dropped to 31% below those of human-made ads.
Being crazy-superstitious when it comes to college football, I must always repay Notre Dame for every score by doing a number of push-ups equivalent to the current point total.
In a normal game, determining the cumulative number of reps is pretty easy (e.g. 7 + 14 + 21), but when the team is able to pour it on, the math—and the burn—get challenging. So, I used Gemini the other day to whip up this little counter app, which it did in one shot! Days of Miracles & Wonder, Vol. ∞.
Introduced my son to vibe coding with @GeminiApp by whipping up a push-up counter for @NDFootball. (RIP my pecs!) #GoIrish
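The core logic is tiny, by the way; here's a sketch (my reconstruction, not the actual Gemini-built app). After each score you owe push-ups equal to the current point total, so the session total is just the sum of the running scores:

```python
# Sketch of the counter logic: the team's point total after each score
# goes into score_history, and total push-ups owed is the sum.
def pushups_owed(score_history: list[int]) -> int:
    return sum(score_history)

print(pushups_owed([7, 14, 21]))  # 42 push-ups across a 21-point game
```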
I can’t think of a more burn-worthy app than Concur (whose “value prop” to enterprises, I swear, includes the amount they’ll save when employees give up rather than actually get reimbursed).
That’s awesome!
Given my inability to get even a single expense reimbursed at Microsoft, plus similar struggles at Adobe, I hope you won’t mind if I get a little Daenerys-style catharsis on Concur (via @GeminiApp, natch).
The ever thoughtful Blaise Agüera y Arcas (CTO of Technology & Society at Google) recently sat down for a conversation with the similarly deep-thinking Dan Faggella. I love that I was able to get Gemini to render a high-level view of the talk:
Creating clean vectors has proven to be an elusive goal. Firefly in Illustrator still (to my knowledge) just generates bitmaps which then get vectorized. Therefore this tweet caught my attention:
Free-form SVG generation has always been an incredibly hard problem – a challenge I’ve worked on for two years. But with #Gemini3, everything has changed! Now, everyone is a designer.
In my very limited testing so far, however, results have been, well, impressionistic. 🙂
Here’s a direct comparison of my friend Kevin’s image (which I received as an image) vectorized via Image Trace (way more points than I’d like, but generally high fidelity), vs. the same one converted to SVG via Gemini (clean code/lines, but large deviation from the source drawing):
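For reference, the Gemini side of that comparison takes only a few lines with the google-genai SDK. This is a hedged sketch: the model name is an assumption, and you may need to strip markdown fences from the response before saving.

```python
# Hedged sketch: asking Gemini to convert a raster drawing into SVG markup.
# The model name is an assumption; check the currently available Gemini models.
from google import genai
from PIL import Image

client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment
response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumption
    contents=[
        Image.open("kevins_drawing.png"),
        "Convert this drawing into clean, minimal SVG markup. "
        "Return only the <svg>...</svg> code, with no explanation.",
    ],
)
with open("kevins_drawing.svg", "w") as f:
    f.write(response.text)
```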
But hey, give it time. For now I love seeing the progress!
My buddy Bilawal recently sat down with Canva cofounder & Chief Product Officer Cameron Adams for an informative conversation. These points, among others, caught my attention:
“Canva is a goal-achievement machine.” That is, users approach it with particular outcomes in mind (e.g. land your first customer, get your first investment), and the feature development team works back from those goals. As the old saying goes, “People don’t want a quarter-inch drill, they want a quarter-inch hole”—i.e. a specific outcome.
They seek to reduce the gap between idea & outcome. This reminded me of the first Adobe promo I saw more than 30 years ago: “Imagine what you can create. Create what you can imagine.”
Measuring the achievement of goals is critical. That includes gathering insights from audience response.
They’re pursuing a three-tiered AI strategy: homegrown foundational models that they need to own (based on deep insight into user behavior); partnerships with state-of-the-art models (e.g. GPT, Veo); and a rich ecosystem and app marketplace (hosting image & music generation and more).
“When you think about AI as a collaborator, it opens up a whole palette of different interactions & product experiences you can deliver.” No single modality (e.g. prompting alone) is ideal for everything from ideation to creation to refinement.
What’s it like to author at a higher level of abstraction? “It’s a dance,” and it’s still a work in progress.
What’s the role of personalization? Responsive content. Personalizing messaging has been a huge driver of Canva’s growth, and they want to bring similar tools & best practices to everyone.
“The real crux of Canva is storytelling.” Video is now used by tens of millions of people. Across media (video, images, presentations), the same challenges appear: Properly complete your idea. Make fine-grained edits. Bring in others & get their feedback.
“Knowing the start & the end, but less of the middle.” AI-enabled tools can remove production drudgery, but one’s starting point & desired outcome remain essential. Start: Fundamental understanding of what works. Ideas, thinking creatively. Elements of editorship & taste are essential. Later: It’s how you express this, measure impact, take insights into the creation loop.
00:00 – Canva’s $32B Empire the future of Design
02:26 – Design for Everyone: Canva’s Origin Story
04:19 – Why Canva Bet on the Web
07:29 – How Have Canva Users Changed Over the Years?
12:14 – Why Canva Isn’t Just Unbundling Adobe
14:50 – Canva’s AI Strategy Explained
18:12 – What Does Designing With AI Look Like?
22:55 – Scaling Content with Sheets, Data, and AI
27:17 – What is Canva Code?
29:38 – How Does Canva Fit Into Today’s AI Ecosystem?
32:35 – Why Adobe and Microsoft Should Be Worried
37:52 – Will Canva Expand Into Video Creation?
41:10 – Will AI Eliminate or Expand Creative Jobs?
On Friday I got to meet Dr. Fei-Fei Li, “the godmother of AI,” at the launch party for her new company, World Labs (see her launch blog post). We got to chat a bit about a paradox of complexity: that as computer models for perceiving & representing the world grow massively more sophisticated, the interfaces for doing common things—e.g. moving a person in a photo—can get radically simpler & more intentional. I’ll have more to say about this soon.
Meanwhile, here’s her fascinating & wide-ranging conversation with Lenny Rachitsky. I’m always a sucker for a good Platonic allegory-of-the-cave reference. 🙂
From the YouTube summary:
(00:00) Introduction to Dr. Fei-Fei Li
(05:31) The evolution of AI
(09:37) The birth of ImageNet
(17:25) The rise of deep learning
(23:53) The future of AI and AGI
(29:51) Introduction to world models
(40:45) The bitter lesson in AI and robotics
(48:02) Introducing Marble, a revolutionary product
(51:00) Applications and use cases of Marble
(01:01:01) The founder’s journey and insights
(01:10:05) Human-centered AI at Stanford
(01:14:24) The role of AI in various professions
(01:18:16) Conclusion and final thoughts
And here’s Gemini’s solid summary of their discussion of world models:
The Motivation: While LLMs are inspiring, they lack the spatial intelligence and world understanding that humans use daily. This ability to reason about the physical world—understanding objects, movement, and situational awareness—is essential for tasks like first response or even just tidying a kitchen 32:23.
The Concept: A world model is described as the lynchpin connecting visual intelligence, robotics, and other forms of intelligence beyond language 33:32. It is a foundational model that allows an agent (human or robot) to:
Create worlds in their mind’s eye through prompting 35:01.
Interact with that world by browsing, walking, picking up objects, or changing things 35:12.
Reason within the world, such as a robot planning its path 35:31.
The Application: World models are considered the key missing piece for building effective embodied AI, especially robots 36:08. Beyond robotics, the technology is expected to unlock major advances in scientific discovery (like deducing 3D structures from 2D data) 37:48, games, and design 37:31.
The Product: Dr. Li co-founded World Labs to pursue this mission 34:25. Their first product, Marble, is a generative model that outputs genuinely 3D worlds which users can navigate and explore 49:11. Current use cases include virtual production/VFX, game development, and creating synthetic data for robotic simulation 53:05.
I was so chuffed to text my wife from the Adobe MAX keynote and report that the next-gen video editor she’d kicked off as PM several years ago has now come to the world, at least in partial form, as the new Firefly Video Editor (currently accepting requests for access). Here our pal Dave Werner provides a characteristically charming tour:
I thought this was a pretty interesting & thoughtful conversation. It’s interesting to think about ways to evaluate & reward process (hard work through challenges) and not just product (final projects, tests, etc.). AI obviously enables a lot of skipping the former in pursuit of the latter—but (shocker!) people then don’t build knowhow around solving problems, or even remember (much less feel pride in) the artifacts they produce.
The issues go a lot deeper, to the very philosophy of education itself. So we sat down and talked to a lot of teachers — you’ll hear many of their voices throughout this episode — and we kept hearing one cri du coeur again and again: What are we even doing here? What’s the point?
Links, courtesy of the Verge team:
A majority of high school students use gen AI for schoolwork | College Board
About a quarter of teens have used ChatGPT for schoolwork | Pew Research
Check out MotionStream, “a streaming (real-time, long-duration) video generation system with motion controls, unlocking new possibilities for interactive content generation.” It’s said to run at 29fps on a single H100 GPU (!).
MotionStream: Real-time, interactive video generation with mouse-based motion control; runs at 29 FPS with 0.4s latency on one H100; uses point tracks to control object/camera motion and enables real-time video editing. https://t.co/fFi9iB9ty7
What I’m really wondering, though, is whether/when/how an interactive interface like this can come to Photoshop & other image-editing environments. I’m not yet sure how the dots connect, but could it be paired with something like this model?
Qwen Image Multiple Angles LoRA is an exquisitely trained LoRA!
Keeps characters and scenes consistent, and flies the camera around! Open source got there! One of the best LoRAs I’ve come across lately.
Oh man, this parody of the messaging around AI-justified (?) price increases is 100% pitch perfect. (“It’s the corporate music that sends me into a rage.”)
My friend Bilawal got to sit down with VFX pioneer John Gaeta to discuss “A new language of perception,” Bullet Time, groundbreaking photogrammetry, the coming Big Bang/golden age of storytelling, chasing “a feeling of limitlessness,” and much more.
In this conversation:
— How Matrix VFX techniques became the prototypes for AI filmmaking tools, game engines, and AR/VR systems
— How The Matrix team sourced PhD thesis films from university labs to invent new 3D capture techniques
— Why “universal capture” from Matrix 2 & 3 was the precursor to modern volumetric video and 3D avatars
— The Matrix 4 experiments with Unreal Engine that almost launched a transmedia universe based on The Animatrix
— Why dystopian sci-fi becomes infrastructure (and what that means for AI safety)
— Where John is building next: Escape.art and the future of interactive storytelling
I recently shared a really helpful video from Jesús Ramirez that showed practical uses for each model inside Photoshop (e.g. text editing via Flux). Now here’s a direct comparison from Colin Smith, highlighting these strengths:
Flux: Realistic, detailed; doesn’t produce unwanted shifts in regions that should stay unchanged. Tends to maintain more of the original image, such as hair or background elements.
Nano Banana: Smooth & pleasing (if sometimes a bit “Disney”); good at following complex prompts. May be better at removing objects.
These specific examples are great, but I continue to wish for more standardized evals that would help produce objective measures across models. I’m investigating the state of the art there. More to share soon, I hope!
Improvements to imaging continue at a breakneck pace, as engines evolve from “simple” text-to-image (which we considered miraculous just three years ago—and which I still kinda do, TBH) to understanding time & space.
Now Emu (see project page, code) can create entire multi-page/image narratives, turn 2D images into 3D worlds, and more. Check it out:
“Nodes, nodes, nodes!” — my exasperated then-10yo coming home from learning Unreal at summer camp 🙂
Love ’em or hate ’em, these UI building blocks seem to be everywhere these days—including in Runway’s new Workflows environment:
Introducing Workflows, a new way to build your own tools inside of Runway.
Now you can create your own custom node-based workflows chaining together multiple models, modalities and intermediary steps for even more control of your generations. Build the Workflows that work for…