This new tool (currently in closed beta; you can request access via the site) sounds promising:
Martini puts you in the director’s chair so you can make the video you see in your head… Get the exact shot you want, not whatever the model gives you. Step into virtual worlds and compose shots with camera position, lenses, and movement… No more juggling disconnected tools. Image generation, video generation, and world models—all in one place, with a built-in timeline.
I can’t wait to try stepping into the set. Beyond filmmaking, think what something like this could mean to image creation & editing…
Just yesterday I was chatting with a new friend from Punjab about having worked with a coincidentally named pair of teammates at Google—Kieran Murphy & Kiran Murthy. I love getting name-based insights into culture & history, and having met cool folks in Zimbabwe last year, this piece from 99% Invisible is 1000% up my alley.
“It’s not that you’re not good enough, it’s just that we can make you better.”
So sang Tears for Fears, and the line came to mind as the recently announced PhotaLabs promised to show “your reality, but made more magical.” That is, they create the shots you just missed, or wish you’d taken:
Honestly, my first reaction was “ick.” I know that human memory is famously untrustworthy, and photos can manipulate it—not even through editing, but just through selective capture & curation. Even so, this kind of retroactive capture seems potentially deranging. Here’s the date you wish you’d gone on; here’s the college experience you wish you’d had.
I’m reminded of the Nathaniel Hawthorne quote featured on The Sopranos:
No man for any considerable period can wear one face to himself, and another to the multitude, without finally getting bewildered as to which may be the true.
Like, at what point did you take these awkward sibling portraits…?
We all need an awkward ’90s holiday photoshoot with our siblings.
If you missed the boat (like I did), you’re in luck – I wrote some prompts you can use with Nano Banana Pro.
Upload a photo of each person and then use the following:
And, hey, darn if I can resist the devil’s candy: I wasn’t able to capture a shot of my sons together with their dates, so off I went to a combo of Gemini & Ideogram. I honestly kinda love the results, and so down the cognitive rabbit hole I slide… ¯\_(ツ)_/¯
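If you’d rather script that kind of mashup than click around, here’s a minimal sketch of the same idea using the google-genai Python SDK. To be clear, this is my own rough approximation, not the prompts from the thread above: the filenames, prompt text, and model name are all placeholders you’d swap for your own (including whichever Nano Banana variant you have access to).

```python
# Rough sketch only: assumes the google-genai SDK (pip install google-genai),
# a GEMINI_API_KEY in the environment, and placeholder filenames/prompt.
from io import BytesIO

from PIL import Image
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

# One reference photo per person, as the instructions above suggest.
references = [Image.open("person_1.jpg"), Image.open("person_2.jpg")]

# Placeholder prompt, not the one from the linked thread.
prompt = (
    "Place these two people together in an awkward 1990s studio holiday "
    "portrait: matching sweaters, soft-focus vignette, laser backdrop."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # "Nano Banana"; swap in the Pro model if you have access
    contents=references + [prompt],
)

# Save any image parts the model returns.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save(f"portrait_{i}.png")
```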
Of course, depending on how far all this goes, the following tweet might prove to be prophetic:
Modern day horror story where you look through the photo albums of you as a kid and realize all the pictures have this symbol in the corner pic.twitter.com/dHnUrUJs0r
I’m not fully sure what this rather eye-popping little demo says about how our brains perceive reality, and thus what we can & cannot trust, but dang if it isn’t interesting:
I’m down in LA having tons of great conversations around AI and the future of creativity. If you want to chat, please hit me up. firstname dot lastname at gmail.
Apropos of the song featured in my previous post, in case you haven’t already beheld the ludicrous majesty of the Peacemaker Season 2 intros, well, stop cheating yourself!
Better still, here’s a peek behind the scenes of creating this inspired mayhem:
“Yes, And”: It’s the golden rule of improv comedy, and it’s the title of the paper I wrote & circulated throughout Adobe as soon as DALL•E dropped 3+ years ago: yes, we should make our own great models, and of course we should integrate the best of what the rest of the world is making! I mean, duh, why wouldn’t we??
This stuff can take time, of course (oh, so much time), but here we are: Adobe has announced that Google’s Nano Banana editing model will be coming to a Photoshop beta build near you in the immediate future.
Side note: it’s funny that in order to really upgrade Photoshop, one of the key minds behind Firefly simply needed to quit the company, move to Google, build Nano Banana, and then license it back to Adobe. Funny ol’ world…
It’s time to peel back a sneak and reveal that Nano Banana (Gemini 2.5 Flash Image) floats into Photoshop this September!
Soon you’ll be able to combine prompt-based edits with the power of Photoshop’s non-destructive tools like selections, layers, masks, and more! pic.twitter.com/CSLgJYVsHo
This could be an awesome opportunity for the right person, who’d get to work on things I’ve wanted the team to do for 15+ years!
We’re looking for an expert technical product manager to lead Photoshop’s foundational architecture and performance strategy. This is a pivotal role responsible for evolving the core technologies that power Photoshop’s speed, stability, and future scalability across platforms.
You’ll drive major efforts to modernize our rendering and compute architecture, migrate legacy systems to more scalable platforms, and accelerate performance through GPU and hardware optimization. This work touches nearly every part of Photoshop, from canvas rendering to feature responsiveness to long-term cross-platform consistency.
This is a principal-level individual contributor role with the potential to grow a team in the future.
I meant to post this incredibly weird old-ish Chemical Brothers video for Halloween. Seems somehow just as appropriate this morning, imagery+mood-wise.
[I know this note seems supremely off topic, but bear with me.]
I’m sorry to hear of the passing of larger-than-life NBA star Dikembe Mutombo. He inspired the name of a “Project Mutombo” at Google, which was meant to block unintended sharing of content outside of one’s company. Unrelated (AFAIK he never knew of the project), back in 2015 I happened to see him biking around campus—dwarfing a hapless Google Bike & making its back tire cartoonishly flat.
RIP, big guy. Thanks for the memories, GIFs, and inspiration.
Fernando Livschitz, whose amazing work I’ve featured many times over the years, is back with some delightfully pillowy interactions in & over the Big Apple:
I fondly recall Andy Samberg saying years ago that they’d sometimes cook up a sketch that would air at the absolute tail end of Saturday Night Live, be seen by almost no one, and be gotten by far fewer still—and yet for, like, 10,000 kids, it would become their favorite thing ever.
Given that it was just my birthday, I’ve dug up such an old… gem (?). This is why I’ve spent the last ~25 years hearing Jack Black belting out “Ha-ppy Birth-DAYYY!!” Enjoy (?!).
Wandering alone around the campus of my alma mater this past weekend had me in a deeply wistful, reflective mood. I reached out across time & space to some long-separated friends, and I thought you might enjoy this beautiful tune that’s been in my head the whole while.
Here’s a micro tutorial on how to create similar effects:
Here’s how to morph memes using Dream Machine’s new Keyframe feature. Simply upload two of your favorite memes, write a prompt that describes how you’d like to transition between them, and we’ll dream up the rest. https://t.co/G3HUEBEAcO #LumaDreamMachine pic.twitter.com/yNaRhERutn
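If you’d rather drive Keyframes from code than from the web app, here’s a minimal sketch assuming the lumaai Python SDK. The image URLs and prompt are placeholders, and the exact field names may have drifted from the current API docs, so treat it as a starting point rather than gospel.

```python
# Minimal sketch: assumes the lumaai SDK (pip install lumaai) and a
# LUMAAI_API_KEY in the environment; URLs and prompt are placeholders.
import os
import time

from lumaai import LumaAI

client = LumaAI(auth_token=os.environ["LUMAAI_API_KEY"])

# Two memes as start/end keyframes, plus a prompt describing the transition.
generation = client.generations.create(
    prompt="The first image slowly morphs into the second in a dreamy dissolve",
    keyframes={
        "frame0": {"type": "image", "url": "https://example.com/meme_one.jpg"},
        "frame1": {"type": "image", "url": "https://example.com/meme_two.jpg"},
    },
)

# Poll until the generation finishes, then print the resulting video URL.
while generation.state not in ("completed", "failed"):
    time.sleep(5)
    generation = client.generations.get(id=generation.id)

if generation.state == "completed":
    print("Video ready:", generation.assets.video)
else:
    print("Generation did not complete:", generation.state)
```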
I really enjoyed this TED talk from Fei-Fei Li on spatial computing & the possible dawning of a Cambrian explosion in how we—and our creations—perceive the world.
In the beginning of the universe, all was darkness — until the first organisms developed sight, which ushered in an explosion of life, learning and progress. AI pioneer Fei-Fei Li says a similar moment is about to happen for computers and robots. She shows how machines are gaining “spatial intelligence” — the ability to process visual data, make predictions and act upon those predictions — and shares how this could enable AI to interact with humans in the real world.
Given that my wife is the one responsible enough to chase the eclipse today & not roast her eyeballs, I’m left at home digging up a classic Dana Carvey bit about the eclipse (30 seconds, starts at 2:04). Enjoy! :-p
I think the spirit of maximally inclusive “Irishness” has special resonance for millions of people around the world, like me, who can trace a portion (but not all) of their ancestry to the Emerald Isle. (For me it’s 75%, surname notwithstanding.) I’m reminded of Notre Dame’s “What Would You Fight For?” campaign, which features scientists, engineers, and humanitarians from around the world who conclude with “We are the Fighting Irish.” I dunno—it’s hard to explain, but it really warms my heart—as did the Irish & Chinese Railroad Workers float we saw in SF’s St. Paddy’s parade on Saturday.
Anyway, I found this bit starring & directed by Jason Momoa to be pretty charming. Enjoy:
In her 12+ years in Adobe’s video group, my wife Margot worked to bring more women into the world of editing & filmmaking, participating in efforts supporting all kinds of filmmakers across a diverse range of ages, genders, types of subject matter, experience levels, and backgrounds. I’m delighted to see such efforts continuing & growing:
Adobe and the Adobe Foundation will partner with a cohort of global organizations that are committed to empowering underrepresented communities, including Easterseals, Gold House, Latinx House, Sundance Institute and Yuvaa, funding fellowships and apprenticeships that offer direct, hands-on industry access. The grants will also enable organizations to directly support filmmakers in their communities with funding for short and feature films.
The first fellowship is a collaboration with the NAACP, designed to increase representation in post-production. The NAACP Editing Fellowship is a 14-week program focused on education and training, career growth, and workplace experience, and will include access to Adobe Creative Cloud to further set up emerging creators with the necessary tools. Applications open on Jan. 18, with four fellows selected to participate in the program starting in May.
According to the team, the audio workflow changes now available in the beta include:
Interactive Fade Handles: Now you can simply click and drag from the edge of a clip to create a variety of custom audio fades in the timeline or drag across two clips to create a crossfade. These visual fades provide more precision and control over audio transitions while making it easy to see where they are applied across your sequence.
AI-powered Audio Category Tagging: When you drag clips into the sequence, they’ll automatically be identified and labeled with new icons for dialogue, music, sound effects, or ambience. A single click on the icon provides access to the most relevant tools for that audio type in the Essential Sound panel — such as Loudness Matching or Auto Ducking.
Redesigned FX Clip Badges: An updated badge makes it easier for you to see which clips have effects added to them. New effects can be added by right clicking the badge, and a single click opens the Effect Control panel for even more adjustment without changing the workspace or searching for the panel.
Modern, Intelligent Waveforms and Clips: Waveforms now dynamically resize when you change the track height and improved clip colors make it easier for you to see and work with audio on the timeline.
My friend Kevin had the honor of designing the art & animation for this interactive wearable demo. Stick around (or jump) to the end to see the moving images:
I’m really happy & proud that Firefly now enables uploading your own images & mixing them into your creations. For months & months, this has been users’ number 1 feature request.
But with power comes responsibility, of course, and we’ve spent a lot of time thinking about ways to discourage misuse of the tech (i.e. how do we keep this from becoming a rip-off engine?). I’m glad to say that we’ve invested in some good guidelines & guardrails:
First, we require users to confirm they have the right to use any work that they upload to Generative Match as a reference image.
Second, if an image’s Content Credentials include tags indicating that the image shouldn’t be used as a style reference, users won’t be able to use it with Generative Match. We will be rolling out the ability to add these tags to assets as part of the Content Credentials framework within our flagship products.
Third, when a reference image is used to generate an asset, we save a thumbnail of the image to help ensure that the use of Generative Match meets our terms of service. We also note that a reference image was used in the asset’s Content Credentials. Storing the reference image provides an important dose of accountability.
To be clear, these protections are just first steps, and we plan to do more to strengthen protections. In the meantime, your feedback is most welcome!
Man, if you feel like you can’t keep up with technology while you do your day job, just please know that the same is true even at the company where one works, regarding one’s old app. At least we have smart folks like Deke McClelland to show us what’s been happening:
“Believe in creativity. Believe in imagination. Believe in innovation. Believe in the future.” As we say farewell to our co-founder Dr. John Warnock, we remember how his incredible ideas touched and transformed countless lives. Thank you for the gift of your creativity. ❤️️ pic.twitter.com/SrxOYohkqu
Like so many folks inside Adobe & far beyond, I’m saddened by the passing of our co-founder & a truly great innovator. I’m traveling this week in Ireland & thus haven’t time to compose a proper remembrance, but I’ve shared a few meaningful bits in this thread (click or tap through to see):
I am so sorry to hear of the passing of Adobe cofounder John Warnock. He changed all our lives, and those of millions more, for the better. God bless and godspeed, sir. 🙏 pic.twitter.com/ZHZDdkqcOO
OT, but too charming not to share. 😌 It’s amazing the creative mileage one can get from just a few minutes (if that) worth of recut footage plus a relatable concept.
Yikes—my ability to post got knocked out nearly a week ago due to a WordPress update gone awry. Hopefully things are now back to normal & I can resume sharing bits of the non-stop 5-alarm torrent of rad AI-related developments that land every day. Stay tuned!