[I know this note seems supremely off topic, but bear with me.]
I’m sorry to hear of the passing of larger-than-life NBA star Dikembe Mutombo. He inspired the name of a “Project Mutombo” at Google, which was meant to block unintended sharing of content outside of one’s company. Unrelated (AFAIK he never knew of the project), back in 2015 I happened to see him biking around campus—dwarfing a hapless Google Bike & making its back tire cartoonishly flat.
RIP, big guy. Thanks for the memories, GIFs, and inspiration.
Wow @runwayml just dropped an updated Gen-3 Alpha Turbo Video-to-Video mode & it’s awesome! It’s super fast & lets you do 9:16 portrait video. Anything is possible! pic.twitter.com/AxeFaJwAPR
I quite enjoyed the Verge’s interview with Mark Zuckerberg, discussing how they think about building a whole range of reality-augmenting devices, from no-display Wayfarers to big-ass goggles, and especially to “glasses that look like glasses”—the Holy Grail in between.
Links to some of the wide-ranging topics they covered:
00:00 Orion AR smart glasses
00:27 Platform shift from mobile to AR
02:15 The vision for Orion & AR glasses
03:55 Why people will upgrade to AR glasses
05:20 A range of options for smart glasses
07:32 Consumer ambitions for Orion
11:40 Reality Labs spending & the cost of AR
12:44 Ray-Ban partnership
17:11 Ray-Ban Meta sales & success
18:59 Bringing AI to the Ray-Ban Meta
21:54 Replacing phones with AR glasses
25:18 Influx of AI content on social media
28:32 The vision for AI-filled social media
34:04 Will AI lead to less human interaction?
35:24 Success of Threads
36:41 Competing with X & the role of news
40:04 Why politics can hurt social platforms
41:52 Mark’s shift away from politics
46:00 Cambridge Analytica, in hindsight
49:09 Link between teen mental health and social media
53:52 Disagreeing with EU regulation
56:06 Debate around AI training data & copyright
1:00:07 Responsibility around AR as a platform
Tangentially, I gave myself an unintended chuckle with this:
Last week at the Apple keynote event, the iPhone camera features that stood out the most to me were the new Camera Control button, upgraded 48-megapixel Ultra Wide sensor, improved audio recording features (wind reduction and Audio Mix), and Photographic Styles. […]
Over the past week we’ve traveled over a thousand kilometers across Kenya, capturing more than 10,000 photos and logging over 3TB of ProRes footage with the new iPhone 16 Pro and iPhone 16 Pro Max cameras. Along the way, we’ve gained valuable insights into these camera systems and their features.
Fernando Livschitz, whose amazing work I’ve featured many times over the years, is back with some delightfully pillowy interactions in & over the Big Apple:
I have no idea what AI and other tools were used here, but it’d be fun to get a peek behind the curtain. As a commenter notes,
The meandering strings in the soundtrack. The hard studio lighting of the close-ups. The midtone-heavy Technicolor grading. The macro-lens DOF for animation sequences. This is spot-on 50’s film aesthetic, bravo.
And if that headline makes no sense, it probably just means you’re not terminally AI-pilled, and I’m caught flipping a grunt. 😉 Anyway, the tiny but mighty crew at Krea have brought the new Flux text-to-image model—including its ability to spell—to their realtime creation tool:
Flux now in Realtime.
available in Krea with hundreds of styles included.
What a fun little project & great NYC vibe-catcher: the folks at Runway captured street scenes with a disposable film camera, then used their model to put the images in motion. Check it out:
Check out my friend Bilawal’s summary thread, which pairs quick demos from Apple with bits of useful context:
Caught the Apple keynote? I’ve distilled down the most intriguing highlights for AI and spatial computing creators and builders—no need to sift through it yourself. Thread: pic.twitter.com/hiLM7iMzi4
There are some great additional details in this thread from Halide Camera as well:
There’s a lot of info to digest from the keynote, so here’s our summary of all the changes and new features of iPhone 16 and 16 Pro cameras in this quick thread pic.twitter.com/z7xB0aekLi
Somehow, despite my wife being a huge fan of the show over the last couple of years, I hadn’t previously seen the delightful titles for Only Murders In The Building:
“The brief was this idea of a love letter to New York in a way and true crime and true crime podcasts,” Lisa Bolan, a creative director at Elastic, told Salon. “John really wanted to capture this romantic illustrative approach to New York, building on the magic of Hirschfeld and The New Yorker – illustrators who have abstracted New York in a way that’s beautiful and also speaks to these little glimpses of magic in the urban landscape.”
I love seeing how scrappy creators combine tools in new ways, blazing trails that we may come to see as commonplace soon enough. Here Eric Solorio (enigmatic_e) shows how he used Viggle & other tools to create his viral Deadpool animation:
As promised, here is a breakdown of how I did the Deadpool animation I recently posted. pic.twitter.com/F130Skq17U
If you never see the use of After Effects in this delightfully madcap vid—well, that’s exactly as it should be. Apparently the filmmakers were featured in an Adobe trade show booth after it was released.
I’ve been having a ball using the new Ideogram app for iOS to import photos & remix them into new creations. This is possible via their web UI as well, but there’s something extra magical about the immediacy of capture & remix. Check out a couple quick explorations I did while out with the kids, starting from a ballcap & the fuel tank of an old motorcycle:
I love this level of transparency from the folks behind Photo AI. Developer @levelsio reports,
[Flux] made Photo AI finally good enough overnight to be actually used by people and be satisfied with the results… it’s more expensive [than SD] but worth it because the photos are way way better… Not sure about profitability but with SD it was about 85% profit. With Flux def less maybe 65%… Very unplanned and grateful the foundational models got better.
We’re arguably in something of a trough of disillusionment in the AI-art hype cycle, but this kind of progress gives reason for hope: more quality & more utility do translate into more sustainable value—and there’s every reason to think that things will only improve from here.
Flux, the new AI model, changes businesses (and lives)
It made https://t.co/1vEawpI5vb finally good enough overnight to be actually used by people and be satisfied with the results
All my improvements before helped but now it’s accelerating with Flux’s photo quality pic.twitter.com/BiAqi5BgnY
Listen, I know that it’s a lot more seductive & cathartic to say “I f*cking hate generative AI,” and you can get 90,000+ likes for doing so, but—believe it or not—thoughtfulness & nuance actually matter. That is, how one uses generative tech can have very different implications for the creative community.
It’s therefore important to evaluate a range of risk/reward scenarios: What’s unambiguously useful & low-risk, vs. what’s an inducement to ripping people off, and what lies in the middle?
I see a continuum like this (click/tap to see larger):
None of this will draw any attention or generate much conversation—at least if my attempts to engage people on Twitter are any indication—but it’s the kind of thing actual toolmakers must engage with if we’re to make progress together. And so, back to work.
“Tell me about a product you hate that you use regularly.” I asked this question of hundreds of Google PM candidates I interviewed, and it was always a great bozo detector. Most people don’t have much of an answer—no real passion or perspective. I want to know not just what sucks, but why it sucks.
If I were asked the same question, I’d immediately say “Every car infotainment system ever made.” As Tolstoy might say, “Each one is unhappy in its own way.” The most interesting thing, I think, isn’t just to talk about the crappy mismatched & competing experiences, but rather about why every system I’ve ever used sucks. The answer can’t be “Every person at every company is a moron”—so what is it?
So much comes down to the structure of the industry, with hardware & software being made by a mishmash of corporate frenemies, all contending with a soup of regulations, risk aversion (one recall can destroy the profitability of a whole product line), and surprisingly bargain-bin electronics.
Despite all that, talented folks continue to fight the good fight, and I enjoyed John LePore’s speculative designs that reinterpret the instrument clusters of classic cars (from Corvettes to DeLoreans) through Apple’s latest CarPlay framework:
My friend Nathan has fed a mix of Schwarzenegger photos & drawings from Aesop’s Fables into the new open-source Flux model, creating a rad woodcut style. That’s interesting enough on its own—but it’s so 24 hours ago, and thus he’s now taken to animating the results. Check out the thread below for details:
Animating yesterday’s #FLUX woodcut Arnold using one of my favorite clips from the old soundboards
This uses Follow-Your-Emoji / Reference UNet in ComfyUI, which did a better job than LivePortrait.
It’s wild that capabilities that blew our minds two years ago—when I & others spent months on a waiting list for DALL•E, which demanded beefy servers to run—are now available (only better) running in your pocket, on your telephone. Check out the latest from Google:
Pixel Studio is a first-of-its-kind image generator. So now you can bring all ideas to life from scratch, right on your phone — a true creative canvas.
It’s powered by combining an on-device diffusion model running on Tensor G4 and our Imagen 3 text-to-image model in the cloud. With a UI optimized for easy prompting, style changes and editing, you can quickly bring your ideas to conversations with friends and family.
3. Pixel Studio
Create anything you imagine with PixelStudio, a groundbreaking image generator powered by an on-device diffusion model. It’s your AI canvas. pic.twitter.com/oDBqkUfqOR
Back when I worked on Google Photos, and especially later when I worked in Research, I really wanted to ship a camera mode that would help ensure great group photos. Prior to the user pressing the capture button, it would observe the incoming video stream, notice when it had at least one instance of each face smiling with their eyes open, and then knit together a single image in which everyone looked good.
Of course, the idea was hardly new: I’d done the same thing manually with my own wedding photos back in 2005, and in 2013 Google+ introduced “AutoAwesome Smile” to select good expressions across images & merge them into a single shot. It was a great feature, though sadly the only time people noticed its existence was when it failed in often hilarious “AutoAwful” ways (turning your baby or dog into, say, a two-nosed Picasso). My idea was meant to improve on this by not requiring multiple photos, and of course by suppressing unwanted hilarity.
Anyway, Googlers gonna Google, and now the Pixel team has introduced an interactive mode that helps you capture & merge two shots—the first one of a group, and the second of the photographer who took the first. Check out Marques Brownlee’s 1-minute demo:
The most interesting AI feature on the new Pixels IMO: “Add Me”
We take the intuitive conversational flow of ChatGPT and merge it with Uizard’s generative UI capabilities and drag-and-drop editor, to provide you with an intuitive UI design generator. You can turn a couple of ideas into a digital product design concept in a flash!
I’m really curious to see how the application of LLMs & conversational AI reshapes the design process, from ideation & collaboration to execution, deployment, and learning—and I’d love to hear your thoughts! Meanwhile here’s a very concise look at how Autodesigner works:
And if that piques your interest, here’s a more in-depth look:
I fondly recall Andy Samberg saying years ago that they’d sometimes cook up a sketch that would air at the absolute tail end of Saturday Night Live, be seen by almost no one, and be gotten by far fewer still—and yet for, like, 10,000 kids, it would become their favorite thing ever.
Given that it was just my birthday, I’ve dug up such an old… gem (?). This is why I’ve spent the last ~25 years hearing Jack Black belting out “Ha-ppy Birth-DAYYY!!” Enjoy (?!).
99% Invisible is back at it, uncovering hidden but fascinating bits of design in action. This time around it’s concerned with the art of movie title & poster design—specifically with how to deal with actors who insist on being top billed. In the case of the otherwise forgotten movie Outrageous Fortune:
Two different prints of the movie were made, one listing Shelley Long’s name first and the other listing Bette Midler’s name first. Not only that, two different covers for take-home products (LaserDisc and VHS) were also made, with different names first. The art was mirrored, so that the names aligned with the actors’ images.
One interesting pattern that’s emerged is to place one actor’s name in the lower left & another in the upper right—thus deliberately conflicting with normal reading order in English:
Anyway, as always with this show, just trust me—the subject is way more interesting than you might think.
I’m old enough to remember 2020, when we sincerely (?) thought that everyone would be excited to put 3D-scanned virtual Olympians onto their coffee tables… or something. (Hey, it was fun while it lasted! And it temporarily kept a bunch of graphics nerds from having to slink back to the sweatshop grind of video game development.)
Anyway, here’s a look back to what Google was doing around augmented reality and the 2020 (’21) Olympics:
I swear I spent half of last summer staring at tiny 3D Naomi Osaka volleying shots on my desktop. I remain jealous of my former teammates who got to work with these athletes (and before them, folks like Donald Glover as Childish Gambino), even though doing so meant dealing with a million Covid safety protocols. Here’s a quick look at how they captured folks flexing & flying through space:
Man do I ever love these guys. Do yourself a solid and listen to this quick, accessible history covering the design of the ’68 games in Mexico City—one inexorably wrapped up in political conflict & civic design. It’s great.
Google Research has devised “Alchemist,” a new way to swap object textures:
And people keep doing wonderful things with realtime image synthesis:
Happy mixing of decoder embeddings in real-time! Base prompt is ‘photo of a room, sofa, decor’ and the two knobs are ‘industrial’ and ‘rococo’. If you are wondering what is running there in the background… pic.twitter.com/5svyDy5C4e
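In case you’re curious what those “knobs” might look like in code, here’s a minimal sketch of blending prompt embeddings with the Hugging Face diffusers library—my own approximation, not the author’s actual rig (which almost certainly runs a faster, distilled model to hit realtime); the model choice, prompts, and blend weights below are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical sketch: blend two "style knob" prompt embeddings into a base prompt.
# Model and weights are illustrative, not the original author's setup.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(text: str) -> torch.Tensor:
    """Encode a prompt into CLIP text embeddings (shape [1, 77, 768])."""
    input_ids = pipe.tokenizer(
        text,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(pipe.device)
    return pipe.text_encoder(input_ids)[0]

base = embed("photo of a room, sofa, decor")
industrial = embed("photo of a room, sofa, decor, industrial style")
rococo = embed("photo of a room, sofa, decor, rococo style")

# Two "knobs" in [0, 1]; each one pulls the base embedding toward a style.
k_industrial, k_rococo = 0.7, 0.2
blended = base + k_industrial * (industrial - base) + k_rococo * (rococo - base)

image = pipe(prompt_embeds=blended, num_inference_steps=25).images[0]
image.save("blended_room.png")
```

In a realtime UI you’d simply re-run the final generation call as the sliders move, reusing the cached embeddings.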
Always pushing the limits of expressive tech, Martin Nebelong has paired Photoshop painting with AI rendering, followed by Runway’s new image-to-video model. “Days of Miracles & Wonder,” as always:
Painting with AI in photoshop – And doing magic with Runways new Gen 3 image to video. This stuff is insane.. wow.
Our tools and workflows are at the brink of an incredible renaissance.
In the history books, this clip will be referred to as “Owl and cake” 😛
Man, I’m old enough to remember rotoscoping video by hand—a process that quickly made me want to jump right out a window. Years later, when we were working on realtime video segmentation at Google, I was so proud to show the tech to a bunch of high school design students—only to have them shrug and treat it as completely normal.
Ah, but so it goes: “One of history’s few iron laws is that luxuries tend to become necessities and to spawn new obligations. Once people get used to a certain luxury, they take it for granted.” — Yuval Noah Harari
In any case, Meta has just released what looks like a great update to their excellent—and open-source—Segment Anything Model. Check it out:
Introducing Meta Segment Anything Model 2 (SAM 2) — the first unified model for real-time, promptable object segmentation in images & videos.
SAM 2 is available today under Apache 2.0 so that anyone can use it to build their own experiences
You can play with the demo and learn more on the site:
Following up on the success of the Meta Segment Anything Model (SAM) for images, we’re releasing SAM 2, a unified model for real-time promptable object segmentation in images and videos that achieves state-of-the-art performance.
In keeping with our approach to open science, we’re sharing the code and model weights with a permissive Apache 2.0 license.
We’re also sharing the SA-V dataset, which includes approximately 51,000 real-world videos and more than 600,000 masklets (spatio-temporal masks).
SAM 2 can segment any object in any video or image—even for objects and visual domains it has not seen previously, enabling a diverse range of use cases without custom adaptation.
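If you’d rather poke at it in code than in the demo, here’s a minimal image-segmentation sketch based on my reading of the SAM 2 repo’s image predictor; the checkpoint path, config name, image file, and click coordinates are placeholders you’d swap for your own.

```python
import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Placeholder checkpoint/config -- use whichever SAM 2 variant you've downloaded.
checkpoint = "checkpoints/sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"

predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

image = np.array(Image.open("street_scene.jpg").convert("RGB"))
predictor.set_image(image)

# Prompt the model with a single foreground click (x, y); label 1 = "this object".
masks, scores, _ = predictor.predict(
    point_coords=np.array([[640, 360]]),
    point_labels=np.array([1]),
    multimask_output=True,
)

# Keep the highest-scoring mask and save it as a grayscale image.
best = masks[np.argmax(scores)]
Image.fromarray((best * 255).astype(np.uint8)).save("mask.png")
```

The video side works similarly, with a separate predictor that propagates those click-prompted masklets across frames.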
Back when we launched Firefly (alllll the way back in March 2023), we hinted at the potential of combining 3D geometry with diffusion-based rendering, and I tweeted out a very early sneak peek:
Did you see this mind blowing Adobe ControlNet + 3D Composer Adobe is going to launch! It will really boost creatives’ workflow. Video through @jnack
A year+ later, I’m no longer working to integrate the Babylon 3D engine into Adobe tools—instead I’m working directly with the Babylon team at Microsoft (!). Meanwhile I like seeing how my old teammates are continuing to explore integrations with 3D (in this case, Project Neo). Here’s one quick flow:
Here’s a quick exploration from the always-interesting Martin Nebelong:
A very quick first test of Adobe Project Neo.. didn’t realize this was out in open beta by now. Very cool!
I had to try to sculpt a burger and take that through Krea. You know, the usual thing!
There’s some very nice UX in NEO and the list-based SDF editing is awesome.. very… pic.twitter.com/e3ldyPfEDw
And here’s a fun little Neo->Firefly->AI video interpolation test from Kris Kashtanova:
Tutorial: Direct your cartoons with Project Neo + Firefly + ToonCrafter
1) Model your characters in Project Neo
2) Generate first and last frame with Firefly + Structure Reference
3) Use ToonCrafter to make a video interpolation between the first and the last frame
As I’ve probably mentioned already, when I first surveyed Adobe customers a couple of years ago (right after DALL•E & Midjourney first shipped), it was clear that they wanted selective synthesis—adding things to compositions, and especially removing them—much more strongly than whole-image synthesis.
Thus it’s no surprise that Generative Fill in Photoshop has so clearly delivered Firefly’s strongest product-market fit, and I’m excited to see Illustrator following the same path—but for vectors:
Generative Shape Fill will help you improve your workflow including:
Create detailed, scalable vectors: After you draw or select your shape, silhouette, or outline in your artboard, use a text prompt to ideate on vector options to fill it.
Style Reference for brand consistency: Create a wide variety of options that match the color, style, and shape of your artwork to ensure a consistent look and feel.
Add effects to your creations: Enhance your vector options further by adding styles like 3D, geometric, pixel art or more.
They’re also adding the ability to create vector patterns simply via prompting:
Soon after Generative Fill shipped last year, people discovered that using a semi-opaque selection could help blend results into an environment (e.g. putting fish under water). The new Selection Brush in Photoshop takes functionality that’s been around for 30+ years (via Quick Select mode) and brings it more to the surface, which in turn makes it easier to control GenFill behavior:
For now the functionality is limited to upscaling, but I have to think that they’ll soon turn on the super cool relighting & restyling tech that enables fun like transforming my dog using just different prompts (click to see larger):
I wish Adobe hadn’t given up (at least for the last couple of years and foreseeable future) on the Smart Portrait tech we were developing. It’s been stuck at 1.0 since 2020 and could be so much better. Maybe someday!
In the meantime, check out LivePortrait:
Some impressive early results coming out of LivePortrait, a new model for face animation.
Upload a photo + a reference video and combine them!
Being able to declare what you want, instead of having to painstakingly set up parameters for materials, lighting, etc., may prove to be an incredible unlock for visual expressivity, particularly around the generally intimidating realm of 3D. Check out what tyFlow is bringing to the table:
You can see a bit more about how it works in this vid…
Years ago Adobe experimented with a real-time prototype of Photoshop’s Landscape Mixer Neural Filter, and the resulting responsiveness made one feel like a deity—fluidly changing summer to winter & back again. I was reminded of using Google Earth VR, where grabbing & dragging the world around you felt similarly godlike.
Nothing came of it, but in the time since then, realtime diffusion rendering (see amazing examples from Krea & others) and image-to-image restyling have opened some amazing new doors. I wish I could attach filters to any layer in Photoshop (text, 3D, shape, image) and have it reinterpreted like this:
New way to navigate latent space. It preserves the underlying image structure and feels a bit like a powerful style-transfer that can be applied to anything. The trick is to… pic.twitter.com/orFBysBpkT
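The tweet doesn’t spell out the trick, so as a rough stand-in, here’s what a structure-preserving restyle can look like via a plain image-to-image pass in diffusers; the model, prompt, and strength value are my assumptions, and the lower the strength, the more of the original layout survives.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Hypothetical sketch: restyle an existing render while keeping its structure.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

layer = Image.open("layer_render.png").convert("RGB")  # e.g. a flattened Photoshop layer

# Modest strength keeps the composition intact; crank it up for a heavier restyle.
restyled = pipe(
    prompt="watercolor illustration, soft winter light",
    image=layer,
    strength=0.45,
    guidance_scale=7.0,
).images[0]

restyled.save("layer_restyled.png")
```

The Photoshop dream above would essentially be this pass, attached non-destructively to a layer and re-run whenever the layer changes.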
Pretty cool! I’d love to see Illustrator support model import & rendering of this sort, such that models could be re-posed in one’s .ai doc, but this still looks like a solid approach:
3D meets 2D!
With the Expressive or Pixel Art styles in Project Neo, you can export your designs as SVGs to edit in Illustrator or use on your websites. pic.twitter.com/vOsjb2S2Un