Insta360 takes us down & not so dirty around Hamburg’s Miniatur Wunderland in this fun 2-minute tour:
Deeply chill photography
(Cue Metallica’s Trapped Under Ice!)
Russell Brown & some of my old Photoshop teammates recently ventured into -40º (!!) weather in Canada, pushing themselves & their gear to the limits to witness & capture the Northern Lights:
Perhaps on future trips they can team up with these folks:
To film an ice hockey match from this new angle of action, Axis Communications used a discreet modular camera — commonly seen in ATMs, onboard vehicles, and other small spaces where a tiny camera needs to fit — and froze it inside the ice.
Check out the results:
Behind—and under—the scenes:
Adobe’s hiring a prototyper to explore generative AI
We’re only just beginning to discover the experiential possibilities around generative creation, so I’m excited to see this rare gig open up:
You will build new and innovative user interactions and interfaces geared towards our customers’ unique needs, and test and refine those interfaces in collaboration with academic research, user researchers, designers, artists, and product teams.
Check out the listing for the full details.
Amazing Lego recreations of extreme sports
Stunning work from Legosteeze. Make sure to click the arrows on the post to see all the clips (amazingly based on real-world footage!):
[Via Cristobal Garcia]
Adobe Project Primrose dazzles Sofía Vergara
My friend Kevin had the honor of designing the art & animation for this interactive wearable demo. Stick around (or jump) to the end to see the moving images:
@el_hormiguero Marron brings us Project Primrose, developed by @Adobe: the interactive dress that can change its appearance #vestidointeractivo #adobe #elhormiguero #SofíaVergaraEH
Two quotes worth reflecting on as we go into the new year
One, I swear I think of this observation from author Sebastian Junger at least once a day:

We’d do well to reflect on it in how we treat our colleagues, and especially—in this time of disruptive AI—how we treat the sensitive, hardworking creators who’ve traditionally supported toolmakers like Adobe. Our “empowering” tech can all too easily make people feel devalued, thrown away like an old piece of fruit. And when that happens, we’re next.
Two, this observation hits me where I live:

I’ve joked for years about my “Irish Alzheimer’s,” in which one forgets everything but the grudges. It’s funny ’cause it’s true—but taken any real distance (focusing on failures & futility), it becomes corrosive, “like taking poison and hoping the other guy gets sick.”
Earlier today an old friend observed, “I’ve always had a justice hang-up.” So have I, and that’s part of what made us friends for so long.
But as I told him, “It’s such a double-edged sword: my over-inflamed sense of justice is a lot of what causes me to speak up too sharply and then light my way by all the burning bridges.” Finding the balance—between apathetic acquiescence on one end & alienating militancy on the other—can be hard.
So, for 2024 I’m trying to lead with gratitude. It’s the best antidote, I’m finding, to bitterness & bile. Let’s be glad for our fleeting opportunities to do, as Mother Teresa put it, “small things with great love.”
Here’s to courage, empathy, and wisdom for our year ahead.
Adobe Firefly named “Product of the Year”
Nice props from The Futurum Group:
Here is why: Adobe Firefly is the most commercially successful generative AI product ever launched. Since it was introduced in March in beta and made generally available in June, at last count in October, Firefly users have generated more than 3 billion images. Adobe says Firefly has attracted a significant number of new Adobe users, making it hard to imagine that Firefly is not aiding Adobe’s bottom line.
Happy New Year!
Hey gang—here’s to having a great 2024 of making the world more beautiful & fun. Here’s a little 3D creation (with processing courtesy of Luma Labs) made from some New Year’s Eve drone footage I captured at Gaviota State Beach. (If it’s not loading for some reason, you can see a video version in this tweet).
AI Holiday Leftovers, Vol. 3
- Fun with famous IP:
- Vectors: StarVector: Generating Scalable Vector Graphics Code from Images
- How to train a custom Stable Diffusion model to generate consistent characters via LensGo.ai, which lets you train 3 custom models for free every month.
- Insane 128x zoom-in on AI-generated meat.
AI Holiday Leftovers, Vol. 2
- 3D:
- Paint Anything 3D with Lighting-Less Texture Diffusion Models: “Paint3D is a novel coarse-to-fine generative framework that is capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs.”
- “Google just revealed an ABSOLUTE depth estimation model. As opposed to recent depth models (Marigold, PatchFusion) which aim for maximum details, DMD aims to estimate the ABSOLUTE depth (in meters) within the image.”
- Typography:
- Retro-futuristic alphabet rendered with Midjourney V6: “Just swapped out the letter and kept everything else the same. Prompt: Letter “A”, cyberpunk style, metal, retro-futuristic, star wars, intrinsic details, plain black background. Just change the letter only. Not all renders are perfect, some I had to do a few times to get a good match. Try this strategy for any type of cool alphabet!”
- As many others have noted, Midjourney is now good at type. Find more here.
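The letter-swapping recipe above boils down to a prompt template where only the letter varies. A minimal sketch of that idea (the template copies the quoted prompt; the function name is mine, and you’d still paste each prompt into Midjourney by hand):

```python
# The quoted Midjourney prompt, with only the letter left as a slot
TEMPLATE = ('Letter "{letter}", cyberpunk style, metal, retro-futuristic, '
            'star wars, intrinsic details, plain black background')

def alphabet_prompts(letters: str = "ABCDEFGHIJKLMNOPQRSTUVWXYZ") -> list:
    """Generate one prompt per letter, keeping everything else identical."""
    return [TEMPLATE.format(letter=ch) for ch in letters]
```

As the author notes, not every render matches on the first try, so expect to re-run a few letters.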
AI Holiday Leftovers, Vol. 1
Dig in, friends. 🙂
- Drawing/painting:
- Using a simple kids’ drawing tablet to create art: “I used @Vizcom_ai to transform the initial sketch. This tool has gotten soo good by now. I then used @LeonardoAi_’s image to image to enhance the initial image a bit, and then used their new motion feature to make it move. I also used @Magnific_AI to add additional details to a few of the images and Decohere AI’s video feature.”
- Latte art: “Photoshop paint sent to @freepik’s live canvas. The first few seconds of the video are real-time to show you how responsive it is. The music was made with @suno_ai_. Animation with Runway’s Gen-2.”
- Photo editing:
- Google Photos gets a generative upgrade: “Magic Eraser now uses gen AI to fill in detail when users remove unwanted objects from photos. Google Research worked on the MaskGIT generative image transformer for inpainting, and improved segmentation to include shadows and objects attached to people.”
- Clothing/try-on:
- PICTURE: PhotorealistIC virtual Try-on from UnconstRained dEsigns: “We propose a novel virtual try-on from unconstrained designs (ucVTON) task to enable photorealistic synthesis of personalized composite clothing on input human images.”
- AnyDoor is “a diffusion-based image generator with the power to teleport target objects to new scenes at user-specified locations in a harmonious way.”
- SDXL Auto FaceSwap lets you create new images using the face of a source image (example attached).

AI: Tons of recent rad things
- Realtime:
- Oh look, I’m George Clooney! Kinda. You can be, too. FAL AI promises “AI inference faster than you can type.”
- “100ms image generation at 1024×1024. Announcing Segmind-Vega and Segmind-VegaRT, the fastest and smallest, open source models for image generation at the highest resolution.”
- Krea has announced their open beta, “free for everyone.”
- How incredible would it be to have realtime generative brushes like this?
- Drawing to Video, made using Vizcom -> Leonardo -> Pika.
- 3D generation:
- ByteDance has released ImageDream (image to 3D)
- SceneWiz3D offers “A new approach to create high-fidelity 3D scenes from text and 3D object control”
- Image -> depth -> geometry using Marigold + Blender
- 3D for fashion, sculpting, and more:
- This is what Adobe Substance & a notional 3D mode of Firefly Text-to-Image should feel like.
- Outfit Anyone + Animate Anyone = virtual try on + movement.
- Sculpting/rendering via Adobe Substance 3D Modeler + Dreams + Unbound + Krea.
- AnimateDiff v3 was just released.
- Instagram has enabled image generation inside chat (pretty “meh,” in my experience so far), and in stories creation, “It allows you to replace a background of an image into whatever AI generated image you’d like.”
- “Did you know that you can train an AI Art model and get paid every time someone uses it? That’s Generaitiv’s Model Royalties System for you.”
How-to: Combining Photoshop + ComfyUI
It’s a little nerdy even for my blood, but some of my teammates swear by these techniques that enable connecting Photoshop to a hosted instance of Stable Diffusion, enabling one to guide the process via a Photoshop doc and/or custom-trained styles:
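To give a flavor of what such a bridge involves, here’s a minimal sketch of queuing a job against a locally hosted ComfyUI instance over its HTTP API. This is illustrative only, not the actual plugin’s code — the default port and the `photoshop-bridge` client ID are assumptions:

```python
import json
import urllib.request

# Assumed default address of a locally hosted ComfyUI instance
COMFYUI_URL = "http://127.0.0.1:8188"

def build_prompt_payload(workflow: dict, client_id: str = "photoshop-bridge") -> bytes:
    """Wrap a ComfyUI workflow graph in the JSON body its /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(workflow: dict) -> None:
    """POST the workflow (e.g. one fed by a flattened Photoshop doc) for generation."""
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fires the generation job on the server
```

The plugins linked above handle the harder parts — exporting the canvas, tracking job progress, and round-tripping results back into layers.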
“I Draw Better Than AI!”
Hah—I can dig this finger-rich pin from Pictoplasma.

To the moon! Insta360 makes a satellite
What if your tiny planet—a visual genre I’ve enjoyed beating halfway into the ground—were our actual planet? Insta360, on whom I’ve spent crazy amounts of money buying brilliant-if-maddening gear, has now sent their devices to the edge of space:

Catch up on great new Illustrator features in 60 seconds
Let’s talk vector generation, 3D mockup support, image-to-type, and more. Take ‘er away, Deke!
Promising 3D research from Adobe
AI image generation is getting *crazy* fast
Gemini is bonkers
I mean, seriously, what even is all this?? I can’t explain; just please watch.
- 0:00 Intro
- 0:19 Multimodal Dialogue
- 1:32 Multilinguality
- 2:04 Game Creation
- 2:31 Visual Puzzles
- 3:17 Making Connections
- 3:39 Image & Text Generation
- 4:06 Logic & Spatial Reasoning
- 4:55 Translating Visuals
- 5:27 Cultural Understanding
Baby, You Can Drive My Bricks
I’ve had way too much fun creating custom Lego sets based on friends’ & family’s rides, so to help others do it, I’ve made my first custom GPT, “Baby You Can Drive My Bricks.” Take it for a spin & let me know what you create!

Pika Labs “Idea-to-Video” looks stunning
It’s ludicrous to think that these folks formed the company just six months ago, and even more ludicrous to see what the model can already do—from video synthesis, to image animation, to inpainting/outpainting:
Our vision for Pika is to enable everyone to be the director of their own stories and to bring out the creator in each of us. Today, we reached a milestone that brings us closer to our vision. We are thrilled to unveil Pika 1.0, a major product upgrade that includes a new AI model capable of generating and editing videos in diverse styles such as 3D animation, anime, cartoon and cinematic, and a new web experience that makes it easier to use. You can join the waitlist for Pika 1.0 at https://pika.art.
“Emu Edit” enables instructional image editing
This tech—or something much like it—is going to be a very BFD. Imagine simply describing the change you’d like to see in your image—and then seeing it.
[Generative models] still face limitations when it comes to offering precise control. That’s why we’re introducing Emu Edit, a novel approach that aims to streamline various image manipulation tasks and bring enhanced capabilities and precision to image editing.
Emu Edit is capable of free-form editing through instructions, encompassing tasks such as local and global editing, removing and adding a background, color and geometry transformations, detection and segmentation, and more. […]
Emu Edit precisely follows instructions, ensuring that pixels in the input image unrelated to the instructions remain untouched. For instance, when adding the text “Aloha!” to a baseball cap, the cap itself should remain unchanged.
And for some conceptually related (but technically distinct) ideas, see previous: Iterative creation with ChatGPT.
“We’re on a mission from God…”
On the off chance you missed me over the last week or so, it’s due to my being off in Illinois with the fam, having fun making silliness like this:
NBA goes NeRF
Here’s a great look at how the scrappy team behind Luma.ai has helped enable beautiful volumetric captures of Phoenix Suns players soaring through the air:
Go behind the scenes of the innovative collaboration between Profectum Media and the Phoenix Suns to discover how we overcame technological and creative challenges to produce the first 3D bullet-time neural radiance field (NeRF) effect in a major NBA arena video. This involved not just custom-building a 48-GoPro multi-cam volumetric rig but also integrating advanced AI tools from Luma AI to capture athletes in stunning, frozen-in-time 3D visual sequences. This venture is more than just a glimpse behind the scenes – it’s a peek into the evolving world of sports entertainment and the future of spatial capture.
Phat Splats
If you keep hearing about “Gaussian Splatting” & wondering “WTAF,” check out this nice primer from my buddy Bilawal:
There’s also Two-Minute Papers, offering a characteristically charming & accessible overview:
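For a taste of the math under the hood, here’s a toy 1D version of the front-to-back alpha compositing at the heart of splatting. Purely illustrative: real 3D Gaussian Splatting projects millions of anisotropic 3D Gaussians and sorts them per view, but the blending rule is the same idea:

```python
import math

def gaussian_alpha(x, center, sigma, peak_alpha):
    """Opacity contributed by one 1D Gaussian 'splat' at position x."""
    return peak_alpha * math.exp(-0.5 * ((x - center) / sigma) ** 2)

def composite(x, splats):
    """Front-to-back compositing: each splat's color is weighted by its
    opacity times the transmittance left over from the splats in front."""
    color, transmittance = 0.0, 1.0
    for center, sigma, peak_alpha, c in splats:  # assumed sorted near-to-far
        a = gaussian_alpha(x, center, sigma, peak_alpha)
        color += c * a * transmittance
        transmittance *= 1.0 - a
    return color
```

A fully opaque splat in front occludes everything behind it, which is exactly the behavior the sorting step in the real technique exists to get right.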
GenAI demos from Russell Brown
It’s always great to learn from the master—especially when he’s making “spaghetti western” literal!
- The power of selections with Generative Fill
- Create watercolors and other art styles with Generative Fill
- Manage the stacking order of Generative layers

Iterative creation with ChatGPT
I’m really digging the experience of (optionally) taking a photo, feeding it into ChatGPT, and then riffing my way towards an interesting visual outcome. Here’s a gallery in which you can see some of the journeys I’ve undertaken recently.
- Image->description->image quality is often pretty hit-or-miss. Even so, it’s such a compelling possibility that I keep wanting to try it (e.g. seeing a leaf on the ground, wanting to try turning it into a stingray).
- The system attempts to maintain various image properties (e.g. pose, color, style) while varying others (e.g. turning the attached vehicle from a box truck to a tanker while maintaining its general orientation plus specifics like featuring three Holstein cows).
- Overall text creation is vastly improved vs. previous models, though it can still derail. It’s striking that one can iteratively improve a particular line of text (e.g. “Make sure that the second line says ‘TRAIN’“).


GenFill vs. eternal dog-pant mysteries
Hah! This is my kind of ridiculous Adobe social content. 🙂 Happy Friday.
The Young & The Spiderverse
Man, I’m inspired—and TBH a little jealous—seeing 14-year-old creator Preston Mutanga create amazing 3D animations, as he’s apparently been doing for fully half his life. I think you’ll enjoy the short talk he gave covering his passions:
The presentation will take the audience on a journey, a journey across the Spider-Verse where a self-taught, young, talented 14-year-old kid used Blender, to create high-quality LEGO animations of movie trailers. Through the use of social media, this young artist’s passion and skill caught the attention of Hollywood producers, leading to a life-changing invitation to animate in a new Hollywood movie.
Hands up for Res Up ⬆️
Speaking of increasing resolution, check out this sneak peek from Adobe MAX:
It’s a video upscaling tool that uses diffusion-based technology and artificial intelligence to convert low-resolution videos to high-resolution videos for applications. Users can directly upscale low-resolution videos to high resolution. They can also zoom-in and crop videos and upscale them to full resolution with high-fidelity visual details and temporal consistency. This is great for those looking to bring new life into older videos or to prevent blurry videos when playing scaled versions on HD screens.
Adventures in Upsampling
Interesting recent finds:
- Google Zoom Enhance. “Using generative AI, Zoom Enhance intelligently fills in the gaps between pixels and predicts fine details, opening up more possibilities when it comes to framing and flexibility to focus on the most important part of your photo.”
- Nick St. Pierre writes, “I just upscaled an image in MJ by 4x, then used Topaz Photo AI to upscale that by another 6x. The final image is 682MP and 32000×21333 pixels large.”
- Here’s a thread of 10 Midjourney upsampling examples, including a direct comparison against Topaz.
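As a quick sanity check on the numbers in that second item (a trivial sketch, not from the thread itself):

```python
def megapixels(width, height):
    """Pixel count in megapixels for a given image size."""
    return width * height / 1_000_000

# Claimed final size from the thread: 32000 x 21333 pixels
mp = megapixels(32000, 21333)  # about 682.7 MP, matching the quoted "682MP"
```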
Demos: Photoshop Generative AI tips
Demos: Using Generative AI in Illustrator
If you’ve been sleeping on Text to Vector, check out this handful of quick how-to vids that’ll get you up to speed:
- Welcome to Generative AI in Illustrator
- Generate artwork from text with Text to Vector Graphic (Beta)
- Explore creating stunning patterns with Text to Vector Graphics
- Tips for making your best artwork with Text to Vector Graphic (Beta)
- Tips: Take Your Text to Vector Graphic (Beta) patterns to “Wow!”
- Tip: Control your pattern color with Text to Vector Graphic (Beta)
Ai + AI FTW
Check out this quick demo of Illustrator’s new text-to-vector & mockup tools working together:
AI generated Logos onto any surface. pic.twitter.com/qY4tEkVK0Q
— Riley Brown (@rileybrown_ai) October 29, 2023
360º AI: Skybox adds new sketching & style features
Directly sketch inside a 360º canvas, then generate results:
And see also the styles these folks are working to bring online:
Sneak peek: Project Glyph Ease
Easy as ABC, 123?
Project Glyph Ease uses generative AI to create stylized and customized letters in vector format, which can later be used and edited. All a designer needs to do is create three reference letters in a chosen style from existing vector shapes or ones they hand-draw on paper, and this technology automatically creates the remaining letters in a consistent style. Once created, designers have the flexibility to edit the new font, since the letters will appear as live text that can be scaled, rotated, or moved in the project.
DreamCraft 2D->3D tech looks wild
Can you imagine something like this running in Photoshop, making it possible to re-pose objects and then merge them back into one’s scene?
Project Primrose: Animated fabric (!) from Adobe
The week before MAX, my teammate Christine had a bit of a cough, and folks were suddenly concerned about the Project Primrose sneak: it’d be awfully hard to swap out presenters when the demo surface is a bespoke dress made by & for exactly one person. Thankfully good health prevailed, and she was able to showcase Project Primrose:
Here’s a bit more info about the tech:
We propose reflective light-diffuser modules for non-emissive flexible display systems. Our system leverages reflective-backed polymer-dispersed liquid crystal (PDLC), an electroactive material commonly used in smart window applications. This low-power non-emissive material can be cut to any shape, and dynamically diffuses light. We present the design & fabrication of two exemplar artifacts, a canvas and a handbag, that use the reflective light-diffuser modules.
Come work on Firefly!
We’re looking to meet great PMs, engineers, data scientists, and more; come check out open roles!

Reflect on this: Project See Through burns through glare
Marc Levoy (professor emeritus at Stanford) was instrumental in delivering the revolutionary Night Sight mode on Pixel 3 phones—and by extension on all the phones that quickly copied their published techniques. After leaving Google for Adobe, he’s been leading a research team that’s just shown off the reflection-zapping Project See Through:
Today, it’s difficult or impossible to manually remove reflections. Project See Through simplifies the process of cleaning up reflections by using artificial intelligence. Reflections are automatically removed, and optionally saved as separate images for editing purposes. This gives users more control over when and how reflections appear in their photos.
A couple of fun Photoshop-After Effects collabs
Matthew Vandeputte used Generative Fill, Content-Aware Fill, or both to make these rad little animations in After Effects:
[Via Tom Hightower]
New features come to Lightroom Classic
Lens Blur, HDR, Point Color, and more: Katrin Eismann breaks down the update in this overview, and Matt Kloskowski shows the features in action here:
Remembering Microsoft’s “Big-Ass Table”
TBH I still kinda want one. 🙂
“Boing!” New Google animation tech looks fun.
My old teammates Richard Tucker, Noah Snavely, and co. have been busy. Check out how their Generative Image Dynamics work makes it possible to interactively add small, periodic motion to photos:
My recent Firefly demos & previews
I got to spend time Friday live streaming with the Firefly community, showing off some of the new MAX announcements & talking about some of what might be coming down the line. I hope you enjoy it, and I’d welcome any feedback on this session or on what you’d like to see in the future.
Check out Illustrator’s new font-identifying Retype feature
Short & sweet:
Adobe Illustrator has this feature called Retype (beta). With it you can select an image in Illustrator and enter Retype (beta) to determine the fonts that were used (at least close matches) in the JPG! It will also do the same for text that has been outlined. It’s amazing!
One sketchy vector erector: Project Draw & Delight
This tech lets you augment text-based instructions with visual hints, such as rough sketches and paint strokes. Draw & Delight then uses Firefly to generate high-quality vector illustrations or animations in various color palettes, style variations, poses, and backgrounds.
Check out my MAX talk on the potential of Generative AI in education
I got to spend 30 minutes chatting with educator & author Matt Miller last week, riffing on some tough but important questions around weighty, fascinating stuff like what makes us human, what we value around creativity, and how we can all navigate the creative disruptions that surround us.

Hear how Adobe generative AI solutions are designed to continually evolve, develop, and empower educators and students from kindergarten to university level. Generative AI is expected to have a significant impact on the creativity of students. It has the potential to act as a powerful tool that can inspire and enhance the creative process by generating new and unique ideas. Join Matt Miller, author and educator, and John Nack, principal product manager at Adobe, for this exciting discussion.
In this session, you’ll:
- Learn how Adobe approaches generative AI
- Hear experts discuss how AI affects teaching and learning
- Discover how AI can make learning more personalized and accessible
What if 3D were actually approachable?
That’s the promise of Adobe’s Project Neo—which you can sign up to test & use now! Check out the awesome sneak peek they presented at MAX:
Incorporating 3D elements into 2D designs (infographics, posters, logos or even websites) can be difficult to master, and often requires designers to learn new workflows or technical skills.
Project Neo enables designers to create 2D content by using 3D shapes without having to learn traditional 3D creation tools and methods. This technology leverages the best of 3D principles so designers can create 2D shapes with one, two or three-point perspectives easily and quickly. Designers using this technology are also able to collaborate with their stakeholders and make edits to mockups at the vector level so they can quickly make changes to projects.
The wonderful miniatures of Henry Sugar
Wes Anderson & crew are back to making delightful miniature worlds, this time for “The Wonderful Story of Henry Sugar.” Enjoy three charming minutes, won’t you?