Category Archives: 3D

Mystic structure reference: Dracarys!

I love seeing the Magnific team’s continued rapid march in delivering identity-preserving reskinning:

This example makes me wish my boys were, just for a moment, 10 years younger and still up for this kind of father/son play. 🙂

NeRFtastic BAFTAs

The British Academy Film Awards have jumped into a whole new dimension to commemorate the winners of this year’s awards:

The capturing work was led by Harry Nelder and Amity Studio. Nelder used his 16-camera rig to capture the recent winners. The reconstruction software was a combination of a cloud-based platform created by Nelder, which is expected to be released later this year, and Postshot. Nelder further utilized the Radiance Field method known as Gaussian Splatting for the reconstruction. A compilation video of all the captures, recently posted by BAFTA, was edited by Amity Studio.

[Via Dan Goldman]

Perhaps image-to-3D was a mistake…

Behold the majesty (? :-)) of CapCut’s new “Microwave” filter (whose name makes more sense if you listen with sound on):

https://youtube.com/shorts/bshQXczbZdw?si=aFwvtgs-fKf2wl8x

As I asked Bilawal, who posted the compilation, “What is this, and how can I know less about it?”

Quick fun with Krea, Flux, custom training, and 3D

Putting the proverbial chocolate in the peanut butter, those fast-moving kids at Krea have combined custom model training with 3D-guided image generation. Generation is amazingly fast, and the results are some combo of delightful & grotesque (aka “…The JNack Story”). Check it out:

Krea introduces realtime 3D-guided image generation

Part 9,201 of me never getting over the fact we were working on stuff like this 2 years ago at Adobe (modulo the realtime aspect, which is rad) & couldn’t manage to ship it. It’ll be interesting to see whether the Krea guys (and/or others) pair this kind of interactive-quality rendering with a really high-quality pass, as NVIDIA demonstrated last week using Flux.

Creating a 3D scene from text

…featuring a dose of Microsoft Trellis!

More about Trellis:

Powered by advanced AI, TRELLIS enables users to create high-quality, customizable 3D objects effortlessly using simple text or image prompts. This innovation promises to improve 3D design workflows, making it accessible to professionals and beginners alike. Here are some examples:

NVIDIA + Flux = 3D magic

I may never stop being pissed that the Firefly-3D integration we previewed nearly two years ago didn’t yield more fruit, at least on my watch:

The world moves on, and now NVIDIA has teamed up with Black Forest Labs to enable 3D-conditioned image generation. Check out this demo (starting around 1:31:48):

Details:

For users interested in integrating the FLUX NIM microservice into their workflows, we have collaborated with NVIDIA to launch the NVIDIA AI Blueprint for 3D-guided generative AI. This packaged workflow allows users to guide image generation by laying out a scene in 3D applications like Blender, and using that composition with the FLUX NIM microservice to generate images that adhere to the scene. This integration simplifies image generation control and showcases what’s possible with FLUX models.
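To make that flow a bit more concrete, here’s a minimal sketch of what scripting such a service might look like: export a depth/composition pass from Blender, then post it to a locally running image-generation microservice along with a prompt. The endpoint URL, JSON field names, and response shape below are illustrative assumptions, not the documented FLUX NIM API, so treat this as a rough outline rather than working integration code.

```python
# Hypothetical sketch only: the URL, payload fields, and response shape are
# assumptions for illustration, NOT the documented FLUX NIM API.
import base64
import requests

# Depth/composition pass exported from Blender (placeholder file name).
with open("blender_depth_pass.png", "rb") as f:
    depth_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "a cozy cabin in a snowy forest at dusk, cinematic lighting",
    "control_image": depth_b64,  # assumed field name for the conditioning image
    "control_type": "depth",     # assumed field telling the service how to use it
    "steps": 30,
}

resp = requests.post("http://localhost:8000/v1/generate", json=payload, timeout=300)
resp.raise_for_status()

# Assumed response shape: a base64-encoded image in the JSON body.
with open("generated.png", "wb") as out:
    out.write(base64.b64decode(resp.json()["image"]))
```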

The cool generative 3D hits keep coming

Just a taste of the torrent that blows past daily on The Former Bird App:

  • Rodin 3D: “Rodin 3D AI can create stunning, high-quality 3D models from just text or image inputs.”
  • Trellis 3D: “Iterative prompting/mesh editing. You can now prompt ‘remove X, add Y, Move Z, etc.’… Allows decoding to different output formats: Radiance Fields, 3D Gaussians, and meshes.”
  • Blender GPT: “Generating 3D assets has never been easier. Here’s me putting together an entire 3D scene in just over a minute.”

A love letter to splats

Paul Trillo relentlessly redefines what’s possible in VFX—in this case scanning his back yard to tour a magical tiny world:

Here he gives a peek behind the scenes: 

And here’s the After Effects plugin he used:

Throwback: “Behind the scenes with Olympians & Google’s AR ‘Scan Van'”

I’m old enough to remember 2020, when we sincerely (?) thought that everyone would be excited to put 3D-scanned virtual Olympians onto their coffee tables… or something. (Hey, it was fun while it lasted! And it temporarily kept a bunch of graphics nerds from having to slink back to the sweatshop grind of video game development.)

Anyway, here’s a look back at what Google was doing around augmented reality and the 2020 (’21) Olympics:


I swear I spent half of last summer staring at tiny 3D Naomi Osaka volleying shots on my desktop. I remain jealous of my former teammates who got to work with these athletes (and before them, folks like Donald Glover as Childish Gambino), even though doing so meant dealing with a million Covid safety protocols. Here’s a quick look at how they captured folks flexing & flying through space:

[Instagram post shared by Google (@google)]

You can play with the content just by searching:

[Via Chikezie Ejiasi]

Neural rendering: Neo + Firefly

Back when we launched Firefly (alllll the way back in March 2023), we hinted at the potential of combining 3D geometry with diffusion-based rendering, and I tweeted out a very early sneak peek:

A year+ later, I’m no longer working to integrate the Babylon 3D engine into Adobe tools—instead I’m working directly with the Babylon team at Microsoft (!). Meanwhile, I like seeing how my old teammates are continuing to explore integrations between 3D (in this case, Project Neo) and Firefly. Here’s one quick flow:

Here’s a quick exploration from the always-interesting Martin Nebelong:

And here’s a fun little Neo->Firefly->AI video interpolation test from Kris Kashtanova:

tyFlow: Stable Diffusion-based rendering in 3ds Max

Being able to declare what you want, instead of having to painstakingly set up parameters for materials, lighting, etc., may prove to be an incredible unlock for visual expressivity, particularly around the generally intimidating realm of 3D. Check out what tyFlow is bringing to the table:

You can see a bit more about how it works in this vid…

…or a lot more in this one:

Google’s CAT3D makes eye-popping worlds

I still can’t believe I was allowed in the building with these giant throbbing brains. 🙂


This kind of evolution should make a lot of people rethink what it means to be an image editor going forward—or even an image.

Drawing-based magic with Firefly & Magnific

Man, who knew that posting the tweet below would get me absolutely dragged by AI haters (“Worst. Dad. Ever.”) who briefly turned me into the Bean Dad of AI art? I should say more about that eye-opening experience, but for now, enjoy (unlike apparently thousands of others!) this innocuous mixing of AI & kid art:


Elsewhere, here’s a cool thread showing how even simple sketches can be interpreted in the style of 3D renderings via Magnific:

Tiny Glade: “Wholesome” 3D sculpting—and more?

This app looks like a delightful little creation tool that’s just meant for doodling, but I’d love to see this kind of physical creation paired with the world of generative AI rendering. I’m reminded of how “Little Big Planet” years ago made me yearn for Photoshop tools that felt like Sackboy’s particle-emitting jetpack. Someday, maybe…?

Fun little AI->3D->AR experiments with Vision Pro

I love watching people connect the emerging creative dots, right in front of our eyes:

Shhh, No One Cares

Heh—this fun little animation makes me think back to how I considered changing my three-word Google bio from “Teaching Google Photoshop” (i.e. getting robots to see & create like humans, making beautiful things based on your life & interests) to “Wow! Nobody Cares.” :-p Here’s to less of that in 2024.

The first great Vision Pro demo I’ve seen

F1 racing lover John LePore (whose VFX work you’ve seen in Iron Man 2 and myriad other productions over the years) has created the first demo for Apple Vision Pro that makes me say, “Okay, dang, that looks truly useful & compelling.” Check out his quick demo & behind-the-scenes narration:

Happy New Year!

Hey gang—here’s to having a great 2024 of making the world more beautiful & fun. Here’s a little 3D creation (with processing courtesy of Luma Labs) made from some New Year’s Eve drone footage I captured at Gaviota State Beach. (If it’s not loading for some reason, you can see a video version in this tweet).

AI: Tons of recent rad things

  • Realtime:
  • 3D generation:
  • 3D for fashion, sculpting, and more:
  • AnimateDiff v3 was just released.
  • Instagram has enabled image generation inside chat (pretty “meh,” in my experience so far), and in stories creation, “It allows you to replace a background of an image into whatever AI generated image you’d like.”
  • “Did you know that you can train an AI Art model and get paid every time someone uses it? That’s Generaitiv’s Model Royalties System for you.”

Promising 3D research from Adobe

Check out LooseControl…

https://twitter.com/alexcarliera/status/1733154617998074183

…and Diffusion Handles:


NBA goes NeRF

Here’s a great look at how the scrappy team behind Luma.ai has helped enable beautiful volumetric captures of Phoenix Suns players soaring through the air:

Go behind the scenes of the innovative collaboration between Profectum Media and the Phoenix Suns to discover how we overcame technological and creative challenges to produce the first 3D bullet-time neural radiance field (NeRF) effect in an NBA arena video. This involved not just custom-building a 48-GoPro multi-cam volumetric rig, but also integrating advanced AI tools from Luma AI to capture athletes in stunning, frozen-in-time 3D visual sequences. This venture is more than just a glimpse behind the scenes – it’s a peek into the evolving world of sports entertainment and the future of spatial capture.

Phat Splats

If you keep hearing about “Gaussian Splatting” & wondering “WTAF,” check out this nice primer from my buddy Bilawal:

There’s also Two-Minute Papers, offering a characteristically charming & accessible overview:

The Young & The Spiderverse

Man, I’m inspired—and TBH a little jealous—seeing 14yo creator Preston Mutanga create amazing 3D animations, as he’s apparently been doing for fully half his life. I think you’ll enjoy the short talk he gave covering his passions:

The presentation will take the audience on a journey, a journey across the Spider-Verse, where a self-taught, young, talented 14-year-old kid used Blender to create high-quality LEGO animations of movie trailers. Through the use of social media, this young artist’s passion and skill caught the attention of Hollywood producers, leading to a life-changing invitation to animate in a new Hollywood movie.

What if 3D were actually approachable?

That’s the promise of Adobe’s Project Neo—which you can sign up to test & use now! Check out the awesome sneak peek they presented at MAX:

Incorporating 3D elements into 2D designs (infographics, posters, logos or even websites) can be difficult to master, and often requires designers to learn new workflows or technical skills.

Project Neo enables designers to create 2D content by using 3D shapes without having to learn traditional 3D creation tools and methods. This technology leverages the best of 3D principles so designers can create 2D shapes with one-, two-, or three-point perspectives easily and quickly. Designers using this technology are also able to collaborate with their stakeholders and make edits to mockups at the vector level so they can quickly make changes to projects.

Adobe Project Poseable: 3D humans guiding image generation

Roughly 1,000 years ago (i.e. this past April!), I gave an early sneak peek at the 3D-to-image work we’ve been doing around Firefly. Now at MAX, my teammate Yi Zhou has demonstrated some additional ways we could put the core tech to work—by adding posable humans to the scene.

Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.

Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.

Check it out:

Luma adds NeRF-powered fly-throughs

“Get cinematic and professional-looking drone Flythroughs in minutes from shaky amateur recorded videos.” The results are slick:

Tangentially, here’s another impressive application of Luma tech—turning drone footage into a dramatically manipulable 3D scene:

https://youtube.com/shorts/6eOLsKr224c?si=u1mWHM1qlNfbPuMf

“The AI-Powered Tools Supercharging Your Imagination”

I’m so pleased & even proud (having at least offered my encouragement to him over the years) to see my buddy Bilawal spreading his wings and sharing the good word about AI-powered creativity.

Check out his quick thoughts on “Channel-surfing realities layered on top of the real world,” “3D screenshots for the real world,” and more:

Favorite quote 😉:

Skybox scribble: Create 360º immersive views just by drawing

Pretty slick stuff! This very short vid is well worth watching:

With Sketch mode, we’re introducing a new palette of tools and guides that let you start taking control of your skybox generations. Want a castle in the distance? Sketch it out, specify a castle in your prompt and hit generate to watch as your scribbles influence your skybox. If you don’t get what you want the first time, your sketch sticks around to try a new style or prompt from – or switch to Remix mode to give that castle a new look!

Sneak peek: Adobe Firefly 3D

I had a ball presenting Firefly during this past week’s Adobe Live session. I showed off the new Recolor Vectors feature, and my teammate Samantha showed how to put it to practical use (along with image generation) as part of a moodboarding exercise. I think you’d dig the whole session, if you’ve got time.

The highlight for me was the chance to give an early preview of the 3D-to-image creation module we have in development:

My demo/narrative starts around the 58:10 mark:

3D + AI: Stable Diffusion comes to Blender

I’m really excited to see what kinds of images, not to mention videos & textured 3D assets, people will now be able to generate via emerging techniques (depth2img, ControlNet, etc.):
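If you want a feel for what’s happening under the hood, here’s a rough sketch of depth-conditioned generation using the open-source diffusers library: render a depth pass out of Blender, then let a depth ControlNet steer Stable Diffusion so the generated image follows the 3D composition. File names and the prompt are placeholders I made up; the model IDs are the publicly available SD 1.5 depth ControlNet weights.

```python
# Sketch: depth-conditioned Stable Diffusion via ControlNet (diffusers library).
# "blender_depth_pass.png" is a placeholder for a depth render exported from Blender.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

depth = Image.open("blender_depth_pass.png").convert("RGB")  # depth map guides the layout
result = pipe(
    "weathered bronze statue in an overgrown courtyard, soft morning light",
    image=depth,
    num_inference_steps=30,
).images[0]
result.save("depth_guided_render.png")
```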

Adobe Substance 3D wins an Academy Award!

Well deserved recognition for this amazing team & tech:

To Sébastien Deguy and Christophe Soum for the concept and original implementation of Substance Engine, and to Sylvain Paris and Nicolas Wirrmann for the design and engineering of Substance Designer.

Adobe Substance 3D Designer provides artists with a flexible and efficient procedural workflow for designing complex textures. Its sophisticated and art-directable pattern generators, intuitive design, and renderer-agnostic architecture have led to widespread adoption in motion picture visual effects and animation.