Category Archives: 3D

A love letter to splats

Paul Trillo relentlessly redefines what’s possible in VFX—in this case scanning his back yard to tour a magical tiny world:

Here he gives a peek behind the scenes: 

And here’s the After Effects plugin he used:

Throwback: “Behind the scenes with Olympians & Google’s AR ‘Scan Van'”

I’m old enough to remember 2020, when we sincerely (?) thought that everyone would be excited to put 3D-scanned virtual Olympians onto their coffee tables… or something. (Hey, it was fun while it lasted! And it temporarily kept a bunch of graphics nerds from having to slink back to the sweatshop grind of video game development.)

Anyway, here’s a look back to what Google was doing around augmented reality and the 2020 (’21) Olympics:


I swear I spent half of last summer staring at tiny 3D Naomi Osaka volleying shots on my desktop. I remain jealous of my former teammates who got to work with these athletes (and before them, folks like Donald Glover as Childish Gambino), even though doing so meant dealing with a million Covid safety protocols. Here’s a quick look at how they captured folks flexing & flying through space:

A post shared by Google (@google)

You can play with the content just by searching:

[Via Chikezie Ejiasi]

Neural rendering: Neo + Firefly

Back when we launched Firefly (alllll the way back in March 2023), we hinted at the potential of combining 3D geometry with diffusion-based rendering, and I tweeted out a very early sneak peek:

A year+ later, I’m no longer working to integrate the Babylon 3D engine into Adobe tools—instead I’m working directly with the Babylon team at Microsoft (!). Meanwhile, I like seeing how my old teammates are continuing to explore integrations between 3D (in this case, Project Neo) and Firefly. Here’s one quick flow:

Here’s a quick exploration from the always-interesting Martin Nebelong:

And here’s a fun little Neo->Firefly->AI video interpolation test from Kris Kashtanova:

tyFlow: Stable Diffusion-based rendering in 3ds Max

Being able to declare what you want, instead of having to painstakingly set up parameters for materials, lighting, etc., may prove to be an incredible unlock for visual expressivity, particularly in the generally intimidating realm of 3D. Check out what tyFlow is bringing to the table:

You can see a bit more about how it works in this vid…

…or a lot more in this one:

Google’s CAT3D makes eye-popping worlds

I still can’t believe I was allowed in the building with these giant throbbing brains. 🙂


This kind of evolution should make a lot of people rethink what it means to be an image editor going forward—or even an image.

Drawing-based magic with Firefly & Magnific

Man, who knew that posting the tweet below would get me absolutely dragged by AI haters (“Worst. Dad. Ever.”) who briefly turned me into the Bean Dad of AI art? I should say more about that eye-opening experience, but for now, enjoy (unlike apparently thousands of others!) this innocuous mixing of AI & kid art:


Elsewhere, here’s a cool thread showing how even simple sketches can be interpreted in the style of 3D renderings via Magnific:

Tiny Glade: “Wholesome” 3D sculpting—and more?

This app looks like a delightful little creation tool that’s just meant for doodling, but I’d love to see this kind of physical creation paired with the world of generative AI rendering. I’m reminded of how “Little Big Planet” years ago made me yearn for Photoshop tools that felt like Sackboy’s particle-emitting jetpack. Someday, maybe…?

Fun little AI->3D->AR experiments with Vision Pro

I love watching people connect the emerging creative dots, right in front of our eyes:

Shhh, No One Cares

Heh—this fun little animation makes me think back to how I considered changing my three-word Google bio from “Teaching Google Photoshop” (i.e. getting robots to see & create like humans, making beautiful things based on your life & interests) to “Wow! Nobody Cares.” :-p Here’s to less of that in 2024.

The first great Vision Pro demo I’ve seen

F1 racing lover John LePore (whose VFX work you’ve seen in Iron Man 2 and myriad other productions over the years) has created the first demo for Apple Vision Pro that makes me say, “Okay, dang, that looks truly useful & compelling.” Check out his quick demo & behind-the-scenes narration:

Happy New Year!

Hey gang—here’s to having a great 2024 of making the world more beautiful & fun. Here’s a little 3D creation (with processing courtesy of Luma Labs) made from some New Year’s Eve drone footage I captured at Gaviota State Beach. (If it’s not loading for some reason, you can see a video version in this tweet).

AI: Tons of recent rad things

  • Realtime:
  • 3D generation:
  • 3D for fashion, sculpting, and more:
  • AnimateDiff v3 was just released.
  • Instagram has enabled image generation inside chat (pretty “meh,” in my experience so far), and in stories creation, “It allows you to replace a background of an image into whatever AI generated image you’d like.”
  • Did you know that you can train an AI Art model and get paid every time someone uses it? That’s Generaitiv’s Model Royalties System for you.

Promising 3D research from Adobe

Check out LooseControl…

https://twitter.com/alexcarliera/status/1733154617998074183

…and Diffusion Handles:


NBA goes NeRF

Here’s a great look at how the scrappy team behind Luma.ai has helped enable beautiful volumetric captures of Phoenix Suns players soaring through the air:

Go behind the scenes of the innovative collaboration between Profectum Media and the Phoenix Suns to discover how we overcame technological and creative challenges to produce the first 3D bullet-time neural radiance field (NeRF) effect in a major sports NBA arena video. This involved not just custom-building a 48-GoPro multi-cam volumetric rig but also integrating advanced AI tools from Luma AI to capture athletes in stunning, frozen-in-time 3D visual sequences. This venture is more than just a glimpse behind the scenes – it’s a peek into the evolving world of sports entertainment and the future of spatial capture.

Phat Splats

If you keep hearing about “Gaussian Splatting” & wondering “WTAF,” check out this nice primer from my buddy Bilawal:

There’s also Two-Minute Papers, offering a characteristically charming & accessible overview:

The Young & The Spiderverse

Man, I’m inspired—and TBH a little jealous—watching 14-year-old creator Preston Mutanga make amazing 3D animations, as he’s apparently been doing for fully half his life. I think you’ll enjoy the short talk he gave covering his passions:

The presentation will take the audience on a journey, a journey across the Spider-Verse where a self-taught, young, talented 14-year-old kid used Blender to create high-quality LEGO animations of movie trailers. Through the use of social media, this young artist’s passion and skill caught the attention of Hollywood producers, leading to a life-changing invitation to animate in a new Hollywood movie.

What if 3D were actually approachable?

That’s the promise of Adobe’s Project Neo—which you can sign up to test & use now! Check out the awesome sneak peek they presented at MAX:

Incorporating 3D elements into 2D designs (infographics, posters, logos or even websites) can be difficult to master, and often requires designers to learn new workflows or technical skills.

Project Neo enables designers to create 2D content by using 3D shapes without having to learn traditional 3D creation tools and methods. This technology leverages the best of 3D principles so designers can create 2D shapes with one, two or three-point perspectives easily and quickly. Designers using this technology are also able to collaborate with their stakeholders and make edits to mockups at the vector level so they can quickly make changes to projects.

Adobe Project Poseable: 3D humans guiding image generation

Roughly 1,000 years ago (i.e. this past April!),  I gave an early sneak peek at the 3D-to-image work we’ve been doing around Firefly. Now at MAX, my teammate Yi Zhou has demonstrated some additional ways we could put the core tech to work—by adding posable humans to the scene.

Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.

Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.

Check it out:

Luma adds NeRF-powered fly-throughs

“Get cinematic and professional-looking drone Flythroughs in minutes from shaky amateur recorded videos.” The results are slick:

Tangentially, here’s another impressive application of Luma tech—turning drone footage into a dramatically manipulable 3D scene:

https://youtube.com/shorts/6eOLsKr224c?si=u1mWHM1qlNfbPuMf

“The AI-Powered Tools Supercharging Your Imagination”

I’m so pleased & even proud (having at least offered my encouragement to him over the years) to see my buddy Bilawal spreading his wings and spreading the good word about AI-powered creativity.

Check out his quick thoughts on “Channel-surfing realities layered on top of the real world,” “3D screenshots for the real world,” and more:

Favorite quote 😉:

Skybox scribble: Create 360º immersive views just by drawing

Pretty slick stuff! This very short vid is well worth watching:

With Sketch mode, we’re introducing a new palette of tools and guides that let you start taking control of your skybox generations. Want a castle in the distance? Sketch it out, specify a castle in your prompt and hit generate to watch as your scribbles influence your skybox. If you don’t get what you want the first time, your sketch sticks around to try a new style or prompt from – or switch to Remix mode to give that castle a new look!

Sneak peek: Adobe Firefly 3D

I had a ball presenting Firefly during this past week’s Adobe Live session. I showed off the new Recolor Vectors feature, and my teammate Samantha showed how to put it to practical use (along with image generation) as part of a moodboarding exercise. I think you’d dig the whole session, if you’ve got time.

The highlight for me was the chance to give an early preview of the 3D-to-image creation module we have in development:

My demo/narrative starts around the 58:10 mark:

3D + AI: Stable Diffusion comes to Blender

I’m really excited to see what kinds of images, not to mention videos & textured 3D assets, people will now be able to generate via emerging techniques (depth2img, ControlNet, etc.):
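
If you want a feel for how depth-based conditioning works outside of any particular Blender add-on, here’s a minimal, hypothetical sketch using the Hugging Face diffusers library. The model IDs, the depth-pass filename, and the prompt are illustrative placeholders, not details from the integration shown above:

```python
# Hypothetical sketch: depth-conditioned generation with diffusers.
# Model IDs and file paths are illustrative placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A ControlNet trained on depth maps steers composition and geometry,
# while the text prompt controls materials, lighting, and style.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# e.g. a depth pass rendered out of Blender's compositor (placeholder path)
depth_map = load_image("blender_depth_pass.png")

image = pipe(
    "weathered bronze statue in a misty forest, cinematic lighting",
    image=depth_map,
    num_inference_steps=30,
).images[0]
image.save("styled_render.png")
```

The appeal is that the 3D scene keeps doing what 3D is good at (camera, composition, geometry) while the diffusion model handles the look.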

Adobe Substance 3D wins an Academy Award!

Well deserved recognition for this amazing team & tech:

To Sébastien Deguy and Christophe Soum for the concept and original implementation of Substance Engine, and to Sylvain Paris and Nicolas Wirrmann for the design and engineering of Substance Designer.

Adobe Substance 3D Designer provides artists with a flexible and efficient procedural workflow for designing complex textures. Its sophisticated and art-directable pattern generators, intuitive design, and renderer-agnostic architecture have led to widespread adoption in motion picture visual effects and animation.

3D capture comes to Adobe Substance 3D Sampler 4.0

Photogrammetrize all the things!!

Unleash the power of photogrammetry in Adobe Substance 3D Sampler 4.0 with the new 3D Capture tool! Create accurate and detailed 3D models of real-world objects with ease. Simply drag and drop a series of photos into Sampler and let it automatically extract the subject from its background and generate a 3D textured model. It’s a fast and handy way to create 3D assets for your next project.

Here’s the workflow in more detail:

And here’s info on capture tools:

“The impossibilities are endless”: Yet more NeRF magic

Last month Paul Trillo shared some wild visualizations he made by walking around Michelangelo’s David, then synthesizing 3D NeRF data. Now he’s upped the ante with captures from the Louvre:

Over in Japan, Tommy Oshima used the tech to fly around, through, and somehow under a playground, recording footage via a DJI Osmo + iPhone:

https://twitter.com/jnack/status/1616981915902554112?s=20&t=5LOmsIoifLw8oNVMV2fYIw

As I mentioned last week, Luma Labs has enabled interactive model embedding, and now they’re making the viewer crazy-fast:


The world’s first (?) NeRF-powered commercial

Karen X. Cheng, back with another 3D/AI banger:

As luck (?) would have it, the commercial dropped on the third anniversary of my former teammate Jon Barron & collaborators bringing NeRFs into existence:

CGI: Primordial soup for you!

Check out these gloriously detailed renderings from Markos Kay. I just wish the pacing were a little more chill so I could stare longer at each composition!

Colossal notes,

Kay has focused on the intersection of art and science in his practice, utilizing digital tools to visualize biological or primordial phenomena. “aBiogenesis” focuses a microscopic lens on imagined protocells, vesicles, and primordial foam that twists and oscillates in various forms.

The artist has prints available for sale in his shop, and you can find more work on his website and Behance.

AI: From dollhouse to photograph

Check out Karen X. Cheng’s clever use of simple wooden props + depth-to-image synthesis to create 3D renderings:

She writes,

1. Take reference photo (you can use any photo – e.g. your real house, it doesn’t have to be dollhouse furniture)
2. Set up Stable Diffusion Depth-to-Image (google “Install Stable Diffusion Depth to Image YouTube”)
3. Upload your photo and then type in your prompts to remix the image

We recommend starting with simple prompts, and then progressively adding extra adjectives to get the desired look and feel. Using this method, @justinlv generated hundreds of options, and then we went through and cherrypicked our favorites for this video
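
For anyone who’d rather script step 2 than click through a web UI, here’s a rough sketch of the same depth-to-image idea using the Hugging Face diffusers library. It’s an illustrative approximation rather than Karen’s exact setup, and the filenames and prompts are placeholders:

```python
# Rough approximation of the depth-to-image workflow described above.
# Model ID, filenames, and prompts are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

# Step 1: the reference photo (dollhouse props, your real living room, etc.)
init_image = Image.open("dollhouse_reference.jpg").convert("RGB")

# Steps 2-3: the pipeline estimates a depth map from the reference and
# preserves that geometry while the prompt restyles the rest of the image.
result = pipe(
    prompt="cozy modern living room, warm window light, photorealistic",
    negative_prompt="blurry, distorted, low quality",
    image=init_image,
    strength=0.7,  # lower values stay closer to the reference photo
).images[0]
result.save("restyled_room.png")
```

As the quote above suggests, the real trick is generating lots of variations and cherry-picking, not nailing it in one shot.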

More NeRF magic: From Michelangelo to NYC

This stuff—creating 3D neural models from simple video captures—continues to blow my mind. First up is Paul Trillo visiting the David:

Then here’s AJ from the NYT doing a neat day-to-night transition:

And lastly, Hugues Bruyère used a 360º camera to capture this scene, then animate it in post (see thread for interesting details):

https://twitter.com/smallfly/status/1604609303255605251?s=20&t=jdSW1NC_n54YTxsnkkFPJQ

A cool, quick demo of Midjourney->3D

Numerous apps are promising pure text-to-geometry synthesis, as Luma AI shows here:

On a more immediately applicable front, though, artists are finding ways to create 3D (or at least “two-and-a-half-D”) imagery right from the output of apps like Midjourney. Here’s a quick demo using Blender:
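
If you’re curious what that kind of 2.5D effect looks like in script form, here’s a minimal, hypothetical Blender Python (bpy) sketch. It assumes you’ve already exported a depth map for your Midjourney image (e.g. via a monocular depth estimator) and simply pushes a subdivided plane around with it; it’s a sketch of the general idea, not the exact steps from the demo above:

```python
# Hypothetical Blender (bpy) sketch: turn an image + depth map into a
# "2.5D" relief by displacing a subdivided plane. Paths are placeholders.
import bpy

# Create the plane that will hold the image
bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.active_object

# Subdivide it (simple mode) so the displacement has vertices to move
subsurf = plane.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.subdivision_type = 'SIMPLE'
subsurf.levels = 6
subsurf.render_levels = 6

# Load the depth map as a texture
depth_image = bpy.data.images.load("/path/to/depth_map.png")
depth_texture = bpy.data.textures.new("DepthTex", type='IMAGE')
depth_texture.image = depth_image

# Displace the plane along its normal using the depth map
displace = plane.modifiers.new(name="Displace", type='DISPLACE')
displace.texture = depth_texture
displace.texture_coords = 'UV'
displace.strength = 0.3  # tweak to taste; higher = more parallax
```

From there you’d map the Midjourney image itself onto the plane as a material and drift the camera slightly to sell the parallax.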

In a semi-related vein, I used CapCut to animate a tongue-in-cheek self portrait from my friend Bilawal:

https://twitter.com/jnack/status/1599476677918478337?s=20&t=vu_Q7Wme3Q3Ueqp1WaGUpA

[Via Shi Yan]