Category Archives: 3D

Google’s CAT3D makes eye-popping worlds

I still can’t believe I was allowed in the building with these giant throbbing brains. 🙂


This kind of evolution should make a lot of people rethink what it means to be an image editor going forward—or even an image.

Drawing-based magic with Firefly & Magnific

Man, who knew that posting the tweet below would get me absolutely dragged by AI haters (“Worst. Dad. Ever.”) who briefly turned me into the Bean Dad of AI art? I should say more about that eye-opening experience, but for now, enjoy (unlike apparently thousands of others!) this innocuous mixing of AI & kid art:


Elsewhere, here’s a cool thread showing how even simple sketches can be interpreted in the style of 3D renderings via Magnific:

Tiny Glade: “Wholesome” 3D sculpting—and more?

This app looks like a delightful little creation tool that’s just meant for doodling, but I’d love to see this kind of physical creation paired with the world of generative AI rendering. I’m reminded of how “Little Big Planet” years ago made me yearn for Photoshop tools that felt like Sackboy’s particle-emitting jetpack. Someday, maybe…?

Fun little AI->3D->AR experiments with Vision Pro

I love watching people connect the emerging creative dots, right in front of our eyes:

Shhh, No One Cares

Heh—this fun little animation makes me think back to how I considered changing my three-word Google bio from “Teaching Google Photoshop” (i.e. getting robots to see & create like humans, making beautiful things based on your life & interests) to “Wow! Nobody Cares.” :-p Here’s to less of that in 2024.

The first great Vision Pro demo I’ve seen

F1 racing lover John LePore (whose VFX work you’ve seen in Iron Man 2 and myriad other productions over the years) has created the first demo for Apple Vision Pro that makes me say, “Okay, dang, that looks truly useful & compelling.” Check out his quick demo & behind-the-scenes narration:

Happy New Year!

Hey gang—here’s to having a great 2024 of making the world more beautiful & fun. Here’s a little 3D creation (with processing courtesy of Luma Labs) made from some New Year’s Eve drone footage I captured at Gaviota State Beach. (If it’s not loading for some reason, you can see a video version in this tweet).

AI: Tons of recent rad things

  • Realtime:
  • 3D generation:
  • 3D for fashion, sculpting, and more:
  • AnimateDiff v3 was just released.
  • Instagram has enabled image generation inside chat (pretty “meh,” in my experience so far), and in Stories creation: “It allows you to replace a background of an image into whatever AI generated image you’d like.”
  • Did you know that you can train an AI Art model and get paid every time someone uses it? That’s Generaitiv’s Model Royalties System for you.

Promising 3D research from Adobe

Check out LooseControl…

https://twitter.com/alexcarliera/status/1733154617998074183

…and Diffusion Handles:


NBA goes NeRF

Here’s a great look at how the scrappy team behind Luma.ai has helped enable beautiful volumetric captures of Phoenix Suns players soaring through the air:

Go behind the scenes of the innovative collaboration between Profectum Media and the Phoenix Suns to discover how we overcame technological and creative challenges to produce the first 3D “bullet time” neural radiance field (NeRF) effect in a major NBA arena video. This involved not just custom-building a 48-GoPro multi-cam volumetric rig but also integrating advanced AI tools from Luma AI to capture athletes in stunning, frozen-in-time 3D visual sequences. This venture is more than just a glimpse behind the scenes – it’s a peek into the evolving world of sports entertainment and the future of spatial capture.

Phat Splats

If you keep hearing about “Gaussian Splatting” & wondering “WTAF,” check out this nice primer from my buddy Bilawal:

There’s also Two-Minute Papers, offering a characteristically charming & accessible overview:
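If you just want the nutshell version: a captured scene becomes millions of tiny translucent colored blobs, and each pixel is simply those blobs sorted by depth & blended front to back. Here’s a toy sketch of that compositing rule (the splat values are made up); real renderers project anisotropic 3D Gaussians to screen space and run this per pixel on the GPU:

```python
# Toy sketch of the front-to-back alpha compositing at the heart of
# Gaussian splatting. Splat values here are invented for illustration.
import numpy as np

# (depth, RGB color, opacity) for a few hypothetical splats covering one pixel
splats = [
    (2.0, np.array([1.0, 0.2, 0.2]), 0.6),
    (1.0, np.array([0.2, 0.2, 1.0]), 0.4),  # nearest splat gets composited first
    (3.0, np.array([0.2, 1.0, 0.2]), 0.8),
]

color = np.zeros(3)
transmittance = 1.0  # how much light still makes it through to this point
for depth, rgb, alpha in sorted(splats, key=lambda s: s[0]):
    color += transmittance * alpha * rgb
    transmittance *= 1.0 - alpha

print(color)  # the final blended pixel color
```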

The Young & The Spiderverse

Man, I’m inspired—and TBH a little jealous—watching 14-year-old creator Preston Mutanga create amazing 3D animations, something he’s apparently been doing for fully half his life. I think you’ll enjoy the short talk he gave covering his passions:

The presentation will take the audience on a journey, a journey across the Spider-Verse where a self-taught, young, talented 14-year-old kid used Blender to create high-quality LEGO animations of movie trailers. Through the use of social media, this young artist’s passion and skill caught the attention of Hollywood producers, leading to a life-changing invitation to animate in a new Hollywood movie.

What if 3D were actually approachable?

That’s the promise of Adobe’s Project Neo—which you can sign up to test & use now! Check out the awesome sneak peek they presented at MAX:

Incorporating 3D elements into 2D designs (infographics, posters, logos or even websites) can be difficult to master, and often requires designers to learn new workflows or technical skills.

Project Neo enables designers to create 2D content by using 3D shapes without having to learn traditional 3D creation tools and methods. This technology leverages the best of 3D principles so designers can create 2D shapes with one, two or three-point perspectives easily and quickly. Designers using this technology are also able to collaborate with their stakeholders and make edits to mockups at the vector level so they can quickly make changes to projects.

Adobe Project Poseable: 3D humans guiding image generation

Roughly 1,000 years ago (i.e. this past April!), I gave an early sneak peek at the 3D-to-image work we’ve been doing around Firefly. Now at MAX, my teammate Yi Zhou has demonstrated some additional ways we could put the core tech to work—by adding posable humans to the scene.

Project Poseable makes it easy for anyone to quickly design 3D prototypes and storyboards in minutes with generative AI.

Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.

Check it out:

Luma adds NeRF-powered fly-throughs

“Get cinematic and professional-looking drone Flythroughs in minutes from shaky amateur recorded videos.” The results are slick:

Tangentially, here’s another impressive application of Luma tech—turning drone footage into a dramatically manipulable 3D scene:

https://youtube.com/shorts/6eOLsKr224c?si=u1mWHM1qlNfbPuMf

“The AI-Powered Tools Supercharging Your Imagination”

I’m so pleased & even proud (having at least offered my encouragement to him over the years) to see my buddy Bilawal spreading his wings and sharing the good word about AI-powered creativity.

Check out his quick thoughts on “Channel-surfing realities layered on top of the real world,” “3D screenshots for the real world,” and more:

Favorite quote 😉:

Skybox scribble: Create 360º immersive views just by drawing

Pretty slick stuff! This very short vid is well worth watching:

With Sketch mode, we’re introducing a new palette of tools and guides that let you start taking control of your skybox generations. Want a castle in the distance? Sketch it out, specify a castle in your prompt and hit generate to watch as your scribbles influence your skybox. If you don’t get what you want the first time, your sketch sticks around to try a new style or prompt from – or switch to Remix mode to give that castle a new look!

Sneak peek: Adobe Firefly 3D

I had a ball presenting Firefly during this past week’s Adobe Live session. I showed off the new Recolor Vectors feature, and my teammate Samantha showed how to put it to practical use (along with image generation) as part of a moodboarding exercise. I think you’d dig the whole session, if you’ve got time.

The highlight for me was the chance to give an early preview of the 3D-to-image creation module we have in development:

My demo/narrative starts around the 58:10 mark:

3D + AI: Stable Diffusion comes to Blender

I’m really excited to see what kinds of images, not to mention videos & textured 3D assets, people will now be able to generate via emerging techniques (depth2img, ControlNet, etc.):
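For the tinkerers: the add-on shown above has its own UI, but you can approximate the recipe yourself. Here’s a minimal, hedged sketch using Hugging Face’s diffusers library with a depth ControlNet; the depth-map filename is just a placeholder for whatever you export from Blender:

```python
# Minimal sketch: restyle a Blender depth pass with Stable Diffusion +
# a depth ControlNet via Hugging Face diffusers (not the add-on's own code).
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

depth_map = Image.open("blender_depth_pass.png")  # placeholder filename

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The depth map locks in the composition; the prompt controls the look.
image = pipe(
    "a cozy cabin interior, warm light, photorealistic",
    image=depth_map,
    num_inference_steps=30,
).images[0]
image.save("stylized_render.png")
```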

Adobe Substance 3D wins an Academy Award!

Well deserved recognition for this amazing team & tech:

To Sébastien Deguy and Christophe Soum for the concept and original implementation of Substance Engine, and to Sylvain Paris and Nicolas Wirrmann for the design and engineering of Substance Designer.

Adobe Substance 3D Designer provides artists with a flexible and efficient procedural workflow for designing complex textures. Its sophisticated and art-directable pattern generators, intuitive design, and renderer-agnostic architecture have led to widespread adoption in motion picture visual effects and animation.

3D capture comes to Adobe Substance 3D Sampler 4.0

Photogrammetrize all the things!!

Unleash the power of photogrammetry in Adobe Substance 3D Sampler 4.0 with the new 3D Capture tool! Create accurate and detailed 3D models of real-world objects with ease. Simply drag and drop a series of photos into Sampler and let it automatically extract the subject from its background and generate a 3D textured model. It’s a fast and handy way to create 3D assets for your next project.

Here’s the workflow in more detail:

And here’s info on capture tools:

“The impossibilities are endless”: Yet more NeRF magic

Last month Paul Trillo shared some wild visualizations he made by walking around Michelangelo’s David, then synthesizing 3D NeRF data. Now he’s upped the ante with captures from the Louvre:

Over in Japan, Tommy Oshima used the tech to fly around, through, and somehow under a playground, recording footage via a DJI Osmo + iPhone:

https://twitter.com/jnack/status/1616981915902554112?s=20&t=5LOmsIoifLw8oNVMV2fYIw

As I mentioned last week, Luma Labs has enabled interactive model embedding, and now they’re making the viewer crazy-fast:

The world’s first (?) NeRF-powered commercial

Karen X. Cheng, back with another 3D/AI banger:

As luck (?) would have it, the commercial dropped on the third anniversary of my former teammate Jon Barron & collaborators bringing NeRFs into existence:

CGI: Primordial soup for you!

Check out these gloriously detailed renderings from Markos Kay. I just wish the pacing were a little more chill so I could stare longer at each composition!

Colossal notes,

Kay has focused on the intersection of art and science in his practice, utilizing digital tools to visualize biological or primordial phenomena. “aBiogenesis” focuses a microscopic lens on imagined protocells, vesicles, and primordial foam that twists and oscillates in various forms.

The artist has prints available for sale in his shop, and you can find more work on his website and Behance.

AI: From dollhouse to photograph

Check out Karen X. Cheng’s clever use of simple wooden props + depth-to-image synthesis to create 3D renderings:

She writes,

1. Take reference photo (you can use any photo – e.g. your real house, it doesn’t have to be dollhouse furniture)
2. Set up Stable Diffusion Depth-to-Image (google “Install Stable Diffusion Depth to Image YouTube”)
3. Upload your photo and then type in your prompts to remix the image

We recommend starting with simple prompts, and then progressively adding extra adjectives to get the desired look and feel. Using this method, @justinlv generated hundreds of options, and then we went through and cherrypicked our favorites for this video
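If you’d rather skip the YouTube search in step 2, here’s one way to run depth-to-image via Hugging Face’s diffusers library (a minimal sketch, not Karen’s exact setup; filenames & prompt are placeholders):

```python
# Depth-to-image remix, sketched with Hugging Face diffusers.
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("dollhouse_photo.jpg")  # your reference photo

# The model infers depth from the photo, keeps the 3D layout,
# and restyles everything according to the prompt.
result = pipe(
    prompt="modern living room, soft morning light, photorealistic",
    image=init_image,
    negative_prompt="blurry, distorted",
    strength=0.7,  # lower values stay closer to the original photo
).images[0]
result.save("remixed.png")
```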

More NeRF magic: From Michelangelo to NYC

This stuff—creating 3D neural models from simple video captures—continues to blow my mind. First up is Paul Trillo visiting the David:

Then here’s AJ from the NYT doing a neat day-to-night transition:

And lastly, Hugues Bruyère used a 360º camera to capture this scene, then animate it in post (see thread for interesting details):

https://twitter.com/smallfly/status/1604609303255605251?s=20&t=jdSW1NC_n54YTxsnkkFPJQ

A cool, quick demo of Midjourney->3D

Numerous apps are promising pure text-to-geometry synthesis, as Luma AI shows here:

On a more immediately applicable front, though, artists are finding ways to create 3D (or at least “two-and-a-half-D”) imagery right from the output of apps like Midjourney. Here’s a quick demo using Blender:
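If you want to try the trick yourself, here’s a minimal Blender-Python sketch of the core move: subdivide a plane, then displace it with a depth map (the filename is a placeholder; you’d still assign the Midjourney image as the plane’s material & animate the camera):

```python
# Run in Blender's scripting tab: build the displaced plane behind the
# "2.5-D" parallax trick. The depth map comes from elsewhere (e.g. MiDaS).
import bpy

# A heavily subdivided plane gives the displacement something to push.
bpy.ops.mesh.primitive_plane_add(size=2)
plane = bpy.context.active_object
subsurf = plane.modifiers.new("Subdiv", type='SUBSURF')
subsurf.subdivision_type = 'SIMPLE'
subsurf.levels = subsurf.render_levels = 6

# Displace the plane using the depth map.
depth_tex = bpy.data.textures.new("DepthMap", type='IMAGE')
depth_tex.image = bpy.data.images.load("//depth_map.png")  # placeholder path
displace = plane.modifiers.new("Displace", type='DISPLACE')
displace.texture = depth_tex
displace.strength = 0.4  # eyeball this per image
```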

In a semi-related vein, I used CapCut to animate a tongue-in-cheek self portrait from my friend Bilawal:

https://twitter.com/jnack/status/1599476677918478337?s=20&t=vu_Q7Wme3Q3Ueqp1WaGUpA

[Via Shi Yan]

More NeRF magic: Dolly zoom & beyond

It’s insane to me how much these emerging tools democratize storytelling idioms—and then take them far beyond previous limits. Recently Karen X. Cheng & co. created some wild “drone” footage simply by shooting handheld with a smartphone:

Now they’re creating an amazing dolly zoom effect, again using just a phone. (Click through to the thread if you’d like details on how the footage was (very simply) captured.)
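For the curious, the underlying optics are simple: the subject stays the same size in frame as long as camera distance & field of view move in lockstep. For a subject of width w that exactly fills the frame at distance d with horizontal field of view θ:

```latex
w = 2\,d\,\tan(\theta/2)
\quad\Longrightarrow\quad
d_2 = d_1\,\frac{\tan(\theta_1/2)}{\tan(\theta_2/2)}
```

The NeRF twist is that once the scene is captured, you can re-render it along any such distance/FOV path after the fact, no dolly track (or second take) required.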

Meanwhile, here’s a deeper dive on NeRF and how it’s different from “traditional” photogrammetry (e.g. in capturing reflective surfaces):
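The one-equation version of the difference: photogrammetry reconstructs a single fixed surface, while NeRF learns radiance as a function of both position and viewing direction, which is exactly why shiny, view-dependent surfaces survive. A camera ray’s color in the original paper is the volume-rendering sum

```latex
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
\qquad
T_i = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right)
```

where σᵢ is the density at sample i, cᵢ is its view-dependent color, and δᵢ is the spacing between samples along the ray.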

Some amazing AI->parallax animations

Great work from Guy Parsons, combining Midjourney with CapCut:

And from the replies, here’s another fun set:

Neural JNack has entered the chat… 🤖

Last year my friend Bilawal Singh Sidhu, a PM driving 3D experiences for Google Maps/Earth, created an amazing 3D render (also available in galactic core form) of me sitting atop the Trona Pinnacles. At that time he used “traditional” photogrammetry techniques (kind of a funny thing to say about an emerging field that remains new to the world). This year he tried processing the same footage (a couple of simple orbits captured by my drone) using new Neural Radiance Field (“NeRF”) tech:

For comparison, here’s the 3D model generated via the photogrammetry approach:

The file is big enough that I’ve had some trouble loading it on my iPhone. If that affects you as well, check out this quick screen recording:

Adobe 3D Design is looking for 2023 interns

These sound like great gigs!

The 3D and Immersive Design Team at Adobe is looking for a design intern who will help envision and build the future of Adobe’s 3D and MR creative tools.

With the Adobe Substance 3D Collection and Adobe Aero, we’re making big moves in 3D, but it is still early days! This is a huge opportunity space to shape the future of 3D and AR at Adobe. We believe that tools shape our world, and by building the tools that power 3D creativity we can have an outsized impact on our world.

Blender + Stable Diffusion = 🪄

Going from easy placement & movement of 3D primitives to realistic or illustrative rendering has long struck me as extremely promising. Using tech like StyleGAN to render from 3D can produce interesting results, but it’s been difficult to bring the level of quality & consistency up to what Adobe users demand.

Now with Stable Diffusion (and, one hopes, other diffusion models in the future) attached to Blender (and, one hopes, other object manipulation tools), the vision is getting closer to reality:
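To make the loop concrete, here’s a hedged Blender-Python sketch of the input side: block out a scene with primitives and render a still that a depth2img/ControlNet pass can then restyle. (It assumes the default scene’s camera & light; the output path is a placeholder.)

```python
# Run in Blender: a crude primitive blockout, rendered to a still that a
# diffusion model can restyle. Assumes the default scene's camera & light.
import bpy

# Rough blockout: a "building" on a ground plane.
bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 1))
bpy.ops.mesh.primitive_plane_add(size=20, location=(0, 0, 0))

# Render a still to hand off to the diffusion model.
scene = bpy.context.scene
scene.render.filepath = "//blockout.png"  # placeholder output path
bpy.ops.render.render(write_still=True)
```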

Check out NeRF Studio & some eye-popping results

The power & immersiveness of rendering 3D from images is growing at an extraordinary rate. NeRF Studio promises to make creation much more approachable:

https://twitter.com/akanazawa/status/1577686321119645696?s=20&t=OA61aUUy3A6P1aMQiUIzbA

The kind of results one can generate from just a series of photos or video frames is truly bonkers:

Here’s a tutorial on how to use it:
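The gist of the workflow, per the project’s docs at the time (sketched here as Python calls to its command-line tools; details may well have shifted since):

```python
# Nerfstudio's two-step happy path: process a capture, then train on it.
import subprocess

# 1. Turn a video into posed frames (runs COLMAP under the hood).
subprocess.run(
    ["ns-process-data", "video", "--data", "capture.mp4", "--output-dir", "scene/"],
    check=True,
)

# 2. Train the default "nerfacto" model on the processed capture.
subprocess.run(["ns-train", "nerfacto", "--data", "scene/"], check=True)
```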

NVIDIA’s GET3D promises text-to-model generation

Depending on how well it works, tech like this could be the greatest unlock in 3D creation the world has ever known.

The company blog post features interesting, promising details:

Though quicker than manual methods, prior 3D generative AI models were limited in the level of detail they could produce. Even recent inverse rendering methods can only generate 3D objects based on 2D images taken from various angles, requiring developers to build one 3D shape at a time.

GET3D can instead churn out some 20 shapes a second when running inference on a single NVIDIA GPU — working like a generative adversarial network for 2D images, while generating 3D objects. […]

GET3D gets its name from its ability to Generate Explicit Textured 3D meshes — meaning that the shapes it creates are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers — and edit them.

See also Dream Fields (mentioned previously) from Google: