Category Archives: 3D

I love seeing Michael Tanzillo’s Illustrator 3D -> Adobe Stager -> Photoshop workflow for making and enhancing the adorable “Little Miss Sparkle Bao Bao”:
Skybox scribble: Create 360° immersive views just by drawing
Pretty slick stuff! This very short vid is well worth watching:
With Sketch mode, we’re introducing a new palette of tools and guides that let you start taking control of your skybox generations. Want a castle in the distance? Sketch it out, specify a castle in your prompt and hit generate to watch as your scribbles influence your skybox. If you don’t get what you want the first time, your sketch sticks around to try a new style or prompt from – or switch to Remix mode to give that castle a new look!
No drone, no problem: Luma NeRF FTW
Great visual storytelling trickery, as always, from Karen X. Cheng:
Sneak peek: Adobe Firefly 3D
I had a ball presenting Firefly during this past week’s Adobe Live session. I showed off the new Recolor Vectors feature, and my teammate Samantha showed how to put it to practical use (along with image generation) as part of a moodboarding exercise. I think you’d dig the whole session, if you’ve got time.
The highlight for me was the chance to give an early preview of the 3D-to-image creation module we have in development:
My demo/narrative starts around the 58:10 mark:
ControlNet + Blender = 🔥
I love this; just don’t make me download and learn Blender to use it. 😅
3D + AI: Stable Diffusion comes to Blender
I’m really excited to see what kinds of images, not to mention videos & textured 3D assets, people will now be able to generate via emerging techniques (depth2img, ControlNet, etc.):
Photogrammetry -> ControlNet = Makeover Magic
My friend Bilawal Sidhu made a 3D scan of his parents’ home (y’know, as one does), and he recently used the new ControlNet functionality in Stable Diffusion to restyle it on the fly. Check out details in this post & in the vid below:
Adobe Substance 3D wins an Academy Award!
Well deserved recognition for this amazing team & tech:
To Sébastien Deguy and Christophe Soum for the concept and original implementation of Substance Engine, and to Sylvain Paris and Nicolas Wirrmann for the design and engineering of Substance Designer.
Adobe Substance 3D Designer provides artists with a flexible and efficient procedural workflow for designing complex textures. Its sophisticated and art-directable pattern generators, intuitive design, and renderer-agnostic architecture have led to widespread adoption in motion picture visual effects and animation.
Chilling with NeRF
I’m still digging out (of email, Slack, and photos, but thankfully no longer of literal snow) following last weekend’s amazing photo adventure in Ely, NV. I need to try processing more footage via the amazing Luma app, but for now here’s a cool 3D version of the Nevada Northern Railway’s water tower, made simply by orbiting it with my drone & uploading the footage:
3D capture comes to Adobe Substance 3D Sampler 4.0
Photogrammetrize all the things!!
Unleash the power of photogrammetry in Adobe Substance 3D Sampler 4.0 with the new 3D Capture tool! Create accurate and detailed 3D models of real-world objects with ease. Simply drag and drop a series of photos into Sampler and let it automatically extract the subject from its background and generate a 3D textured model. It’s a fast and handy way to create 3D assets for your next project.
Here’s the workflow in more detail:
And here’s info on capture tools:
“The impossibilities are endless”: Yet more NeRF magic
Last month Paul Trillo shared some wild visualizations he made by walking around Michelangelo’s David, then synthesizing 3D NeRF data. Now he’s upped the ante with captures from the Louvre:
Over in Japan, Tommy Oshima used the tech to fly around, through, and somehow under a playground, recording footage via a DJI Osmo + iPhone:
Luma enables embedding of interactive NeRF captures
Here’s an example made from a quick capture I did of my friend (nothing special, but amazing what one can get simply by walking in a circle while recording video):
The world’s first (?) NeRF-powered commercial
Karen X. Cheng, back with another 3D/AI banger:
As luck (?) would have it, the commercial dropped on the third anniversary of my former teammate Jon Barron & collaborators bringing NeRFs into existence:
CGI: Primordial soup for you!
Check out these gloriously detailed renderings from Markos Kay. I just wish the pacing were a little more chill so I could stare longer at each composition!
Kay has focused on the intersection of art and science in his practice, utilizing digital tools to visualize biological or primordial phenomena. “aBiogenesis” focuses a microscopic lens on imagined protocells, vesicles, and primordial foam that twists and oscillates in various forms.
The artist has prints available for sale in his shop, and you can find more work on his website and Behance.
AI: From dollhouse to photograph
Check out Karen X. Cheng’s clever use of simple wooden props + depth-to-image synthesis to create 3D renderings:
1. Take reference photo (you can use any photo – e.g. your real house, it doesn’t have to be dollhouse furniture)
2. Set up Stable Diffusion Depth-to-Image (google “Install Stable Diffusion Depth to Image YouTube”)
3. Upload your photo and then type in your prompts to remix the image
We recommend starting with simple prompts, and then progressively adding extra adjectives to get the desired look and feel. Using this method, @justinlv generated hundreds of options, and then we went through and cherrypicked our favorites for this video
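The steps above can be sketched in Python using Hugging Face’s diffusers library, which ships a depth-to-image pipeline for the public Stable Diffusion 2 depth model. This is a minimal sketch, not Cheng’s actual setup; the file names are illustrative, and running it requires the `torch`, `diffusers`, and `Pillow` packages plus a GPU:

```python
def build_prompts(base, adjectives):
    """Start with a simple prompt, then progressively append adjectives,
    mirroring the 'start simple, then refine' tip above."""
    prompts = [base]
    for i in range(1, len(adjectives) + 1):
        prompts.append(base + ", " + ", ".join(adjectives[:i]))
    return prompts

def remix(reference_path, prompts, strength=0.7):
    """Run Stable Diffusion depth-to-image over a list of prompts.
    The depth map inferred from the reference photo constrains each
    generation, so remixes keep the original's 3D structure."""
    import torch
    from diffusers import StableDiffusionDepth2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
    ).to("cuda")
    init = Image.open(reference_path)
    # strength controls how far each remix departs from the reference
    return [pipe(prompt=p, image=init, strength=strength).images[0]
            for p in prompts]
```

Generating `build_prompts("cozy living room", ["warm lighting", "photorealistic"])` yields three prompts of increasing specificity, which you can feed to `remix()` and cherry-pick from, as described above.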
NeRF On The Shelf
Heh—before the holidays get past us entirely, check out this novel approach to 3D motion capture from the always entertaining Kevin Parry:
[Via Victoria Nece]
More NeRF magic: From Michelangelo to NYC
This stuff—creating 3D neural models from simple video captures—continues to blow my mind. First up is Paul Trillo visiting the David:
Then here’s AJ from the NYT doing a neat day-to-night transition:
And lastly, Hugues Bruyère used a 360° camera to capture this scene, then animate it in post (see thread for interesting details):
A cool, quick demo of Midjourney->3D
Numerous apps are promising pure text-to-geometry synthesis, as Luma AI shows here:
On a more immediately applicable front, though, artists are finding ways to create 3D (or at least “two-and-a-half-D”) imagery right from the output of apps like Midjourney. Here’s a quick demo using Blender:
In a semi-related vein, I used CapCut to animate a tongue-in-cheek self portrait from my friend Bilawal:
[Via Shi Yan]
More NeRF magic: Dolly zoom & beyond
It’s insane to me how much these emerging tools democratize storytelling idioms—and then take them far beyond previous limits. Recently Karen X. Cheng & co. created some wild “drone” footage simply by capturing handheld footage with a smartphone:
Now they’re creating an amazing dolly zoom effect, again using just a phone. (Click through to the thread if you’d like details on how the footage was (very simply) captured.)
Meanwhile, here’s a deeper dive on NeRF and how it’s different from “traditional” photogrammetry (e.g. in capturing reflective surfaces):
Some amazing AI->parallax animations
Great work from Guy Parsons, combining Midjourney with Capcut:
And from the replies, here’s another fun set:
Thanks!!! Turned my bernedoodle puppy into a ’90s Disney movie promo with this. Hahah
— Spencer Albers (@SpencerAlbers) November 28, 2022
Neural JNack has entered the chat… 🤖
Last year my friend Bilawal Singh Sidhu, a PM driving 3D experiences for Google Maps/Earth, created an amazing 3D render (also available in galactic core form) of me sitting atop the Trona Pinnacles. At that time he used “traditional” photogrammetry techniques (a funny thing to say about a field that remains new to the world). This year he tried processing the same footage (a couple of simple orbits from my drone) using new Neural Radiance Field (“NeRF”) tech:
For comparison, here’s the 3D model generated via the photogrammetry approach:
The file is big enough that I’ve had some trouble loading it on my iPhone. If that affects you as well, check out this quick screen recording:
Adobe 3D Design is looking for 2023 interns
The 3D and Immersive Design Team at Adobe is looking for a design intern who will help envision and build the future of Adobe’s 3D and MR creative tools.
With the Adobe Substance 3D Collection and Adobe Aero, we’re making big moves in 3D, but it is still early days! This is a huge opportunity space to shape the future of 3D and AR at Adobe. We believe that tools shape our world, and by building the tools that power 3D creativity we can have an outsized impact on our world.
Adobe Mixamo + 3D scanning = AR magic
Check out this cool little workflow from Sergei Galkin:
It uses Mixamo specifically for auto-rigging:
“Prey” VFX breakdown
I always enjoy this kind of quick peek behind the scenes:
Blender + Stable Diffusion = 🪄
Easy placement/movement of 3D primitives -> realistic/illustrative rendering has long struck me as extremely promising. Using tech like StyleGAN to render from 3D can produce interesting results, but it’s been difficult to bring the level of quality & consistency up to what Adobe users demand.
Now with Stable Diffusion (and, one hopes, other diffusion models in the future) attached to Blender (and, one hopes, other object manipulation tools), the vision is getting closer to reality:
Check out NeRF Studio & some eye-popping results
The power & immersiveness of rendering 3D from images is growing at an extraordinary rate. NeRF Studio promises to make creation much more approachable:
The kind of results one can generate from just a series of photos or video frames is truly bonkers:
Here’s a tutorial on how to use it:
NVIDIA’s GET3D promises text-to-model generation
Depending on how well it works, tech like this could be the greatest unlock in 3D creation the world has ever known.
The company blog post features interesting, promising details:
Though quicker than manual methods, prior 3D generative AI models were limited in the level of detail they could produce. Even recent inverse rendering methods can only generate 3D objects based on 2D images taken from various angles, requiring developers to build one 3D shape at a time.
GET3D can instead churn out some 20 shapes a second when running inference on a single NVIDIA GPU — working like a generative adversarial network for 2D images, while generating 3D objects. […]
GET3D gets its name from its ability to Generate Explicit Textured 3D meshes — meaning that the shapes it creates are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers — and edit them.
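To make the “explicit textured mesh” idea concrete, here’s a minimal sketch (plain Python, nothing to do with GET3D itself) that writes a single textured triangle in the Wavefront OBJ format that game engines, 3D modelers, and film renderers ingest. Positions (`v`), UV texture coordinates (`vt`), and faces (`f`) referencing both are exactly the “papier-mâché” structure described above:

```python
def write_textured_obj(path, vertices, uvs, faces):
    """Write a minimal Wavefront OBJ: vertex positions, UV texture
    coordinates, and triangle faces referencing both (1-based)."""
    lines = []
    for x, y, z in vertices:
        lines.append(f"v {x} {y} {z}")
    for u, v in uvs:
        lines.append(f"vt {u} {v}")
    for a, b, c in faces:  # OBJ face indices start at 1
        lines.append(f"f {a}/{a} {b}/{b} {c}/{c}")
    text = "\n".join(lines) + "\n"
    with open(path, "w") as fh:
        fh.write(text)
    return text

# One triangle with UVs: editable geometry plus a texture
# mapping laid over its surface.
obj_text = write_textured_obj(
    "triangle.obj",
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    uvs=[(0, 0), (1, 0), (0, 1)],
    faces=[(1, 2, 3)],
)
```

Because the output is an ordinary mesh file rather than an implicit representation (like a NeRF), users can open, edit, and re-texture it with standard tools.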
See also Dream Fields (mentioned previously) from Google:
Google & NASA bring 3D to search
Great to see my old teammates (with whom I was working to enable cloud-rendered as well as locally rendered 3D experiences) continuing their work.
NASA and Google Arts & Culture have partnered to bring more than 60 3D models of planets, moons and NASA spacecraft to Google Search. When you use Google Search to learn about these topics, just click on the View in 3D button to understand the different elements of what you’re looking at even better. These 3D annotations will also be available for cells, biological concepts (like skeletal systems), and other educational models on Search.
“Curt Skelton,” homebrew AI influencer
[Update: Seems that much of this may be fake. :-\ Still, the fact that it’s remotely plausible is nuts!]
Good lord (and poor Conan!). This creator used:
- DALL•E to create hundreds of similar-looking images of a face
- Create Skeleton to convert them into a 3D model
- DeepMotion.com to generate 3D body animation
- Deepfake Lab to generate facial animation
- Audio tools to deepen & distort her voice, creating a new one
A really amazing spin on AR furniture shopping
After seeing years & years of AR demos featuring the placement of furniture, I once heard someone say in exasperation, “Bro… how much furniture do you think I buy?”
Happily here’s a decidedly fresh approach, surrounding the user & some real-world furniture with a projection of the person’s 3D-scanned home. Wild!
Now, how easy can 3D home scanning be made—and how much do people care about this kind of scenario? I don’t know, but I love what the tech can enable already.
Snap Research promises 3D creation from photo collections
Hmm—this is no doubt brilliant tech, and I’d like to learn more, but I wonder about the Venn diagram between “Objects that people want in 3D,” “Objects for which a sufficiently large number of good images exist,” and “Objects for which good human-made 3D models don’t already exist.” In my experience photogrammetry is most relevant for making models from extremely specific subjects (e.g. a particular apartment) rather than from common objects that are likely to exist on Sketchfab et al. It’s entirely possible I’m missing a nuanced application here, though. As I say, cool tech!
Buildify promises fast, slick parametric creation of 3D cities
Looks really fun to use! Check it out:
Google Maps rolls out photorealistic aerial views
Awesome work from my friend Bilawal Sidhu & team:
[W]e’re bringing photorealistic aerial views of nearly 100 of the world’s most popular landmarks in cities like Barcelona, London, New York, San Francisco and Tokyo right to Google Maps. This is the first step toward launching immersive view — an experience that pairs AI with billions of high definition Street View, satellite and aerial imagery.
Say you’re planning a trip to New York. With this update, you can get a sense for what the Empire State Building is like up close so you can decide whether or not you want to add it to your trip itinerary. To see an aerial view wherever they’re available, search for a landmark in Google Maps and head to the Photos section.
MegaPortraits: One-shot Megapixel Neural Head Avatars
AI animation tech, which in this case leverages the motion of a face in a video to animate a different face in a still image, keeps getting better & better. Check out these results from Samsung Labs:
Adobe Substance 3D is Hiring
Check out the site to see details & beautiful art—but at a glance here are the roles:
- Multi Surface Graphics Software Engineer – macOS & iOS
- Sr. Software Engineer UI Oriented, Substance 3D Designer
- Sr. Software Development Engineer, Substance 3D Painter
- Senior Software Engineer, 3D Graphics
- Sr. Software Development Engineer, Test Automation
- Creative Cloud Desktop Frontend Developer (12-month fixed-term contract)
- Sr. 3D Artist
- Sr. Manager, Strategic Initiatives and Partnerships
- Data Engineer – Contract Role
- Sr. DevOps Engineer – Contract Role
Capturing Reality With Machine Learning: A NeRF 3D Scan Compilation
Check out this high-speed overview of recent magic courtesy of my friend Bilawal:
Photogrammetry is an art form that has been around for decades, but it’s never looked better thanks to ML techniques like Neural Radiance Fields (NeRF). This video shows a wide range of 3D captures made using this technique. And I gotta say, NeRF really breathes new life into my old photo scans! All these datasets were posed in COLMAP and trained + rendered with NVIDIA’s free Instant NGP tools.
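For the curious, the COLMAP posing step Bilawal mentions can be scripted. Here’s a hedged sketch driving COLMAP’s real CLI verbs (`feature_extractor`, `exhaustive_matcher`, `mapper`) via subprocess; the paths are illustrative, and COLMAP must be installed and on your PATH:

```python
import subprocess

def colmap_pose_commands(image_dir, workspace):
    """Build the COLMAP command sequence that recovers camera poses
    from a folder of photos or video frames (sparse reconstruction)."""
    db = f"{workspace}/database.db"
    return [
        ["colmap", "feature_extractor",
         "--database_path", db, "--image_path", image_dir],
        ["colmap", "exhaustive_matcher", "--database_path", db],
        ["colmap", "mapper",
         "--database_path", db, "--image_path", image_dir,
         "--output_path", f"{workspace}/sparse"],
    ]

def run_posing(image_dir, workspace):
    """Execute the posing pipeline. The posed cameras written to
    workspace/sparse can then be converted for NeRF training
    (Instant NGP ships a colmap2nerf.py script for this)."""
    for cmd in colmap_pose_commands(image_dir, workspace):
        subprocess.run(cmd, check=True)
```

With poses in hand, training and rendering happen in NVIDIA’s Instant NGP tools, as described in the quote above.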
Google tech can generate 3D from text
“Skynet begins to learn at a geometric rate…”
While we’re all still getting our heads around the 2D image-generation magic of DALL•E, Imagen, Midjourney, and more, Google researchers are stepping into a new dimension as well with Dream Fields—synthesizing geometry simply from words.
My 3D dronie… in space
Greetings from the galactic core, to which my friend Bilawal has dispatched me by editing the 3D model he made from drone-selfie footage that I recorded last year:
In case you’re having trouble loading it on your device, you can check out a recording of this non-starlit version:
A smashing little comparison of gravities
As someone remarked on Twitter, “This looks better than any Marvel movie since the first Iron Man.” 🙃
Substance for Unreal Engine 5
I’m no 3D artist (had I but world enough and time…), but I sure love 3D artists’ work & anything that makes it faster and easier. Perhaps my most obscure point of pride from my Photoshop years is that we added per-layer timestamps to PSD files, so that Pixar could render content more efficiently by noticing which layers had actually been modified.
Anyway, now that Adobe has made a much bigger bet on 3D tooling, it’s great to see new support for Substance Painter coming to Unreal Engine:
The Substance 3D plugin (BETA) enables the use of Substance materials directly in Unreal Engine 5 and Unreal Engine 4. Whether you are working on games or visualization, and whether you’re deploying across mobile, desktop, or XR, Substance delivers a unique experience with optimized features for enhanced productivity.
Work faster, be more productive: Substance parameters allow for real-time material changes and texture updates.
Substance 3D for Unreal Engine 5 contains the plugin for Substance Engine.
Access over 1000 high-quality tweakable and export-ready 4K materials with presets on the Substance 3D Asset library. You can explore community-contributed assets in the community assets library.
The Substance Assets platform is a vast library containing high-quality PBR-ready Substance materials and is accessible directly in Unreal through the Substance plugin. These customizable Substance files can easily be adapted to a wide range of projects.
The Slap, but with ragdoll physics
As a fan of extremely dumb, simple gags, I hope that you too will enjoy these six seconds:
More elaborate, also fun:
Adobe is acquiring BRIO XR
Once the deal closes, BRIO XR will be joining an unparalleled community of engineers and product experts at Adobe – visionaries who are pushing the boundaries of what’s possible in 3D and immersive creation. Our BRIO XR team will contribute to Adobe’s Creative Cloud 3D authoring and experience design teams. Simply put, Adobe is the place to be, and in fact, it’s a place I’ve long set my sights on joining.
Showreel: Substance 3D in games
Speaking of Substance, here’s a fun speed run through the kinds of immersive worlds artists are helping create using it:
New Content-Aware Fill power comes to Adobe Substance 3D
Adobe demos new screen-to-AR shopping tech
[Adobe] announced a tool that allows consumers to point their phone at a product image on an ecommerce site—and then see the item rendered three-dimensionally in their living space. Adobe says the true-to-life size precision—and the ability to pull multiple products into the same view—set its AR service apart from others on the market. […]
Chang Xiao, the Adobe research scientist who created the tool, said many of the AR services currently on the market provide only rough estimations of the size of the product. Adobe is able to encode dimension information in an invisible marker embedded in the photos, which its computer vision algorithms can translate into more precisely sized projections.
Animation: “The Evolution of F1”
Slot cars & CGI FTW:
And here’s a peek behind the scenes:
Death Valley 3D
Last year I enjoyed creating a 3D dronie during my desert trip with Russell Brown, flying around the Pinnacles outside of Trona:
This year I just returned (hours ago!) from another trip with Russell, this time being joined by his son Davis (who coincidentally is my team’s new UI designer!). On Monday we visited the weird & wonderful International Car Forest of the Last Church, where Davis used his drone plus Metashape to create this 3D model:
And yes, technically neither of these locations is in Death Valley, where drone flying is prohibited. Close enough! ¯\_(ツ)_/¯
Google takes the periodic table 3D
Neat work from my old teammates: search for “periodic table” to rotate elements in 3D thanks to the <model-viewer> component:
ShapesXR promises easy prototyping in VR
Hmm—I always want to believe in tools like this, but I remain skeptical. Back at Google I played with Blocks, which promised to make 3D creation fun, but which in my experience combined the inherent complexity of that art with the imprecision and arm fatigue of waving controllers in space. But who knows—maybe Shapes is different?
The “Netaverse”: Brooklyn BB goes 3D
I’m intrigued but not quite sure how to feel about this. Precisely tracking groups of fast-moving human bodies & producing lifelike 3D copies in realtime is obviously a stunning technical coup—but is watching the results something people will prefer to high-def video of the real individuals & all their expressive nuances? I have no idea, but I’d like to know more.