Unleash the power of photogrammetry in Adobe Substance 3D Sampler 4.0 with the new 3D Capture tool! Create accurate and detailed 3D models of real-world objects with ease. Simply drag and drop a series of photos into Sampler and let it automatically extract the subject from its background and generate a 3D textured model. It’s a fast and handy way to create 3D assets for your next project.
💭 Ever wondered if you can embed Luma's interactive NeRFs, panoramas, video renders in your own websites, blogs, etc. or wish you had the ability to customise the UI of the share page? Look no further.
Here’s an example made from a quick capture I did of my friend (nothing special, but amazing what one can get simply by walking in a circle while recording video):
As luck (?) would have it, the commercial dropped on the third anniversary of my former teammate Jon Barron & collaborators bringing NeRFs into existence:
Three years ago today, the project that eventually became NeRF started working (positional encoding was the missing piece that got us from "hmm" to "wow"). Here's a snippet of that email thread between Matt Tancik, @_pratul_, @BenMildenhall, and me. Happy birthday NeRF! pic.twitter.com/UtuQpWsOt4
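For context, the positional encoding Barron mentions maps each input coordinate through sines and cosines at exponentially increasing frequencies before it hits the MLP, which is what lets the network recover fine detail instead of a blurry average. Here's a minimal NumPy sketch of that mapping (the paper uses L=10 frequency bands for positions; the exact scaling, e.g. whether π is included, varies a bit between implementations):

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Map each coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)] for k in 0..num_freqs-1.

    x: array of shape (..., D), e.g. 3D points normalized to roughly [-1, 1].
    Returns an array of shape (..., D * 2 * num_freqs).
    """
    freq_bands = 2.0 ** np.arange(num_freqs)      # 1, 2, 4, ..., 2^(L-1)
    scaled = x[..., None] * freq_bands * np.pi    # (..., D, L)
    encoded = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return encoded.reshape(*x.shape[:-1], -1)

# A single 3D point becomes a 60-dimensional feature (3 coords * 2 functions * 10 bands).
point = np.array([0.1, -0.4, 0.7])
print(positional_encoding(point[None, :]).shape)  # (1, 60)
```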
Check out these gloriously detailed renderings from Markos Kay. I just wish the pacing were a little more chill so I could stare longer at each composition!
Kay has focused on the intersection of art and science in his practice, utilizing digital tools to visualize biological or primordial phenomena. “aBiogenesis” focuses a microscopic lens on imagined protocells, vesicles, and primordial foam that twists and oscillates in various forms.
The artist has prints available for sale in his shop, and you can find more work on his website and Behance.
1. Take reference photo (you can use any photo – e.g. your real house, it doesn’t have to be dollhouse furniture)
2. Set up Stable Diffusion Depth-to-Image (google “Install Stable Diffusion Depth to Image YouTube”)
3. Upload your photo and then type in your prompts to remix the image
We recommend starting with simple prompts, and then progressively adding extra adjectives to get the desired look and feel. Using this method, @justinlv generated hundreds of options, and then we went through and cherrypicked our favorites for this video
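If you'd rather script step 2 above than click through a web UI, Hugging Face's diffusers library ships a depth-to-image pipeline for Stable Diffusion 2. Here's a minimal sketch; the file names, prompt, and strength value are illustrative placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

# Load the Stable Diffusion 2 depth-conditioned pipeline (downloads weights on first run).
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

# The reference photo whose layout you want to preserve (placeholder path).
init_image = Image.open("reference_room.jpg").convert("RGB")

# Depth is estimated from the reference automatically, so the geometry stays put
# while the prompt restyles materials, lighting, and mood.
result = pipe(
    prompt="cozy mid-century living room, warm evening light, photorealistic",
    negative_prompt="blurry, low quality",
    image=init_image,
    strength=0.7,  # how far the output is allowed to stray from the original pixels
).images[0]
result.save("remixed_room.png")
```

Because the depth map anchors the scene's structure, this is the same "remix" trick described above: swap the prompt, keep the room.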
This stuff—creating 3D neural models from simple video captures—continues to blow my mind. First up is Paul Trillo visiting the David:
Finally got to see Michaelangelo’s David in Florence and rather than just take a photo like normal person, I spent 20 minutes walking around it capturing every angle looking like an insane person. It’s hard to look cool when making a #NeRF but damn it looks cool later @LumaLabsAI pic.twitter.com/sLGJ2CKCJy
Numerous apps are promising pure text-to-geometry synthesis, as Luma AI shows here:
✨ Introducing Imagine 3D: a new way to create 3D with text! Our mission is to build the next generation of 3D and Imagine will be a big part of it. Today Imagine is in early access and as we improve we will bring it to everyone https://t.co/VIdilw7kpa pic.twitter.com/v6Yi0mwZsY
On a more immediately applicable front, though, artists are finding ways to create 3D (or at least “two-and-a-half-D”) imagery right from the output of apps like Midjourney. Here’s a quick demo using Blender:
In a semi-related vein, I used CapCut to animate a tongue-in-cheek self portrait from my friend Bilawal:
It’s insane to me how much these emerging tools democratize storytelling idioms—and then take them far beyond previous limits. Recently Karen X. Cheng & co. created some wild “drone” footage simply by capturing handheld footage with a smartphone:
Now they’re creating an amazing dolly zoom effect, again using just a phone. (Click through to the thread if you’d like details on how the footage was (very simply) captured.)
NeRF update: Dollyzoom is now possible using @LumaLabsAI. I shot this on my phone. NeRF is gonna empower so many people to get cinematic level shots. Tutorial below –
Last year my friend Bilawal Singh Sidhu, a PM driving 3D experiences for Google Maps/Earth, created an amazing 3D render (also available in galactic core form) of me sitting atop the Trona Pinnacles. At that time he used “traditional” photogrammetry techniques (kind of a funny thing to say about an emerging field that remains new to the world), and this year he tried processing the same footage (composed of a couple of simple orbits from my drone) using new Neural Radiance Field (“NeRF”) tech:
The 3D and Immersive Design Team at Adobe is looking for a design intern who will help envision and build the future of Adobe’s 3D and MR creative tools.
With the Adobe Substance 3D Collection and Adobe Aero, we’re making big moves in 3D, but it is still early days! This is a huge opportunity space to shape the future of 3D and AR at Adobe. We believe that tools shape our world, and by building the tools that power 3D creativity we can have an outsized impact on our world.
Easily placing & moving 3D primitives, then rendering them realistically or illustratively, has long struck me as an extremely promising pipeline. Using tech like StyleGAN to render from 3D can produce interesting results, but it’s been difficult to bring the level of quality & consistency up to what Adobe users demand.
Now with Stable Diffusion (and, one hopes, other diffusion models in the future) attached to Blender (and, one hopes, other object manipulation tools), the vision is getting closer to reality:
The power & immersiveness of rendering 3D from images is growing at an extraordinary rate. NeRF Studio promises to make creation much more approachable:
Depending on how well it works, tech like this could be the greatest unlock in 3D creation the world has ever known.
The company blog post features interesting, promising details:
Though quicker than manual methods, prior 3D generative AI models were limited in the level of detail they could produce. Even recent inverse rendering methods can only generate 3D objects based on 2D images taken from various angles, requiring developers to build one 3D shape at a time.
GET3D can instead churn out some 20 shapes a second when running inference on a single NVIDIA GPU — working like a generative adversarial network for 2D images, while generating 3D objects. […]
GET3D gets its name from its ability to Generate Explicit Textured 3D meshes — meaning that the shapes it creates are in the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers — and edit them.
See also Dream Fields (mentioned previously) from Google:
Great to see my old teammates (with whom I was working to enable cloud-rendered as well as locally rendered 3D experiences) continuing their work.
NASA and Google Arts & Culture have partnered to bring more than 60 3D models of planets, moons and NASA spacecraft to Google Search. When you use Google Search to learn about these topics, just click on the View in 3D button to understand the different elements of what you’re looking at even better. These 3D annotations will also be available for cells, biological concepts (like skeletal systems), and other educational models on Search.
After seeing years & years of AR demos featuring the placement of furniture, I once heard someone say in exasperation, “Bro… how much furniture do you think I buy?”
Happily here’s a decidedly fresh approach, surrounding the user & some real-world furniture with a projection of the person’s 3D-scanned home. Wild!
Now, how easy can 3D home scanning be made—and how much do people care about this kind of scenario? I don’t know, but I love what the tech can enable already.
What if you could bring your home shopping?
Our latest prototype lets you teleport real furniture from the store into a 3D scan of your living room. It's amazing to experience first hand. 🤩 pic.twitter.com/gAPawRXgGB
Hmm—this is no doubt brilliant tech, and I’d like to learn more, but I wonder about the Venn diagram between “Objects that people want in 3D,” “Objects for which a sufficiently large number of good images exist,” and “Objects for which good human-made 3D models don’t already exist.” In my experience photogrammetry is most relevant for making models from extremely specific subjects (e.g. a particular apartment) rather than from common objects that are likely to exist on Sketchfab et al. It’s entirely possible I’m missing a nuanced application here, though. As I say, cool tech!
[W]e’re bringing photorealistic aerial views of nearly 100 of the world’s most popular landmarks in cities like Barcelona, London, New York, San Francisco and Tokyo right to Google Maps. This is the first step toward launching immersive view — an experience that pairs AI with billions of high-definition Street View, satellite and aerial images.
Say you’re planning a trip to New York. With this update, you can get a sense for what the Empire State Building is like up close so you can decide whether or not you want to add it to your trip itinerary. To see aerial views wherever they’re available, search for a landmark in Google Maps and head to the Photos section.
AI animation tech, which in this case leverages the motion of a face in a video to animate a different face in a still image, keeps getting better & better. Check out these results from Samsung Labs:
Check out this high-speed overview of recent magic courtesy of my friend Bilawal:
Photogrammetry is an art form that has been around for decades, but it’s never looked better thanks to ML techniques like Neural Radiance Fields (NeRF). This video shows a wide range of 3D captures made using this technique. And I gotta say, NeRF really breathes new life into my old photo scans! All these datasets were posed in COLMAP and trained + rendered with NVIDIA’s free Instant NGP tools.
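If you're curious what "posed in COLMAP" actually involves, here's a rough sketch of the pose-estimation step using the pycolmap Python bindings; the paths are placeholders, and for Instant NGP you'd still convert the resulting reconstruction into its transforms.json format (the instant-ngp repo ships a colmap2nerf.py script for that purpose):

```python
import os
import pycolmap

# Placeholder paths: a folder of frames pulled from the capture video, plus outputs.
image_dir = "frames/"
database_path = "colmap.db"
output_dir = "sparse/"
os.makedirs(output_dir, exist_ok=True)

# 1. Detect local features in every frame.
pycolmap.extract_features(database_path, image_dir)

# 2. Match features across frames (exhaustive matching is fine for a few hundred images).
pycolmap.match_exhaustive(database_path)

# 3. Incremental structure-from-motion: recovers camera poses and a sparse point cloud.
reconstructions = pycolmap.incremental_mapping(database_path, image_dir, output_dir)
for idx, rec in reconstructions.items():
    print(idx, rec.summary())
```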
While we’re all still getting our heads around the 2D image-generation magic of DALL•E, Imagen, Midjourney, and more, Google researchers are stepping into a new dimension as well with Dream Fields—synthesizing geometry simply from words.
Greetings from the galactic core, to which my friend Bilawal has dispatched me by editing the 3D model he made from drone-selfie footage that I recorded last year:
I’m no 3D artist (had I but world enough and time…), but I sure love their work & anything that makes it faster and easier. Perhaps my most obscure point of pride from my Photoshop years is that we added per-layer timestamps into PSD files, so that Pixar could more efficiently render content by noticing which layers had actually been modified.
The Substance 3D plugin (BETA) enables the use of Substance materials directly in Unreal Engine 5 and Unreal Engine 4. Whether you are working on games or visualization, or deploying across mobile, desktop, or XR, Substance delivers a unique experience with optimized features for enhanced productivity.
Work faster, be more productive: Substance parameters allow for real-time material changes and texture updates.
Substance 3D for Unreal Engine 5 contains the plugin for Substance Engine.
The Substance Assets platform is a vast library containing high-quality PBR-ready Substance materials and is accessible directly in Unreal through the Substance plugin. These customizable Substance files can easily be adapted to a wide range of projects.
Once the deal closes, BRIO XR will be joining an unparalleled community of engineers and product experts at Adobe – visionaries who are pushing the boundaries of what’s possible in 3D and immersive creation. Our BRIO XR team will contribute to Adobe’s Creative Cloud 3D authoring and experience design teams. Simply put, Adobe is the place to be, and in fact, it’s a place I’ve long set my sights on joining.
[Adobe] announced a tool that allows consumers to point their phone at a product image on an ecommerce site—and then see the item rendered three-dimensionally in their living space. Adobe says the true-to-life size precision—and the ability to pull multiple products into the same view—set its AR service apart from others on the market. […]
Chang Xiao, the Adobe research scientist who created the tool, said many of the AR services currently on the market provide only rough estimations of the size of the product. Adobe is able to encode dimension information in the invisible marker code embedded in the photos, which its computer vision algorithms can translate into more precisely sized projections.
Last year I enjoyed creating a 3D dronie during my desert trip with Russell Brown, flying around the Pinnacles outside of Trona:
This year I just returned (hours ago!) from another trip with Russell, this time being joined by his son Davis (who coincidentally is my team’s new UI designer!). On Monday we visited the weird & wonderful International Car Forest of the Last Church, where Davis used his drone plus Metashape to create this 3D model:
Hmm—I always want to believe in tools like this, but I remain skeptical. Back at Google I played with Blocks, which promised to make 3D creation fun, but which in my experience combined the inherent complexity of that art with the imprecision and arm fatigue of waving controllers in space. But who knows—maybe Shapes is different?
I’m intrigued but not quite sure how to feel about this. Precisely tracking groups of fast-moving human bodies & producing lifelike 3D copies in realtime is obviously a stunning technical coup—but is watching the results something people will prefer to high-def video of the real individuals & all their expressive nuances? I have no idea, but I’d like to know more.
Earlier this week I was amazed to see the 3D scan that Polycam founder Chris Heinrich was able to achieve by flying around LA & capturing ~100 photos of a neighborhood, then generating 3D results via the new Web version of Polycam:
You can take the results for a (literal) spin here, though note that they didn’t load properly on my iPhone.
As you may have seen in Google Earth & elsewhere, scanning & replicating amorphous organic shapes like trees remains really challenging:
It’s therefore all the more amazing to see the incredible results these exacting artists are able to deliver when creating free-to-use (!) assets for Unreal Engine:
Discover the experience for yourself with these QR codes by downloading the Aero app. We recommend running the experience on iOS (8S and above) or on Android (private beta, US only; a list of supported Android devices can be found on HelpX). (FYI, the experience may take a few seconds to load, as it is a more sophisticated AR project.)
This new witchcraft “synthesizes not only high-resolution, multi-view-consistent images in real time, but also produces high-quality 3D geometry.” Plus it makes a literally dizzying array of gatos!
Unsupervised generation of high-quality multi-view-consistent images and 3D shapes using only collections of single-view 2D photographs has been a long-standing challenge. Existing 3D GANs are either compute-intensive or make approximations that are not 3D-consistent; the former limits quality and resolution of the generated images and the latter adversely affects multi-view consistency and shape quality. In this work, we improve the computational efficiency and image quality of 3D GANs without overly relying on these approximations. For this purpose, we introduce an expressive hybrid explicit-implicit network architecture that, together with other design choices, synthesizes not only high-resolution multi-view-consistent images in real time but also produces high-quality 3D geometry. By decoupling feature generation and neural rendering, our framework is able to leverage state-of-the-art 2D CNN generators, such as StyleGAN2, and inherit their efficiency and expressiveness. We demonstrate state-of-the-art 3D-aware synthesis with FFHQ and AFHQ Cats, among other experiments.
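The "expressive hybrid explicit-implicit network architecture" is EG3D's tri-plane: three axis-aligned 2D feature planes (produced by a StyleGAN2-style generator) that each 3D point samples by projection, with a tiny MLP decoding the summed features into color and density for volume rendering. Here's a stripped-down sketch of just that lookup-and-decode step; the shapes and layer sizes are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F
from torch import nn

class TriPlaneDecoder(nn.Module):
    """Sample features for 3D points from three axis-aligned planes and decode them."""

    def __init__(self, feature_dim=32, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.Softplus(),
            nn.Linear(hidden_dim, 1 + 3),  # density + RGB
        )

    def forward(self, planes, points):
        # planes: (3, C, H, W) feature planes (would come from a 2D CNN generator)
        # points: (N, 3) coordinates normalized to [-1, 1]
        xy = points[:, [0, 1]]
        xz = points[:, [0, 2]]
        yz = points[:, [1, 2]]
        feats = 0
        for plane, coords in zip(planes, (xy, xz, yz)):
            # grid_sample expects a (1, H_out, W_out, 2) grid; treat the N points as a 1xN grid.
            grid = coords.view(1, 1, -1, 2)
            sampled = F.grid_sample(plane[None], grid, mode="bilinear", align_corners=False)
            feats = feats + sampled.view(plane.shape[0], -1).t()  # (N, C)
        out = self.mlp(feats)
        density, rgb = out[:, :1], torch.sigmoid(out[:, 1:])
        return density, rgb

# Toy usage with random planes and points.
decoder = TriPlaneDecoder()
planes = torch.randn(3, 32, 128, 128)
points = torch.rand(1024, 3) * 2 - 1
density, rgb = decoder(planes, points)
print(density.shape, rgb.shape)  # torch.Size([1024, 1]) torch.Size([1024, 3])
```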
The imagineers (are they still called that?) promise a new way to create photorealistic full-head portrait renders from captured data without the need for artist intervention.
Our method begins with traditional face rendering, where the skin is rendered with the desired appearance, expression, viewpoint, and illumination. These skin renders are then projected into the latent space of a pre-trained neural network that can generate arbitrary photo-real face images (StyleGAN2).
The result is a sequence of realistic face images that match the identity and appearance of the 3D character at the skin level, but is completed naturally with synthesized hair, eyes, inner mouth and surroundings.
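The "projected into the latent space" step is essentially GAN inversion: optimize a latent code until the generator reproduces the skin render, then keep whatever hair, eyes, and background the generator fills in around it. Here's a generic, heavily simplified sketch of that idea; the generator is assumed to be any pre-trained StyleGAN2-style model exposed as a callable, and Disney's actual losses and projection scheme surely differ:

```python
import torch
import lpips  # pip install lpips; a learned perceptual distance metric

def project_to_latent(generator, target, latent_dim=512, steps=500, lr=0.05, device="cuda"):
    """Optimize a latent vector w so that generator(w) matches the target skin render.

    generator: callable mapping a (1, latent_dim) tensor to a (1, 3, H, W) image in [-1, 1]
    target:    (1, 3, H, W) tensor, the skin render, also scaled to [-1, 1]
    """
    percep = lpips.LPIPS(net="vgg").to(device)
    w = torch.zeros(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        img = generator(w)
        # Perceptual similarity plus a small pixel term keeps identity and skin appearance.
        loss = percep(img, target).mean() + 0.1 * (img - target).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Decoding the optimized latent yields a photoreal face matching the render at the
    # skin level, with hair, eyes, inner mouth, and surroundings synthesized by the GAN.
    return w.detach()
```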
10 years ago we put a totally gratuitous (but fun!) 3D view of the layers stack into Photoshop Touch. You couldn’t actually edit in that mode, but people loved seeing their 2D layers with 3D parallax.
More recently apps are endeavoring to turn 2D photos into 3D canvases via depth analysis (see recent Adobe research), object segmentation, etc. That is, of course, an extension of what we had in mind when adding 3D to Photoshop back in 2007 (!)—but depth capture & extrapolation weren’t widely available, and it proved too difficult to shoehorn everything into the PS editing model.
Now Mental Canvas promises to enable some truly deep expressivity:
I do wonder how many people could put it to good use. (Drawing well is hard; drawing well in 3D…?) I Want To Believe… It’ll be cool to see where this goes.