Category Archives: NeRF

Happy New Year!

Hey gang—here’s to having a great 2024 of making the world more beautiful & fun. Here’s a little 3D creation (with processing courtesy of Luma Labs) made from some New Year’s Eve drone footage I captured at Gaviota State Beach. (If it’s not loading for some reason, you can see a video version in this tweet).

NBA goes NeRF

Here’s a great look at how the scrappy team behind Luma.ai has helped enable beautiful volumetric captures of Phoenix Suns players soaring through the air:

Go behind the scenes of the innovative collaboration between Profectum Media and the Phoenix Suns to discover how we overcame technological and creative challenges to produce the first 3D bullet-time neural radiance field (NeRF) effect in an NBA arena video. This involved not just custom-building a 48-GoPro multi-cam volumetric rig but also integrating advanced AI tools from Luma AI to capture athletes in stunning, frozen-in-time 3D visual sequences. This venture is more than just a glimpse behind the scenes; it’s a peek into the evolving world of sports entertainment and the future of spatial capture.

Luma adds NeRF-powered fly-throughs

“Get cinematic and professional-looking drone Flythroughs in minutes from shaky amateur recorded videos.” The results are slick:

Tangentially, here’s another impressive application of Luma tech—turning drone footage into a dramatically manipulable 3D scene:

https://youtube.com/shorts/6eOLsKr224c?si=u1mWHM1qlNfbPuMf

“The AI-Powered Tools Supercharging Your Imagination”

I’m so pleased & even proud (having at least offered my encouragement to him over the years) to see my buddy Bilawal spreading his wings and spreading the good word about AI-powered creativity.

Check out his quick thoughts on “Channel-surfing realities layered on top of the real world,” “3D screenshots for the real world,” and more:

Favorite quote 😉:

“The impossibilities are endless”: Yet more NeRF magic

Last month Paul Trillo shared some wild visualizations he made by walking around Michelangelo’s David, then synthesizing 3D NeRF data. Now he’s upped the ante with captures from the Louvre:

Over in Japan, Tommy Oshima used the tech to fly around, through, and somehow under a playground, recording footage via a DJI Osmo + iPhone:

https://twitter.com/jnack/status/1616981915902554112?s=20&t=5LOmsIoifLw8oNVMV2fYIw

As I mentioned last week, Luma Labs has enabled interactive model embedding, and now they’re making the viewer crazy-fast:

The world’s first (?) NeRF-powered commercial

Karen X. Cheng, back with another 3D/AI banger:

As luck (?) would have it, the commercial dropped on the third anniversary of my former teammate Jon Barron & collaborators bringing NeRFs into existence:

More NeRF magic: From Michelangelo to NYC

This stuff—creating 3D neural models from simple video captures—continues to blow my mind. First up is Paul Trillo visiting the David:

Then here’s AJ from the NYT doing a neat day-to-night transition:

And lastly, Hugues Bruyère used a 360º camera to capture this scene, then animate it in post (see thread for interesting details):

https://twitter.com/smallfly/status/1604609303255605251?s=20&t=jdSW1NC_n54YTxsnkkFPJQ

More NeRF magic: Dolly zoom & beyond

It’s insane to me how much these emerging tools democratize storytelling idioms—and then take them far beyond previous limits. Recently Karen X. Cheng & co. created some wild “drone” footage simply by capturing handheld footage with a smartphone:

Now they’re creating an amazing dolly zoom effect, again using just a phone. (Click through to the thread if you’d like details on how the footage was (very simply) captured.)

Meanwhile, here’s a deeper dive on NeRF and how it’s different from “traditional” photogrammetry (e.g. in capturing reflective surfaces):

Neural JNack has entered the chat… 🤖

Last year my friend Bilawal Singh Sidhu, a PM driving 3D experiences for Google Maps/Earth, created an amazing 3D render (also available in galactic core form) of me sitting atop the Trona Pinnacles. At that time he used “traditional” photogrammetry techniques (kind of a funny thing to say about an emerging field that remains new to the world), and this year he tried processing the same footage (made up of a couple of simple orbits from my drone) using new Neural Radiance Field (“NeRF”) tech:

For comparison, here’s the 3D model generated via the photogrammetry approach:

The file is big enough that I’ve had some trouble loading it on my iPhone. If that affects you as well, check out this quick screen recording:

Check out NeRF Studio & some eye-popping results

The power & immersiveness of rendering 3D from images is growing at an extraordinary rate. NeRF Studio promises to make creation much more approachable:

https://twitter.com/akanazawa/status/1577686321119645696?s=20&t=OA61aUUy3A6P1aMQiUIzbA

The kind of results one can generate from just a series of photos or video frames is truly bonkers:

Here’s a tutorial on how to use it:
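
The video covers the details; as a rough outline (and with the caveat that command names and flags can shift between releases), its documented workflow boils down to a couple of CLI steps, sketched here via Python’s subprocess:

```python
import subprocess

VIDEO = "capture.mp4"         # placeholder: any casually shot walkthrough video
DATA_DIR = "processed_scene"  # posed frames + camera data land here

# Step 1: extract frames from the video and estimate camera poses.
subprocess.run(["ns-process-data", "video",
                "--data", VIDEO,
                "--output-dir", DATA_DIR], check=True)

# Step 2: train the default "nerfacto" model on the processed capture.
subprocess.run(["ns-train", "nerfacto",
                "--data", DATA_DIR], check=True)

# Training prints the path to a config.yml; pass it to ns-viewer (or ns-render)
# to fly around the reconstructed scene interactively.
```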

Capturing Reality With Machine Learning: A NeRF 3D Scan Compilation

Check out this high-speed overview of recent magic courtesy of my friend Bilawal:

Photogrammetry is an art form that has been around for decades, but it’s never looked better thanks to ML techniques like Neural Radiance Fields (NeRF). This video shows a wide range of 3D captures made using this technique. And I gotta say, NeRF really breathes new life into my old photo scans! All these datasets were posed in COLMAP and trained + rendered with NVIDIA’s free Instant NGP tools.
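
If you’re wondering what “posed in COLMAP” actually involves, here’s a minimal sketch of that step using COLMAP’s standard command-line tools, wrapped in Python (the paths are placeholders; the subsequent training and rendering happen in Instant NGP, which ships its own helper for converting COLMAP output):

```python
import os
import subprocess

# Placeholder paths -- point these at your own extracted video frames.
IMAGES = "scene/images"         # input photos / frames
DATABASE = "scene/database.db"  # COLMAP's feature + match database
SPARSE = "scene/sparse"         # output camera poses + sparse point cloud

os.makedirs(SPARSE, exist_ok=True)

# Standard COLMAP structure-from-motion pipeline:
# detect features, match them across images, then solve for camera poses.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", DATABASE,
                "--image_path", IMAGES], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", DATABASE], check=True)
subprocess.run(["colmap", "mapper",
                "--database_path", DATABASE,
                "--image_path", IMAGES,
                "--output_path", SPARSE], check=True)

# From here, Instant NGP's conversion script turns the recovered poses into
# the transforms file its trainer and renderer expect.
```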

“NeRF” promises amazing 3D capture

“This is certainly the coolest thing I’ve ever worked on, and it might be one of the coolest things I’ve ever seen.”

My Google Research colleague Jon Barron routinely makes amazing stuff, so when he gets a little breathless about a project, you know it’s something special. I’ll pass the mic to him to explain their new work around capturing multiple photos, then synthesizing a 3D model:

I’ve been collaborating with Berkeley for the last few months and we seem to have cracked neural rendering. You just train a boring (non-convolutional) neural network with five inputs (xyz position and viewing angle) and four outputs (RGB+alpha), combine it with the fundamentals of volume rendering, and get an absurdly simple algorithm that beats the state of the art in neural rendering / view synthesis by *miles*.

You can change the camera angle, change the lighting, insert objects, extract depth maps — pretty much anything you would do with a CGI model, and the renderings are basically photorealistic. It’s so simple that you can implement the entire algorithm in a few dozen lines of TensorFlow.
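
To make that concrete, here’s a tiny, untrained NumPy sketch of the recipe Jon describes: a plain fully connected network maps a 3D position plus two viewing angles (five inputs) to color and density (four outputs), and the samples along each camera ray are composited with the standard volume-rendering quadrature. It’s purely illustrative (random weights, no positional encoding or hierarchical sampling), not the team’s actual TensorFlow implementation:

```python
import numpy as np

def init_mlp(layer_sizes, seed=0):
    """Random (untrained) weights for a boring fully connected network."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def radiance_field(params, xyz, view_angles):
    """Five inputs (x, y, z, theta, phi) -> four outputs (R, G, B, density)."""
    h = np.concatenate([xyz, view_angles], axis=-1)
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)              # ReLU on hidden layers
    rgb = 1.0 / (1.0 + np.exp(-h[..., :3]))     # colors squashed into [0, 1]
    sigma = np.maximum(h[..., 3], 0.0)          # non-negative volume density
    return rgb, sigma

def render_ray(params, origin, direction, near=0.0, far=4.0, n_samples=64):
    """Alpha-composite field samples along one ray (volume rendering)."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction       # sample points along the ray
    theta = np.arccos(np.clip(direction[2], -1.0, 1.0))
    phi = np.arctan2(direction[1], direction[0])
    angles = np.broadcast_to(np.array([theta, phi]), (n_samples, 2))
    rgb, sigma = radiance_field(params, pts, angles)
    delta = np.diff(t, append=t[-1] + 1e10)     # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)        # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0) # composited pixel color

params = init_mlp([5, 64, 64, 4])
pixel = render_ray(params, origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]))
print(pixel)  # a meaningless color until the network is trained on posed photos
```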

Check it out in action:

[YouTube]