Category Archives: AR/VR

Google AR search gets super buggy!

…In the best possible way, of course.

My mom loves to remind me about how she sweltered, hugely pregnant with me, through a muggy Illinois summer while listening to cicadas drone on & on. Now I want to bring a taste of the ’70s back to her via Google’s latest AR content.

You can now search for all these little (and not-so-little) guys via your Android or iPhone and see them in your room:

Here’s a list of new models:

  • Rhinoceros beetle
  • Hercules beetle
  • Atlas beetle
  • Stag beetle
  • Giant stag
  • Miyama stag beetle
  • Shining ball scarab beetle
  • Jewel beetle
  • Ladybug
  • Firefly
  • Rosalia batesi
  • Swallowtail butterfly
  • Morpho butterfly
  • Atlas moth
  • Mantis
  • Grasshopper
  • Dragonfly
  • Hornet
  • Robust cicada
  • Brown cicada
  • Periodical cicada
  • Walker’s cicada
  • Evening cicada

Disney unveils high-res face-swapping tech

“Your scientists were so preoccupied…” that, well, you know.

PetaPixel writes,

“To the best of our knowledge, this is the first method capable of rendering photo-realistic and temporally coherent results at megapixel resolution,” the team of researchers at Disney Research Studios and ETH Zurich write in their new paper, titled “High-Resolution Neural Face Swapping for Visual Effects.”

The newly developed method aims to “disentangle” the “static identity information” of a person’s face from the “dynamic behavioral information,” allowing any performance to be transferred between any two people.
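To make that split a bit more concrete, here is a rough conceptual sketch (in PyTorch) of the shared-encoder, per-identity-decoder idea behind this family of face-swapping models. It is emphatically not Disney’s actual megapixel architecture, just the general shape of the technique:

```python
# Conceptual sketch (not Disney's code): a shared encoder captures the
# "behavioral" information of a face crop, while per-identity decoders
# re-render it with a specific person's "static identity."
import torch
import torch.nn as nn

class FaceSwapSketch(nn.Module):
    def __init__(self, num_identities: int, latent_dim: int = 256):
        super().__init__()
        # Shared encoder: face crop -> expression/pose latent
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(latent_dim),
        )
        # One decoder per identity: latent -> that person's face
        self.decoders = nn.ModuleList(
            nn.Sequential(
                nn.Linear(latent_dim, 128 * 32 * 32), nn.ReLU(),
                nn.Unflatten(1, (128, 32, 32)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )
            for _ in range(num_identities)
        )

    def forward(self, face: torch.Tensor, target_identity: int) -> torch.Tensor:
        # Swapping = encode anyone's performance, decode as the target identity.
        return self.decoders[target_identity](self.encoder(face))

model = FaceSwapSketch(num_identities=2)
fake_frame = torch.rand(1, 3, 128, 128)           # stand-in for an aligned face crop
swapped = model(fake_frame, target_identity=1)    # render it as identity #1
```

Because a single encoder has to serve every identity, training pushes it to keep only what is common across people (expression, pose, lighting), while each decoder supplies one person’s appearance; swapping is then just decoding one person’s performance through another person’s decoder.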

Fun with Natzke’s virtual Legos

Artist/technologist Erik Natzke has kept me inspired for the better part of 20 years. His work played a key role in sending me down a multi-year rabbit hole trying to get Flash (and later HTML) to be a live layer type within Photoshop and other Adobe apps. The creative possibilities were tremendous, and though I’ll always be sad we couldn’t make it happen, I’m glad we tried & grateful for the inspiration.

Anyway, since going independent following a multi-year stint at Adobe, Erik has been sharing delightful AR explorations—recently featuring virtual Legos interacting with realtime depth maps of a scene. He’s been sharing so much so quickly lately that I can’t keep up and would encourage you to follow his Twitter & Instagram feeds, but meanwhile here are some fun tastes:

Now, how soon until we can create the Fell In Love With A Girl video in realtime? 😌🤘

Virtual backgrounds & blurs are coming to Google Meet

It may seem like a small thing, but I’m happy to say that my previous team’s work on realtime human segmentation + realtime browser-based machine learning will be coming to Google Meet soon, powering virtual backgrounds:

Since making Google Meet premium video meetings free and available to everyone, we’ve continued to accelerate the development of new features… In the coming months, we’ll make it easy to blur out your background, or replace it with an image of your choosing so you can keep your team’s focus solely on you. 
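Meet’s version runs right in the browser, but if you’d like to play with the same segment-and-composite idea on your own machine, here’s a minimal Python sketch using MediaPipe’s selfie-segmentation model. The webcam index and "beach.jpg" background are placeholders, and this is of course not Meet’s actual pipeline:

```python
# Not Meet's pipeline -- just a sketch of the same idea (realtime person
# segmentation + background replacement) using MediaPipe's Python package.
import cv2
import mediapipe as mp
import numpy as np

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
background = cv2.imread("beach.jpg")                     # any replacement image

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # segmentation_mask is a float map: ~1.0 where the person is, ~0.0 elsewhere
    mask = result.segmentation_mask[..., None] > 0.5
    bg = cv2.resize(background, (frame.shape[1], frame.shape[0]))
    composite = np.where(mask, frame, bg)
    cv2.imshow("virtual background", composite)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

A real product feathers and temporally smooths the mask rather than hard-thresholding it at 0.5, which is why shipping implementations look so much cleaner around the edges than this sketch.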


Google Maps improves “blue dot” accuracy via AR Live View

My team has been collaborating with Maps folks for the last year+ to power great AR experiences, and while the feature below isn’t AR per se, it leverages the same tech stack to address a longstanding problem with GPS. Per 9to5 Google,

Google is now letting you “Calibrate with Live View” to improve the accuracy of the blue dot in Maps. Most are familiar with the dot that marks current location having a beam to signify what direction you’re pointed in. […]

Tapping the blue circle will open a full-screen menu with the new option at the bottom under “Save your parking.” This will launch the same camera UI used by Live View and should only take a few seconds of panning.

Afterwards, your location should be highly accurate and not jump around. Meanwhile, the beam is replaced by a solid arrow.

AR dinosaurs stomp into Google search

Check out some fun new work from my team, available now on both Android & iOS:

Google started adding augmented reality animals to searches last year at Google I/O and has since introduced a veritable menagerie, covering cats, scorpions, bears, tigers, and many more. Now, a herd of dinosaurs has also been added to this list, each of which uses graphics developed for the Jurassic World Alive augmented reality mobile game.

The full list of available dinosaurs includes the Tyrannosaurus rex, Velociraptor, Triceratops, Spinosaurus, Stegosaurus, Brachiosaurus, Ankylosaurus, Dilophosaurus, Pteranodon, and Parasaurolophus.

ARCore rolls out depth support

Exciting news from my teammates:

Today, we’re taking a major step forward and announcing the Depth API is available in ARCore 1.18 for Android and Unity, including AR Foundation, across hundreds of millions of compatible Android devices.

As we highlighted last year, a key capability of the Depth API is occlusion: the ability for digital objects to accurately appear behind real world objects. This makes objects feel as if they’re actually in your space, creating a more realistic AR experience.
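Under the hood, occlusion boils down to a per-pixel depth comparison: draw the virtual pixel only where it is closer to the camera than whatever the depth map says is really there. Here’s a toy numpy illustration of that test (the real thing happens on the GPU against ARCore’s smoothed depth image, not in Python):

```python
# Not the ARCore API -- just the core idea behind depth-based occlusion:
# a virtual pixel is drawn only where it is closer to the camera than the
# real-world surface reported by the depth map.
import numpy as np

def composite_with_occlusion(camera_rgb, real_depth_m, virtual_rgb, virtual_depth_m):
    """All inputs are HxW (x3 for RGB) arrays aligned to the camera image."""
    virtual_visible = (virtual_depth_m > 0) & (virtual_depth_m < real_depth_m)
    out = camera_rgb.copy()
    out[virtual_visible] = virtual_rgb[virtual_visible]
    return out

# Toy example: a virtual object 1.5 m away, with a real wall at 1.0 m on the left
h, w = 4, 6
camera_rgb  = np.zeros((h, w, 3), dtype=np.uint8)
virtual_rgb = np.full((h, w, 3), 255, dtype=np.uint8)
real_depth  = np.full((h, w), 3.0); real_depth[:, :3] = 1.0   # wall on the left half
virt_depth  = np.full((h, w), 1.5)                            # virtual object everywhere
print(composite_with_occlusion(camera_rgb, real_depth, virtual_rgb, virt_depth)[0, :, 0])
# -> virtual pixels appear only on the right half, where nothing real is closer
```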

Check out the new Depth Lab app (also available as an open-source Unity project) to try it for yourself. You can play hide-the-hot-dog with Snap, as well as check out an Android-exclusive undersea lens:

ML Kit gets pose detection

This is kinda inside-baseball, but I’m really happy that friends from my previous team will now have their work distributed on hundreds of millions, if not billions, of devices:

[A] face contours model — which can detect over 100 points in and around a user’s face and overlay masks and beautification elements atop them — has been added to the list of APIs shipped through Google Play Services…

Lastly, two new APIs are now available as part of the ML Kit early access program: entity extraction and pose detection… Pose detection supports 33 skeletal points like hands and feet tracking.
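ML Kit itself ships as an Android/iOS SDK, but if you just want a feel for those 33 skeletal points, a closely related pose model is exposed from Python via MediaPipe. A quick sketch (the "person.jpg" path is a placeholder):

```python
# Not the ML Kit API -- a quick way to inspect a 33-landmark pose skeleton
# from Python using MediaPipe's Pose solution.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
with mp_pose.Pose(static_image_mode=True) as pose:
    image = cv2.imread("person.jpg")                      # any photo of a person
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        for idx, lm in enumerate(results.pose_landmarks.landmark):
            # Landmarks are normalized to [0, 1]; visibility estimates confidence.
            print(f"{mp_pose.PoseLandmark(idx).name:18s} "
                  f"x={lm.x:.2f} y={lm.y:.2f} visibility={lm.visibility:.2f}")
```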

Let’s see what rad stuff the world can build with these foundational components. Here’s an example of folks putting an earlier version to use, and you can find a ton more in my Body Tracking category:

[Via]

Google releases “Sodar,” visualizing social distancing via WebXR

It’s as much about testing/showcasing emerging standards as anything. Per The Verge:

If you’ve got an Android device, just open up the Chrome browser and go to goo.gle/sodar to launch the tool, named SODAR. There’s no app required, though it won’t work on iOS or older Android devices. Your phone will use augmented reality to map the space around you, superimposing a two-meter radius circle on the view from your camera.
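I haven’t dug into how Sodar is built, but the basic trick of drawing a world-anchored circle is just camera geometry: sample points on a two-meter ring on the floor around you and project them through the camera intrinsics. A much-simplified numpy sketch (level camera, no device tilt, made-up intrinsics):

```python
# Not Sodar's implementation -- just the pinhole-camera math behind drawing a
# world-anchored ring: sample points on a 2 m circle on the floor, express them
# in camera coordinates, and project with the intrinsics.
import numpy as np

def project_ring(radius_m, camera_height_m, fx, fy, cx, cy, n=64):
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    # Camera-aligned frame: x right, y down (floor sits camera_height below
    # the device), z forward. Assumes the phone is held level, no tilt.
    pts = np.stack([radius_m * np.cos(angles),
                    np.full(n, camera_height_m),
                    radius_m * np.sin(angles)], axis=1)
    pts = pts[pts[:, 2] > 0.1]                 # keep only points in front of the camera
    u = fx * pts[:, 0] / pts[:, 2] + cx        # standard pinhole projection
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)            # pixel coordinates to draw

# Example: a phone held 1.4 m up, with rough 1080p intrinsics
pixels = project_ring(radius_m=2.0, camera_height_m=1.4,
                      fx=1400, fy=1400, cx=540, cy=960)
print(pixels[:3])
```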

Garments strut on their own in a 3D fashion show

No models, no problem: Congolese designer Anifa Mvuemba used software to show off her designs swaying in virtual space:

https://twitter.com/goatsandbacon/status/1264697586755047425?s=20

Cool context:

Inspired by her hometown in Congo, Anifa was intentional about shedding light on issues facing the Central African country with a short documentary at the start of the show. From mineral site conditions to the women and children who suffer as a result of these issues, Anifa’s mission was to educate before debuting any clothes. “Serving was a big part of who I am, and what I want to do,” she said in the short documentary.

Katy Perry live pushes the limits of mixed-reality storytelling

I can’t say I love the song, but props to the whole team who pulled off this ambitious set piece:

As VR Scout notes,

With the exception of a single yellow chair, it appears as though every visual shown during the performance was generated in post. What really sells the performance, however, is the choreography. Throughout the entirety of the performance, Perry reacts and responds to every visual element shown “on-stage”.

Come learn & share in AR, right from Google search

So, this is what I do all day: I help shape the efforts of a bunch of smart people working to make Google better at answering questions, by making the results a lot richer & more interactive.

My subtle-not-subtle ambition is to help creative people (like those I served at Adobe) bring their beautiful, immersive work (3D & AR) to an enormous audience, by solving the last-mile problem much like Flash Player did back in the day. It’s all about my 20-year mission of standing out of creators’ light.

Recently the team has turned on some great new features & content. On both iOS & Android you can search for numerous subjects (animals, biology, anatomy, and more), then view the results in 3D or in your environment via AR. On Android you can also now navigate among search results, record videos, and share the results.

We’re partnering with BioDigital so that you can explore 11 human body systems with AR in Search on mobile. Search for circulatory system and tap “View in 3D” to see a heart up close or look up skeletal system to trace the bones in the human body and see how they connect. Read labels on each body part to learn more about it or view life-size images in AR to better understand its scale.

Spatial promises video chat in VR & AR

I can’t wait for my CrossFit-via-Zoom classes to go this way, so I’m surrounded by a bunch of glitchy avatars sweating through burpees. 🙃

Promised features:

  • “Join from a VR/AR headset or PC/Phone.
  • Create your lifelike avatar from a single 2D selfie in seconds.
  • Organize 3d models, videos, docs, images, notes and even your own screen.
  • Instantly set up rooms, scribble, search, pin information, save it, and access it anytime.
  • Easily share your room and invite anyone to join your meeting”

New AR effects debut in Google Duo

In the past I’ve mentioned augmented reality lipstick, eyeshadow, & entertainment effects running in YouTube. I’m pleased to say that fun effects are arriving in Google Duo as well:

In addition to bringing masks and effects to our new family mode, we’re bringing them to any one-on-one video calls on Android and iOS—starting this week with a Mother’s Day effect. We’re also rolling out more effects and masks that help you express yourself, from wearing heart glasses to transforming into a flower. 

Cut & paste your surroundings to Photoshop

This near-realtime segmentation, copy, and paste is wild:

Inside XR writes,

In a Twitter thread, Diagne said the secret is BASNet, an architecture for salient object detection with boundaries. (Paper here).

The delay is about 2.5 seconds to cut and 4 seconds to paste, though Diagne notes there are ways to speed that up.

The GitHub page for the project is available here.
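If you want to tinker with the “cut” half yourself, the recipe is essentially “salient-object mask becomes the alpha channel.” Here’s a rough sketch, with a placeholder standing in for the actual BASNet inference (this is not Diagne’s code):

```python
# A sketch of the "cut" step only: a salient-object model such as BASNet yields
# a soft mask, which becomes the alpha channel of an RGBA cutout ready to paste
# elsewhere. `predict_saliency` is a hypothetical stand-in for the real model.
import cv2
import numpy as np

def predict_saliency(image_bgr: np.ndarray) -> np.ndarray:
    """Placeholder: return a float mask in [0, 1], 1.0 on the salient object."""
    raise NotImplementedError("swap in a real BASNet (or similar) inference call")

def cut_salient_object(path_in: str, path_out: str) -> None:
    image = cv2.imread(path_in)
    mask = predict_saliency(image)                         # HxW float in [0, 1]
    alpha = (np.clip(mask, 0.0, 1.0) * 255).astype(np.uint8)
    rgba = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)
    rgba[:, :, 3] = alpha                                  # transparent background
    cv2.imwrite(path_out, rgba)                            # PNG keeps the alpha

# cut_salient_object("desk_photo.jpg", "cutout.png")
```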

Matterport brings 3D room-scanning & reconstruction to iPhone

Hmm—I look forward to taking this thing for a spin:

The developers write,

MATTERPORT CAPTURE APP ALLOWS YOU TO:
* Share your 3D virtual tour on social and messaging platforms with a Matterport-generated URL
* Sit back and relax, with automatic image processing, color correction, and face blurring
* Guide viewers around by highlighting features in your space with Mattertags and labels
* Add measurements to your 3D capture to accurately size the space

Open-source AR face doodling, right in your browser

Back when I was pitching myself for the job I somehow got in Google AI’s Perception group, I talked a lot about democratizing access to perceptive tech to enable permissionless innovation. Not that I can take any credit for it, but I love seeing more of the vision become reality through tech the team has built:

Google Doodle “Back to the Moon” jumps to AR

Et voila:

VR Scout writes,

The fully immersive AR experience brings to life a magical world crafted by Georges Méliès, a French illusionist and film director from the early 1900s… Back to the Moon transports you into Méliès’ magical world inspired by some of his most well-known films. The experience honors Méliès’ unique style of filmmaking, highlighting several of his groundbreaking techniques… [It] features an original score by composer Mathieu Alvado performed by the legendary London Symphony Orchestra.

Download Back to the Moon in AR through the free Google Spotlights Stories app on Android or iOS.

[YouTube]

Charmingly low-fi AR: Skiing in the living room

Hah:

As PetaPixel notes,

“Just before the current health situation locked us in, I was about to go Freeriding with my family. It was supposed to be the big adventure of the year, the one I had been eagerly awaiting for a year,” explains Herrero. “Therefore, the lockdown had me thinking about skiing the whole time, so I started to think how I could ski without leaving my living room.”

[YouTube]

Oil paintings come alive in AR

A couple of years ago, Adobe unveiled some really promising style-transfer tech that could apply the look of oil paintings to animated characters:

I have no idea whether it uses any of the same tech, but now 8th Wall is bringing a similar-looking experience to augmented reality via an entirely browser-based stack—very cool:

 [YouTube]

Free streaming classes on photography, 3D

It’s really cool to see companies stepping up to help creative people make the most of our forced downtime. PetaPixel writes,

If you’re a photographer stuck at home due to the coronavirus pandemic, Professional Photographers of America (PPA) has got your back. The trade association has made all of its 1,100+ online photography classes free for the next two weeks. […]

You can spend some of your lockdown days learning everything from how to make money in wedding photography to developing a target audience to printing in house.


Meanwhile Unity is opening up their Learn Premium curricula:

During the COVID-19 crisis, we’re committed to supporting the community with complimentary access to Unity Learn Premium for three months (March 19 through June 20). Get exclusive access to Unity experts, live interactive sessions, on-demand learning resources, and more.


“NeRF” promises amazing 3D capture

“This is certainly the coolest thing I’ve ever worked on, and it might be one of the coolest things I’ve ever seen.”

My Google Research colleague Jon Barron routinely makes amazing stuff, so when he gets a little breathless about a project, you know it’s something special. I’ll pass the mic to him to explain their new work around capturing multiple photos, then synthesizing a 3D model:

I’ve been collaborating with Berkeley for the last few months and we seem to have cracked neural rendering. You just train a boring (non-convolutional) neural network with five inputs (xyz position and viewing angle) and four outputs (RGB+alpha), combine it with the fundamentals of volume rendering, and get an absurdly simple algorithm that beats the state of the art in neural rendering / view synthesis by *miles*.

You can change the camera angle, change the lighting, insert objects, extract depth maps — pretty much anything you would do with a CGI model, and the renderings are basically photorealistic. It’s so simple that you can implement the entire algorithm in a few dozen lines of TensorFlow.
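To make that concrete, here’s a drastically simplified sketch of the recipe Jon describes: a plain five-in, four-out MLP plus the volume-rendering compositing step. It skips the positional encoding and hierarchical sampling that make the real thing work so well, and it’s certainly not the paper’s code:

```python
# A heavily simplified NeRF-style sketch: an MLP maps (x, y, z, view angles)
# to (R, G, B, density), and volume rendering composites samples along a ray.
import numpy as np
import tensorflow as tf

# 5 inputs: xyz position + viewing direction as two angles; 4 outputs: RGB + density
nerf_mlp = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(4),
])

def render_ray(samples_5d: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Composite N samples along one camera ray into a single RGB pixel."""
    out = nerf_mlp(samples_5d.astype(np.float32)).numpy()
    rgb = 1 / (1 + np.exp(-out[:, :3]))               # sigmoid -> colors in [0, 1]
    sigma = np.maximum(out[:, 3], 0.0)                # non-negative density
    alpha = 1.0 - np.exp(-sigma * deltas)             # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)       # weighted color along the ray

samples = np.random.rand(64, 5)          # 64 points sampled along one toy ray
deltas = np.full(64, 0.05)               # spacing between consecutive samples
print(render_ray(samples, deltas))       # -> one RGB value for that pixel
```

Training then just compares rendered pixels against the captured photos and backpropagates through both the compositing step and the MLP.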

Check it out in action:

[YouTube]

Google DeepMind helps briefly reanimate an extinct rhino in AR

Yeesh—talk about bittersweet at best:

He first appears as a crude collection of 3-D pixels—or voxels. Soon, he looks like a conglomeration of blocks morphing into the shape of an animal. Gradually, his image evolves until he becomes a sharp representation of a northern white rhino, grunting and squealing as he might in a grassy African or Asian field. There comes a moment—just a moment—when the viewer’s eyes meet his. Then, the 3-D creature vanishes, just like his sub-species, which due to human poaching is disappearing into extinction.

Smithsonian continues,

The Mill, which has studios in London, New York, Los Angeles, Chicago, Bangalore and Berlin, provided animation for this project, and Dr. Andrea Banino at DeepMind, an international company that develops useful forms of artificial intelligence, provided the experimental data to set the rhino’s paths. After each two-minute episode, the rhino reappears and follows another of the three programmed paths.

Exeunt. Sadness ensues. [Vimeo]


Check out the gesture-sensing holographic Looking Glass

This little dude looks nifty as heck:

The Looking Glass is powered by our proprietary 45-element light field technology, generating 45 distinct and simultaneous perspectives of three-dimensional content of any sort.

This means multiple people around a Looking Glass are shown different perspectives of that three-dimensional content—whether that’s a 3D animation, DICOM medical imaging data, or a Unity project – in super-stereoscopic 3D, in the real world without any VR or AR headgear.


[Vimeo]

AR: The floor is lava

Back in 2014, Action Movie Dad posted a delightful vid of his niño evading the hot foot:

But now instead of needing an hour-long tutorial on how to create this effect, you can do it in realtime, with zero effort, on your friggin’ telephone. (Old Man Nack does wonder just how much this cheapens the VFX coin—but on charges progress.)

https://twitter.com/tomemrich/status/1230535407609057281


[YouTube]

Cloaking device engaged: Going invisible via Google’s browser-based ML

Heh—here’s a super fun application of body tracking tech (see whole category here for previous news) that shows off how folks have been working to redefine what’s possible with realtime machine learning on the Web (!):

AuraRing, a trippy ring + wristband combo gesture system

Hmm—AR glasses + smart watch (or FitBit) + ring? 🧐

VentureBeat writes,

[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.

[YouTube] [Via]

Visually inspecting trains at high speed

It’s pretty OT for my blog, I know, but as someone who’s been working in computer vision for the last couple of years, I find it interesting to see how others are applying these techniques.

Equipped with ultra-high definition cameras and high-powered illumination, the [Train Inspection Portal (TIP)] produces 360° scans of railcars passing through the portal at track speed. Advanced machine vision technology and software algorithms identify defects and automatically flag cars for repair.

[Vimeo]

Apple releases Reality Converter

I found myself blocked from doing anything interesting with Apple’s Reality Composer tool due to the lack of readily available USDZ-format files. My kingdom for a Lego minifig!

Therefore it’s cool to see that they’ve released a simple utility meant to facilitate conversion:

The new Reality Converter app makes it easy to convert, view, and customize USDZ 3D objects on Mac. Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd, to view the converted USDZ result, customize material properties with your own textures, and edit file metadata. You can even preview your USDZ object under a variety of lighting and environment conditions with built-in IBL options.

Come build on Google’s open-source ML & computer vision

Not Hot Dog! My teammates have just shared a tech for recognizing objects & tracking them over time:

https://twitter.com/googledevs/status/1204497003746643969

The AI dev blog explains,

In MediaPipe v0.6.7.1, we are excited to release a box tracking solution, that has been powering real-time tracking in Motion Stills, YouTube’s privacy blur, and Google Lens for several years and that is leveraging classic computer vision approaches. Pairing tracking with ML inference results in valuable and efficient pipelines. In this blog, we pair box tracking with object detection to create an object detection and tracking pipeline. With tracking, this pipeline offers several advantages over running detection per frame.
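MediaPipe’s graphs are C++ under the hood, but the pattern described above (pay for detection only occasionally, and let a cheap tracker carry the boxes in between) is easy to sketch. Here’s a rough Python version using OpenCV optical flow and a placeholder detector; it’s not MediaPipe’s actual box tracker, just the same idea:

```python
# Detect-then-track sketch: run the expensive detector every N frames and
# propagate boxes cheaply in between. `run_detector` is a hypothetical
# stand-in for any object detector returning (x, y, w, h) boxes.
import itertools
import cv2
import numpy as np

DETECT_EVERY = 10

def run_detector(frame):
    """Placeholder: call a real detection model here."""
    raise NotImplementedError

def shift_box(prev_gray, gray, box):
    """Move a box by the median optical flow of feature points inside it."""
    x, y, w, h = box
    x, y = max(int(x), 0), max(int(y), 0)
    pts = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w], 50, 0.01, 5)
    if pts is None:
        return box
    pts = (pts + np.array([x, y], dtype=np.float32)).reshape(-1, 1, 2)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    flow = (new_pts - pts).reshape(-1, 2)[status.ravel() == 1]
    if len(flow) == 0:
        return box
    dx, dy = np.median(flow, axis=0)
    return (int(x + dx), int(y + dy), w, h)

cap, boxes, prev_gray = cv2.VideoCapture("clip.mp4"), [], None
for i in itertools.count():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if i % DETECT_EVERY == 0:
        boxes = run_detector(frame)                             # costly, so do it rarely
    elif prev_gray is not None:
        boxes = [shift_box(prev_gray, gray, b) for b in boxes]  # cheap in-between
    prev_gray = gray
```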

Read on for more, and let us know what you create!

Depth sensing comes to ARCore

I’m delighted to say that my team has unveiled depth perception in ARCore. Here’s a quick taste:

Check out how it enables real objects to occlude virtual ones:

Here’s a somewhat deeper dive into the whole shebang:

The features are designed to be widely available, not requiring special sensors:

The Depth API is not dependent on specialized cameras and sensors, and it will only get better as hardware improves. For example, the addition of depth sensors, like time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps to improve existing capabilities like occlusion, and unlock new capabilities such as dynamic occlusion—the ability to occlude behind moving objects.

And we’re looking for partners:

We’ve only begun to scratch the surface of what’s possible with the Depth API and we want to see how you will innovate with this feature. If you are interested in trying the new Depth API, please fill out our call for collaborators form.

[YouTube 1 & 2]

Google releases open-source, browser-based BodyPix 2.0

Over the last couple of years I’ve pointed out a number of cool projects (e.g. driving image search via your body movements) powered by my teammates’ efforts to 1) deliver great machine learning models, and 2) enable Web browsers to run them efficiently. Now they’ve released BodyPix 2.0 (see live demo):

We are excited to announce the release of BodyPix, an open-source machine learning model which allows for person and body-part segmentation in the browser with TensorFlow.js. With default settings, it estimates and renders person and body-part segmentation at 25 fps on a 2018 15-inch MacBook Pro, and 21 fps on an iPhone X.

Enjoy!

“Extreme Augmented Reality”: Nvidia connects cloud & handset

Instead of squeezing models down to a couple of megs & constraining rendering to what a phone alone can do, what if you could use a full-power game engine to render gigabyte-sized models in realtime in the cloud, streaming the results onto your device to combine with the world? That’s the promise of Nvidia CloudXR (which looks similar to Microsoft Azure Remote Rendering), announced this week:

[YouTube]