Category Archives: VR/AR

Visually inspecting trains at high speed

It’s pretty off-topic for my blog, I know, but as someone who’s been working in computer vision for the last couple of years, I find it interesting to see how others are applying these techniques.

Equipped with ultra-high definition cameras and high-powered illumination, the [Train Inspection Portal (TIP)] produces 360° scans of railcars passing through the portal at track speed. Advanced machine vision technology and software algorithms identify defects and automatically flag cars for repair.

[Vimeo]

Apple releases Reality Converter

I found myself blocked from doing anything interesting with Apple’s Reality Composer tool due to the lack of readily available USDZ-format files. My kingdom for a Lego minifig!

So it’s cool to see that they’ve released a simple utility that facilitates conversion:

The new Reality Converter app makes it easy to convert, view, and customize USDZ 3D objects on Mac. Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd, to view the converted USDZ result, customize material properties with your own textures, and edit file metadata. You can even preview your USDZ object under a variety of lighting and environment conditions with built-in IBL options.
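Once you’ve exported a USDZ, iOS Safari can show it in AR via Quick Look. Here’s a minimal sketch (in TypeScript, with placeholder file names) of the `rel="ar"` link convention Safari recognizes:

```typescript
// Minimal sketch: embed a converted USDZ for AR Quick Look on iOS Safari.
// "chair.usdz" and "chair-preview.jpg" are placeholder file names.
const link = document.createElement("a");
link.rel = "ar";           // signals AR Quick Look to iOS Safari
link.href = "chair.usdz";  // the USDZ exported from Reality Converter

const preview = document.createElement("img");
preview.src = "chair-preview.jpg"; // Quick Look expects an <img> child as the tap target
link.appendChild(preview);

document.body.appendChild(link);
```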

Come build on Google’s open-source ML & computer vision

Not Hot Dog! My teammates have just shared tech for recognizing objects & tracking them over time:

https://twitter.com/googledevs/status/1204497003746643969

The AI dev blog explains,

In MediaPipe v0.6.7.1, we are excited to release a box tracking solution that has been powering real-time tracking in Motion Stills, YouTube’s privacy blur, and Google Lens for several years, leveraging classic computer vision approaches. Pairing tracking with ML inference results in valuable and efficient pipelines. In this blog, we pair box tracking with object detection to create an object detection and tracking pipeline. With tracking, this pipeline offers several advantages over running detection per frame.
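To make the idea concrete, here’s a rough sketch of the detect-then-track pattern (illustrative only, not MediaPipe’s actual API; `runDetector` and `trackBoxes` are hypothetical stand-ins): run the expensive ML detector only every N frames, and let a cheap classic-CV tracker carry the boxes in between.

```typescript
interface Box { x: number; y: number; w: number; h: number; label: string }

// Hypothetical stand-ins: a real pipeline would plug in an ML detector
// (slow, accurate) and a motion-based tracker (fast, approximate).
function runDetector(frame: ImageData): Box[] {
  return [{ x: 0, y: 0, w: 64, h: 64, label: "object" }]; // placeholder result
}
function trackBoxes(frame: ImageData, boxes: Box[]): Box[] {
  return boxes; // placeholder: a real tracker would follow scene motion
}

const DETECT_EVERY_N_FRAMES = 10;
let boxes: Box[] = [];

function onFrame(frame: ImageData, frameIndex: number): Box[] {
  if (frameIndex % DETECT_EVERY_N_FRAMES === 0) {
    boxes = runDetector(frame);       // periodic refresh catches new/lost objects
  } else {
    boxes = trackBoxes(frame, boxes); // cheap propagation on in-between frames
  }
  return boxes;
}
```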

Read on for more, and let us know what you create!

Depth sensing comes to ARCore

I’m delighted to say that my team has unveiled depth perception in ARCore. Here’s a quick taste:

Check out how it enables real objects to occlude virtual ones:

Here’s a somewhat deeper dive into the whole shebang:

The features are designed to be widely available, not requiring special sensors:

The Depth API is not dependent on specialized cameras and sensors, and it will only get better as hardware improves. For example, the addition of depth sensors, like time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps to improve existing capabilities like occlusion, and unlock new capabilities such as dynamic occlusion—the ability to occlude behind moving objects.
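If you’re curious how occlusion falls out of a depth map, here’s a toy sketch (illustrative only, not the ARCore API): virtual content is kept only where it’s closer to the camera than the real surface measured at that pixel. In practice this test runs per-fragment in a shader.

```typescript
// Toy depth-test sketch (not the ARCore API): compare the real-world depth
// map against the depth of rendered virtual content, pixel by pixel.
//
// realDepth:    meters to the nearest real surface at each pixel
// virtualDepth: meters to the virtual object at each pixel (Infinity = none)
function occlusionMask(realDepth: Float32Array, virtualDepth: Float32Array): Uint8Array {
  const visible = new Uint8Array(realDepth.length);
  for (let i = 0; i < realDepth.length; i++) {
    // Keep a virtual pixel only if it sits in front of the real scene;
    // otherwise the real object occludes it.
    visible[i] = virtualDepth[i] < realDepth[i] ? 1 : 0;
  }
  return visible;
}
```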

And we’re looking for partners:

We’ve only begun to scratch the surface of what’s possible with the Depth API and we want to see how you will innovate with this feature. If you are interested in trying the new Depth API, please fill out our call for collaborators form.

[YouTube 1 & 2]

Google releases open-source, browser-based BodyPix 2.0

Over the last couple of years I’ve pointed out a number of cool projects (e.g. driving image search via your body movements) powered by my teammates’ efforts to 1) deliver great machine learning models, and 2) enable Web browsers to run them efficiently. Now they’ve released BodyPix 2.0 (see live demo):

We are excited to announce the release of BodyPix, an open-source machine learning model which allows for person and body-part segmentation in the browser with TensorFlow.js. With default settings, it estimates and renders person and body-part segmentation at 25 fps on a 2018 15-inch MacBook Pro, and 21 fps on an iPhone X.
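If you want to kick the tires, here’s a minimal sketch using the 2.0 API (the "webcam" and "output" element IDs are placeholders):

```typescript
import "@tensorflow/tfjs";
import * as bodyPix from "@tensorflow-models/body-pix";

// Minimal sketch: segment the person in a <video> element and composite the
// resulting mask onto a canvas.
async function run(): Promise<void> {
  const video = document.getElementById("webcam") as HTMLVideoElement;
  const canvas = document.getElementById("output") as HTMLCanvasElement;

  const net = await bodyPix.load();                    // fetch the model
  const segmentation = await net.segmentPerson(video); // per-pixel person mask

  const mask = bodyPix.toMask(segmentation);           // mask -> ImageData
  bodyPix.drawMask(canvas, video, mask, 0.7);          // draw at 0.7 opacity
}

run();
```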

Enjoy!