Heh, here’s a super fun application of body tracking tech (see the whole category here for previous news) that shows off how folks have been working to redefine what’s possible with realtime machine learning on the Web (!):
[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.
It’s pretty OT for my blog, I know, but as someone who’s been working in computer vision for the last couple of years, I find it interesting to see how others are applying these techniques.
Equipped with ultra-high definition cameras and high-powered illumination, the [Train Inspection Portal (TIP)] produces 360° scans of railcars passing through the portal at track speed. Advanced machine vision technology and software algorithms identify defects and automatically flag cars for repair.
I found myself blocked from doing anything interesting with Apple’s Reality Composer tool due to the lack of readily available USDZ-format files. My kingdom for a Lego minifig!
Therefore it’s cool to see that they’ve released a simple utility meant to facilitate conversion:
The new Reality Converter app makes it easy to convert, view, and customize USDZ 3D objects on Mac. Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd, to view the converted USDZ result, customize material properties with your own textures, and edit file metadata. You can even preview your USDZ object under a variety of lighting and environment conditions with built-in IBL options.
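If you happen to have Pixar’s USD Python bindings installed (a separate install; nothing in the Reality Converter announcement requires it), a tiny script is a handy way to sanity-check what came out of the conversion. The file name here is hypothetical:

```python
# Print the prim hierarchy of a converted USDZ to confirm meshes and materials
# made it through (assumes Pixar's "pxr" Python bindings are installed).
from pxr import Usd

stage = Usd.Stage.Open("minifig.usdz")   # hypothetical output from Reality Converter
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```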
In MediaPipe v0.6.7.1, we are excited to release a box tracking solution that has been powering real-time tracking in Motion Stills, YouTube’s privacy blur, and Google Lens for several years, and that leverages classic computer vision approaches. Pairing tracking with ML inference results in valuable and efficient pipelines. In this blog, we pair box tracking with object detection to create an object detection and tracking pipeline. With tracking, this pipeline offers several advantages over running detection per frame.
Read on for more, and let us know what you create!
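To get a feel for why pairing a heavy detector with a cheap tracker pays off, here’s a minimal sketch of the pattern in Python with OpenCV. It’s my own illustration, not MediaPipe’s implementation: detect_objects() is a placeholder for whatever ML detector you’d actually use, the input file name is made up, and the tracker class may live under cv2.legacy depending on your OpenCV build.

```python
# Minimal sketch of the "detect occasionally, track in between" pattern
# (my own illustration of the idea; this is not MediaPipe's implementation).
import cv2

DETECT_EVERY_N = 10  # run the expensive ML detector only every Nth frame

def detect_objects(frame):
    """Placeholder detector returning (x, y, w, h) boxes; swap in a real model."""
    return [(50, 50, 100, 100)]

cap = cv2.VideoCapture("input.mp4")  # hypothetical input clip
trackers = []
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    if frame_idx % DETECT_EVERY_N == 0:
        # Re-anchor: run detection and reinitialize the lightweight trackers.
        trackers = []
        for box in detect_objects(frame):
            # Depending on your OpenCV build this may live under cv2.legacy.
            tracker = cv2.TrackerKCF_create()
            tracker.init(frame, box)
            trackers.append(tracker)
    else:
        # Cheap per-frame update: propagate each box without rerunning the model.
        for tracker in trackers:
            ok, box = tracker.update(frame)
            if ok:
                x, y, w, h = map(int, box)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    frame_idx += 1

cap.release()
```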
Check out how the new Depth API enables real objects to occlude virtual ones:
Here’s a somewhat deeper dive into the whole shebang:
The features are designed to be widely available, not requiring special sensors:
The Depth API is not dependent on specialized cameras and sensors, and it will only get better as hardware improves. For example, the addition of depth sensors, like time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps to improve existing capabilities like occlusion, and unlock new capabilities such as dynamic occlusion—the ability to occlude behind moving objects.
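To make the occlusion idea concrete, here’s a toy NumPy sketch (my own illustration, not the actual Depth API): a virtual pixel is drawn only where its depth is smaller than the depth map’s value for that pixel, so real surfaces that sit closer to the camera hide the virtual object.

```python
# Toy NumPy sketch of depth-based occlusion (my own illustration, not the
# actual Depth API): a virtual pixel is kept only where the virtual object
# is closer to the camera than the real surface the depth map reports there.
import numpy as np

def composite_with_occlusion(camera_rgb, scene_depth, virtual_rgb, virtual_depth):
    """camera_rgb/virtual_rgb are HxWx3 images; scene_depth/virtual_depth are HxW
    depths in meters, with np.inf where the virtual object covers nothing."""
    visible = virtual_depth < scene_depth      # virtual content in front of real surface
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]        # draw only the unoccluded pixels
    return out

# Tiny example: the real scene is 1 m away in the left column and 3 m away in
# the right column, while the virtual object sits at 2 m everywhere.
scene_depth   = np.array([[1.0, 3.0], [1.0, 3.0]])
virtual_depth = np.array([[2.0, 2.0], [2.0, 2.0]])
camera_rgb    = np.zeros((2, 2, 3), dtype=np.uint8)          # black background
virtual_rgb   = np.full((2, 2, 3), 255, dtype=np.uint8)      # white virtual object
print(composite_with_occlusion(camera_rgb, scene_depth, virtual_rgb, virtual_depth))
# Left column stays black (the 1 m real surface hides the 2 m object);
# the right column turns white (the object is in front of the 3 m background).
```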
And we’re looking for partners:
We’ve only begun to scratch the surface of what’s possible with the Depth API and we want to see how you will innovate with this feature. If you are interested in trying the new Depth API, please fill out our call for collaborators form.
Looks like a simple but perhaps compelling use of ML & AR:
Zenia encompasses the best of computer vision and machine learning. She uses motion tracking and the data from thousands of yoga lessons to analyze my movements. During the practice, Zenia provides gentle feedback and also takes care of basic safety rules.
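I don’t know how Zenia’s feedback actually works under the hood, but a plausible sketch is comparing joint angles computed from tracked keypoints against a reference pose; everything below (names, keypoints, thresholds) is my own guess:

```python
# Guess at how pose feedback might work (Zenia hasn't published its method):
# compare joint angles computed from tracked keypoints against a reference pose.
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by keypoints a-b-c (each an (x, y) pair)."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def feedback(detected_angles, reference_angles, tolerance_deg=15.0):
    """Flag any joint whose angle deviates from the reference by more than the tolerance."""
    notes = []
    for joint, target in reference_angles.items():
        delta = detected_angles[joint] - target
        if abs(delta) > tolerance_deg:
            notes.append(f"{joint}: off by {delta:+.0f} degrees")
    return notes or ["Looks good!"]

# Hypothetical warrior-II check: the front knee should be bent to about 90 degrees.
reference = {"front_knee": 90.0}
detected  = {"front_knee": joint_angle((0, 0), (1, 0), (1.8, 1.0))}  # hip, knee, ankle keypoints
print(feedback(detected, reference))   # ['front_knee: off by +39 degrees']
```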