Category Archives: AR/VR

Give FX the finger in AR

The free new app Diorama pairs with the $99 finger-worn Litho device to let you create AR movies directly inside your phone, using a selection of props & tapping into the Google Poly library:

VR Focus writes,

“Diorama will democratize the creation of special effects in the same way the smartphone democratized photography. It will allow anyone to create beautiful visual effects the likes of which have previously only been accessible to Hollywood studios,” said Nat Martin, founder at Litho, in a statement.

When combined with the Litho controller, users can animate objects simply by dragging them, fine-tuning the path by grabbing specific points. Mood lighting can be added thanks to a selection of filters, plus the app supports body tracking so creators can interact with a scene.
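For a sense of how drag-to-animate path editing can work under the hood, here's a minimal sketch (my own illustration, not Diorama's actual code) of moving an object a fraction `t` of the way along a polyline of user-grabbed control points:

```python
import math

def point_on_path(waypoints, t):
    """Hypothetical sketch: return the (x, y) position a fraction t (0..1)
    along a polyline of control points, weighting by segment length so the
    object moves at uniform speed along the whole path."""
    segments = list(zip(waypoints, waypoints[1:]))
    lengths = [math.dist(a, b) for a, b in segments]
    target = t * sum(lengths)
    for (a, b), length in zip(segments, lengths):
        if target <= length:
            u = target / length if length else 0.0
            return (a[0] + u * (b[0] - a[0]), a[1] + u * (b[1] - a[1]))
        target -= length
    return waypoints[-1]

# A drag path with a corner the user "grabbed" at (1, 0):
path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(point_on_path(path, 0.75))  # (1.0, 0.5): halfway up the second leg
```

Grabbing a point and moving it just edits one entry in `waypoints`; resampling with `point_on_path` then yields the updated animation.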

TensorFlow.js brings faster ML to your browser

Speaking of making machine learning-powered experiences fast & ubiquitous via the Web, check out the latest work from our friends on TensorFlow.js:

In March we introduced a new WebAssembly (Wasm) accelerated backend for TensorFlow.js (scroll further down to learn more about Wasm and why this is important). Today we are excited to announce a major performance update: as of TensorFlow.js version 2.3.0, our Wasm backend has become up to 10X faster by leveraging SIMD (vector) instructions and multithreading via XNNPACK, a highly optimized library of neural network operators.

You can see the performance improvements for yourself:

Check out this demo of our BlazeFace model, which has been updated to use the new Wasm backend: https://tfjs-wasm-simd-demo.netlify.app/ To compare against the unoptimized binary, try this version of the demo, which manually turns off SIMD and multithreading support.

Zen & the ARt of Motorcycle Customization

I have to admit, as eager as I am to see augmented reality thrive, I was a little skeptical about the value of this AR bike-modding application, but my neighbor Chris (who rides when he’s not designing motorsports gear) is enthusiastic and offered some good perspective:

Over the winter I will build my Suzuki into a pure track bike, but there are parts I won’t know will fit until I get them all together. I know they each fit an otherwise stock bike, but I won’t know whether they fit together.

Check it out for yourself:

Google’s pose detection is now available on iOS & Android

Awesome work by the team. Come grab a copy & build something great!

The ML Kit Pose Detection API is a lightweight, versatile solution for app developers to detect the pose of a subject’s body in real time from a continuous video or static image. A pose describes the body’s position at one moment in time with a set of x,y skeletal landmark points. The landmarks correspond to different body parts such as the shoulders and hips. The relative positions of landmarks can be used to distinguish one pose from another.
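That last idea, distinguishing poses from the relative positions of landmarks, can be sketched with plain geometry. This hypothetical Python helper (my illustration, not ML Kit's API, which ships for Android and iOS) computes the angle at a joint from three landmark points:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at landmark b, formed by segments b->a and b->c.
    Landmarks are (x, y) tuples, like the 2D skeletal points ML Kit returns."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# E.g. shoulder-elbow-wrist: a straight arm reads near 180 degrees,
# a curled arm far less -- enough to tell "arm raised" from "arm bent".
straight = joint_angle((0, 0), (1, 0), (2, 0))  # ~180
curled = joint_angle((0, 0), (1, 0), (1, 1))    # ~90
print("extended" if straight > 160 else "flexed")
```

Thresholding a few such angles (elbows, knees, hips) is a common, simple way to classify static poses before reaching for a learned classifier.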

Put semi-horrifying ancient creatures in your room through Google AR

“I know it’s not the ‘woke’ thing to say, but I hope the world is enslaved by an ancient soulless sentience.” — my Lovecraft-loving weirdo friend

Heh—they’re not all creepy, to be sure, but come browse fun stuff on Google Arts & Culture, view the models in 3D, and if you’re on your phone, place them in your space via AR.

DesignTaxi writes,

Some of the creatures include the Aegirocassis, a sea creature that existed 480 million years ago; a creepy-looking ancient crustacean; and a digital remodel of the whale skeleton currently on view in the Natural History Museum’s Hintze Hall.

Exceedingly tangentially: who doesn’t love a good coelacanth reference?

Body Movin’: Google AI releases new tech for body tracking, eye measurement

My old teammates keep slapping out the bangers, releasing machine-learning tech to help build apps that key off the human form.

First up is MediaPipe Iris, enabling depth estimation for faces without fancy (iPhone X-/Pixel 4-style) hardware, and that in turn opens up access to accurate virtual try-on for glasses, hats, etc.:

https://twitter.com/GoogleAI/status/1291430839088103424
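The depth trick works because the human horizontal iris diameter is nearly constant across people (roughly 11.7 mm, per the MediaPipe team), so a pinhole-camera model can recover metric distance from the iris's apparent size in pixels. A rough sketch of that geometry (my illustration, not MediaPipe's code):

```python
def depth_from_iris(iris_px, focal_px, iris_mm=11.7):
    """Pinhole-camera estimate of subject distance in millimeters.
    iris_px: apparent iris diameter in pixels in the image.
    focal_px: camera focal length expressed in pixels.
    iris_mm: physical iris diameter; ~11.7 mm is near-constant in humans."""
    return focal_px * iris_mm / iris_px

# Hypothetical numbers: a 1000 px focal length and an iris spanning 40 px
# put the face roughly 29 cm from the camera.
print(depth_from_iris(40, 1000))
```

The farther the face, the fewer pixels the iris covers, so depth scales inversely with its measured size; no depth sensor required.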

The model enables cool tricks like realtime eye recoloring:

I always find it interesting to glimpse the work that goes in behind the scenes. For example:

To train the model from the cropped eye region, we manually annotated ~50k images, representing a variety of illumination conditions and head poses from geographically diverse regions, as shown below.

The team has followed up this release with MediaPipe BlazePose, which is in testing now & planned for release via the cross-platform ML Kit soon:

Our approach provides human pose tracking by employing machine learning (ML) to infer 33 2D landmarks of a body from a single frame. In contrast to current pose models based on the standard COCO topology, BlazePose accurately localizes more keypoints, making it uniquely suited for fitness applications…

If one leverages GPU inference, BlazePose achieves super-real-time performance, enabling it to run subsequent ML models, like face or hand tracking.
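To actually compare poses across frames (or between you and your coach), the raw landmark coordinates first need to be made translation- and scale-invariant. A hypothetical sketch, assuming BlazePose's documented 33-point topology with shoulders at indices 11/12 and hips at 23/24:

```python
import math

def normalize_pose(pts, l_sh=11, r_sh=12, l_hip=23, r_hip=24):
    """Center landmarks on the hip midpoint and scale by torso length, so
    the same pose matches regardless of where (or how large) the person
    appears in frame. Index defaults assume BlazePose's 33-point layout."""
    hip = ((pts[l_hip][0] + pts[r_hip][0]) / 2, (pts[l_hip][1] + pts[r_hip][1]) / 2)
    sho = ((pts[l_sh][0] + pts[r_sh][0]) / 2, (pts[l_sh][1] + pts[r_sh][1]) / 2)
    torso = math.hypot(sho[0] - hip[0], sho[1] - hip[1]) or 1.0
    return [((x - hip[0]) / torso, (y - hip[1]) / torso) for x, y in pts]

# The same pose seen close-up and far away normalizes identically:
pose = [(0.0, 0.0)] * 33
pose[11], pose[12] = (1.0, 2.0), (-1.0, 2.0)   # shoulders
pose[23], pose[24] = (1.0, 0.0), (-1.0, 0.0)   # hips
zoomed = [(3 * x, 3 * y) for x, y in pose]
print(normalize_pose(pose) == normalize_pose(zoomed))  # True
```

Once normalized, a simple per-landmark distance between your frame and a reference frame gives a crude "form score" of the kind a fitness app might surface.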

Now I can’t wait for apps to help my long-suffering CrossFit coaches actually quantify the crappiness of my form. Thanks, team! 😛

“Ghost Pacer” is coming to sweat you in AR

“Comparison is the thief of joy.” — Theodore Roosevelt
“Move your ass, fat boy!” — CrossFit

Okay, CF doesn’t say the latter, at least at my gym, but there’s a lot to be said for having a mix of social support/pressure—which is exactly why I’m happy to pay for CF as well as Peloton (leaderboards, encouragement, etc.).

Now the Ghost Pacer headset promises to run you ragged, or at least keep you honest, through augmented reality:

Wild view-synthesis work from Google + Cornell

Noah Snavely is the O.G. researcher whose thesis work gave rise to the Photosynth crowd-sourced imaging tech with which Microsoft blew minds back in the mid-aughts. He’s been at Google for the last several years, and now his team of student researchers is whipping up new magic from large sets of tourist photos:

Check out this overview:

Google AR search gets super buggy!

…In the best possible way, of course.

My mom loves to remind me about how she sweltered, hugely pregnant with me, through a muggy Illinois summer while listening to cicadas drone on & on. Now I want to bring a taste of the ’70s back to her via Google’s latest AR content.

You can now search for all these little (and not-so-little) guys via your Android or iPhone and see them in your room:

Here’s a list of new models:

  • Rhinoceros beetle
  • Hercules beetle
  • Atlas beetle
  • Stag beetle
  • Giant stag
  • Miyama stag beetle
  • Shining ball scarab beetle
  • Jewel beetle
  • Ladybug
  • Firefly
  • Rosalia batesi
  • Swallowtail butterfly
  • Morpho butterfly
  • Atlas moth
  • Mantis
  • Grasshopper
  • Dragonfly
  • Hornet
  • Robust cicada
  • Brown cicada
  • Periodical cicada
  • Walker’s cicada
  • Evening cicada