In addition to tracking moving augmented images (see previous post), my team’s tracking tech enables object detection & tracking on iOS & Android:
The Object Detection and Tracking API identifies the prominent object in an image and then tracks it in real time. Developers can use this API to create a real-time visual search experience through integration with a product search backend such as Cloud Product Search.
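If you’re curious what that looks like in practice, here’s a minimal Android sketch using ML Kit’s Kotlin API in stream mode. (I’m using the standalone ML Kit class names, which differ slightly from the Firebase-era release, and `frameBitmap` stands in for whatever camera frame you’re processing.)

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

fun detectAndTrack(frameBitmap: Bitmap) {
    // STREAM_MODE tracks the prominent object across frames and assigns it
    // a stable tracking ID; classification adds coarse category labels.
    val options = ObjectDetectorOptions.Builder()
        .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
        .enableClassification()
        .build()
    val detector = ObjectDetection.getClient(options)

    val image = InputImage.fromBitmap(frameBitmap, /* rotationDegrees = */ 0)
    detector.process(image)
        .addOnSuccessListener { objects ->
            for (obj in objects) {
                val box = obj.boundingBox // where the object sits in the frame
                val id = obj.trackingId   // stable across frames in STREAM_MODE
                // This crop is what you'd hand off to a product-search backend.
                Log.d("ODT", "Object $id at $box")
            }
        }
        .addOnFailureListener { e -> Log.e("ODT", "Detection failed", e) }
}
```

STREAM_MODE trades some per-frame accuracy for the low latency and stable tracking IDs you want in a live viewfinder; SINGLE_IMAGE_MODE is the higher-accuracy option for stills.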
Hmm—I’ve never had occasion to use this solipsistic-but-cool flight mode on my drone, but now I’m tempted to try capturing some epic dronies. (Just gotta figure out where I misplaced my moody Scottish highlands…)
A couple of years ago, we used Google’s super-resolution tech (think “Genuine Fractals gone wild,” fellow oldsters) to dramatically reduce bandwidth costs for users without any perceptible loss in visual quality. Now that tech is used in the Pixel phone camera, and this quick video gives a nice overview of how it works:
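The gist of the bandwidth trick: ship a small image over the wire, then reconstruct the detail on-device. Purely as an illustration (the model file and tensor shapes below are made up, not the actual production pipeline), on-device upscaling with a TensorFlow Lite model might look like this:

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Hypothetical 4x super-resolution model: "sr_model.tflite" and the
// 50x50 -> 200x200 tensor shapes are assumptions for illustration only.
val interpreter = Interpreter(File("sr_model.tflite"))

val lowRes = Array(1) { Array(50) { Array(50) { FloatArray(3) } } }    // the small image you actually transmitted
val highRes = Array(1) { Array(200) { Array(200) { FloatArray(3) } } } // reconstructed detail, computed on-device

// Fill lowRes with normalized RGB pixels, run one inference...
interpreter.run(lowRes, highRes)
// ...and render highRes instead of ever downloading the full-size original.
```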
Who knew that the goofball mannequin challenge could generate a 2000-video dataset that could help train AI to compute depth, segment humans, and (optionally) content-aware fill them out of existence? This new work from Google Research handles scenes where both the camera & human subjects are moving. Check it out:
During their performance that night, Steven Drozd of The Flaming Lips, who usually plays a variety of instruments, played a “magical bowl of fruit” for the first time: tapping each fruit in the bowl triggered a different musical tone that “sang” that fruit’s own name. With help from Magenta, the band broke into a brand-new song, “Strawberry Orange.”
The Flaming Lips also got help from the audience: at one point, they tossed giant inflatable “fruits” into the crowd, each rigged as a sensor, so any audience member who got their hands on one played music, too. The end result was a cacophonous, joyous moment in which the crowd truly contributed to the band’s sound.
New research from Samsung’s AI lab in Moscow can turn a single image (or, for better-quality results, a series of images) into a puppet that can be driven by another person’s performance. (Hmm, new feature for Google Arts & Culture’s artistic doppelgänger-finder? 😌)
This is the first time people will be able to use Tilt Brush on a completely wireless VR system. It costs $19.99, though if you previously purchased it on Oculus Home, you’ll have it for free on Oculus Quest.
The original Glass will be to AR wearables as the Apple Newton was to smartphones—ambitious, groundbreaking, unfocused, premature. After that first… well, learning experience… Google didn’t give up, and folks there have quietly cranked away at finding product-market fit. Check out the new device—dramatically faster, more extensible, and focused on professionals in medicine, manufacturing, and other fields:
My team has been collaborating with TensorFlow Lite & researchers working on human-pose estimation (see many previous posts) to accelerate on-device machine learning & enable things like the fun “Dance Like” app on iOS & Android:
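Under the hood, this kind of app runs a pose model over every camera frame. Here’s a rough sketch of single-frame inference with TensorFlow Lite’s Interpreter, with the file name and tensor shapes assumed from the publicly released PoseNet model (not necessarily what Dance Like actually ships):

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// PoseNet-style model: a 257x257 RGB frame in; 17-keypoint heatmaps,
// offsets, and displacement maps out. Shapes follow the public release.
val interpreter = Interpreter(File("posenet.tflite"))

val input = Array(1) { Array(257) { Array(257) { FloatArray(3) } } }
val heatmaps = Array(1) { Array(9) { Array(9) { FloatArray(17) } } }
val offsets = Array(1) { Array(9) { Array(9) { FloatArray(34) } } }
val dispFwd = Array(1) { Array(9) { Array(9) { FloatArray(32) } } }
val dispBwd = Array(1) { Array(9) { Array(9) { FloatArray(32) } } }

// Fill `input` with a normalized camera frame, then run one inference.
interpreter.runForMultipleInputsOutputs(
    arrayOf<Any>(input),
    mapOf(0 to heatmaps, 1 to offsets, 2 to dispFwd, 3 to dispBwd)
)

// For each of the 17 joints, the highest-scoring heatmap cell (refined by
// its offset) gives a position; comparing those poses frame-by-frame is
// how an app can score how closely you match a reference dancer.
```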