Monthly Archives: August 2019

Luminar invents Google+ 2013

This “AI Structure” feature looks neat, but I take slight exception to the claim of being “the first-ever content-aware tool to improve details only where needed.” The Auto Enhance feature built into Google+ Photos circa 2013 used to do this kind of thing (treating skin one way, skies another, etc.) to half a billion photos per day.

Of course, as I learned right after joining the team, Google excels at doing incredibly hard stuff & making no one notice. I observed back then that we could switch the whole thing off & no one would notice or care—and that’s exactly what happened. Why? Because A) G+ didn’t show you before/after (so people would never know what difference it had made) and B) most people are to photography as I am to wine (“Is it total shite? No? Then good enough for me”). Here at least the tech is going to that tiny fraction of us who actually care—so good on ‘em.

[YouTube]

Google AR puts Notable Women onto currency

’Zall about the Tubmans:

CNET writes:

The app, called Notable Women, was developed by Google and former US Treasurer Rosie Rios. It uses augmented reality to let people see what it would look like if women were on US currency. Here’s how it works: Place any US bill in front of your phone’s camera, and the app uses digital filters — like one you’d see on Instagram or Snapchat — to overlay a new portrait on the bill. Users can choose from a database of 100 women, including the civil rights icon Rosa Parks and astronaut Sally Ride.

[YouTube]

AR: Adobe & MIT team up on body tracking to power presentations

Fun, funky idea:

Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]

The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.

Will presenters go for it? Will students find it valuable? I have no idea—but props to anyone willing to push some boundaries.

Photography: Spin me right ’round, Milky Way edition

Whoa:

Creator Aryeh Nirenberg writes,

A timelapse of the Milky Way that was recorded using an equatorial tracking mount over a period of around 3 hours to show Earth’s rotation relative to the Milky Way.

I used a Sony a7SII with the Canon 24-70mm f/2.8 lens and recorded 1100 10″ exposures at a 12-second interval. All the frames were captured at f/2.8 and ISO 16000.
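The numbers in the quote above check out as back-of-the-envelope math. Here’s a small sketch of the arithmetic; the 24 fps playback rate is my assumption, not something the creator specified:

```python
# Timelapse arithmetic for the capture described above:
# 1,100 exposures shot at a 12-second interval, with playback
# at an assumed 24 fps (the playback rate is a guess).
frames = 1100
interval_s = 12        # seconds between the start of each exposure
playback_fps = 24      # assumed playback frame rate

capture_hours = frames * interval_s / 3600
video_seconds = frames / playback_fps

print(f"Capture time: {capture_hours:.1f} hours")    # ~3.7 hours
print(f"Video length: {video_seconds:.1f} seconds")  # ~45.8 seconds
```

That 12-second cadence also leaves ~2 seconds of buffer after each 10″ exposure for the camera to write the frame, which is why the interval is slightly longer than the shutter speed.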

Kinda reminds me of “Turn Down For Spock”:

[YouTube 1 & 2] [Via]

Give yourself a hand! Realtime 3D hand-tracking for your projects

“Why doesn’t it recognize The Finger?!” asks my indignant, mischievous 10-year-old Henry, who with his brother has offered to donate a rich set of training data. 🙃

Juvenile amusement notwithstanding, I’m delighted that my teammates have released a badass hand-tracking model, especially handy (oh boy) for use with MediaPipe (see previous), our open-source pipeline for building ML projects.

Today we are announcing the release of a new approach to hand perception, which we previewed at CVPR 2019 in June, implemented in MediaPipe—an open source cross platform framework for building pipelines to process perceptual data of different modalities, such as video and audio. This approach provides high-fidelity hand and finger tracking by employing machine learning (ML) to infer 21 3D keypoints of a hand from just a single frame. Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.
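To give a feel for what “21 3D keypoints” buys you downstream, here’s a toy sketch of a consumer of that output. MediaPipe numbers the hand landmarks 0 (wrist) through 20 (pinky tip); the fingertip and middle-joint indices below match its published layout, but the `extended_fingers` helper, the landmark format (normalized `(x, y, z)` tuples with y increasing downward), and the “tip above joint” heuristic are all my own simplifications, not part of the released model:

```python
# Toy consumer of 21-keypoint hand-tracking output.
# Landmark indices follow MediaPipe's hand layout (0 = wrist,
# fingertips at 4, 8, 12, 16, 20). Everything else here is a
# simplified, hypothetical sketch: landmarks are assumed to be
# normalized (x, y, z) tuples in image coordinates, where a
# smaller y means "higher" in the frame.

FINGERTIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}
MID_JOINTS = {"thumb": 3, "index": 6, "middle": 10, "ring": 14, "pinky": 18}

def extended_fingers(landmarks):
    """Name the fingers whose tip sits above (smaller y than) its middle joint."""
    return [
        name for name, tip in FINGERTIPS.items()
        if landmarks[tip][1] < landmarks[MID_JOINTS[name]][1]
    ]

# Synthetic frame: every landmark at y=0.5 except the index finger,
# whose tip is raised above its middle joint.
hand = [(0.0, 0.5, 0.0)] * 21
hand[8] = (0.0, 0.1, 0.0)   # index fingertip, raised
hand[6] = (0.0, 0.3, 0.0)   # index middle joint
print(extended_fingers(hand))  # ['index']
```

A real gesture classifier would be considerably more careful (the thumb in particular doesn’t extend along the y-axis), but the point is that once you have per-joint 3D keypoints, recognizing poses becomes a few lines of geometry.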

🙌

Don’t nag your family. Make Google do it.

I’ve gotta give this new capability a shot:

To assign a reminder, ask your Assistant, “Hey Google, remind Greg to take out the trash at 8pm.” Greg will get a notification on his Assistant-enabled Smart Display, speaker, and phone when the reminder is created, so that it’s on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you’ve assigned to Greg, simply by saying, “Hey Google, what are my reminders for Greg?”