Today, we’re excited to announce our latest AIY Project, the Vision Kit. It’s our first project that features on-device neural network acceleration, providing powerful computer vision without a cloud connection. […]
The provided software includes three TensorFlow-based neural network models for different vision applications. One, based on MobileNets, can recognize a thousand common objects; a second can recognize faces and their expressions; and the third detects people, cats, and dogs. We’ve also included a tool to compile models for Vision Kit, so you can train and retrain models with TensorFlow on your workstation or any cloud service.
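The MobileNet-style classifier described above can be sketched with stock TensorFlow/Keras; this is a minimal illustration of that kind of model, not the Vision Kit compiler tool. It builds the network with `weights=None` so it runs offline; in practice you would pass `weights="imagenet"` to get the pretrained thousand-class model.

```python
# Minimal sketch (not the Vision Kit toolchain) of a MobileNet classifier
# like the one described above, using stock TensorFlow/Keras.
import numpy as np
import tensorflow as tf

# weights=None keeps this runnable offline; pass weights="imagenet" to load
# the pretrained model that recognizes a thousand common object classes.
model = tf.keras.applications.MobileNet(weights=None, classes=1000)

def classify(image: np.ndarray) -> np.ndarray:
    """Return class probabilities for one 224x224 RGB image (values 0-255)."""
    # preprocess_input rescales pixel values to the [-1, 1] range MobileNet expects
    x = tf.keras.applications.mobilenet.preprocess_input(
        image.astype("float32")[np.newaxis, ...])
    return model.predict(x, verbose=0)[0]

probs = classify(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
print(probs.shape)  # one softmax probability per class
```

Retraining for a custom task would follow the standard transfer-learning recipe: freeze the convolutional base, replace the classification head, and fine-tune on your own labeled images before compiling for the Kit.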
Ah good—I’d figured something like this must be in development, and I’m excited at the prospect of removing some drudgery from selection & adjustment. (This is why “AI” applied to creative tools is interesting—not to take work away from artists, but to cut the crap so they can focus on, y’know, art.) Take it away, Meredith:
“Teaching Google Photoshop” has been my working mantra here—i.e. getting computers to see like artists & wield their tools. A lot of that hinges upon understanding the shape & movements of the human body. Along those lines, my Google Research teammates Tyler Zhu, George Papandreou, and co. are doing cool work to estimate human poses in video. Check out the demo below, and see their poster and paper for more details.
Rodeo has posted some interactive before/after shots on their site along with the breakdown reel. I’m kinda surprised by the number of non-CGI elements involved (e.g. the giant wireframe wrecking ball).
Does Google seem like exactly the kind of company that would celebrate the 20th anniversary of the Guggenheim Bilbao by commissioning a freerunner to launch off the iconic facades & carom around monumental works by Richard Serra? Why yes, yes it does. Explore the museum via Google Arts & Culture, and go behind the scenes of the short film below here.
Julian Tryba scripts After Effects to create carefully segmented, meticulously choreographed “layer lapses”—a kind of “visual time dilation” that juxtaposes the same scene shot at different times of day. Here, just check it out: