Days of miracles & wonder, part 6,392…
Today, we’re excited to announce our latest AIY Project, the Vision Kit. It’s our first project that features on-device neural network acceleration, providing powerful computer vision without a cloud connection. […]
The provided software includes three TensorFlow-based neural network models for different vision applications: one, based on MobileNets, can recognize a thousand common objects; a second can recognize faces and their expressions; and the third is a person, cat, and dog detector. We’ve also included a tool to compile models for Vision Kit, so you can train and retrain models with TensorFlow on your workstation or any cloud service.
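For the curious: retrained models in the MobileNet family all share the same input convention, namely square RGB frames (224×224 by default) with pixel values scaled from [0, 255] down to [-1, 1]. Here's a minimal NumPy sketch of that preprocessing step; this is just an illustration of the convention, not the Vision Kit's actual API, and the function name is mine:

```python
import numpy as np

def mobilenet_preprocess(image):
    """Scale uint8 pixel values in [0, 255] to the [-1, 1] range
    that MobileNet-family models expect as input."""
    return image.astype(np.float32) / 127.5 - 1.0

# A dummy 224x224 RGB frame (MobileNet's default input size),
# with one pixel set to white so the full range is exercised.
frame = np.zeros((224, 224, 3), dtype=np.uint8)
frame[0, 0, 0] = 255

# Models consume batches, so add a leading batch dimension.
batch = mobilenet_preprocess(frame)[np.newaxis, ...]
print(batch.shape)  # (1, 224, 224, 3)
```

If you retrain with TensorFlow on your workstation as the announcement describes, feeding your model the same scaling at inference time is the step that's easiest to forget.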
You can pre-order it here.
I’m eager to show the Micronaxx, who are studying Native American cultures in school:
Dr Jago Cooper, Curator and Head of the Americas at the British Museum, introduces Google Arts & Culture’s new collection on the preservation of the Maya Heritage.
The video showcases pioneering, cutting-edge technologies being used to preserve unique traces of this Guatemalan civilization, first recorded by British explorer Alfred Maudslay.
Ah good—I’ve figured something like this must be in development, and I’m excited at the prospect of removing some drudgery from selection & adjustment. (This is why “AI” applied to creative tools is interesting—not to take work away from artists, but to cut the crap so they can focus on, y’know, art.) Take it away, Meredith:
[YouTube] [Via John Lin]
“Teaching Google Photoshop” has been my working mantra here—i.e. getting computers to see like artists & wield their tools. A lot of that hinges upon understanding the shape & movements of the human body. Along those lines, my Google Research teammates Tyler Zhu, George Papandreou, and co. are doing cool work to estimate human poses in video. Check out the demo below, and see their poster and paper for more details.
Just a little Monday-morning silliness to ease you out of any lingering tryptophan-induced stupor.
Peacock spiders seem like impossibly wonderful little showmen, especially when teamed up with the Bee Gees…
…and then upgraded with lightsabers:
Elsewhere, here are 1,069 Chinese robots dancing in unison, apparently breaking a record no one knew existed:
[YouTube 1, 2, & 3] [Via]
Rodeo has posted some interactive before/after shots on their site along with the breakdown reel. I’m kinda surprised by the number of non-CGI elements involved (e.g. the giant wireframe wrecking ball).
Heh—this is pretty jammed with fun details that I don’t want to spoil. Enjoy!
Mmm—who wants techno-smores, just like mom used to make?
I can hardly believe that this little GoPro toughed it out, continuing to record even while catching on fire—but that’s indeed the story. Good stuff starts around 1:25 here:
Does Google seem like exactly the kind of company that would celebrate the 20th anniversary of the Guggenheim Bilbao by commissioning a freerunner to launch off the iconic facades & carom around monumental works by Richard Serra? Why yes, yes it does. Explore the museum via Google Arts & Culture, and go behind the scenes of the short film below here.
Julian Tryba scripts After Effects to produce carefully segmented, meticulously choreographed “layer lapses”: a kind of visual time dilation that juxtaposes the same scene shot at different times of day. Here, just check it out:
You can read more about the project on PetaPixel:
Tryba visited NYC 22 times, drove 9988 miles, spent 352 hours shooting 232,000 photos with 6 cameras (5 Canon DSLRs and a Sony a7R II) and 11 different lenses, and paid $1,430 in parking fees.