Analyzing images in the cloud can be rad, but it’s DOA if connectivity is so spotty & expensive—as it is in much of the world—that people can’t schlep their images up there in the first place. Thus I’m happy to see news like this: Google partners with Project Tango chipmaker to bring vision to mobile devices.
The goal is to get Google's neural computation algorithms running locally on a device, so they don't have to rely on an internet connection. Current approaches to photo recognition, like the one in Google Photos, work by uploading everything to Google's cloud and analyzing it there. Movidius hopes that by analyzing images and audio quickly on the device itself, future products can be more personalized and context-aware. That latter part fits with Google's goal of making virtual assistants more aware of what you're doing and what you need.
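To make "runs locally" a bit more concrete, here's a minimal sketch of on-device inference using TensorFlow Lite. This is purely my illustration, not anything Google or Movidius has announced; the model file name is a placeholder for whatever classification model you'd ship with the app.

```python
# A minimal sketch of on-device image classification with TensorFlow Lite,
# just to illustrate inference without a network round trip.
# "model.tflite" is a placeholder; any classification model would do.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in input matching the model's expected shape and dtype;
# in a real app this would be a frame from the device camera.
image = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()  # runs entirely on the device: no upload, no cloud

scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Top class index:", int(np.argmax(scores)))
```

The whole point of dedicated vision chips like Movidius's is to make that `invoke()` step fast and power-efficient enough to run continuously, even with no connection at all.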
Onward.