“So, what would you say you… do here?” Well, I get to hang around these folks and try to variously augment your reality:
Research in Machine Perception tackles the hard problems of understanding images, sounds, music and video, as well as providing more powerful tools for image capture, compression, processing, creative expression, and augmented reality.
Our technology powers products across Alphabet, including image understanding in Search and Google Photos, camera enhancements for the Pixel Phone, handwriting interfaces for Android, optical character recognition for Google Drive, video understanding and summarization for YouTube, Google Cloud, Google Photos and Nest, as well as mobile apps including Motion Stills, PhotoScan and Allo.
We actively contribute to the open source and research communities. Our pioneering deep learning advances, such as Inception and Batch Normalization, are available in TensorFlow. Further, we have released several large-scale datasets for machine learning, including: AudioSet (audio event detection); AVA (human action understanding in video); Open Images (image classification and object detection); and YouTube-8M (video labeling).
[Via Peyman Milanfar]
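For the curious: those two advances mentioned above, Inception and Batch Normalization, really are right there in TensorFlow's high-level API. A minimal sketch (assuming a recent TensorFlow with the Keras applications module and downloadable ImageNet weights) of loading an Inception v3 model, whose architecture uses Batch Normalization throughout, might look like this:

```python
# Minimal sketch: load Inception v3 (which uses Batch Normalization in its
# convolutional blocks) with pre-trained ImageNet weights via TensorFlow,
# then classify a single 299x299 RGB image. The random image here is just a
# stand-in for real input data.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights="imagenet")

image = np.random.rand(1, 299, 299, 3).astype("float32") * 255.0
inputs = tf.keras.applications.inception_v3.preprocess_input(image)

preds = model.predict(inputs)
print(tf.keras.applications.inception_v3.decode_predictions(preds, top=3))
```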
Hi John,
Nice work if you can get it! But perhaps it’s unfortunate that Google divested itself of Skybox/Terra Bella. There are lots of opportunities there for machine-learning-based image interpretation, with real commercial prospects (significantly improved agricultural insurance, for one). As I understand it, though, the deal with Planet did include an agreement giving Alphabet preferential access to SkySat imaging data. And at Planet itself, the combination of the SkySat fleet (originally the Terra Bella satellites) and what their own Dove constellation can deliver looks uniquely powerful.