{"id":6152,"date":"2017-10-29T07:47:05","date_gmt":"2017-10-29T14:47:05","guid":{"rendered":"http:\/\/jnack.com\/blog\/?p=6152"},"modified":"2018-02-06T10:43:05","modified_gmt":"2018-02-06T18:43:05","slug":"my-new-teams-new-page-check-out-google-machine-perception","status":"publish","type":"post","link":"http:\/\/jnack.com\/blog\/2017\/10\/29\/my-new-teams-new-page-check-out-google-machine-perception\/","title":{"rendered":"My new team&#8217;s new page: Check out Google Machine Perception"},"content":{"rendered":"<p><span style=\"font-size: 12px; font-family: Helvetica;\">\u201cSo, what would you say you\u2026 <a href=\"https:\/\/www.youtube.com\/watch?v=RAY27NU1Jog\"><em>do<\/em>\u00a0here<\/a>?\u201d Well, I get to hang around <a href=\"https:\/\/research.google.com\/teams\/perception\/\">these folks<\/a> and try to variously augment your reality:<\/span><\/p>\n<blockquote><p>\nResearch in Machine Perception tackles the hard problems of understanding images, sounds, music and video, as well as providing more powerful tools for image capture, compression, processing, creative expression, and augmented reality.<\/span><\/p>\n<p>Our technology powers products across Alphabet, including <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2013\/06\/improving-photo-search-step-across.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">image understanding<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> in Search and Google Photos, <\/span><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\"><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2014\/10\/hdr-low-light-and-high-dynamic-range.html\">camera<\/a> <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2017\/10\/portrait-mode-on-pixel-2-and-pixel-2-xl.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">enhancements<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> for the Pixel Phone, <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/googleresearch.blogspot.com\/2015\/04\/google-handwriting-input-in-82.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">handwriting<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> interfaces for Android, <\/span><a style=\"text-decoration: none;\" 
href=\"https:\/\/research.googleblog.com\/2015\/05\/paper-to-digital-in-200-languages.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">optical character recognition<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> for Google Drive, <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/cloud.google.com\/blog\/big-data\/2017\/03\/announcing-google-cloud-video-intelligence-api-and-more-cloud-machine-learning-updates\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">video understanding<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> and <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2015\/10\/improving-youtube-video-thumbnails-with.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">summarization<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> for YouTube, Google Cloud, Google Photos and Nest, as well as mobile apps including <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2017\/07\/motion-stills-now-on-android.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">Motion Stills<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\">, <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2017\/04\/photoscan-taking-glare-free-pictures-of.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">PhotoScan<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> and <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2017\/05\/neural-network-generated-illustrations.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: 
pre-wrap;\">Allo<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\">.<\/span><\/p>\n<p><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\">We actively contribute to the open source and research communities. Our pioneering deep learning advances, such as <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.google.com\/pubs\/pub43022.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">Inception<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> and <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.google.com\/pubs\/pub43442.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">Batch Normalization<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\">, are available in <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/www.tensorflow.org\/\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">TensorFlow<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\">. 
Further, we have released several large-scale datasets for machine learning, including: <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2017\/03\/announcing-audioset-dataset-for-audio.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">AudioSet<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> (audio event detection); <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2017\/10\/announcing-ava-finely-labeled-video.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">AVA<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> (human action understanding in video); <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2017\/07\/an-update-to-open-images-now-with.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">Open Images<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> (image classification and object detection); and <\/span><a style=\"text-decoration: none;\" href=\"https:\/\/research.googleblog.com\/2017\/02\/an-updated-youtube-8m-video.html\"><span style=\"color: #3367d6; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;\">YouTube-8M<\/span><\/a><span style=\"color: #4a4a4a; background-color: transparent; font-variant-ligatures: normal; font-variant-east-asian: normal; font-variant-position: normal; vertical-align: baseline; white-space: pre-wrap;\"> (video labeling).<\/span><\/p>\n<\/blockquote>\n<p><span style=\"font-size: 12px; font-family: Helvetica;\"><br \/><\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" title=\"NewImage.png\" src=\"http:\/\/jnack.com\/blog\/wp-content\/uploads\/2017\/10\/NewImage-34.png\" alt=\"NewImage\" width=\"599\" height=\"82\" border=\"0\" \/><\/p>\n<p>[Via Peyman Milanfar]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u201cSo, what would you say you\u2026 do\u00a0here?\u201d Well, I get to hang around these folks and try to variously augment your reality: Research in Machine Perception tackles the hard problems of understanding images, sounds, music and video, as well as providing more powerful tools for image capture, compression, processing, creative expression, and augmented reality. 
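<p>As a minimal sketch of what that availability looks like in practice: both Inception and Batch Normalization ship with TensorFlow's Keras API. The snippet below (assuming TensorFlow 2.x; <code>photo.jpg</code> is a hypothetical placeholder path, and the usage is illustrative rather than anything specific to the team's code) loads a pretrained InceptionV3 and classifies an image.</p>

```python
# A minimal sketch: the Inception architecture and Batch Normalization,
# both mentioned above, are built into TensorFlow's Keras API.
# Assumes TensorFlow 2.x; "photo.jpg" is a hypothetical local file.
import numpy as np
import tensorflow as tf

# InceptionV3 with pretrained ImageNet weights.
model = tf.keras.applications.InceptionV3(weights="imagenet")

# Load and preprocess an image to the 299x299 input InceptionV3 expects.
img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(299, 299))
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.inception_v3.preprocess_input(x[np.newaxis, ...])

# Print the top-5 ImageNet labels for the image.
preds = model.predict(x)
for _, label, score in tf.keras.applications.inception_v3.decode_predictions(preds, top=5)[0]:
    print(f"{label}: {score:.3f}")

# Batch Normalization is likewise available as a standard layer:
bn = tf.keras.layers.BatchNormalization()
```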
Our [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[8,6,15],"tags":[],"_links":{"self":[{"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/posts\/6152"}],"collection":[{"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/comments?post=6152"}],"version-history":[{"count":3,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/posts\/6152\/revisions"}],"predecessor-version":[{"id":6155,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/posts\/6152\/revisions\/6155"}],"wp:attachment":[{"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/media?parent=6152"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/categories?post=6152"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/tags?post=6152"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}