{"id":7106,"date":"2018-06-20T09:43:46","date_gmt":"2018-06-20T16:43:46","guid":{"rendered":"http:\/\/jnack.com\/blog\/?p=7106"},"modified":"2018-06-20T09:45:32","modified_gmt":"2018-06-20T16:45:32","slug":"demo-synthesizing-new-views-from-your-multi-lens-phone-images","status":"publish","type":"post","link":"http:\/\/jnack.com\/blog\/2018\/06\/20\/demo-synthesizing-new-views-from-your-multi-lens-phone-images\/","title":{"rendered":"Demo: Synthesizing new views from your multi-lens phone images"},"content":{"rendered":"<p>You know the \u201c<a href=\"http:\/\/knowyourmeme.com\/memes\/i-forced-a-bot\">I forced a bot to<\/a>\u2026\u201d meme? Well, my colleagues Noah &amp; team actually did it, forcing bots to watch real estate videos (which feature lots of stable, horizontal tracking shots) in order to <a href=\"https:\/\/people.eecs.berkeley.edu\/~tinghuiz\/projects\/mpi\/\">synthesize animations between multiple independent images<\/a>\u2014say, the ones captured by a multi-lens phone:<\/p>\n<blockquote>\n<p>We call this problem stereo magnification, and propose a learning framework that leverages a new layered representation that we call multiplane images (MPIs). 
Our method also uses a massive new data source for learning view extrapolation: online videos on YouTube.<\/p>\n<\/blockquote>\n<p>Check out what it can enable:<\/p>\n<p><iframe loading=\"lazy\" width=\"604\" height=\"340\" src=\"https:\/\/www.youtube.com\/embed\/oAKDhHPwSUE?feature=oembed\" frameborder=\"0\" allow=\"autoplay; encrypted-media\" allowfullscreen><\/iframe><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" title=\"NewImage.png\" src=\"http:\/\/jnack.com\/blog\/wp-content\/uploads\/2018\/06\/NewImage-19.png\" alt=\"NewImage\" width=\"597\" height=\"360\" border=\"0\" \/><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" title=\"NewImage.png\" src=\"http:\/\/jnack.com\/blog\/wp-content\/uploads\/2018\/06\/NewImage-20.png\" alt=\"NewImage\" width=\"599\" height=\"336\" border=\"0\" \/><\/p>\n<p>[<a href=\"https:\/\/youtu.be\/oAKDhHPwSUE\">YouTube<\/a>]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>You know the \u201cI forced a bot to\u2026\u201d meme? Well, my colleagues Noah &amp; team actually did it, forcing bots to watch real estate videos (which feature lots of stable, horizontal tracking shots) in order to synthesize animations between multiple independent images\u2014say, the ones captured by a multi-lens phone: We call this problem stereo magnification, 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[8],"tags":[],"_links":{"self":[{"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/posts\/7106"}],"collection":[{"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/comments?post=7106"}],"version-history":[{"count":3,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/posts\/7106\/revisions"}],"predecessor-version":[{"id":7109,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/posts\/7106\/revisions\/7109"}],"wp:attachment":[{"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/media?parent=7106"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/categories?post=7106"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/jnack.com\/blog\/wp-json\/wp\/v2\/tags?post=7106"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}