Photogrammetry (building 3D models from 2D inputs, in this case several source images) is what a friend of mine learned in the Navy to call "FM technology": "F'ing Magic."
Side note: I know that saying “Time is a flat circle” is totally worn out… but, like, time is a flat circle, and what’s up with Adobe style-transfer demos showing the same (?) fishing village year after year? Seriously, compare 2013 to 2019. And what a super useless superpower I have in remembering such things. ¯\_(ツ)_/¯
This new iOS & Android app (not yet available, though you can sign up for prerelease access) promises to analyze images, suggest effects, and keep the edits adjustable (though it’s not yet clear whether they’ll be editable as layers in “big” Photoshop).
I’m reminded of really promising Photoshop Elements mobile concepts from 2011 that went nowhere; of the Fabby app some of my teammates created before being acquired by Google; and of all I failed to enable in Google Photos. “Poo-tee-weet?” ¯\_(ツ)_/¯ Anyway, I’m eager to take it for a spin.
Placing this ML-driven tech atop the set of now-vintage (!) Quick Selection & Magic Wand tools should help get it discovered, and the ability to smartly add & subtract chunks of an image looks really promising. I can’t wait to put it to the test.
“The Camera Professor” (as Reddit called him) Marc Levoy gave a great overview today of his team’s work in computational photography, after which Annie Leibovitz came to the stage to discuss her craft & Pixel 4. “My IQ went up by at least 10 by the time he was done,” per the same thread. 😌 Enjoy!
(Starts around 47:12, just in case the deep link above doesn’t take you there directly)
I’ve found sharing actual 360º content to be kind of a non-starter (too much of a pain to share, too unclear how folks should consume it), but being able to reframe shots in post is a ball. Here’s an Insta example from some zip lining our fam did this summer: