Heh—check out Google’s latest Auto Awesome feature, introduced for Valentine’s Day. Team member Vincent Mo writes,
Just in time for Valentine’s Day, we launched Auto Awesome Hearts! Just upload a photo of kissing or hugging, and Google Photos will add hearts automatically.
It even works on bear hugs. 😉
The fascinating thing for me isn’t so much this fun if slightly silly example, but rather the idea of using computer vision to understand the contents of photography, then do interesting things as a result. I might have an idea or two in that regard—and I’d like to hear yours.
Hi John,
This “hearty” feature made quite an impact today; several outlets in the technical press picked it up. But there is indeed a genuinely hearty set of possibilities opening up. I posted one scheme on your previous blog recently; it reads:
“I have no doubt about what contemporary storytellers could use. It isn’t ever more complicated tools for perfectionist editing (as in Photoshop, etc.) that promise ultra-sharp definition across an ultra-vivid color gamut via new, prescriptive techniques. (The path to the “prettification” of digital photographs is already well traveled. The path to “nostalgification”, well, we just won’t go there.) Rather, what’s overdue is an alternative, more expressive route, where what the camera captures becomes the subject of a more minimalist and interpretive practice. A more content-centric idea along those lines: begin by taking a single photographic input apart, dividing it into isolated and/or reduced “pictorial chunks”, then autonomously processing and re-assembling these … in a user-friendly framework.”
I’ve done some work (in Photoshop CC, as it happens) on such deconstructions: by tone (shadow, midtone and highlight regions), by color (transferring Channels to individual Layers), and more, followed by the processing and re-assembly. It seems quite promising. A pictorial breakdown by regions of focus would be great, achieved by “using computer vision to understand the contents of photography”, just as you wrote. (Long ago I was introduced to some possibilities along those lines that were the intellectual property of Segmentis Ltd. in my homeland, the company that developed the buZZ Pro plug-in for Photoshop about a decade ago.)
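For readers who want a concrete feel for the tonal deconstruction described above, here is a minimal Python sketch (not the commenter’s actual Photoshop workflow): it splits a photo into shadow, midtone and highlight regions by luminance, applies a separate illustrative adjustment to each “pictorial chunk”, and reassembles the result. The thresholds and per-region adjustments are assumptions chosen purely for demonstration.

```python
import numpy as np
from PIL import Image

def decompose_and_reassemble(path, out_path="reassembled.jpg"):
    # Load the image as floats in [0, 1].
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

    # Luminance (Rec. 709 weights) drives the tonal split.
    lum = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]

    # Divide the tonal range into three illustrative "chunks".
    shadows = lum < 1.0 / 3.0
    highlights = lum > 2.0 / 3.0
    midtones = ~shadows & ~highlights

    # Process each region independently (example adjustments only).
    out = img.copy()
    out[shadows] *= 0.8                                   # deepen shadows
    out[midtones] = out[midtones] ** 0.9                  # lift midtones slightly
    out[highlights] = 1.0 - (1.0 - out[highlights]) * 0.7 # soften highlights

    # Reassemble and save.
    Image.fromarray((np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)).save(out_path)

# Example usage (hypothetical filename):
# decompose_and_reassemble("photo.jpg")
```

A color-channel or focus-region split would follow the same pattern, only with different masks driving which pixels each processing step touches.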