Check out how it enables real objects to occlude virtual ones:
Here’s a somewhat deeper dive into the whole shebang:
The features are designed to be widely available, not requiring special sensors:
The Depth API is not dependent on specialized cameras and sensors, and it will only get better as hardware improves. For example, the addition of depth sensors, like time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps to improve existing capabilities like occlusion, and unlock new capabilities such as dynamic occlusion—the ability to occlude behind moving objects.
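The occlusion described above boils down to a per-pixel depth comparison: a virtual fragment is drawn only where it's closer to the camera than the real-world surface behind it. Here's a minimal NumPy sketch of that idea (the function name and array shapes are my own illustration, not the actual ARCore API):

```python
import numpy as np

def composite_with_occlusion(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """Toy per-pixel occlusion test, assuming HxWx3 color images and HxW
    depth maps in the same units. A virtual fragment is visible only where
    its depth is smaller (closer to the camera) than the estimated
    real-world depth at that pixel."""
    visible = virtual_depth < real_depth        # HxW boolean mask
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]         # draw virtual content only where unoccluded
    return out
```

With a depth sensor (or a learned depth map), `real_depth` is what the device estimates for the scene; the better that estimate, the cleaner the occlusion boundary.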
And we’re looking for partners:
We’ve only begun to scratch the surface of what’s possible with the Depth API and we want to see how you will innovate with this feature. If you are interested in trying the new Depth API, please fill out our call for collaborators form.
Now, you can turn a photo into a portrait on Pixel by blurring the background post-snap. So whether you took the photo years ago, or you forgot to turn on portrait mode, you can easily give each picture an artistic look with Portrait Blur in Google Photos.
I’m also pleased to see that the real-time portrait-blurring tech my team built has now come to Google Duo for use during video calls:
I’ll always owe Russell Brown a great debt for bending the arc of my career, and I’m so happy to see him staying crazy after all these (35+!!) years at Adobe. In the entertaining video below, he squeezes great images out of phones & tablets while squeezing himself through the slot canyons of the Southwest—and not going all “127 Hours” in the process!
Prepare for retinal blast-off (and be careful if you’re sensitive to flashing lights).
What happens when everything in the world has been photographed? From multiple angles, multiple times per day? Eventually we’ll piece those photos and videos together to be able to see the entire history of a location from every possible angle.
“I sifted through probably ~100,000 photos on Instagram using location tags and hashtags, then sorted, and then hand-animated in After Effects to create a crowdsourced hyperlapse video of New York City,” Morrison tells PetaPixel. “I think the whole project took roughly 200 hours to create!”
Looks like a simple but perhaps compelling use of ML & AR:
Zenia encompasses the best of computer vision and machine learning. She uses motion tracking and the data from thousands of yoga lessons to analyze my movements. During the practice, Zenia provides gentle feedback and also takes care of basic safety rules.
Now when you share one-off photos and videos, you’ll have the option to add them to an ongoing, private conversation in the app. This gives you one place to find the moments you’ve shared with your friends and family…
You can like photos or comment in the conversation, and you can easily save these photos or videos to your own gallery. This feature isn’t designed to replace the chat apps you already use, but we do hope it improves sharing memories with your friends and family in Google Photos. This is gradually rolling out over the next week.
Bonus smart-ass response o’ the day:
Sometimes Google goes two or three months without launching a new messaging app and I get worried. So this news comes as a great relief https://t.co/QFz5Q7iye3
Hey gang—I’m working my way out of the traditional tryptophan-induced haze enough to wish you a slightly belated Happy Thanksgiving. I hope you were able to grab a restful few days. Amidst bleak (for Cali) weather I was able to grab a few fun tiny planet shots (see below) and learn about how to attach a 360º cam to a drone (something I’ve not yet been brave/foolhardy enough to try):