Toronto mom Angela Young and her two children Lilah and Levi celebrated both the 2019 holiday season and the 20th anniversary of the Beastie Boys’ music video for “Intergalactic” by recreating part of the celebrated video for their annual holiday card. Young and her kids, dressed in the same hazard suits, black vests with fluorescent stripes, yellow boots, and gloves as in the original, struck poses and danced their way around Toronto’s underground PATH, Union Station, and the TTC subway.
Hey everyone—happy holidays & Merry Christmas from me, Margot, Seamus, and the Micronaxx to you & yours. Thanks so much for being a reader (“the few, the ostensibly proud” 😛), and here’s to making more funky, inspiring discoveries in the new year. Meanwhile, here’s a quick glimpse of our tour of the holiday lights in Los Gatos (complete with throbbing fonky beatz in the tunnel of lights 🙃).
I’ll admit that I haven’t yet taken the plunge into photogrammetry, but this tutorial makes me think I just might be able to do it. (And as we close out 2019, let’s take a moment to note how bonkers it is that for the price of a few hundred dollars in flying gear, just about anyone can generate 3D geometry and share it to just about any device sporting a Web browser!)
I know this post might be of super niche interest, but I’m going to try out its recommendations tonight when we drive through holiday lights. I think the flowcharts basically boil down to “Go manual, keep ISO at 400 or lower, and bump it up/down to get the exposure right. Oh, and set shutter speed to 2x frame rate for no motion & 4x for moderate motion.” Any shooting tips you may have to share are most welcome as well!
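My reading of the flowcharts, sketched as a tiny helper (this is my own paraphrase, not code from the post; the function names are made up):

```python
def shutter_speed(frame_rate_fps, motion="low"):
    """Shutter speed in seconds: 1/(2x frame rate) for little motion,
    1/(4x frame rate) for moderate motion, per the flowchart rule."""
    multiplier = 2 if motion == "low" else 4
    return 1.0 / (frame_rate_fps * multiplier)

def clamp_iso(iso):
    """Keep ISO at 400 or lower, per the 'go manual' advice."""
    return min(iso, 400)

# Shooting 30 fps video with little motion -> 1/60 s shutter:
print(shutter_speed(30))
# Moderate motion at 24 fps -> 1/96 s:
print(shutter_speed(24, "moderate"))
```

So for typical 30 fps video of mostly-static lights, you’d dial in roughly a 1/60 s shutter and then use ISO (capped at 400) to land the exposure.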
“FM technology: that stands for F’ing Magic…” So said the old Navy radio repair trainer, and it comes to mind reading about how the Google camera team used machine learning plus a dual-lens setup to deliver beautiful portraiture on the Pixel 4:
With the Pixel 4, we have made two more big improvements to this feature, leveraging both the Pixel 4’s dual cameras and dual-pixel auto-focus system to improve depth estimation, allowing users to take great-looking Portrait Mode shots at near and far distances. We have also improved our bokeh, making it more closely match that of a professional SLR camera.
Amidst all the frustrations that come along with working in any big company, I’ve always found that meeting amazing people at Google is far & away the best part of working here. In the last year I’ve gotten to collaborate a bit with Sasha Blair-Goldensohn, a Google Maps engineer who’s using his own life experience of needing a wheelchair to help others navigate the world more easily. Check out his story:
[T]he more I shared with colleagues, the more I found people who wanted to help solve real-world access needs. Using “20 percent time”—time spent outside day-to-day job descriptions—my colleagues like Rio Akasaka and Dianna Hu pitched in and we launched wheelchair-friendly transit directions. That initial work has now led to a full-time team dedicated to accessibility on Maps.
Boy, what I wouldn’t have given to have had this tech in Photoshop Touch, where Scribble Selection was the hotness du jour. Pam Clark writes,
This feature on the iPad works exactly the same as on Photoshop on the desktop and produces the same results, vastly enhancing selection capabilities and speed available on the iPad. With cloud documents, you can make a selection on the desktop or the iPad and continue your work seamlessly using Photoshop on another device with no loss of fidelity; no imports or exports required.
We originally released Select Subject in Photoshop on the desktop in 2018. The 2019 version now runs on both the desktop and the iPad and produces cleaner selection edges on the mask and delivers massively faster performance (almost instantaneous), even on the iPad.
I’m delighted to see the team’s work bringing new creators into the animation fold.
In this episode we take a look at a wide variety of interesting Character Animator projects, including an Emmy award winning series on ESPN, narration by a T-Rex, a successful Kickstarter campaign around trolley murder, and live events with robots and bunnies!
The feature is rolling out today; I was able to try it on my Pixel 4 without a hitch. It works across 44 languages, and is available on both Android and iOS. Google Assistant is built into Android phones and no separate app is required. For iOS, simply download the Google Assistant app to try it out.
In MediaPipe v0.6.7.1, we are excited to release a box tracking solution that has been powering real-time tracking in Motion Stills, YouTube’s privacy blur, and Google Lens for several years, and that leverages classic computer vision approaches. Pairing tracking with ML inference results in valuable and efficient pipelines. In this blog, we pair box tracking with object detection to create an object detection and tracking pipeline. With tracking, this pipeline offers several advantages over running detection per frame.
Read on for more, and let us know what you create!
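The detect-then-track pattern the post describes boils down to running the expensive ML detector only occasionally and letting a cheap classic-CV tracker carry the boxes between detections. Here’s a minimal sketch of that loop — `detect` and `track` are hypothetical stand-ins, not MediaPipe’s actual API:

```python
def detect(frame):
    """Stand-in for an expensive ML detector: returns boxes as (x, y, w, h)."""
    return [(10, 10, 40, 40)]  # placeholder result

def track(prev_boxes, frame):
    """Stand-in for a cheap tracker: propagates boxes to the new frame."""
    return [(x + 1, y, w, h) for (x, y, w, h) in prev_boxes]  # placeholder motion

def run_pipeline(frames, detect_every=5):
    """Run full detection only every `detect_every` frames; track in between."""
    boxes, results = [], []
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            boxes = detect(frame)        # periodic refresh via ML inference
        else:
            boxes = track(boxes, frame)  # lightweight tracking keeps latency low
        results.append(boxes)
    return results
```

The win is exactly what the quote claims: per-frame cost drops to the tracker’s, while periodic detection keeps the boxes from drifting indefinitely.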
It’s highly possible that my impending (and overdue) bedtime accounts for my finding this gleefully wacko Samsung ad so charming. But what the heck, it’s fun to see someone swing for the expressive fences.
An animated short film written and directed by Carol Freeman uses an old-fashioned technique called paint-on-glass to form each luminescent frame. The Bird & the Whale is composed of 4,300 paintings and tells the story of a young whale, struggling to find its voice, who finds a caged bird that is the sole survivor of a shipwreck.
“Sweded films” are amateur shot-for-shot recreations of famous movies (or in this case, trailers). The term was coined in the 2008 film Be Kind Rewind, in which two video store employees try to replace their entire ruined VHS collection by re-shooting each movie with neither budget nor skill.
The group doesn’t just create these remakes for their YouTube channel; they also host an annual “Swede Fest” that’s all about screening Hollywood remakes created with “backyard budgets.”
Check out how it enables real objects to occlude virtual ones:
Here’s a somewhat deeper dive into the whole shebang:
The features are designed to be widely available, not requiring special sensors:
The Depth API is not dependent on specialized cameras and sensors, and it will only get better as hardware improves. For example, the addition of depth sensors, like time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps to improve existing capabilities like occlusion, and unlock new capabilities such as dynamic occlusion—the ability to occlude behind moving objects.
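Occlusion with a depth map reduces to a per-pixel comparison: draw the virtual pixel only where the virtual surface is closer to the camera than the real one. Here’s an illustrative toy sketch of that test (my own simplification, not ARCore’s actual API; names and values are made up):

```python
def composite(camera_px, virtual_px, real_depth, virtual_depth):
    """Per-pixel occlusion test using depth values in meters."""
    out = []
    for cam, virt, rd, vd in zip(camera_px, virtual_px, real_depth, virtual_depth):
        # Keep the virtual pixel only if the virtual surface sits in front
        # of the real-world surface at that pixel.
        out.append(virt if (virt is not None and vd < rd) else cam)
    return out

# A real chair at 1.0 m occludes a virtual cat placed at 2.0 m:
camera  = ["wall", "chair", "wall"]
virtual = ["cat",  "cat",   None]
print(composite(camera, virtual, [3.0, 1.0, 3.0], [2.0, 2.0, 2.0]))
# → ['cat', 'chair', 'wall']
```

That middle pixel is the whole trick: the chair’s measured depth (1.0 m) beats the cat’s placed depth (2.0 m), so the real object wins and the virtual one appears to pass behind it.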
And we’re looking for partners:
We’ve only begun to scratch the surface of what’s possible with the Depth API and we want to see how you will innovate with this feature. If you are interested in trying the new Depth API, please fill out our call for collaborators form.
Now, you can turn a photo into a portrait on Pixel by blurring the background post-snap. So whether you took the photo years ago, or you forgot to turn on portrait mode, you can easily give each picture an artistic look with Portrait Blur in Google Photos.
I’m also pleased to see that the realtime portrait-blurring tech my team built has now come to Google Duo for use during video calls:
I’ll always owe Russell Brown a great debt for bending the arc of my career, and I’m so happy to see him staying crazy after all these (35+!!) years at Adobe. In the entertaining video below, he squeezes great images out of phones & tablets while squeezing himself through the slot canyons of the Southwest—and not going all “127 Hours” in the process!
Prepare for retinal blast-off (and be careful if you’re sensitive to flashing lights).
What happens when everything in the world has been photographed? From multiple angles, multiple times per day? Eventually we’ll piece those photos and videos together to be able to see the entire history of a location from every possible angle.
“I sifted through probably ~100,000 photos on Instagram using location tags and hashtags, then sorted, and then hand-animated in After Effects to create a crowdsourced hyperlapse video of New York City,” Morrison tells PetaPixel. “I think the whole project took roughly 200 hours to create!”
Looks like a simple but perhaps compelling use of ML & AR:
Zenia encompasses the best of computer vision and machine learning. She uses motion tracking and the data from thousands of yoga lessons to analyze my movements. During the practice, Zenia provides gentle feedback and also takes care of basic safety rules.
Now when you share one-off photos and videos, you’ll have the option to add them to an ongoing, private conversation in the app. This gives you one place to find the moments you’ve shared with your friends and family…
You can like photos or comment in the conversation, and you can easily save these photos or videos to your own gallery. This feature isn’t designed to replace the chat apps you already use, but we do hope it improves sharing memories with your friends and family in Google Photos. This is gradually rolling out over the next week.
Bonus smart-ass response o’ the day:
Sometimes Google goes two or three months without launching a new messaging app and I get worried. So this news comes as a great relief https://t.co/QFz5Q7iye3
Hey gang—I’m working my way out of the traditional tryptophan-induced haze enough to wish you a slightly belated Happy Thanksgiving. I hope you were able to grab a restful few days. Amidst bleak (for Cali) weather I was able to grab a few fun tiny planet shots (see below) and learn about how to attach a 360º cam to a drone (something I’ve not yet been brave/foolhardy enough to try):