“Are they gonna use the Snapchat dancing hot dog to steer them or what?” — Henry Nack, age 11, bringing the 🔥 feature requests 😌
Funded by the US military and developed by a Seattle-based company called Command Sight, the new goggles will allow handlers to see through a dog’s eyes and give directions while staying out of sight and at a safe distance.
Viewing the feed from the goggles’ built-in camera, the handler can direct the dog by moving an augmented-reality visual indicator within the dog’s field of view.
That’s what I said—or at least what I thought—upon seeing Paul next to me in line for coffee at Google. I’d known his name & work for decades, especially via my time PM’ing features related to HDR imaging—a field in which Paul is a pioneer.
Anyway, Paul & his team have been at Google for the last couple of years, and he’ll be giving a keynote talk at VIEW 2020 on Oct 18th. “You can now register for free access to the VIEW Conference Online Edition,” he notes, “to livestream its excellent slate of animation and visual effects presentations.”
In this talk I’ll describe the latest work we’ve done at Google and the USC Institute for Creative Technologies to bridge the real and virtual worlds through photography, lighting, and machine learning. I’ll begin by describing our new DeepView solution for Light Field Video: Immersive Motion Pictures that you can move around in after they have been recorded. Our latest light field video techniques record six-degrees-of-freedom virtual reality where subjects can come close enough to be within arm’s reach. I’ll also present how Google’s new Light Stage system paired with Machine Learning techniques is enabling new techniques for lighting estimation from faces for AR and interactive portrait relighting on mobile phone hardware. I will finally talk about how both of these techniques may enable the next advances in virtual production filmmaking, infusing both light fields and relighting into the real-time image-based lighting techniques now revolutionizing how movies and television are made.
A new project uses sonification to turn astronomical images from NASA’s Chandra X-ray Observatory and other telescopes into sound, letting users “listen” to the center of the Milky Way as observed in X-ray, optical, and infrared light. As the cursor moves across the image, sounds represent the position and brightness of the sources.
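The mapping described above (position → time and pitch, brightness → loudness) can be sketched in a few lines. This is a toy illustration of the general idea, not NASA’s actual pipeline; the function name, thresholds, and frequency range are all my own stand-ins.

```python
def sonify(image, duration=4.0, f_lo=220.0, f_hi=880.0, threshold=0.2):
    """Turn a 2D brightness grid (values 0..1) into (time, freq, amplitude)
    events: the cursor sweeps left to right, so x maps to onset time,
    y maps to pitch (higher rows sound higher), brightness maps to volume."""
    height, width = len(image), len(image[0])
    events = []
    for x in range(width):                          # cursor position
        t = duration * x / max(width - 1, 1)        # x -> time
        for y in range(height):
            b = image[y][x]
            if b < threshold:                       # skip faint background
                continue
            # smaller y (nearer the top of the image) -> higher pitch
            freq = f_lo + (f_hi - f_lo) * (height - 1 - y) / max(height - 1, 1)
            events.append((round(t, 2), round(freq, 1), b))
    return events

# Two "sources" in a 3x4 sky patch: a faint one mid-frame, a bright one
# at the top-right corner.
sky = [[0.0, 0.0, 0.0, 1.0],
       [0.0, 0.5, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
print(sonify(sky))
```

Feeding these events to any synth (or the stdlib `wave` module) would produce the kind of left-to-right “scan” you hear in the Chandra clips.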
Seems like the Illustrator feature sneak-peeked last year is about to be released. Of course, I wouldn’t be a salty B 🙃 if I didn’t slip in some reference to all this going back ~15 years to Adobe Kuler & Illustrator Live Color & whatnot. (Okay, there—personal brand promise kept!)
Okay, not wars—how about enamel pins? Color me a little skeptical that the augmented reality portion of these pins will get much use, but hey, if it’s just a nice little bonus on something people already wanted, what the heck?
The tech itself relies on not one, but two neural networks: one to remove “foreign” shadows that are cast by unwanted objects like a hat or a hand held up to block the sun in your eyes, and the other to soften natural facial shadows and add “a synthetic fill light” to improve the lighting ratio once the unwanted shadows have been removed.
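Just to make the two-stage structure concrete, here’s a minimal sketch of how such a pipeline composes. The real stages are learned neural networks; these placeholder functions (names and arithmetic are mine, not Google’s) only show the order of operations: strip foreign shadows first, then soften what remains with a synthetic fill.

```python
def remove_foreign_shadows(image, shadow_mask):
    """Stage 1 stand-in: a real network detects and inpaints shadows cast
    by unwanted objects (hats, hands). Here we just brighten the masked
    pixels back toward their unshadowed level."""
    return [[min(1.0, px / 0.5) if masked else px
             for px, masked in zip(row, mask_row)]
            for row, mask_row in zip(image, shadow_mask)]

def soften_facial_shadows(image, fill_strength=0.3):
    """Stage 2 stand-in: lift dark regions toward a synthetic fill light,
    improving the lighting ratio. Darker pixels receive more fill, so
    highlights are left nearly untouched."""
    return [[px + fill_strength * (1.0 - px) ** 2 for px in row]
            for row in image]

def relight(image, shadow_mask):
    # Order matters: foreign shadows are removed before the remaining
    # natural facial shadows are softened.
    return soften_facial_shadows(remove_foreign_shadows(image, shadow_mask))

# A 2x2 "face" with a foreign shadow on the top-left pixel.
face = [[0.2, 0.8],
        [0.1, 0.5]]
mask = [[True, False],
        [False, False]]
print(relight(face, mask))
```

The per-pixel math is obviously nothing like the learned models, but the composition (mask-and-remove, then global fill) mirrors the two-network design the article describes.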
I have been waiting, I kid you not, since the Bush Administration to have an easy way to adjust lighting on faces. I just didn’t expect it to appear on my telephone before it showed up in Photoshop, but ¯\_(ツ)_/¯. Anyway, check out what you can now do on Pixel 4 & 5 devices:
This feature arrives, as PetaPixel notes, as one of several new Suggestions:
Nestled into a new ‘Suggestions’ tab that shows up first in the Photos editor, the options displayed there “[use] machine learning to give you suggestions that are tailored to the specific photo you’re editing.” For now, this only includes three options—Color Pop, Black & White, and Enhance—but more suggestions will be added “in the coming months” to deal specifically with portraits, landscapes, sunsets, and beyond.
Lastly, the photo editor overall has gotten its first major reorganization since we launched it in 2015: