Warrior dogs to get AR goggles

“Are they gonna use the Snapchat dancing hot dog to steer them or what?” — Henry Nack, age 11, bringing the 🔥 feature requests 😌

Funded by the US military and developed by a Seattle-based company called Command Sight, the new goggles will allow handlers to see through a dog’s eyes and give directions while staying out of sight and at a safe distance.

Looking through the dog's eyes via the goggles' built-in camera, the handler can direct the dog by controlling an augmented reality visual indicator displayed inside the goggles.

Check out “Light Fields, Light Stages, and the Future of Virtual Production”

“Holy shit, you’re actually Paul Debevec!”

That’s what I said—or at least what I thought—upon seeing Paul next to me in line for coffee at Google. I’d known his name & work for decades, especially via my time PM’ing features related to HDR imaging—a field in which Paul is a pioneer.

Anyway, Paul & his team have been at Google for the last couple of years, and he’ll be giving a keynote talk at VIEW 2020 on Oct 18th. “You can now register for free access to the VIEW Conference Online Edition,” he notes, “to livestream its excellent slate of animation and visual effects presentations.”

In this talk I'll describe the latest work we've done at Google and the USC Institute for Creative Technologies to bridge the real and virtual worlds through photography, lighting, and machine learning. I'll begin by describing our new DeepView solution for Light Field Video: Immersive Motion Pictures that you can move around in after they have been recorded. Our latest light field video techniques record six-degrees-of-freedom virtual reality where subjects can come close enough to be within arm's reach. I'll also present how Google's new Light Stage system paired with Machine Learning techniques is enabling new techniques for lighting estimation from faces for AR and interactive portrait relighting on mobile phone hardware. I will finally talk about how both of these techniques may enable the next advances in virtual production filmmaking, infusing both light fields and relighting into the real-time image-based lighting techniques now revolutionizing how movies and television are made.
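
For anyone new to this area, the core light stage idea is easy to state: light transport is linear, so a subject photographed under hundreds of one-light-at-a-time (OLAT) conditions can be relit by any environment simply by weighting and summing those basis photos. Here is a minimal sketch in Python; the array shapes and names are my own assumptions, not Google's code.

import numpy as np

def relight(olat_stack, env_weights):
    """Relight a subject by weighting OLAT basis photos.

    olat_stack:  (n_lights, H, W, 3) linear-radiance photos, one per light
    env_weights: (n_lights, 3) RGB of the target environment, sampled in
                 each stage light's direction
    Returns an (H, W, 3) relit image.
    """
    # Light transport is linear, so:
    #   relit[h, w, c] = sum_n env_weights[n, c] * olat_stack[n, h, w, c]
    return np.einsum('nc,nhwc->hwc', env_weights, olat_stack)

# Toy usage with random arrays standing in for real captures:
olat = np.random.rand(156, 64, 64, 3)   # e.g. a 156-light stage
env = np.random.rand(156, 3)            # environment map sampled per light
relit = relight(olat, env)              # (64, 64, 3)

Roughly speaking, the machine-learning systems Paul describes learn to approximate results like this from a single photo, without the capture rig.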

Put the “AR” in “art” via Google Arts & Culture

I’m excited to see the tech my team has built into YouTube, Duo, and other apps land in Arts & Culture, powering five new fun experiences:

Snap a video or image of yourself to become Van Gogh or Frida Kahlo’s self-portraits, or the famous Girl with a Pearl Earring. You can also step deep into history with a traditional Samurai helmet or a remarkable Ancient Egyptian necklace.

To get started, open the free Google Arts & Culture app for Android or iOS and tap the rainbow camera icon at the bottom of the homepage.

NASA brings the in sound from way out

Nothing can stop us now
We are all playing stars…

A new project using sonification turns astronomical images from NASA's Chandra X-ray Observatory and other telescopes into sound. This allows users to "listen" to the center of the Milky Way as observed in X-ray, optical, and infrared light. As the cursor moves across the image, sounds represent the position and brightness of the sources.
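
If you're curious how such a mapping might work, here is a toy version in Python: pitch tracks a source's vertical position, volume tracks its brightness, and the "cursor" sweeps the image columns left to right. This is an illustrative mapping of my own, not the Chandra team's actual pipeline.

import numpy as np

SAMPLE_RATE = 22050
COL_SECONDS = 0.05   # time the "cursor" spends on each image column

def sonify(image, f_lo=200.0, f_hi=2000.0):
    """image: (H, W) grayscale array in [0, 1]; returns mono float samples."""
    h, w = image.shape
    # Top rows get high pitches, spaced logarithmically like a scale.
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(h)[::-1] / max(h - 1, 1))
    t = np.arange(int(SAMPLE_RATE * COL_SECONDS)) / SAMPLE_RATE
    chunks = []
    for col in range(w):
        volumes = image[:, col]                         # brightness -> volume
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sine per row
        chunks.append((volumes[:, None] * tones).sum(axis=0))
    audio = np.concatenate(chunks)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio          # normalize to [-1, 1]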

Google & researchers demo AI-powered shadow removal

Speaking of Google photography research (see previous post about portrait relighting), I’ve been meaning to point to the team’s collaboration with MIT & Berkeley. As PetaPixel writes,

The tech itself relies on not one, but two neural networks: one to remove “foreign” shadows that are cast by unwanted objects like a hat or a hand held up to block the sun in your eyes, and the other to soften natural facial shadows and add “a synthetic fill light” to improve the lighting ratio once the unwanted shadows have been removed.
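
To make that division of labor concrete, here is a structural sketch of the two-stage pipeline in Python. Both functions are stand-ins for the paper's trained networks (the names are hypothetical); only the staging, and the crude idea of a fill light lifting shadowed regions, are meant to carry over.

import numpy as np

def remove_foreign_shadows(image):
    """Stand-in for network 1: erase shadows cast by external objects
    (hats, hands, tree branches). A trained CNN would go here."""
    return image

def soften_facial_shadows(image, fill_strength=0.3):
    """Stand-in for network 2: reduce the lighting ratio with a synthetic
    fill light. Crude illustration: lift dark regions toward the image
    mean, which is roughly what a physical fill light does."""
    fill = fill_strength * np.maximum(image.mean() - image, 0)
    return np.clip(image + fill, 0, 1)

def enhance_portrait(image):
    # The quoted pipeline: foreign shadows first, then facial softening.
    return soften_facial_shadows(remove_foreign_shadows(image))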

Here’s a nice summary from Two-Minute Papers:

https://youtu.be/qeZMKgKJLX4

Interactive Portrait Light comes to Google Photos on Pixel; editor gets upgraded

I have been waiting, I kid you not, since the Bush Administration to have an easy way to adjust lighting on faces. I just didn’t expect it to appear on my telephone before it showed up in Photoshop, but ¯\_(ツ)_/¯. Anyway, check out what you can now do on Pixel 4 & 5 devices:

This feature arrives, as PetaPixel notes, as one of several new Suggestions:

Nestled into a new ‘Suggestions’ tab that shows up first in the Photos editor, the options displayed there “[use] machine learning to give you suggestions that are tailored to the specific photo you’re editing.” For now, this only includes three options—Color Pop, Black & White, and Enhance—but more suggestions will be added “in the coming months” to deal specifically with portraits, landscapes, sunsets, and beyond.
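
As for Portrait Light itself, the interactive part is conceptually simple once you have per-pixel surface normals for the face, which the phone estimates with machine learning. Here is a toy Lambertian version in Python; the shading model and names are my simplification, not the shipped implementation.

import numpy as np

def add_portrait_light(image, normals, light_dir, strength=0.5):
    """image:     (H, W, 3) photo in [0, 1]
    normals:   (H, W, 3) unit surface normals for the face
    light_dir: (3,) direction toward the light; normalized below."""
    d = np.asarray(light_dir, dtype=float)
    d /= np.linalg.norm(d)
    # Lambert's law: added light is proportional to max(n . l, 0).
    shade = np.clip(np.einsum('hwc,c->hw', normals, d), 0, None)
    lit = image * (1.0 + strength * shade[..., None])
    return np.clip(lit, 0.0, 1.0)

# Dragging the light around in the UI amounts to calling this
# with a new light_dir.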

Lastly, the photo editor overall has gotten its first major reorganization since we launched it in 2015: