Monthly Archives: July 2019

Google is giving away 100,000 Home Minis to people living with paralysis

Particularly as the uncle of a little dude who uses a wheelchair, this news makes me very happy & proud:

Google announced this morning via blog post that it has partnered with the Christopher & Dana Reeve Foundation to give away 100,000 Home Mini units to people living with paralysis. The news is timed to mark the 29th anniversary of the Americans with Disabilities Act (ADA), which was signed into law on this day in 1990.

There’s a form on Google’s site for people who qualify and their caregivers. Interested parties must live in the United States to receive a unit.

[YouTube]

Animation: Trippy Osaka bends before our eyes

🤯

Colossal writes,

In this fantastic short titled Spatial Bodies, actual footage of the Osaka skyline is morphed into a physics-defying world of architecture where apartment buildings twist and curve like vines, suspended in the sky without regard for gravity. The film was created by AUJIK, a collaborative of artists and filmmakers that refers to itself as a “mysterious nature/tech cult.”

Begone, lame skies!

Does anyone else remember when Adobe demoed automatic sky-swapping ~3 years ago, but then never shipped it… because, big companies? (No, just me?)

Anyway, Xiaomi is now offering a similar feature. Here’s a quick peek:

And here’s a more in-depth demo:

Coincidentally, “Skylum Announces Luminar 4 with AI-Powered Automatic Sky Replacement”:

It removes issues like halos and artifacts at the edges and horizon, allows you to adjust depth of field, tone, exposure and color after the new sky has been dropped in, correctly detects the horizon line and the orientation of the sky to replace, and intelligently “relights” the rest of your photo to match the new sky you just dropped in “so they appear [as if] they were taken during the same conditions.”

Check out the article link to see some pretty compelling-looking examples.
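For intuition, here’s a toy sketch of the basic compositing step using OpenCV, assuming hypothetical local files photo.jpg and new_sky.jpg, with a crude blue-ness heuristic standing in for the learned sky segmentation these products actually use:

```python
# Naive sky replacement: build a rough sky mask, feather it, and blend in a
# new sky. The file names and the color-threshold "segmentation" are
# assumptions for illustration, not Skylum's or Xiaomi's actual pipeline.
import cv2
import numpy as np

photo = cv2.imread("photo.jpg")                      # hypothetical input
sky = cv2.imread("new_sky.jpg")                      # hypothetical new sky
sky = cv2.resize(sky, (photo.shape[1], photo.shape[0]))

# Crude sky mask: pixels that are noticeably blue and bright.
b, g, r = cv2.split(photo.astype(np.float32))
mask = ((b > 120) & (b > r + 20) & (b > g + 10)).astype(np.float32)

# Feathering the mask edge is what fights the halo artifacts mentioned above.
mask = cv2.GaussianBlur(mask, (31, 31), 0)[..., None]

# Composite: new sky where the mask is on, the original photo elsewhere.
out = (sky * mask + photo * (1.0 - mask)).astype(np.uint8)
cv2.imwrite("composited.jpg", out)
```

A real pipeline swaps that threshold for a trained segmentation model, detects the horizon, and then relights the foreground to match the new sky’s color, which is the part Skylum is touting.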


[YouTube 1 & 2]

One billion+ people now use Google Photos every month

🎉🎉🎉

Google product teams aspire to “three-comma moments” (i.e. reaching 1,000,000,000 users); congrats to Photos for joining this rarefied club!

Aiming to extend Photos magic to even more people around the world, the team has introduced Gallery Go, a super lightweight app designed for offline use, especially on entry-level phones.

The Verge writes,

Gallery Go is a new app from Google designed to let people with unreliable internet connections organize and edit their photos. Like Google’s regular Photos app, it uses machine learning to organize your photos. You can also use it to auto-enhance your pictures and apply filters. The difference is that Gallery Go is designed to work offline, and takes up just 10MB of space on your phone.

[YouTube]

Fun AR nerdery: How Google’s object-tracking tech works

In case you’ve ever wondered about the math behind how placing, say, virtual spiders on my kid works, wonder no more: my teammates have published lots o’ details.

One of the key challenges in enabling AR features is proper anchoring of the virtual content to the real world, a process referred to as tracking. In this paper, we present a system for motion tracking, which is capable of robustly tracking planar targets and performing relative-scale 6DoF tracking without calibration. Our system runs in real-time on mobile phones and has been deployed in multiple major products on hundreds of millions of devices.

You can play with the feature via Motion Stills for Android and Playground for Pixel phones. 
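If you want to make the planar-tracking idea concrete, here’s a toy OpenCV sketch, assuming hypothetical files target.jpg (the planar reference) and frame.jpg (a camera frame). The paper’s system is far more robust, calibration-free, 6DoF, and real-time on phones, but the geometric core of anchoring starts with a homography like this:

```python
# Toy planar tracking: match features between a known planar target and a
# camera frame, then fit a RANSAC homography that maps the target plane
# into the frame. File names are assumptions for illustration.
import cv2
import numpy as np

target = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)  # planar reference
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)    # current camera frame

orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(target, None)
kp2, des2 = orb.detectAndCompute(frame, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# The homography is the "anchor": it pins virtual content to the surface.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = target.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
print(cv2.perspectiveTransform(corners, H))  # where the overlay lands in the frame
```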

I can haz cheeseburgAR?

Here’s an… appetizing one? The LA Times is offering “an augmented reality check on our favorite burgers.”


I’ve gotta say, they look pretty gnarly in 3D (below). I wonder whether these creepy photogrammetry(?)-produced results are net-appealing to customers. I have the same question about AR clothing try-on: even if we make it magically super accurate, do I really want to see my imperfect self rocking some blazer or watch, or would I rather see a photo of Daniel Craig doing it & just buy the dream that I’ll look similar?

Fortunately, I found the visual appearance much more pleasing when rendered in AR on my phone vs. when rendered in 3D on my Mac, at least as long as I didn’t zoom in excessively.


Set Drone Controls For The Heart Of The Sun

“If you want to be a better photographer, [fly] in front of more interesting things…” This eclipse hyperlapse is rad:

“I wasn’t sure if it was going to work but I didn’t want to use it manually because I wanted to watch what was my first-ever eclipse,” [photographer Matt] Robinson tells PetaPixel. “Around 10 minutes before totality, the drone was sent up above our camp and programmed to fly along and above the spectacular Elqui Valley in Chile.”

[YouTube]

Google Lens makes the NYT… Stranger

Let’s get upside down, baby. The AR tracking & rendering tech we’ve been making is bringing printed ads to life:

Inside the NYT, readers will find a full-page ad in the Main News section and quarter-page ads in both the Arts and Business sections of the paper, with a CTA encouraging readers to scan the ads with Google Lens, where they might find that things are stranger than they seem. 🙃

Tangentially related: this is bonkers:

Moon Reunion: Fun lunar stuff to check out

As we roll up on the 50th (!) anniversary of humanity visiting our biggest satellite:


Photography: A hyperkinetic maelstrom of patterns

Irish photographer Páraic Mc Gloughlin has a real knack for finding patterns among huge corpora of data (e.g. from Google Earth; see previous). Now he’s making music videos:

Mc Gloughlin’s latest work is for the band Weval’s track “Someday,” and features the filmmaker’s signature fusion of geometric shapes found in historical domes, skyscraper facades, and farmland irrigation systems. The tightly edited video shows quickly-passing frames that shift in time with the music, visually quaking or smoothly transitioning depending on the percussive and melodic elements of the song.

Brace yourself:

GANpaint promises to hallucinate details for your photos

“Paint using neurons instead of pixels,” promises GAN Dissection, a framework meant to “let you explore what a GAN (generative adversarial network) has learned by examining and manipulating its internal neurons.” Check out how it can invent details like trees & doorways based on the target image:
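The underlying trick, as I understand it: find the internal units whose activations line up with a concept (trees, doorways, domes), then zero or boost those units during generation. Here’s a minimal PyTorch sketch of that unit-editing step, using a tiny stand-in generator and made-up unit indices rather than the pretrained Progressive GAN the demo actually dissects:

```python
# Minimal sketch of GAN Dissection-style unit editing: suppress chosen
# feature-map units inside a generator and regenerate. The toy generator
# and the unit indices below are purely illustrative assumptions.
import torch
import torch.nn as nn

toy_generator = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),  # layer whose units we edit
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
)

UNITS_TO_EDIT = [3, 7, 12]  # hypothetical; the real tool picks units whose
                            # activations overlap segmented "tree" (etc.) pixels

def edit_units(module, inputs, output):
    # Zero the units to erase a concept; scale them up instead to "paint" it in.
    output[:, UNITS_TO_EDIT] = 0.0
    return output

toy_generator[0].register_forward_hook(edit_units)

with torch.no_grad():
    z = torch.randn(1, 64, 4, 4)   # latent input
    image = toy_generator(z)       # output now reflects the edited units
print(image.shape)                 # torch.Size([1, 3, 16, 16])
```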

I’m reminded of the O.G. PatchMatch demo from 10 (!) years ago that led us to put Content-Aware Fill (itself based on a subset of that work) into Photoshop:
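For the curious, here’s a heavily simplified, single-scale sketch of the core PatchMatch loop from that paper: random initialization, then alternating passes of propagation and random search. It’s nowhere near the shipping Content-Aware Fill (which layers a lot more on top), but it shows why the algorithm converges so quickly:

```python
# Simplified PatchMatch (Barnes et al. 2009): compute a nearest-neighbor
# field mapping each patch of grayscale image A to a similar patch in B.
import numpy as np

def patchmatch(A, B, patch=7, iters=4, seed=0):
    rng = np.random.default_rng(seed)
    h, w = A.shape[0] - patch + 1, A.shape[1] - patch + 1    # patch origins in A
    hb, wb = B.shape[0] - patch + 1, B.shape[1] - patch + 1  # patch origins in B

    def dist(ay, ax, by, bx):  # sum-of-squares patch difference
        d = A[ay:ay+patch, ax:ax+patch] - B[by:by+patch, bx:bx+patch]
        return float(np.sum(d * d))

    # Random initialization of the field and its match costs.
    nnf = np.stack([rng.integers(0, hb, (h, w)),
                    rng.integers(0, wb, (h, w))], axis=-1)
    cost = np.array([[dist(y, x, nnf[y, x, 0], nnf[y, x, 1])
                      for x in range(w)] for y in range(h)])

    for it in range(iters):
        step = 1 if it % 2 == 0 else -1  # alternate scan direction each pass
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        for y in ys:
            xs = range(w) if step == 1 else range(w - 1, -1, -1)
            for x in xs:
                # Propagation: try shifted offsets of already-visited neighbors.
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < h and 0 <= nx < w:
                        by = int(np.clip(nnf[ny, nx, 0] + (y - ny), 0, hb - 1))
                        bx = int(np.clip(nnf[ny, nx, 1] + (x - nx), 0, wb - 1))
                        d = dist(y, x, by, bx)
                        if d < cost[y, x]:
                            nnf[y, x], cost[y, x] = (by, bx), d
                # Random search: sample around the current best at shrinking radii.
                r = max(hb, wb)
                while r >= 1:
                    by = int(np.clip(nnf[y, x, 0] + rng.integers(-r, r + 1), 0, hb - 1))
                    bx = int(np.clip(nnf[y, x, 1] + rng.integers(-r, r + 1), 0, wb - 1))
                    d = dist(y, x, by, bx)
                    if d < cost[y, x]:
                        nnf[y, x], cost[y, x] = (by, bx), d
                    r //= 2
    return nnf

rng = np.random.default_rng(1)
A, B = rng.random((32, 32)), rng.random((32, 32))
print(patchmatch(A, B).shape)  # (26, 26, 2): a (y, x) match in B per patch of A
```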

[YouTube 1 & 2] [Via Product Hunt]