The folks behind Moment lenses have launched a new contest & (tiny) festival celebrating travel photography. The vid below is a bit (or a lot) twee-hipster for my old-man tastes, but I thought the whole thing was interesting enough to share:
It’s not high art (never is!), but I had a little fun combining Mavic & iPhone footage with classic War riffs to create this look at our son Henry bombing around the wonderful Victoria’s Cellars Vineyard in a ’52 lowrider Bel Air. (Not pictured: my sudden stop as I realized I was about to fly sideways into some telephone wires.)
Particularly as the uncle of a little dude who uses a wheelchair, I’m very happy & proud to see this news:
Google announced this morning via blog post that it has partnered with the Christopher & Dana Reeve Foundation to give away 100,000 Home Mini units to people living with paralysis. The news is designed to mark the 29th anniversary of the Americans with Disabilities Act (ADA), which was signed into law on this day in 1990.
There’s a form on Google’s site for people who qualify and their caregivers. Interested parties must live in the United States to receive a unit.
In this fantastic short titled Spatial Bodies, actual footage of the Osaka skyline is morphed into a physics-defying world of architecture where apartment buildings twist and curve like vines, suspended in the sky without regard for gravity. The film was created by AUJIK, a collaborative of artists and filmmakers that refers to itself as a “mysterious nature/tech cult.”
It removes issues like halos and artifacts at the edges and horizon; allows you to adjust depth of field, tone, exposure, and color after the new sky has been dropped in; correctly detects the horizon line and the orientation of the sky to replace; and intelligently “relights” the rest of your photo to match the new sky, “so they appear they were taken during the same conditions.”
Check out the article link to see some pretty compelling-looking examples.
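If you’re curious what’s happening under the hood at the most basic level, here’s a crude numpy sketch of the two core steps, masked compositing plus a global color shift for the “relighting.” Everything here is my own simplification, not Luminar’s actual algorithm:

```python
import numpy as np

def replace_sky(image, new_sky, sky_mask, relight=0.2):
    """image, new_sky: HxWx3 float RGB in [0, 1]; sky_mask: HxW bool, True = sky.
    Assumes a nonempty mask; the real tool detects the sky and horizon for you."""
    m = sky_mask[..., None].astype(float)
    out = image * (1 - m) + new_sky * m   # composite the new sky in
    # "Relight": nudge the foreground toward the new sky's color cast,
    # measured relative to the old sky, so the two appear to match.
    cast = new_sky[sky_mask].mean(axis=0) - image[sky_mask].mean(axis=0)
    return np.clip(out + (1 - m) * relight * cast, 0.0, 1.0)
```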
Google product teams aspire to “three-comma moments” (i.e. reaching 1,000,000,000 users); congrats to Photos for joining this rarefied club!
Aiming to extend Photos’ magic to even more people around the world, the team has introduced Gallery Go, a super lightweight app designed for offline use, especially on entry-level phones.
Gallery Go is a new app from Google designed to let people with unreliable internet connections organize and edit their photos. Like Google’s regular Photos app, it uses machine learning to organize your photos. You can also use it to auto-enhance your pictures and apply filters. The difference is that Gallery Go is designed to work offline, and takes up just 10MB of space on your phone.
In case you’ve ever wondered about the math behind how placing, say, virtual spiders on my kid works, wonder no more: my teammates have published lots o’ details.
One of the key challenges in enabling AR features is proper anchoring of the virtual content to the real world, a process referred to as tracking. In this paper, we present a system for motion tracking, which is capable of robustly tracking planar targets and performing relative-scale 6DoF tracking without calibration. Our system runs in real-time on mobile phones and has been deployed in multiple major products on hundreds of millions of devices.
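The paper has the real math, but as a toy illustration of what 6DoF (rotation + translation) tracking buys you, here’s a minimal pinhole-camera sketch in Python. The function, names, and numbers are all mine, purely for illustration, not the paper’s system: once you know the camera’s pose each frame, a virtual anchor stays glued to the world.

```python
import numpy as np

def project_anchor(point_world, R, t, focal_px=1000.0, cx=640.0, cy=360.0):
    """Project a 3D world-space anchor into pixel coordinates.

    R, t define the world-to-camera transform (3x3 rotation, length-3
    translation); focal_px/cx/cy are a made-up pinhole camera intrinsic.
    """
    x, y, z = R @ point_world + t        # world -> camera coordinates
    if z <= 0:                           # behind the camera: not visible
        return None
    return (focal_px * x / z + cx, focal_px * y / z + cy)

# A virtual spider anchored 2 m in front of the camera's starting position.
anchor = np.array([0.0, 0.0, 2.0])

R = np.eye(3)                            # no rotation in this toy example
t0 = np.zeros(3)                         # camera at the world origin
t1 = np.array([-0.1, 0.0, 0.0])          # t = -R @ C: camera center moved to x = +0.1 m

print(project_anchor(anchor, R, t0))     # (640.0, 360.0): dead center
print(project_anchor(anchor, R, t1))     # (590.0, 360.0): anchor shifts left as the camera moves right
```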
I’ve gotta say, they look pretty gnarly in 3D (below). I wonder whether these creepy photogrammetry(?)-produced results are net-appealing to customers. I have the same question about AR clothing try-on: even if we make it magically super accurate, do I really want to see my imperfect self rocking some blazer or watch, or would I rather see a photo of Daniel Craig doing it & just buy the dream that I’ll look similar?
Fortunately, I found the visual appearance much more pleasing when rendered in AR on my phone vs. when rendered in 3D on my Mac, at least until I zoomed in excessively.
“If you want to be a better photographer, [fly] in front of more interesting things…” This eclipse hyperlapse is rad:
“I wasn’t sure if it was going to work but I didn’t want to use it manually because I wanted to watch what was my first-ever eclipse,” [photographer Matt] Robinson tells PetaPixel. “Around 10 minutes before totality, the drone was sent up above our camp and programmed to fly along and above the spectacular Elqui Valley in Chile.”
I am godawful at swimming in a straight line when putting my face in the water, so I’d really appreciate a version of these things that would show an arrow blinking “You’re going the wrong way!!”
Let’s get upside down, baby. The AR tracking & rendering tech we’ve been making is bringing printed ads to life:
Inside the NYT, readers will find a full page ad in the Main News section and quarter page ads both in Arts and Business sections of the paper with a CTA encouraging readers to scan the ads with Google Lens, where they might find that things are stranger than they seem. 🙃
Tangentially related: this is bonkers:
This is amazing—Stranger Things 3's Starcourt Mall wasn't a sound stage. It was all built inside an actual dying mall in Georgia. And the set designers made more than simple storefronts—they made FULL INTERIORS, even for stores that were never seen on-screen… pic.twitter.com/v5RahFLPeR
EasyJet has launched a brand-new hand luggage app that enables customers to check their bag size before they leave for the airport. The technology uses 3D augmented reality to show whether a bag will fit within the airline’s cabin baggage dimensions.
20 or so of my teammates hail from Russia or Belarus (the second-most bummed-out place on earth, apparently—although in my brief visit I found it to be lovely), and we bond over pitch-dark humor. In that vein, enjoy (?) this bleak riff on The Simpsons’ intro—really putting the “gag” in “couch gag”:
Irish photographer Páraic Mc Gloughlin has a real knack for finding patterns among huge corpora of data (e.g. from Google Earth; see previous). Now he’s making music videos:
Mc Gloughlin’s latest work is for the band Weval’s track “Someday,” and features the filmmaker’s signature fusion of geometric shapes found in historical domes, skyscraper facades, and farmland irrigation systems. The tightly edited video shows quickly-passing frames that shift in time with the music, visually quaking or smoothly transitioning depending on the percussive and melodic elements of the song.
“Paint using neurons instead of pixels,” promises GAN Dissection, a framework to “let you explore what a GAN (generative adversarial network) has learned by examining and manipulating its internal neurons.” Check out how it can invent details like trees & doorways based on the target image:
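If you want a feel for the core move without running the full framework, here’s a toy PyTorch sketch of intervening on individual generator channels. The tiny generator and the choice of units are mine, purely for illustration; the real system picks units by measuring how well each one aligns with a visual concept:

```python
import torch
import torch.nn as nn

# Tiny stand-in generator: 16-dim latent -> 3x32x32 image. (My toy network,
# not one of the GANs the paper dissects.)
generator = nn.Sequential(
    nn.ConvTranspose2d(16, 32, 4, 1, 0),  # 1x1 -> 4x4
    nn.ReLU(),
    nn.ConvTranspose2d(32, 32, 4, 2, 1),  # 4x4 -> 8x8 (the layer we "dissect")
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 4, 0),   # 8x8 -> 32x32
    nn.Tanh(),
)

# Hypothetical unit indices; GAN Dissection chooses them by checking how well
# each unit's activation map agrees with a concept (e.g. "tree").
UNITS = [3, 7, 11]

def ablate_units(module, inputs, output):
    # Zero the selected channels; returning a tensor from a forward hook
    # replaces the layer's output downstream.
    output = output.clone()
    output[:, UNITS] = 0.0
    return output

z = torch.randn(1, 16, 1, 1)
with torch.no_grad():
    baseline = generator(z)
    handle = generator[2].register_forward_hook(ablate_units)
    ablated = generator(z)
    handle.remove()

# In a real GAN, zeroing "tree units" removes trees; boosting them paints
# trees in. Here we just confirm the intervention changed the image.
print((baseline - ablated).abs().mean().item())
```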
I’m reminded of the O.G. PatchMatch demo from 10 (!) years ago that led us to put Content-Aware Fill (itself based on a subset of that work) into Photoshop:
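Since the core PatchMatch loop is delightfully simple, here’s a compact numpy sketch of it, paraphrasing the Barnes et al. 2009 algorithm (random initialization, propagation, random search) with my own naming, and with no claim to matching Photoshop’s implementation:

```python
import numpy as np

def patch_dist(A, B, ay, ax, by, bx, p):
    # Sum of squared differences between two p x p patches.
    d = A[ay:ay+p, ax:ax+p] - B[by:by+p, bx:bx+p]
    return float(np.sum(d * d))

def patchmatch(A, B, p=5, iters=4, seed=0):
    rng = np.random.default_rng(seed)
    ah, aw = A.shape[0] - p + 1, A.shape[1] - p + 1   # valid patch origins in A
    bh, bw = B.shape[0] - p + 1, B.shape[1] - p + 1   # ... and in B

    # 1) Random initialization of the nearest-neighbor field (NNF).
    nnf = np.stack([rng.integers(0, bh, (ah, aw)),
                    rng.integers(0, bw, (ah, aw))], axis=-1)
    cost = np.array([[patch_dist(A, B, y, x, nnf[y, x, 0], nnf[y, x, 1], p)
                      for x in range(aw)] for y in range(ah)])

    def try_match(y, x, by, bx):
        by = int(np.clip(by, 0, bh - 1)); bx = int(np.clip(bx, 0, bw - 1))
        d = patch_dist(A, B, y, x, by, bx, p)
        if d < cost[y, x]:
            nnf[y, x] = (by, bx); cost[y, x] = d

    for it in range(iters):
        order = 1 if it % 2 == 0 else -1  # alternate scan direction each pass
        ys = range(ah) if order == 1 else range(ah - 1, -1, -1)
        for y in ys:
            xs = range(aw) if order == 1 else range(aw - 1, -1, -1)
            for x in xs:
                # 2) Propagation: good matches are spatially coherent, so
                # try the neighbor's match, shifted by one pixel.
                if 0 <= y - order < ah:
                    try_match(y, x, nnf[y - order, x, 0] + order, nnf[y - order, x, 1])
                if 0 <= x - order < aw:
                    try_match(y, x, nnf[y, x - order, 0], nnf[y, x - order, 1] + order)
                # 3) Random search around the current best, halving the
                # radius each time, to escape local minima.
                r = max(bh, bw)
                while r >= 1:
                    try_match(y, x,
                              nnf[y, x, 0] + rng.integers(-r, r + 1),
                              nnf[y, x, 1] + rng.integers(-r, r + 1))
                    r //= 2
    return nnf

A = np.random.default_rng(1).random((40, 40))
B = np.roll(A, (3, 5), axis=(0, 1))   # B is a shifted copy, so exact matches exist
print(patchmatch(A, B)[0, 0])          # where A's top-left patch matched in B
```

With B a shifted copy of A, the recovered offset should settle on (3, 5) within a few iterations; the whole trick is that propagation spreads each lucky random guess across the image instead of exhaustively comparing every patch pair.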
In his recent short film LUFTRAUM, Basel-based artist Dirk Koy utilized a drone to capture buildings and busy roundabouts from above. These roads and structures were then isolated and stacked to create a perpetually spiraling collage of disorienting urban infrastructure.