Monthly Archives: January 2019

New 65mm “Apollo 11” looks amazing

I am, as the kids would say, there for this documentary:

The film is composed entirely of archival footage and audio:

Miller and his team collaborated with NASA and the National Archives (NARA) to locate all of the existing footage from the Apollo 11 mission. In the course of sourcing all of the known imagery, NARA staff members made a discovery that changed the course of the project — an unprocessed collection of 65mm footage, never before seen by the public. Unbeknownst to even the NARA archivists, the reels contained wide format scenes of the Saturn V launch, the inside of the Launch Control Center and post-mission activities aboard the USS Hornet aircraft carrier.

The find resulted in the project evolving from one of only filmmaking to one of also film curation and historic preservation. The resulting transfer — from which the documentary was cut — is the highest resolution, highest quality digital collection of Apollo 11 footage in existence.

I also loved this music video made using mission audio & imagery:

[YouTube 1 & 2] [Via]

Mavic 2 gains waypoint support, pano fix

Being able to preset one’s flight path on a map seems like a great way to set up shots that transition from day to night—especially cool when done with hyperlapses. Now to find a sufficiently interesting area in which to try it. See below for a demo/tutorial.

Oh, and there’s a really significant (for me, anyway) tweak hanging out in the corresponding firmware update: “Fixed issue: could not open Sphere panorama photos in Facebook.” The absence of the correct metadata was an ongoing pain that prevented me from seeing panos as interactive in Google Photos or making them interactive on Facebook. I haven’t yet installed the update, but I have my fingers crossed. [Update: It works!]

[YouTube]

“Epoch”: A warp-speed tour powered by Google Earth imagery

Whoa—apparently Irish Wonder Twin Powers involve an insane work ethic for finding interesting earthly patterns:

I was getting a sense of déjà vu watching this, and PetaPixel helpfully writes,

If the project reminds you of “Arena” by Páraic McGloughlin, there’s a good reason for that: Páraic is Kevin’s twin brother, and the two had originally planned to create a single collaborative video before splitting and working independently on two separate videos while working in the same office.

[Vimeo]

Beautiful hand-drawn AR

This hand-rotoscoped style makes me happy:

Mark Kawano points out that Ofir Shoham has a whole feed of such work on Instagram:

Adobe acquires Allegorithmic, makers of Substance

Interesting news:

Allegorithmic’s Substance family of tools are used in the vast majority of AAA games, including Call of Duty, Assassin’s Creed, and Forza. They’re increasingly being used for visual effects and animation in entertainment, including in award-winning, popular movies like Blade Runner 2049, Pacific Rim Uprising, and Tomb Raider. And they’re being adopted in the fields of design, product visualization, retail and marketing. [artwork gallery]

I’m curious to see how this goes. We introduced 3D painting to Photoshop some 12 (!!) years ago, but in retrospect we (or at least I) were naive about the sheer amount of investment & complexity it would entail. Will this acquisition finally help lower complexity & barriers to entry? We shall see.

I once argued that “Photoshop 3D is not (just) about 3D,” but rather about building a general way to import, render, and manipulate non-native content. So little of that dream has come to pass… But hey, it’s a new day, and in the end we shall all be dead. 🙂

[YouTube]

Wacom’s teaming up with Magic Leap for collaborative creation

Hmm—it’s a little hard to gauge just from a written description, but I’m excited to see new AR blood teaming up with an OG of graphics to try defining a new collaborative environment:

Wearing a Magic Leap One headset connected to a Wacom Intuos Pro pen tablet, designers can use the separate three-button Pro Pen 3D stylus to control their content on a platform called Spacebridge, which streams 3D data into a spatial computing environment. The program allows multiple people in a room to interact with the content, with the ability to view, scale, move, and sketch in the same environment.

Check out the rest of the Verge article for details. I very much look forward to seeing how this develops.

Hey Porter!

Just a little drone fun my Mini Me & I had in Barstow, CA, at New Year’s (with a big hat tip to Mr. Johnny Cash):

And on the very off chance you’re interested in having a very rail-savvy 9-year-old tour you around the Western America Railroad Museum, well, you’re in luck. 😌

[YouTube 1 & 2]

Giant artificial flowers at Google react to your emotions

Our sister team built the machine-learning library driving this large installation, now populating our lobby. It’s exactly to enable this sort of thing that we released ML acceleration tech the other day:

The flowers are built using Raspberry Pi running Android Things, our Android platform for everyday devices like home speakers, smart screens and wearables. An “alpha flower” has a camera in it and uses an embedded TensorFlow neural net to analyze which emotion it sees, and the surrounding flowers change colors based on the image the camera captures of your face. All processing is done locally, so no data is saved or sent to any servers.

Better still, the code has been open-sourced.
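
For the curious, here’s a minimal sketch of what that on-device loop could look like using the TensorFlow Lite Java interpreter on Android Things. It isn’t the project’s actual code (the model, labels, and preprocessing are placeholders), but it shows the shape of the idea: classify a camera frame locally, then map the top emotion to a color.

```java
// Minimal sketch (not the open-sourced project code): run a bundled TensorFlow Lite
// emotion classifier on a camera frame, entirely on-device. Model, labels, and
// input preprocessing are assumed for illustration.
import org.tensorflow.lite.Interpreter;

import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;

public class AlphaFlower {
    // Hypothetical label set for the bundled model.
    private static final String[] EMOTIONS = {"joy", "surprise", "sadness", "anger"};

    private final Interpreter tflite;

    public AlphaFlower(MappedByteBuffer emotionModel) {
        // The model ships with the app; nothing is ever sent to a server.
        tflite = new Interpreter(emotionModel);
    }

    /** Returns the most likely emotion for one camera frame, preprocessed to the model's input size. */
    public String classify(ByteBuffer frame) {
        float[][] scores = new float[1][EMOTIONS.length];
        tflite.run(frame, scores);        // local inference only
        int best = 0;
        for (int i = 1; i < EMOTIONS.length; i++) {
            if (scores[0][i] > scores[0][best]) best = i;
        }
        return EMOTIONS[best];            // the surrounding flowers map this to a color
    }
}
```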

Google’s using AI to help predict floods

I love the mix of high & low tech in this use of AI for social good:

A team of engineers from Google’s Crisis Response team is using machine learning to develop a more accurate and reliable flood forecasting system for countries that need it most.

Learn more about how AI and other technology is being used to help provide people with more accurate crisis alerts. http://g.co/earlywarning

Tangentially related if at all, but what the heck: How great are these 3D renderings of vintage topographical maps?

[YouTube]

Impressive Animoji-powered music videos

I am not, you may have noticed, curing cancer with my limited time on this planet. Having said that, I love working on the continued democratization of creative tech. These example videos show off an incredible leap in one kind of expressivity, letting one person with a telephone create animation that would’ve previously required huge amounts of effort in complex software:

Kanye/Daft Punk:

Nickelback (sorry):

https://twitter.com/KabouniFilms/status/1075429086904881152

Evanescence:

Accelerate machine learning with the new TensorFlow Lite GPU backend

I’m thrilled to say that the witchcraft my team has built & used to deliver ML & AR hotness on Pixel 3, YouTube, and beyond is now available to iOS & Android developers:

For Portrait mode on Pixel 3, TensorFlow Lite GPU inference accelerates the foreground-background segmentation model by over 4x and the new depth estimation model by over 10x vs. CPU inference with floating point precision. In YouTube Stories and Playground Stickers our real-time video segmentation model is sped up by 5–10x across a variety of phones.

We found that in general the new GPU backend performs 2–7x faster than the floating point CPU implementation for a wide range of diverse deep neural network models.

A preview release is available now, with a full open source release planned for the near future.
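
On Android, the preview exposes the backend as a delegate that you hand to the TensorFlow Lite interpreter. Here’s a rough Java sketch of the wiring; the model buffer and tensor shapes are placeholders that real code would size to match the model’s actual inputs and outputs:

```java
// Rough sketch of using the new GPU backend via the TensorFlow Lite Java API.
// The model buffer and tensor shapes below are placeholders.
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

import java.nio.MappedByteBuffer;

public class GpuSegmenter {
    public static void runOnGpu(MappedByteBuffer model, float[][][][] input, float[][][][] output) {
        GpuDelegate delegate = new GpuDelegate();                              // the new GPU backend
        Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);
        Interpreter interpreter = new Interpreter(model, options);
        try {
            interpreter.run(input, output);                                    // supported ops run on the GPU
        } finally {
            interpreter.close();                                               // free interpreter resources
            delegate.close();                                                  // release the GPU delegate
        }
    }
}
```

Ops the delegate can’t handle fall back to the CPU, which is part of why the speedups vary so much from model to model.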

I often note that I came here five (five!) years ago to “Teach Google Photoshop,” and delivering tech like this is a key part of that mission: enable machines to perceive the world, and eventually to see like artists & be your brilliant artistic Assistant. We have so, so far to go, and the road ahead can be far from clear—but it sure is exciting.

A little maritime drone fun

The lads and I are just back from an overnight visit to the USS Hornet, a decorated World War II-era carrier we last visited some 7 years ago. This time around we spent the night with our Cub Scout pack & several hundred other scouts & parents from around the area. On the whole we had a ball touring the ship, and I had a little fun flying my drone over the Hornet & her adjacent Navy ships:

And here’s an interactive 360° panorama from overhead. (Obligatory nerdy sidenote: This is the JPEG version stitched on the fly by the drone, and although I was able to stitch the raw source images in Camera Raw & get better color/tone, I’ll be damned if I can figure out how to inject the proper metadata to make it display right. As usual I used EXIF Fixer to make the JPEG interactive.)
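
If I’m reading the Photo Sphere spec right, the fields in question are the XMP “GPano” tags that Facebook and Google Photos look for. Below is a rough, untested sketch (the file name and dimensions are made up) of writing them with exiftool from Java, presumably similar to what EXIF Fixer adds:

```java
// Hypothetical sketch: shell out to exiftool to write the Photo Sphere (XMP-GPano)
// tags onto an equirectangular JPEG so it displays as an interactive pano.
// The file name and pixel dimensions below are placeholders.
import java.io.IOException;

public class TagPano {
    public static void main(String[] args) throws IOException, InterruptedException {
        int width = 8192, height = 4096;   // full size of the stitched equirectangular image

        Process p = new ProcessBuilder(
                "exiftool",
                "-XMP-GPano:UsePanoramaViewer=True",
                "-XMP-GPano:ProjectionType=equirectangular",
                "-XMP-GPano:FullPanoWidthPixels=" + width,
                "-XMP-GPano:FullPanoHeightPixels=" + height,
                "-XMP-GPano:CroppedAreaImageWidthPixels=" + width,
                "-XMP-GPano:CroppedAreaImageHeightPixels=" + height,
                "-XMP-GPano:CroppedAreaLeftPixels=0",
                "-XMP-GPano:CroppedAreaTopPixels=0",
                "hornet_pano_stitched_in_acr.jpg")    // placeholder file name
                .inheritIO()
                .start();
        System.exit(p.waitFor());   // exiftool exits 0 on success
    }
}
```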

[YouTube]

Designers, engineers: Come work on After Effects

Here’s a chance to team up with one of the rarest of things—a super friendly, gifted, and yet humble team building a beloved app that makes the world more beautiful. The AE folks have long been some of my favorites in the industry, and they’re looking to expand their ranks:

“Fluidity” drone controller

I have no idea whether this thing is worth a damn—but I’d sure like to find out (well, with the caveat that if it’s awesome, it’d be one more piece of bulky kit to schlepp around):

Using an astronaut’s perspective on intuitive motion through space, we have patented a unique and intuitive drone controller that anyone, whether they’re eight or eighty, can pick up and begin using immediately.

The FT Aviator is designed to incorporate the relevant 4 degrees of freedom of movement (x, y, z, and yaw) to drone flying, eliminating the awkward interface and steeper learning curve of existing dual thumb-controlled drones. It intuitively unlocks human potential to fly and capture stunning imagery.

[YouTube]

Great long(ish) reads about computer science badasses

  • The Yoda of Silicon Valley discusses the life & work of computer-science OG Don Knuth. The whole article & the accompanying reader comments are fascinating. (Side bonus for me: I ended up learning the names of various people in my extended team (!), who are quoted in the article.) I love that Don’s defiantly 1997-looking personal site includes a list of Infrequently Asked Questions.
  • The New Yorker profiles Google coding duprass Jeff Dean (who leads our org) and Sanjay Ghemawat. They “seem like two halves of a single mind,” and their work enabled planet-scale data infrastructure (among many other things). Retaining as I do the most unimportant details, I now really want to see Jeff’s bespoke basement trampoline. ¯\_(ツ)_/¯ Oh, and you should definitely read Chuck Norris-style Jeff Dean Facts (“Jeff Dean’s PIN is the last 4 digits of pi,” etc.).

Photography: A New Year’s flight through Icelandic fireworks

What a gorgeous way to ring in the still-new year:

Pilot/photographer Sigurður Þór Helgason writes,

Happy New Year 2019. This is Reykjavik city on New Years Eve. Most households shot their own fireworks just before midnight so the outcome is a spectacular firework show, unlike any other. Music by Adi Goldstein.

Note: I shot this with my Mavic 2 Pro. I used D-log M, ISO 1600, shutter 1/25 frame rate 25 and used the LUT from DJI to bring the colours back. No other adjustments and no, there are no special effects in this video or post production. Hope you enjoy this.

Night Sight is outta sight!

This is a watershed moment for me: after 11+ years of shooting on iPhones & Canon DSLRs, it’s the first time an Android device has plainly outshone them both at something. Night Sight on Pixel 3 blows me away.

First, some important disclaimers:

  • I work at Google & get to collaborate with the folks responsible for this tech, but I can take no credit for it, and these are just my opinions & non-scientific findings.
  • I’m not here to rain on anybody’s parade. My iPhone X is great, and my Canon 70D has been a loyal workhorse. I have no plans to ditch either.
  • The 70D came out in 2013, and it’s obviously possible to get both a newer DSLR & a lens faster than my 24-70mm f/2.8.
  • It’s likewise possible to know a lot more about manual exposure than I do. I went only as far as choosing aperture priority, opening the aperture all the way, and setting ISO to Auto.

Having said all that, I think my results reasonably represent what a normal-to-semi-savvy person would get from the various devices. Here’s what I saw:

What do you think?

By the way, Happy New Year! Here’s an animation created last night by shooting a series of Night Sight images, then combining them in Google Photos & finally cropping the output in Photoshop.

PS—I love the Queen-powered “Flash!” ad showing Night Sight:

[YouTube]