I am, as the kids would say, here for this documentary:
The film is composed entirely of archival footage and audio:
Miller and his team collaborated with NASA and the National Archives (NARA) to locate all of the existing footage from the Apollo 11 mission. In the course of sourcing all of the known imagery, NARA staff members made a discovery that changed the course of the project — an unprocessed collection of 65mm footage, never before seen by the public. Unbeknownst to even the NARA archivists, the reels contained wide format scenes of the Saturn V launch, the inside of the Launch Control Center and post-mission activities aboard the USS Hornet aircraft carrier.
The find resulted in the project evolving from one of filmmaking alone to one that also encompassed film curation and historic preservation. The resulting transfer — from which the documentary was cut — is the highest-resolution, highest-quality digital collection of Apollo 11 footage in existence.
I also loved this music video made using mission audio & imagery:
Terrific work from Tarsicio Sañudo, who according to PetaPixel “shot thousands of RAW photos with his DJI Mavic 2 Pro over the course of two months.” He mentions using After Effects for post-capture stabilization.
Being able to preset one’s flight path on a map seems like a great way to set up shots that transition from day to night—especially cool when done with hyperlapses. Now to find a sufficiently interesting area in which to try it. See below for a demo/tutorial.
Oh, and there’s a really significant (for me, anyway) tweak hanging out in the corresponding firmware update: “Fixed issue: could not open Sphere panorama photos in Facebook.” The absence of the correct metadata was an ongoing pain that prevented me from seeing panos as interactive in Google Photos or making them interactive on Facebook. I haven’t yet installed the update, but I have my fingers crossed. [Update: It works!]
Whoa—apparently Irish Wonder Twin Powers involve an insane work ethic for finding interesting earthly patterns:
I was getting a sense of déjà vu watching this, and PetaPixel helpfully writes,
If the project reminds you of “Arena” by Páraic McGloughlin, there’s a good reason for that: Páraic is Kevin’s twin brother, and the two had originally planned to create a single collaborative video before splitting up to work independently on two separate videos in the same office.
Allegorithmic’s Substance family of tools is used in the vast majority of AAA games, including Call of Duty, Assassin’s Creed, and Forza. The tools are increasingly being used for visual effects and animation in entertainment, including in award-winning, popular movies like Blade Runner 2049, Pacific Rim Uprising, and Tomb Raider. And they’re being adopted in the fields of design, product visualization, retail, and marketing. [artwork gallery]
I’m curious to see how this goes. We introduced 3D painting to Photoshop some 12 (!!) years ago, but in retrospect we (or at least I) were naive about the sheer amount of investment & complexity it would entail. Will this acquisition finally help lower complexity & barriers to entry? We shall see.
I once argued that “Photoshop 3D is not (just) about 3D,” but rather about building a general way to import, render, and manipulate non-native content. So little of that dream has come to pass… But hey, it’s a new day, and in the end we shall all be dead. 🙂
I’m intrigued by the wealth of enhancements arriving in Procreate for iPad, including new tapered strokes & “QuickShapes.” These remind me of shape-recognition tech in Adobe apps that dates back 20+ years to early Flash, but which is cleverly executed here (enabling quick movement & manipulation of what’s drawn):
Wearing a Magic Leap One headset connected to a Wacom Intuos Pro pen tablet, designers can use the separate three-button Pro Pen 3D stylus to control their content on a platform called Spacebridge, which streams 3D data into a spatial computing environment. The program allows multiple people in a room to interact with the content, with the ability to view, scale, move, and sketch in the same environment.
Check out the rest of the Verge article for details. I very much look forward to seeing how this develops.
Our sister team makes the machine learning-powered library driving this large installation now populating our lobby. It’s to enable this sort of thing that we released ML acceleration tech the other day:
The flowers are built using Raspberry Pi running Android Things, our Android platform for everyday devices like home speakers, smart screens and wearables. An “alpha flower” has a camera in it and uses an embedded TensorFlow neural net to analyze which emotion it sees, and the surrounding flowers change colors based on the image the camera captures of your face. All processing is done locally, so no data is saved or sent to any servers.
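The on-device pipeline described above — classify the emotion in a camera frame locally, then color the surrounding flowers accordingly — can be sketched roughly like so. The classifier here is a stub standing in for the embedded TensorFlow net, and the emotion set and color mapping are my own invention, not the installation’s actual code:

```python
# Illustrative sketch of the flower pipeline: an "alpha flower" runs an
# on-device classifier over a camera frame, and the surrounding flowers
# take their color from the winning emotion. All processing stays local;
# nothing is sent to a server.

# Hypothetical emotion -> flower color mapping (RGB).
EMOTION_COLORS = {
    "joy":      (255, 200,   0),
    "surprise": (255,  80, 180),
    "calm":     ( 80, 160, 255),
    "sadness":  ( 60,  60, 140),
}
EMOTIONS = list(EMOTION_COLORS)

def classify_emotion(frame) -> list:
    """Stand-in for the embedded TensorFlow model: one score per emotion."""
    # A real implementation would run inference on the camera frame here.
    return [0.1, 0.7, 0.15, 0.05]

def flower_color(frame) -> tuple:
    """Pick the color the surrounding flowers should display."""
    scores = classify_emotion(frame)
    winner = EMOTIONS[max(range(len(scores)), key=scores.__getitem__)]
    return EMOTION_COLORS[winner]
```

With the stubbed scores above, the highest-scoring emotion is “surprise,” so `flower_color` returns that emotion’s color.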
I am not, you may have noticed, curing cancer with my limited time on this planet. Having said that, I love working on the continued democratization of creative tech. These example videos show off an incredible leap in one kind of expressivity, letting one person with a telephone create animation that would’ve previously required huge amounts of effort in complex software:
We found that in general the new GPU backend performs 2–7x faster than the floating point CPU implementation for a wide range of diverse deep neural network models.
A preview release is available now, with a full open source release planned for the near future.
I often note that I came here five (five!) years ago to “Teach Google Photoshop,” and delivering tech like this is a key part of that mission: enable machines to perceive the world, and eventually to see like artists & be your brilliant artistic Assistant. We have so, so far to go, and the road ahead can be far from clear—but it sure is exciting.
Sometimes I think, “Y’know, this life I’m living is going alright…” And then I see things like this & say, “How did we just remodel our kitchen and not do this??”
The lads and I are just back from an overnight visit to the USS Hornet, a decorated World War II-era carrier we last visited some 7 years ago. This time around we spent the night with our Cub Scout pack & several hundred other scouts & parents from around the area. On the whole we had a ball touring the ship, and I had a little fun flying my drone over the Hornet & her adjacent Navy ships:
And here’s an interactive 360º panorama from overhead. (Obligatory nerdy sidenote: This is the JPEG version stitched on the fly by the drone, and although I was able to stitch the raw source images in Camera Raw & get better color/tone, I’ll be damned if I can figure out how to inject the proper metadata to make it display right. As usual I used EXIF Fixer to make the JPEG interactive.)
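For what it’s worth, the metadata in question is Google’s XMP “GPano” block. Here’s a rough sketch of injecting it by hand — the GPano tag names come from Google’s published Photo Sphere XMP spec, but the helper functions and the simple segment placement (right after the SOI marker) are my own simplifications; a dedicated tool like EXIF Fixer or exiftool handles far more edge cases:

```python
# Sketch: inject Google's XMP "GPano" metadata into an equirectangular
# JPEG so that sites like Google Photos treat it as an interactive
# 360º panorama. Assumes a full (uncropped) equirectangular image.

XMP_HEADER = b"http://ns.adobe.com/xap/1.0/\x00"  # standard XMP APP1 prefix

def make_gpano_xmp(width: int, height: int) -> bytes:
    """Build a minimal XMP packet declaring an equirectangular panorama."""
    xml = (
        '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
        '<rdf:Description'
        ' xmlns:GPano="http://ns.google.com/photos/1.0/panorama/"'
        ' GPano:ProjectionType="equirectangular"'
        ' GPano:UsePanoramaViewer="True"'
        f' GPano:FullPanoWidthPixels="{width}"'
        f' GPano:FullPanoHeightPixels="{height}"'
        f' GPano:CroppedAreaImageWidthPixels="{width}"'
        f' GPano:CroppedAreaImageHeightPixels="{height}"'
        ' GPano:CroppedAreaLeftPixels="0"'
        ' GPano:CroppedAreaTopPixels="0"/>'
        '</rdf:RDF></x:xmpmeta>'
    )
    return xml.encode("utf-8")

def inject_xmp(jpeg_bytes: bytes, xmp: bytes) -> bytes:
    """Insert an APP1/XMP segment immediately after the JPEG SOI marker."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    payload = XMP_HEADER + xmp
    length = len(payload) + 2  # segment length field includes its own 2 bytes
    segment = b"\xff\xe1" + length.to_bytes(2, "big") + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]
```

Usage would be reading `pano.jpg`, calling `inject_xmp(data, make_gpano_xmp(w, h))` with the pano’s pixel dimensions, and writing the result back out.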
Here’s a rare opportunity to team up with one of the rarest of things—a super friendly, gifted, and yet humble team building a beloved app that makes the world more beautiful. The AE team have long been some of my favorite folks in the industry, and they’re looking to expand their ranks:
“It’s a hell of a lot easier to sit on your ass in a vehicle for thousands of miles than it is to carry 80 lbs of gear on your back into the wilderness for dozens of miles,” writes Nicolaus Wegner, explaining his interest in capturing storm time lapses. “Plus, I think supercells and other forms of severe weather are just about the coolest events our planet manifests.” Agreed:
I have no idea whether this thing is worth a damn—but I’d sure like to find out (well, with the caveat that if it’s awesome, it’d be one more piece of bulky kit to schlepp around):
Using an astronaut’s perspective on intuitive motion through space, we have patented a unique and intuitive drone controller that anyone, whether they’re eight or eighty, can pick up and begin using immediately.
The FT Aviator is designed to incorporate the four degrees of freedom relevant to drone flight (x, y, z, and yaw), eliminating the awkward interface and steep learning curve of existing dual thumb-controlled drones. It intuitively unlocks human potential to fly and capture stunning imagery.
The Yoda of Silicon Valley discusses the life & work of computer-science OG Don Knuth. The whole article & the accompanying reader comments are fascinating. (Side bonus for me: I ended up learning the names of various people in my extended team (!), who are quoted in the article.) I love that Don’s defiantly 1997-looking personal site includes a list of Infrequently Asked Questions.
The New Yorker profiles Google coding duprass Jeff Dean (who leads our org) and Sanjay Ghemawat. They “seem like two halves of a single mind,” and their work enabled planet-scale data infrastructure (among many other things). Retaining as I do the most unimportant details, I now really want to see Jeff’s bespoke basement trampoline. ¯\_(ツ)_/¯ Oh, and you should definitely read Chuck Norris-style Jeff Dean Facts (“Jeff Dean’s PIN is the last 4 digits of pi,” etc.).
What a gorgeous way to ring in the still-new year:
Pilot/photographer Sigurður Þór Helgason writes,
Happy New Year 2019. This is Reykjavik city on New Year’s Eve. Most households shoot their own fireworks just before midnight, so the outcome is a spectacular fireworks show unlike any other. Music by Adi Goldstein.
Note: I shot this with my Mavic 2 Pro. I used D-log M, ISO 1600, shutter 1/25, frame rate 25, and used the LUT from DJI to bring the colours back. No other adjustments, and no, there are no special effects in this video or post-production. Hope you enjoy this.
This is a watershed moment for me: After 11+ years of shooting on iPhones & Canon DSLRs, this is the first time I’ve shot on an Android device that plainly outshines them both at something. Night Sight on Pixel 3 blows me away.
First, some important disclaimers:
I work at Google & get to collaborate with the folks responsible for this tech, but I can take no credit for it, and these are just my opinions & non-scientific findings.
I’m not here to rain on anybody’s parade. My iPhone X is great, and the 70D has been a loyal workhorse. I have no plans to ditch either.
The 70D came out in 2013, and it’s obviously possible to get both a newer DSLR & a lens faster than my 24-70mm f/2.8.
It’s likewise possible to know a lot more about manual exposure than I do. I went only as far as choosing aperture priority, opening the aperture all the way, and setting ISO to Auto.
Having said all that, I think my results reasonably represent what a normal-to-semi-savvy person would get from the various devices. Here’s what I saw:
Pixel 3 vs. 70D shots (set one, set two), all unedited. CR2 files from the 70D got converted to JPEG using default processing in Lightroom. In many cases the 70D struggled to focus (whereas the Pixel never did), so some of its shots are soft as well as dark.
Pixel 3 vs. iPhone X on a separate evening. With a few subjects (e.g. this one) I tried taking an iPhone shot with default (auto) exposure, then one with exposure manually cranked up, and finally one with Pixel 3 Night Sight. Here’s another triplet. Regrettably I didn’t think to try shooting raw on either phone.