Monthly Archives: December 2017

“You know the liquid metal guy in T2? Yeah, my teammate wrote him”

I used to enjoy busting out that humblebrag on Google Photos (and before that, at Adobe) where I got to work with John Schlag. Troubles in the VFX industry yielded a windfall of imaging talent for Google (which occupies the former LA office of Rhythm & Hues, FWIW), and we had a real murderers’ row of talent from DreamWorks, PDI, Sony, and other shops. (There’s so much potential yet to realize… but I digress.)

I mention it because I happened across a fun oral history of Terminator 2’s technology, featuring interesting recollections from John & the team (which, I’m very belatedly realizing, included longtime After Effects engineer turned chef/blogger (!) Michael Natkin).

“I’d point to a page and say, ‘Oh, well that looks interesting. How are you going to do that?’ And they’re like, ‘Oh, we don’t know yet.’ I’m like, ‘You people are batshit!’” – John Schlag

Enjoy, and see also “Re-visiting the freakin’ T-1000 walking out of the fiery truck crash.”



Google tech gauges your expression from just your eyes

Are Irish eyes smiling? I should ask my teammate Avneesh to scan me & find out:

Google Research presents a machine learning based approach to infer select facial action units and expressions entirely by analyzing a small part of the face while the user is engaged in a virtual reality experience. Specifically, images of the user’s eyes captured from an infrared (IR) gaze-tracking camera within a VR headset are sufficient to infer at least a subset of facial expressions without the use of any external cameras or additional sensors.
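To make the setup concrete, here’s a toy sketch of the shape of the problem — eye-region pixels in, expression label out. Everything here (the linear scorer, the label set, the image size) is a stand-in I made up for illustration; the actual system uses a deep network trained on labeled headset imagery, which isn’t shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_expression(eye_crop, weights, labels):
    """Flatten an eye-region image and score each expression class.

    A real system would run a trained CNN on the IR eye crop; this
    linear scorer only illustrates the input/output contract of the
    problem: eye pixels in, expression label out.
    """
    feats = eye_crop.astype(float).ravel() / 255.0  # normalize to [0, 1]
    scores = weights @ feats                        # one score per label
    return labels[int(np.argmax(scores))]

labels = ["neutral", "smile", "surprise"]           # hypothetical subset
eye_crop = rng.integers(0, 256, size=(32, 64))      # fake 32x64 IR eye image
weights = rng.normal(size=(len(labels), eye_crop.size))
print(classify_expression(eye_crop, weights, labels))
```

The interesting part of the research is precisely that this much signal survives in so small a crop — the headset already has the camera pointed at your eyes for gaze tracking, so no extra sensors are needed.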



“Hey Google, define what’s a beautiful photograph…”

You know how Google Assistant can say “Hey, [stateyourname], you should probably leave for the airport by 5 to make it in time for your 7 o’clock flight?” I want it to also say, “You know, it’s Mother’s Day on Sunday. Would you like this photo book to show up on your mom’s doorstep then together with some nice flowers?” Take my money, robot; make me into a better son!

Clearly such work involves a lot of moving parts & hard-to-define qualities (e.g. whether the memories evoked by an image are happy or sad may change greatly depending on things entirely outside the pixels). On the visual quality front, however, my teammates are making interesting progress. As Engadget writes,

If Google has its way, AI may serve as an art critic. It just detailed work on a Neural Image Assessment (NIMA) system that uses a deep convolutional neural network to rate photos based on what it believes you’d like, both technically and aesthetically. It trains on a set of images based on a histogram of ratings (such as from photo contests) that give a sense of the overall quality of a picture in different areas, not just a mean score or a simple high/low rating.

Check out the Research blog for details on how it works.
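NIMA’s key twist — predicting the full histogram of ratings rather than a single number — is easy to illustrate. This is a hypothetical sketch of just the post-processing step; the deep CNN that produces the distribution is not shown, and `score_from_histogram` is my name for it, not Google’s:

```python
import numpy as np

def score_from_histogram(predicted_dist):
    """Collapse a 10-bucket rating distribution into mean & std.

    Unlike a plain high/low classifier, keeping the full histogram
    lets you distinguish a polarizing photo (wide spread of ratings)
    from a uniformly mediocre one (narrow spread) with the same mean.
    """
    dist = np.asarray(predicted_dist, dtype=float)
    dist = dist / dist.sum()                # ensure a probability distribution
    buckets = np.arange(1, len(dist) + 1)   # rating buckets 1..10
    mean = float((buckets * dist).sum())
    std = float(np.sqrt(((buckets - mean) ** 2 * dist).sum()))
    return mean, std

# Example: most raters gave this photo 7s and 8s
mean, std = score_from_histogram([0, 0, 0, .05, .1, .15, .3, .25, .1, .05])
# mean is 7.1; std captures how much raters disagreed
```

Training against the whole histogram (the paper uses an earth-mover’s-distance-style loss) is what lets the model learn that a 6 is closer to a 7 than to a 1 — information a plain classification loss throws away.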


An incredible 10-minute freestyle rap

OT for this blog, sure, but who cares, it’s just a pleasure to see what gifted humans can do:

Warning: this is the best thing you’re going to see today, even if you already saw it yesterday.

In this clip, The Roots’ MC dishes out an album’s worth of rhymes from memory, while hardly stopping to breathe.

The transcript has since disappeared, but Kottke’s pulled some great bits, like:

“As babies, we went from Similac and Enfamil
To the internet and fentanyl
Where all consent was still against the will”


“Been a million places
Conversation is how beautiful my face is
People hating on how sophisticated my taste is
Then I pulled up on these mofos in a spaceship”



BYO dancing hot dog: AR creation tools emerge from Snapchat & Facebook

Wait—I work on augmented reality tech at Google, so why am I mentioning tools from the other guys?

Well, I’m fundamentally interested in any tools that give people creative superpowers. Just as features like semantic segmentation promise to drain the drudgery from traditional imaging work, tools like these promise to deepen the canvas, letting regular people create experiences that only a few years ago were barely imagined. Good times!

  • Facebook Camera Effects platform (which previously required applying for access):
    • “Build augmented reality experiences for the Facebook camera and Live broadcasts with AR Studio, our customizable suite of creative tools for Mac.”
    • “Create stunning effects that respond to motion, gestures, facial expressions, and your surroundings. Develop creative effects that instantly respond to comments and reactions during Live broadcasts. Combine data and design to create new immersive experiences for the audiences that matter to you.”
    • There’s a corresponding FB group.
  • Snapchat Lens Studio:
    • “Everything you need to build and launch immersive Lenses — all in one place.”
    • “Create and publish your Lens in three easy steps: Design your 3D creation in your favorite software, then import the file right into Lens Studio. Bring your creation to life with movement patterns, animations, and interactive triggers! Preview your Lens on your mobile device, then submit it for publication on Snapchat. If approved, you’ll get a unique Snapcode anyone can scan to unlock your Lens. Then, your Lens can be sent directly to friends, and unlocked again!”



Hypnotic photography: A murmuration of starlings

An immense, whooshing kinetic sculpture darts over the Netherlands, brought to you by thousands of beating hearts & flapping wings:

The art of flying is a short film about “murmurations”: the mysterious flights of the Common Starling. It is still unknown how the thousands of birds are able to fly in such dense swarms without colliding. Every night the starlings gather at dusk to perform their stunning air show.

Because of the relatively warm winter of 2014/2015, the starlings stayed in the Netherlands instead of migrating southwards. This gave filmmaker Jan van IJken the opportunity to film one of the most spectacular and amazing natural phenomena on earth.


[Vimeo] [Via]

Try three new experimental photo apps from Google

I’m excited for my teammates & their new launches. The team writes,

  • Storyboard (Android) transforms your videos into single-page comic layouts, entirely on device. Simply shoot a video and load it in Storyboard. The app automatically selects interesting video frames, lays them out, and applies one of six visual styles.
  • Selfissimo! (iOS, Android) is an automated selfie photographer that snaps a stylish black and white photo each time you pose. Tap the screen to start a photoshoot. The app encourages you to pose and captures a photo whenever you stop moving.
  • Scrubbies (iOS) lets you easily manipulate the speed and direction of video playback to produce delightful video loops… Shoot a video in the app and then remix it by scratching it like a DJ. Scrubbing with one finger plays the video. Scrubbing with two fingers captures the playback so you can save or share it.

Please take ‘em for a spin, then tell us what you think using the in-app feedback links. 


A trippy animation made from a cathedral facade

Animator Ismael Sanz-Pena used a single image to create this mesmerizing animation. He tells Colossal,

The idea behind the film was to find the innate movement inherent in still forms. Every sculpture has movement in it, and it is the task of the animator to discover it. It was through the process of editing my imagery that I discovered that a single image would suffice to create the animation. The film was made by zooming into the image and panning row by row while making sure that different architectural motifs aligned in every increment. This also gave a structure to the film.
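The frame-generation part of that technique — sliding a fixed-size crop across one large still, wrapping down to the next row when a pass completes — can be sketched in a few lines. The crop size and step below are made-up parameters, not the artist’s actual values:

```python
import numpy as np

def pan_frames(image, crop_h, crop_w, step, n_frames):
    """Generate animation frames as successive crops of one still image.

    Each frame is shifted `step` pixels along the current row; when the
    crop would run off the right edge, the pan wraps to the next band
    down — the "row by row" traversal Sanz-Pena describes. The illusion
    of motion comes from repeating motifs aligning frame to frame.
    """
    h, w = image.shape[:2]
    frames = []
    x, y = 0, 0
    for _ in range(n_frames):
        frames.append(image[y:y + crop_h, x:x + crop_w].copy())
        x += step
        if x + crop_w > w:              # end of the row: wrap to next band
            x = 0
            y = min(y + crop_h, h - crop_h)
    return frames

# A fake 600x800 grayscale "facade photo" stands in for the real image
frames = pan_frames(np.zeros((600, 800)), 240, 320, 40, 24)
```

For the effect to work, `step` has to match the spacing of the repeating architectural elements — that alignment constraint is the artistic part the code can’t supply.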


[Vimeo] [Via]

An overnight Photoshop success, 9 years in the making

“Being early is the same as being wrong,” says Marc Andreessen. True enough.

I have to admit that when I saw Photoshop’s “new” 360º photo-editing feature, I was a little miffed at the positioning: we shipped almost the same exact thing nine years ago. PS has now bolted on a few menu items to make access more obvious, but otherwise the tech appears largely unchanged.

I get it, though. Nine years ago, how would one create such an image (tripod, SLR, stitching package?) and where would one use it? Now the ecosystem is radically different: You can capture an image in an instant via a Theta or similar cam (or even a drone!), or you can capture one with any smartphone, and you can make them interactively explorable by millions of people via Facebook. So yeah, viewed through that lens, I get it, and I hope that orders of magnitude more people find this feature useful this time around. ⚪️💪

Speaking of tech ahead of its time (?), I hope that wearable capture devices will become practical & enable the kind of experience that Googlers Blaise Agüera y Arcas & Noah Snavely pioneered at Microsoft:



“The Michelangelo of Microsoft Excel”

I find this quixotic quest to be roughly 50 kinds of charming:

When Tatsuo Horiuchi retired, he decided to try his hand at art. But instead of spending money on paints and brushes, Horiuchi used what he already had pre-installed on his computer—Microsoft Excel. Now, the 77-year-old artist is creating remarkably intricate digital masterpieces of the Japanese landscape, all on the free graphing software.




Design: A musical AR visual synthesizer

How might the world look if populated by 3D shapes tied to musical beats? Artist Oscar López Rocha imagines it:

I followed the “karaoke” principle in this project, lyrics appearing while music is playing. Instead of these, I’ve changed them for geometrical bodies appearing to the rhythm of a song to create visual 3D compositions in real places where I’ve passed by sometime and captured in DSLR video.


[Via Alex Powell]

A great interview with a graphic designer for film

Years ago I loved getting to go behind the scenes on 24, CSI, and other Hollywood productions to see how prop designers put Photoshop through its paces (shattering ribs, burning out cars, and more—all in a matter of minutes). In that vein, I really enjoyed this 99% Invisible interview with designer Annie Atkins:

When a new movie comes out, most of the praise goes to the director and the lead actors, but there are so many other people involved in a film, and a lot of them are designers. There are costume designers and set designers, but also graphic designers working behind the scenes on every single graphic object that you might need in a film. It’s Annie Atkins’s job to design them.

I think you’ll dig it, too.


Google & the Discovery Channel team up on VR

Looks cool:

You can take in Discovery TRVLR for yourself using Google’s Daydream headset (or Google Cardboard). The show will be viewable on YouTube and on the Discovery VR app.

In La Paz, meet female wrestlers giving hope to domestic violence victims. Then cap it all off riding alongside a polar explorer through ice caves and the frozen tundra of Antarctica. 

See Engadget for more details.