Monthly Archives: December 2017

“You know the liquid metal guy in T2? Yeah, my teammate wrote him”

I used to enjoy busting out that humblebrag on the Google Photos team (and before that at Adobe), where I got to work with John Schlag. Troubles in the VFX industry yielded a windfall of imaging talent for Google (which occupies the former LA office of Rhythm & Hues, FWIW), and we had a real murderers’ row of veterans from DreamWorks, PDI, Sony, and other shops. (There’s so much potential yet to realize… but I digress.)

I mention it because I happened across a fun oral history of Terminator 2’s technology, featuring interesting recollections from John & the team (which, I’m very belatedly realizing, included longtime After Effects engineer turned chef/blogger (!) Michael Natkin).

“I’d point to a page and say, ‘Oh, well that looks interesting. How are you going to do that?’ And they’re like, ‘Oh, we don’t know yet.’ I’m like, ‘You people are batshit!’” – John Schlag

Enjoy, and see also “Re-visiting the freakin’ T-1000 walking out of the fiery truck crash.”

Google tech gauges your expression from just your eyes

Are Irish eyes smiling? I should ask my teammate Avneesh to scan me & find out:

Google Research presents a machine learning based approach to infer select facial action units and expressions entirely by analyzing a small part of the face while the user is engaged in a virtual reality experience. Specifically, images of the user’s eyes captured from an infrared (IR) gaze-tracking camera within a VR headset are sufficient to infer at least a subset of facial expressions without the use of any external cameras or additional sensors.
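
The announcement stays high-level, but the shape of the approach is easy to picture: a compact classifier that maps IR eye-camera crops to per-action-unit activations. Here’s a minimal, hypothetical PyTorch sketch; the architecture, input size, and ten-output head are my own placeholders, not details of Google’s model:

```python
import torch
import torch.nn as nn

class EyeExpressionNet(nn.Module):
    """Toy CNN: grayscale IR eye crop -> facial action unit probabilities."""
    def __init__(self, num_action_units=10):  # unit count is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to a 32-d vector
        )
        self.classifier = nn.Linear(32, num_action_units)

    def forward(self, x):
        h = self.features(x).flatten(1)
        # Sigmoid rather than softmax: action units can fire simultaneously.
        return torch.sigmoid(self.classifier(h))

# e.g. a batch of eight 64x64 IR eye crops -> (8, 10) activation probabilities
probs = EyeExpressionNet()(torch.randn(8, 1, 64, 64))
```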

[YouTube]

“Hey Google, define what’s a beautiful photograph…”

You know how Google Assistant can say, “Hey, [stateyourname], you should probably leave for the airport by 5 to make it in time for your 7 o’clock flight”? I want it to also say, “You know, it’s Mother’s Day on Sunday. Would you like this photo book to show up on your mom’s doorstep that day, along with some nice flowers?” Take my money, robot; make me into a better son!

Clearly such work involves a lot of moving parts & hard-to-define qualities (e.g. whether the memories evoked by an image are happy or sad may change greatly depending on things entirely outside the pixels). On the visual quality front, however, my teammates are making interesting progress. As Engadget writes,

If Google has its way, AI may serve as an art critic. It just detailed work on a Neural Image Assessment (NIMA) system that uses a deep convolutional neural network to rate photos based on what it believes you’d like, both technically and aesthetically. It trains on a set of images based on a histogram of ratings (such as from photo contests) that give a sense of the overall quality of a picture in different areas, not just a mean score or a simple high/low rating.

Check out the Research blog for details on how it works.
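
The interesting wrinkle is that NIMA predicts an entire histogram of ratings rather than regressing one number; as I understand the paper, training then matches that predicted histogram to the human one with an earth mover’s distance style loss. Here’s a rough PyTorch sketch of just the scoring head, with my own names and sizes standing in for the real implementation:

```python
import torch
import torch.nn as nn

class NimaHead(nn.Module):
    """Maps backbone image features to a distribution over ratings 1..10."""
    def __init__(self, in_features=1280):  # feature size depends on the backbone
        super().__init__()
        self.fc = nn.Linear(in_features, 10)

    def forward(self, feats):
        return torch.softmax(self.fc(feats), dim=-1)

def emd_loss(pred, target):
    """Simplified squared earth mover's distance via cumulative histograms."""
    cdf_pred, cdf_target = torch.cumsum(pred, -1), torch.cumsum(target, -1)
    return ((cdf_pred - cdf_target) ** 2).mean()

def mean_score(pred):
    """Collapse a predicted rating distribution to a single 1-10 score."""
    return (pred * torch.arange(1, 11, dtype=pred.dtype)).sum(-1)
```

Collapsing the distribution to its mean yields a rankable score, while its spread hints at how divisive a photo is, which a single regressed number would hide.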

An incredible 10-minute freestyle rap

OT for this blog, sure, but who cares, it’s just a pleasure to see what gifted humans can do:

Warning: this is the best thing you’re going to see today, even if you already saw it yesterday.

In this clip, The Roots’ MC dishes out an album’s worth of rhymes from memory, while hardly stopping to breathe.

https://youtu.be/prmQgSpV3fA

The transcript has disappeared from Genius.com, but Kottke’s pulled some great bits, like:

“As babies, we went from Similac and Enfamil
To the internet and fentanyl
Where all consent was still against the will”

and

“Been a million places
Conversation is how beautiful my face is
People hating on how sophisticated my taste is
Then I pulled up on these mofos in a spaceship”

[YouTube]