Monthly Archives: January 2020

Adobe Character Animator wins an Emmy

Early in 2012, I was lucky enough to tag along with After Effects creators David Simons & Dan Wilk as they dropped in on Pixar, Stu Maschwitz, and other smart, thoughtful animators. After 20 years of building the industry-standard motion graphics tool, they didn’t yet know quite what they wanted to build next, so it was fun to bounce ideas back and forth with forward-thinking creators.

Fast forward to 2020, and the product that resulted from those investigations—Character Animator—has just won an Emmy:

Today, the Academy announced that it will honor Adobe Character Animator as a Pioneering System for Live Performance-Based Animation Using Facial Recognition, showing excellence in engineering creativity. In the biz, this is an Emmy! We might be on a bit of a roll here, for industry bling, since this latest award follows on from our two technical Academy Awards in 2019 for Photoshop and After Effects.

The tool has powered the first-ever live episode of The Simpsons, live interviews with Stephen Colbert that morphed into Our Cartoon President, and more (see recent roundup below).

Congrats to the team (who are now “EgOts,” I think—winners of Emmys & Oscars!); we can’t wait to see where you go next!


[YouTube]

Apple releases Reality Converter

I found myself blocked from doing anything interesting with Apple’s Reality Composer tool due to the lack of readily available USDZ-format files. My kingdom for a Lego minifig!

Therefore it’s cool to see that they’ve released a simple utility meant to facilitate conversion:

The new Reality Converter app makes it easy to convert, view, and customize USDZ 3D objects on Mac. Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd, to view the converted USDZ result, customize material properties with your own textures, and edit file metadata. You can even preview your USDZ object under a variety of lighting and environment conditions with built-in IBL options.
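
If you’d rather script conversions than drag files in one at a time, Apple also offers a command-line route: the usdzconvert script from its USDZ Tools download. Here’s a minimal batch-conversion sketch in Python, assuming that script is installed and on your PATH (the “models” folder is just a placeholder):

```python
# Minimal batch-conversion sketch. Assumes Apple's "usdzconvert"
# command-line script (from the USDZ Tools download) is installed
# and on your PATH; Reality Converter itself is GUI-only.
import pathlib
import subprocess

def convert_to_usdz(src: pathlib.Path) -> pathlib.Path:
    """Convert a single .obj/.gltf/.usd file to USDZ next to the source."""
    dst = src.with_suffix(".usdz")
    subprocess.run(["usdzconvert", str(src), str(dst)], check=True)
    return dst

if __name__ == "__main__":
    for src in sorted(pathlib.Path("models").glob("*.obj")):
        print("wrote", convert_to_usdz(src))
```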

“How focal length can change your face” — and what can be done about it

Quick, interesting animation:

https://www.instagram.com/p/B7PGR45oDFg/

In a recent experiment, Prague-based photographer Dan Vojtech decided to try out different focal lengths on the same portrait photo of himself and log the effect each had. The difference between 20mm and 200mm is unbelievable. So next time someone says that the camera adds 10 pounds, they’re not entirely wrong – it all depends on the equipment used.
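
The culprit isn’t really the glass itself; it’s the shooting distance each lens dictates. To keep the same framing, a 20mm lens forces you in close, where the nose is proportionally much nearer the camera than the ears; a 200mm lens pushes you back, flattening those differences. Here’s a back-of-the-envelope sketch of the effect (the sensor, framing, and face-depth numbers are my own assumptions, not from the post):

```python
# Back-of-the-envelope sketch of why wide lenses distort faces.
# Assumed numbers: full-frame sensor (24 mm tall), face framed to
# fill ~50 cm of real-world height, nose ~12 cm in front of the ears.
SENSOR_H = 0.024     # sensor height, meters
FRAMING = 0.50       # real-world height captured in the frame, meters
NOSE_TO_EAR = 0.12   # depth between nose tip and ears, meters

for focal_mm in (20, 50, 100, 200):
    f = focal_mm / 1000.0
    # Same framing => same magnification, so distance scales with focal length.
    distance = f * FRAMING / SENSOR_H       # camera-to-face distance
    near = distance - NOSE_TO_EAR / 2       # nose
    far = distance + NOSE_TO_EAR / 2        # ears
    # Apparent size scales as 1/distance, so the nose-vs-ears size ratio is:
    ratio = far / near
    print(f"{focal_mm:>3} mm: shoot from {distance:.2f} m; "
          f"nose looks {100 * (ratio - 1):.1f}% bigger relative to the ears")
```

At 20mm you’re shooting from well under half a meter, and the nose looks roughly a third bigger relative to the ears; at 200mm the difference drops to a few percent.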

Interestingly, a couple of years back some Adobe & Google researchers unveiled work on “Perspective-Aware Manipulation of Portrait Photos”:

[YouTube] [Via Peyman Milanfar]

Canon promises AI assistance for Lightroom culls

TL;DR: If this works, I’ll be pleasantly shocked.

I left Adobe in early 2014 in part due to a mix of fear & excitement about what Google was doing with AI & photography. Normal people generally just want help selecting the best images, making them look good, and maybe creating an album/book/movie from them. Accordingly, in 2013 Google+ launched automatic filtering that attempted to show just one’s best images, along with Auto Enhancement of every image & “Auto Awesomes” (animations, collages, etc.) derived from them. I couldn’t get any of this going at Adobe, and it seemed that Google was on the march (having just bought Nik Software, too), so over I went.

Unfortunately it’s really hard to know what precisely constitutes a “good” image (think shifting emotional valences vs. technical qualities). For consumers one can de-dupe somewhat (showing just one or two images from a burst) and try to screen out really blurry, badly lit images. Even so, consumers distrust this kind of filtering & always want to look behind the curtain to ensure that the computer hasn’t missed something. Therefore when G+ Photos transitioned into just Google Photos, the feature was dropped & no one said boo. Automatic curation is still used to suggest things like books & albums, but as you may have seen when it’s applied to your own images, results can be hit or miss.

So will pros trust such tech to help them sort through hundreds of similar images? Well… maybe? Canon’s prepping a subscription-based plug-in for the job:

The plugin is powered by the Canon Computer Vision AI engine and uses technical models to select photos based on a number of criteria: sharpness, noise, exposure, contrast, closed eyes, and red eyes. These “technical models” have customizable settings to give you some ability to control the process.
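
Canon hasn’t said how those models get combined under the hood, so purely as an illustration, here’s a hypothetical sketch of criteria-based culling: each criterion yields a score, and user-adjustable weights roll them up into a single keep-or-cull number. Every name and weight below is made up; this is not Canon’s actual engine.

```python
# Purely illustrative sketch of criteria-based photo culling --
# NOT Canon's actual engine. Each field is assumed to be a 0-1 score
# from some per-criterion model; the weights stand in for the plugin's
# "customizable settings."
from dataclasses import dataclass

@dataclass
class PhotoScores:
    sharpness: float
    noise: float       # 1.0 = clean, 0.0 = very noisy
    exposure: float
    contrast: float
    eyes_open: float   # 1.0 = open, 0.0 = closed
    no_red_eye: float

DEFAULT_WEIGHTS = {
    "sharpness": 0.30, "noise": 0.15, "exposure": 0.20,
    "contrast": 0.10, "eyes_open": 0.15, "no_red_eye": 0.10,
}

def cull_score(photo: PhotoScores, weights=DEFAULT_WEIGHTS) -> float:
    """Weighted sum of per-criterion scores; higher = more keep-worthy."""
    return sum(getattr(photo, name) * w for name, w in weights.items())

# Keep only the best frame from a burst of near-duplicates:
burst = [
    PhotoScores(0.9, 0.8, 0.7, 0.6, 1.0, 1.0),
    PhotoScores(0.5, 0.8, 0.7, 0.6, 0.0, 1.0),  # soft, eyes closed
]
print(max(burst, key=cull_score))
```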

Here it is in action:


[YouTube]

Planet hunting with Google ML

Check out how Anne Dattilo, a PhD student in astronomy and astrophysics, collaborated with Google TensorFlow folks to use machine learning to discover new planets (!):

This is the story of the student who became a planet hunter. When Anne Dattilo attended a guest lecture at the University of Texas, she had no idea it would be the start of a journey involving complex algorithms, a space telescope breaking down in orbit, a trip to an observatory in the Chihuahuan desert and, finally, the discovery of two new planets.

If you’re so motivated, you can download Chris Shallue’s AstroNet code for yourself. Happy hunting! 🪐🤘

[YouTube]
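
For the curious: the heart of this approach is a convolutional neural net that classifies phase-folded light curves, combining a “global” view of the whole orbit with a “local” view zoomed in on the transit. Here’s a simplified, illustrative Keras sketch of that style of model; the layer sizes are my own stand-ins, not the published architecture:

```python
# Illustrative sketch (TensorFlow/Keras) of an AstroNet-style classifier:
# a 1D CNN over phase-folded light curves. The two-branch global + local
# design follows the published approach, but these layer sizes are
# simplified stand-ins rather than the paper's exact architecture.
import tensorflow as tf
from tensorflow.keras import layers

def conv_branch(x, filter_counts):
    """Stack of conv + pool blocks, flattened for the merge layer."""
    for filters in filter_counts:
        x = layers.Conv1D(filters, kernel_size=5, activation="relu",
                          padding="same")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
    return layers.Flatten()(x)

# Global view: the whole folded orbit; local view: zoomed in on the transit.
global_in = tf.keras.Input(shape=(2001, 1), name="global_view")
local_in = tf.keras.Input(shape=(201, 1), name="local_view")

merged = layers.concatenate([
    conv_branch(global_in, [16, 32]),
    conv_branch(local_in, [16]),
])
hidden = layers.Dense(64, activation="relu")(merged)
is_planet = layers.Dense(1, activation="sigmoid", name="is_planet")(hidden)

model = tf.keras.Model([global_in, local_in], is_planet)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```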