Early in 2012, I was lucky enough to tag along with After Effects creators David Simons & Dan Wilk as they dropped in on Pixar, Stu Maschwitz, and other smart, thoughtful animators. After 20 years of building the industry-standard motion graphics tool, they didn’t yet know quite what they wanted to build next, so it was fun to bounce ideas back and forth with forward-thinking creators.
Today, the Academy announced that it will honor Adobe Character Animator as a Pioneering System for Live Performance-Based Animation Using Facial Recognition, recognizing excellence in engineering creativity. In the biz, this is an Emmy! We might be on a bit of a roll with industry bling: this latest award follows our two technical Academy Awards in 2019 for Photoshop and After Effects.
I found myself blocked from doing anything interesting with Apple’s Reality Composer tool due to the lack of readily available USDZ-format files. My kingdom for a Lego minifig!
Therefore it’s cool to see that they’ve released a simple utility meant to facilitate conversion:
The new Reality Converter app makes it easy to convert, view, and customize USDZ 3D objects on Mac. Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd, to view the converted USDZ result, customize material properties with your own textures, and edit file metadata. You can even preview your USDZ object under a variety of lighting and environment conditions with built-in IBL options.
While I wait for my Insta360 One R to arrive, I’m tiding myself over with content like this. I can’t wait to try shooting crazy-looking FPV-style shots without the chaos & risk of making high-speed moves, though I do worry about how this rig might interfere with the drone’s GPS receiver. I guess we’ll see!
In a recent experiment, Prague-based photographer Dan Vojtech decided to shoot the same portrait of himself at a range of focal lengths and log the effect each had on the result. The difference between 20mm and 200mm is unbelievable. So next time someone says that the camera adds 10 pounds, they’re not entirely wrong – it all depends on the equipment used.
From what I’ve tasted of desire, I hold with those who favor fire…
Wild that this can be captured on what David Lynch might call “your f***ing telephone”; wild too that it’s shared as vertical video (by Apple, which after 10+ years can’t be bothered to make iMovie handle this aspect ratio decently!)
I left Adobe in early 2014 in part due to a mix of fear & excitement about what Google was doing with AI & photography. Normal people generally just want help selecting their best images, making them look good, and maybe creating an album/book/movie from them. Accordingly, in 2013 Google+ launched automatic filtering that attempted to show just one’s best images, along with Auto Enhancement of every image & “Auto Awesomes” (animations, collages, etc.) derived from them. I couldn’t get any of this going at Adobe, and it seemed that Google was on the march (having just bought Nik Software, too), so over I went.
Unfortunately it’s really hard to know precisely what constitutes a “good” image (think shifting emotional valences vs. technical qualities). For consumers one can de-dupe somewhat (showing just one or two images from a burst) and try to screen out really blurry, badly lit images. Even so, consumers distrust this kind of filtering & always want to look behind the curtain to ensure that the computer hasn’t missed something. Thus when G+ Photos transitioned into just Google Photos, the feature was dropped & no one said boo. Automatic curation is still used to suggest things like books & albums, but as you may have seen when it’s applied to your own images, results can be hit or miss.
The plugin is powered by the Canon Computer Vision AI engine and uses technical models to select photos based on a number of criteria: sharpness, noise, exposure, contrast, closed eyes, and red eyes. These “technical models” have customizable settings to give you some ability to control the process.
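Canon hasn’t published how its engine actually scores images, but a classic stand-in for the “sharpness” criterion above is variance-of-Laplacian focus scoring: blurry images have little high-frequency detail, so their Laplacian response has low variance. Here’s a minimal sketch of that idea with a tunable threshold (standing in for the plugin’s “customizable settings”) — the function names and the threshold value are my own, not the plugin’s API:

```python
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Variance of a discrete 4-neighbor Laplacian of a grayscale image.
    Higher variance = more high-frequency detail = likely sharper shot."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def cull_blurry(images, threshold=50.0):
    """Keep only (name, image) pairs whose sharpness clears a user-set
    threshold -- a toy version of criteria-based auto-culling."""
    return [name for name, img in images
            if sharpness_score(img) >= threshold]
```

A real culling engine would combine several such per-criterion scores (noise, exposure, closed eyes, etc.), but the thresholded-score structure is the part users would tune.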
Check out how Anne Dattilo, a PhD student in astronomy and astrophysics, collaborated with Google TensorFlow folks to use machine learning to discover new planets (!):
This is the story of the student who became a planet hunter. When Anne Dattilo attended a guest lecture at the University of Texas she had no idea it would be the start of a journey involving complex algorithms, a space telescope breaking down in orbit, a trip to an observatory in the Chihuahuan desert and, finally, the discovery of two new planets.