Monthly Archives: January 2020

Premiere Pro reveals collaborative Productions feature

The other PM in our family (“Hollywood” Nack 💃🏻😌) & her team have been busy:

[T]he new Productions feature set for Premiere Pro was designed from the ground up with input from top filmmakers and Hollywood editorial teams. Early versions of the underlying technology were battle-tested on recent films such as “Terminator: Dark Fate” and “Dolemite Is My Name.” Special builds of Premiere Pro with Productions are being used now in editorial on films like David Fincher’s “MANK.” […]

Editorial teams can organize feature film workflows by reels and scenes. Episodic content creators can group their shows by season and agencies can allocate a Production to each client, for easy access to existing assets. You control your content: Productions use shared local storage and can be used without an internet connection.

Check out a quick tour:


[YouTube] [Via]

Adobe XD introduces Content-Aware Layout

Long, long have I awaited thee…

The team writes,

Content-Aware Layout understands the relationships between layers on your canvas and automatically adjusts these layers as your designs change. In this initial release, Content-Aware Layout lets you control the padding values of a group and maintain those values as the group’s layers change, such as when you’re adding a new layer to the group or editing a text layer… You can learn about Content-Aware Layout in our announcement post and explore free tutorials and demo files on Let’s XD.
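Adobe hasn’t published how Content-Aware Layout works under the hood, but as a rough mental model (all names here are hypothetical, not XD’s API), a padding-preserving group can be thought of as a frame that’s always re-derived from its children’s union bounds plus fixed padding values:

```python
# Hypothetical sketch of the idea behind padding-preserving groups
# (not Adobe's implementation): the group's frame is recomputed from
# the union of its children's bounds plus fixed padding, so adding or
# editing a layer keeps the padding constant while the frame grows.

def group_frame(children, padding):
    """children: list of (x, y, w, h) child bounds.
    padding: (left, top, right, bottom) values to preserve.
    Returns the group's (x, y, w, h) frame."""
    left   = min(x for x, _, _, _ in children)
    top    = min(y for _, y, _, _ in children)
    right  = max(x + w for x, _, w, _ in children)
    bottom = max(y + h for _, y, _, h in children)
    pl, pt, pr, pb = padding
    return (left - pl, top - pt,
            (right - left) + pl + pr,
            (bottom - top) + pt + pb)
```

With 8px padding on all sides, adding a second layer below the first grows the frame’s height while the padding around the content stays exactly 8px.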



“Loretta,” Google’s touching Super Bowl ad

“A man reminisces about the love of his life with a little help from Google.” 😢😌

If you’d like to try some of these things for yourself:

First you’ll need the Google Assistant.

00:12 “Show me photos of me and Loretta”
To use the Assistant to pull up photos, make sure you and your favorite people are tagged in your Google Photos. Then just say, “Hey Google, show me photos of me and [their name]”

00:21 “Remember, Loretta hated my mustache.”
To try this one, just say, “Hey Google, remember…” and then whatever you’d like the Assistant to help you recall later. Like “Hey Google, remember Dad’s shoe size is 8 and a half” or “remember Maria loves lilies.” Then, to see everything you’ve asked the Assistant to remember, just say, “Hey Google, what did I tell you to remember?”

00:39 “Show me photos from our anniversary”
To see photos from a wedding, anniversary, birthday, or graduation, you’ll need a Google Photos account, and you’ll also need to tell your Assistant the specific date. Just say something like, “Hey Google, remember my anniversary is May 18th” or “remember Mark’s birthday is March 30th.” Then you can use that information in many ways, like “Hey Google, show me photos from our anniversary” or “Hey Google, remind me to buy flowers on Mark’s birthday.”

00:51 “Play our favorite movie.”
First, tell your Google Assistant what your favorite movie is by saying, “Hey Google, our favorite movie is Casablanca.” Once you’ve purchased your favorite movie on Google Play Movies or YouTube, all you have to say is, “Hey Google, play our favorite movie” and the movie will start playing.



“It’s like a big fish made out of fish,” my 10yo son Henry just noted. “Fishception!”

Kottke, who says “Scary Sea Monster Really Just Hundreds of Tiny Fish in a Trench Coat,” notes:

“Try rewatching the video, picking one fish and following it the entire time. Then pick another fish and watch the video again. The juvenile striped eel catfish seem to cycle through positions within the school as the entire swarm moves forward.”

Like riders in a peloton, each taking their turn braving danger at the front.


“Ghost Box”: An audio/sculptural mashup

Steve Parker’s brass audio sculptures are a delightfully weird melange:

Activated by touch, “Ghost Box” plays randomized audio segments on a loop, including the ticks of Morse Code, the chorus of spirituals, and the blows of the shofar and Iron Age Celtic carnyx. Each time someone makes contact with a part of the wall sculpture, a new noise emits.

The artist writes,

The Ghost Army was an Allied Army tactical deception unit during World War II. Their mission was to impersonate other Allied Army units to deceive the enemy. From a few weeks before D-Day, when they landed in France, until the end of the war, they put on a “traveling road show” utilizing inflatable tanks, sound trucks, fake radio transmissions, scripts, and sound projections. The unit was an incubator for many young artists who went on to have a major impact on the post-war US, including Ellsworth Kelly, Bill Blass, and Arthur Singer.


Visually inspecting trains at high speed

It’s pretty OT for my blog, I know, but as someone who’s been working in computer vision for the last couple of years, I find it interesting to see how others are applying these techniques.

Equipped with ultra-high definition cameras and high-powered illumination, the [Train Inspection Portal (TIP)] produces 360° scans of railcars passing through the portal at track speed. Advanced machine vision technology and software algorithms identify defects and automatically flag cars for repair.
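The vendor doesn’t detail its algorithms, but one classic machine-vision baseline for this kind of inspection (a simplified illustration, not the TIP’s actual pipeline) is to compare each scan against a known-good reference and flag regions where the difference is sustained above a threshold:

```python
# Illustrative sketch only (not the actual TIP software): flag
# candidate defects by comparing a railcar scan line against a
# known-good reference and reporting runs of large pixel differences.

def flag_defects(scan, reference, threshold=40, min_region=3):
    """scan, reference: equal-length lists of grayscale values (0-255).
    Returns (start, end) index ranges where |scan - reference| stays
    above threshold for at least min_region consecutive pixels."""
    regions, start = [], None
    for i, (s, r) in enumerate(zip(scan, reference)):
        if abs(s - r) > threshold:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_region:
                regions.append((start, i))
            start = None
    if start is not None and len(scan) - start >= min_region:
        regions.append((start, len(scan)))
    return regions
```

Requiring a minimum run length suppresses single-pixel noise; production systems typically layer learned models on top of (or in place of) this kind of thresholding.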


A few smart career tweets

I’m now thinking about this constantly:

This too:

And yeah, ¬”Satisfaction, but feeling of uselessness…”

Adobe Character Animator wins an Emmy

Early in 2012, I was lucky enough to tag along with After Effects creators David Simons & Dan Wilk as they dropped in on Pixar, Stu Maschwitz, and other smart, thoughtful animators. After 20 years of building the industry-standard motion graphics tool, they didn’t yet know quite what they wanted to build next, so it was fun to bounce ideas back and forth with forward-thinking creators.

Fast forward to 2020, and the product that resulted from those investigations—Character Animator—has just won an Emmy:

Today, the Academy announced that it will honor Adobe Character Animator as a Pioneering System for Live Performance-Based Animation Using Facial Recognition, showing excellence in engineering creativity. In the biz, this is an Emmy! We might be on a bit of a roll here, for industry bling, since this latest award follows on from our two technical Academy Awards in 2019 for Photoshop and After Effects.

The tool has powered the first-ever live episode of The Simpsons, live interviews with Stephen Colbert that morphed into Our Cartoon President, and more (see recent roundup below).

Congrats to the team (who are now “EgOts,” I think—winners of Emmys & Oscars!); we can’t wait to see where you go next!



Apple releases Reality Converter

I found myself blocked from doing anything interesting with Apple’s Reality Composer tool due to the lack of readily available USDZ-format files. My kingdom for a Lego minifig!

Therefore it’s cool to see that they’ve released a simple utility meant to facilitate conversion:

The new Reality Converter app makes it easy to convert, view, and customize USDZ 3D objects on Mac. Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd, to view the converted USDZ result, customize material properties with your own textures, and edit file metadata. You can even preview your USDZ object under a variety of lighting and environment conditions with built-in IBL options.

“How focal length can change your face” — and what can be done about it

Quick, interesting animation:

In a recent experiment, Prague-based photographer Dan Vojtech decided to try out different focal lengths on the same portrait of himself and log the effect each had. The difference between 20mm and 200mm is unbelievable. So next time someone says that the camera adds 10 pounds, they’re not entirely wrong – it all depends on the equipment used.
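The effect is really driven by camera-to-subject distance rather than the glass itself: to fill the frame with a 20mm lens you have to stand much closer, and a feature’s projected size is proportional to 1/distance, so near features (the nose) enlarge relative to far ones (the ears). A back-of-envelope sketch (the ~10 cm nose-to-ear depth is an assumed, illustrative number):

```python
# Back-of-envelope illustration of perspective distortion: apparent
# size scales as 1/distance, so a feature that sits closer to the
# camera projects larger relative to features farther back.

def nose_to_ear_magnification(camera_distance_m, depth_m=0.10):
    """camera_distance_m: distance from camera to the ear plane.
    depth_m: how much closer the nose is (illustrative ~10 cm).
    Returns the apparent nose-size : ear-size ratio."""
    return camera_distance_m / (camera_distance_m - depth_m)
```

At a 0.3 m working distance (roughly what a 20mm headshot forces) the nose renders ~50% larger relative to the ears; back up to 3 m for a 200mm framing and the ratio drops to about 3%.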

Interestingly, a couple of years back some Adobe & Google researchers unveiled work on “Perspective-Aware Manipulation of Portrait Photos”:

[YouTube] [Via Peyman Milanfar]

Canon promises AI assistance for Lightroom culls

TL;DR: If this works, I’ll be pleasantly shocked.

I left Adobe in early 2014 in part due to a mix of fear & excitement about what Google was doing with AI & photography. Normal people generally just want help selecting the best images, making them look good, and maybe creating an album/book/movie from them. Accordingly, in 2013 Google+ launched automatic filtering that attempted to show just one’s best images, along with Auto Enhancement of every image & “Auto Awesomes” (animations, collages, etc.) derived from them. I couldn’t get any of this going at Adobe, and it seemed that Google was on the march (just having bought Nik Software, too), so over I went.

Unfortunately it’s really hard to know what precisely constitutes a “good” image (think shifting emotional valences vs. technical qualities). For consumers one can de-dupe somewhat (showing just one or two images from a burst) and try to screen out really blurry, badly lit images. Even so, consumers distrust this kind of filtering & always want to look behind the curtain to make sure the computer hasn’t missed something. Therefore when G+ Photos transitioned into just Google Photos, the feature was dropped & no one said boo. Automatic curation is still used to suggest things like books & albums, but as you may have seen when it’s applied to your own images, results can be hit or miss.

So will pros trust such tech to help them sort through hundreds of similar images? Well… maybe? Canon’s prepping a subscription-based plug-in for the job:

The plugin is powered by the Canon Computer Vision AI engine and uses technical models to select photos based on a number of criteria: sharpness, noise, exposure, contrast, closed eyes, and red eyes. These “technical models” have customizable settings to give you some ability to control the process.
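None of Canon’s models are public, but one of the classic “technical” criteria it lists, sharpness, is often scored with the variance of a Laplacian filter: blurry frames have weak edge responses and thus a low score. A minimal pure-Python sketch of that idea (not Canon’s engine):

```python
# Illustrative sharpness metric (not Canon's actual model): variance
# of the 4-neighbour Laplacian response. Sharp edges produce large
# positive/negative responses, so higher variance ~ sharper image.

def sharpness_score(img):
    """img: 2D list of grayscale values. Returns the variance of the
    Laplacian response over interior pixels (0 for a flat image)."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

A culling tool can then rank burst frames by this score and keep only the top one or two, with closed-eye and exposure checks layered on separately.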

Here it is in action:



Planet hunting with Google ML

Check out how Anne Dattilo, a PhD student in astronomy and astrophysics, collaborated with Google TensorFlow folks to use machine learning to discover new planets (!):

This is the story of the student who became a planet hunter. When Anne Dattilo attended a guest lecture at the University of Texas she had no idea it would be the start of a journey involving complex algorithms, a space telescope breaking down in orbit, a trip to an observatory in the Chihuahuan desert and, finally, the discovery of two new planets.

If you’re so motivated, you can download Chris’ AstroNet code for yourself. Happy hunting! 🪐🤘
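AstroNet itself is a TensorFlow neural net trained on Kepler light curves, but as a toy illustration of the signal it learns to recognize, here’s a naive transit finder: a planet crossing its star produces short, periodic dips in brightness, so flag runs of flux readings well below the star’s median.

```python
# Toy transit detector (a naive baseline, not AstroNet): flag runs of
# brightness samples that dip more than `depth` below the median flux
# for at least `min_points` consecutive samples.

import statistics

def find_transits(flux, depth=0.01, min_points=3):
    """flux: list of normalized brightness samples over time.
    Returns (start, end) index ranges of candidate transit dips."""
    base = statistics.median(flux)
    dips, start = [], None
    for i, f in enumerate(flux):
        if f < base - depth:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_points:
                dips.append((start, i))
            start = None
    if start is not None and len(flux) - start >= min_points:
        dips.append((start, len(flux)))
    return dips
```

Real pipelines have to cope with stellar variability, instrument noise, and eclipsing-binary false positives, which is exactly where the neural net earns its keep.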


Google invites kids to illustrate kindness in a new Doodle

Today the 12th annual Doodle for Google contest kicks off:

We’re excited to announce that this year’s theme is “I show kindness by…” Acts of kindness bring more joy, light and warmth to the world. They cost nothing, but mean everything.

As submissions open, we’re inviting young artists in grades K-12 to open up their creative hearts and show us how they find ways to be kind. […]

This year’s national winner will have their artwork featured on the Google homepage for a day and receive a $30,000 college scholarship. The winner’s school will also receive a $50,000 technology package.

Can’t wait to see what kids create.


Design: Two space-loving sisters win Halloween forever 🚀

I can’t really imagine the lunar lander surviving trick-or-treating beyond the end of the driveway, but man do I love these costumes:


“Charlie enters the costume by crawling underneath, and there is a pair of shoulder straps that she uses to lift the entire costume,” their parent who uses the screen name Brandoj23 wrote on Imgur this week. “The costume looks heavier than it is. It’s almost entirely made of foam and foam board.”

The antennae are made from coat hangers and bamboo dowels. The attitude thrusters are made from disposable wine flutes. The gold foil is made from a gold space blanket material. 

“The front hatch magnetically closes and magnetically stays open, and doubles as a candy sample input port,” Brandoj23 added. “The ascent stage (top part) separates from the descent stage (bottom part with landing pads).”