The other PM in our family (“Hollywood” Nack 💃🏻😌) & her team have been busy:
[T]he new Productions feature set for Premiere Pro was designed from the ground up with input from top filmmakers and Hollywood editorial teams. Early versions of the underlying technology were battle-tested on recent films such as “Terminator: Dark Fate” and “Dolemite Is My Name.” Special builds of Premiere Pro with Productions are being used now in editorial on films like David Fincher’s “MANK.” […]
Editorial teams can organize feature film workflows by reels and scenes. Episodic content creators can group their shows by season and agencies can allocate a Production to each client, for easy access to existing assets. You control your content: Productions use shared local storage and can be used without an internet connection.
Content-Aware Layout understands the relationships between layers on your canvas and automatically adjusts these layers as your designs change. In this initial release, Content-Aware Layout lets you control the padding values of a group and maintain those values as the group’s layers change, such as when you’re adding a new layer to the group or editing a text layer… You can learn about Content-Aware Layout in our announcement post and explore free tutorials and demo files on Let’s XD.
00:12 “Show me photos of me and Loretta” To use the Assistant to pull up photos, make sure you and your favorite people are tagged in your Google Photos. Then just say, “Hey Google, show me photos of me and [their name]”
00:21 “Remember, Loretta hated my mustache.” To try this one, just say, “Hey Google, remember…” and then whatever you’d like the Assistant to help you recall later. Like “Hey Google, remember Dad’s shoe size is 8 and a half” or “remember Maria loves lilies.” Then, to see everything you’ve asked the Assistant to remember, just say, “Hey Google, what did I tell you to remember?”
00:39 “Show me photos from our anniversary” To see photos from a wedding, anniversary, birthday, or graduation, you’ll need a Google Photos account, and you’ll also need to tell your Assistant the specific date. Just say something like, “Hey Google, remember my anniversary is May 18th” or “remember Mark’s birthday is March 30th.” Then you can use that information in many ways, like “Hey Google, show me photos from our anniversary” or “Hey Google, remind me to buy flowers on Mark’s birthday.”
00:51 “Play our favorite movie.” First, tell your Google Assistant what your favorite movie is by saying, “Hey Google, our favorite movie is Casablanca.” Once you’ve purchased your favorite movie on Google Play Movies or YouTube, all you have to say is, “Hey Google, play our favorite movie” and the movie will start playing.
“It’s like a big fish made out of fish,” my 10yo son Henry just noted, “Fishception!”
Kottke, under the headline “Scary Sea Monster Really Just Hundreds of Tiny Fish in a Trench Coat,” notes:
“Try rewatching the video, picking one fish and following it the entire time. Then pick another fish and watch the video again. The juvenile striped eel catfish seem to cycle through positions within the school as the entire swarm moves forward.”
Like riders in a peloton, each taking their turn braving danger at the front.
Activated by touch, “Ghost Box” plays randomized audio segments on a loop, including the ticks of Morse Code, the chorus of spirituals, and the blows of the shofar and Iron Age Celtic carnyx. Each time someone touches a part of the wall sculpture, it emits a new sound.
The Ghost Army was an Allied Army tactical deception unit during World War II. Their mission was to impersonate other Allied Army units to deceive the enemy. From a few weeks before D-Day, when they landed in France, until the end of the war, they put on a “traveling road show” utilizing inflatable tanks, sound trucks, fake radio transmissions, scripts, and sound projections. The unit was an incubator for many young artists who went on to have a major impact on the post-war US, including Ellsworth Kelly, Bill Blass, and Arthur Singer.
It’s pretty OT for my blog, I know, but as someone who’s been working in computer vision for the last couple of years, I find it interesting to see how others are applying these techniques.
Equipped with ultra-high definition cameras and high-powered illumination, the [Train Inspection Portal (TIP)] produces 360° scans of railcars passing through the portal at track speed. Advanced machine vision technology and software algorithms identify defects and automatically flag cars for repair.
Early in 2012, I was lucky enough to tag along with After Effects creators David Simons & Dan Wilk as they dropped in on Pixar, Stu Maschwitz, and other smart, thoughtful animators. After 20 years of building the industry-standard motion graphics tool, they didn’t yet know quite what they wanted to build next, so it was fun to bounce ideas back and forth with forward-thinking creators.
Today, the Academy announced that it will honor Adobe Character Animator as a Pioneering System for Live Performance-Based Animation Using Facial Recognition, recognizing excellence in engineering creativity. In the biz, this is an Emmy! We might be on a bit of a roll with industry bling: this latest award follows our two technical Academy Awards in 2019 for Photoshop and After Effects.
I found myself blocked from doing anything interesting with Apple’s Reality Composer tool due to the lack of readily available USDZ-format files. My kingdom for a Lego minifig!
So it’s cool to see that they’ve released a simple utility meant to facilitate conversion:
The new Reality Converter app makes it easy to convert, view, and customize USDZ 3D objects on Mac. Simply drag-and-drop common 3D file formats, such as .obj, .gltf and .usd, to view the converted USDZ result, customize material properties with your own textures, and edit file metadata. You can even preview your USDZ object under a variety of lighting and environment conditions with built-in IBL options.
While I wait for my Insta360 One R to arrive, I’m tiding myself over with content like this. I can’t wait to try shooting crazy-looking FPV-style shots without the chaos & risk of making high-speed moves, though I do worry about how this rig might interfere with the drone’s GPS receiver. I guess we’ll see!
In a recent experiment, Prague-based photographer Dan Vojtech decided to try out different focal lengths on the same portrait photo of himself and log the effect each had on it. The difference between 20mm and 200mm is unbelievable. So next time someone says that the camera adds 10 pounds, they’re not entirely wrong – it all depends on the equipment used.
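Strictly speaking, the distortion comes from subject distance rather than focal length itself: to keep the head framed the same, a 20mm lens forces you roughly 10× closer than a 200mm, which exaggerates near features like the nose. Here’s a back-of-envelope sketch of that effect (my own illustration with hypothetical distances, not numbers from Vojtech’s experiment):

```python
# Toy perspective-distortion calculation. Assumes a simple pinhole
# model (apparent size ~ 1/distance) and that the camera moves to keep
# the head framed identically, so subject distance scales with focal length.

def feature_size_ratio(subject_distance_m, feature_offset_m=0.10):
    """Apparent size of a feature ~10 cm in front of the face plane,
    relative to the face plane itself."""
    return subject_distance_m / (subject_distance_m - feature_offset_m)

# Hypothetical distances for identical head framing:
# ~0.45 m at 20 mm vs ~4.5 m at 200 mm.
wide = feature_size_ratio(0.45)   # 20 mm, shot close up
tele = feature_size_ratio(4.5)    # 200 mm, shot from far back

print(f"20 mm: nose rendered {wide:.2f}x larger than the face plane")
print(f"200 mm: nose rendered {tele:.2f}x larger")
```

At 20mm the nose comes out nearly 30% oversized relative to the rest of the face, while at 200mm the difference is only about 2% – which matches the “stretched” look of the wide-angle frames in Vojtech’s GIF.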
From what I’ve tasted of desire, I hold with those who favor fire…
Wild that this can be captured on what David Lynch might call “your f***ing telephone”; wild too that it’s shared as vertical video (by Apple, which after 10+ years can’t be bothered to make iMovie handle this aspect ratio decently!)
I left Adobe in early 2014 in part due to a mix of fear & excitement about what Google was doing with AI & photography. Normal people generally just want help selecting the best images, making them look good, and maybe creating an album/book/movie from them. Accordingly, in 2013 Google+ launched automatic filtering that attempted to show just one’s best images, along with Auto Enhancement of every image & “Auto Awesomes” (animations, collages, etc.) derived from them. I couldn’t get any of this going at Adobe, and it seemed that Google was on the march (just having bought Nik Software, too), so over I went.
Unfortunately it’s really hard to know what precisely constitutes a “good” image (think shifting emotional valences vs. technical qualities). For consumers one can de-dupe somewhat (showing just one or two images from a burst) and try to screen out really blurry, badly lit images. Even so, consumers distrust this kind of filtering & always want to look behind the curtain to ensure that the computer hasn’t missed something. Thus when G+ Photos transitioned into just Google Photos, the feature was dropped & no one said boo. Automatic curation is still used to suggest things like books & albums, but as you may have seen when it’s applied to your own images, results can be hit or miss.
The plugin is powered by the Canon Computer Vision AI engine and uses technical models to select photos based on a number of criteria: sharpness, noise, exposure, contrast, closed eyes, and red eyes. These “technical models” have customizable settings to give you some ability to control the process.
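To give a flavor of what one of those “technical models” might measure: sharpness is commonly scored as the variance of an image’s Laplacian (flat or smoothly blurred images score near zero, crisp edges score high). This is a standard blur heuristic, not Canon’s actual engine; the sketch below works on plain 2D lists of gray values to stay dependency-free:

```python
# Minimal sketch of a sharpness score: variance of the 4-neighbor
# Laplacian. A culling tool could rank burst frames by this number and
# discard the blurriest. (Illustrative only, not Canon's implementation.)

def laplacian_variance(img):
    """Higher variance of the Laplacian => sharper image."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# A hard edge scores high; a smooth gradient scores zero.
sharp = [[0] * 4 + [255] * 4 for _ in range(8)]          # crisp vertical edge
blurry = [[x * 32 for x in range(8)] for _ in range(8)]  # smooth ramp

print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

Real tools layer several such scores (exposure via histogram stats, face/eye detectors for blinks) and then let users weight them – which is presumably what Canon’s “customizable settings” expose.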
Check out how Anne Dattilo, a PhD student in astronomy and astrophysics, collaborated with Google TensorFlow folks to use machine learning to discover new planets (!):
This is the story of the student who became a planet hunter. When Anne Dattilo attended a guest lecture at the University of Texas she had no idea it would be the start of a journey involving complex algorithms, a space telescope breaking down in orbit, a trip to an observatory in the Chihuahuan desert and, finally, the discovery of two new planets.
We’re excited to announce that this year’s theme is “I show kindness by…” Acts of kindness bring more joy, light and warmth to the world. They cost nothing, but mean everything.
As submissions open, we’re inviting young artists in grades K-12 to open up their creative hearts and show us how they find ways to be kind. […]
This year’s national winner will have their artwork featured on the Google homepage for a day and receive a $30,000 college scholarship. The winner’s school will also receive a $50,000 technology package.
“Charlie enters the costume by crawling underneath, and there is a pair of shoulder straps that she uses to lift the entire costume,” their parent who uses the screen name Brandoj23 wrote on Imgur this week. “The costume looks heavier than it is. It’s almost entirely made of foam and foam board.”
The antennae are made from coat hangers and bamboo dowels. The attitude thrusters are made from disposable wine flutes. The gold foil is made from a gold space blanket material.
“The front hatch magnetically closes and magnetically stays open, and doubles as a candy sample input port,” Brandoj23 added. “The ascent stage (top part) separates from the descent stage (bottom part with landing pads).”