I’ve long been skeptical of automated video editing. As I noted in May,
My Emmy-winning colleague Bill Hensler, who used to head up video engineering at Adobe, said he’d been pitched similar tech since the early ’90s and always said, “Sure, just show me a system that can match a shot of a guy entering a room with another shot of the same thing from a different angle—then we’ll talk.” As far as I know, we’re still waiting.
Now, however, some researchers at Adobe & Stanford are narrowing the problem, focusing just on saving editors time via “Computational Video Editing for Dialogue-Driven Scenes”:
Given a script and multiple video recordings, or takes, of a dialogue-driven scene as input (left), our computational video editing system automatically selects the most appropriate clip from one of the takes for each line of dialogue in the script based on a set of user-specified film-editing idioms (right).
Check out the short demo (where the cool stuff starts ~2 minutes in):
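To make the idea concrete, here’s a toy sketch of idiom-driven clip selection—not the researchers’ actual algorithm, just the flavor of it. The script lines, take names, and framings below are all hypothetical; each line of dialogue is scored against every available take using a simple “idiom” cost, with a small penalty for cutting back to the take just used:

```python
# Hypothetical data: for each dialogue line, the takes that cover it,
# labeled by camera framing.
script = ["ANNA: We need to go.", "BEN: Not yet.", "ANNA: Now."]
takes = {
    0: {"take1": "wide", "take2": "closeup-anna"},
    1: {"take1": "wide", "take3": "closeup-ben"},
    2: {"take1": "wide", "take2": "closeup-anna"},
}

def idiom_cost(line, framing, prefer="speaker-closeup"):
    """Lower is better. The 'speaker-closeup' idiom rewards a close-up
    on whoever is speaking; a 'wide-establish' idiom would reward wides."""
    speaker = line.split(":")[0].strip().lower()
    if prefer == "speaker-closeup":
        return 0.0 if framing == f"closeup-{speaker}" else 1.0
    return 0.0 if framing == "wide" else 1.0

def select_clips(script, takes, jump_cut_penalty=0.5):
    """Greedily pick the cheapest take per line, penalizing an immediate
    cut back to the same take (which would read as a jump cut)."""
    chosen, prev = [], None
    for i, line in enumerate(script):
        best = min(
            takes[i].items(),
            key=lambda kv: idiom_cost(line, kv[1])
            + (jump_cut_penalty if kv[0] == prev else 0.0),
        )
        chosen.append(best[0])
        prev = best[0]
    return chosen

print(select_clips(script, takes))  # one take name per dialogue line
```

Swapping in a different `prefer` idiom changes the whole edit—which is the point of the paper’s user-specified idioms, even though the real system is far more sophisticated than this greedy pass.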
The makers of the popular Prisma style-transfer app are branching into offering an SDK:
[U]nderstand and modify the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API or SDK for iOS or Android apps.
One example use is Sticky AI, a super simple app for creating selfie stickers & optionally styling/captioning them.
According to TechCrunch, Prisma shares at least one investor with Fabby, the tech/SDK that Google acquired last week. Meanwhile, there’s also YOLO: Real-Time Object Detection:
This mass proliferation of off-the-shelf computer vision makes me think of Mom & Pop at Web scale: It’s gonna enable craziness like when Instagram was launched by two (!) guys thanks to the existence of AWS, OAuth, etc. It’ll be interesting to see how, thanks to Fabby & other efforts, Google can play a bigger part in enabling mass experimentation.
Eclipse-chaser Mike Kentrianakis of the American Astronomical Society boarded a specially rescheduled Alaska Airlines flight (interesting details here) to capture last year’s eclipse over the Pacific. (Tangentially related: Don’t forget that you can contribute creations to NASA’s Eclipse Art Quilt Project.)
If you liked the rich, trippy visuals of the previous post, check out this quick making-of from their creator:
I’m Dan Marker-Moore. Follow me on my journey through Hong Kong and Shanghai and learn how I stitch together hundreds of photos to make one Time Slice image. I use Adobe Lightroom to color correct and After Effects to composite.
Available in 4k UHD!
Break it down, Dan Marker-Moore & Nas:
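For the curious, the core of the time-slice idea is simple enough to sketch, assuming a sequence of already-aligned frames (this is my illustration, not Dan’s actual After Effects workflow): each vertical strip of the output comes from a different moment in time, so time sweeps across the single composite image.

```python
import numpy as np

def time_slice(frames):
    """Composite a time slice: the i-th vertical strip of the output is
    copied from the i-th frame. `frames` is a list of equally sized
    HxWx3 arrays shot from a fixed (or stabilized) camera position."""
    n = len(frames)
    h, w, c = frames[0].shape
    edges = np.linspace(0, w, n + 1, dtype=int)  # strip boundaries
    out = np.empty_like(frames[0])
    for i, frame in enumerate(frames):
        out[:, edges[i]:edges[i + 1]] = frame[:, edges[i]:edges[i + 1]]
    return out

# Tiny demo with synthetic solid-color "frames" standing in for photos:
frames = [np.full((4, 9, 3), v, dtype=np.uint8) for v in (0, 128, 255)]
composite = time_slice(frames)
```

With hundreds of real exposures instead of three flat colors, each narrow strip catches the sky at a slightly different time—hence the gradient-like sweep in the finished pieces.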
This won’t seem like much right now, I’m sure—but I’m really excited. Per TechCrunch:
The search and Android giant has acquired AIMatter, a startup founded in Belarus that has built both a neural network-based AI platform and SDK to detect and process images quickly on mobile devices, and a photo and video editing app that has served as a proof-of-concept of the tech called Fabby.
In a lot of ways it’s the next generation of stuff we started developing when I joined Google Photos (anybody remember Halloweenify?). If you’ve ever hand-selected hair in Photoshop or (gulp) rotoscoped video, you’ll know how insane it is that these tasks can now be performed in realtime on a friggin’ telephone.
As to what happens next—stay tuned!
I know, I know—you think you’ve seen it all a hundred times, but I’d be surprised if you didn’t enjoy this mesmerizing work by Tyler Hulett:
Starry skies swirl and reel above Oregon. Each frame is an independent star trail photograph, and most of these clips represent an entire night of shooting somewhere across the state of Oregon. In a few clips, motion control panning leads to otherworldly patterns. No artificial effects; just stacking. Only one DSLR shutter was blown to make this film.
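“Just stacking” undersells how elegant the trick is: a star-trail stack is typically a lighten blend, keeping the brightest value ever seen at each pixel so moving stars paint their own trails. Here’s a minimal sketch of that blend (my illustration of the general technique, not Tyler’s specific pipeline):

```python
import numpy as np

def stack_star_trails(frames):
    """Lighten-blend a sequence of night-sky frames: per-pixel max across
    all frames. Stars move between exposures, so the brightest-ever value
    at each pixel traces out their trails. `frames` is an iterable of
    equally sized HxWx3 uint8 arrays."""
    it = iter(frames)
    acc = next(it).copy()
    for frame in it:
        np.maximum(acc, frame, out=acc)  # in place: one buffer, many frames
    return acc

# Demo: a single bright "star" moving one pixel between two frames.
frames = [np.zeros((3, 3, 3), dtype=np.uint8) for _ in range(2)]
frames[0][0, 0] = 255
frames[1][0, 1] = 255
trail = stack_star_trails(frames)  # both positions stay lit
```

Because the blend is a running maximum, an entire night of exposures can be folded in one frame at a time without ever holding more than two images in memory.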
After leaving my team at Google, Dmitry Shapiro has set up Metaverse, a drag-and-drop authoring platform for app creation. Here the team shows how to create a “Not Hotdog”-style app in just a couple of minutes without writing any code:
[YouTube] [Via Alan Joyce]
I love this kind of cinematic Inside Baseball. As Kottke writes,
This is a clever bit of TV/film analysis by Evan Puschak: he reconstructs the Loot Train Battle from the most recent episode of Game of Thrones using clips from other movies and TV shows (like 300, Lord of the Rings, Stagecoach, and Apocalypse Now). In doing so, he reveals the structure that many filmed battle scenes follow, from the surprising enemy attack presaged by the distant sound of horses (as in 300) to the quiet mid-chaos reflection by a shocked commander (as in Saving Private Ryan).
The Big Red A Took My Baby Away this weekend—but it was for a good cause: Margot hosted a diverse panel of women film & TV editors at the American Cinema Editors (ACE) EditFest. They shared stories of how they’ve broken into & succeeded in the industry. Scrub ahead ~8 minutes to when the conversation starts. (Great work, M!)