New Adobe tech to help video editors

It’s far from the flashiest task, but placing cuts and transitions in interview footage can be crucial to telling a story. Adobe’s Wil Li, together with UC Berkeley collaborators Maneesh Agrawala and Floraine Berthouzoz, has unveiled “a one-click method for seamlessly removing ’ums’ and repeated words, as well as inserting natural-looking pauses to emphasize semantic content”:

To help place cuts in interview video, our interface links a text transcript of the video to the corresponding locations in the raw footage. It also visualizes the suitability of cut locations… Editors can directly highlight segments of text, check if the endpoints are suitable cut locations and if so, simply delete the text to make the edit. For each cut our system generates visible (e.g. jump-cut, fade, etc.) and seamless, hidden transitions. 
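The core idea in the excerpt — a transcript where each word is linked to its timecodes in the raw footage, so that deleting text implies a cut — can be sketched roughly as follows. This is only an illustrative sketch, not Adobe’s implementation; the `Word` structure and `cut_ranges` helper are hypothetical names invented for this example.

```python
# Hypothetical sketch: each transcript word carries start/end timecodes
# into the raw footage, so deleting words yields the footage to keep.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the raw footage
    end: float

def cut_ranges(words, deleted_indices):
    """Return (start, end) ranges of footage to keep after deleting words."""
    deleted = set(deleted_indices)
    ranges = []
    for i, w in enumerate(words):
        if i in deleted:
            continue
        # Merge with the previous range when the footage is contiguous.
        if ranges and abs(ranges[-1][1] - w.start) < 1e-6:
            ranges[-1] = (ranges[-1][0], w.end)
        else:
            ranges.append((w.start, w.end))
    return ranges

transcript = [Word("So", 0.0, 0.4), Word("um", 0.4, 0.9),
              Word("we", 0.9, 1.2), Word("we", 1.2, 1.5),
              Word("started", 1.5, 2.1)]
# Delete the filler "um" and the repeated "we" (indices 1 and 2):
print(cut_ranges(transcript, [1, 2]))  # [(0.0, 0.4), (1.2, 2.1)]
```

Each boundary between kept ranges is a candidate cut location, which the real system would then score for suitability and render as either a visible or a hidden transition.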


Here’s more info about the project.

9 thoughts on “New Adobe tech to help video editors”

  1. That’s pretty amazing. The example that they’re working with at the beginning isn’t as good (or obvious) as the ones they show at the end. Some of the cuts where you can tell the video is being played backwards remind me of the scene in Star Wars where Luke is attacked by the Sandpeople (or “sandperson” since there was only one). They didn’t have enough footage, so they manually rocked the footage back and forth on the film editor to stretch the scene out for a couple more seconds.
    But overall, it’s really impressive. I’d love to see this technology show up in Premiere CS7 (or 6.5! (or right away via Creative Cloud!)).

    1. Philip, did you actually watch the video all the way through? This isn’t just about making cuts and sequencing based on existing transcript metadata, which has been in research and production systems for many years. For an example, see Silver from CMU [Casares et al., 2002]; the paper is linked here:
      Li et al.’s research system also identifies areas suitable for cuts (in audio and video) and can generate transitions that hide the edits, or can reduce or extend pauses to change pacing. Check out the paper presented at SIGGRAPH 2012 at the link below, which contains many details.
      [But where’s the snark in *that*, smart guy?? :-p –J.]

  2. Wow, I would love to use this! I’m not a programmer, but I recognize that accurate auto-transcription is an incredibly difficult problem to solve, especially with so many spoken languages in the world.
    That said, I think the usefulness of this feature for interviews will greatly depend on Adobe improving Premiere’s auto-transcription (or at least the manual editing of it). My experience editing Premiere’s roughly 60-percent-accurate transcribed text has been less than great. When I add missed words or corrected phrases to the transcribed text, I haven’t been able to manually sync the new text to the words being spoken in the interview footage. At 60 percent accuracy, that makes editing the interview word by word from the auto-transcribed text pretty frustrating.
    Again I offer this only as constructive feedback and applaud Adobe for tirelessly creating countless tools that elegantly solve the problems I encounter as an editor.

  3. These are still hitting the ‘still frame’ issue: humans move, backgrounds move, and interpolation, when it’s done, still looks cheesy. The notion of an automatic transition is nice and has a page-curl pleasantness to it, but it reads that way to me too. There’s room for improvement; conceptually, the answer might lie in asking ‘is there movement or not?’ and, if so, ‘will this effect work at all, or does it need something more?’ I love this direction and focus, and yes, oh my goodness yes, we can make documentary-style editing better. I can’t wait!
