The dream of the ’90s is alive in Portland… and perhaps in Flo, a new video-creation app that promises to use AI & voice commands to synthesize great movies from unwashed source material:
Just request a story by location, time period, tags or all of the above and Flo will respond to you like your very own video making assistant, creating the video story of your choice—e.g. ‘Make me a video story of my cat’ or ‘Make a video of my weekend trip to the beach’.
I’m off to try it out, but color me skeptical: My Emmy-winning colleague Bill Hensler, who used to head up video engineering at Adobe, said he’d been pitched similar tech since the early ’90s and always replied, “Sure, just show me a system that can match a shot of a guy entering a room with another shot of the same thing from a different angle—then we’ll talk.” As far as I know, we’re still waiting.
No word yet on what happens if you invite Flo to kiss your grits. (Also, saying this makes me Very Old.)
Baidu’s Deep Voice 2 can eerily render sentences with numerous accents & other unique quirks; click through to hear examples.
In only three months, we’ve been able to scale our system from 20 hours of speech and a single voice to hundreds of hours with hundreds of voices. Deep Voice 2 can learn from hundreds of voices and imitate them perfectly. Unlike traditional systems, which need dozens of hours of audio from a single speaker, Deep Voice 2 can learn from hundreds of unique voices from less than half an hour of data per speaker, while achieving high audio quality.
Conversely, Anti AI AI is a wearable detector of synthetic voices:
The device notifies the wearer when a synthetic voice is detected and cools the skin using a thermoelectric plate to alert the wearer the voice they are hearing was synthesised: by a cold, lifeless machine.
To quote the stoney falsetto of Towelie, “I have no idea what’s goin’ on right now…”
“Automatic colorization is a fundamentally ambiguous problem,” says UC Berkeley researcher Richard Zhang. “What humans do well + what computers do well = better than either alone,” say I. With that in mind, check out this new research project that pairs AI with human-powered adjustments to deliver compelling results quickly:
Like a Tracy Jordan joint, this is Hard To Watch: ADAC, “the German equivalent of AAA… usually crash tests real cars at their facility in Landsberg, Germany,” but in this case put a 2,700-piece Lego Porsche to the test at 28mph:
“The challenge was now to test this small car in the normal crash system and still produce the most realistic damage possible,” explains Johannes Heilmaier, head of the crash system at the ADAC Technikzentrum. “We developed a crash set-up like for any other car – just in mini format.” [details]
Over the 10 days we took photos of yellow cabs whenever we had time to from as many different angles as possible. So we gathered 2000 (!) photos in total we had to sort afterwards and compile to a hyperlapse around a cab in post production. It took us 5 whole days in post production to get this one shot.
Whether or not you’re compelled by the story of caffeine addiction & the migraines brought on by withdrawal, I think you’ll find this animation captivating & will want to see The White Stripes sign the production team stat.
I’m wondering, though, about these devices’ ability to help us find & be our better selves. Could something like Google’s One Today charitable app offer bite-sized daily info, sharing the voices of people in need & asking you to kick in a couple of bucks to their aid? I’m not sure—but I like the possibilities that are opening up.