Last year Google’s Aseem Agarwala & team showed off ways to synthesize super-creamy slow-motion footage. Citing that work, a team at NVIDIA has managed to improve on the quality, albeit at the cost of longer render times. Check it out:
[T]he team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the convolutional neural network predicted the extra frames.
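To give a feel for the underlying task (frame interpolation), here’s a toy sketch: the trained network predicts motion-aware in-between frames, whereas this naive NumPy version just cross-fades pixel values, which would produce ghosting on real footage. The function and names are mine for illustration, not from NVIDIA’s paper.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_mid):
    """Naive linear cross-fade between two frames.

    A stand-in for what the CNN does far better: the network
    synthesizes motion-compensated intermediates, while this
    simply blends pixel values at evenly spaced timesteps."""
    mids = []
    for i in range(1, n_mid + 1):
        t = i / (n_mid + 1)  # fractional position between the two frames
        mids.append(((1 - t) * frame_a + t * frame_b).astype(frame_a.dtype))
    return mids

# Going from 30 fps to 240 fps needs 7 in-between frames per original pair.
a = np.zeros((4, 4, 3), dtype=np.float32)  # dummy "frame" of black pixels
b = np.ones((4, 4, 3), dtype=np.float32)   # dummy "frame" of white pixels
mids = interpolate_frames(a, b, 7)
print(len(mids))  # 7 intermediate frames
```

Cross-fading is why naive slow motion looks smeary on fast motion; the learned approach avoids that by estimating where pixels move between frames.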
3 thoughts on “Ohhhhhh yeaahhhh: ML produces super slow mo”
Ferris Bueller reference?
If only there were SOME WAY to integrate this into existing video applications…