As I’m on a kick sharing recent work from Ira Kemelmacher-Shlizerman & team, here’s another banger:
Given an “in-the-wild” video, we train a deep network on the video frames to produce an animatable human representation.
This can be rendered from any camera view in any body pose, enabling applications such as motion re-targeting and bullet-time rendering without the need for rigged 3D meshes.
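For intuition on how a learned representation like this gets turned into pixels, here’s a minimal sketch of pose-conditioned volume rendering along a single camera ray. Everything here is an illustrative assumption, not the paper’s actual method: `toy_density_color` is a hypothetical analytic stand-in for the trained network, and `pose_code` is just a placeholder for whatever body-pose conditioning the real model uses.

```python
import numpy as np

def toy_density_color(points, pose_code):
    """Hypothetical stand-in for the learned network: maps 3D points plus a
    body-pose code to (density, RGB). A real model would be a trained neural
    network; here it's a fixed analytic blob whose center follows the pose."""
    center = pose_code[:3]                    # pretend the pose code moves a blob
    d2 = np.sum((points - center) ** 2, axis=-1)
    density = np.exp(-d2 * 4.0)               # denser near the "body"
    color = np.stack([np.full_like(density, c) for c in (0.8, 0.6, 0.5)], axis=-1)
    return density, color

def render_ray(origin, direction, pose_code, n_samples=64, near=0.0, far=4.0):
    """Classic volume rendering along one ray: sample points, query the
    (density, color) field, then alpha-composite front to back."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, color = toy_density_color(points, pose_code)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)                         # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)                  # composited RGB

# Render one ray looking down +z at a "body" placed 2 units away.
rgb = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                 pose_code=np.array([0.0, 0.0, 2.0]))
print(rgb)  # a length-3 RGB value in [0, 1]
```

Changing `pose_code` or the ray’s origin/direction re-renders the same field in a new pose or from a new camera, which is the core trick behind the re-targeting and bullet-time applications, with the network swapped in for the toy blob.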
I look forward (?) to the not-so-distant day when a 3D-extracted Trevor Lawrence hucks a touchdown to Cleatus the Fox Sports Robot. Grand slam!!