Happy Friday, weirdos. 🙂
This is all possible because of something called FakeApp, software that uses deep-learning algorithms to scan someone’s face and graft it onto a face in any given video.
This thing is pretty cool! Cornell researchers worked with Googlers to use machine learning to fingerprint the songs of various birds, then lay them out in an interactive visualization:
Built by Kyle McDonald, Manny Tan, Yotam Mann, and friends at Google Creative Lab. Thanks to Cornell Lab of Ornithology for their support. The Essential Set for North America sounds are provided by the Macaulay Library. The open-source code is available here. Check out more at A.I. Experiments.
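The general idea behind a visualization like this can be sketched in a few lines: turn each audio clip into a fixed-length "fingerprint" (here, a time-averaged magnitude spectrum), then project those fingerprints into 2-D so similar-sounding clips land near each other. This is only a rough sketch of the technique, not the project's actual pipeline; the real experiment uses more sophisticated features and a nonlinear embedding, and the PCA projection below is a hypothetical stand-in for that step.

```python
import numpy as np

def fingerprint(audio, n_fft=1024, hop=512):
    """Average magnitude spectrogram -> one fixed-length vector per clip."""
    frames = [audio[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(audio) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))  # (num_frames, freq_bins)
    return spec.mean(axis=0)                    # time-averaged spectrum

def layout_2d(fingerprints):
    """Project fingerprints to 2-D via PCA (a stand-in for t-SNE-style embeddings)."""
    X = fingerprints - fingerprints.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

# Synthetic "bird calls": pure tones at different pitches, plus a little noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 22050)
clips = [np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)
         for f in (500, 1000, 2000, 4000)]
points = layout_2d(np.stack([fingerprint(c) for c in clips]))
print(points.shape)  # one (x, y) point per clip
```

Each clip ends up as a point on a plane, which is the raw material for an interactive scatterplot like the one in the experiment.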