This audio browser app has a clever idea, though I wonder if it'd benefit from the kind of rendering a Google AI experiment uses to let researchers visualize thousands of bird sounds:
The primary innovation in Sononym is something called “similarity search”, which enables users to find similar-sounding samples in their sample collection based on any source sound. Essentially, a bit like how Google’s reverse image search works, but with audio.
The initial release focuses strictly on the core functionality of the software. That is, to offer similarity search that works with large collections of samples. Technically, our approach is a combination of feature extraction, machine learning, and modern web technologies.
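To make that concrete, here's a minimal sketch of how such a pipeline could work, assuming MFCC features extracted with librosa and a nearest-neighbor index from scikit-learn. This is not Sononym's actual implementation, and the file names are placeholders:

```python
# Hypothetical sketch of audio similarity search, not Sononym's actual code.
# Assumes librosa, numpy, and scikit-learn are installed; paths are examples.
import numpy as np
import librosa
from sklearn.neighbors import NearestNeighbors

def feature_vector(path):
    """Summarize a sample as its mean MFCC vector (one crude 'fingerprint')."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Index the collection once...
paths = ["kick_01.wav", "snare_03.wav", "hat_12.wav"]  # example files
index = NearestNeighbors(n_neighbors=3).fit(
    np.stack([feature_vector(p) for p in paths]))

# ...then any source sound becomes a query, reverse-image-search style.
dist, idx = index.kneighbors(feature_vector("query.wav").reshape(1, -1))
for d, i in zip(dist[0], idx[0]):
    print(f"{paths[i]}  distance={d:.2f}")
```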
Not entirely dissimilar: Font Map helps you see relationships across more than 750 web fonts.
[YouTube]
The main difference is that Sononym lets the user define what similarity “means”, which makes the results very dynamic. Have you tried it out?
It occurs to me that the t-SNE algorithm used to generate the Google experiments is excellent for static overview maps (creating order out of chaos), but not suitable for such dynamic data. At least, not the way I understand it.
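To illustrate why: t-SNE bakes one notion of distance into a single fixed 2-D layout, so if the user re-weights what “similar” means, the whole map has to be recomputed. A hedged sketch with scikit-learn (the feature matrix here is a random stand-in, not real audio features):

```python
# Hypothetical sketch: a static t-SNE overview map of a sample collection.
# Feature vectors are random stand-ins, not real audio features.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 13))  # e.g. 100 samples x 13 MFCC means

# t-SNE fixes one distance measure into a 2-D layout.  Re-weighting what
# "similar" means forces a full recompute, which is why it suits static
# overview maps rather than dynamic, user-tunable queries.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(coords.shape)  # (100, 2): one (x, y) point per sample
```

A nearest-neighbor query, by contrast, can apply a new distance weighting at query time without rebuilding the map, which (as I understand it) is closer to what a dynamic similarity search needs.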
Interesting, thanks. I haven’t yet tried Sononym.