Zozo Suit Riot: This startup wants you to put on a stretchy, instrumented shirt & then scan your body in order to get custom-tailored clothes:
Would you do it? Will others? ¯\_(ツ)_/¯ I guess we’ll find out.
Sorry if I’m laying it on a bit thick with the drone bits lately, but I find these focused, bite-sized tutorials to be a great way to learn:
Here’s the gist of the vid above:
Also, here’s a great tour of using ActiveTrack to follow one’s vehicle (including orbiting it in a circle as it drives—amazing!).
Starting today, service members can search ‘jobs for veterans‘ on Google and then enter their specific military job codes (MOS, AFSC, NEC, etc.) to see relevant civilian jobs that require similar skills to those used in their military roles. We’re also making this capability available to any employer or job board to use on their own property through our Cloud Talent Solution.
The primary innovation in Sononym is something called “similarity search”, which enables users to find similar-sounding samples in their sample collection based on any source sound. Essentially, a bit like how Google’s reverse image search works, but with audio.
The initial release focuses strictly on the core functionality of the software: offering similarity search that works with large collections of samples. Technically, our approach is a combination of feature extraction, machine learning, and modern web technologies.
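The basic recipe — extract a compact feature vector per sound, then rank the collection by similarity to a query — can be sketched in a few lines. This is purely illustrative and assumes nothing about Sononym’s actual features or models; the band-energy fingerprint and cosine ranking here are crude stand-ins:

```python
import numpy as np

def fingerprint(samples, n_bands=16):
    """Toy spectral fingerprint (NOT Sononym's method): mean FFT magnitude
    in log-spaced frequency bands, normalized to unit length."""
    mag = np.abs(np.fft.rfft(samples))
    edges = np.logspace(0, np.log10(len(mag)), n_bands + 1).astype(int)
    feat = np.array([mag[edges[i]:max(edges[i] + 1, edges[i + 1])].mean()
                     for i in range(n_bands)])
    norm = np.linalg.norm(feat)
    return feat / norm if norm else feat

def most_similar(query, library):
    """Return the library key whose fingerprint has the highest
    cosine similarity to the query sound."""
    qf = fingerprint(query)
    return max(library, key=lambda k: fingerprint(library[k]) @ qf)

# Toy "sample collection": a low sine, a high sine, and white noise (1 s @ 8 kHz).
t = np.linspace(0, 1, 8000, endpoint=False)
lib = {
    "low_sine":  np.sin(2 * np.pi * 110 * t),
    "high_sine": np.sin(2 * np.pi * 1760 * t),
    "noise":     np.random.default_rng(0).normal(size=8000),
}
query = np.sin(2 * np.pi * 115 * t)  # slightly detuned low sine
print(most_similar(query, lib))  # → low_sine
```

A real system would use perceptual features (MFCCs and the like) plus an index for fast nearest-neighbor lookup over large collections, but the shape of the problem is the same.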
Not entirely dissimilar: Font Map helps you see relationships across more than 750 web fonts.
This 💩 is 🍌🍌🍌, B-A-N-A-N-A-S: This Video-to-Video Synthesis tech apparently can take in one dance performance & apply it to a recording of another person to make her match the moves:
It can even semantically replace entire sections of a scene—e.g. backgrounds in a street scene:
Now please excuse me while I lie down for a bit, as my brain is broken.
Google showcased its cloud-rendering & collaboration chops by deploying a cloud-based animation studio, enabling a creative team to design & render this short over three days:
To demonstrate what’s possible, we built an animated short over the course of three days.
To do it, we invited some like-minded artists who share our vision to set up a live cloud-based animation studio on the second floor of Moscone Center. These artists worked throughout the three days of the show to model, animate, and render the spot, and deliver a finished short. […]
We used Zync Render, a Renderfarm-as-a-Service running on GCP that can be deployed in minutes, and works with major 3D applications and renderers. The final piece was rendered in V-Ray for Maya.
Zync is able to deploy up to 500 render workers per project, up to a total of 48,000 vCPUs.
Pretty dope—though in my heart, these dabbing robots won’t ever compete with my then-5yo son Finn as a dancing robot: