Monthly Archives: August 2018

Sononym: Finding sound by similarity

This audio browser app has a clever idea, though I wonder whether it'd benefit from the kind of visualization a Google project uses to let researchers explore thousands of bird sounds via AI:

The primary innovation in Sononym is something called “similarity search,” which enables users to find similar-sounding samples in their sample collection based on any source sound. Essentially, it’s a bit like how Google’s reverse image search works, but with audio.

The initial release focuses strictly on the core functionality of the software. That is, to offer similarity search that works with large collections of samples. Technically, our approach is a combination of feature extraction, machine learning, and modern web technologies.
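
For the curious, here’s a minimal sketch of how that sort of pipeline can fit together: boil each sample down to a fixed-length feature vector, then rank the library by distance to the query. The MFCC fingerprint and cosine similarity below are illustrative stand-ins; Sononym hasn’t published its actual features or model.

```python
# A minimal sketch of audio similarity search over a sample library.
# Assumptions: MFCC statistics as the feature vector and cosine similarity
# as the ranking metric; Sononym's real feature set and model aren't public.
import numpy as np
import librosa

def fingerprint(path, sr=22050):
    """Reduce a sample to a fixed-length vector of MFCC means and std devs."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def most_similar(query_path, library_paths, k=5):
    """Rank library samples by cosine similarity to the query sound."""
    q = fingerprint(query_path)
    scored = []
    for p in library_paths:
        v = fingerprint(p)
        cos = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
        scored.append((cos, p))
    return sorted(scored, reverse=True)[:k]
```

Point most_similar() at a kick drum and it should surface the kicks in your library before the hi-hats. A real system would precompute fingerprints and query an index rather than scanning linearly, but the shape of the idea is the same.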

Not entirely dissimilar: Font Map helps you see relationships across more than 750 web fonts.

[YouTube]

Absolute witchcraft: AI synthesizes dance moves, entire street scenes

This 💩 is 🍌🍌🍌, B-A-N-A-N-A-S: this video-to-video synthesis tech can apparently take in one dance performance & apply it to a recording of another person, making her match the moves:

It can even semantically replace entire sections of a scene—e.g. backgrounds in a street scene: 

Now please excuse me while I lie down for a bit, as my brain is broken.

[YouTube 1 & 2] [Via Tyler Zhu]

Google builds a cloud-based animation studio, creates on the fly

Google showcased its cloud-rendering & collaboration chops by deploying a cloud-based animation studio, enabling a creative team to design & render this short over three days:

To demonstrate what’s possible, we built an animated short over the course of three days.

To do it, we invited some like-minded artists who share our vision to set up a live cloud-based animation studio on the second floor of Moscone Center. These artists worked throughout the three days of the show to model, animate, and render the spot, and deliver a finished short. […]

We used Zync Render, a Renderfarm-as-a-Service running on GCP that can be deployed in minutes, and works with major 3D applications and renderers. The final piece was rendered in V-Ray for Maya.

Zync is able to deploy up to 500 render workers per project, up to a total of 48,000 vCPUs.
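
Quick back-of-envelope on those limits (the per-worker figure below is my own arithmetic, not something Google states):

```python
# Derived from Google's quoted limits; the per-worker vCPU count is inferred.
max_workers = 500
max_vcpus = 48_000
print(max_vcpus // max_workers)  # -> 96 vCPUs per render worker at full tilt
```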

Pretty dope—though in my heart, these dabbing robots won’t ever compete with my then-5yo son Finn as a dancing robot:

[YouTube 1 & 2]

Photography: The Moon meets “Clair de Lune”

NASA, using a digital 3D model of the Moon built from Lunar Reconnaissance Orbiter global elevation maps and image mosaics, produced this lovely tour of our nearby neighbor. The lighting is derived from actual Sun angles during lunar days in 2018.

The filmmakers write,

The visuals were composed like a nature documentary, with clean cuts and a mostly stationary virtual camera. The viewer follows the Sun throughout a lunar day, seeing sunrises and then sunsets over prominent features on the Moon. The sprawling ray system surrounding Copernicus crater, for example, is revealed beneath receding shadows at sunrise and later slips back into darkness as night encroaches.

[YouTube] [Via]

Animation: Fire In Cardboard City

Gonna be a hot time in the (deeply poorly conceived) virtual town tonight:

Some cool making-of details:

The results look so realistic that they could almost be stop-motion. “I built a big virtual set, I guess, is how you could describe it,” he said. “The characters are like stop-motion marionettes in a way; they have joints to the arms and the knees and all of that, and controllers.” He then used a low-budget motion-capture process — a D.I.Y. version of Hollywood’s green screens and Ping-Pong-ball suits — using the Xbox Kinect and special software. “It sees you doing the motions you want the character to do, and then you can transfer that to the animation so you can transfer that onto your characters,” he said. He considered having a giant robot attack Cardboard City, and then settled on fire: that looked kind of cool, too.

[YouTube] [Via]

Google brings an AI game to WeChat

Fun stuff from the Shanghai office:

In order to give everyone the opportunity to experience just how natural AI-powered interactions can now be, we’re launching 猜画小歌 (“Guess My Sketch”) from Google AI, a fun, social WeChat Mini Program in which players team up with our AI to sketch everyday items in a race against the clock. In each round, players sketch the given word (like “dog”, “clock”, or “shoe”) for their AI teammate to guess correctly before time runs out.

When the AI successfully guesses your sketch, you’ll move on to the next round and increase your sketching streak. You can invite friends and family to compete for the longest streak, share interesting sketches with each other, and collect new words and drawings as you continue playing.
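
The round-and-streak structure is simple enough to sketch. Here’s a toy version of that loop; everything in it is hypothetical, from the word list to ai_guess(), which stands in for Google’s real recognizer (a sketch classifier in the Quick, Draw! lineage):

```python
# A toy version of the round/streak mechanics described above.
# ai_guess() is a hypothetical stand-in for the real sketch recognizer,
# which classifies your pen strokes as you draw.
import random
import time

WORDS = ["dog", "clock", "shoe"]
ROUND_SECONDS = 20

def ai_guess(strokes):
    """Stand-in recognizer; the real model classifies the stroke sequence."""
    return random.choice(WORDS)

def play(max_rounds=5):
    streak = 0
    for _ in range(max_rounds):
        word = random.choice(WORDS)
        print(f"Sketch this: {word}")
        strokes = []  # would accumulate pen strokes from the canvas
        deadline = time.time() + ROUND_SECONDS
        while time.time() < deadline:
            time.sleep(0.5)  # the AI guesses continuously while you draw
            if ai_guess(strokes) == word:
                streak += 1
                print(f"The AI guessed it! Streak: {streak}")
                break
        else:  # no correct guess before the clock ran out: game over
            print(f"Time's up. Final streak: {streak}")
            return streak
    return streak
```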

The Consolation of Repetition

“So we beat on, boats against the current, borne back ceaselessly into the past…”

Sitting in my parents’ house, surrounded by my dad’s old college books and mine, I’m struck by a certain melancholy: a mix of memory, gratitude, and loss. As it happens, Margot just told me about Insta Repeat, a feed that catalogs the repetitiousness of Instagram photography. It makes me think of “vemödalen” (“the frustration of photographing something amazing when thousands of identical photos already exist”), and a quick search of this blog shows I can’t even claim originality in citing the term:

Ah well—so it goes. Until next time…

[YouTube]

SnotBot FTWhales! 🚁🐳

Hah—brilliant: SnotBot is a hexacopter drone covered in petri dishes that collects whale exhalations for science:

Popular Science explains:

Whale snot, it turns out, is packed with DNA, viruses, hormones, and microbes—all incredibly useful things to a variety of scientists.

Typically, marine biologists employ the same techniques that failed Kerr: A motorboat equipped with long sticks and modified crossbows to collect whale biopsies. But Kerr hopes these flying research robots will soon change that.

[YouTube 1 & 2]

Illustrator’s forthcoming Diffusion Gradients look dope

Wanna feel old? Illustrator’s gradient meshes debuted 20 f’ing years ago, and how hard they are to use well is attested by the age of most images people have made with them. Now, though, it seems Adobe’s putting a more accessible interface atop similar-looking tech:

Traditional linear or radial gradients can limit your flexibility, while gradient meshes can have a steep learning curve. The new gradient feature lets you create rich blends of colors that seemingly diffuse naturally into each other.
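
If you’re wondering what “seemingly diffuse naturally into each other” might mean in practice, here’s a crude approximation: drop a few color points onto a canvas and blend them with inverse-distance weighting. To be clear, this is my guess at the general flavor of the effect, not Adobe’s actual algorithm.

```python
# A crude approximation of freeform color points blending across a canvas,
# using inverse-distance weighting; a guess at the flavor of the effect,
# not Adobe's actual algorithm.
import numpy as np
from PIL import Image

# Hypothetical color stops: normalized (x, y) positions plus RGB colors.
points = np.array([[0.2, 0.3], [0.8, 0.2], [0.5, 0.9]])
colors = np.array([[255, 80, 40], [40, 120, 255], [255, 220, 60]], dtype=float)

W = H = 256
ys, xs = np.mgrid[0:H, 0:W]
coords = np.stack([xs / W, ys / H], axis=-1)                    # (H, W, 2)
dists = np.linalg.norm(coords[..., None, :] - points, axis=-1)  # (H, W, 3)
weights = 1.0 / (dists ** 2 + 1e-6)        # nearer stops dominate
weights /= weights.sum(axis=-1, keepdims=True)
img = (weights @ colors).astype(np.uint8)  # smooth per-pixel color blend
Image.fromarray(img).save("diffusion_gradient.png")
```

Even this toy version produces that soft, airbrushed blending; presumably the real feature is doing something smarter around edges and point density.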

Check it out (as I hope we’ll all be able to do hands-on this fall):

[YouTube]