Zozo Suit Riot: This startup wants you to put on a stretchy, instrumented shirt & then scan your body in order to get custom-tailored clothes:
Would you do it? Will others? ¯\_(ツ)_/¯ I guess we’ll find out.
Sorry if I’m laying it on a bit thick with the drone bits lately, but I find these focused, bite-sized tutorials to be a great way to learn:
Here’s the gist of the vid above:
Also, here’s a great tour of using Active Track to follow one’s vehicle (including orbiting it in a circle as it drives—amazing!).
Starting today, service members can search ‘jobs for veterans‘ on Google and then enter their specific military job codes (MOS, AFSC, NEC, etc.) to see relevant civilian jobs that require similar skills to those used in their military roles. We’re also making this capability available to any employer or job board to use on their own property through our Cloud Talent Solution.
The primary innovation in Sononym is something called “similarity search”, which enables users to find similar-sounding samples in their sample collection based on any source sound. Essentially, it’s a bit like how Google’s reverse image search works, but with audio.
The initial release focuses strictly on the core functionality of the software: offering similarity search that works with large collections of samples. Technically, our approach is a combination of feature extraction, machine learning, and modern web technologies.
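Sononym hasn’t published its actual feature set, but the general recipe (extract a compact acoustic fingerprint per sample, then rank by distance to a query sound) can be sketched in a few lines. Everything below is illustrative, not Sononym’s implementation: a coarse spectral-energy vector stands in for real learned features, and cosine similarity stands in for whatever metric they use.

```python
import numpy as np

def spectral_fingerprint(samples, n_bands=16):
    """Reduce a mono audio buffer to a coarse spectral-energy vector.
    (Illustrative only; Sononym's real feature extraction is not public.)"""
    spectrum = np.abs(np.fft.rfft(samples))            # magnitude spectrum
    bands = np.array_split(spectrum, n_bands)          # group bins into bands
    feats = np.array([band.mean() for band in bands])  # mean energy per band
    norm = np.linalg.norm(feats)
    return feats / norm if norm > 0 else feats

def rank_by_similarity(query, library):
    """Return library sample names sorted by cosine similarity to the query."""
    q = spectral_fingerprint(query)
    scores = {name: float(np.dot(q, spectral_fingerprint(buf)))
              for name, buf in library.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy "sample library": a low sine, a high sine, and noise, at 8 kHz.
t = np.linspace(0, 1, 8000, endpoint=False)
library = {
    "low_sine": np.sin(2 * np.pi * 220 * t),
    "high_sine": np.sin(2 * np.pi * 3000 * t),
    "noise": np.random.default_rng(0).standard_normal(8000),
}
query = np.sin(2 * np.pi * 230 * t)  # close in pitch to "low_sine"
print(rank_by_similarity(query, library))  # "low_sine" should rank first
```

A real system would use perceptually motivated features (and likely a learned embedding) plus an index for fast nearest-neighbor lookup over thousands of samples, but the structure is the same: fingerprint once, compare cheaply.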
Not entirely dissimilar: Font Map helps you see relationships across more than 750 web fonts.
This 💩 is 🍌🍌🍌, B-A-N-A-N-A-S: This Video-to-Video Synthesis tech apparently can take in one dance performance & apply it to a recording of another person to make her match the moves:
It can even semantically replace entire sections of a scene—e.g. backgrounds in a street scene:
Now please excuse me while I lie down for a bit, as my brain is broken.
Google showcased its cloud-rendering & collaboration chops by deploying a cloud-based animation studio, enabling a creative team to design & render this short over three days:
To demonstrate what’s possible, we built an animated short over the course of three days.
To do it, we invited some like-minded artists who share our vision to set up a live cloud-based animation studio on the second floor of Moscone Center. These artists worked throughout the three days of the show to model, animate, and render the spot, and deliver a finished short. […]
We used Zync Render, a Renderfarm-as-a-Service running on GCP that can be deployed in minutes, and works with major 3D applications and renderers. The final piece was rendered in V-Ray for Maya.
Zync is able to deploy up to 500 render workers per project, up to a total of 48,000 vCPUs.
Pretty dope—though in my heart, these dabbing robots won’t ever compete with my then-5yo son Finn as a dancing robot:
NASA, using a digital 3D model of the Moon built from Lunar Reconnaissance Orbiter global elevation maps and image mosaics, produced this lovely tour of our nearby neighbor. The lighting is derived from actual Sun angles during lunar days in 2018.
The filmmakers write,
The visuals were composed like a nature documentary, with clean cuts and a mostly stationary virtual camera. The viewer follows the Sun throughout a lunar day, seeing sunrises and then sunsets over prominent features on the Moon. The sprawling ray system surrounding Copernicus crater, for example, is revealed beneath receding shadows at sunrise and later slips back into darkness as night encroaches.
Gonna be a hot time in the (deeply poorly conceived) virtual town tonight:
Some cool making-of details:
The results look so realistic that they could almost be stop-motion. “I built a big virtual set, I guess, is how you could describe it,” he said. “The characters are like stop-motion marionettes in a way; they have joints to the arms and the knees and all of that, and controllers.” He then used a low-budget motion-capture process — a D.I.Y. version of Hollywood’s green screens and Ping-Pong-ball suits — using the Xbox Kinect and special software. “It sees you doing the motions you want the character to do, and then you can transfer that to the animation so you can transfer that onto your characters,” he said. He considered having a giant robot attack Cardboard City, and then settled on fire: that looked kind of cool, too.
#BobRossIsABoss—weirdly brilliant! Kottke writes,
As a fundraiser for the Leukemia & Lymphoma Society, Micah Sherman and Mark Stetson produced a web series called The Bob Ross Challenge in which 13 comedians attempt to paint along with Bob Ross as he does his thing with the trees and little fluffy clouds. Here’s the first episode, featuring Aparna Nancherla:
My crazy-talented buddy Dave (whose hiring at Adobe is one of the best things for which I can take fragmentary credit) has created an interactive mystery using—and showing off—Adobe Character Animator:
As a special bonus, you can download the rigged puppets from Dave’s site. (Hat tip to AE superfans who grok some of the character names. 😌)
Fun stuff from the Shanghai office:
In order to give everyone the opportunity to experience just how natural AI-powered interactions can now be, we’re launching 猜画小歌 (“Guess My Sketch”) from Google AI, a fun, social WeChat Mini Program in which players team up with our AI to sketch everyday items in a race against the clock. In each round, players sketch the given word (like “dog”, “clock”, or “shoe”) for their AI teammate to guess correctly before time runs out.
When the AI successfully guesses your sketch, you’ll move on to the next round and increase your sketching streak. You can invite friends and family to compete for the longest streak, share interesting sketches with each other, and collect new words and drawings as you continue playing.
Nifty, even if it doesn’t include the actual images produced on-device. More details.
“So we beat on, boats against the current, borne back ceaselessly into the past…”
Sitting in my parents’ house, surrounded by my dad’s old college books and mine, I’m struck by a certain melancholy—a mix of memory, gratitude, and loss. As it happens, Margot just told me about Insta Repeat, a feed that catalogs the repetitiousness of Instagram photography. This makes me think of “vemödalen” (“the frustration of photographing something amazing when thousands of identical photos already exist”)—and just searching this blog for that term shows my current unoriginality in its use:
Ah well—so it goes. Until next time…
Hah—brilliant: SnotBot is a hexacopter drone covered in petri dishes that collects whale exhalations for science:
Popular Science explains:
Whale snot, it turns out, is packed with DNA, viruses, hormones, and microbes—all incredibly useful things to a variety of scientists.
Typically, marine biologists employ the same techniques that failed Kerr: A motorboat equipped with long sticks and modified crossbows to collect whale biopsies. But Kerr hopes these flying research robots will soon change that.
Wanna feel old? Illustrator’s gradient meshes debuted 20 f’ing years ago, and the challenge of using them effectively is attested to by how old most of the images made with them are. Now, though, it seems Adobe’s putting a more accessible interface atop similar-looking tech:
Traditional linear or radial gradients can limit your flexibility, while gradient meshes can have a steep learning curve. The new gradient feature lets you create rich blends of colors that seemingly diffuse naturally into each other.
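Adobe hasn’t detailed the math behind the new feature, but a minimal sketch of one common technique for blending colors between arbitrary points, Shepard (inverse-distance) interpolation, shows how colors can “diffuse naturally” into each other without a rigid mesh. This is emphatically NOT Adobe’s published algorithm; every name and parameter here is illustrative.

```python
import numpy as np

def blend_color_points(points, colors, width, height, power=2.0):
    """Blend several color stops across a canvas using inverse-distance
    (Shepard) weighting. Not Adobe's algorithm; just a common approach
    that yields a similar smooth diffusion between arbitrary points."""
    ys, xs = np.mgrid[0:height, 0:width]
    pix = np.stack([xs, ys], axis=-1).astype(float)    # (H, W, 2) coordinates
    out = np.zeros((height, width, 3))
    weight_sum = np.zeros((height, width))
    for (px, py), rgb in zip(points, colors):
        d = np.linalg.norm(pix - (px, py), axis=-1)    # distance to this stop
        w = 1.0 / (d ** power + 1e-6)                  # nearer stops dominate
        out += w[..., None] * np.asarray(rgb, dtype=float)
        weight_sum += w
    return out / weight_sum[..., None]                 # normalize weights

# Three hypothetical color stops on a 64x64 canvas.
img = blend_color_points(
    points=[(0, 0), (63, 0), (32, 63)],
    colors=[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
    width=64, height=64)
print(img.shape)  # (64, 64, 3); each corner approaches its stop's color
```

Raising `power` tightens each color’s zone of influence; lowering it makes the blend softer, which is roughly the kind of control a friendlier gradient UI can expose without ever showing the user a mesh.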
Check it out (as I hope we’ll all be able to do hands-on this fall):