Category Archives: Design

“Please Feed The Lions”: AI-driven collective poetry writing in Trafalgar Square

Google collaborated with artist Es Devlin to let London passersby contribute to an ever-evolving poem projected on Nelson’s Column in Trafalgar Square.

Cast in 1867, the four monumental lions in Trafalgar Square have been sitting as silent British icons at the base of Nelson’s Column for the past 150 years. Overnight on Monday 17 September, a fifth fluorescent red lion will join the pride. This new lion will roar poetry, and the words it roars will be chosen by the public. Everyone is invited to “feed the lion”, but this lion only eats words.

Go behind the scenes or just check out this 60-second overview:


[YouTube]

Giant artificial flowers react to your emotions

Ah—so this is the backstory on the large installation now populating our lobby.

The flowers are built using Raspberry Pi running Android Things, our Android platform for everyday devices like home speakers, smart screens, and wearables. An “alpha flower” has a camera in it and uses an embedded TensorFlow neural net to classify the emotion it sees, and the surrounding flowers change color based on the image the camera captures of your face. All processing is done locally, so no data is saved or sent to any servers.

Better still, the code has been open-sourced.
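The core loop is easy to picture: a local classifier scores each camera frame per emotion, and the dominant emotion picks the color the surrounding flowers fade to. Here's a minimal sketch of that mapping step; the palette, function names, and score format are illustrative, not taken from the open-sourced project:

```python
# Hypothetical sketch of the "alpha flower" logic: an on-device
# classifier returns per-emotion scores, and the highest-scoring
# emotion selects the color for the surrounding flowers.

# Illustrative emotion-to-color palette (RGB) -- not from the project.
EMOTION_COLORS = {
    "joy": (255, 200, 0),
    "surprise": (255, 255, 255),
    "sadness": (0, 80, 200),
    "anger": (220, 30, 30),
}

def pick_color(scores):
    """Map the dominant emotion in classifier output to an RGB color."""
    emotion = max(scores, key=scores.get)
    return EMOTION_COLORS.get(emotion, (128, 128, 128))  # gray fallback

# Example: a frame the net scored as mostly joyful.
frame_scores = {"joy": 0.72, "surprise": 0.18, "sadness": 0.06, "anger": 0.04}
print(pick_color(frame_scores))  # (255, 200, 0)
```

Because everything runs on-device, only this tiny color decision ever leaves the classifier; no image data needs to go anywhere.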


Lego builds a million-brick, working (!) Bugatti

Sometimes these things almost blog themselves. 🙃

According to Design Taxi,

Brainstorming for the project took flight in June 2017, with construction initiating in March 2018. In total, the team spent “over 13,000 work hours of development and construction.”

The car isn’t made solely of Lego, though: certain parts, including its steel frame, battery pair, 3D-printed gears, and real Bugatti wheels, round out this 1,500kg beast.


[YouTube]

Sononym: Finding sound by similarity

This audio browser app has a clever idea, though I wonder if it’d benefit from the kind of rendering that a Google project uses to let researchers visualize thousands of bird sounds via AI:

The primary innovation in Sononym is something called “similarity search”, which enables users to find similar-sounding samples in their sample collection based on any source sound. Essentially, it’s a bit like how Google’s reverse image search works, but with audio.

The initial release focuses strictly on the core functionality of the software: offering similarity search that works with large collections of samples. Technically, our approach is a combination of feature extraction, machine learning, and modern web technologies.
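The feature-extraction-plus-nearest-neighbor idea can be sketched in a few lines: reduce each clip to a small feature vector, then rank the library by similarity to the query's vector. This toy version uses two crude hand-picked features and cosine similarity; Sononym's actual features and model are not public here, so everything below is an illustrative stand-in:

```python
import math

def features(samples):
    """Crude feature vector for a mono clip: RMS energy and
    zero-crossing rate. A real system would use much richer
    spectral features, likely with a learned embedding."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / n
    return (rms, zcr)

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(query, library):
    """Return the name of the library clip closest to the query."""
    q = features(query)
    return max(library, key=lambda name: cosine(q, features(library[name])))

# Toy example: a buzzy query should match the buzzy "hihat", not the "kick".
library = {
    "hihat": [0.4, -0.4, 0.4, -0.4],  # rapid sign changes
    "kick":  [0.9, 0.8, 0.7, 0.6],    # slow, all-positive decay
}
print(most_similar([0.5, -0.5, 0.5, -0.5], library))  # hihat
```

Swapping the two hand-rolled features for a learned embedding is what turns this toy into the "reverse image search for audio" described above; the ranking machinery stays the same.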

Not entirely dissimilar: Font Map helps you see relationships across more than 750 web fonts.


[YouTube]

Google builds a cloud-based animation studio, creates on the fly

Google showcased its cloud-rendering & collaboration chops by deploying a cloud-based animation studio, enabling a creative team to design & render this short over three days:

To demonstrate what’s possible, we built an animated short over the course of three days.

To do it, we invited some like-minded artists who share our vision to set up a live cloud-based animation studio on the second floor of Moscone Center. These artists worked throughout the three days of the show to model, animate, and render the spot, and deliver a finished short. […]

We used Zync Render, a Renderfarm-as-a-Service running on GCP that can be deployed in minutes, and works with major 3D applications and renderers. The final piece was rendered in V-Ray for Maya.

Zync is able to deploy up to 500 render workers per project, up to a total of 48,000 vCPUs.

Pretty dope—though in my heart, these dabbing robots won’t ever compete with my then-5yo son Finn as a dancing robot:


[YouTube 1 & 2]