Monthly Archives: May 2018

“Faces of Frida,” a huge interactive exhibition from Google

I’d never previously seen many of the pieces exhibited in this new gallery:

“Faces of Frida” brings Frida Kahlo’s most iconic artwork together for the first time in the largest digital retrospective of her work, life, and legacy. Discover several of her pieces that have never been viewable online, as well as personal photographs, letters, journals, clothes, and early sketches of some of Kahlo’s finest work, which were hidden from the world on the back of finished paintings.


[YouTube] [Via]

Demo: Generating realistic 3D faces & skin from ordinary photos

10+ years ago, I really hoped we’d get Photoshop to understand a human face as a 3D structure that one could relight, re-pose, etc. We never got there, sadly. Last year we gave Snapseed the ability to change the orientation of a face (see GIF)—another small step in the right direction. Progress marches forward, and now USC prof. Hao Li & team have demonstrated a method for generating models with realistic skin from just ordinary input images. It’ll be fun to see where this leads (e.g. see previous).



“Deep Video Portraits” can put words in your mouth

Part of me says, “What great new tools for expressive video editing!”

The other part says, “This will not end well…”

[W]e are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor… [W]e can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing.


[YouTube] [Via Jeremy Cowles]

The joyously absurd Rube Goldberg machines of Joseph Herscher

I’m getting way too big a kick out of the work of kinetic artist & toymaker Joseph Herscher:

Khoi Vinh writes, in an inventory worthy of Stefon (“this place has everything…”),

Herscher’s pièce de résistance may be “The Cake Server,” shown above: a gorgeous monstrosity that brings together melting butter, a glass of juice that pours its contents into itself, a baby using a smartphone and much more to serve a slice of upside-down cake to a plate in its God-intended manner of delivery. It’s a marvel to behold.



[YouTube 1 & 2]

Red hearts, white stars (but no purple horseshoes) come to Google Photos

Been a long time in coming, but we’re getting there at last:

The Favorite (star) button will only appear on photos in your own library, allowing you to mark an individual item as a favorite, which, in turn, will automatically populate a new photo album with just your favorite photos. […]

Meanwhile, the heart icon is Google Photos’ version of the “like.” This will appear only on those photos that have been shared with you from your family and friends.


Photography: A Timelapse Tornado

Perhaps my earliest memory (circa age 4) is of watching a giant tornado bounce across the plains of southern Wisconsin, blazing through arcing power lines & bounding over a farmhouse as my mom debated whether to force me & my grandparents out of the car to shelter in a ditch. “It looks like a big ice cream cone!” I said.

Photographer Mike Olbinski succeeded in capturing a half-mile-wide cone of his own in this striking clip:



“You are not a storyteller”

Heh—having wrapped up my Adobe career working on an unsuccessful “storytelling!” tool (complete with its own storyteller 🙄), I had to laugh as I winced at this one. Stefan Sagmeister craps on “the mantle of bullshit” adopted by people trying to embellish their work with some stolen valor. (Bonus & unremarked irony: This beatdown was apparently sponsored by “Crafted Stories: Brand Storytelling.” Puzzle on that one.)



Google’s new Sketch plug-in helps you pair harmonious colors & fonts

Old Man Nack would’ve killed for this back in his designer days:

As Design Taxi writes,

“Material Theming” effectively fixes a core gripe of the original “Material Design”: that virtually every Android app looks the “same,” as if made by Google, which isn’t ideal for brands.

The tool is currently available on Sketch, and you can use it by downloading the “Material” plugin on the app. Google aims to expand the system regularly, and will roll out new options such as animations, depth controls, and textures, next.



Eerily current: NYC 1911

I’m oddly intrigued by the immediacy of this 107-year-old archival footage showing New York City. As Khoi Vinh explains,

The footage has been altered in two subtle but powerful ways: the normally heightened playback speed of film from this era has been slowed down to a more “natural” pace; and the addition of a soundtrack of ambient city sounds, subtly timed with the action on screen.



Build your own home AR with an Ikea desk lamp & a laser (!)

The open-source Lantern project promises to transform any surface into AR using Raspberry Pi, a laser projector, and Android Things:

Rather than insisting that every object in our home and office be ‘smart’, Lantern imagines a future where projections are used to present ambient information, and relevant UI within everyday objects. Point it at a clock to show your appointments, or point it at a speaker to display the currently playing song. Unlike a screen, when Lantern’s projections are no longer needed, they simply fade away.


[YouTube] [Via]

VR: Google introduces Tour Creator for students

Man, I’m really eager to see what the Micronaxx can do with this:

Tour Creator […] enables students, teachers, and anyone with a story to tell, to make a VR tour using imagery from Google Street View or their own 360 photos. The tool is designed to let you produce professional-level VR content without a steep learning curve. […]

Once you’ve created your tour, you can publish it to Poly, Google’s library of 3D content. From Poly, it’s easy to view. All you need to do is open the link in your browser or view in Google Cardboard.



Check out Color Pop in Google Photos

In past posts I’ve talked about how our team has enabled realtime segmentation of videos, and yesterday I mentioned body-pose estimation running in a Web browser. Now that tech stack is surfacing in Google Photos, powering the new effect shown below and demoed by Sundar super briefly here.

Starting today, you may see a new photo creation that plays with pops of color. In these creations, we use AI to detect the subject of your photo and leave them in color–including their clothing and whatever they’re holding–while the background is set to black and white. You’ll see these AI-powered creations in the Assistant tab of Google Photos.

Thoughts? If you could “teach Google Photos,” what else would you have it create for you?


Demo: Realtime pose estimation in a browser

My teammates George & Tyler have been collaborating with creative technologist Dan Oved to enable realtime human pose estimation in Web browsers via the open-source TensorFlow.js (the same tech behind the aforementioned Emoji Scavenger Hunt). You can try it out here and read about the implementation details over on Medium.

Ok, and why is this exciting to begin with? Pose estimation has many uses, from interactive installations that react to the body to augmented reality, animation, fitness uses, and more. […]

With PoseNet running on TensorFlow.js anyone with a decent webcam-equipped desktop or phone can experience this technology right from within a web browser. And since we’ve open sourced the model, JavaScript developers can tinker and use this technology with just a few lines of code. What’s more, this can actually help preserve user privacy. Since PoseNet on TensorFlow.js runs in the browser, no pose data ever leaves a user’s computer.
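For a sense of what “a few lines of code” can look like, here’s a minimal sketch of consuming a PoseNet-style result. The pose data below is made up for illustration; the `{part, score, position}` keypoint shape follows PoseNet’s documented output format:

```javascript
// Hypothetical PoseNet-style output: a pose is an overall score plus an
// array of named keypoints, each with its own confidence score and an
// (x, y) position in image coordinates.
const examplePose = {
  score: 0.92,
  keypoints: [
    { part: "nose",       score: 0.99, position: { x: 301, y: 142 } },
    { part: "leftEye",    score: 0.98, position: { x: 312, y: 130 } },
    { part: "rightWrist", score: 0.31, position: { x: 410, y: 388 } },
  ],
};

// Keep only the keypoints the model is reasonably confident about before
// drawing them or driving an interaction from them.
function confidentKeypoints(pose, minScore = 0.5) {
  return pose.keypoints.filter((kp) => kp.score >= minScore);
}

console.log(confidentKeypoints(examplePose).map((kp) => kp.part));
// logs: [ 'nose', 'leftEye' ]
```

Since everything runs client-side, a simple filter like this is all that stands between the model’s output and a canvas overlay — and, as the quote notes, no pose data ever leaves the browser.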


[Via Luca Prasso]

Use AI to hunt down wild emoji

Heh—let’s see what you & your phone can see together in Google’s Emoji Scavenger Hunt:

Introducing Emoji Scavenger Hunt 🕵️‍♀️, powered by TensorFlow.js—TensorFlow’s open-source framework for machine learning with JavaScript. It works like this: the game will show you an emoji, and you have to find its real world version before time expires. While you search, the neural network will try and guess what it’s seeing—proof that machine learning can be used for more than serious applications. Sometimes, you’re just on the hunt for a 🔑, and machine learning can help.



QooCam promises “World’s First interchangeable 4K 360° & 3D Camera”

I’m honestly not sure what to make of this wacky-looking new device, but it’s weird/interesting enough to share. I can pretty definitely say that no one wants to refocus photos/video after the fact (RIP, Lytro—and have you ever done this with an iPhone portrait image, or even known that you can?), but simply gathering depth data in 180° is interesting, as (maybe) is 360° timelapse. Check it out:


Students: $5/year (!) for Photoshop & all of Creative Cloud

“Adobe subscriptions massively lower the barrier to entry,” I wrote back in 2012:

Yesterday, if you didn’t own Photoshop, the cost of getting started was $700.
Today it’s $20*.

Yesterday if you didn’t own the Master Collection, the cost was $2,600.
Today it’s $50, or if you own a CS3 or later app, just $30 (!).

Yesterday if you wanted to reach tablets via Adobe’s Digital Publishing Solution, the cost was $400 per publication.
Soon it’ll be free, for unlimited publications, once you subscribe to Creative Cloud.

This is a very big deal.

Now, here’s an even bigger deal:

Adobe will offer K-12 schools its full suite of Creative Cloud software for $5 per student per year, starting May 15, it said Thursday. That’s a radical discount compared […] earlier education pricing of $240 per year […] $360 after the first year.

Kids can use it on home computers when they sign in, Adobe said.

Amazingly bold. I love it.


Facebook demos conversion of 2D photos into 3D worlds

The demo is sparse on details, but it looks potentially very cool:

Engadget writes,

Scattered throughout the place — which seems to be a recreation of a real Oakland home — were cut-out squares floating in the air. When I hovered over them with a cursor, I saw thumbnails of photos and videos, all of which were supposedly taken in the room that I was in. When I clicked on the thumbnails, I teleported over to them so that I could see the photos and videos up close. One was a photo of a family, while another was a short video clip of a young couple getting ready for prom.


[YouTube] [Via]

Preserving cultural heritage via AI & drones

“I don’t always praise Microsoft, but when I do, it’s for rad imaging + culture efforts…”

Check out these efforts, done in collaboration with the globetrotting preservationists at Iconem:

Team members took 50,000 photos of Palmyra with drones that allowed them to avoid landmines. In western Syria, they took 150,000 photos of Crac des Chevaliers, one of the world’s most famous Crusader castles now damaged by fighting, as part of a project for UNESCO. And they surveyed Aleppo’s Old City, the devastated historic quarter known for its 13th century citadel, ancient mosque and vibrant souk, or bazaar – all now in ruins.




Your watch can now control your bike helmet

Having biked every day through Times Square, I’m pretty sure I burned through roughly 8.5 of my 9 lives. Now being an old breadwinner, I’m trying to keep my noggin intact while keeping my arteries at least moderately pliable, so the Lumos Helmet ($180) seems pretty dope:

The Verge writes,

After installing Lumos’ Apple Watch app, the Watch will record how its wearer makes their left and right turn gestures. When they make them in the future while the app is running, it’ll activate the corresponding signal on the helmet. The Watch will vibrate to remind wearers that it’s still blinking, and they’ll have to shake their hand to turn it off. The helmet is supposed to automatically detect when you’re braking, so there doesn’t appear to be a gesture for that.