Monthly Archives: June 2019

Inside Apple’s charming new “Bounce” commercial

I think you’ll enjoy this:

AdAge writes,

The team shot outdoor scenes in Kiev, Ukraine, before recreating the entire town on a set inside the country’s largest airplane hangar. The “ground,” however, was built six feet off the floor, to allow space for trampolines built into the sidewalks. […]

For a scene where he falls sideways beside a woman on a bench, two practical shots were merged into a single one. The actor bounces off a specially-crafted surface, and the camera was turned 90 degrees to film the woman, who was strapped into a bench built into a wall. The entire production was shot in just 12 days, a feat that required 200 artists and technicians.

[YouTube]

Capturing your every strand of hair in 3D

Back in the day I wrote about how “Male-pattern baldness -> Great Photoshop feature”:

Jeff’s mane is a little thin on top, and Gregg is more follicularly challenged. So, when Jeff returned from vacation to Taiwan, he was rather unhappy to find that Quick Selection was selecting only his head, missing the wispy bits of hair on top. As he proclaimed while making a quick whiteboard self-portrait, “I need to keep all the hair I’ve got!”

Smash-cut forward 13 years (cripes…), and researchers are developing a way to use multiple cameras to capture one’s hair, then reconstruct it in 3D (!). Check it out:

[YouTube]

Google open-sources PoseNet 2.0 for Web-based body tracking

My teammates Tyler & George have released numerous projects made with their body-tracking library PoseNet, and now v2 has been open-sourced for you to use via TensorFlow.js. You can try it out here.
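Per the tfjs-models PoseNet docs, each detected pose comes back as an overall score plus an array of named keypoints with per-keypoint confidence scores. Here’s a small sketch of consuming that output — the pose object’s shape follows the library’s README, but the filtering helper is my own illustration, not part of the API:

```javascript
// A PoseNet pose is an object with a score and an array of keypoints,
// each shaped like { part, score, position: { x, y } }.
// This hypothetical helper keeps only the keypoints detected above a
// confidence threshold -- the kind of filtering you'd do before drawing
// a skeleton overlay.
function confidentKeypoints(pose, minScore = 0.5) {
  return pose.keypoints
    .filter((kp) => kp.score >= minScore)
    .map((kp) => kp.part);
}

// Toy pose with three keypoints, two of them confident:
const pose = {
  score: 0.9,
  keypoints: [
    { part: "nose", score: 0.98, position: { x: 301, y: 56 } },
    { part: "leftEye", score: 0.95, position: { x: 312, y: 47 } },
    { part: "leftWrist", score: 0.12, position: { x: 250, y: 402 } },
  ],
};

console.log(confidentKeypoints(pose)); // ["nose", "leftEye"]
```

In a real page you’d get the pose from `net.estimateSinglePose(video)` after `posenet.load()`, then feed the filtered keypoints to whatever you’re drawing.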

From last year (post), here’s an example of the kind of fun stuff you can make using it:

[YouTube]

Musicians zoom in & comment on paintings through Google’s Art Zoom

Ever wondered what Feist thinks about Bruegel the Elder? Well wonder no more, my friend! She & other musicians have recorded their thoughts on the details of famous paintings. To wit:

More than 10,000 artworks from 208 partners worldwide have been captured with Art Camera and digitized in ultra-high resolution, from the fluffy fabric from which Vivienne Westwood tailored the Keith Haring “Witches” dress, to the almost photographic View of Delft by Vermeer. You can see these works in intricate detail simply by browsing on the Google Arts & Culture app. Explore Art Zoom online at g.co/ArtZoom, or download our free app for iOS or Android.

[YouTube]

AR: Cloaking device… engaged!

Zach Lieberman has been on a tear lately with realtime body-segmentation experiments (see his whole recent feed), and now he’ll ghost ya for real:

It’s crazy to think that this stuff works in realtime on a telephone, when just 7 years ago here’s how Content-Aware Fill looked when applied to video:

ML developers: Come check out MediaPipe

The glue my team developed to connect & coordinate machine learning, computer vision, and other processes is now available for developers:

The main use case for MediaPipe is rapid prototyping of applied machine learning pipelines with inference models and other reusable components. MediaPipe also facilitates the deployment of machine learning technology into demos and applications on a wide variety of different hardware platforms (e.g., Android, iOS, workstations).

If you’ve tried any of the Google AR examples I’ve posted in the last year+ (Playground, Motion Stills, YouTube Stories or ads, etc.), you’ve already used MediaPipe, and now you can use it to remove some drudgery when creating your own apps.

Here’s a whole site full of examples, documentation, a technical white paper, graph visualizer, and more. If you take it for a spin, let us know how it goes!
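To give a flavor of the “glue”: a MediaPipe graph is declared in protobuf text format as a set of calculator nodes wired together by named packet streams. Here’s roughly what the project’s minimal hello-world graph looks like — reproduced from memory, so treat the details as approximate and check the docs:

```
# Two chained pass-through calculators: packets flow in on "in",
# through an intermediate stream, and out on "out".
input_stream: "in"
output_stream: "out"
node {
  calculator: "PassThroughCalculator"
  input_stream: "in"
  output_stream: "out1"
}
node {
  calculator: "PassThroughCalculator"
  input_stream: "out1"
  output_stream: "out"
}
```

Real graphs swap in calculators for inference, image transforms, and so on, and the visualizer on the site renders these files as node diagrams.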

Check out Fresco, Adobe’s new tablet drawing app

People have been trying to combine the power of vector & raster drawing/editing for decades. (Anybody else remember Creature House Expression, published by Fractal & then acquired by Microsoft? Congrats on also being old! 🙃) It’s a tough line to walk, and the forthcoming Adobe Fresco app is far from Adobe’s first bite at the apple (I remember you, Fireworks).

Back in 2010, I transitioned off of Photoshop proper & laid out a plan by which different mobile apps/modules (painting, drawing, photo library) would come together to populate a shared, object-centric canvas. Rather than build the monolithic (and now forgotten) Photoshop Touch that we eventually shipped, I’d advocated for letting Adobe Ideas form the drawing module, Lightroom Mobile form the library, and a new Photoshop-derived painting/bitmap editor form the imaging module. We could do the whole thing on a new imaging stack optimized around mobile GPUs.

Obviously that went about as well as conceptually related ’90s-era attempts at OpenDoc et al.—not because it’s hard to combine disparate code modules (though it is!), but because it’s really hard to herd cats across teams, and I am not Steve Fucking Jobs.

Sadly, I’ve learned, org charts do matter, insofar as they represent alignment of incentives & rewards—or lack thereof. “If you want to walk fast, walk alone; if you want to walk far, walk together.” And everyone prefers “innovate” vs. “integrate,” and then for bonus points they can stay busy for years paying down the resulting technical debt. “…Profit!”

But who knows—maybe this time crossing the streams will work. Or, see you again in 5-10 years the next time I write this post. 😌

[YouTube]

Introducing AR makeup on YouTube

I’m so pleased to be able to talk about the augmented reality try-on feature we’ve integrated with YouTube, leveraging the face-tracking ML tech we recently made available for iOS & Android:

Today, we’re introducing AR Beauty Try-On, which lets viewers virtually try on makeup while following along with YouTube creators to get tips, product reviews, and more. Thanks to machine learning and AR technology, it offers realistic, virtual product samples that work on a full range of skin tones. Currently in alpha, AR Beauty Try-On is available through FameBit by YouTube, Google’s in-house branded content platform.

M·A·C Cosmetics is the first brand to partner with FameBit to launch an AR Beauty Try-On campaign. Using this new format, brands like M·A·C will be able to tap into YouTube’s vibrant creator community, deploy influencer campaigns to YouTube’s 2 billion monthly active users, and measure their results in real time.

As I noted the other day with AR in Google Lens, big things have small beginnings. Stay tuned!

Google makes… a collaborative game level builder?

Hey, I’m as surprised as you probably are. 🙃 And yet here we are:

What if creating games could be as easy and fun as playing them? What if you could enter a virtual world with your friends and build a game together in real time? Our team within Area 120, Google’s workshop for experimental projects, took on this challenge. Our prototype is called Game Builder, and it is free on Steam for PC and Mac.

I’m looking forward to taking it for a spin!

Predicting someone’s gestures from just their words

Whoa—what an eerie, funky thing to undertake: using a computer to anticipate a specific person’s idiosyncratic (yet predictable!) hand gestures based just on a recording of their speech:

Now they just need to automatically add accordions.

As for the training data:

Our speakers come from a diverse set of backgrounds: television show hosts, university lecturers and televangelists. They span at least three religions and discuss a large range of topics from commentary on current affairs through the philosophy of death, chemistry and the history of rock music, to readings in the Bible and the Qur’an.

As always, “This is the strangest life I’ve ever known…”

[YouTube]

3D scanner app promises to make asset creation suck less

Earlier this week I was messing around with Apple’s new Reality Composer tool, thinking about fun Lego-themed interactive scenes I could whip up for the kids. After 10+ fruitless minutes of trying to get off-the-shelf models into USDZ format, however, I punted—at least for the time being. Getting good building blocks into one’s scene can still be a pain.

This new 3D scanner app promises to make the digitization process much easier. I haven’t gotten to try it, but I’d love to take it for a spin:

[YouTube]

No more sync between Google Photos & Drive

It was powerful but confusing. The team writes,

Starting in July, new photos and videos from Drive won’t automatically show in Photos. Similarly, new photos and videos in Photos will not be added to the Photos folder in Drive. Photos and videos you delete in Drive will not be removed from Photos. Similarly, items you delete in Photos will not be removed from Drive. This change is designed to help prevent accidental deletion of items across products.

See also more detailed info as needed.

FWIW I bailed on this integration a long while back. Instead I now import images from my SLR & Insta360 to my Mac; edit/convert the selects to JPEG via Lightroom Classic & the Insta app; then drag the JPEGs into photos.google.com in a browser window (so they’re grouped with my phone pics/vids); and finally back up the originals to an external HD. It’s not exactly elegant, but it’s simple enough and it works.

Speaking of AI putting words in people’s mouths…

We’ll see whether Facebook reverses course & deletes this deepfake now that it involves Mark Zuckerberg:

Adobe & co. enable video manipulation through typing

There’s no way this can end badly—no way.

The Verge writes,

When someone edits a text transcript of the video, the software combines all this collected data — the phonemes, visemes, and 3D face model — to construct new footage that matches the text input. This is then pasted onto the source video to create the final result.

In tests in which the fake videos were shown to a group of 138 volunteers, some 60 percent of participants thought the edits were real. That may sound quite low, but only 80 percent of that same group thought the original, unedited footage was also legitimate.

See previous: “Audio Photoshopping” using Adobe VoCo:

[YouTube 1 & 2]

Give ’em a hand: Google Nest Hub Max recognizes hand movements & faces

For the last couple of years I’ve randomly observed colleagues waving at Frankenstein-looking circuit boards & displays. As with many odd things at Google, I’ve channeled Bob Dylan—“Don’t Think Twice, It’s All Right”—and moved right along.

Now some of the fruits of that labor are coming to light, as the new Nest Hub Max has been announced, complete with the ability to recognize gestures (e.g. to stop music or an alarm) and faces (to show you personalized info like your calendar). Dieter Bohn offers a nice overview here:

[YouTube]

Metal detectorists use Google Earth to find buried treasure

Ooh—I’ll have to show this story to my coin- and detector-loving 9yo son Henry.

Although Peter has been doing this for decades, his success rate of uncovering historic finds has grown since the launch of Google Earth, which helps him research farmlands to search and saves him from relying on outdated aerial photography. In December 2014, Peter noticed a square mark in a field with Google Earth. It was in this area where the Weekend Wanderers discovered the £1.5 million cache of Saxon coins. The coins are now on display in the Buckinghamshire County Museum, by order of the queen’s decree.

[YouTube]