Monthly Archives: February 2019

3-wheelin’ EV feeling

“Oh, that’s easy,” said my wife on our first date, answering my question about what kind of car she’d be: “I’d be one of those little three-wheeled French jobs like Audrey Hepburn drove in Funny Face.” Ever since then we’ve had a thing for three-wheelers, putting one on our save-the-date wedding card.

It’s hard to imagine the electric Nobe car really hitting the highways, but I love the look of it:

[YouTube] [Via]

Googlers win VFX Oscars

Congrats to Paul Debevec, Xueming Yu, Wan-Chun Alex Ma, and their former colleague Timothy Hawkins for the recognition of their groundbreaking Light Stage work!

[YouTube]

Now they’re working with my extended team:

“We try to bring our knowledge and background to try to make better Google products,” Ma says. “We’re working on improving the realism of VR and AR experiences.”

I go full SNL Sue thinking about what might be possible.

Oh, and they worked on Ready Player One (nominated for Best Visual Effects this year) and won for Blade Runner 2049 last year:

Just prior to heading to Google, they worked on “Blade Runner 2049,” which took home the Oscar for Best Visual Effects last year and brought back the character Rachael from the original “Blade Runner” movie. The new Rachael was constructed with facial features from the original actress, Sean Young, and another actress, Loren Peta, to make the character appear to be the same age she was in the first film.

Check out their work in action:

[YouTube 1 & 2]

A Google UX opening to work on cool streaming tech

The listing for Interaction Designer, Enterprise Design in Munich might sound a touch dry at first, but it’s related to a very interesting project:

We’ve been working on Project Stream, a technical test to solve some of the biggest challenges of streaming. For this test, we’re going to push the limits with one of the most demanding applications for streaming—a blockbuster video game.

We’ve partnered with one of the most innovative and successful video game publishers, Ubisoft, to stream their soon-to-be released Assassin’s Creed Odyssey® to your Chrome browser on a laptop or desktop. Starting on October 5, a limited number of participants will get to play the latest in this best-selling franchise at no charge for the duration of the Project Stream test.

“Would You Like To Know More?” Check out the listing to see if it’s a fit for you.

A peek at Oppo’s new 10x optical zoom for phones

Looks pretty nifty, though it’s interesting that it doesn’t (at least currently) work for capturing video or macro shots.

The Verge explains,

The key component to Oppo’s system is a periscope setup inside the phone: light comes in through one lens, gets reflected by a mirror into an array of additional lenses, and then arrives at the image sensor, which sits perpendicular to the body of the phone. That’s responsible for the telephoto lens in Oppo’s array, which has a 35mm equivalence of 160mm. Between that lens, a regular wide-angle lens, and a superwide-angle that’s 16mm-equivalent, you get the full 10x range that Oppo promises.
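
The “10x” headline number is just the ratio of the longest to the shortest equivalent focal length, not magnification relative to a normal lens. A quick sketch using the figures from the quote above:

```typescript
const superwideMm = 16; // superwide-angle lens (35mm-equivalent)
const teleMm = 160;     // periscope telephoto lens (35mm-equivalent)

// Zoom range = longest focal length / shortest focal length.
console.log(`${teleMm / superwideMm}x zoom range`); // "10x zoom range"
```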

[YouTube]

ZOMG: A life-sized Lego Westy

Ladies & gentlemen, we are approaching Peak JNack…

Using 400,000 LEGO® bricks, two experienced LEGO® model makers have built what is probably the world’s biggest camper from LEGO® bricks. The full-size T2 was revealed at the f.re.e leisure and travel fair in Munich. Visitors young and old to f.re.e (20 – 24 February) will be able to admire the 700 kg Bulli up close. The vehicle that served as the blueprint for the model was the T2a camper van, built from 1967 to 1971 – to this day the truly iconic camper for globetrotters.

See more photos & details here.

[Via]

Photography: Beautiful orbital shots of “The World Below”

Bruce Berry (not Neil Young’s late roadie) created some beautiful time-lapse footage from images captured aboard the International Space Station:

On Vimeo he writes,

All footage has been edited, color graded, denoised, deflickered, stabilized by myself. Some of the 4K video clips were shot at 24 frames/sec, reflecting the actual speed of the space station over the earth. Shots taken at wider angles were sped up a bit to match the flow of the video.

Some interesting facts about the ISS: The ISS maintains an orbit above the earth with an altitude of between 330 and 435 km (205 and 270 miles). The ISS completes 15.54 orbits per day around the earth and travels at a speed of 27,600 km/h (17,100 mph).

The yellow line that you see over the earth is Airglow/Nightglow. Airglow/Nightglow is a layer of nighttime light emissions caused by chemical reactions high in Earth’s atmosphere. A variety of reactions involving oxygen, sodium, ozone, and nitrogen result in the production of a very faint amount of light (Keck A and Miller S et al. 2013).
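
Those two figures are consistent with each other, by the way. A back-of-the-envelope check (with a rounded Earth radius and a mid-range altitude assumed by me):

```typescript
// Rough sanity check of the quoted ISS numbers; constants are rounded.
const earthRadiusKm = 6371;
const altitudeKm = 400;  // roughly mid-range of the quoted 330–435 km
const speedKmH = 27600;  // quoted orbital speed

const orbitKm = 2 * Math.PI * (earthRadiusKm + altitudeKm); // ≈ 42,540 km
const orbitHours = orbitKm / speedKmH;                      // ≈ 1.54 h (~92 min)
console.log((24 / orbitHours).toFixed(2));                  // ≈ 15.57 orbits/day, close to the quoted 15.54
```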

I love the choice of music & wondered whether it comes from Dunkirk. Close: that somewhat anxious tock-tock undertone is indeed a Hans Zimmer jam, but from 20 years earlier (The Thin Red Line).

[YouTube]

Machine learning in your browser tracks your sweet bod

A number of our partner teams have been working on both the foundation for browser-based ML & on cool models that can run there efficiently:

We are excited to announce the release of BodyPix, an open-source machine learning model which allows for person and body-part segmentation in the browser with TensorFlow.js. With default settings, it estimates and renders person and body-part segmentation at 25 fps on a 2018 15-inch MacBook Pro, and 21 fps on an iPhone X. […]

This might all make more sense if you try a live demo here.
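
If you’d rather poke at it in code, the basic flow is only a few lines. A minimal sketch against the BodyPix 1.x API described in the announcement (later releases renamed some of these calls, so check the current docs):

```typescript
import * as bodyPix from '@tensorflow-models/body-pix';

async function maskPerson(video: HTMLVideoElement, canvas: HTMLCanvasElement) {
  // Download and initialize the pretrained model.
  const net = await bodyPix.load();

  // Estimate a per-pixel "person vs. background" mask for this frame.
  const segmentation = await net.estimatePersonSegmentation(video);

  // Convert the mask to ImageData and composite it over the video frame.
  const mask = bodyPix.toMaskImageData(segmentation);
  bodyPix.drawMask(canvas, video, mask, 0.7 /* mask opacity */);
}
```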

Check out this post for more details.

David Salesin joins Google Research

“Man, that dude looks eerily like David Salesin,” I thought the other day as I was getting coffee, “but nah, he’s wearing a new-employee badge. But wait, holy crap… that dude is David Salesin wearing an employee badge!”

Perhaps you don’t know his name, but for 11+ years David (a tango-dancing Aikido black belt) led a wing of Adobe Research, and we collaborated on more projects than I can begin to count. Now he’s at the Goog (having led Snapchat research in the interim), teaming back up with several of our fellow Adobe alums. I can’t wait to see what he does here!

Last Monday I began work as a Principal Scientist / Director at Google AI Perception, based in San Francisco. I’m excited to collaborate with so many good friends and colleagues who are already at Google, and, in time, to hire many more. Google’s products and reach are incredibly broad, and so is the mandate for my lab: I look forward to continuing to invent tools for creative expression, as well as to begin working on some brand-new, far-reaching challenges potentially well outside my area of expertise, like applying AI to healthcare. In my new role, I’m energized to grow in new ways, working on projects that, in Larry Page’s words, are “uncomfortably exciting”!

Eye-popping racing drone photography

Holy crap! Now my stuff looks positively lethargic ¯\_(ツ)_/¯, but what the heck, strap in & enjoy:

DIY Photography writes,

Johnny Schaer (Johnny FPV) is a pro drone racer. His drones are designed to be light, quick, nimble, fly upside down and through all kinds of crazy flightpaths that DJI’s drones could never achieve. And when somebody with the skill of Johnny turns on the camera, that’s when you get results like the video above.

To shoot the footage, Johnny used a drone built around the AstroX X5 Freestyle frame (JohnnyFPV edition, obviously) with a GoPro Hero 7. It has no GPS, no gimbal, no stabilisation, no collision avoidance, none of those safety features that make more commercial drones predictable and easy to fly.

[YouTube]

It’s Friday: Let’s melt some faces!

I’m so pleased to say that my team’s face-tracking tech (which you may have seen powering AR effects in YouTube Stories and elsewhere) is now available for developers to build upon:

ARCore’s new Augmented Faces API (available on the front-facing camera) offers a high quality, 468-point 3D mesh that lets users attach fun effects to their faces. From animated masks, glasses, and virtual hats to skin retouching, the mesh provides coordinates and region specific anchors that make it possible to add these delightful effects.
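
ARCore’s actual Augmented Faces API is Java/Kotlin (plus Unity), but the core idea is easy to sketch. Here’s a hypothetical TypeScript rendition, with FaceMesh and its members invented purely for illustration:

```typescript
// Hypothetical types for illustration only; the real API lives in
// com.google.ar.core and exposes region poses like NOSE_TIP.
type Vec3 = [number, number, number];

interface FaceMesh {
  vertices: Vec3[]; // the 468 tracked 3D points
  regionAnchor(region: 'NOSE_TIP' | 'FOREHEAD_LEFT' | 'FOREHEAD_RIGHT'): Vec3;
}

// Attach a prop at a stable, semantically named anchor instead of guessing
// at raw vertex indices.
function placeHat(mesh: FaceMesh): Vec3 {
  const [x, y, z] = mesh.regionAnchor('FOREHEAD_LEFT');
  return [x, y + 0.05, z]; // nudge upward (meters) so the hat clears the brow
}
```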

“Why do you keep looking at King Midas’s wife?” my son Finn asked as I was making this GIF the other day. :-p

Check out details & grab the SDKs:

We can’t wait to see what folks build with this tech, and we’ll share more details soon!

Electricity-generating kites? Google’s “Makani” moonshot takes off

“Days of miracles & wonder,” part 9,277:

We’re working to advance the global adoption of renewable energy by creating kites that efficiently harness energy from the wind. After more than a decade developing our energy kite technology on land, I’m thrilled to share that we’re now partnering with Shell to bring Makani to offshore environments. As we take this next step towards commercialization, we’ll also be moving on from the Moonshot Factory, our home for the last five years, to become an independent business within Alphabet.

Watch it soar:

[YouTube]

Adobe’s “Enhance Details” promises higher res, fewer artifacts

“Enhance!” The latest changes in Camera Raw & Lightroom promise to improve the foundational step in raw processing:

The composite red, green, and blue value of every pixel in a digital photo is created through a process called demosaicing.

Enhance Details uses an extensively trained convolutional neural net (CNN) to optimize for maximum image quality. We trained a neural network to demosaic raw images using problematic examples […] As a result, Enhance Details will deliver stunning results including higher resolution and more accurate rendering of edges and details, with fewer artifacts like false colors and moiré patterns. […]

We calculate that Enhance Details can give you up to 30% higher resolution on both Bayer and X-Trans raw files using Siemens Star resolution charts.

Hmm—I’m having a hard time wrapping my head around the resolution claim, at least based on the results shown (which depict an appreciable but not earth-shattering change). Having said that, I haven’t put the tech to the test, but I look forward to doing so.
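
If “demosaicing” is fuzzy, here’s the gist: behind a Bayer filter each sensor pixel records only one of R/G/B, and the missing values get interpolated from neighbors. A toy bilinear sketch for the green channel, i.e. the classical baseline that CNN approaches like this one aim to beat (my illustration, nothing to do with Adobe’s actual code):

```typescript
// `raw` is a single-channel RGGB Bayer mosaic, row-major, w*h samples.
// Green sits on the odd-parity sites of an RGGB layout: R G / G B.
function bilinearGreen(raw: Float32Array, w: number, h: number): Float32Array {
  const green = new Float32Array(w * h);
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      const i = y * w + x;
      green[i] =
        (x + y) % 2 === 1
          ? raw[i] // green was measured at this site
          : (raw[i - 1] + raw[i + 1] + raw[i - w] + raw[i + w]) / 4; // average the 4 green neighbors
    }
  }
  return green;
}
```

Naive averaging like this is exactly what produces the false colors and moiré the post mentions; the CNN replaces the interpolation, not the sensor data.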

For more info check out the related help doc plus some deep nerdery on how it all works.

AR walking nav is starting to arrive in Google Maps

I’m really pleased to see that augmented reality navigation has gone into testing with Google Maps users:

On the Google AI Blog, the team gives some insights into the cool tech at work:

We’re experimenting with a way to solve this problem using a technique we call global localization, which combines Visual Positioning Service (VPS), Street View, and machine learning to more accurately identify position and orientation. […]

VPS determines the location of a device based on imagery rather than GPS signals. VPS first creates a map by taking a series of images which have a known location and analyzing them for key visual features, such as the outline of buildings or bridges, to create a large-scale and fast searchable index of those visual features. To localize the device, VPS compares the features in imagery from the phone to those in the VPS index. However, the accuracy of localization through VPS is greatly affected by the quality of both the imagery and the location associated with it. And that poses another question—where does one find an extensive source of high-quality global imagery?
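
The matching step the team describes is easy to picture in miniature. A toy sketch (brute-force nearest neighbor; real VPS uses learned features and a far more scalable index, and these names are mine):

```typescript
// One map entry: a visual feature descriptor plus the location it was seen from.
type MapFeature = { descriptor: Float32Array; latLng: [number, number] };

function dist(a: Float32Array, b: Float32Array): number {
  let s = 0;
  for (let i = 0; i < a.length; i++) s += (a[i] - b[i]) ** 2;
  return Math.sqrt(s);
}

// For each feature the phone sees, find the closest indexed feature (index
// assumed non-empty); the matches' known locations then constrain the
// phone's position and heading.
function matchFeatures(query: Float32Array[], index: MapFeature[]): MapFeature[] {
  return query.map((q) =>
    index.reduce((best, f) => (dist(q, f.descriptor) < dist(q, best.descriptor) ? f : best))
  );
}
```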

Read on for the full story.

Google has built… Lego-scanning radar?

No, for real. The Verge writes,

What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.

But yes… Legos. See what you can make of this:

[YouTube]

AR: Gambeezy lands on Pixel!

This is America… augmented by Childish Gambino on Pixel:

The Childish Gambino Playmoji pack features unique moves that map to three different songs: “Redbone,” “Summertime Magic,” and “This is America.” Pixel users can start playing with them today using the camera on their Pixel, Pixel XL, Pixel 2, Pixel 2 XL, Pixel 3 and Pixel 3 XL.

And with some help from my team:

He even reacts to your facial expressions in real time thanks to machine learning—try smiling or frowning in selfie mode and see how he responds.

Enjoy!

The amazing Google Live Transcribe is here

Having watched my teammate Dimitri use Live Transcribe in meetings for the past year, I’m super excited to see it arrive:

[It’s] a free Android service that makes conversations more accessible through real-time captioning, supporting over 70 languages and more than 80% of the world’s population.

Here’s a deeper look into how it works.

Paul Thurrott writes,

Given my experience with my deaf son, who uses cochlear implants, lip-reading, and sign language to communicate with others, I can tell you that these apps—unlike certain misguided Microsoft accessibility efforts, like Cortana screeching during Windows Setup—address real-world problems that impact many, many people. And that they are, thus, both well-intentioned and truly useful. Bravo, Google.

[YouTube]

New microdrones can lift 40x their body weight

As we learned from velociraptors, things tend to go awesome once creatures learn how to open doors. The Verge writes,

The key to the design is the use of interchangeable adhesives on the drone’s base: microspines for digging into rough materials like stucco, carpet, or rubble, and ridged silicone (inspired by the morphology of gecko feet) for grabbing onto glass. Both microspines and silicone ridges only cling to surfaces in one direction, meaning they can be easily detached. With these in place, the micro-drones can pull well above their 100-gram weight, exerting 40 newtons of force or enough to lift four kilograms (about eight pounds).
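
The numbers hang together, by the way. A quick check of the claim (standard gravity assumed):

```typescript
const forceN = 40;   // quoted pulling force while anchored
const droneKg = 0.1; // quoted 100-gram drone
const g = 9.81;      // m/s²

const liftableKg = forceN / g;                  // ≈ 4.1 kg, matching "four kilograms"
console.log((liftableKg / droneKg).toFixed(0)); // ≈ 41x body weight — the headline 40x
```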

Google+ going away has no impact on your Google Photos library

Just to confirm, as various friends have been asking:

Note that photos and videos backed up in Google Photos will not be deleted.

So after Google+ shuts down April 2, everything you see at photos.google.com will be safe & sound. As for stuff not in your Photos library:

  • If you’ve shared an image on G+, that shared copy will be deleted. You can see all of those images on the Album Archive page.
  • You can download your content before the April shutdown.

Hope this helps. If anything is unclear, please let me know!

Grid-swarming nanocopters, coming soon to a dream/nightmare near you

I, for one, welcome our new cubic overlords:

We present GridDrones, a self-levitating programmable matter platform that can be used for representing 2.5D voxel grid relief maps capable of rendering unsupported structures and 3D transformations. GridDrones consists of cube-shaped nanocopters that can be placed in a volumetric 1xnxn mid-air grid, which is demonstrated here with 15 voxels. The number of voxels and scale is only limited by the size of the room and budget. Grid deformations can be applied interactively to this voxel lattice by manually selecting a set of voxels, then assigning a continuous topological relationship between voxel sets that determines how voxels move in relation to each other and manually drawing out selected voxels from the lattice structure. Using this simple technique, it is possible to create unsupported structures that can be translated and oriented freely in 3D. Shape transformations can also be recorded to allow for simple physical shape morphing animations. This work extends previous work on selection and editing techniques for 3D user interfaces.
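
The “continuous topological relationship” bit is the interesting part: pull one voxel and its neighbors follow with a falloff. A toy sketch of that deformation idea (names and falloff invented by me; the paper’s actual formulation may differ):

```typescript
type Voxel = { x: number; y: number; z: number };

// Raise the picked voxel by dz and let nearby voxels follow with a smooth
// cosine falloff: full displacement at the picked voxel, zero at `radius`.
function deform(grid: Voxel[], picked: Voxel, dz: number, radius: number): void {
  for (const v of grid) {
    const d = Math.hypot(v.x - picked.x, v.y - picked.y);
    if (d < radius) {
      v.z += dz * 0.5 * (1 + Math.cos((Math.PI * d) / radius));
    }
  }
}
```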

[YouTube] [Via]