Monthly Archives: September 2017

BMW plans wireless charging for cars

Where phones have led, cars will follow—first becoming malleable, software-centric platforms (e.g. Tesla rolling out Autopilot, improving cars’ acceleration, etc.), then integrating voice assistants, and now adopting contactless charging. Sure, poking a plug into my car takes all of 15 seconds, but I’ll admit to being a touch jealous:

When approaching the pad, the 530e’s sensors and internal screen will navigate the driver to the necessary point above the charging pad. Once in position, the pad’s integrated coil, alongside a secondary coil found inside the car, generate an alternating magnetic field that will charge the car’s 9.4kWh battery in 3.5 hours with 3.2kW of power. The entire process can even be monitored via an app.
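For fun, the quoted numbers roughly check out: 9.4 kWh delivered at 3.2 kW would take about 2.9 hours with zero losses, so the 3.5-hour figure implies transfer efficiency in the mid-80s percent. A minimal sketch of that back-of-envelope arithmetic (the efficiency figure is my inference, not BMW’s):

```python
# Back-of-envelope check of the quoted charging figures.
BATTERY_KWH = 9.4   # 530e battery capacity, per the quote
CHARGE_KW = 3.2     # inductive pad power, per the quote
QUOTED_HOURS = 3.5  # charge time claimed in the quote

ideal_hours = BATTERY_KWH / CHARGE_KW          # lossless charge time
implied_efficiency = ideal_hours / QUOTED_HOURS  # inferred, not an official spec

print(f"ideal: {ideal_hours:.2f} h, implied efficiency: {implied_efficiency:.0%}")
# → ideal: 2.94 h, implied efficiency: 84%
```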



“I hate Google Photos”—but in a good way!

Heh—nice to see concept movies (assembled from thematically related pics/vids) really resonating with this Reddit user:

You made a grown ass man cry like a baby by automatically making a video titled “They grow up so fast”.. which has about 45 clips of videos with my daughter in it.. aged around 4-5 months to 22 months (current).

I have watched that 3 minute long video 3 times so far.. first time while I cried like a baby.. next two times with my jaw dropped due to the technology that made this possible.

I got one of these myself on Saturday, and now my mom & wife can’t stop watching our Henry Seamus grow from cooing blob to fun-sized weirdo. Cue gratuitous showing!



Canon’s “Free Viewpoint” virtual camera system looks bananas

Remember when the yellow first-down line was the height of game-time badassery? Details here are scant, but this looks like a trip:

PetaPixel writes,

The system would require a number of high-resolution cameras mounted in various places around the stadium. Each camera is connected to a network and controlled by software. Afterward, the video viewpoints are fed into an image processing engine that turns it into high-resolution 3D spatial data.



VFX history: Preserving the last working Scanimate system

“For about a decade, from 1975 to 1985,” Vice writes, “if you witnessed moving animation on television, it was either shot one frame at a time, or made using a Scanimate machine. Only ten of the devices were ever built.”

Here they drop in on engineer Dave Sieg, who has spent the last 20 years preserving the only working Scanimate. Dave discusses the technical and cultural impact of the Scanimate and what the future holds for this iconic machine.


[YouTube] [Via Margot]

Crazy Train: A wild drone flight into & under a moving train

Tommy played piano like a kid out in the rain
then he lost his leg in Dallas he was dancin’ with a train

Man, I thought that my flying a drone off the back of a boat on the Mississippi was risky—but that seems laughably sane compared to Paul Nurkkala flying his drone onto, next to, inside, and under a moving freight train.

If you can’t take the queasy-making camera moves, jump to 3:20 to go underneath & 3:30 to go inside:

PetaPixel writes,

Nurkkala specializes in flying camera drones through a first-person point-of-view using a live feed through goggles. His custom-assembled drone was equipped with a GoPro HERO5 Session action camera, which is light enough to keep the craft fast and nimble.

“I recognize that this isn’t the most ‘flowy’ video or anything, but all of the things were all in the same flight, so I wanted to show that off,” Nurkkala writes.




My brand new gig: Augmenting reality in Google Research

I’m stupid-excited to say that I’ve just joined Google’s Skynet Machine Perception team to build kickass creative, expressive experiences, delivering augmented reality to (let’s hope) a billion+ people. I told you sh*t just got real. 🙂

Now, the following career bits may be of interest only to me (and possibly my mom), but in case you’re wondering, “Wait, don’t you work on Google Photos…?”

Well, like SNL’s Stefon, “I’ve had a weird couple of years…” 


The greatly smoothed version goes basically like this:

  • I joined Google in early 2014 to work on Photos. I liked to say I was “Teaching Google Photoshop,” meaning getting computers to see & synthesize like humans (making your Assistant your artist!). Among other things, we created a brand-new image editor, did some early AR face-painting work (a year+ ahead of Snapchat et al), and made movies for tens of millions of people.
  • After a bit over a year, I wanted to explore some crazier photo- and video-related ideas (stuff not ready for Photos to include then, if ever), so I left the team & walked across the hall to work with & learn from Luke Wroblewski. Thus I was “working at Google on photos, just not Photos.” This was a subtle distinction, and as I was working on secret stuff, I didn’t spend time publicizing it. I remained closely involved with the ex-Nik Photos folks in building out Snapseed & the next rev of the new editor we’d started.
  • Meanwhile I spent the better part of the next year thinking up, prototyping, and iterating on a bunch of little photo apps. It was a tough but enlightening process. I knew we were on to something, but I also felt like Edison saying some variant of “I have not failed. I’ve just found 10,000 ways not to make a light bulb.”
  • Somewhat tired from the process & eager to make concrete contributions, I was set to join an imaging hardware team. When project plans changed, however, I agreed to help improve photography experiences on social apps including Google+.
  • Having witnessed on Photos the massive importance of speed, I teamed up with my future teammates in Research to build out the RAISR machine-learning library and ship it in Google+, saving users immense amounts of bandwidth (critical in the developing world).
  • Since then, and up until this week, I’ve been focusing on enterprise social needs. Though it wasn’t an area I sought out, I ended up really digging the experience, and I look forward to eventually sharing some of the rad stuff my team was building.
  • And then, Google bought this little company in Belarus & my old Research friends came calling…

So now we’ve come full circle, and to capture my feelings, I’ll cite SNL yet again. Wish me luck. 🙂 

Snapseed gets refreshed with presets, reorganized tools, and perspective

More power & speed for the millions of people who use Snapseed every day:

We’re excited to announce that Snapseed 2.18 has started rolling out today to users on Android and iOS. This update includes a fresh new UI, designed for faster editing with more efficient access to your favorite features.

You’ll find Looks are now available from the main screen, making it easier than ever to apply your customized filters to your photos. Looks are a powerful way to save your favorite combinations of edits and apply them to multiple images. We’ve added 11 beautiful new presets (handcrafted by the Snapseed team) to help you get started – give them a try!

We’re also bringing the Perspective tool to iOS to allow you to easily adjust skewed lines and perfect the geometry of horizons or buildings.



Explore the world’s photos in Google Earth


Starting today, you’re invited to explore a global map of crowdsourced photos in Google Earth.

To get started, open the Google Earth app on Android and iOS, or go to Google Earth in your Chrome browser on desktop. Open the main menu and turn on the Photos toggle. As you explore the world and zoom in, relevant photos from each location will appear. Click on any thumbnail to see a full-screen version of the photo, and then flip through related photos.





Make a 3D model of your face just by uploading a photo

Creeptastic! But quick, cool, and impressive: Visit The University of Nottingham’s demo site, and check out the project site for more details. As The Verge writes,

“3D face reconstruction is a fundamental computer vision problem of extraordinary difficulty.” You usually need multiple pictures of the same face from different angles in order to map every contour. But, by feeding a bunch of photographs and corresponding 3D models into a neural network, the researchers were able to teach an AI system how to quickly extrapolate the shape of a face from a single photo.


[Via Alex Kauffmann]

Francis Ford Coppola’s wisdom for product managers

“Arbiter of Focus”—that’s how David Lieb, former CEO of Bump and now product lead for Google Photos, describes a PM’s job. Elsewhere I’ve heard, “What game are we playing, and how do we keep score?” In a similar vein, I found resonance in these remarks from Francis Ford Coppola:

Q. What is the one thing to keep in mind when making a film?

A. When you make a movie, always try to discover what the theme of the movie is in one or two words. Every time I made a film, I always knew what I thought the theme was, the core, in one word. In “The Godfather,” it was succession. In “The Conversation,” it was privacy. In “Apocalypse,” it was morality. The reason it’s important to have this is because most of the time what a director really does is make decisions. All day long: Do you want it to be long hair or short hair? Do you want a dress or pants? Do you want a beard or no beard? There are many times when you don’t know the answer. Knowing what the theme is always helps you.

Here’s the rest of the interview.


Google Earth VR adds Street View

Looks awesome:

This update lets you explore Street View imagery from 85 countries right within Earth VR. Just fly down closer to street level, check your controller to see if Street View is available and enter an immersive 360° photo. You’ll find photos from the Street View team and those shared by people all around the world.



“Inside Music”: A WebVR sonic explorer from Google & Song Exploder

Check out this multi-track song explorer from a cool podcast (see previous) & Google’s WebVR team:

What if you could step inside a song? Inside Music is a simple experiment that explores that idea. It features the music of Phoenix, Natalia Lafourcade, Perfume Genius, Alarm Will Sound, Clipping, and Ibeyi. 

If you’re a musician, you can explore your own songs in VR or put them up on the web for others to explore.


[YouTube 1 and 2] [Via]

iPhone’s new “Portrait Lighting” looks compelling

I’m eager to try this out: 

When framing a subject, you’ll have a number of different lighting options to choose from for giving your portrait different looks — things like Contour Light, Natural Light, Studio Light, Stage Light, and Stage Light Mono.

These “aren’t filters,” Apple says. Instead, the phone is actually studying your subject’s face and calculating the look based on light that’s actually in the scene using machine learning.

Check out PetaPixel or Apple’s site for larger sample images. 


Giphy World: Social remixing of AR collages

A few years back I was really intrigued by Mixel, a social collage app that enabled easy creation & mixing of scenes. It didn’t take off, but I thought the underlying concept was strong, and now Giphy is taking a run at something similar:

Allows you to place gifs in 3D space, share videos of them or even share the whole 3D scene in AR with friends who have the app. They can then add, remix and re-share new instances of the scene. As many people as you want can collaborate on the space.

You drop GIFs into the world in the exact position you want them. A curated and trending mix of gifs that have transparency built into them is the default, but you can also flip it over to place any old Gif on the platform.

Interesting; let’s see what happens!



Sonic sculptures in space

Augmented reality beauty from Zach Lieberman:

Elsewhere there’s the ethereal Presence 3.1:

Presence is a circular four screen video installation. The screens show moving abstract forms against a black background, the motion of these forms reveal a human presence. These life-size abstract forms have been created by motion captured performances of dancers Julia Eichten and Nathan Makolandra from Benjamin Millepied’s LA Dance Project. 


Design: A Game of Thrones pop-up book

Given the show’s iconic titles, Game of Thrones: A Pop-Up Guide to Westeros (Amazon) seems kind of inevitable, no? Still pretty neat:

It features a total of five stunning spreads, which fold out to create a remarkable pop-up map of Westeros that is perfect for displaying. The book also contains numerous mini-pops that bring to life iconic elements of the show, such as direwolves, White Walkers, giants, and dragons.



Photography: Viewing the eclipse from space

Scientists at UW Madison observed the eclipse through the eye of one of the world’s most advanced weather satellites, GOES-16. The eclipse images from the satellite were taken at a rate of one every five minutes, then stitched together:


Elsewhere, Liem Bahneman loaded four cameras (three stills, including a Ricoh Theta 360, plus a GoPro) onto a high-altitude balloon and shot what the total solar eclipse looks like from the edge of space. PetaPixel writes,

The 9-minute video above is what one camera recorded over Central Oregon. […] He launched the balloon shortly before totality passed over the state. As you’ll see in the video, the cameras were able to capture the shadow of the moon creeping across the land and plunging everything into darkness for minutes during totality. At around 5 and 7 minutes, you can hear the sounds of jets flying over the mountains below.