Category Archives: Miscellaneous

Cool Google Maps news: Mapping pollution, seeing restaurant wait times, and more


  • On Google search (and soon Maps) you can see wait times for nearly a million sit-down restaurants around the world. Search for the restaurant on Google, open the business listing, and scroll down to the Popular Times section. “You can even scroll left and right to see a summary of each day’s wait times below the hour bars–so you can plan ahead to beat the crowds.”
  • Google mapped air quality across California, with Street View cars spending 4,000 hours driving 100,000 miles in SF, LA, and the Central Valley. Check out the preliminary results.
  • Did you know that your timeline on Maps makes it easy to revisit the places you’ve been, filter by activity (e.g. horseback riding), and more?


My new team’s new page: Check out Google Machine Perception

“So, what would you say you… do here?” Well, I get to hang around these folks and try to variously augment your reality:

Research in Machine Perception tackles the hard problems of understanding images, sounds, music and video, as well as providing more powerful tools for image capture, compression, processing, creative expression, and augmented reality.

Our technology powers products across Alphabet, including image understanding in Search and Google Photos, camera enhancements for the Pixel Phone, handwriting interfaces for Android, optical character recognition for Google Drive, video understanding and summarization for YouTube, Google Cloud, Google Photos and Nest, as well as mobile apps including Motion Stills, PhotoScan and Allo.

We actively contribute to the open source and research communities. Our pioneering deep learning advances, such as Inception and Batch Normalization, are available in TensorFlow. Further, we have released several large-scale datasets for machine learning, including: AudioSet (audio event detection); AVA (human action understanding in video); Open Images (image classification and object detection); and YouTube-8M (video labeling).


[Via Peyman Milanfar]

Google Assistant adds more than 50 kids’ games and activities

When they’re not savagely trolling me (“Hey Google, play Justin Bieber!”—then running away), the Micronaxx really enjoy playing the “I’m Feeling Lucky” trivia app with us. Therefore I was charmed to get invited to brainstorm with my Toontastic friends & others from Google’s kid-focused group, coming up with all kinds of ideas for other family-oriented audio apps. Now that work is starting to come to fruition, enabling 50+ new games & activities on Google Home:

Google says the Assistant is now better at recognizing kids’ voices; and, as with adults, it can distinguish between them so that it can customize responses to each person. To do this, kids will need a Family Link account: a Google account for kids under 13 that allows for parental supervision.

Check it out:



Adios, “Content-Aware Fail”? Check out DeepFill

As rad as now-venerable (!) Content-Aware Fill tech is, it’s not semantically aware. That is, it doesn’t pay attention to what objects a region contains (e.g. face, clouds, wood), and so it can produce undesirable results. Here Adobe’s Jiahui Yu shows off a smarter successor, DeepFill:
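To make the distinction concrete, here’s a minimal sketch (my own illustration, not Adobe’s or DeepFill’s actual method) of a classic non-semantic fill: given an image and a binary mask of pixels to replace, it just diffuses neighboring pixel values into the hole. It “knows” nothing about faces, clouds, or wood, which is exactly the limitation a learned model like DeepFill addresses.

```python
import numpy as np

def naive_fill(image, mask, iters=50):
    """Fill masked pixels by repeatedly averaging their 4-neighbors.

    A toy diffusion-style fill: plausible for smooth regions, but with
    no notion of what objects the region contains -- the shortcoming
    that semantically-aware inpainting is meant to fix.
    """
    out = image.astype(float).copy()
    hole = mask.astype(bool)
    out[hole] = out[~hole].mean()  # crude initialization for the hole
    for _ in range(iters):
        # average of up/down/left/right neighbors at every pixel
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[hole] = avg[hole]      # only update pixels inside the hole
    return out

# toy example: a horizontal gradient with a square hole punched in it
img = np.tile(np.linspace(0, 255, 32), (32, 1))
msk = np.zeros((32, 32), bool)
msk[12:20, 12:20] = True
filled = naive_fill(img, msk)
```

On a smooth gradient this works fine; run it across a face or a fence and you get the familiar “Content-Aware Fail” smears, because averaging pixels is no substitute for understanding them.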

Watching the little “heart” portion of the demo, I can only imagine what Russell Brown will do with this tech.

Question, though: If Content-Aware Phil is passé, will we see the rise of Deep Phil, below? (And yes, I could use some quick style-transfer integration in Photoshop to help with a piece like this. Chop chop, Adobeans. :-))


Un-Lost in Space: Google Maps heads to the heavens

To the moon, Alice!—and points beyond:

Now you can visit these places—along with many other planets and moons—in Google Maps right from your computer. For extra fun, try zooming out from the Earth until you’re in space!

Explore the icy plains of Enceladus, where Cassini discovered water beneath the moon’s crust—suggesting signs of life. Peer beneath the thick clouds of Titan to see methane lakes. Inspect the massive crater of Mimas—while it might seem like a sci-fi look-alike, it is a moon, not a space station.

More info.


My brand new gig: Augmenting reality in Google Research

I’m stupid-excited to say that I’ve just joined Google’s Skynet Machine Perception team to build kickass creative, expressive experiences, delivering augmented reality to (let’s hope) a billion+ people. I told you sh*t just got real. 🙂

Now, the following career bits may be of interest only to me (and possibly my mom), but in case you’re wondering, “Wait, don’t you work on Google Photos…?”

Well, like SNL’s Stefon, “I’ve had a weird couple of years…” 


The greatly smoothed version goes basically like this:

  • I joined Google in early 2014 to work on Photos. I liked to say I was “Teaching Google Photoshop,” meaning getting computers to see & synthesize like humans (making your Assistant your artist!). Among other things, we created a brand-new image editor, did some early AR face-painting work (a year+ ahead of Snapchat et al), and made movies for tens of millions of people.
  • After a bit over a year, I wanted to explore some crazier photo- and video-related ideas (stuff not ready for Photos to include then, if ever), so I left the team & walked across the hall to work with & learn from Luke Wroblewski. Thus I was “working at Google on photos, just not Photos.” This was a subtle distinction, and as I was working on secret stuff, I didn’t spend time publicizing it. I remained closely involved with the ex-Nik Photos folks in building out Snapseed & the next rev of the new editor we’d started.
  • Meanwhile I spent the better part of the next year thinking up, prototyping, and iterating on a bunch of little photo apps. It was a tough but enlightening process. I knew we were on to something, but I also felt like Edison saying some variant of “I have not failed. I’ve just found 10,000 ways not to make a light bulb.”
  • Somewhat tired from the process & eager to make concrete contributions, I was set to join an imaging hardware team. When project plans changed, however, I agreed to help improve photography experiences on social apps including Google+.
  • Having witnessed on Photos the massive importance of speed, I teamed up with my future teammates in Research to build out the RAISR machine-learning library and ship it in Google+, saving users immense amounts of bandwidth (critical in the developing world).
  • Since then, and up until this week, I’ve been focusing on enterprise social needs. Though it wasn’t an area I sought out, I ended up really digging the experience, and I look forward to eventually sharing some of the rad stuff my team was building.
  • And then, Google bought this little company in Belarus & my old Research friends came calling…

So now we’ve come full circle, and to capture my feelings, I’ll cite SNL yet again. Wish me luck. 🙂