Monthly Archives: October 2017

Explore the World Series in VR through Google Expeditions

Check out five new Google Expeditions created in partnership with Major League Baseball:

These include virtual tours of Citi Field in New York and Oriole Park at Camden Yards in Baltimore, both of which are narrated by MLB Network’s Heidi Watney. You can also get behind-the-scenes access with career tours that showcase the lives of a baseball beat reporter and television broadcasters. We’re also bringing you a Statcast tour, so you can geek out Moneyball-style with the math and physics behind the game.


My new team’s new page: Check out Google Machine Perception

“So, what would you say you… do here?” Well, I get to hang around these folks and try to variously augment your reality:

Research in Machine Perception tackles the hard problems of understanding images, sounds, music and video, as well as providing more powerful tools for image capture, compression, processing, creative expression, and augmented reality.

Our technology powers products across Alphabet, including image understanding in Search and Google Photos, camera enhancements for the Pixel Phone, handwriting interfaces for Android, optical character recognition for Google Drive, video understanding and summarization for YouTube, Google Cloud, Google Photos and Nest, as well as mobile apps including Motion Stills, PhotoScan and Allo.

We actively contribute to the open source and research communities. Our pioneering deep learning advances, such as Inception and Batch Normalization, are available in TensorFlow. Further, we have released several large-scale datasets for machine learning, including: AudioSet (audio event detection); AVA (human action understanding in video); Open Images (image classification and object detection); and YouTube-8M (video labeling).


[Via Peyman Milanfar]

New for kids: Google Science Journal

I’m eager to try this out with our lads:

We’ve redesigned Science Journal as a digital science notebook, and it’s available today on Android and iOS.

With this new version of Science Journal, each experiment is a blank page that you can fill with notes and photos as you observe the world around you. Over time, we’ll be adding new note-taking tools… We’ve added three new sensors for you to play with along with the ability to take a “snapshot” of your sensor data at a single moment in time.



High-speed Photoshop evisceration as a music video


The new video for rock band Spoon’s “Do I Have to Talk You Into It” consists of lead singer Britt Daniel being rapidly morphed, deformed, beautified, clone-stamped, liquified, and peeled apart in Photoshop. At one point, Daniel is transformed into a coyote, the Photoshop interface drops away for a split second, and we just see some video of a snarling coyote in the woods. Why not?

Gizmodo goes behind the scenes (and skin!) with director Brook Linder & team.



DxO acquires the Nik Collection from Google

I am delighted to share this news:

DxO plans to continue development of the Nik Collection. The current version will remain available for free on DxO’s dedicated website, while a new “Nik Collection 2018 Edition” is planned for mid-next year.

“The Nik Collection gives photographers tools to create photos they absolutely love,” said Aravind Krishnaswamy, an Engineering Director with Google. “We’re thrilled to have DxO, a company dedicated to high-quality photography solutions, acquire and continue to develop it.”

DxO is already integrating Nik tech into their apps:

The new version of our flagship software DxO OpticsPro, which is available as of now under its new name DxO PhotoLab, is the first embodiment of this thrilling acquisition with built-in U Point technology (video).

Having known them as Photoshop developers, I was always a big fan of the Nik crew & their tech. (In fact, their acquisition by Google was instrumental in making me consider working here.) I wanted to acquire them at Adobe, and I was always afraid that Apple would do so & put U Point into Aperture! ¯\_(ツ)_/¯

The desktop plug-ins, however, were never a great fit for Google’s mobile/cloud photo strategy, and other than Analog Efex, none had been improved since 2011 (more than a year before Google acquired them). I know that Aravind Krishnaswamy (badass photog, Photoshop vet, eng manager for Google Photos) and others went many extra miles to find a good new home for the Nik Collection, and I’m really excited to see what DxO can do with it. On behalf of photographers everywhere, thanks guys!


Google Assistant adds more than 50 kids’ games and activities

When they’re not savagely trolling me (“Hey Google, play Justin Bieber!”—then running away), the Micronaxx really enjoy playing the “I’m Feeling Lucky” trivia app with us. Therefore I was charmed to get invited to brainstorm with my Toontastic friends & others from Google’s kid-focused group, coming up with all kinds of ideas for other family-oriented audio apps. Now that work is starting to come to fruition, enabling 50+ new games & activities on Google Home:

Google says the Assistant is now better at recognizing kids’ voices; and like adults, it’ll be able to distinguish between them so that it can customize responses to each person. To do this, kids will need a Family Link account, which are Google accounts for kids under 13 that allow for parental supervision.

Check it out:



Infographic creation gets smarter with Adobe’s Project Lincoln

Hey kids, remember Bob Dole?

The last time I recall charting features in Adobe Illustrator getting an update, Bob was running for president—in 1996. Later (c. 2000), Illustrator & ImageReady (later Photoshop) added the ability to bind text objects & shapes to variables. That would have been a godsend in my old graphics production life, but the world didn’t seem to take much notice.

Figuring that we were never going to get around to doing something natively in the apps, I proposed enabling HTML or Flash layers right on the canvas of Adobe design apps. That way a single HTML or SWF GUI could run right in Illustrator, Photoshop, InDesign, etc.—and remain alive & dynamic when exported. You could argue that I was on crack, or you could argue that had we gone that way, we’d have had great charting a decade ago—or both.

But may the future bury the past: it looks like Adobe is at last getting serious about delivering great infographic-making tools. Check out this sneak of “Project Lincoln”:



Adios, “Content-Aware Fail”? Check out DeepFill

As rad as now-venerable (!) Content-Aware Fill tech is, it’s not semantically aware. That is, it doesn’t pay attention to what objects a region contains (e.g. face, clouds, wood), and so it can produce undesirable results. Here Adobe’s Jiahui Yu shows off a smarter successor, DeepFill:

Watching the little “heart” portion of the demo, I can only imagine what Russell Brown will do with this tech.

Question, though: If Content-Aware Phil is passé, will we see the rise of Deep Phil, below? (And yes, I could use some quick style-transfer integration in Photoshop to help with a piece like this. Chop chop, Adobeans. :-))


Explore Mars in WebVR with Google

Immerse yourself in a 3D model built from Curiosity rover photos:

Today, we’re putting that same 3D model into an experience for everyone to explore. We call it Access Mars, and it lets you see what the scientists see. Get a real look at Curiosity’s landing site and other mission sites like Pahrump Hills and Murray Buttes. Plus, JPL will continuously update the data so you can see where Curiosity has just been in the past few days or weeks. All along the way, JPL scientist Katie Stack Morgan will be your guide, explaining key points about the rover, the mission, and some of the early findings.



Image science: Inside Portrait mode on the Pixel 2

If TensorFlow, PDAF pixels, and semantic segmentation sound like your kind of jam, check out this deep dive into mobile imaging from Google research lead Marc Levoy. He goes into some detail about how the team behind the new Pixel 2 trains neural networks, detects depth, and synthesizes pleasing, realistic bokeh even with a single-lens device. [Update: There’s a higher-level, less technical version of the post if you’d prefer.]
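To make the core idea concrete, here’s a toy sketch (my illustration, not Levoy’s actual pipeline) of depth-dependent blur: each pixel gets a box blur whose radius grows with its distance from the focal plane, so in-focus subjects stay sharp while the background smears. The real Pixel 2 combines a learned person-segmentation mask with dual-pixel disparity and a far nicer disc-shaped blur; every name and parameter below is assumed for illustration.

```python
import numpy as np

def synthetic_bokeh(image, depth, focus_depth, max_radius=4):
    """Toy depth-dependent blur: pixels far from the focal plane get a
    larger box blur; pixels on the plane stay sharp.
    image, depth: 2D float arrays of equal shape, depth values in [0, 1]."""
    h, w = image.shape
    pad = np.pad(image.astype(float), max_radius, mode='edge')
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            # blur radius grows with distance from the focal plane
            r = int(round(max_radius * min(1.0, abs(depth[y, x] - focus_depth))))
            win = pad[y + max_radius - r: y + max_radius + r + 1,
                      x + max_radius - r: x + max_radius + r + 1]
            out[y, x] = win.mean()
    return out
```

With a flat depth map equal to the focus depth, the image passes through untouched; push the depth away from the focal plane and detail spreads into its neighborhood.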



Un-Lost in Space: Google Maps heads to the heavens

To the moon, Alice!—and points beyond:

Now you can visit these places—along with many other planets and moons—in Google Maps right from your computer. For extra fun, try zooming out from the Earth until you’re in space!

Explore the icy plains of Enceladus, where Cassini discovered water beneath the moon’s crust—suggesting signs of life. Peer beneath the thick clouds of Titan to see methane lakes. Inspect the massive crater of Mimas—while it might seem like a sci-fi look-alike, it is a moon, not a space station.

More info.


Google Photos gets pet-savvy

“Dogs and cats clustered together—mass hysteria!” Google Photos can now search by breed (e.g. Maine Coon, Labrador), and it clusters pets alongside people:

When you want to look back on old photos of Oliver as a puppy or Mr. Whiskers as a kitten, you no longer need to type “dog” or “cat” into search in Google Photos. Rolling out in most countries today, you’ll be able to see photos of the cats and dogs now grouped alongside people, and you can label them by name, search to quickly find photos of them, or even better, photos of you and them. This makes it even easier to create albums, movies, or even a photo book of your pet.

Also, don’t forget to check your Assistant to see whether you’ve received a deliriously cornball-soundtracked pet movie. [Via]

AR insani-tee: “It’s technology with guts!”

Curiscope takes introspection to a whole new level—and the puns’ll get under your skin:

Virtuali-Tee is a magic lens into a world inside the body. View through our free app on your phone or tablet to unlock a portal into your body’s vital organs. Jump into the pumping heart of an awesome anatomical adventure that brings learning to life in fully animated 3D using augmented and virtual reality technologies. Take a deep breath, dive into the bloodstream, and see for yourself.



Street View goes *way* north, eh?

“Wearable camera” is a relative term, but props to the brave souls who schlepped up to within 500 miles of the North Pole:

Last summer, our team threw on the Google Trekker and explored the park’s incredible terrain—it was the furthest north Street View has ever gone. Wilderness and extreme isolation characterize this area, where fewer than 50 people visit each year. The park’s name itself translates to “the top of the world” in Inuktitut, the local indigenous language.

Check out the 360º images they captured.



New Live Photos hotness in Google Photos, Motion Stills

Motion Stills lets you make stabilized multi-clip movies, animated collages, loops, and more from Live Photos. Now version 2.0 for iOS adds:

  • Capture Motion Stills right inside the app.
  • Capture and save Live Photos on any device.
  • Swipe left to delete Motion Stills in the stream.
  • Export collages as GIFs.

The app’s available on Android, too. Android Police writes, “It’s essentially a GIF camera, but the app stabilizes the video while you’re recording. You can record for a few seconds, or use the fast-forward mode to speed up and stabilize longer videos.”

Not to be outdone, Google Photos on Web, iOS, and Android now displays Live Photos as well as Motion Photos from the new Pixel 2, giving you a choice of whether to display the still or moving portion of the capture. Here’s a quick sample on the Web. Note the Motion On/Off toggle up top.

I’m thrilled to have joined the team behind Motion Stills, so please let us know what you think & what else you’d like to see!

Behind the scenes of the new Pixel 2’s camera

Fun insights from my new teammates, including:
  • “You essentially have the space of a blueberry for the camera to squeeze into.”
  • The lens is actually six lenses.
  • Each pixel is split into two—useful for sensing depth.
  • The whole thing weighs 0.003 pounds, about the same as a paperclip.
  • HDR+ looks tile-by-tile across a range of captures shot in quick succession, moving chunks as needed to align them. This is good for “scaring ghosts.”
  • A neural network trained on 1 million images built a model for what’s person-like and should be kept in focus while blurring the background.
  • A hexapod rig is used to generate (and thus find ways to combat) various kinds of shakiness.
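The HDR+ tile-alignment bullet above can be sketched in a few lines: for each tile of the reference frame, search a small window of candidate shifts in an alternate frame and keep the one that minimizes the sum of absolute differences (SAD). This is a simplified illustration of the idea, not Google’s implementation; the real HDR+ pipeline aligns coarse-to-fine on raw bursts and merges far more robustly.

```python
import numpy as np

def align_tile(ref, alt, ty, tx, tile=8, search=2):
    """Find the (dy, dx) shift of a tile in `alt` that best matches the
    tile at (ty, tx) in `ref`, by brute-force SAD minimization over a
    small search window."""
    ref_tile = ref[ty:ty + tile, tx:tx + tile]
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ty + dy, tx + dx
            # skip candidates that fall outside the alternate frame
            if y < 0 or x < 0 or y + tile > alt.shape[0] or x + tile > alt.shape[1]:
                continue
            sad = np.abs(ref_tile - alt[y:y + tile, x:x + tile]).sum()
            if sad < best:
                best, best_shift = sad, (dy, dx)
    return best_shift
```

Feed it two frames where one is a shifted copy of the other and it recovers the shift, which is exactly the per-tile motion the merge step needs to undo.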





Photography: “Microsculptures,” incredible macro photography of insects

Levon Biss combines thousands of captures to create each of his “microsculpture” portraits of insects. Check out this brief overview:

If that’s up your alley, check out his TED talk as well—not to mention this year’s winners of Nikon’s Small World photography contest. For those not interested in having terrifying nightmares, I’ll thoughtfully omit the giant close-up of a tapeworm head. 🙂

Photographer Levon Biss was looking for a new, extraordinary subject when one afternoon he and his young son popped a ground beetle under a microscope and discovered the wondrous world of insects. Applying his knowledge of photography to subjects just five millimeters long, Biss created a process for shooting insects in unbelievable microscopic detail. He shares the resulting portraits — each comprised of 8,000 to 10,000 individual shots — and a story about how inspiration can come from the most unlikely places.


[Vimeo] [Via Peyman Milanfar]

Jeff Koons creates AR art, immediately gets vandalized

Snapchat has teamed up with pop artist Jeff Koons to enable pinning giant 3D augmented reality versions of his sculptures around the world:

TechCrunch isn’t feeling it, and neither are some other artists:

The team made an identical 3D AR Balloon Dog covered in graffiti and geo-tagged it to the exact coordinates, “as if the result of an overnight protest” says Sebastian. “It is vital to start questioning how much of our virtual public space we are willing to give to companies,” he continues.





Have richer conversations around photos on Google+

“A humble thing, but thine own,” Vin Scully used to say, and I’m happy to note that one of the photography-related features I helped shepherd through during my time on enterprise social has launched.

Photographers told us that the new Web UI for Google+, while welcome for offering features like zoom & photo sphere support, made it harder to see the context on photos & to have conversations around them. That’s now changed, providing a better balance between image & context. G+ tech lead Leo Deegan writes,

Over the next few days, we’ll be rolling out a new version of the photo lightbox on Google+ Web. The new lightbox, which appears for photos that are part of single-photo posts (not yet for multi-photo posts), places a greater emphasis on the photo caption and comments.

There are a couple of reasons why I’m happy about this new lightbox. First, the EXIF data (found in the “Show information” menu item) brings back the display of the photo date; the previous lightbox displayed the post date. And second, clicking on the back arrow brings you to the post no matter how you arrive at the lightbox (people who found their way to a lightbox without being able to get to the post know what I’m talking about).


Demo: Amazing video stabilization in the Google Pixel 2

“Any sufficiently advanced technology…”

Watch as the all-new Pixel 2 heads up the mountains in India to test out the new Fused Video Stabilization. The left side of the video has no stabilization at all, with optical image stabilization (OIS) and electronic image stabilization (EIS) turned off. The right side is the Pixel 2 with Fused Video Stabilization enabled.

The Pixel 2 has a feature called “frame look ahead” which analyzes each individual frame of a saved video for movement. Machine learning compares dominant movements from one frame to another and stabilizes accordingly.
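As a rough illustration of that smoothing idea (my sketch under stated assumptions, not Google’s Fused Video Stabilization algorithm): integrate the per-frame translations into a camera path, low-pass the path with a moving average standing in for the look-ahead filter, and warp each frame by the difference between the actual and smoothed paths.

```python
import numpy as np

def stabilization_offsets(shifts, window=5):
    """Given an (N, 2) array of estimated frame-to-frame translations,
    integrate them into a camera path, smooth the path with a moving
    average, and return per-frame correction offsets (path - smoothed)
    to warp each frame by."""
    path = np.cumsum(shifts, axis=0)          # absolute camera trajectory
    kernel = np.ones(window) / window         # simple low-pass filter
    smoothed = np.column_stack([
        np.convolve(path[:, i], kernel, mode='same') for i in range(2)])
    return path - smoothed
```

For perfectly steady motion the trajectory is already smooth, so the interior corrections come out at zero; jittery motion produces nonzero offsets that cancel the shake.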

CNET’s got details:



DxO: “Google Pixel 2 sets new record for overall smartphone camera quality”


The Google Pixel 2 is the top-performing mobile device camera we’ve tested, with a record-setting overall score of 98. Impressively, it manages this despite having “only” a single-camera design for its main camera. Its top scores in most of our traditional photo and video categories put it ahead of our previous (tied) leaders, the Apple iPhone 8 Plus and the Samsung Galaxy Note 8.

Read on for tons of details.


SNL savages Papyrus font

Oh, I see you nervously shifting a little, photographers. 🙂 This take-down is as hilarious as you’ve heard:


Bonus: CBS news caught up with the font’s creator to get his reaction:

“I designed the font when I was 23 years old. I was right out of college. I was kind of just struggling with some different life issues, I was studying the Bible, looking for God and this font came to mind, this idea of, thinking about the biblical times and Egypt and the Middle East. I just started scribbling this alphabet while I was at work and it kind of looked pretty cool,” Costello said.

He added, “I had no idea it would be on every computer in the world and used for probably every conceivable design idea. This is a big surprise to me as well.”


Microsoft shows off cool “Holoportation” AR tech

Holoportation is “a new type of 3D capture technology that allows high quality 3D models of people to be reconstructed, compressed, and transmitted anywhere in the world in real-time.”

When combined with mixed reality displays such as HoloLens, this technology allows users to see and interact with remote participants in 3D as if they are actually present in their physical space. Communicating and interacting with remote users becomes as natural as face to face communication.

Total Recall can’t be far behind…