Computational photography: Inside the Google Pixel

Many years ago the Photoshop team collaborated with Stanford professor Marc Levoy & his team. We were especially interested in their work to create a programmable device—charmingly known as the “Frankencamera”—that could run emerging algorithms to guide both capture & processing.

Fast forward to today, and Marc is leading a team of researchers at Google who just helped ship the new Pixel phone. As Marc notes, “The French agency DxO recently gave the Pixel the highest rating ever given to a smartphone camera.” Over at The Verge he provides lots of interesting details about how the camera works. For instance:

The Hexagon digital signal processor in Qualcomm’s Snapdragon 821 chip gives Google the bandwidth to capture RAW imagery with zero shutter lag from a continuous stream that starts as soon as you open the app. “The moment you press the shutter it’s not actually taking a shot — it already took the shot,” says Levoy. “It took lots of shots! What happens when you press the shutter button is it just marks the time when you pressed it, uses the images it’s already captured, and combines them together.”
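In other words, the camera keeps a rolling buffer of full-resolution RAW frames, and the shutter press just timestamps a selection from that buffer. Here’s a minimal sketch of that pattern; the names (ZslCamera, merge_burst) and the buffer size are hypothetical, not Google’s actual API or HDR+ pipeline:

```python
import time
from collections import deque

import numpy as np

# Hypothetical sketch of the zero-shutter-lag pattern Levoy describes.
# Class and function names are illustrative, not Google's actual code.

class ZslCamera:
    def __init__(self, buffer_size=9):
        # Ring buffer of the most recent RAW frames, filled continuously
        # from the moment the camera app opens; old frames fall off.
        self.frames = deque(maxlen=buffer_size)

    def on_new_frame(self, raw_frame):
        # Called for every frame the sensor streams out.
        self.frames.append((time.monotonic(), raw_frame))

    def on_shutter_pressed(self):
        # The button press only marks a time; the shots already exist.
        pressed_at = time.monotonic()
        burst = [frame for t, frame in self.frames if t <= pressed_at]
        return merge_burst(burst)

def merge_burst(burst):
    # Simplistic stand-in for the merge step: average the frames to
    # reduce noise. (The real HDR+ pipeline aligns tiles before merging.)
    return np.mean(burst, axis=0)
```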

Read on for more—or if you just want some quick highlights, check out this two-minute tour shot entirely with a Pixel:

[YouTube]

One thought on “Computational photography: Inside the Google Pixel”

  1. Hi John, I read the DxO article when it was first published, and it raised some problems for me. Primarily:
    – The total score for the Pixel camera includes a very high mark (93) for something called “Texture”. That is not explained anywhere, nor illustrated with even one pictorial example.
    – The high mark for Exposure (90) seems altogether inconsistent with the observation that “on our greenery test scene below, it didn’t perform as well as some of the other flagship phones, as it lost details in the shadows.” (Actually, the published pictorial example contains essentially no detail in the shadowed zones.) Plus, even in the street-scene images, where three out of four shots were auto-HDR-processed, I wouldn’t agree that there is “very good highlight preservation and details in shadows.”
    Basically, on the second point, processing for highlights by utilizing underexposed renderings (and vice versa for shadows) always seems to have limitations, or a pictorial Achilles’ heel, to the point where it might actually be better to deliberately, but selectively, shift the uppermost and lowermost luminance zones into the mid-tone portion of the (overall) distribution.
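In code, that suggestion amounts to a global tone curve that pulls the extreme luminance zones toward the mid-tones. Here is a rough sketch of the idea; the thresholds and strength value are arbitrary assumptions, not anything from DxO or Google:

```python
import numpy as np

# Illustrative only: a global tone curve that shifts the uppermost and
# lowermost luminance zones toward the mid-tones, as the comment above
# suggests. The `lo`, `hi`, and `strength` values are arbitrary.

def compress_extremes(lum, lo=0.1, hi=0.9, strength=0.5):
    """Remap luminance in [0, 1] so values below `lo` and above `hi`
    are pulled part-way toward the mid-tone thresholds."""
    out = lum.copy()
    shadows = lum < lo
    highlights = lum > hi
    # Lift deep shadows part-way up toward the low threshold...
    out[shadows] += strength * (lo - lum[shadows])
    # ...and pull bright highlights part-way down toward the high one.
    out[highlights] -= strength * (lum[highlights] - hi)
    return np.clip(out, 0.0, 1.0)
```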
