Monthly Archives: November 2016

An art director’s rather brilliant Instagram self-promo

Hats off to the clever & industrious Aric Guite:

The idea began with Aric making a list of his top 30 art directors. He combed through each of their Instagram feeds and selected one iconic photo. Using the photo as inspiration, Aric shot a second photo that complemented the subject matter. The two photos were then posted to Aric’s feed, with each art director tagged along with a caption asking to collaborate. Together, the photos create an entirely fresh and one-of-a-kind promo piece that is unique to each art director.


[Vimeo] [Via]

Google Photos introduces new concept movies

The first batch of movie concepts (“the kinds of movies you might make yourself, if you just had the time”) that Photos introduced in September has been really well received, and now the team is rolling out more:

More automatic movies, made for you: baby’s first months, holiday traditions, highlights from the year, and more.

As before, just live your life, back up your pics, and keep an eye out for movies arriving via the Assistant in Photos.



Adobe shows off “Project Dali” for painting in VR

“Why doesn’t designing feel like dancing?” I used to ask Photoshop teammates. Then they’d stare back blankly and I’d say, “Yeah yeah—crack don’t smoke itself…”

But here’s to the crazy ones, and Erik Natzke’s work has long inspired me. Seeing a talk of his years ago, in which he showed how he’d build custom interfaces in Flash that let other artists customize images & animation, sent me on a years-long inquiry into what could happen if Flash or HTML were a layer type in Adobe apps. The point is, he tends to open eyes & get juices flowing.

Thus I’m excited to see Erik & co. working on “Project Dali”:

Erik writes,

I don’t think of Project Dali as digital or analog. It’s something that mixes the two and comes out completely unique. It could incorporate texture (think of the exquisite feel of graphite) and time (your paint is drying) with the unending flexibility of digital. It takes art that used to feel static and lets us manipulate it in three-dimensional space. In the process, the art becomes different, magical.

I’m starting to think about it like a musical instrument: If you are a musician, your instrument enables your creativity; it doesn’t stand between you and the idea in your head. And just like with VR, you learn by playing.

I can’t wait to take it for a spin & see how it evolves.



The Google Photos editor gets smarter & more powerful

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” – Antoine de Saint-Exupéry


When I joined the Google Photos team, they’d just integrated Snapseed into Google+ (the predecessor of Photos). As I hope is obvious, I’m a huge Snapseed fan, but when we looked at what most users actually did in G+ (crop, rotate, tweak brightness, and maybe apply a filter), it became clear that Snapseed was dramatically more complex & powerful than they needed.

Therefore we made the hard decision to reset & build a new editor from scratch. We aimed to deliver great results in a single tap, offer just a few powerful sliders (which under the hood adjusted numerous parameters), and keep Snapseed just one extra tap away (via the overflow menu) for nerds like me.

The vision was always to keep learning from users’ behavior, then thoughtfully surface just the controls that deliver extra power when it’s wanted. I’m delighted to say that Photos now does just that: the update released Tuesday on iOS, Android, and Web (try it here) keeps a simple top-level UI while revealing much more of the power under the hood.

The filters UI applies Auto (which can now produce more accurate results) as part of every filter:

These unique looks make edits based on the individual photo and its brightness, darkness, warmth, or saturation, before applying the style. All looks use machine intelligence to complement the content of your photo. [1]


The adjustments section still leads with the Light, Color, and Pop sliders, but now:

  • Light opens to reveal Exposure, Contrast, Highlights, Shadows, Whites, Blacks, and Vignette
  • Color opens to reveal Saturation, Warmth, Tint, Skin Tone, and Deep Blue

I continue to find Auto to be highly effective for the bulk of my images, but I like being able to pop the hood when needed.

Please take the new features for a spin & let us know what you think!

Oh, and since you’ve been kind enough to read this far, here are some useful shortcuts for use on desktop:

  • E to enter & exit the editor
  • R to enter & exit crop/rotate
  • Shift-R to rotate 90°
  • A to Auto Enhance
  • O (press & hold) to see original
  • Z to zoom
  • Left/right arrows to move among images
  • Cmd-C/V to copy/paste edits among images
  • After clicking a slider, use arrow keys to adjust it & press Tab to put focus onto the next slider

Check out Google Earth in VR

This looks totally bananas:

Ten years ago, Google Earth began as an effort to help people everywhere explore our planet. And now, with more than two billion downloads, many have. Today, we are introducing Google Earth VR as our next step to help the world see the world. With Earth VR, you can fly over a city, stand at the top of the highest peaks, and even soar into space. 

You can grab it now for the HTC Vive.



Introducing Google PhotoScan

“Photos from the past, meet scanner from the future.” I think you’re gonna dig this (available now on Android & iOS). 🙂

Don’t just take a picture of a picture. Create enhanced digital scans, with automatic edge detection, perspective correction, and smart rotation.

PhotoScan stitches multiple images together to remove glare and improve the quality of your scans.
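One way to picture the glare-removal idea (my hedged guess at the principle, not the team’s actual pipeline): once the overlapping shots are aligned to the same geometry, glare is a bright outlier that lands in a different spot in each frame, so a per-pixel minimum over the stack keeps the glare-free observations. A toy NumPy sketch:

```python
import numpy as np

def remove_glare(aligned_frames):
    # Given frames already aligned to the same geometry, specular glare
    # shows up as a bright outlier in only some frames, so a per-pixel
    # minimum keeps the glare-free observations of each pixel.
    stack = np.stack(aligned_frames, axis=0).astype(float)
    return stack.min(axis=0)

# Toy demo: a flat gray "photo" with glare in a different spot per frame.
base = np.full((4, 4), 100.0)
frames = []
for i in range(3):
    f = base.copy()
    f[i, i] = 255.0   # hypothetical moving glare spot
    frames.append(f)
print(remove_glare(frames))  # flat 100s: every pixel is clean in some frame
```

The real app also has to do the alignment and likely blends more gracefully than a hard minimum, but the multi-frame redundancy is what makes the glare recoverable at all.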

Check it out:

So, how does it work? Let’s hear right from the team:

Enjoy, and as always please let us know what you think!


[YouTube 1 & 2]

Check out Google RAISR: Sharp images with machine learning

If you share a picture of a tree in a forest, but no one can see it, did you really share it?

Working at Google, where teams aspire to “three-comma moments” (i.e. reaching 1,000,000,000 users), it’s become overwhelmingly clear to me that all the fancy features in the world don’t mean squat if people can’t access them. And traveling in Nepal, I got a taste of just how slow & expensive connectivity can be. Anything that helps deliver content faster & more cheaply means more democratic access to ideas & inspiration.

That’s why I’ve been really excited about RAISR (“Rapid and Accurate Image Super-Resolution”). The Google Research team writes,

[It’s] a technique that incorporates machine learning in order to produce high-quality versions of low-resolution images. RAISR produces results that are comparable to or better than the currently available super-resolution methods, and does so roughly 10 to 100 times faster, allowing it to be run on a typical mobile device in real-time.
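Since the paper is public, the core recipe is easy to sketch: upscale cheaply, then run a small filter chosen per patch by hashing its gradient statistics. Here’s a toy NumPy illustration; the identity kernels, nearest-neighbor upsample, and angle-only hash are all my simplifications (real RAISR uses a bilinear-style upscale, hashes on angle, strength, and coherence, and fits each bucket’s filter by least squares on low/high-res training pairs):

```python
import numpy as np

def cheap_upsample(img, s=2):
    # Nearest-neighbor upsample stands in for the fast initial upscale.
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def gradient_bucket(patch, n_buckets=4):
    # Hash a patch by its dominant gradient angle (RAISR also hashes on
    # strength and coherence; omitted here for brevity).
    gy, gx = np.gradient(patch.astype(float))
    angle = np.arctan2(gy.sum(), gx.sum()) % np.pi
    return int(angle / np.pi * n_buckets) % n_buckets

def raisr_like(img, filters, k=3):
    # Upscale cheaply, then apply the bucket's learned k-by-k filter.
    up = cheap_upsample(img)
    pad = k // 2
    padded = np.pad(up, pad, mode="edge")
    out = np.empty_like(up, dtype=float)
    h, w = up.shape
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + k, x:x + k]
            f = filters[gradient_bucket(patch)]
            out[y, x] = (patch * f).sum()
    return out

# Toy "learned" filters: identity kernels here; real RAISR solves a
# least-squares fit per bucket so each filter sharpens its edge type.
ident = np.zeros((3, 3)); ident[1, 1] = 1.0
filters = [ident] * 4
small = np.arange(16, dtype=float).reshape(4, 4)
big = raisr_like(small, filters)
print(big.shape)  # (8, 8)
```

The speed win comes from that structure: instead of an iterative optimization or a deep network per image, inference is one cheap upscale plus one small filter lookup per pixel.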


I’ve been championing this tech within the company and—because the research paper is public—encouraging friends at Facebook, Adobe, Apple, and elsewhere to check it out. Fast, affordable access is good for everyone.

It’s funny: I came here to “teach Google Photoshop” (i.e. to make computers see & create like artists), yet if I do my job right, you’ll never spot a thing. I’ve come to prioritize access far ahead of synthesis. Funny ol’ world.

PS—Obligatory (?) Old Man Nack remark: “In my day, Genuine Fractals, blah blah…”

Adobe demos automatic sky-swapping

My old Photoshop boss Kevin used to show a chart that nicely depicted the march of tools from simple & broad (think Clone Stamp) to sharp & purposeful (Healing Brush), smartly tailored to specific needs. I love to see how computer vision is helping to extend that arc, as demonstrated here:

Adobe says SkyReplace uses deep learning to automatically figure out the boundary lines between the sky and the rest of the shot (e.g. buildings and ground). It can then not only swap out the old sky and insert a completely new one, but it can adjust the rest of the photo to take on the same look and feel as the new sky, creating a more realistic look.
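Given a sky mask from the segmentation network, the compositing-plus-look-matching step the quote describes might look roughly like this. Everything here is a hypothetical sketch of my own (the `swap_sky` function, the mean-color shift, the blend factor, and the toy images are all assumptions, not Adobe’s method):

```python
import numpy as np

def swap_sky(photo, new_sky, sky_mask, blend=0.3):
    # sky_mask: boolean (h, w) array, True where the sky is. In the demo
    # this would come from the deep-learning segmenter; here it's handed in.
    out = photo.astype(float).copy()
    out[sky_mask] = new_sky[sky_mask]
    # Nudge the foreground toward the new sky's mean color so the whole
    # frame shares one look, per the description above.
    fg = ~sky_mask
    shift = new_sky[sky_mask].mean(axis=0) - photo[sky_mask].mean(axis=0)
    out[fg] += blend * shift
    return np.clip(out, 0, 255)

# Toy 2x2 RGB "photo": top row is pale daytime sky, bottom row is ground.
photo = np.array([[[180, 200, 230], [180, 200, 230]],
                  [[ 90,  80,  60], [ 90,  80,  60]]], dtype=float)
sunset = np.tile([250.0, 140.0, 90.0], (2, 2, 1))  # hypothetical new sky
mask = np.array([[True, True], [False, False]])
result = swap_sky(photo, sunset, mask)
print(result[0, 0])  # sky pixel now [250, 140, 90]
```

A real implementation would feather the mask edge and do far subtler tone mapping, but the two-part structure (segment, then harmonize) is the interesting bit.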


The N-up UI reminds me of Photoshop’s early-’90s Variations dialog. Maybe graphically, as well as politically, everything old is new again.



YouTube VR arrives on Daydream

I’d love to experience a next-gen Awesome; I Fuckin’ Shot That! as a series of 360º video bubbles that I can jump among (landing on stage, in the crowd, etc.). (Ah, but can I get Moby Dick there?)

The team writes,

This new standalone app was built from the ground up and optimized for VR. You just need a Daydream-ready phone like Pixel and the new Daydream View headset and controller to get started. Every single video on the platform becomes an immersive VR experience, from 360-degree videos that let you step inside the content to standard videos shown on a virtual movie screen in the new theater mode. The app even includes some familiar features like voice search and a signed in experience so you can follow the channels you subscribe to, check out your playlists and more.