Monthly Archives: October 2016

Photography: A fun aerial tour of Google Chicago

Okay, I hesitated to share this as I’m allergic to corporate self-congratulation, but A) it’s some pretty amazing aerial filmmaking (including in thunderstorms!), and B) the chase of the Androids is just so weird, and it only gets weirder/funnier as it progresses. That detail reminds me of Khoi Vinh’s smart observation from a couple years back:

Apple fans like myself often criticize Google for doing things that Apple would never do, and Smarty Pins is a prime example of that. Aside from being an unfair criticism, it’s pointless. The fact that Google endeavors to produce silly things like this is on the whole a positive thing, I believe. It’s acting according to its own compass, which is what every company should be doing.

Props to Joey Helms & crew.


[Via Alex Osterloh]

Google’s trippy VR 360º Sprayscape lets you paint to create VR spheres


We love VR. We love taking pictures. So we figured, why not try smashing the two together?

Sprayscape is a quick hack using the phone’s gyroscope to take pictures on the inside of a 360-degree sphere. Just point your phone and tap the screen to spray faces, places, or anything else onto your canvas. Like what you’ve captured? Share your creations via a link and your friends can jump into your scapes and have a look around using their phones or even Google Cardboard.

Nerdy details from the team:

Sprayscape is built in Unity with native Android support. Sprayscape maps the camera feed on a 360-degree sphere, using the Cardboard SDK to handle gyroscope data and the NatCam Unity plugin for precise camera control.

The GPU makes it all possible. On user tap or touch, the camera feed is rendered to a texture at a rate of 60 frames per second. That texture is then composited with any existing textures by a fragment shader on the GPU. That same shader also creates the scape you see in app, handling the projection from 2D camera to a 360 sphere.

When a user saves a scape, a flat panorama image is stored in the app data. When a user shares a scape, the three.js web viewer takes that flat image and wraps it to a sphere, making it navigable on mobile web by panning, tilting, and moving your device.
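To make the “2D camera to a 360 sphere” projection a bit more concrete, here’s a deliberately simplified CPU sketch in Python (my own illustration, not the team’s GPU shader; the function name, the panorama dimensions, and the single-channel “paint” value are all invented for clarity). The idea is that the gyroscope supplies a view direction, which maps to a pixel position in the flat equirectangular panorama that later gets wrapped back onto a sphere by the web viewer:

```python
import math

def spray_pixel(pano, pano_w, pano_h, yaw, pitch):
    """Composite one 'spray' sample into an equirectangular panorama.

    Hypothetical stand-in for a per-fragment GPU operation: the gyroscope
    gives a view direction (yaw in [-pi, pi], pitch in [-pi/2, pi/2],
    radians), which we map to an (x, y) position in the flat panorama.
    """
    # Longitude (yaw) maps linearly to x; latitude (pitch) maps to y.
    x = int((yaw + math.pi) / (2 * math.pi) * pano_w) % pano_w
    y = int((pitch + math.pi / 2) / math.pi * pano_h)
    y = min(max(y, 0), pano_h - 1)
    pano[y][x] = 255  # "spray" the sample over whatever was there before
    return x, y
```

Pointing the phone straight ahead (yaw = 0, pitch = 0) lands the sample in the middle of the panorama; sweeping the phone around fills in the rest of the sphere, which is why unsprayed regions stay blank in the finished scape.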


[YouTube] [Via]

Photography: Werner Herzog’s “Inferno”

Chuck that Dan Brown shite into some molten rock & peep this Inferno instead. (I mean, I’d listen to Werner read the phone book, and here he is talking volcanoes, for God’s sake.)

Werner Herzog’s latest documentary, Into the Inferno, heads just where its title suggests: into the red-hot magma-filled craters of some of the world’s most active and astonishing volcanoes—taking the filmmaker on one of the most extreme tours of his long career. From North Korea to Ethiopia to Iceland to the Vanuatu Archipelago, humans have created narratives to make sense of volcanoes; as stated by Herzog, “volcanoes could not care less what we are doing up here.” Into the Inferno teams Herzog with esteemed volcanologist Clive Oppenheimer to offer not only an in-depth exploration of volcanoes across the globe but also an examination of the belief systems that human beings have created around the fiery phenomena.


[YouTube] [Via]

Computational photography: Inside the Google Pixel

Many years ago the Photoshop team collaborated with Stanford professor Marc Levoy & his team. We were especially interested in their work to create a programmable device—charmingly known as the “Frankencamera”—that could run emerging algorithms to guide both capture & processing.

Fast-forward to today, and Marc is leading a team of researchers at Google who just helped ship the new Pixel phone. As Marc notes, “The French agency DxO recently gave the Pixel the highest rating ever given to a smartphone camera.” Over at The Verge he provides lots of interesting details about how the camera works. For instance,

The Hexagon digital signal processor in Qualcomm’s Snapdragon 821 chip gives Google the bandwidth to capture RAW imagery with zero shutter lag from a continuous stream that starts as soon as you open the app. “The moment you press the shutter it’s not actually taking a shot — it already took the shot,” says Levoy. “It took lots of shots! What happens when you press the shutter button is it just marks the time when you pressed it, uses the images it’s already captured, and combines them together.”
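Levoy’s description amounts to a ring buffer of frames plus a merge step. Here’s a toy Python sketch of that idea (my simplification for illustration, not Google’s code; the class name, buffer size, and the averaging stand-in for the real burst-merging pipeline are all assumptions):

```python
from collections import deque

class ZslCamera:
    """Toy sketch of zero-shutter-lag capture: frames stream into a
    fixed-size ring buffer from the moment the app opens, so pressing
    the shutter never triggers a new exposure."""

    def __init__(self, buffer_size=9):
        # Oldest frames silently fall off the back of the buffer.
        self.ring = deque(maxlen=buffer_size)

    def on_frame(self, frame):
        self.ring.append(frame)  # continuous capture while the app is open

    def shutter(self):
        # "It already took the shot": just grab what's buffered and merge.
        burst = list(self.ring)
        # Stand-in for the real burst merge: average frames to cut noise.
        return sum(burst) / len(burst)
```

With a 9-frame buffer that has seen frames 1 through 12, a shutter press merges frames 4 through 12; the “shot” existed before the button was touched, which is the whole trick.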

Read on for more—or if you just want some quick highlights, check out this two-minute tour shot entirely with a Pixel:



The What & The Why

Everyone, a friend once said, is always asking the same thing over & over based on who they are. The words change, but the underlying question for each tends to be the same:

  • Project managers are always asking, “Are you efficient? Are you effective?”
  • Artists & product managers are asking, “Do you get it?” (What game are we playing, and how do we keep score?)
  • Engineers are always asking, “Are you a moron?” (Did you consider this, think of that, etc.?)

I thought of this on Monday as Buddhist nun Thubten Chodron spoke at Google. Instead of evaluating the what of things (what did you accomplish, create, earn, etc.), she emphasized weighing the why. What is your intention? Is, for example, a charitable contribution really driven by love of others, or is it meant to stroke your ego?

I can’t claim any deep insight here, but I was struck by the parallel & by the wisdom—in life & in work, especially PM work—of pursuing the Five Whys. Hmm; more thinking to be done.


Behold an amazing floating cloud speaker

What witchcraft is this…?

Embedded into both the base and the Cloud are magnetic components that allow the cloud to float 1–2 inches off the base. While the base itself must remain plugged in, a rechargeable lithium-ion battery powers the Cloud and enables a totally wireless and unobstructed levitation.

Is it for real? The Verge writes,

While the Smart Cloud is an actual product that you can purchase (for a whopping $3,360), no release information has yet been announced for Making Weather, although it likely will fall in a similarly expensive range.



“Change the headlines” by taking immediate action on them

Exciting work from Speakable:

The Action Button is a snippet of code that lives on publishers’ article pages and gives their readers the option to take direct action. Speakable’s technology is able to understand the content and sentiment of an article and match it with the proper non-profit partner.

From there, users can click the Action Button to send an email to a legislator or tweet to a decision-maker or even make a donation.



New people + highlights features arrive in Google Photos

I’m loving the little galleries I get like “John + Finn,” “Recent highlights of Henry,” etc. The team writes,

First, Google Photos will now help you rediscover old memories of the people in your most recent photos. As your photo library continues to grow, we hope that features like this one make it easier to look back at your fondest memories.


Second, we’re making it easier to look over the most recent highlights from your photos. If you take a lot of photos of your child, for example, you may occasionally get a card showing the best ones from the last month. (Hint hint: grandparents would love to see these!)


Google Photos now turns videos into weirdly charming little animations

“We’ve always made animations from photos,” the team writes, “but now we make animations from your videos, too. And not just any videos. We look for segments that capture activity — a jump into the pool, or even just an adorable smile — and create short animations that are easy to share.”

Here’s one it generated of the Micronaxx:

And it made another from the luau we attended last week:

As before, you don’t need to do anything: just let Photos back up your vids, then watch for Assistant notifications.


Illustration: How amazing pop-up books are created

Using Photoshop, X-acto knives, glue, and more, Matthew Reinhart shapes paper into amazing mechanical structures:

Using scissors, tape, and reams of creativity, Matthew Reinhart engineers paper to bend, fold, and transform into fantastic creatures, structures and locales. By adjusting the angles of folds and the depth of layers, Reinhart animates his subjects to tell dramatic stories that literally pop off the page.


[YouTube] [Via Kevin McMahon]

Adobe teases new Photoshop manipulation tech

Hmm—let’s see what develops here. PetaPixel explains,

First, it can manipulate an image based on very basic coloring, sketching, or warping commands. So you can change the shape, color, and size of an object in just a brush stroke or two, with the final product maintaining as natural a look as possible.

Second, it can actually generate images based on a rudimentary sketch.

Check out more info & demos from Berkeley.

PetaPixel also points out a “neural photo editor” from researchers at the University of Edinburgh:

[I]t uses machine learning to predict and apply the changes you’re intending to make. For example, if you select a bright color and start painting over someone’s hair, it will assume you want to turn them blonde; begin using longer brush strokes, and it will assume the blonde hair grows longer.

You simply select a color using their “contextual paintbrush” and have at it. The most basic inputs can produce extreme changes.

Smart articles worth a look

Attempting to stay un-murdered by my family during our current trip, I’m taking it easy on blogging this week. In the meantime, here are a few quick links to articles I think you’d find worthwhile: