Monthly Archives: September 2015

Demo: Collaborative albums & more in Google Photos

Fifty billion photos & videos backed up since the product launched in May; yeah, that’ll happen when you say “free & unlimited.” 🙂 

Here’s a concise demo of how Photos will help you pool photos with friends & family, let people sign up for email updates, label people, and display your images via Chromecast. (Oh, and I really hope that “Dutch Thunder on the beach in Cancun” becomes a thing.)

If the video below doesn’t start at the correct point, jump right to 39:17.

[Vimeo]

Let’s hang out tonight at Google with nonprofits

Sorry for the short notice, but if you’re around Mountain View tonight and, like me, are seeking ways to make more social impact with your life, come check out this event:

The Board Match offers a unique opportunity for Bay Area residents to become stronger leaders by serving on the boards of directors of local nonprofit organizations. Board service is for everyone: whether you’re just starting out, a mid-career professional, or a seasoned philanthropist, there is a nonprofit that will value your talents. Nonprofit board service offers young and mid-career professionals opportunities to become organizational and community leaders, with benefits for their own professional growth, as well as an entrée into philanthropy and civic stewardship that inspires others and can become a pattern for life. It offers seasoned professionals approaching retirement a vital next step in a lifelong career: the opportunity to put well-honed skills to use, build new networks, and foster the growth of other leaders.

Come find me if you can make it!

Coming soon to Google Photos: Collaborative albums & more

Want to pool your kid photos with your partner so that your parents can always stay up to date? Or search for a person by name, or share your photos on a big screen via Chromecast? It’s all rolling out—some now, some coming soon—across Android, iOS, and web.

Per the team’s post:

People Labeling

Label the people in your photos by what you call them, name or nickname.

This week in the U.S. you’ll be able to label the people in your photos however you want – Mom can be “Mom”, “Juliana”, or “Cat Lady” – whatever you choose. These labels are completely private to you and are not associated with a Google account or profile. Once people in your photos are labeled, you can make advanced searches to find photos of people with things, places or people, such as “Mom at the beach” or “Juliana and Marco in Hawaii.”

People labeling is rolling out in the U.S. this week on Android and is coming soon to iOS and the web.

Shared Albums

Gather all your photos and videos from friends and family in one spot, and know as soon as new moments are added.

We’re introducing shared albums later this year – a new, easy way to pool photos and videos with whomever you want, and get updates when new moments are added. There’s no setup involved, and you can use shared albums on any device – Android, iOS, Mac, Windows and Chrome OS.

I’ve been testing these features for a while & think you’ll really like ‘em.

Every generation of iPhone camera compared

As groundbreaking as its capture experience (and mere existence!) was, the 2-megapixel cam in the original iPhone was, we can now admit, really godawful; hence the heavy, pancake-makeup approach of the image filters of the day. I remember training myself to compensate for the profound shutter lag as if I was Luke Skywalker donning a blast helmet. It wasn’t, “Hey kid, look at me [press shutter],” but rather, “Hey kid, [press shutter] look at me.”

But progress has been swift & amazing, and I can instantly visually carbon-date pics of my kids by the quality of the phone-captured shots. Now photographer Lisa Bettany has produced beautiful interactive side-by-side comparisons of every generation of iPhone camera. Talk about night-and-day differences (to say nothing of burst mode, HDR, going from zero video to optically stabilized 4k, and more).

As for the future, let’s hope that next year we’re raving about 3D depth sensing enabling SLR-like background separation. Staying tuned…


Check out Daqri, an augmented reality headset for work

Sidestepping the privacy & fashion concerns that have bedeviled systems like Google Glass, Daqri targets industrial applications. The Verge writes,

Daqri is an augmented reality (AR) company based out of Los Angeles. It has developed an AR headset and the software which powers it. Technicians wearing its unit out in the field can see additional information, get step-by-step instructions, and easily relay what they are seeing to a support team connected remotely to their headset.

Here it is in action:

Celebrating Honda’s history of amazing ads

The beautiful Paper ad I blogged on Sunday is just the latest installment in Honda’s rich creative history. It’s worth taking a look back at some terrific ads from the last decade—and these are just the ones I’ve blogged!

Snapseed gets a more powerful Healing Brush, more

Do you live in a world where every blemish, random bird, stray pedestrian, and telephone wire is perfectly round? Me neither!

Therefore I think you’ll really like Snapseed’s new ability to heal arbitrary-shaped regions. Just tap the filter selector, tap Healing, and then paint away the bits you’d like to omit. And of course these operations are, like everything else in the new Snapseed, non-destructive, meaning that you can go back and re-edit them and/or copy/paste them among images.

The update (2.0.4) should now be live on the App Store & Play Store. It also squashes some bugs & adds support for Traditional Chinese (Hong Kong) and Canadian French.

Here’s an animation of healing in action: 


Honda’s epic new hand-drawn animation

Another year, another example of Honda creating some of the most interesting ads in the game. 

PetaPixel shows a number of stills from the ad & writes,

Honda recently enlisted [animator Adam] Pesapane’s services to create the ad above, titled “Paper.” It runs just 2 minutes, but it took 4 months of work to create!

The hands you see in the ad are real people who were placing roughly 3,000 unique illustrations in front of the camera, allowing the animation to be created one frame at a time.

Here’s a peek behind the scenes:

[YouTube 1 & 2]

Photo essay: “The Mind-Bending Bus Stops Of The Former Soviet Union”

Christopher Herwig finds weird, austere beauty on the steppes:

Photographer Christopher Herwig has been hunting bus stops in remote corners of the former Soviet Union since he stumbled upon them while biking to St. Petersburg in 2002. He has covered more than 30,000 km by car, bus and taxi in 13 countries discovering and documenting these strange works of art created behind the Iron Curtain. From the shores of the Black Sea to the endless Kazakh steppe, the bus stops show the range of public art from the Soviet era and give a rare glimpse into the creative minds of the time. Herwig’s series attracted considerable media interest around the world, and now with the project complete, the full collection will be presented in Soviet Bus Stops as a deluxe, limited edition, hard cover photo book. The book represents the most comprehensive and diverse collection of Soviet bus stop design ever assembled.

[Vimeo] [Via]

“Camera Restricta” prevents shooting unoriginal photos

Got a case of vemödalen (“the frustration of photographing something amazing when thousands of identical photos already exist”)? Or perhaps you’ve just wanted a camera that sounds like a Geiger counter while blurting “NEIN” at you in big red letters?

Philipp Schmitt’s Camera Restricta concept wants to help. PetaPixel explains,

“Camera Restricta introduces new limitations to prevent an overflow of digital imagery,” he says. “As a byproduct, these limitations also bring about new sensations like the thrill of being the first or last person to photograph a certain place.”

[Vimeo]
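The concept’s core check — “have too many photos already been taken here?” — reduces to counting geotagged shots within some radius of the camera’s position. Here’s a minimal sketch of that check; the radius, limit, and function names are my own illustration, not anything from Schmitt’s implementation:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    R = 6371000  # mean Earth radius, meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def shutter_allowed(here, geotags, radius_m=35.0, limit=10):
    """Block the shutter if too many photos were already taken nearby."""
    nearby = sum(1 for lat, lon in geotags
                 if haversine_m(here[0], here[1], lat, lon) <= radius_m)
    return nearby < limit
```

Swap in a real photo-sharing API for the `geotags` list and you have the “NEIN” machine.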

Help refugees & Google will match your gift

Googler Rita Masoud (who fled Afghanistan with her family) writes,

To double the impact of your contribution, we’ll match the first €5 million (~$5.5 million) in donations globally, until together we raise €10 million (~$11 million) for relief efforts.

Your donation will be distributed to four nonprofits providing aid to refugees and migrants: Doctors Without Borders, International Rescue Committee, Save the Children, and UN High Commissioner for Refugees. These nonprofits are helping deliver essential assistance—including shelter, food and water, and medical care—and looking after the security and rights of people in need.

Visit google.com/refugeerelief to make your donation. Thank you for giving.

I feel very privileged to work alongside the folks who are making this happen. 

Photography: New Google/MIT algorithm removes visual clutter

Adios, bothersome fences, reflections, etc. That’s presuming that normal users would be sufficiently motivated to move their devices during capture. Time will hopefully tell.

The video accompanying our SIGGRAPH 2015 paper “A Computational Approach for Obstruction-Free Photography.” We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.

[YouTube]
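The paper’s actual method separates background and obstruction layers by their differing motion across the burst. A much simpler cousin of that intuition, sketched below, is that once frames are aligned to the background, a transient obstruction rarely covers the same pixel twice, so a per-pixel median recovers the scene. (This illustrates the underlying idea only; it is not the authors’ algorithm.)

```python
import numpy as np

def median_stack(frames):
    """Per-pixel median across a burst of background-aligned frames.

    If an obstruction (raindrop, fence post, passer-by) covers a given
    pixel in fewer than half the frames, the median keeps the
    unobstructed background value at that pixel.
    """
    return np.median(np.stack(frames, axis=0), axis=0)
```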

Podcast: Me & Andy Ihnatko, down by the schoolyard

I had a ball sitting down with my ex-Photoshop/current Google Photos friend Aravind Krishnaswamy to chat with Andy Ihnatko, Russell Ivanovic, and Yasmine Evjen for this week’s Material Podcast. We talked about computer vision, the future of memory keeping, my wife hypothetically getting bum-rushed by a lady from the Clinique counter, and much more. (Oh, and the jury’s still out on whether there were snakes in the wall. You’ll see.)

Grab the MP3 here.

Utterly bananas: Robbie Maddison’s Pipe Dream

Did I just see… this?

DC presents Robbie “Maddo” Maddison’s “Pipe Dream,” giving the world a chance to witness history being made as Maddo rides his dirt bike on the powerful and iconic waves of Tahiti. From his helmet to motocross boots, Maddo was dressed for FMX when he took his dirt bike into the uncharted saltwater terrain of the Pacific Ocean in French Polynesia.

So how was it done? See the making-of:

[YouTube 1 & 2]

ParticleShop brings Painter brushes to Photoshop

Adobe & Microsoft on stage in an Apple keynote, dogs & cats living together, mass hysteria! And continuing the mash-up madness, some of Painter’s famous brushes are now available inside Photoshop via a $49 plug-in:

Explore an array of 11 imaginative brushes, including Debris, Fabric, Fine Art, Fur, Hair, Light, Space, Smoke and Storm… Enjoy infinite inspiration with our extra brush packs available for purchase.

[YouTube]

You should read “Clay, Water, Brick”

“When the student is ready, the teacher will appear.”

As I was doing some soul searching in July, I saw that Jessica Jackley, co-founder of microlending site Kiva.org, would be speaking at Google. I hurried to catch her talk and highly recommend it:

Afterward I really enjoyed reading Clay Water Brick: Finding Inspiration from Entrepreneurs Who Do the Most with the Least. At some point I will finish pulling out the most resonant bits & will post them here. Meanwhile, quick points:
  •  Kiva was born out of a desire to combine real human connections (vs. just transactions) with scalable, measurable impact. Earlier Jessica had been working at Stanford by day (where people talked in really ambitious but slightly impersonal terms about world-changing enterprises) and by night working with young mothers in East Palo Alto (where she made deep personal connections but questioned what change was resulting). Kiva is meant to foster real connections between entrepreneurs (many of whose stories she tells in the book) & lenders like you & me.
  • Sometimes you have to “dump the quarterback.” In high school she was asked out by Johnny Football Hero, and of course she had to say yes (as one does). But the guy was kind of a bore, and she dumped him (sacrilege!). It’s tough, but when the inside doesn’t match the outside (be it in a relationship, an ostensible dream job, etc.), something has to change.

I think you’ll find both the talk & the book rewarding, and if you’d like to get started lending via Kiva, check out my lender page and jump in!

[YouTube]

Quick tip: Sending your GIFs to Instagram

Google Photos automatically makes cool looping animations from your bursts, but you can’t share these GIFs directly on Instagram. Fortunately there’s a workaround (only on iOS for the moment):

  1. Select the animation.
  2. Tap the Share icon.
  3. Choose “Save as video.” Photos will automatically replicate the sequence a number of times to preserve the looping effect (useful on platforms like Facebook that don’t auto-loop the video).
  4. In Instagram, select the video from your camera roll & post it as you would any other.
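Step 3’s replication, by the way, is just arithmetic: repeat the loop until the clip reaches some minimum duration. A rough sketch of the idea (the three-second minimum and the function names are hypothetical, not Photos’ actual values):

```python
from math import ceil

def repeats_needed(frame_count, frame_ms, min_video_s=3.0):
    """How many times to replicate a GIF loop so the resulting
    video clip lasts at least min_video_s seconds."""
    loop_s = frame_count * frame_ms / 1000.0
    return max(1, ceil(min_video_s / loop_s))

def expanded_frames(frames, frame_ms, min_video_s=3.0):
    """The frame sequence for the video, with the loop replicated."""
    return frames * repeats_needed(len(frames), frame_ms, min_video_s)
```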

And voila, you get something like this:

 

That thing where you’re just glad that #TheMountain didn’t eat your children. #HighlandGames

A video posted by John Nack (@jnack) on Sep 7, 2015 at 4:08pm PDT

A neural network tries to identify objects in the Star Trek intro

Heh—folks worrying about the imminent & inevitable robopocalypse might want to check this out. Kottke writes,

[T]he system hadn’t seen much space imagery before, so it didn’t do such a great job. For the red ringed planet, it guessed “HAIR SLIDE, CHOCOLATE SAUCE, WAFFLE IRON” and the Enterprise was initially “COMBINATION LOCK, ODOMETER, MAGNETIC COMPASS” before it finally made a halfway decent guess with “SUBMARINE, AIRCRAFT CARRIER, OCEAN LINER”.

[YouTube]
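For context, classifiers like this one emit a score per label, and the all-caps “guesses” above are just the top few after normalizing scores to probabilities. A toy sketch of that last step, with made-up labels and scores:

```python
import numpy as np

def top_k_guesses(logits, labels, k=3):
    """Return the k most probable (label, probability) pairs
    from a vector of raw classifier scores."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1][:k]    # indices, highest prob first
    return [(labels[i], float(probs[i])) for i in order]
```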

Make & explore 360º panoramas with the new Google Street View app

I love capturing panos via The App Formerly Known As Photo Sphere, now significantly updated & renamed Street View (download for iOS & Android). PetaPixel writes,

Users can quickly browse all available traditional Street View content in addition to the newer 360-degree photospheres. Simply input a location, zoom in, and you are ready to start walking the streets of your favorite city. You can also explore beautiful photography through a pull-up tab that displays presorted collections and the ‘Explore’ tab. If you want to create your own photosphere you can do so, but will need a smartphone that contains a gyroscope sensor.

I particularly enjoy uploading my spheres to Google Maps to help other people explore the places I’ve visited.


Coincidentally, Ricoh just introduced the Theta S, a new version of their spherical 360º camera, which generates Street View-compatible images. Check out this 360º video that you can spin around while streaming from YouTube:



[YouTube]

Will Photoshop.next auto-zap distracting elements?

Hmm—interesting: according to PetaPixel, Adobe is Working on Automatic Distraction Removal Technology:

The scientists first gathered together thousands of photos and asked people (through Amazon’s Mechanical Turk) to manually mark distracting regions in them… That set of annotated images was then used to train a computer to recognize areas of photos people might want to remove in random photos presented to it.

Okay, but I’d like to see this run in reverse, slyly inserting weird little elements (garden gnome, rune, cursed tiki, etc.) into the periphery of your shots—not unlike the PhotoBomb Tool parody that I got in trouble for blogging at Adobe. :-p 

A number of examples of original photos, the identified distractions, and the resulting photo with the distraction removed.

Neural algorithm makes photos emulate famous paintings

Back in 2003 we blew a lot of minds by showing Photoshop’s Match Color feature sucking up the color palette of one photo or painting, then depositing it onto another. This kind of thing kept getting love as it evolved (see 2010 demo), eventually matching lighting among images. As far as I know no one has ended up using such functionality in practice (and yes, Match Color is still sitting in Photoshop on your hard drive right now), but it’s still cool.

Now the tech has taken another leap forward. Per PetaPixel,

In a newly published research paper titled “A Neural Algorithm of Artistic Style,” scientists at the University of Tübingen in Germany describe how their deep neural network can create new artistic images when provided with a random photo and a painting to learn style from.

“Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality,” the paper says. “The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images.”

Check out many more examples via the article.
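For the curious: in the paper, “style” is captured by the correlations between a layer’s feature-map channels (a Gram matrix), computed at several network layers, while “content” is matched on the raw activations themselves. Here’s a minimal numpy sketch of the style side — the feature maps below are stand-ins for real network activations:

```python
import numpy as np

def gram_matrix(features):
    """Style representation à la Gatys et al.: channel-by-channel
    correlations of a layer's feature maps.

    features: array of shape (channels, height, width)
    returns:  (channels, channels) Gram matrix, normalized by map size.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # one row per channel
    return flat @ flat.T / (h * w)

def style_loss(gram_a, gram_b):
    """Squared difference between two style representations."""
    return float(np.mean((gram_a - gram_b) ** 2))
```

The synthesized image is then optimized so its Gram matrices match the painting’s while its activations match the photo’s.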


Photography: A new flyby of Pluto

Props to Björn Jónsson for assembling NASA photos into this animation:


The time covered is 09:35 to 13:35 (closest approach occurred near 11:50). Pluto’s atmosphere is included and should be fairly realistic from about 10 seconds into the animation and to the end. Earlier it is largely just guesswork that can be improved in the future once all data has been downlinked from the spacecraft. Light from Pluto’s satellite Charon illuminates Pluto’s night side but is exaggerated here, in reality it would be only barely visible or not visible at all.

[Vimeo] [Via]