Category Archives: Photography

The Instagram lobster trap

We’re all just a bunch of apes lekking around the water hole, aren’t we? “Facebook,” it’s been said, “is basically designed like a lobster trap with your friends as bait,” and the same holds for Instagram. You know it’s empty, often stressful calories—but you can’t get away.

I was reminded of this while hearing teenage girls tell This American Life about their Instagram habits. Points that stuck out for me:

  • Time is a big factor. Reactions are expected within minutes. (You can hear relief in the girls’ voices as the first likes roll in.)
  • Getting 150 likes on a selfie is normal. Nighttime is when you get the most.
  • Each person posts just a few times per week.
  • They reflexively like nearly everything in their feeds.
  • Commenting is more intimate than liking & carries more expectation of reciprocation.
  • Language choice is super important. There’s tons of repetition of “Gorgeous, Pretty, OMG, etc.,” though never “Sexy.”
  • Comments are a way to map & judge others’ relationships (who’s commenting, who isn’t).
  • “If I didn’t have it, I’d feel like I’m missing so much.” It’s a diagram of where people stand socially. Parsing this is where the most time goes.
  • The girls will preflight photos (sending them to close friends for review) before posting.
  • They know it’s shallow, but “It’s like free candy, so why not?”
  • “I’m a brand… Relevance is a big term right now. In middle school we were so relevant!”

Don’t get me wrong—I’ve enjoyed Instagram for 5+ years & would miss it deeply, even if I’m guiltier than I’d like of these pathologies. And still my mind keeps turning to ways to foster more genuine, personal, enriching communication. Nobody said it would be easy, but the desire is there.

A rather charming father-son ad from Canon

Okay, yes, my inner cynic says that for-profit art schools churn out more “professional photographers” in a year than there are such jobs in the whole country, and that million-view videos online generally pay their creators just a couple of grand at best. Still, there’s more value in a visual education than just getting a job, and I like the spirit of this narrative (as would Max Fischer):

https://youtu.be/5_LFmQ6eH1I

[YouTube] [Via Justin Oliver]

A pair of amazing books

This Book Is A Camera—really!

PetaPixel explains,

“I wanted to make a working camera within an educational pop-up book—one that connects the dots between design and science/structure and function,” Anderson writes. This book does just that — it both explains and demonstrates how a camera uses light to produce a photograph. […]

Anderson is self-publishing “This Book is a Camera,” and it’s currently available for $29 on her website. If you like do-it-yourself projects, Anderson has also generously made her camera’s design available for free as a PDF under a Creative Commons license. 

Elsewhere, The Incredible Intergalactic Journey Home claims to be “the most technologically advanced picture book ever created”:

[YouTube] [Via Margot]

Apple enables SD-to-iPhone import

Last year I waxed a little nostalgic & appreciative:

We’ve come a long way, baby:

  • 2004: Epson P-2000: $500, 1,360 grams, 3.8” screen, and 40GB HD.
  • 2014: Apple iPhone 6 Plus: $500*, 172 grams, 5.5” screen, and 128GB HD.

At the time one couldn’t plug Apple’s Camera Connection Kit into the phone. Things have changed, however: according to PetaPixel, iOS 9.2 lets you import photos to an iPhone directly from a camera or via the Lightning-to-SD-card reader. On the new iPad Pro, that means transfer speeds that can theoretically be 10x faster (see previous).

My ideal world, as of late 2015:

  • 128GB phone with USB 3.0 speed
  • On-the-fly import of raw files as DNG proxies (massively smaller, but 99% as flexible as raw originals)
  • Preservation of raw originals on the card (as I don’t need or want all the original bits clogging up my phone)
  • Backup of originals later (via desktop upload) + syncing of edits performed to DNG proxies

For this I’d more than happily stick a short, $29 cable into my pocket every time I took my SLR for a spin.

Photo sphere support comes to Google Photos Web

I hereby apologize in advance to my wife & kids: I’ll now be capturing even more spherical panoramas via the Street View app, meaning I’ll be stopping and lagging behind for a couple of minutes per capture. Sorry, guys: you’ve gotta suffer for my art. 🙂

If you’ve captured any spheres, you can find them on Google Photos online by searching for #photosphere, then clicking each one to navigate around interactively. Here are a few of mine.

To make them embeddable (like the one below), share them on Google Maps via the Street View app.

Capture VR panoramas with Google’s new Cardboard Camera

The new app captures sound & depth that you can see and hear in virtual reality.

TechCrunch writes,

Cardboard Camera pulls off a bit of trickery to simulate depth within your photos (making near things look near, and far things look far), and then sends slightly different photos to each eye — thereby simulating the appearance of a 3D environment.

It’s not actually 3D, of course — you won’t be able to move around within it. But it’s quite a bit cooler than a flat, static image. Add in Cardboard Camera’s ability to attach sound from the environment to each VR Photo, and it actually starts to feel pretty darn immersive.
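
If you’re wondering how that per-eye trick works mechanically, here’s a toy sketch of my own (not Google’s implementation; it assumes you already have a single image plus a rough per-pixel depth map): shift pixels horizontally in opposite directions for each eye, with nearer pixels shifted more.

```python
import numpy as np

def fake_stereo_pair(image, depth, max_disparity=12):
    """Synthesize a crude left/right pair from one image + a depth map.
    `depth` is assumed to be in [0, 1] with 1.0 = nearest; near pixels
    get the largest horizontal shift (disparity)."""
    h, w = depth.shape
    disparity = np.rint(depth * max_disparity).astype(int)
    left, right = np.zeros_like(image), np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            left[y, min(w - 1, x + d)] = image[y, x]   # left eye: shift right
            right[y, max(0, x - d)] = image[y, x]      # right eye: shift left
    return left, right  # naive forward warp; real renderers fill the holes
```

The per-eye offset is what fools your visual system; the holes this naive warp leaves behind are exactly why real implementations do fancier inpainting.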

A 46-gigapixel image of the Milky Way

Cripes—I remember when we lifted the 30,000-pixel/2GB file size limit in Photoshop. Of this monster undertaking DP Review writes,

Researchers from German university Ruhr-Universität Bochum spent half a decade creating the largest astronomical image to date, a 46-gigapixel image of the Milky Way, which is now available via an interactive online viewer. The image is made up of 46 billion pixels, and the file weighs in at a hefty 194GB in size… 22,000 Full HD TV screens would be required to display it at its full resolution.
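
For what it’s worth, the quoted figures hang together; a quick sanity check:

```latex
\frac{46\times10^{9}\ \text{pixels}}{1920\times1080\ \text{pixels per screen}} \approx 22{,}200\ \text{screens},
\qquad
\frac{194\times10^{9}\ \text{bytes}}{46\times10^{9}\ \text{pixels}} \approx 4.2\ \text{bytes per pixel}
```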

Check out the not-so-easy-to-navigate interface here. [Via Steve Mansfield]

Photography: Jetmen + A380

“We will be mosquitoes, flying with an eagle or a condor,” said wingsuited weirdo Yves Rossy of his flight with Vince Reffet and an oh-my-God that’s actually an Airbus A380.

Fun details:

Airbus’ flagship aircraft produces 70,000 lbs of thrust from each engine, versus a mere 88 lbs per Jetman engine. The total weight of the plane is 560,000 kg (1.2 million lbs), compared with 150 kg (331 lbs) per human jet.

If that’s up your alley, check out jetmen going all foo fighter with a B-17:

 

[YouTube]

Will iPad Pro finally enable pro photo workflows?

Back in 2010 photographers could not stop telling me how much they wanted to bring an iPad on trips, plug in a card, import raw images, pick the good ones, apply presets, and later have everything synced to the cloud. But I wrote last year of my sad disbelief at the “bizarre failure of our industry” to make this work well.

But now, maybe—maybe—the iPad Pro’s huge screen, crazy battery life, and MacBook Pro-class performance will change the equation. And here’s a very quiet but potentially critical change: The device supports 10x faster (theoretically) data import. Engadget writes,

[T]he iPad Pro is capable of transferring files at speeds that reach 5Gbps, whereas a USB 2.0 connection can only reach a max speed of 480Mbps. Apple had a good reason for not making a big deal out of it, though: you’ll need to get an extra USB 3.0 adapter to be able to take advantage of the capability, since the tablet only ships with a cable that can handle USB 2.0 speeds. Problem is, that adapter doesn’t exist yet, so you’ll have to deal with slower file transfers for a while longer.
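
That’s where the roughly 10x figure comes from:

```latex
\frac{5\ \text{Gbps (USB 3.0)}}{480\ \text{Mbps (USB 2.0)}} \approx 10.4\times
```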

We’ve collectively been let down (and let ourselves down) so many times here—but hope springs eternal!

A stunning hyperlapse of Istanbul

Rob Whitworth & team take us on a thrilling high- (and low-) speed zoom through the ancient & iconic city:

PetaPixel notes,

Whitworth spent about 6 weeks shooting the project with his team, with a couple of those weeks spent waiting for the weather to improve. About 295 hours were spent on location for recon and shooting, and 278 were spent afterward in post-production — there were about 71,000 RAW files after shooting was complete, so Whitworth had 3.8 terabytes of data to work through.
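
If you’re curious, those figures imply a believable size per raw frame:

```latex
\frac{3.8\times10^{12}\ \text{bytes}}{71{,}000\ \text{files}} \approx 54\ \text{MB per RAW file}
```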

Here’s a look behind the scenes at the setup of just one shot:

 

[YouTube]

Google uses VR to help veterans march

Check this out:

We’re enabling veterans everywhere to virtually march alongside 20,000 of their fellow servicemen and women in New York City. Google veterans and volunteers are using Google Cardboard to host a virtual Veterans Day march online and in VA hospitals across the country—so that every veteran has the chance to be celebrated and experience the country’s gratitude for their service.

I can’t wait to see the results.

Thank you, veterans, for your service.

Lytro unveils a spherical light field capture rig for VR

After wisely (IMHO) pivoting away from consumer imaging (where the only time most people want to refocus an image is if they’ve screwed up), Lytro is breaking out the big guns with a multi-$100k capture device, the Immerge:

Lytro Immerge is the world’s first professional Light Field solution for cinematic VR, providing lifelike presence for live action VR through Six Degrees of Freedom. It is built from the ground up to seamlessly blend live action and computer graphics (CG) using Light Field data. With configurable capture and playback solutions, it supports a range of new immersive storytelling needs.

Engadget explains,

Basically, think of this giant sphere as a large VR sensor. There are five different layers, and each layer is packed with a ring of 360-degree cameras and sensors. “One of the layers represents somewhere between three and four times the data and resolution of any VR camera that exists today”… [This] allows for accurate horizontal and vertical parallax, which is needed to realistically incorporate CGI elements in the video and have them appear as part of the natural environment.

I can’t wait to see & navigate what it captures.

[Vimeo]

Snapseed for iOS gets a faster image picker, more

It’s a small thing, but I can’t tell you how much more I enjoy using the new image picker in Snapseed for iOS than its predecessor. Grab the latest update (2.1) and check out the enhancements. From the team post:

  • A new image picker provides much quicker access to photos
  • Editing session is preserved when switching to another app
  • Style menus in filters are opened by default
  • Filter names are displayed in the title bar 
  • Tap to hide controls on main screen to see the image without distractions
  • When zoomed in, the image can be moved so that the navigator doesn’t obscure any part of the image
  • Filter selector displays 3 columns in landscape orientation on iPhones
  • Bug fixes and stability improvements


Snapseed goes raw! DNG support arrives on Android.

I’m delighted to say that DNG files, shot directly on Android phones or converted from other formats, can now be edited in Google Snapseed for Android. When you open these images in the new Snapseed 2.1 (rolling out now, so please check back in the Play Store if it’s not yet available where you are), a new Develop filter module gives you great control over highlights, shadows, and white balance—just as you’d expect when working with raw.

Some phones, including the LG G4, HTC One M9, OnePlus One, Oppo N1, Oppo N3, and Oppo Find 7, can shoot DNG photos with their built-in camera apps. Others, including the Samsung S6, Samsung S6 Edge, Nexus 5, and Nexus 6, require a third-party camera app to shoot DNGs. Devices need at least 1.5GB of RAM & good OpenGL support.

Happy shooting, and please let us know what you think!


Enormous photo composites of cathedrals

Photographer Markus Brunetti has spent years relentlessly honing the craft of capturing, compositing, and printing giant (up to 10’) representations of Europe’s cathedrals. Khoi Vinh writes,

Brunetti creates stunning photographs of European cathedrals from countless source images that he takes and painstakingly composites into photographic equivalents of elevation drawings. The results are intricately unreal; perspective is dramatically flattened, light is almost impossibly even, and all signs of human activity are removed—in effect, Brunetti reconstructs the original architectural ideal that motivated each structure.

Enjoy:

[YouTube]

100+ million people adopt Google Photos in five months!

To echo my teammates, thanks for the amazingly warm reception!

PM data whiz Chris Perry has posted 11 interesting insights into what & where people shoot most often, what they search for (babies! dogs! duh :-)), and more. My favorite detail: “We’ve freed up 3,720 terabytes of storage. That’s like filling up a 16GB phone with photos every day for 637 years.”
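
The phone analogy holds up arithmetically, if you check:

```latex
16\ \text{GB/day} \times 365\ \text{days/year} \times 637\ \text{years} \approx 3{,}720{,}000\ \text{GB} = 3{,}720\ \text{TB}
```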

This scale of user community is such a change for me. My work on Photoshop would reach a couple of million people, and I loved knowing that many of them would use a given enhancement dozens or even hundreds of times per day. At Google the work can reach orders of magnitude more people, but naturally the average person’s use will be far more casual. Both kinds of impact can be very satisfying—just very different.

More great stuff is on its way, and comments & questions are always welcome. Onward!

[YouTube]

So, wirelessly shooting down drones is now a thing

Be advised, photogs.

The DroneDefender may be our first look at the perfect anti-drone technology. The device, which looks like a modern rifle with an antenna mechanism attached to the front — because that’s basically what it is — uses targeted radio waves to force drones out of the sky. The nondestructive tech “utilizes a non-kinetic solution to defend airspace up to 400m against UAS, such as quadcopters and hexacopters, without compromising safety or risking collateral damage.”

[YouTube] [Via]

The Apollo archives brought to life

“I was so inspired by the recent release of the Apollo Archives,” Tom Kucy tells PetaPixel, “that I decided these would be interesting to be seen in motion as if straight out of a science fiction movie. I used Adobe After Effects and Photoshop to bring some of the best photos (1960’s) to life.” I love the meditative effect:

In a totally different vein, but using the same assets, is this rather hyperkinetic stop motion piece:


 [Vimeo 1 & 2]

Huge new photo archives, from the Depression to Apollo

  • “Kipp Teague of the Project Apollo Archives has been working since 1999,” reports FastCo, “to digitize the film (his is a private endeavor, not a NASA-sponsored initiative) and has recently released over 8,000 high-resolution, unprocessed photographs to a Flickr album.”
  • Meanwhile Yale has made 170,000 Library of Congress photos (shot 1935 to 1945) available via their Photogrammar site. According to the project site, “The Farm Security Administration-Office of War Information (FSA-OWI) produced some of the most iconic images of the Great Depression and World War II and included photographers such as Dorothea Lange, Walker Evans, and Arthur Rothstein who shaped the visual culture of the era both in its moment and in American memory. Unit photographers were sent across the country. The negatives were sent to Washington, DC. The growing collection came to be known as ‘The File.’ With the United States’ entry into WWII, the unit moved into the Office of War Information and the collection became known as the FSA-OWI File.” [Via]

Light L16, a 52-megapixel computational camera for your pocket

This new beast looks really interesting, although I remain skeptical about photographers’ real desire to refocus shots after capture & to pay $1699 for the capability:

Using a new approach to folded optics design, the Light L16 Camera packs DSLR quality into a slim and streamlined camera body. It’s like having a camera body, zoom, and 3 fast prime lenses right in your pocket. With 16 individual cameras, 10 of them firing simultaneously, the L16 captures the detail of your shot at multiple fixed focal lengths. Then the images are computationally fused to create an incredible high-quality final image with up to 52 megapixel resolution.

You won’t be surprised to learn that it’s generating lots of conversation (e.g. in the PetaPixel comments). [Update: Here’s a ton of detail about the device.]

[Vimeo] [Via]

Throwback Thursday: “Monument Mode”

Hey, what if you took a Photoshop feature from 2005 (later mass-automated by Google in 2013) and put it onto a mobile device in 2015, but this time apparently required the capture device to be locked down on a tripod? You’d get “Monument Mode”:

Sorry for the snark. It’s just that I’ve seen (and shipped!) this one before, as have many others. Maybe I’m missing some new wrinkle, but in my experience it’s just fun demo-ware. We shall see!

[YouTube]

Coming soon to Google Photos: Collaborative albums & more

Want to pool your kid photos with your partner so that your parents can always stay up to date? Or search for a person by name, or share your photos on a big screen via Chromecast? It’s all rolling out—some now, some coming soon—across Android, iOS, and web.

Per the team’s post:

People Labeling

Label the people in your photos by what you call them, name or nickname.

This week in the U.S. you’ll be able to label the people in your photos however you want – Mom can be “Mom”, “Juliana”, or “Cat Lady” – whatever you choose. These labels are completely private to you and are not associated with a Google account or profile. Once people in your photos are labeled, you can make advanced searches to find photos of people with things, places or people, such as “Mom at the beach” or “Juliana and Marco in Hawaii.”

People labeling is rolling out in the U.S. this week on Android and is coming soon to iOS and the web.

Shared Albums

Gather all your photos and videos from friends and family in one spot, and know as soon as new moments are added.

We’re introducing shared albums later this year – a new, easy way to pool photos and videos with whomever you want, and get updates when new moments are added. There’s no setup involved, and you can use shared albums on any device – Android, iOS, Mac, Windows and Chrome OS.

I’ve been testing these features for a while & think you’ll really like ‘em.

Every generation of iPhone camera compared

As groundbreaking as its capture experience (and mere existence!) was, the 2-megapixel cam in the original iPhone was, we can now admit, really godawful; hence the heavy, pancake-makeup approach of the image filters of the day. I remember training myself to compensate for the profound shutter lag as if I was Luke Skywalker donning a blast helmet. It wasn’t, “Hey kid, look at me [press shutter],” but rather, “Hey kid, [press shutter] look at me.”

But progress has been swift & amazing, and I can instantly visually carbon-date pics of my kids by the quality of the phone-captured shots. Now photographer Lisa Bettany has produced beautiful interactive side-by-side comparisons of every generation of iPhone camera. Talk about night-and-day differences (to say nothing of burst mode, HDR, going from zero video to optically stabilized 4k, and more).

As for the future, let’s hope that next year we’re raving about 3D depth sensing enabling SLR-like background separation. Staying tuned…


Snapseed gets a more powerful Healing Brush, more

Do you live in a world where every blemish, random bird, stray pedestrian, and telephone wire is perfectly round? Me neither!

Therefore I think you’ll really like Snapseed’s new ability to heal arbitrary-shaped regions. Just tap the filter selector, tap Healing, and then paint away the bits you’d like to omit. And of course these operations are, like everything else in the new Snapseed, non-destructive, meaning that you can go back and re-edit them and/or copy/paste them among images.

The update (2.0.4) should now be live on the App Store & Play Store. It also squashes some bugs & adds support for Traditional Chinese (Hong Kong) and Canadian French.

Here’s an animation of healing in action: 


Photo essay: “The Mind-Bending Bus Stops Of The Former Soviet Union”

Christopher Herwig finds weird, austere beauty on the steppes:

Photographer Christopher Herwig has been hunting bus stops in remote corners of the former Soviet Union since he stumbled upon them while biking to St. Petersburg in 2002. He has covered more than 30,000 km by car, bus and taxi in 13 countries discovering and documenting these strange works of art created behind the Iron Curtain. From the shores of the Black Sea to the endless Kazakh steppe, the bus stops show the range of public art from the Soviet era and give a rare glimpse into the creative minds of the time. Herwig’s series attracted considerable media interest around the world, and now with the project complete, the full collection will be presented in Soviet Bus Stops as a deluxe, limited edition, hard cover photo book. The book represents the most comprehensive and diverse collection of Soviet bus stop design ever assembled.

[Vimeo] [Via]

“Camera Restricta” prevents shooting unoriginal photos

Got a case of vemödalen (“the frustration of photographing something amazing when thousands of identical photos already exist”)? Or perhaps you’ve just wanted a camera that sounds like a Geiger counter while blurting “NEIN” at you in big red letters?

Philipp Schmitt’s Camera Restricta concept wants to help. PetaPixel explains,

“Camera Restricta introduces new limitations to prevent an overflow of digital imagery,” he says. “As a byproduct, these limitations also bring about new sensations like the thrill of being the first or last person to photograph a certain place.”

[Vimeo]

Photography: New Google/MIT algorithm removes visual clutter

Adios, bothersome fences, reflections, etc. That’s presuming that normal users would be sufficiently motivated to move their devices during capture. Time will hopefully tell.

The video accompanying our SIGGRAPH 2015 paper, “A Computational Approach for Obstruction-Free Photography.” We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.
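
The paper’s actual pipeline jointly estimates dense motion for both the background and obstruction layers; as a rough taste of the underlying parallax idea, here’s a minimal Python/OpenCV sketch of my own (a toy approximation, emphatically not the authors’ method) that aligns a handheld burst on the background and median-stacks the frames so anything moving differently, like a fence or reflection, tends to drop out:

```python
import cv2
import numpy as np

def median_stack_deobstruct(frames):
    """Toy obstruction removal: ECC-align a burst on its dominant
    (background) motion, then take a per-pixel median so pixels that
    move differently (fence, reflection) get outvoted."""
    ref = frames[0]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    aligned = [ref]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)
        # ECC locks onto the dominant motion, i.e. the background
        _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                       cv2.MOTION_AFFINE, criteria)
        aligned.append(cv2.warpAffine(
            frame, warp, (ref.shape[1], ref.shape[0]),
            flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP))
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)
```

A median only works when the background dominates each pixel across the burst; the paper’s motion-based layer separation degrades far more gracefully, which is the whole point.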

[YouTube]

Podcast: Me & Andy Ihnatko, down by the schoolyard

I had a ball sitting down with my ex-Photoshop/current Google Photos friend Aravind Krishnaswamy to chat with Andy Ihnatko, Russell Ivanovic, and Yasmine Evjen for this week’s Material Podcast. We talked about computer vision, the future of memory keeping, my wife hypothetically getting bum-rushed by a lady from the Clinique counter, and much more. (Oh, and the jury’s still out on whether there were snakes in the wall. You’ll see.)

Grab the MP3 here.

Utterly bananas: Robbie Maddison’s Pipe Dream

Did I just see… this?

DC presents Robbie “Maddo” Maddison’s “Pipe Dream,” giving the world a chance to witness history being made as Maddo rides his dirt bike on the powerful and iconic waves of Tahiti. From his helmet to motocross boots, Maddo was dressed for FMX when he took his dirt bike into the uncharted saltwater terrain of the Pacific Ocean in French Polynesia.

So how was it done? See the making-of:

[YouTube 1 & 2]

Make & explore 360º panoramas with the new Google Street View app

I love capturing panos via The App Formerly Known As Photo Sphere, now significantly updated & renamed Street View (download for iOS / Android). PetaPixel writes,

Users can quickly browse all available traditional Street View content in addition to the newer 360-degree photospheres. Simply input a location, zoom in, and you are ready to start walking the streets of your favorite city. You can also explore beautiful photography through a pull-up tab that displays presorted collections and the ‘Explore’ tab. If you want to create your own photosphere you can do so, but will need a smartphone that contains a gyroscope sensor.

I particularly enjoy uploading my spheres to Google Maps to help other people explore the places I’ve visited.


Coincidentally, Ricoh just introduced the Theta S, a new version of their spherical 360º capture camera that generates Street View-compatible images. Check out this 360º video that you can spin around while streaming from YouTube:



[YouTube]

Neural algorithm makes photos emulate famous paintings

Back in 2003 we blew a lot of minds by showing Photoshop’s Match Color feature sucking up the color palette of one photo or painting, then depositing it onto another. This kind of thing kept getting love as it evolved (see 2010 demo), eventually matching lighting among images. As far as I know no one has ended up using such functionality in practice (and yes, Match Color is still sitting in Photoshop on your hard drive right now), but it’s still cool.

Now the tech has taken another leap forward. Per PetaPixel,

In a newly published research paper titled “A Neural Algorithm of Artistic Style,” scientists at the University of Tübingen in Germany describe how their deep neural network can create new artistic images when provided with a random photo and a painting to learn style from.

“Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality,” the paper says. “The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images.”
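
The gist of the method is surprisingly sketchable: optimize an image’s pixels so that its deep VGG features match the content photo while the Gram matrices of those features (channel-to-channel correlations, which capture “style”) match the painting. Here’s a minimal PyTorch sketch of my own devising, assuming a recent torchvision (for VGG19_Weights) and content/style images as normalized (1, 3, H, W) tensors; the layer choices and helper names are mine, not the authors’ code:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG-19 feature extractor (ImageNet weights)
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
for layer in vgg:
    if isinstance(layer, torch.nn.ReLU):
        layer.inplace = False  # keep stored conv activations intact

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1
CONTENT_LAYER = 21                 # conv4_2

def extract(x):
    """Collect style- and content-layer activations in one pass."""
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(feat):
    """Gram matrix: correlations between feature channels ('style')."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content_img, style_img, steps=300, style_weight=1e6):
    """Optimize pixels to match content features + style Gram matrices."""
    target = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    style_grams = [gram(s).detach() for s in extract(style_img)[0]]
    content_feat = extract(content_img)[1].detach()
    for _ in range(steps):
        opt.zero_grad()
        style_feats, c_feat = extract(target)
        loss = F.mse_loss(c_feat, content_feat) + style_weight * sum(
            F.mse_loss(gram(s), g) for s, g in zip(style_feats, style_grams))
        loss.backward()
        opt.step()
    return target.detach()
```

The separation the paper describes falls out of that split loss: content lives in the raw activations, style in their correlations, and the optimizer gets to satisfy both at once.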

Check out many more examples via the article.


Photography: A new flyby of Pluto

Props to Bjorn Jonsson for assembling NASA photos into this animation:


The time covered is 09:35 to 13:35 (closest approach occurred near 11:50). Pluto’s atmosphere is included and should be fairly realistic from about 10 seconds into the animation and to the end. Earlier it is largely just guesswork that can be improved in the future once all data has been downlinked from the spacecraft. Light from Pluto’s satellite Charon illuminates Pluto’s night side but is exaggerated here, in reality it would be only barely visible or not visible at all.

[Vimeo] [Via]