Visiting the NY Times was always among the real treats of my time working on Photoshop. I was always struck by the thoughtfulness & professionalism of the staff, but also by the gritty, brass-tacks considerations of cranking through thousands of images daily, often using some pretty dated infrastructure.
The morgue contains photos from as far back as the late 19th century, and many of its contents have tremendous historical value—some that are not stored anywhere else in the world. In 2015, a broken pipe flooded the archival library, putting the entire collection at risk. Luckily, only minor damage was done, but the event raised the question: How can some of the company’s most precious physical assets be safely stored?
I believe we can’t abandon our sense of adventure just because we lose our ability to see, and it has become my goal to help people living with similar challenges and show them that anything is possible.
In 2013, I became the first blind person to kayak the entire 226 miles of the Colorado River through the Grand Canyon. But I always felt it didn’t mean anything unless I found a way to pay it forward. So I joined up with the good folks at Team River Runner, a nonprofit dedicated to providing all veterans and their families an opportunity to find health, healing, community, and purpose. Together we set the audacious goal of supporting four other blind veterans on a trip down the Grand Canyon.
The academic research they’ve shared, however, promises to go further, enabling VR-friendly panoramas with parallax. The promise is basically “Take 30 seconds to shoot a series of images, then allow another 30 seconds for processing.” The first portion might well be automated, enabling the user to simply pan slowly across a scene.
This teaser vid shows how scenes are preserved in 3D, enabling post-capture effects like submerging them in water:
Will we see this tech ship in FB, and if so, when? Your guess is as good as mine, but I find the progress exciting.
Time, they say, has the nice property of keeping everything from happening at once. But what would it look like if everything did happen at once?
Photographer Páraic McGloughlin hung out on a bridge in Sligo, Ireland, for 19 hours to create a single, day-long shot that he then manipulated. Colossal writes,
“Using a fundamental image (a time lapse) to mask and cut into, I tried to show the variable possibilities within a limited time span, maintaining the integrity of each individual photograph while dissecting and rearranging the overall image.” The visual content was matched with each layer of audio created by Cooper to form the song, which stacks up to over one hundred layers.
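The core idea — cutting into a time lapse so that one composite image spans the whole day — can be sketched in a few lines of NumPy. This is my own illustration of the general “time slice” technique, not McGloughlin’s actual process: each column of the output is sampled from a different frame, so the left edge shows the start of the sequence and the right edge the end.

```python
import numpy as np

def time_slice(frames: np.ndarray) -> np.ndarray:
    """Composite a time-lapse stack so each column of the output
    is sampled from a different moment in the sequence.

    frames: array of shape (T, H, W, C) -- the time-lapse stack.
    Returns an (H, W, C) image whose left edge comes from the first
    frame and whose right edge comes from the last.
    """
    t, h, w, c = frames.shape
    # Map each output column x to a frame index, spread evenly across time.
    idx = np.linspace(0, t - 1, w).round().astype(int)
    out = np.empty((h, w, c), dtype=frames.dtype)
    for x in range(w):
        out[:, x] = frames[idx[x], :, x]
    return out
```

Swapping the linear column-to-time mapping for an arbitrary mask is what lets an artist “dissect and rearrange” the image rather than just sweep left to right.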
My colleague Richard is in charge of Google News, and in addition to doing a million other interesting things, he’s an accomplished aerial photographer. I enjoy the perspectives—literal & figurative—he shares in this meditative piece:
“We’ve been designing and refining FPV drones for five years now. When Kilian spoke about his idea of putting a GoPro Fusion on one of our drones, we were intrigued and thrilled by this new challenge. The design and flying of this setup are so different from what we’re used to; there were loads of crashes, but the end result is so refreshing and pushes the drone shot to the next level.” — Pierre, engineer at Cinematic Flow
Now, how about a look behind the scenes?
We trust the cam, keep the rig flying light, and clench our buttocks until it’s time to export!
Wow—check out this amazing sneak peek from Adobe’s Long Mai (see paper):
It enables any photograph to be turned into a live photo, animating the image in 3D and simulating the realistic effect of flying through the scene.
This is especially dear to my heart.
As a brand-new Photoshop PM (in 2002—gah!), one of my first trips was back to NYC to visit motion graphics artists. Touring one shop, I was amazed to glimpse a technique I’d never seen: using Photoshop to break 2D photos into layers, fill in gaps, and then animate the results in After Effects. Later that year the work came to the big screen in The Kid Stays in the Picture, the documentary that now lends its name to this ubiquitous parallax effect.
Here Yorgo Alexopoulos talks about how he developed the technique & how he’s leveraged it in later works:
So, while we wait for Adobe’s new tech to ship, how could one do this by hand? Below, artist Joe Fellows gives a brief, highly watchable demo of how it’s done (although it physically pains me to see him using the Pen tool to make selections & no Content-Aware Fill to at least block in the gaps):
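Under the hood, the effect is simple once the photo is cut into layers: pan a virtual camera and shift each layer by an amount proportional to its nearness, then composite back to front. Here’s a minimal sketch of that idea in NumPy — my own toy version, not Adobe’s or Fellows’s pipeline, and it cheats by wrapping pixels around instead of inpainting the revealed gaps:

```python
import numpy as np

def parallax_frames(layers, depths, n_frames=30, max_shift=20):
    """layers: list of (H, W, 4) RGBA arrays, ordered back to front.
    depths: parallax factor per layer in [0, 1]; 0 = distant background
    (static), 1 = nearest layer (full shift).
    Yields composited frames as the virtual camera pans horizontally."""
    for f in range(n_frames):
        # Camera offset for this frame, sweeping 0 -> max_shift.
        cam = max_shift * f / max(n_frames - 1, 1)
        frame = np.zeros_like(layers[0], dtype=np.float32)
        for layer, d in zip(layers, depths):
            dx = int(round(cam * d))  # nearer layers move more
            # np.roll wraps around; a real pipeline would instead reveal
            # inpainted (Content-Aware-Filled) background here.
            shifted = np.roll(layer.astype(np.float32), dx, axis=1)
            a = shifted[..., 3:4] / 255.0
            # Standard "over" alpha compositing, back to front.
            frame = shifted * a + frame * (1 - a)
        yield frame.astype(np.uint8)
```

The gap-filling step is exactly why Content-Aware Fill matters here: once the foreground slides, something has to be behind it.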
Man, I used to hate demoing alongside After Effects during internal Adobe events: We had Photoshop, sure—but they were Photoshop on wheels. You could just pencil them in for the Top Gun trophy nearly every time.
Making Content-Aware Fill work at all is hard—but making it effective over multiple frames (“temporally coherent,” in our nerdy parlance)? Well, that requires FM technology—F’ing Magic. Here’s a naive implementation (not from Adobe):
Cool, artsy—but generally not so useful. And here (at 1:50:44) it is as the After Effects team intends to ship it next year (first sneak-peeked last year as Project Cloak):
Special props to Jason Levine for vamping through the calculation phase & then going full “When Harry Met Sally deli scene” at the conclusion. As a friend noted, “I’ll have what he’s having.” 😝
[Y]ou just take a photo in Portrait mode using your compatible dual-lens smartphone, then share as a 3D photo on Facebook where you can scroll, pan and tilt to see the photo in realistic 3D… Everyone will be able to see 3D photos in News Feed and VR today, while the ability to create and share 3D photos begins to roll out today and will be available to everyone in the coming weeks.
Check out their post for tips on composing a 3D-friendly image (e.g. include lots of foreground/background separation; avoid transparent objects like drinking glasses).