I’d heard Todor and Jeff talk about plenoptic camera research, but it wasn’t until reader Joe Lencioni mentioned this Stanford work that I followed up. Wow. If nothing else, check out this video demonstrating how images can be refocused after the fact. (For background, Todor notes the word “plenoptic” was coined by Ted Adelson in this 1992 paper.) Wired News coverage is here.
Being more an Arts & Letters guy (read: math Cro-Mag), I tend to dwell on the social aspects of technology, and I wonder how photographers might react to these developments. There’s already a vocal minority of strident anti-raw shooters who say, “Raw is for when you plan to get the shot wrong.” That is, the post-processing flexibility that raw enables lets bad photographers sweep ever more mistakes under the carpet. What would they say about something that forgave flaws in focusing? It’s also funny to note that as technology like this makes it possible to keep more items in focus, technology like Photoshop’s Lens Blur works in the opposite direction, letting you add a “Bokeh” effect to otherwise crisp shots.
Personally, I’d love to see the concept of taking multiple captures in a single pass used to enable greater dynamic range. Wouldn’t it be great to effectively auto-bracket shots simultaneously, instead of in quick succession?
The concept of bracketing photographs around an exposure is an old one. However, the ability to automatically bracket exposures and then combine them into a high dynamic range image would be most appreciated. Of course, histogram tools will become much more important with these types of images. I am specifically thinking that tools letting one work on an image almost exclusively from within the histogram would be welcome additions to Photoshop. Oh, and don’t forget the much needed and oft requested histogram export tool.
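To make the idea concrete, here's a minimal sketch in Python/NumPy of merging bracketed captures into a single radiance estimate. The triangular weighting function and the simulated brackets are illustrative assumptions; real HDR tools calibrate the camera's response curve rather than assuming a linear sensor.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed 8-bit captures into one high-dynamic-range
    radiance estimate (simple weighted average; illustrative only)."""
    acc = np.zeros(np.shape(images[0]), dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        p = np.asarray(img, dtype=np.float64) / 255.0
        # Trust mid-tones most: near-black and near-white pixels
        # carry little information in an 8-bit capture.
        w = 1.0 - np.abs(p - 0.5) * 2.0
        acc += w * p / t          # p / t estimates scene radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)

# Three simulated brackets of the same scene, assuming a linear sensor
radiance = np.array([0.2, 1.0, 5.0])   # "true" scene radiance (made up)
times = [0.01, 0.04, 0.16]             # 1/100, 1/25, ~1/6 s
brackets = [np.clip(radiance * t * 255, 0, 255).astype(np.uint8)
            for t in times]
hdr = merge_exposures(brackets, times)
```

The brightest patch clips in the longest exposure and the darkest patch is nearly black in the shortest, yet the merged estimate recovers all three radiance values because each is well exposed in at least one bracket.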
As I see it, the only real limiting factor is storage space. It should be possible at some future date to capture an HDR plenoptic raw image so that all exposure and focus decisions could be made in post-processing. However, such a file would probably be in the GB range for a single image, at least if you want a high-res final result. But at least the cameras could be simpler, needing no focusing or exposure mechanism.
I could see some merit to the idea of being able to correct focusing errors after the fact.
The biggest problem with APS-C-sensor digital cameras is that there’s too much depth of field most of the time. That’s one reason I prefer to shoot with an EOS 1Ds Mark II. Less depth of field lets me place the focus at the correct point and then let it fall off in front of and behind the subject in a way I’m familiar with, and which lets me direct the viewer’s attention where I want.
Just like the anti-raw Luddites who have no clue that raw is no different than having a digital negative – only better. Infinite depth of field does not look natural. Our eyes have pretty shallow depth of field, in fact, so too much depth of field looks unnatural.
This reminds me of a conversation I once had with a photo editor at the Portland Oregonian. He told me that some day they would send out photographers who would go to a scene – such as a riot or a garden party – hold up a camera, swing it around in a circle, and then the editor back at the paper would pick the right image. This showed he had no clue that photos are more than just “f/8 and be there.” It takes a sense of timing, understanding how the camera sees, angle of view, and being at the event to know what is actually important to photograph.
He wanted to reduce photojournalists to the equivalent of trained chimps. In the end, it would completely ignore the amount of skill it takes to make good photographs.
This new tool might be used to cover up some people’s lack of skill, but it could also fix photos that in the heat of battle were focused less than perfectly. And that doesn’t ignore the skill of the photographer who had the ability – or insight – to make a good composition, but was simply a tad bit off in terms of focus. How could that be a bad thing?
Good photos are more than the physical skill to place focus perfectly. Especially nowadays in this era of auto focus. (I go back way beyond that). It could be a tool that can be used well or badly. It’s up to the photographer to know which it is.
This could be done if the camera recorded not only the final value of each individual pixel, but the accumulation of that value over small time slices. So, like a mini-video: for a 1/100th of a second exposure there’d be, for example, a reading every 1/10,000th of a second, i.e. 100 sub-exposures spanning the whole duration of the exposure.
Software could then hold back the highlights by looking at only the first portion of the exposure in a particular region of the image.
Similarly, small amounts of motion blur could be corrected by using a smaller portion of the time slices (i.e. a shorter effective exposure) and adjusting.
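The highlight-recovery part of this idea can be sketched in a few lines of Python/NumPy. This is a toy model under the assumptions above: each pixel reports its accumulated light per time slice, the scene is constant over the exposure, and the sensor clips at a known level. Where the full sum would clip, we extrapolate from the leading unclipped slices.

```python
import numpy as np

def hold_back_highlights(slice_values, clip=1.0):
    """slice_values: (n_slices, ...) array of per-slice light
    accumulations per pixel (the 'mini-video' idea above).
    Returns an estimate of the full exposure, even for pixels
    whose total would have clipped the sensor."""
    s = np.asarray(slice_values, dtype=np.float64)
    n = s.shape[0]
    cum = np.cumsum(s, axis=0)        # running exposure per pixel
    under = cum < clip                # slices recorded before clipping
    k = np.maximum(under.sum(axis=0), 1)   # usable leading slices
    # Partial sum over the first k (unclipped) slices:
    partial = np.take_along_axis(cum, (k - 1)[None, ...], axis=0)[0]
    # Extrapolate to the full exposure, assuming constant light.
    return partial * (n / k)
```

For example, with ten slices and a clip point of 1.0, a bright pixel receiving 0.3 per slice would saturate after four slices on a conventional sensor; here its first three unclipped slices (sum 0.9) are scaled by 10/3 to recover the true total of 3.0, while a dim pixel at 0.05 per slice is returned unchanged.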