“We now snap more photos in two minutes than were captured in the entire 19th century,” and people have already spent more than 900 years working in Photoshop Touch (!).
Check out the talk for more interesting info & demos. Skip to around the 23-minute mark to see camera shake reduction. Around 31 minutes they show that tech being run in the cloud.
[Via Andrew Kavanagh]
Interesting to see that the deshake filter supports different regions to account for the different shake patterns caused by parallax. Technically, though, I don’t understand why it shouldn’t work with subject motion, as she states. From a mathematical point of view there should be no difference between a moving background and a moving subject, as far as I can tell, provided, of course, that the blur-kernel estimates for the individual regions are treated as completely independent by the deconvolution algorithm, and that it is not reconstructing a camera transformation matrix and a z-depth channel.
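To make the argument concrete, here is a minimal sketch of that equivalence: if each region gets its own independently estimated blur kernel and is deconvolved on its own, the math is identical whether the kernel came from camera shake or from subject motion. This is a toy Wiener deconvolution on synthetic data, not Adobe's actual algorithm; the kernel shapes and SNR values are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=100.0):
    """Frequency-domain Wiener deconvolution of a single region."""
    H = np.fft.fft2(kernel, s=blurred.shape)  # kernel transfer function
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR)
    F = np.conj(H) * G / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(F))

# Two regions of one image, each blurred by a *different* kernel.
rng = np.random.default_rng(0)
region_a = rng.random((64, 64))  # "background" region
region_b = rng.random((64, 64))  # "moving subject" region

kern_a = np.zeros((64, 64)); kern_a[0, :5] = 1 / 5  # horizontal camera shake
kern_b = np.zeros((64, 64)); kern_b[:5, 0] = 1 / 5  # vertical subject motion

def blur(img, k):
    # Circular convolution via FFT multiplication.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k)))

# Each region is restored independently; the deconvolution never needs to
# know which kernel was "camera" and which was "subject".
restored_a = wiener_deconvolve(blur(region_a, kern_a), kern_a, snr=1e8)
restored_b = wiener_deconvolve(blur(region_b, kern_b), kern_b, snr=1e8)

err_a = np.max(np.abs(restored_a - region_a))
err_b = np.max(np.abs(restored_b - region_b))
```

Both regions come back essentially exactly (the SNR is set very high because the test input is noiseless), which is the point: per-region deconvolution only cares about the local kernel, not about what physically caused the blur. Where a depth-aware model would differ is exactly the case the comment excludes, i.e. reconstructing a global camera transform plus z-depth.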
By the way, this would be a really good addition to the Sharpening toolset of Camera Raw/Lightroom, since shake reduction is a fairly common and useful operation that requires very little local control (i.e. no brushing of any kind). Plus, it would have access to metadata (ISO and exposure settings) that is guaranteed to match the image data: at least for raw files, the user cannot have run a noise reduction plugin in the meantime or made exposure changes that would amplify or otherwise affect the noise. That should make any math that accounts for estimation errors caused by noise more reliable, I would assume.
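A toy illustration of that last point: when the metadata is guaranteed to match the pixels, the noise term in the deconvolution can be set directly from ISO rather than estimated from a possibly already-denoised image. The ISO-to-noise mapping below is a made-up placeholder, not a real sensor model; the function names and constants are my own assumptions.

```python
def noise_sigma_from_iso(iso, base_sigma=0.002):
    # Assumed toy model: noise standard deviation scales linearly with ISO
    # relative to base ISO 100. A real model would be calibrated per sensor.
    return base_sigma * (iso / 100.0)

def wiener_regularizer(iso, signal_var=0.08):
    # Wiener regularization term 1/SNR = noise power / signal power.
    # Trustworthy metadata lets this be set without inspecting the pixels.
    sigma = noise_sigma_from_iso(iso)
    return sigma ** 2 / signal_var

# Higher ISO -> more noise -> stronger regularization -> gentler deconvolution.
reg_lo = wiener_regularizer(100)    # base ISO
reg_hi = wiener_regularizer(3200)   # high ISO
```

If the user had run third-party noise reduction in between, the actual noise would no longer match the ISO-derived estimate and the regularization would be too strong, which is exactly the failure mode a raw-only pipeline avoids.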