At NVIDIA’s technology conference this week, Adobe researcher Todor Georgiev demonstrated GPU-accelerated processing of plenoptic images. As Engadget puts it, “Basically, a plenoptic lens is composed of a litany of tiny ‘sub-lenses,’ which allow those precious photons you’re capturing to be recorded from multiple perspectives.” Plenoptic image capture could open the door to easier object matting/removal (as the scene can be segmented by depth), variable perspective after capture, and more.
This brief demo takes a little while to get going, but I still think it’s interesting enough to share.
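For the curious, the refocusing shown in the demo can be approximated with a classic “shift and add” over the sub-lens views. This is a minimal sketch, not Georgiev’s actual GPU implementation: the toy light-field array `lightfield[u, v, y, x]` and the `alpha` refocus parameter are illustrative assumptions.

```python
# Hedged sketch of synthetic refocusing from a 4D light field.
# Each sub-aperture view is shifted in proportion to its (u, v) offset
# from the center of the lens array, then all views are averaged;
# varying `alpha` moves the synthetic focal plane.
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocus of a light field shaped (U, V, H, W)."""
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))  # vertical shift for this view
            dx = int(round(alpha * (v - cv)))  # horizontal shift for this view
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy usage: a 3x3 grid of 8x8 views; alpha=0 is just a plain average.
lf = np.random.rand(3, 3, 8, 8)
refocused = refocus(lf, 1.0)
```

With `alpha = 0` no view is shifted, so the result reduces to the mean of all sub-aperture images; nonzero `alpha` brings different depths into focus.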
Very, very interesting. (A far better presentation of this feature/effect could be made, though; I hope someone does it.)
Holy cow
How does something like this get applied practically? Buy all new lenses for your camera? Thread on some kind of filter version? How many stops do you lose?
No, it’s done with a microlens array in front of the sensor, not a new main lens, so you don’t lose any stops.
It will probably arrive as a new camera that’s less expensive and higher quality, capturing 3D images that can be refocused after the fact.
Sherman, set the WABAC machine for November 22, 2005.
Golly, Mr. Peabody, where are we going?
http://blogs.adobe.com/jnack/2005/11/plenoptic_cameras_whistle_typelow_apprecia.html
😛
Apparently there is a plenoptic camera on the market.
http://www.petapixel.com/2010/09/23/the-first-plenoptic-camera-on-the-market/
I knew about this concept, but seeing it in action is something else.
Very impressive.