Adobe's Computational Photography Using Plenoptic Lens Tech

After a brief demonstration during the keynote address at Nvidia's GPU Technology Conference, Adobe went into more detail about computational photography using plenoptic lenses, a method of taking pictures so that any part of a photo can be brought into focus after the fact. By pairing an array of tiny lenses with rendering software, users will be able to select what they want in focus even after the photo has been taken. The technology will also let users create 3D images on the fly.
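
To get a feel for how after-the-fact refocusing works, here is a minimal sketch of the standard shift-and-add approach used with light-field data. It assumes the raw plenoptic capture has already been decoded into a grid of sub-aperture views (one image per viewpoint); the function name, the `alpha` parameter, and the grid layout are illustrative, not Adobe's actual pipeline.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing over a grid of sub-aperture views.

    light_field: array of shape (U, V, H, W), a U x V grid of
        grayscale views, one per viewpoint behind the main lens.
    alpha: refocus parameter; 0 leaves the views aligned as captured,
        and larger magnitudes move the synthetic focal plane.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # central viewpoint, then average. Scene points at the
            # chosen depth line up across views and come out sharp;
            # everything else smears into defocus blur.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy stand-in for a decoded light field: a 7 x 7 grid of views.
lf = np.random.rand(7, 7, 128, 128)
near = refocus(lf, alpha=1.0)   # focus closer to the camera
far = refocus(lf, alpha=-1.0)   # focus farther away
```

Selecting what you want in focus then amounts to picking the shift amount whose focal plane contains the clicked-on object, which is why the choice can be deferred until long after capture.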

"Adobe placed a plenoptic lens–basically, hundreds of tiny lenses crammed together–in between a camera's lens and the image sensor. In this case, they used a medium-format camera and a digital back, probably because it afforded more space than a smaller DSLR. By placing the plenoptic lens here, camera's sensor now records what looks like a bunch of fragmented images, but in reality, each fragment contains more information about individual light rays entering the camera. Traditionally, when a camera takes a picture, a ray of light enters the lens and gets recorded on a specific spot. With a plenoptic lens, that same light ray passes through several lenses before making it to sensor, so it's getting recorded from several different perspectives. Because there's all these tiny little lenses in front of the sensor, the resulting image."

[Source]