Sunday, May 4, 2014

A Brief Overview of Lightfield Photography Part One

The following is an updated version of an article I wrote for Make Magazine. A much slimmed-down version with some great graphics appears on pages 60 and 61 of volume 38. The online version, which more closely resembles what is below, can be found here.

The Lytro camera has been around for a couple of years, and the feature most people seem to talk about is the ability to fix or change the focal point of a picture after it has been taken. But that isn't the whole story; in fact, it's just the tip of the iceberg.

Traditional cameras, whether digital or analog, capture a scene from a single point of view. Photoshop and similar software enable amazing things to be done to these images, but it's unlikely that any amount of post-processing will give photographers the ability to slightly change the perspective of an image, change its focal point, or render it in 3D. To do those things you need to have captured the entire "light field". A light field records not only the color and intensity of each light ray but also the direction it came from. In addition, the light field comprises all of the light rays that enter the camera, not just those that are focused onto a traditional camera's sensor or film to create a single-point-of-view 2D image.

The first-generation Lytro captures the light field by directing the light rays that enter the main lens onto an array of over one hundred thousand micro lenses. The micro lenses in turn direct the light rays onto a 6.5 x 4.5 mm CMOS sensor that can capture eleven million light rays arranged in a 3280 by 3280 grid. Each micro lens uses a roughly ten by ten pixel portion of the CMOS sensor.
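Those numbers fit together neatly. Here is a quick back-of-the-envelope check in Python (the figures are the ones quoted above; the tidy ten by ten patch size is of course an approximation):

    # Back-of-the-envelope check of the first-generation Lytro's numbers.
    sensor_pixels = 3280 * 3280      # 10,758,400 -- the "eleven million light rays"
    pixels_per_lens = 10 * 10        # each micro lens covers roughly 10 x 10 pixels
    micro_lenses = sensor_pixels // pixels_per_lens

    print(f"{sensor_pixels:,} pixels on the sensor")
    print(f"~{micro_lenses:,} micro lenses")   # ~107,584 -- "over one hundred thousand"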

Capturing the light field is only the first step. The next step is to generate an image that can be viewed, a process that has been described as "ray tracing in reverse". To explain what this means I'm going to describe how pinhole cameras work, traverse briefly through ray tracing, and finally explain the Lytro rendering process. All three have two things in common: first, there is an observer viewing the scene, and second, the scene must somehow be rendered onto a screen that the observer can view...



In a pinhole camera, light rays pass through a hole in the front of the camera and land on the opposite surface as an image that is upside down and reversed. The back wall is essentially a screen: if you were inside the camera with your back to the pinhole lens, you would see the projected image in front of you.

In the case of my crude illustration above, try to imagine, if you can, that the scene object is a cactus. This and all other images can be clicked on to get a higher resolution version. Sadly, the quality of the drawings does not improve.
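The geometry behind the flip is just similar triangles. Here is a minimal Python sketch of the projection; the cactus coordinates and the 0.05 unit wall distance are made-up numbers for illustration:

    def project_pinhole(point, wall):
        """Project scene point (x, y, z) through a pinhole at the origin onto
        the camera's back wall, a distance `wall` behind the hole.  z is the
        point's distance in front of the camera."""
        x, y, z = point
        # Similar triangles: the ray continues straight through the hole, so
        # the image lands flipped in both axes.
        return (-x * wall / z, -y * wall / z)

    # The top of the cactus (above the axis) lands below the axis on the wall:
    # upside down and reversed, just like the illustration.
    print(project_pinhole((0.0, 2.0, 10.0), wall=0.05))   # (0.0, -0.01)
    print(project_pinhole((1.0, 0.0, 10.0), wall=0.05))   # (-0.005, 0.0)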

In ray tracing, a scene is described mathematically using geometric shapes, textures, and light sources. A point outside the scene is selected to represent the position of the observer, and an image is generated from the perspective of that observer...


Ray tracing differs from the pinhole camera example in three ways. First, the scene does not exist in the real world. Second, the scene is in front of the observer rather than behind. Finally, since the scene is virtual, the image on the screen needs to be rendered, and this is done on a pixel-by-pixel basis: "view" rays are cast out into the scene for each pixel, and the color of the pixel is calculated based on the objects and light sources each view ray hits while traversing the scene. Sending view rays from the observer greatly reduces rendering time, as only the portions of the scene visible to the observer need to be calculated.
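To make that concrete, here is a toy ray tracer in Python. It casts one view ray per screen pixel from the observer into a scene containing a single sphere and prints an ASCII rendering; the scene, screen size, and sphere are all invented for the example:

    import math

    def hit_sphere(origin, direction, center, radius):
        """Distance along the ray to the nearest sphere intersection, or None."""
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4 * c            # direction is normalized, so a == 1
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

    WIDTH = HEIGHT = 32
    observer = (0.0, 0.0, 0.0)                  # the observer's position
    center, radius = (0.0, 0.0, 5.0), 1.0       # one sphere in the scene

    for row in range(HEIGHT):
        line = ""
        for col in range(WIDTH):
            # One view ray per screen pixel, cast from the observer into the scene.
            px = col / (WIDTH - 1) * 2 - 1
            py = 1 - row / (HEIGHT - 1) * 2
            norm = math.sqrt(px * px + py * py + 1)
            ray = (px / norm, py / norm, 1 / norm)
            # The pixel's "color" depends on what the ray hits in the scene.
            line += "#" if hit_sphere(observer, ray, center, radius) else "."
        print(line)

A real ray tracer would go on to bounce rays off surfaces toward the light sources to compute shading, but the skeleton, one view ray per pixel cast outward from the observer, is the part that matters here.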

In the case of the Lytro, we have the stored light rays that describe the scene captured when the picture was taken. In order to project an image onto our screen, a focal point needs to be chosen. Given the selected focal point, the Lytro software uses the stored light rays to render the image. This is ray tracing in reverse in that the rays projected onto the screen have their origin within the scene, rather than being projected from the observer, through the screen, and into the scene.
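I have no access to Lytro's actual rendering code, so the sketch below uses the standard shift-and-add refocusing technique from the light field literature to show the flavor of the idea. It assumes the capture has already been unpacked into sub-aperture views (one small image per position on the aperture), which is not how an LFP file is actually laid out, and the alpha parameter is an arbitrary stand-in for the chosen focal point:

    import numpy as np

    def refocus(lightfield, alpha):
        """Shift-and-add refocusing.  `lightfield[u, v]` is the sub-aperture
        image seen from aperture position (u, v); `alpha` selects the virtual
        focal plane.  Each view is shifted in proportion to its aperture
        offset so its rays line up on the chosen plane, then everything is
        averaged -- rays from the scene accumulated back onto one screen."""
        U, V, H, W = lightfield.shape
        image = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                image += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
        return image / (U * V)

    # A toy light field: 10 x 10 aperture positions, each a 64 x 64 pixel view.
    lf = np.random.rand(10, 10, 64, 64)
    near = refocus(lf, alpha=1.0)    # focus on one depth...
    far = refocus(lf, alpha=-1.0)    # ...or another, after the fact

Pixels whose shifted rays agree across the views come out sharp; pixels whose rays disagree are averaged into blur, which is exactly the after-the-fact refocusing effect the Lytro is known for.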


To review: in ray tracing, the scene is rendered by shooting view rays out from the observer and through the screen, while with the Lytro the scene is rendered by projecting the light rays captured from the scene back onto the screen.

A recent addition to the Lytro Library software is the ability to create 3D images, which demonstrates another advantage of capturing the light field. As in the pinhole example, imagine yourself inside the Lytro camera. Scooting around would give you a slightly different view of the external scene. The data needed to render those shifted viewpoints is part of the captured light field and can be used to generate 3D renderings of the captured scene. A pair of inexpensive anaglyph glasses is all you need, and it's easy to export the images as JPEGs so that anyone with anaglyph glasses can view them. Below is an example based on a picture I took at the OK Corral with my first-generation Lytro.


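Once two slightly offset views have been rendered from the light field, merging them into a red-cyan anaglyph is almost trivial. This sketch assumes the left and right views already exist as RGB arrays; the Pillow call in the comment is just one way to write out the JPEG:

    import numpy as np

    def make_anaglyph(left, right):
        """Merge two viewpoints into a red-cyan anaglyph.  `left` and `right`
        are H x W x 3 uint8 RGB images rendered from two slightly shifted
        positions -- views the captured light field already contains.  The
        red channel comes from the left eye, green and blue from the right."""
        anaglyph = right.copy()
        anaglyph[:, :, 0] = left[:, :, 0]
        return anaglyph

    # e.g. with Pillow: Image.fromarray(make_anaglyph(l, r)).save("corral_3d.jpg")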
One downside of the Lytro camera is the lack of a published API for image manipulation. Lytro uses a proprietary image format, and while a lot of work has been done to reverse engineer it and create software to manipulate images, as I write this there is no comprehensive cross-platform API or software available for working with LFPs (Light Field Picture files). The best resource I've found is "Lytro Meltdown" (http://optics.miloush.net/lytro/Default.aspx), which tends to be Windows-centric but contains a lot of useful information.
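The community notes do describe the container well enough to at least recognize one. The 8-byte magic below is taken from those reverse-engineering writeups, not from any official spec, so treat it as an assumption:

    # Reported LFP file magic, modeled on PNG's; an assumption from
    # community reverse engineering, not an official specification.
    LFP_MAGIC = b"\x89LFP\x0d\x0a\x1a\x0a"

    def looks_like_lfp(path):
        """Cheap sanity check: does this file start like an LFP container?"""
        with open(path, "rb") as f:
            return f.read(8) == LFP_MAGIC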

Light field photography is still in its infancy. Prices were initially high and the results underwhelming, but things are starting to change on both fronts. The first-generation Lytro camera can now be had for under $200.

The thing I'd most like to see is a comprehensive, cross-platform, open source library that can be used to manipulate and manage LFP files. The Holy Grail would be software capable of taking a raw Lytro file and rendering images.

The recent announcement of Lytro's second-generation Illum camera has some interesting technical and marketing implications that I'll talk about in the second part of this article.