A quick look at light field cameras


At the beginning of this lecture, we considered the question of why, if I hold a sheet of paper up in front of a scene, I do not see an image on that sheet of paper. The reason, as we discussed, is that each point on the paper records the total amount of light falling on it, and that light has come from very, very many points in the real world. We recorded the total amount of light, but nothing about the directions the light rays came from.

So, what if we could actually record at each point not just the amount of light but also the direction the light came from? This gets us into the area of what's called a light field. Mathematically, a light field is a function that describes the light travelling in every single direction through every single point in space.
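To make that function concrete: cameras typically capture a 4D slice of the light field, indexing each ray by where it crosses the lens plane and where it crosses the sensor plane (the "two-plane" parameterization). The sketch below is illustrative only; the array sizes and names are made up, not from the lecture.

```python
import numpy as np

# A discrete 4D light field under the two-plane parameterization:
# (u, v) indexes position on the lens/aperture plane,
# (s, t) indexes position on the sensor/image plane.
# Sizes here are arbitrary: 9x9 angular samples, 64x48 spatial, RGB.
U, V, S, T = 9, 9, 64, 48
lightfield = np.zeros((U, V, S, T, 3))

def ray(lf, u, v, s, t):
    """Colour of the single ray through aperture sample (u, v)
    and image sample (s, t)."""
    return lf[u, v, s, t]

def conventional_pixel(lf, s, t):
    """What an ordinary camera records at pixel (s, t): the sum of
    light over all directions (u, v), with the angle discarded."""
    return lf[:, :, s, t].sum(axis=(0, 1))
```

The key point is visible in `conventional_pixel`: summing over `(u, v)` is exactly the "adding up all the rays" that a traditional sensor does, and once summed, the directional information cannot be recovered.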

Think about the room that you're in at the moment. Consider any particular point in that room: there will be a large number of light rays passing through that point, and they will have come from many, many other points within the room.

So, it's very, very complex: lots of light rays, all going in different directions and coming from different places.

So, a light field sensor attempts to capture some of that: the color, the intensity and the vector direction of all these rays of light. A traditional camera simply adds up all the rays of light and records the sum total. A light field camera tries to record the directions as well, and the direction is really important. Consider again the thin lens model. We have an object in the world, and a whole bunch of light rays leave the object, pass through the lens and arrive at the focal plane.

I've shown here a pixel and the light rays that are entering that pixel are coming from a large number of different directions. Let’s animated this, you can see that the object is moving and we can see the light rays from the tip of that object moving across the array of pixels.

So, when the object is really close to the lens it would be well out of focus in a traditional camera. What's happening, though, is that those light rays are now falling across a range of pixels, and at each pixel those light rays are entering at a different angle.

So, if those pixels were able to record not just the amount of light but also the direction it came from, it would be possible to compute an in-focus image even though the recorded image is very, very out of focus. This is a technology that's starting to emerge now in what are called light field cameras.

Now, here is one of the first consumer-level light field cameras on the market, this one produced by Lytro. On the right-hand side of this slide, we can see a light field image being post-processed to bring a particular part of the image into focus. The person is clicking on a particular point in the image and, using all the raw information in the light field — the amount of light and its direction — we can compute the appropriate in-focus image. We can't have every point in the image in focus at the same time, but we can now select, after we took the picture, which part we would like to have in focus, and that's a big step forward.
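This after-the-fact refocusing is often done with the classic "shift-and-sum" method: each directional (sub-aperture) view of the scene is translated in proportion to its offset within the aperture, then the views are averaged; the shift scale selects the synthetic focal plane. A toy sketch on synthetic data (the parameter name `alpha` and all sizes are illustrative, not Lytro's actual processing):

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-sum refocusing.
    views[u, v] is the image seen through aperture position (u, v);
    alpha sets the synthetic focal plane (alpha = 0: no shift)."""
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # shift each view in proportion to its offset from the
            # centre of the aperture
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Synthetic light field: a bright point whose image shifts one pixel
# per aperture step, exactly as a defocused point does.
U, V, H, W = 5, 5, 32, 32
views = np.zeros((U, V, H, W))
for u in range(U):
    for v in range(V):
        views[u, v, 16 + (u - 2), 16 + (v - 2)] = 1.0

blurred = refocus(views, alpha=0.0)   # spot smeared over many pixels
focused = refocus(views, alpha=-1.0)  # shifts cancel: one sharp spot
```

With `alpha = -1` the per-view shifts exactly undo the defocus disparity, so all 25 copies of the point pile up on a single pixel; with `alpha = 0` the same light stays spread across 25 pixels. That is the sense in which recording direction lets us compute focus afterwards.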

Early research versions of light field cameras were massive arrays of cameras: very expensive and far from portable. Here's an example of an early portable light field camera. Again, it's an array of cameras, which tries to capture the light rays coming from the world and the various directions from which they come.

This technology is now becoming quite accessible as the cost of camera hardware falls. It's now possible to create enormous arrays of cameras, and these videos show some work being done using commodity mobile phone cameras and cheap digital cameras to capture light fields. Once you have captured a light field, you can perform some amazing visual effects, stop-motion effects and so on.

Light field cameras are now commercially available and capture much more information about the rays of light reflected from the scene. This enables us to perform functions like changing the focus of an image after it has been captured.

Professor Peter Corke

Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a senior Fellow of the Higher Education Academy, and on the editorial board of several robotics research journals.

Skill level

This content assumes an understanding of high school-level mathematics, e.g. trigonometry, algebra, calculus, physics (optics) and some knowledge/experience of programming (any language).


