LESSON
Artificial Color Sensors
Transcript
Color cameras are functionally equivalent to the system of cones that we have in the retinas of our eyes. Incoming light from the scene is focused by a lens onto an artificial retina. This artificial retina comprises an array of silicon photo sensors: solid-state circuits that measure the amount of incoming light. They are actually sensitive to quite a broad spectrum of light, so to make them color sensitive we print individual color filters on top of the silicon sensors, in patterns of red, green and blue. This gives each individual silicon photo sensor the ability to measure the amount of incoming red, green or blue light at that particular point on the artificial retina. Most color cameras use an array of color filters that looks something like this.
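The filter array shown in the video is not reproduced in this transcript. As a rough stand-in, here is a short NumPy sketch that tiles the two-by-two red/green/green/blue unit described below over a small sensor. The RGGB ordering is an assumption for illustration; cameras differ in how the tile is phased.

import numpy as np

def bayer_pattern(rows, cols):
    """Return one 'R', 'G' or 'B' label per photo sensor, tiled in an RGGB layout."""
    tile = np.array([['R', 'G'],
                     ['G', 'B']])
    return np.tile(tile, (rows // 2, cols // 2))

print(bayer_pattern(4, 4))
# [['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']]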
This pattern of color filters is frequently referred to as a Bayer filter array, named after Bryce Bayer, who invented the idea way back in 1976. The fundamental element of the Bayer filter is a two-by-two grid containing two green filters, one red filter and one blue filter. For the purpose of example, imagine that each of the underlying photo sensors is returning these particular values, which are unsigned eight-bit integers varying between 0 and 255. Further, let's say we want to measure the amount of red, green and blue at this particular point on the artificial retina. Well, the answer for red is pretty clear: the amount of red at this point is 75, which is what is being returned by the photo sensor after the light has passed through the overlying red filter. But let's say now that I want to know the amount of green light at this particular location. That is problematic, because this particular sensor has a red filter on it, not a green filter, so there is no way I can measure the amount of green light there directly. What we do instead is simply average the values of the neighbouring green pixels and say that this is a good approximation to the amount of green light at the pixel that is circled. Similarly, if we want to measure the amount of blue light at this particular pixel, we take the average of the neighbouring blue pixels. So for every pixel in this array we can measure one color directly and estimate the two other colors from its neighbours.
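As a rough illustration of that neighbour-averaging step, here is a short NumPy sketch. The 5x5 grid of sensor values is hypothetical (only the 75 at the centre matches the value in the example above), and the RGGB layout is an assumption.

import numpy as np

# Raw photo sensor values, RGGB tiling: even rows run R G R G ..., odd rows G B G B ...
raw = np.array([[ 60, 110,  68, 115,  72],
                [105,  40, 108,  42, 112],
                [ 70, 118,  75, 120,  66],   # the circled pixel (2,2) sits under a red filter
                [100,  38, 104,  44, 109],
                [ 64, 112,  71, 116,  69]], dtype=np.uint8)

r, c = 2, 2                                          # the circled, red-filtered pixel
red   = int(raw[r, c])                               # red is measured directly
green = np.mean([raw[r-1, c], raw[r+1, c],           # green neighbours sit above, below,
                 raw[r, c-1], raw[r, c+1]])          # left and right
blue  = np.mean([raw[r-1, c-1], raw[r-1, c+1],       # blue neighbours sit on the
                 raw[r+1, c-1], raw[r+1, c+1]])      # four diagonals

print(red, green, blue)                              # 75 112.5 41.0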
If our sensor is an NxM pixel array, then we can estimate the red, green and blue values of every pixel. That means that we have a total of 3xNxM values returned from this artificial retina. But in reality only NxM actual measurements are being made, because there are only that many individual photo sensors. The rest of the values have been estimated.
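To make the counting concrete, here is a tiny sketch comparing the number of actual measurements with the number of values in the full-color image. The sensor size is an arbitrary example, not taken from the lesson.

N, M = 3000, 4000                  # an assumed 12-megapixel sensor, for illustration only
measurements = N * M               # one value per photo sensor
rgb_values = 3 * N * M             # red, green and blue at every pixel
print(measurements, rgb_values)    # 12000000 36000000: two thirds of the values are estimates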
So an alternative approach is just to store the values directly measured by the photo sensors (the NxM values) and also take note of the properties of the color filters associated with the different pixels. This alternative way of representing the image is referred to as a raw image file format. It is different to a normal red, green and blue image: we need to do some post-processing, applying the characteristics of the filters in the camera to the data that is stored within the file.
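As a further sketch, this is roughly how such a raw file can be handled in Python using the third-party rawpy package (a LibRaw wrapper). The package and the file name photo.dng are assumptions for illustration, not part of the lesson.

import rawpy

# Hypothetical raw file; rawpy is assumed to be installed.
with rawpy.imread('photo.dng') as raw:
    print(raw.raw_image.shape)     # roughly N x M: one measured value per photo sensor
    print(raw.color_desc)          # the filter colours recorded with the file, e.g. b'RGBG'
    rgb = raw.postprocess()        # the post-processing step: demosaic to a full RGB image

print(rgb.shape)                   # roughly N x M x 3: three values per pixel, two of them estimated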
A color camera has many similarities to the human eye. Instead of three types of cone cells, a uniform silicon sensor uses a pattern of three color filters known as a Bayer filter.
Skill level
High school mathematics
This content assumes an understanding of high school-level mathematics, e.g. trigonometry, algebra, calculus, physics (optics) and some knowledge/experience of programming (any language).