LESSON

Gamma Correction

Transcript

Another important monadic image processing operation is gamma correction, and I am going to talk now about the problem of gamma.

Back in the early days of television, the display device of choice was the cathode ray tube, and the cathode ray tube is a very non-linear device. For an input voltage V, the brightness on the screen, which I am going to call L for luminance, was a power of the input voltage. In fact, it was almost a square law; that is, the luminance of a point on the screen was roughly the square of the input voltage — more precisely, the input voltage raised to the power of 2.2.

So this number, 2.2, is referred to as the gamma of the display device. So this is a bit of a difficulty. The output luminance is not linearly proportional to the input voltage.

Now let’s add a camera to the system.

So the camera looks at the world and it sees luminance L and it converts that to a voltage. The voltage is transmitted to the television receiver, and the television receiver produces a luminance, which is related to the input voltage raised to the power of gamma.

And what happens now is that our system is not linear from end to end. The luminance that I see on my screen is the original luminance raised to the power of gamma, and that has a really bad effect on contrast. If one part of the original scene is twice as bright as another, then on the screen it will appear nearly five times brighter, since 2 raised to the power 2.2 is about 4.6.
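The contrast distortion described above is easy to check numerically. This is a minimal Python sketch (the course itself uses MATLAB); the gamma value of 2.2 is the one given in the transcript.

```python
# Contrast distortion from an uncorrected display with gamma = 2.2.
gamma = 2.2

# Two camera voltages, one twice the other (proportional to scene luminance).
v1, v2 = 1.0, 2.0

# Displayed luminance is the voltage raised to the power gamma.
l1, l2 = v1 ** gamma, v2 ** gamma

print(l2 / l1)  # about 4.59 -- a 2x brightness ratio becomes nearly 5x
```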

So to get around this problem the early television engineers pulled a pretty neat trick. What they did was insert the inverse non-linearity into the camera: the camera raises the scene luminance to the power of 1 over gamma, the display raises that signal to the power of gamma, and the two non-linearities cancel. The luminance that I see on my TV screen is therefore proportional to the luminance of the scene that the TV camera was looking at. The system now is linear from end to end, and that is great for the person viewing the TV image.
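The cancellation of the two non-linearities can be verified in a few lines. A minimal Python sketch (illustrative only; the course itself uses MATLAB), assuming the simple power-law model with gamma = 2.2:

```python
# End-to-end linearity when the camera applies the inverse non-linearity.
GAMMA = 2.2

def camera_encode(L, gamma=GAMMA):
    # The camera's trick: raise scene luminance to the power 1/gamma.
    return L ** (1.0 / gamma)

def display(V, gamma=GAMMA):
    # The CRT-like display non-linearity: luminance = voltage^gamma.
    return V ** gamma

# The two non-linearities cancel, so output luminance equals input luminance.
for L in [0.1, 0.25, 0.5, 1.0]:
    assert abs(display(camera_encode(L)) - L) < 1e-9
```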

So the process within the camera is referred to as gamma encoding, and the image between the camera and the TV screen is referred to as a gamma encoded image. The process of gamma decoding, sometimes called gamma correction, happens within the display screen.

Now, where this is relevant today is that those gamma encoded images are what we record in image files. So if I take a standard image file format, a JPEG file or whatever, the pixel values within that file have been gamma encoded. They are not linearly related to the luminance of the scene that the camera is looking at. And in the header of almost any image file format there is an entry that records the gamma setting that was used in the camera.

The reason this is still convenient, even though cathode ray tubes are long gone, is that the image pixels in the file are going to be displayed on your screen, and even modern screens deliberately emulate the old cathode ray tube non-linearity.

So it is really important to remember that the pixel values within an image file are not linearly related to the real world luminance. They are related to that luminance to the power of 1 over gamma; in fact, it is almost a square root function.
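To work with values proportional to real-world luminance, the gamma encoding must be undone. This is a minimal Python sketch assuming a simple power-law model with gamma = 2.2; note that real sRGB image files actually use a piecewise transfer curve that this power law only approximates.

```python
# Decoding gamma-encoded 8-bit pixel values back to linear luminance.
gamma = 2.2

pixels = [0, 64, 128, 192, 255]           # gamma-encoded 8-bit values
encoded = [p / 255.0 for p in pixels]     # normalise to [0, 1]
linear = [e ** gamma for e in encoded]    # undo the camera's 1/gamma encoding

# Note the compression of dark values: a pixel at half of full scale
# (128/255, about 0.50) decodes to a linear luminance of only about 0.22.
print([round(v, 3) for v in linear])
```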

So we can do gamma correction as a monadic operation: the function takes every input pixel and raises it to the power of gamma, and the mapping has a roughly parabolic shape. If we apply it to an image and then look at the result, the image appears very, very dark indeed. The reason is that the image has now been gamma corrected twice: once by applying this monadic function, and once again by my screen, or your screen, when the image is displayed.
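The double-correction effect can be seen with a single pixel value. A minimal Python sketch (illustrative, not from the lesson), assuming gamma = 2.2 throughout:

```python
# Why a gamma-decoded image looks dark on screen: the display applies
# its own gamma on top, so the correction is effectively done twice.
gamma = 2.2

pixel = 0.5                      # a gamma-encoded mid-grey pixel value
decoded = pixel ** gamma         # our monadic operation: pixel^gamma
on_screen = decoded ** gamma     # the display raises it to gamma again

print(round(decoded, 3), round(on_screen, 3))  # 0.218 0.035
```

With no correction, the display's own gamma decodes the pixel to the correct linear luminance of about 0.22; decoding it ourselves first drives the displayed value down to about 0.035, which is why the image looks so dark.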

So it can be useful to apply gamma correction if you want to manipulate pixel values that are proportional to the original light levels in the scene, but displaying a gamma corrected image on a monitor is not a useful thing to do.

Code

There is no code in this lesson.

Almost all cameras apply gamma encoding to the images they output. Let’s talk about what gamma encoding is, why it happens and how it is decoded.

Professor Peter Corke

Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a senior Fellow of the Higher Education Academy, and on the editorial board of several robotics research journals.

Skill level

This content assumes an understanding of high-school-level mathematics (for example trigonometry, algebra, calculus, and physics/optics) and experience with the MATLAB command line and programming, for example workspace, variables, arrays, types, functions and classes.

