Here is one of the digital images that we have just been looking at, and on the right we can see some of its pertinent details. Now we can determine the total number of bytes in the image by multiplying the image width by the image height by three. The number three comes from the fact that for every pixel in the image we need to represent the amount of red, green and blue, and each of those is represented by a single byte. When we compute this product we see that something like 21 million bytes of information is in this image. However, on disk the image only occupies 3.7 megabytes, so the number of bytes on disk is way less than the number of coloured pixels that there are in the original image.
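The arithmetic above can be sketched in a few lines. The exact image dimensions aren't given in the source, so the width and height below are illustrative values chosen to give roughly the 21 million bytes mentioned:

```python
# Back-of-envelope: raw size of an RGB image vs its size on disk.
# Dimensions are illustrative (not stated in the source); 3072 x 2304
# pixels at 3 bytes per pixel gives roughly 21 million bytes.
width, height = 3072, 2304
bytes_per_pixel = 3                    # one byte each for red, green, blue
raw_bytes = width * height * bytes_per_pixel
print(raw_bytes)                       # about 21 million bytes

disk_bytes = 3.7 * 1024 * 1024         # the 3.7 MB file on disk
print(disk_bytes / raw_bytes)          # a small fraction: the image is compressed
```

The ratio comes out well under one, which is the compression the next paragraph talks about.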
If we take the ratio of these two numbers, we see that the number of bytes required to store the image in the file on disk is a very small fraction of the total number of bytes required to represent all the coloured pixels in the image, and this comes about because this image has been compressed.
Now all images contain a large amount of pixel data and we want to make the file size as small as we possibly can. That means we can then fit more images into the same amount of disk. There are two fundamentally different approaches to image compression. The first is lossless compression, and what this does is exploit redundancy, or recurring patterns in the image, and code them very efficiently, so they require less space overall. This is in stark contrast to lossy compression, the most common example of which is JPEG compression. What lossy compression does is exploit the fact that the human vision system is far from perfect, and that there is some information you can remove from the image in such a way that the human observer does not notice very much change. Some of the shortcomings of the human vision system are that we are not very good at noticing very fine detail, we are not as good at resolving the differences between colours as we are at resolving the differences between intensities, and our colour sensitivity actually depends on the sorts of colours that we are looking at. So we can exploit these bugs in the human visual system in order to remove information that we wouldn't notice anyway, and make the files much smaller.
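The lossless idea, coding recurring patterns efficiently, can be demonstrated with the standard library's `zlib` (a general-purpose lossless compressor, used here only as an illustration, not as the codec any particular image format uses). Repetitive "pixel" data shrinks dramatically, while data with no redundancy barely compresses at all:

```python
import os
import zlib

# Lossless compression exploits repetition. A run of identical RGB pixels
# is highly redundant and compresses to almost nothing...
repetitive = bytes([200, 150, 90] * 10000)    # 30,000 bytes, one colour repeated
print(len(zlib.compress(repetitive)))         # a tiny fraction of 30,000

# ...whereas random bytes have no recurring patterns to exploit,
# so the "compressed" output is about the same size as the input.
random_ish = os.urandom(30000)
print(len(zlib.compress(random_ish)))         # close to 30,000
```

This is why two photos with the same number of pixels can end up as very different file sizes: the more structure and repetition in the image, the more the coder can squeeze out.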
Here is an image of our favourite robot, and on disk this image occupies 480 kilobytes. If we observe this little region within the robot and zoom in on it, what we can see is that it looks pretty good: the edges are fairly sharp, and there are some interesting colour gradations there. Now this image has only been modestly compressed; it is 82% of the original image size. But if we start to increase the amount of compression, we are throwing away additional information, and we see the image starts to become quite seriously degraded when we zoom in and look at it very closely. So with the very high level of compression the image is only occupying twelve per cent of the original storage, so we have made massive savings in the disk space required, but the image is starting to look fairly poor when we zoom in on it. And if we zoom in even further we can see that it has become quite blocky, quite quantised, and there are many fewer colours here than there are in the original image.
All of these images have the same number of pixels, but the actual file size varies. This image, as we mentioned before, is 3.7 megabytes long. This crocodile is only 2.9 megabytes long. The bird picture is 3.8 megabytes. The size of an image after it has been compressed is a function of how much information is in that image.
There are a lot of pixels in a typical image, which makes images take up a lot of memory. Images can be compressed to take up less storage. Compression can be lossless or lossy, where we trade off size for quality.