LESSON

Multiple Image Regions

Transcript

Now let’s consider the slightly more difficult situation where we have two shark objects within our image. Now you and I looking at this image can clearly see that there are two distinct regions; we’re not confused at all. But for the machine vision system that we’re building, we need to develop an algorithm that can determine that there are two distinct regions here.

If we apply the technique of moments that we just looked at and compute the moments of this scene, what we end up with is the centroid of a single region that contains both sharks. We see the centroid in between the two sharks. The algorithm hasn’t been able to work out that they’re separate; it has simply treated all the white pixels in the same way, and we see also that the bounding box is drawn around the extremities of both sharks rather than around each shark individually. We need some way to distinguish between these two objects.
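As a rough sketch of what happens here, assuming the binary image is held in a variable called sharks (an illustrative name) and using the Machine Vision Toolbox moment function mpq, the combined centroid and a simple bounding box might be computed like this:

% sharks is a binary image containing both shark blobs (illustrative name)
m00 = mpq(sharks, 0, 0);         % zeroth moment: total count of white pixels
uc  = mpq(sharks, 1, 0) / m00;   % u coordinate of the combined centroid
vc  = mpq(sharks, 0, 1) / m00;   % v coordinate of the combined centroid

% one bounding box around the extremities of all the white pixels,
% regardless of which shark they belong to
[v, u] = find(sharks);
bbox = [min(u) max(u) min(v) max(v)];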

Now the way that we do this is to transform the image. So here is our input image and we’re going to perform a transformation; it’s a process called connectivity analysis, and the output image is shown on the right. We refer to this as a label image, and what’s interesting about the label image is that all of the pixels that belong to the same object are assigned the same label. So we see that all of the pixels that belong to the top shark have been labelled 1, all of the pixels that correspond to the bottom shark have been labelled 2, and all of the pixels that belong to the background have been labelled 3. In this image the pixel values are labels, if you like; they describe which of the n objects in the scene each pixel belongs to.

This is a really powerful transformation. The white objects in the input scene are called blobs. ‘Blobs’ is actually a technical term — you’re allowed to use it. In this input scene there are three blobs, shark 1, shark 2 and the background, and in the output image the pixels have got values 1–3, indicating which blob they belong to.

So what is a blob? It’s also known as a region, and sometimes it’s called a connected component. It’s a set of contiguous pixels of the same colour; pixels that are next to each other, that touch each other. And when we talk about colour, so far we’ve just been looking at a binary image where the pixels are either black or white. Oftentimes we’ll start with a real colour image, and the first processing stage is to convert that colour image into a binary image.

So in the top example here we have a scene with four yellow objects in it. We perform some image processing operations, which we’ll talk about in future lectures, to convert the yellow pixels to true, or white, and all the other coloured pixels to false, or black.

In the bottom example we’re trying to find the tomatoes on the bush, and we’re going to use image processing techniques that give us a true result when the pixel is red and a false result when it is not red. So the first step is to take a real colour image, convert it into a binary image where the pixel values are either true or false, and then we can apply our connectivity analysis to that image.
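As an illustrative sketch only (the actual colour classification is covered in later lectures), a crude “is it red?” test on an RGB image might look like this; the variable name tomatoes and the threshold of 1.5 are assumptions:

% tomatoes is an RGB colour image (illustrative name)
r = double(tomatoes(:,:,1));
g = double(tomatoes(:,:,2));
b = double(tomatoes(:,:,3));

% true where the pixel is strongly red, false elsewhere (threshold chosen arbitrarily)
binary = (r > 1.5*g) & (r > 1.5*b);
idisp(binary)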

The label that we assign to a pixel indicates which set it belongs to. Every pixel has the same label as its neighbour to the north, south, east or west that has the same colour. This is a fundamental part of the connectivity analysis algorithm.
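To make the rule concrete, here is a minimal, unoptimised sketch of such a labelling pass, implemented as a flood fill over 4-connected (north, south, east, west) neighbours of the same value. This is purely for illustration; the toolbox function ilabel used later in this lesson does the job for you, far more efficiently:

function L = simple_label(im)
    % Assign the same label to every 4-connected group of pixels that
    % share the same value, using a simple flood fill. Illustrative only.
    [rows, cols] = size(im);
    L = zeros(rows, cols);     % 0 means "not yet labelled"
    next = 0;                  % next label to hand out
    for r = 1:rows
        for c = 1:cols
            if L(r,c) == 0
                next = next + 1;
                stack = [r c];             % seed pixel of a new blob
                while ~isempty(stack)
                    p = stack(end,:); stack(end,:) = [];
                    pr = p(1); pc = p(2);
                    if pr < 1 || pr > rows || pc < 1 || pc > cols, continue; end
                    if L(pr,pc) ~= 0 || im(pr,pc) ~= im(r,c), continue; end
                    L(pr,pc) = next;       % same label as its connected neighbours
                    stack = [stack; pr-1 pc; pr+1 pc; pr pc-1; pr pc+1];
                end
            end
        end
    end
end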

Now this process of performing connectivity analysis has many names. Sometimes it’s called connected component analysis; sometimes it’s called blob or region or image labelling; sometimes it’s called blob or region colouring. Many names, the same algorithm.

Here is our original binary image with two sharks in it, and now in the middle we have the label image where the pixels have got values of 1, 2 or 3 depending on whether they belong to shark 1, shark 2 or the background. Now I can apply a simple logical test to this image, and I can say give me all of the pixels that have got a label value of 1. I end up with this binary image that contains just one shark, the top shark. And once I have this image I can apply the techniques we used previously to find the centroid and the bounding box of this particular shark.

Now I can ask for a binary image which contains all of the pixels that are labelled 2, and I get a binary image which contains just the bottom shark. Once I have this shark in isolation I can apply the earlier techniques and again find its bounding box and its centroid.

Now the background is also a blob. It’s all of the pixels that are labelled 3. So we can ask for the image where all of the pixels are labelled 3, and we see here it is basically the background with shark-shaped holes in it. It’s a blob as well; it’s a very large blob. It touches the edge of the image and it’s got these two holes in it, which are the objects in the foreground. If we compute the moments of the background blob we find that it’s got a centroid which is roughly in the middle and a very large bounding box which goes around the whole outside of the image.

Now we have a binary image, or logical image, that contains two blobs. So let’s compute the label image. The function that we use has two output arguments: the first output argument, L, is the label image itself, and the second output argument is the number of blobs it finds within the scene. The function itself is called ilabel and we pass in the binary image. So it’s computed L, which is an image, and the number of blobs, which has got a value of 3; it’s saying there are 3 blobs within this scene: the two sharks and the background blob.
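In MATLAB this step looks something like the following (the variable names sharks and m are illustrative):

% label the binary image: L is the label image, m is the number of blobs found
[L, m] = ilabel(sharks)
% m comes back as 3: the two sharks plus the background blob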

So let’s display the label image and investigate that. Start a new figure and display the label image.
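For example, using the toolbox display function idisp:

figure      % start a new figure
idisp(L)    % display the label image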

Let’s have a look at some of the pixel values within that. So these pixels here, all the ones that I’m clicking on, have got a value of 3. That means these pixels belong to blob number 3, which is the background blob. All of these pixels here in this shark are all labelled 1; that is, they all belong to blob number 1. These pixels down here all have a value of 2, so these pixels all belong to blob number 2.

Now what we can do is use a logical operation, a logical test on the label image, in order to isolate the different blobs within the image. What I can do now is display all the pixels that have a label equal to 1, and what we see is that there is just a single shark in this scene. Let’s have a look at all the pixels that have a label equal to 2, and now we see the second shark. So now what we’ve got is a scene with a single blob in it, and this is a problem we’ve met before. We know how to work out the bounding box of a scene that contains a single blob; we know how to compute the centroid and the area of a scene that contains just a single blob. So by using this intermediate step of labelling the image, we can turn our complex problem with multiple regions into a problem that we have seen before.
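Putting the pieces together, a sketch of this isolation step might be:

% all pixels belonging to blob 1 (the top shark)
shark1 = (L == 1);
idisp(shark1)

% all pixels belonging to blob 2 (the bottom shark)
shark2 = (L == 2);
idisp(shark2)

% each of these is now a single-blob binary image, so the earlier
% single-region techniques apply directly, for example
area1 = mpq(shark1, 0, 0);
uc1   = mpq(shark1, 1, 0) / area1;
vc1   = mpq(shark1, 0, 1) / area1;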

An extra level of sophistication that we can go to is to use the function iblobs. I pass in the image, and that’s all I need to do; it returns a vector of blob objects. Technically, they are region feature objects, and there’s one object for each region or each blob in the scene. So element number 1 is a blob with an area of 7827. We can see its centroid. We can see its colour; that is, it comprises pixels with the value of 1. We can see its label; it’s got a label value of 1, and a couple of other parameters as well which we will get to in due course. Blob number 2 also has an area of 7827; it’s another shark. And blob number 3 has got a very large area, and we can see its colour is equal to 0, so that is the background blob.
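In code this step is simply (again with an illustrative variable name for the input image):

% compute a vector of region feature objects, one per blob in the scene
b = iblobs(sharks)
% b(1) and b(2) are the two sharks; b(3) is the large background blob,
% whose class (the pixel value it comprises) is 0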

Let’s show again the original image. Here we go, here’s the scene with two sharks in it. For the first element in this vector of blobs, that’s blob number 1, I can plot a bounding box and I can do that in the colour green … there we go. For blob number 2, I can plot its bounding box in green. I can also plot the centroid of the object and I will plot that in blue and I’ll draw an asterisk there.
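A sketch of this overlay step is shown below; the plotting method names plot_box and plot_centroid and their line-style arguments are assumptions based on the toolbox’s region feature objects, so check them against your toolbox version:

idisp(sharks)              % show the original two-shark image
b(1).plot_box('g')         % bounding box of blob 1 in green
b(2).plot_box('g')         % bounding box of blob 2 in green
b(2).plot_centroid('b*')   % its centroid as a blue asterisk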

The blobs have also got a number of parameters or attributes. So, for instance, for blob number 1 I can ask what its area is. We can look at the colour of that particular blob; we use the class attribute, and see that it comprises pixels with a value of 1. We can look at the centroid using the p attribute and we see a vector which contains the u and v coordinates of the centroid. We can also get the individual elements: uc is the u coordinate of the centroid. We even find the moments have been stashed away, so the 0,1 moment of that particular blob has got this particular value.
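For example, the attributes mentioned above can be read directly from the blob object:

b(1).area     % number of pixels in the blob
b(1).class    % the pixel value the blob comprises (1 for a white blob)
b(1).p        % centroid as a vector of u and v coordinates
b(1).uc       % just the u coordinate of the centroid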

Code

There is no code in this lesson.

For a binary image that contains multiple blobs we first need to transform it using connectivity analysis, also known as region labelling. Each of the blobs can then be described in terms of its area, centroid position, bounding box and moments.

Professor Peter Corke

Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a senior Fellow of the Higher Education Academy, and on the editorial board of several robotics research journals.

Skill level

This content assumes an understanding of high-school-level mathematics (for example trigonometry, algebra, calculus and the physics of optics) and experience with the MATLAB command line and programming (for example workspace, variables, arrays, types, functions and classes).
