LESSON

Summary of Feature Extraction

Transcript

Let’s summarise what we’ve learned.

The camera is a really wonderful sensor for a robot to have, but it produces an unusable torrent of data. There are just too many pixels, far more than a robot can act on. So what we need to do is simplify this stream of data and find meaningful objects within it that the robot can respond to, move towards, and so on.

So a region is a set of adjacent pixels with a similar intensity or colour. We’ve talked about techniques to find regions within a scene, and we used a technique called region labelling to distinguish multiple distinct objects that exist within a single binary image.
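As an illustrative sketch only, region labelling of a binary image might look something like the following in MATLAB. This assumes the Image Processing Toolbox functions bwlabel and label2rgb, a hypothetical image file shapes.png, and an arbitrary threshold of 128; it is not the code used in the lesson.

% Label the connected regions in a binary image.
% Assumptions: Image Processing Toolbox is available; shapes.png is a
% hypothetical greyscale image and 128 an arbitrary threshold.
bw = imread('shapes.png') > 128;          % threshold to a binary image

% assign a unique integer label to every 8-connected set of white pixels
[labels, numRegions] = bwlabel(bw, 8);
fprintf('found %d distinct regions\n', numRegions);

% display each labelled region in its own colour
imshow(label2rgb(labels, 'jet', 'k', 'shuffle'));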

Once we have labelled regions, then we can extract features to describe each of those regions. We have multiple regions within the scene, and for each of them we can determine the position of the region, the size of the region, the shape of the region, and its orientation.
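A rough sketch of what that might look like is below. It uses regionprops from MATLAB's Image Processing Toolbox rather than the commands shown in the lectures, and assumes the label image labels from the sketch above; the aspect ratio is used here as a simple stand-in for shape.

% Describe each labelled region by its position, size, shape and orientation.
% Assumption: labels is the label image computed in the previous sketch.
stats = regionprops(labels, 'Centroid', 'Area', 'Orientation', ...
    'MajorAxisLength', 'MinorAxisLength');

for i = 1:numel(stats)
    % a simple shape measure: ratio of minor to major axis of the region
    aspect = stats(i).MinorAxisLength / stats(i).MajorAxisLength;
    fprintf('region %d: centroid (%.1f, %.1f), area %d, aspect %.2f, orientation %.1f deg\n', ...
        i, stats(i).Centroid(1), stats(i).Centroid(2), ...
        stats(i).Area, aspect, stats(i).Orientation);
end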

We’ve talked about techniques to measure and estimate all of these things, and demonstrated how to do many of them using MATLAB and its toolbox functions.

Code

There is no code in this lesson.

Let’s recap the important points from the topics we have covered about image features, blobs, connectivity analysis, and blob parameters such as centroid position, area, bounding box, moments, equivalent ellipse, and perimeter.
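Where the toolbox hides the arithmetic, the same blob parameters can be computed directly from image moments. The sketch below is illustrative only: it assumes the label image labels from the earlier sketches and an arbitrary region index k, and derives the area, centroid and equivalent ellipse from the first and second moments.

% Compute moments and the equivalent ellipse for one region directly
% from its pixel coordinates.
% Assumptions: labels is a label image; k is an arbitrary region index.
k = 1;
[v, u] = find(labels == k);          % row (v) and column (u) coordinates

m00 = numel(u);                      % zeroth moment = area
uc = mean(u);  vc = mean(v);         % centroid = first moments / area

% central second moments
u20 = sum((u - uc).^2);
u02 = sum((v - vc).^2);
u11 = sum((u - uc) .* (v - vc));

% equivalent ellipse: eigenvalues of the normalised inertia matrix give
% the semi-axis lengths; the orientation follows from the same moments
J = [u20 u11; u11 u02] / m00;
lambda = sort(eig(J), 'descend');
a = 2 * sqrt(lambda(1));             % semi-major axis length
b = 2 * sqrt(lambda(2));             % semi-minor axis length
theta = 0.5 * atan2(2 * u11, u20 - u02);   % orientation in radians

fprintf('area %d, centroid (%.1f, %.1f), ellipse axes %.1f x %.1f, theta %.2f rad\n', ...
    m00, uc, vc, a, b, theta);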

Professor Peter Corke

Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a Senior Fellow of the Higher Education Academy, and a member of the editorial boards of several robotics research journals.

Skill level

This content assumes an understanding of high-school-level mathematics and physics, for example trigonometry, algebra, calculus and optics, as well as experience with the MATLAB command line and programming, for example workspace, variables, arrays, types, functions and classes.

