Search Results for: vision

Building a highly accurate robot is not trivial, yet humans can perform fine positioning tasks, such as threading a needle, using hand-eye coordination. When a robot does this we call it visual servoing.
One very powerful trick used by humans is binocular vision. The images from each eye are quite similar, but there is a small horizontal shift, a disparity, between them, and that shift is a function of the object's distance.
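The disparity-to-distance relationship mentioned above can be sketched with the standard pin-hole stereo formula Z = f·B/d. The focal length and baseline values below are illustrative assumptions, not figures from the lesson:

```python
def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.06):
    """Return object distance in metres from horizontal disparity.

    Uses Z = f * B / d, where f is focal length in pixels, B the
    baseline (eye or camera separation) in metres, and d the
    disparity in pixels. Larger disparity means a closer object.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A nearby object shifts more between the two views than a distant one:
near = depth_from_disparity(40.0)  # large shift, small distance (1.2 m)
far = depth_from_disparity(4.0)    # small shift, large distance (12 m)
```

With the assumed 800-pixel focal length and 6 cm baseline, a 40-pixel disparity corresponds to 1.2 m and a 4-pixel disparity to 12 m, illustrating why the shift encodes distance.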
Humans have been fascinated by the sense of vision for a long time, but it took a while to figure out how it worked. We now understand that illumination falls on an object and some light is reflected into our eye where it is sensed and interpreted by our brain.
The human eye is quite amazing; let’s look at its various components, including the light-sensitive rod and cone cells.
Vision is useful to us and to almost all forms of life on the planet; perhaps robots could do more if they could also see. Robots could mimic human stereo vision or use cameras with superhuman capability, such as wide-angle or panoramic views.
The sense of vision evolved over 540 million years ago, ushering in the Cambrian explosion of complex life forms, such as trilobites, with eyes. How did such an amazing sense come to exist?
Now let’s talk about the sense of vision, something we use almost all the time and practically take for granted.
Let’s look at some recent research results that vividly show how information from many 2D images, taken from different locations, can be combined to form a detailed 3D model of the world.
Let’s recap the important points from the topics we have covered in our discussion of optical flow and visual servoing.
An image is a two-dimensional projection of a three-dimensional world. A fundamental problem with this projection is that large distant objects can appear the same size as small close objects. For people, and robots, it’s important to distinguish these different situations. Let’s look at how humans and robots can determine the scale of objects […]
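The ambiguity described above follows directly from the pin-hole projection equation h = f·H/Z: image size depends on the ratio of object height to distance, not on either alone. The focal length and object sizes below are assumed values for illustration only:

```python
def image_height_px(object_height_m, distance_m, focal_px=800.0):
    """Projected height in pixels of an object under pin-hole projection.

    h = f * H / Z: only the ratio H/Z matters, so a single image
    cannot separate object size from object distance.
    """
    return focal_px * object_height_m / distance_m

# A 2 m object at 20 m and a 0.2 m object at 2 m project to the
# same 80-pixel height, so the two situations look identical.
big_far = image_height_px(2.0, 20.0)
small_near = image_height_px(0.2, 2.0)
```

Resolving this ambiguity requires extra information, such as the stereo disparity discussed earlier or prior knowledge of the object's true size.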