The pinhole camera simplifies the geometry, but in practice it produces very dark images. Cameras, like our eyes, use a lens to form a brighter image, but there are consequences.
What are the consequences of representing a three-dimensional scene using only two dimensions? The appearance of parallel lines converging and circular objects becoming elliptical should be surprising, yet we take it for granted.
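A small numerical sketch can make the converging-lines effect concrete. Assuming central projection with an arbitrary focal length f, a point (X, Y, Z) maps to (f·X/Z, f·Y/Z), so the image separation of two parallel rails shrinks as 1/Z (the values and geometry below are illustrative, not from the lesson):

```python
import numpy as np

f = 1.0  # assumed focal length (arbitrary units)

def project(X, Y, Z):
    """Central projection of a 3-D point onto the image plane."""
    return np.array([f * X / Z, f * Y / Z])

# Two parallel rails at X = -1 m and X = +1 m on the ground plane
# Y = -1 m, sampled at increasing depth Z. Their image separation
# shrinks as 2*f/Z, so they converge toward a single vanishing point.
seps = [np.linalg.norm(project(1, -1, Z) - project(-1, -1, Z))
        for Z in (1.0, 10.0, 100.0)]
```

Here `seps` comes out as 2.0, 0.2, 0.02: the rails never meet in the world, but their images rush together toward the vanishing point.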
Let’s look at how light rays reflected from an object can form an image. We use the simple geometry of a pinhole camera to describe how points in a three-dimensional scene are projected onto a two-dimensional image plane.
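The projection step itself can be sketched in a few lines. This is a generic homogeneous-coordinate formulation, not the lesson's own notation, and the focal length is an assumed value:

```python
import numpy as np

# Pinhole projection: with focal length f, a point P = (X, Y, Z) in the
# camera frame maps to the image-plane point (f*X/Z, f*Y/Z). In
# homogeneous coordinates this is a single matrix multiplication.
f = 0.008  # assumed 8 mm focal length
C = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]])  # 3x4 camera projection matrix

P = np.array([0.3, 0.4, 2.0, 1.0])  # homogeneous world point, 2 m away
ph = C @ P                           # homogeneous image point
p = ph[:2] / ph[2]                   # Euclidean image coordinates (metres)
```

Dividing by the third homogeneous coordinate is where depth is lost, which is exactly why a single image cannot recover the full three-dimensional scene.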
We use MATLAB and some Toolbox functions to create a robot controller that moves a camera so that its image matches a desired reference image. We call this an image-based visual servoing system.
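To convey the idea without the Toolbox, here is a toy one-degree-of-freedom sketch in Python (not the MATLAB implementation; the gain, focal length, and depth are assumed values). The camera translates along its x-axis, a single point feature projects to u = f·X/Z, and a proportional controller drives the image-space error to zero:

```python
# Toy 1-DOF image-based visual servo (illustrative sketch only).
f, Z = 1.0, 2.0      # assumed focal length and feature depth
u_star = 0.0         # desired image coordinate (feature centred)
lam, dt = 1.0, 0.05  # control gain and simulation time step

X = 0.5              # initial lateral offset of the point in the camera frame
for _ in range(200):
    u = f * X / Z            # current image feature
    e = u - u_star           # image-space error
    # du/dX = f/Z, so commanding camera velocity vx = lam*e*Z/f
    # makes the error decay exponentially
    vx = lam * e * Z / f
    X -= vx * dt             # camera moves; point shifts in camera frame
```

After the loop the feature sits at the desired image location: the controller never needed the feature's world coordinates, only where it appeared in the image.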
It is common to specify an assembly task in terms of coordinates in the 3D world. An alternative is to specify the task in terms of the relative positions of objects in one or more views of the task, an approach called visual servoing.
Vision is a ubiquitous sense found in almost all animals, but the number and types of eyes are remarkably diverse. We will look at examples such as the compound eyes of insects, the eyes of spiders, and those of sea creatures such as scallops and squid.
Why do robots need a sense of vision in this modern age, when GPS satellites can tell a robot where it is? Let’s talk about GPS: how it works, and its strengths and weaknesses, such as multipath and urban-canyon effects.