The light we see is a mixture of different wavelengths in the visible region of the electromagnetic spectrum. The most common source of light is incandescence from a very hot body such as our sun or the filament of an old-fashioned light bulb. The spectrum, the amount of energy as a function of wavelength, follows Planck’s […]
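The blackbody spectrum mentioned above can be computed directly from Planck's law. The sketch below, a minimal Python illustration (not from the lesson itself), evaluates the spectral exitance for the sun's surface temperature and for an incandescent filament, and locates each peak numerically; the temperatures 5778 K and 2700 K are representative values assumed here.

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant (J s)
c = 2.99792458e8     # speed of light (m/s)
k = 1.380649e-23     # Boltzmann constant (J/K)

def planck(wavelength, T):
    """Blackbody spectral exitance (W m^-3) at the given wavelength (m)
    and temperature T (K), from Planck's law."""
    return (2 * np.pi * h * c**2) / (
        wavelength**5 * (np.exp(h * c / (wavelength * k * T)) - 1))

lam = np.linspace(100e-9, 3000e-9, 10000)  # 100 nm to 3 um
E_sun = planck(lam, 5778)                  # the sun's surface, ~5778 K
E_bulb = planck(lam, 2700)                 # incandescent filament, ~2700 K

peak_sun = lam[np.argmax(E_sun)]    # ~501 nm, in the visible band
peak_bulb = lam[np.argmax(E_bulb)]  # ~1073 nm, in the infrared
```

The peaks agree with Wien's displacement law, and show why a filament looks yellowish: most of its emission lies beyond the red end of the visible region.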
To describe each of the blobs in a binary image that contains multiple blobs, we first need to transform the image using connectivity analysis, also known as region labeling. Each of the blobs can then be described in terms of its area, centroid position, […]
If we look at a binary image we can easily see distinct regions, that is, sets of pixels of the same color as their neighbours. We call these blobs and they’re an important way of achieving an object-level rather than pixel-level view of the scene. We can describe these blobs by their area, centroid position, bounding […]
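The connectivity analysis these two excerpts describe can be sketched in a few lines. The lessons themselves use MATLAB; the following is a minimal Python stand-in that labels 4-connected blobs by flood fill and then reads off a blob's area and centroid. The example image and the helper name `label_blobs` are illustrative, not from the lesson.

```python
import numpy as np
from collections import deque

def label_blobs(img):
    """4-connected component (blob) labeling of a binary image by flood
    fill. Returns a label image (0 = background) and the blob count."""
    labels = np.zeros(img.shape, dtype=int)
    H, W = img.shape
    n = 0
    for r0, c0 in zip(*np.nonzero(img)):
        if labels[r0, c0]:
            continue                      # already assigned to a blob
        n += 1
        labels[r0, c0] = n
        q = deque([(r0, c0)])
        while q:                          # flood the whole blob
            r, c = q.popleft()
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= rr < H and 0 <= cc < W and img[rr, cc] and not labels[rr, cc]:
                    labels[rr, cc] = n
                    q.append((rr, cc))
    return labels, n

img = np.array([[1, 1, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 1]])
labels, n = label_blobs(img)                    # n == 2 distinct blobs
area = (labels == 1).sum()                      # blob 1 covers 3 pixels
cy, cx = np.argwhere(labels == 1).mean(axis=0)  # its centroid (row, column)
```

Once every pixel carries a blob label, per-blob parameters such as area, centroid and bounding box reduce to simple reductions over the pixels sharing that label.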
We can also describe a blob by its contour or perimeter. Let’s look at how we determine the length of a blob’s perimeter using crack code and chain code. We can use the perimeter length to determine another scale-invariant shape parameter called circularity, which indicates how compact, or circle-like, the blob is.
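As a concrete sketch of the crack-code variant (chain code additionally weights diagonal steps by √2), the Python snippet below, an assumed implementation rather than the lesson's own, counts every unit pixel edge where foreground meets background, then forms the circularity 4πA/p²:

```python
import numpy as np

def crack_perimeter(img):
    """Perimeter length via crack code: count every unit-length pixel edge
    where a foreground pixel meets background (or the image border)."""
    fg = np.pad(np.asarray(img, bool), 1)  # pad so border edges are counted
    p = 0
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        shifted = np.roll(np.roll(fg, dr, axis=0), dc, axis=1)
        p += np.sum(fg & ~shifted)         # exposed edges in this direction
    return int(p)

def circularity(img):
    """4*pi*A / p^2: equals 1 for an ideal circle, smaller for
    elongated or ragged blobs."""
    A = np.asarray(img, bool).sum()
    p = crack_perimeter(img)
    return 4 * np.pi * A / p**2

square = np.ones((5, 5), dtype=int)
crack_perimeter(square)   # 4 sides x 5 edges = 20
circularity(square)       # pi/4, less compact than a circle
```

Because the crack code walks only horizontal and vertical edges, it overestimates the length of diagonal boundaries; the chain code's √2 weighting for diagonal moves gives a better estimate on such blobs.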
When it comes to describing a blob we can do more than just area, centroid position and bounding box. By looking at second-order moments we can compute an ellipse that has the same moments of inertia as the blob, and we can use its aspect ratio and orientation to describe the shape and orientation […]
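The equivalent ellipse comes out of the eigendecomposition of the blob's central second-moment matrix. Below is a minimal Python sketch of that computation (the lesson itself works in MATLAB; the function name and test blob are assumptions for illustration):

```python
import numpy as np

def equivalent_ellipse(img):
    """Ellipse with the same second-order moments of inertia as the blob.
    Returns (major axis, minor axis, orientation angle in radians)."""
    v, u = np.nonzero(img)              # row (v) and column (u) coordinates
    uc, vc = u.mean(), v.mean()         # centroid
    # normalized central second-order moments
    mu20 = ((u - uc) ** 2).mean()
    mu02 = ((v - vc) ** 2).mean()
    mu11 = ((u - uc) * (v - vc)).mean()
    lam = np.linalg.eigvalsh(np.array([[mu20, mu11],
                                       [mu11, mu02]]))   # ascending order
    minor, major = 2 * np.sqrt(lam[0]), 2 * np.sqrt(lam[1])
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)      # major-axis angle
    return major, minor, theta

blob = np.zeros((11, 31), dtype=int)
blob[3:8, 5:26] = 1                     # a 5 x 21 axis-aligned rectangle
major, minor, theta = equivalent_ellipse(blob)
# major/minor approximates the rectangle's 21:5 aspect ratio; theta is 0
```

The eigenvalues of the moment matrix give the squared semi-axis scales, and the eigenvector of the larger one points along the blob's long axis, which is why aspect ratio and orientation fall out of the same computation.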
In a binary image a white blob could contain one or more holes or black blobs. Those black blobs in turn could contain one or more white blobs and so on. Any blob that is surrounded by another blob, of the opposite color, is considered to be the child of the surrounding blob. This gives […]
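One way to recover this parent-child tree, sketched here in Python under assumptions of our own (the lesson does not prescribe this method), is to label white and black regions separately and then use a standard trick: a region's parent is whatever region sits directly above its topmost pixel.

```python
import numpy as np
from collections import deque

def label4(mask):
    """4-connected labeling of True pixels by flood fill (0 = unlabeled)."""
    lab = np.zeros(mask.shape, dtype=int)
    H, W = mask.shape
    n = 0
    for r0, c0 in zip(*np.nonzero(mask)):
        if lab[r0, c0]:
            continue
        n += 1
        lab[r0, c0] = n
        q = deque([(r0, c0)])
        while q:
            r, c = q.popleft()
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= rr < H and 0 <= cc < W and mask[rr, cc] and not lab[rr, cc]:
                    lab[rr, cc] = n
                    q.append((rr, cc))
    return lab

def blob_hierarchy(img):
    """Parent of each white ('W') or black ('B') region in the containment
    tree: the region of the pixel directly above its topmost pixel.
    The outer black background is the root (parent None)."""
    img = np.pad(np.asarray(img, int), 1)   # black border -> one outer region
    wlab, blab = label4(img == 1), label4(img == 0)
    key = lambda r, c: ('W', int(wlab[r, c])) if img[r, c] else ('B', int(blab[r, c]))
    first = {}
    for r in range(img.shape[0]):           # row-major scan: first hit is topmost
        for c in range(img.shape[1]):
            first.setdefault(key(r, c), (r, c))
    return {k: (key(r - 1, c) if r > 0 else None) for k, (r, c) in first.items()}

# a white ring enclosing a black hole, on a black background
ring = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
tree = blob_hierarchy(ring)
# the ring's parent is the outer background; the hole's parent is the ring
```

Walking the resulting dictionary from any region up through its parents reproduces the nesting of opposite-colored blobs that the excerpt describes.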
We will compare and contrast the terms image processing, computer vision and robotic vision — they have much in common but there are some subtle but important distinctions. When it comes to interpreting an image we typically try to find and describe regions, lines and interest points.
Let’s recap the important points from the topics we have covered about image features, blobs, connectivity analysis, and blob parameters such as centroid position, area, bounding box, moments, equivalent ellipse, and perimeter.
We learn how to describe the orientation of an object by a 2×2 rotation matrix which has some special properties. Try your hand at some online MATLAB problems. You’ll need to watch all the 2D “Spatial Maths” lessons to complete the problem set.
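The special properties of that 2×2 rotation matrix are easy to check numerically. The lesson's problem set is in MATLAB; here is an equivalent Python sketch using NumPy:

```python
import numpy as np

def rot2(theta):
    """2x2 rotation matrix for a rotation by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rot2(np.pi / 6)
# Special properties of a rotation matrix:
np.allclose(R.T @ R, np.eye(2))      # orthogonal: columns are unit length
                                     # and mutually perpendicular
np.isclose(np.linalg.det(R), 1.0)    # determinant +1: no reflection or scaling
np.allclose(np.linalg.inv(R), R.T)   # the inverse is simply the transpose
np.allclose(rot2(0.2) @ rot2(0.3), rot2(0.5))  # rotations compose by
                                               # adding their angles
```

The transpose-is-inverse property is what makes rotation matrices so convenient: mapping a point back into the original frame costs no matrix inversion at all.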
Light entering our eyes stimulates the photoreceptor cells in the retina: color-sensitive cone cells that we use in normal lighting conditions, and monochromatic rod cells that we use in low light. The density of these cells varies across the retina; it is high in the fovea and low in the peripheral vision region […]