LESSON
Coming soon: robots that drive
Transcript
There is a lot of interest today in robots that drive, otherwise known as self-driving cars, and such technology has been depicted in fiction for a very long time. The origin of self-driving car technology can probably be traced back to these two research robots from the 1960s.
On the left we have a machine known as the Stanford Cart; this one was built by Hans Moravec. The robot uses a vision system, in fact a stereo vision system, to reconstruct the three-dimensional nature of the world in which it's driving. It uses that information to plan a path so that it can avoid hitting any obstacles. This machine was excruciatingly slow, mostly because of the limited computing power that was available for the problem back in 1964.
The robot on the right is also pretty famous. It's known as Shakey; it was developed at SRI International in the late 1960s and its career continued through the 1970s. This robot also used vision to build a map of the environment in which it was navigating.

A major step forward in self-driving car technology was the DARPA Urban Challenge in 2007. A number of teams competed to build robot cars that could perform as well as human drivers. They had to perform tasks like moving into parking bays, do the right thing at intersections, and demonstrate that they could overtake, and do all of this safely, with skill levels comparable to human drivers.
The winner of that competition was this robot car called 'Boss', developed by Carnegie Mellon University, and we can see that it doesn't look anything like an ordinary car. It's bristling with all sorts of sensory devices, and a large part of the car is filled with high-performance computing equipment.
Now technology has evolved pretty rapidly, so by the year 2014 Google's cars looked much sleeker. There is really only one sensor that's obvious when you look at the car, and that is the device known as a Velodyne scanning laser range finder on the roof of the car. The way the robot car sees its world is shown here in what we call a point cloud image, and this is generated by that Velodyne scanner that we saw on top of the Google car.
The point cloud is a set of points in three-dimensional space, and they are typically color coded: the cool colors such as blue indicate the ground plane on which the robot is driving, points above the ground plane where it might be imprudent to drive are colored green, and points that are very high above the ground plane are colored red. From this fairly simple three-dimensional geometric model of the world surrounding the car, the software on board is able to make a number of decisions about which direction it should drive.
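To make that height-based coloring concrete, here is a minimal sketch in Python. It is not the software on the Google car; it simply labels points in a point cloud by their height above an assumed flat ground plane, and the thresholds are illustrative values chosen for the example.

```python
import numpy as np

def classify_points(points, ground_z=0.0, low=0.3, high=2.0):
    """Label each 3D point by its height above an assumed flat ground plane.

    points   : (N, 3) array of x, y, z coordinates in metres
    ground_z : estimated height of the ground plane (assumed flat here)
    low, high: illustrative thresholds separating ground / obstacle / tall
    """
    heights = points[:, 2] - ground_z
    labels = np.full(len(points), "ground", dtype=object)  # blue: safe to drive on
    labels[heights > low] = "obstacle"                      # green: imprudent to drive
    labels[heights > high] = "tall"                         # red: well above the ground
    return labels

# Example: three points at different heights above the road
cloud = np.array([[1.0, 2.0, 0.05],   # on the road surface
                  [4.0, 0.5, 1.10],   # another car or a pedestrian
                  [6.0, 1.0, 3.50]])  # overhead sign or tree canopy
print(classify_points(cloud))         # ['ground' 'obstacle' 'tall']
```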
The car's software is also able to see other objects: perhaps human beings, perhaps other cars, perhaps road signs. The software on board the vehicle has to take all of this rich sensory information, create a plan, and then send commands to the car to adjust the steering wheel, the throttle or the brake.
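That sense-plan-act cycle can be sketched as a simple loop. The `sensors`, `planner` and `actuators` objects and their methods below are hypothetical placeholders standing in for a car's perception, planning and low-level control subsystems, not any real vehicle's API.

```python
from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # steering wheel angle, radians
    throttle: float  # 0 (off) to 1 (full)
    brake: float     # 0 (off) to 1 (full)

def drive_loop(sensors, planner, actuators, running):
    """One possible shape of the sense-plan-act loop described above."""
    while running():
        point_cloud = sensors.read_lidar()        # sense: 3D geometry around the car
        obstacles = planner.detect(point_cloud)   # perceive: people, cars, road signs
        path = planner.plan(obstacles)            # plan: a safe path to follow
        steering, throttle, brake = planner.follow(path)  # turn the path into settings
        actuators.apply(Command(steering, throttle, brake))  # act: command the vehicle
```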
Self-driving cars are in the news a lot lately. An alternative way to think of such cars is as robots that carry people.