Robots and the future



In this section, I'd like to speculate about the more distant future. I'm sure everybody has heard of Moore's law. It was articulated by Gordon Moore, later a co-founder of Intel, in 1965. He was looking at the number of transistors that could be put on a silicon chip, and he plotted how that number of transistors per chip increased over time.

And he showed, in the very early days of silicon chip technology, that the transistor count doubled roughly every two years. In the four decades since then, that trend has continued, and this graph shows a line fitted to the transistor counts of a number of microprocessors developed over that period. So Moore's law is all about the number of transistors per chip.
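The doubling trend just described is easy to sketch as arithmetic. This is a minimal illustration, not a model from the lecture: the starting point (2,300 transistors, the Intel 4004 in 1971) is a real historical figure, and the clean two-year doubling is an idealisation of the trend.

```python
# Idealised Moore's law: transistor count doubles every two years.
# Starting point: the Intel 4004 (1971) with 2,300 transistors.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Idealised Moore's-law transistor count for a given year."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(f"1971: {transistors(1971):,.0f}")  # 2,300
print(f"2011: {transistors(2011):,.0f}")  # ~2.4 billion after 20 doublings
```

Forty years is twenty doublings, a factor of about a million, which is roughly what the fitted line in the graph shows.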

This is an interesting graph from Ray Kurzweil. And what he's plotted here is the number of calculations per second that you can get for a thousand dollars. And he's plotted that against time. And this curve is interesting because it's not linear. In fact, it's exponential.

And what Ray has done is taken a number of data points and plotted them and fitted an exponential to that. Now, what's really interesting about this graph are the horizontal dashed lines which correspond to the computational capability of various life forms. 
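Fitting an exponential to data points, as described above, is usually done by taking logarithms and fitting a straight line. The sketch below uses purely illustrative numbers, not Kurzweil's actual dataset.

```python
# Fit an exponential trend by least-squares on the log of the data.
# The data points here are hypothetical, chosen only to illustrate the method.
import math

years = [1970, 1980, 1990, 2000, 2010]
calcs_per_1000usd = [1e2, 1e4, 1e6, 1e8, 1e10]  # illustrative values

# Least-squares fit of log10(y) = a*year + b
n = len(years)
mean_x = sum(years) / n
mean_y = sum(math.log10(y) for y in calcs_per_1000usd) / n
a = sum((x - mean_x) * (math.log10(y) - mean_y)
        for x, y in zip(years, calcs_per_1000usd)) / sum((x - mean_x) ** 2 for x in years)
b = mean_y - a * mean_x

print(f"growth factor: x{10 ** a:.1f} per year")  # ~1.6x/year for this data
```

On a log-scale plot, an exponential like this appears as a straight line, which is why Kurzweil's curve is commonly drawn that way.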

So he has a horizontal line for an insect brain, a mouse brain, a human brain, and all human brains combined. Right now, we're at about this point on the curve. And if this graph is correct, it indicates that for a thousand dollars I could buy the computational equivalent of one mouse brain.

By the year 2030, I should be able to buy, for $1,000, a computer equivalent to a human brain. Most interesting of all is that by the year 2050, for a thousand dollars, I could buy the computational equivalent of all human brains on the planet.

The year 2050 is not that far away. For many of you, it will be within your working life. So there will be profound implications if we can have this much computational power available for so few dollars.

This graph plots the power density of various microprocessors against their clock frequency. And we can see that over time we're moving towards higher and higher power densities and higher and higher clock frequencies.

Down here, we can see the human brain. We see that it has an impressively low power density and it operates at a very, very low clock frequency. We can argue that where we're going with silicon computational technology, we're going in the wrong direction. We're going further and further away from the amazing capability of the human brain.

Now it's possible to simulate the human brain on a supercomputer. Apparently, these simulations run at something like 1/500 of real time. And if we were to scale them up to operate in real time, the computer would be enormous and would require 12 gigawatts of electricity. You could power New York with that much electricity.
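The scaling argument above can be sketched as back-of-the-envelope arithmetic. The 1/500 real-time figure and the 12 gigawatts both come from the lecture; the assumption that power scales linearly with speed-up is a simplification introduced here for illustration.

```python
# Back-of-the-envelope sketch of the brain-simulation scaling argument.
# Lecture figures: simulation runs at 1/500 of real time, and a real-time
# version would draw ~12 GW. Assuming (simplistically) that power scales
# linearly with speed-up, we can infer the implied current power draw.
slowdown = 500            # simulation speed: 1/500 of real time
power_realtime_gw = 12.0  # gigawatts for a hypothetical real-time simulation

power_current_mw = power_realtime_gw * 1000 / slowdown  # megawatts
print(f"Implied current power draw: {power_current_mw:.0f} MW")  # 24 MW
```

Even the un-scaled simulation, on this simple estimate, draws megawatts, while the human brain it mimics runs on roughly the power of a dim light bulb.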

So clearly, if we're trying to get human-like intelligence in robots, following this traditional approach is going to require computers that are massive, expensive, and consume phenomenal amounts of electricity. What can we do to get around that?

Well, some people are looking at different computing architectures, called neuromorphic computing, modeled on the neural-connectionist structure of human brains. So these are like neural networks on steroids and embedded in custom silicon chips. So perhaps, this is the future hardware in which we will create the intelligence that will power future robots. 

Following on from this biological approach to computation, where we learn from nature about the way brains compute, we can consider perhaps the most important biological sensing invention: vision.

Vision is an amazing sense shared by almost all creatures on the planet. The sense of vision has evolved multiple times independently and in parallel. And today, there is a vast variety of different sorts of eyes on creatures on the planet.

Here we see just a few of them: the pinhole-camera eye of a nautilus, the multiple eyes of a spider, the compound eyes of an insect, the eyes of a bird, the small reflector-based eyes of a scallop, and the eyes of an octopus.

Eyes are everywhere. They are a really important sensing modality for all living creatures, and I believe very strongly they're an important sensing modality for robots. So if you're interested in vision, the sense of vision, and how robots might exploit it, then I invite you to join me in the follow-up MOOC, Robotic Vision, which is coming soon.

So what are the important take-home messages from these six weeks we've spent looking at robots? The first message I'd like you to take home is that robots are real. They're not all like the fictional robots that you read about in books or see on television and in movie theatres. There are real robots out there doing useful things on the planet today.

These robots are diverse in form and function. They don't look like the humanoid robots we see in fiction. They might look like cars, they might look like tractors or airplanes or submarines, but they are robots because they follow this fundamental paradigm of sensing, planning, and acting so that they can achieve a goal. That's what robots are.

The functionality of robots is improving incredibly rapidly and that's driven by improvements in technology, computational technology, and the ready availability of amazing sensors, inertial sensors, visual sensors, and so on.

But there are some really important non-technical take-home messages here. And in this lecture, we talked a lot about the big challenges that confront all of us as a society and these are problems around population and food and jobs and care for the elderly.

These are massive challenges that our societies face and I really believe that robots are part of the solution to these challenges. But as roboticists, I believe it's important that we think much more widely about the implications of the technology that we are developing.

Yes, I believe robotics really is the answer to many of these challenges but we do need to be aware of the way the technology will be used. We need to understand about the ethics, we need to understand about the sorts of concerns and considerations that general members of the public hold. So it's our responsibility, as roboticists, to employ and deploy this technology wisely.


We speculate about the more distant future, and how robots may impact our civilization.

Professor Peter Corke

Professor of Robotic Vision at QUT and Director of the Australian Centre for Robotic Vision (ACRV). Peter is also a Fellow of the IEEE, a senior Fellow of the Higher Education Academy, and on the editorial board of several robotics research journals.

Skill level

This content assumes only general knowledge.

