MASTERCLASS
Robots and the future
Transcript
Peter: It's a sad fact that human beings have a shocking propensity for conflict. It's estimated that in the last century some 230 million people were killed in war, and most of them were non-combatants. Many rich countries are now pushing hard into robotic technology to reduce soldier casualty rates, and developing so-called smart weapons intended to reduce the deaths of non-combatants. Right now, these robots are not fully autonomous: there is a human being responsible for actually firing the weapon and causing harm or death to another human being. But there seems to be growing pressure to take that human being out of the firing loop, and to have robots making autonomous decisions about whether or not they kill human beings. So, what does ethics have to say about this situation?
Doug: I think the term moral agency is important here. Moral agency is about whether that robot, or that individual, has the capacity to decide whether something is right or wrong.
Peter: Okay, but human beings struggle with this. So how would we expect a machine to be able to do that?
Doug: Well, probably they don't, not at this point. So without moral agency, ethics doesn't apply to the machine in this context. The machine is neither ethical nor unethical.
Peter: Because it cannot tell right from wrong?
Doug: It has no moral agency. But what that means is that if the machine has no moral agency, then we come back to the person running or programming the machine: the moral agency falls back to them. So that individual, those engineers or that robot designer, is the one who carries the ethical responsibility.
Peter: Okay, because the machine that they have created does not have this…
Doug: It does not have moral agency, correct.
Peter: …the responsibility is back on the person who either created it or deployed it.
Doug: Yes. There's quite a literature on this, which I've only just discovered: robot ethics and machine ethics. There's quite a bit out there that debates this back and forth, as you well know. We've talked about this a little. So for me, moral agency is the defining construct.
Peter: It's a topic that we talk about a little in the robotics community. I don't think we talk about it nearly enough, but this idea of robots in warfare is the one that really seems to get people excited or angry. It's already starting to cause a lot of contention.
Doug: That doesn't surprise me. And Hollywood, of course, has done wonders with that: the Arnold Schwarzenegger series, robots taking over the world. It's been a common theme for ages.
Peter: Absolutely. It's the recurring theme in all robot movies.
Doug: Yes. We were just discussing this beforehand: Isaac Asimov and his 1942 short story, Runaround.
Peter: Absolutely. This brings in his famous laws of robotics, yes?
Doug: Exactly. Shall I read them?
Peter: Yes.
Doug: I love it. I only discovered this recently. So, he's got four laws; a new law was added later.
Peter: The zeroth law.
Doug: Yes. The idea is that robots should, under all circumstances, obey these laws.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
Yes, that's the one about harming.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
In other words, as long as that doesn't mean harming humans. And there's one more, the zeroth law, which Asimov added later.
0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.
So, this forms the basis for some of the machine ethics discussion, which is fascinating, but I keep coming back to moral agency.
Peter: So, if a machine had these rules encoded in its control computer, would that robot have moral agency?
Doug: If it could make that decision, yes.
Peter: Okay.
Doug: Yes, good point. It would. And at that point, then we have ethical or unethical behavior.
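To make Peter's question concrete, the sketch below shows one naive way such laws might be written down as a prioritized filter over candidate actions. It is only an illustration, not material from the lesson: the Action type, its fields such as harms_human, and the choose function are all hypothetical, and populating those fields would require exactly the perception and prediction capabilities discussed next.

# A naive, purely illustrative encoding of Asimov's laws as a prioritized
# filter over candidate actions. Every name here is hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False     # zeroth law: predicted harm to humanity
    harms_human: bool = False        # first law: predicted harm to a person
    ordered_by_human: bool = False   # second law: was this action commanded?
    endangers_robot: bool = False    # third law: does it risk the robot itself?

def lawful(action: Action) -> bool:
    # Zeroth and first laws: never harm humanity or a human being.
    return not (action.harms_humanity or action.harms_human)

def choose(candidates: list[Action]) -> Action | None:
    # Discard anything that violates the zeroth or first law.
    legal = [a for a in candidates if lawful(a)]
    if not legal:
        return None
    # Second law: prefer actions ordered by a human, if any remain.
    preferred = [a for a in legal if a.ordered_by_human] or legal
    # Third law: among those, prefer actions that do not endanger the robot.
    safe = [a for a in preferred if not a.endangers_robot] or preferred
    return safe[0]

if __name__ == "__main__":
    options = [
        Action("fire weapon", harms_human=True, ordered_by_human=True),
        Action("hold fire"),
    ]
    print(choose(options).name)      # prints "hold fire"

Even in this toy form, everything hinges on boolean predictions about harm that no current robot can make reliably, which is the gap Peter describes below.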
Peter: What robots today lack is the ability, first of all, to sufficiently understand what's going on in the world, but also, I think, to understand the consequences of their actions, to predict into the future: 'If I did this, it would cause harm or injury to a human being.'
Doug: Yes.
Peter: We're a long, long way off that in robotics. We would need to have that kind of capability in order to implement something like these laws of robotics.
Doug: Absolutely, yes. What surprises me is how far we've come, Peter, and you of all people would know that. The speed of what you guys are doing in robotics is staggering.
Peter: We have come a long way, but I think we've got an awfully long way to go. Just in terms of perceiving the state of the world, there's a lot of work to do. And figuring out the consequences of our actions, robotically, I think that's a huge body of work yet to be done.
Doug: Yes. Yes, I agree.
I ask Doug Baker what ethics says about robots making autonomous decisions to kill during combat, and he responds by explaining the importance of moral agency. We also discuss progress in robotics as measured against Asimov's laws of robotics.