Nicholas Pell
Jan 3, 2012

Rise of the machines? AI morality and robot ethics

In popular imagination, intelligent machines are nearly synonymous with a robot apocalypse. However, such doomsday scenarios are not the subject of the new book Moral Machines by Wendell Wallach and Colin Allen.

In fact, truly morally capable machines may belong more to the realm of science fiction than science fact.

A fundamental misunderstanding of the human brain underpins anxiety about thinking machines. The human brain is not merely a collection of synapses and processing power. Consciousness, while not well understood, is an integral part of the human brain. So are memories and emotions, to say nothing of the body the brain resides in. Put simply, your brain is not merely an organic computer; it is something more than the sum of its parts. And even a computer as complex as your brain would not be an “inorganic brain.”

I reached out to Wendell Wallach, and he outlined the misunderstanding in clear terms. “The whole field is framed by people who work in robotics and try to find answers to limited problems,” he said. “On the other hand, you have Singularitarians who think it’s only a matter of time before we will have not only machines with all of our capabilities, but many others through an accelerator effect.” In other words, the practical problems of robotics aren’t anywhere near the level of Terminator 2: Judgment Day. Those who worry about such things tend not to be roboticists. Computer engineers have enough trouble making machines that can adequately recognize faces or distinguish between the words “Polish” and “polish.”

Computer ethicists instead work on more concrete issues. I spoke with Colin Allen, director of the Program in Cognitive Science at Indiana University, Bloomington, and he explained, “You want to maximize welfare to some degree and respect individual wishes to some degree, but really those are just relevant considerations that might not be solvable by any formula.”

Wallach, in our discussion, raised the point that whether the machines actually “know” the difference between right and wrong is a secondary issue. “Our point,” says Wallach, “is that machines are becoming more autonomous. At the least, if you want to ensure these machines are safe, you need to ensure that these machines are aware of the moral ramifications.” One concrete example Wallach used is a robotic companion doll for a child. At what point does the doll alert the parent about problematic behavior in the child? If the child acts destructively toward the doll, does the doll say something like “Stop, you’re hurting me”? Would such a response be beneficial or traumatizing to the child?

There are no easy answers. Finding the more difficult answers is precisely what this field of ethical inquiry is all about. The more apocalyptic questions of robot uprisings are far off in the future. Wallach pointed out to me that “even some of the problems we’re talking about in our book might be putting the cart before the horse.”

Surprisingly, military robot ethics is perhaps the most important topic in the field. Allen explained the divide between those who believe robots should be bound by the Geneva Convention’s laws of war and those who argue that this new technology demands a new set of ethics. According to Allen, some critics of Geneva Convention limitations on robots see this approach as “a slippery slope to robot Armageddon or an entryway to easier wars.”

But again, these issues are rather far afield. Practical robotics deals with far more immediate questions, and the ethics of robotics and artificial intelligence are correspondingly concrete concerns rather than strategies for preventing machines from taking over the world. “In the very long term, some of these issues might become more tractable,” Allen said. However, “we need to keep our eye on what types of things are feasible on the immediate horizon.” Specifically, this means “ethical sensitivity and moral principles in the technologies we have, without going all sci fi.”