Artificial intelligence expert Massimiliano Versace addresses our persistent — yet mostly unfounded — fear of a robot revolution.
From HAL to The Terminator to The Matrix, American pop culture harbors an enduring fear that robots will one day overthrow the human race. But could it really happen? Geek talked to Professor Massimiliano Versace, head of Boston University’s Neuromorphics Lab, to gauge the odds.
Versace and his team operate at the leading edge of artificial intelligence, designing computer brains that learn from their environments and use subroutines that resemble emotions. He is currently working on two separate projects with NASA: one is to create autonomous drone aircraft and the other could be part of the next Mars mission.
His explanation of the robot-human paradigm offered some assurance that we should be safe from our increasingly complex devices. Should.
GEEK: Why do you think our culture has an obsession with robots or computers overthrowing the human race?
Dr. Massimiliano Versace: The main reason people are afraid of robots is because they’re afraid of themselves. Think about The Matrix. Morpheus explains that humans are grown so the machines can harvest energy from them. But think of what humans do to animals. They grow animals. They slaughter them. And they eat them. And on a massive scale. So why are we afraid that robots are going to do this to us? Because we are doing it constantly, systematically, on a daily basis.
Why don’t you think it would happen?
For a couple of reasons. The first is that we currently are, from the robot’s perspective, “god.” We’re able to build in safety mechanisms that could prevent these robots from turning against us. For instance, you can give military robots only non-lethal weapons. The second reason is a bit more subtle. Humans turn against each other mostly because they’re competing for resources. You kill because of food or because you want somebody’s property. That’s why people hurt other people and why wars are fought. What would robotic organisms be competing with humans for? They can’t really extract the same sort of energy from us. A robot can be powered by electricity, but not, directly, by a human.
Do you think something like Isaac Asimov’s Three Laws of Robotics would be appropriate?
Yes, though Asimov’s laws are tricky. Let’s imagine that you send robots to war. You cannot always have a human in the loop because there are going to be too many robots; you have to have many automatic systems. If you have, say, an unmanned fighter jet off in the Pacific and it’s attacked by something it can’t tell is human-piloted or a drone, would the drone fight back? My assumption is that, yes, it would fight back. The Laws of Robotics are nice, but in practice they’re difficult to follow.
Could a robot be programmed to kill humans?
Like any technology, robots could be used for good or evil. You can have robots as a kind of nuclear deterrent, where everybody has lethal robotic technology so nobody uses it, or you can go the way where everybody has robots and everybody uses them. The consequences are difficult to grasp, given the de-personalizing nature of wars fought by machines.
Your initial rejection of the idea of a Terminator-like scenario was that human beings are very cruel, and robots have a long way to go to catch up. Can you elaborate on that?
If you look at the history of humankind, you find countless examples of cruelty: genocides, murders. To develop the need to destroy something, as in humans, there has to be a motivation. What would be the drive for a robot to destroy all humans? As we build these creatures, it’s not as if the first thing they do is start killing the people around them. Once you introduce things such as competition, that’s another story. If a robot is competing with another robot for resources then, potentially, you can see it hurting the other robot, “killing” it, or pushing it away from the source of food and support.
So in terms of fictional scenarios, you think HAL locking Dave out of the ship in 2001 is more likely than Skynet in The Terminator?
It certainly is a simpler task. Achieving something at the level of Skynet requires a lot of abstract thinking and sequencing. What Skynet did is very complex: it has to develop a long-term plan, then develop strategies, step by step, sequence them correctly [and] hide them from humans. I mean, the intelligence actually required to carry out a plan like Skynet’s is, I think, 30 years away from our capabilities now.
You’ve also worked on emotions in computers. How does that play into this discussion?
I was running in Italy two years ago and I saw this thing coming out of the bushes and I jumped instantly, instinctively. I looked back. Sure enough, it was a black snake. What struck me was my ability to make a decision in, say, 200 milliseconds and jump without even knowing what I was jumping away from, because I didn’t recognize the thing right away. Using emotions as shortcuts to your brain is useful. That’s why it makes me laugh when people say you shouldn’t program emotions in robots. That’s not true. You should.
What about other emotions, like envy or anger?
Envy would be a crucial engine for a robot that wants to succeed relative to other robots. If a robot has a strong drive to, say, accumulate more energy, the robot that develops the equivalent of “envy” will probably have an evolutionary advantage, because it will try to get things from other robots.
Being on the cutting edge of artificial intelligence, have you seen anything that resembles the machines from The Matrix or The Terminator?
I’ve seen Terminator-like features emerging. Not in terms of “evil,” but in terms of what machines are going to look like. If you come to our lab, you can see the visual system we are developing: a red eye that moves around and zooms in and out. But our robot is much kinder, like its creators.
Photo by Matt M. Casey