Stephen Hawking made quite a stir with a few comments about robotics.
Well, more accurately, he was talking about artificial intelligence and the dangers he sees it posing to humanity. He asserted that:
"Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
It's not a far leap in logic to postulate that this would include robots endowed with AI capabilities, the so-called "robot uprising" that has been popular in media since time immemorial. Like I said, the comments caused something of a stir in the various technological communities. After all, the opinion of a genius-level scientist such as Hawking can go a long way in forming the opinions of the public writ large.
Which is why I was glad to read this article by Mark Bishop in New Scientist.
Bishop is a professor of Cognitive Computing at Goldsmiths, University of London. He disagrees with Hawking's postulation, arguing that the level of AI that Hawking refers to is unlikely to develop. Supporting that assertion, Bishop identifies three interesting points:
- Computers lack understanding. A program may scan a text and recognize aspects of it, but it has no genuine understanding.
- Computers lack consciousness. He says: "An argument can be made, one I call Dancing with Pixies, that if a robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on. If we reject this wider state of affairs – known as panpsychism – we must reject machine consciousness."
- Computers lack mathematical insight. In other words, the way human mathematicians tackle a complex problem is psychologically un-mathematical.
This is not to say that there's nothing to worry about, or that we should just let the AIs run free. Bishop concludes (rather chillingly):
"In my role as an AI expert on the International Committee for Robot Arms Control, I am particularly concerned by the potential deployment of robotic weapons systems that can militarily engage without human intervention. This is precisely because current AI is not akin to human intelligence, and poorly designed autonomous systems have the potential to rapidly escalate dangerous situations to catastrophic conclusions when pitted against each other. Such systems can exhibit genuine artificial stupidity.
It is possible to agree that AI may pose an existential threat to humanity, but without ever having to imagine that it will become more intelligent than us."
That's right. It's not that AI robots will be smarter than us; perhaps they will be dumber than us. More accurately, they might be born of our own stupidity. That's a sobering consideration and a more immediate concern than Hawking's flapdoodle, in my opinion.
I do wonder about Bishop's views of AI, though. I don't want to call him a defeatist, as he obviously knows far more about the subject than I do, but something in my gut is nagging me. Could the shortcomings he mentioned in his three points eventually be overcome? And which should we be more apprehensive of: a conscious robot, or one that operates purely on logic?
Nah, I'm still more scared of human stupidity.
Follow me on Twitter: @Jntweets