By Barry Collins
A leading British futurologist has warned that human beings need to "join forces" with machines to avoid the threat of autonomous systems taking over.
Professor Peter Cochrane, the former head of research at BT, claims artificial intelligence has developed to such a degree that humans need to develop strategies to cope with intelligent machines.
"Back in 2006 the internet was the same size as a human brain in terms of processing power and storage," Cochrane told PC Pro. "Every six years the internet gets a thousand times bigger. By 2012, the internet will be a thousand times bigger than any brain."
"When we start to see machines becoming creative, people start to get worried. The first public one of these was Garry Kasparov being whopped by Deep Blue. Now machines have started to invent things and design machines and evolve solutions - it's a different kind of game."
Cochrane claims that such developments are moving computing into the realms of Hollywood fiction. "One hypothesis is a little bit like Skynet in Terminator: the internet at some point is going to develop some kind of self-awareness and consciousness," Cochrane predicts.
"Whilst that may seem a bit outrageous, if you think in terms of all the mobile phones in the world on a network - and all the mobile phones have cameras, microphones and sensors and things like that - the network starts to look more like a nervous system."
Out of control
The fellow of The Royal Academy of Engineering claims that no human is able to match the intelligence of the machines, forcing us to develop alternative strategies. "Because our world will always be chaotic in a mathematical sense, no human being, no scientist, no-one has the wisdom or capability to manage the situation," he warns.
"If we don't join forces with our machines and start modelling and war-gaming situations and decisions involving companies and governments, we're going to increasingly have things like financial crashes and catastrophes of the nature of the Gulf War."
"It's kind of interesting that the Americans modelled the Gulf War for three years before they did it," he adds. "But they never modelled the peace. So the war itself was a great success but the peace has been a disaster."
Professor Cochrane is also deeply concerned about the development of autonomous weapons systems, and the fact they're already being deployed in battles across the world.
He says that humans are wilfully disobeying Asimov's Laws of Robotics, the first of which demands that a robot may not injure a human being.
"The first one of these I saw was effectively a landmine on tracks. It was the size of a Flymo. You either throw it through the window of a building or down a cave complex.
"These little suckers can go up stairs, and they just wander around until they see something nice and warm and then they just snuggle up and explode.
"If I was a wounded soldier on the friendly side and that sucker came through the door, I'd think that was fairly tragic."
Cochrane concedes, however, that ethically there's not much difference between such smart weaponry and "old-fashioned" technology, such as cruise missiles. "A cruise missile is probably the biggest example you can give of an autonomous weapons system," he says.
"Once launched, it's on its way, you can't do a thing about it: the robot kills you if you like. Personally, I don't see a lot of difference. It slightly horrifies me that we tend to do these things."