We love a good “The Robots Are Coming” story, and we always get a thrill contemplating the idea that one day our toaster will turn against us and brown us instead of our daily bread. But when the Big Thinkers get together to warn each other about machines that can think on their own and react without us, we begin to pee our pants a wee bit.
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones that, though still remotely controlled by humans, come close to machines that can kill autonomously.
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone. Their concern is that further advances could create profound social disruptions and even have dangerous consequences.
As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.
We believe in our Bionic God.
We understand we already have Android Assassins.
We do, however, wonder if we will ever draw a bright line in the sand between us and them, the “Them” that we have created. And we will always question whether a sentient being is simply something that can survive on its own, or whether True Life, as opposed to artificial life, requires the substance of something greater than us: a spirit, a soul, a sense of being that is ethereal and not mechanical.