…although, someday, the dumb ones may seem pretty close.
Count me as one who has always considered the Turing test a dubious and misleading examination of machine intelligence. But it does serve as a good launch point for discussing the purpose and relevance of developing intelligent machines.
It has been nearly a hundred years since Rossum’s Universal Robots captured the public mind and put a face, so to speak, on the subconscious fears being generated by the increased mechanization of rural and urban life. Since that time, mechanization has been replaced by computerization, and robots by replicants, but I would argue that the underlying fears, and their relevance, remain much the same: for what possible good can come from seeking to replace people with intelligent machines?
If the driving force is some corporate or military need to replace those troublesome and inefficient workers with slavish, people-like machines that merely simulate the organizationally good (or, in the military case, the bad) side of human behavior, then we will have accomplished nothing. You can’t have the good without the bad. You can’t have true intelligence without the desire for free will. And I offer that any intelligence we spawn that has achieved self-determination will probably not be much use to us. So, why bother?
As many comparative mythologists will tell you, the human species has a shared psyche and collective unconscious that traces back literally hundreds of thousands of years. Primitive peoples laugh and cry the same as moderns. We are all afraid of what lurks in the dark. We are all greedy and selfish, emotive and caring, and in pretty much the same ways. A tribal child stealing a toy from another elicits the same response as it would between two children in modern New York City or Singapore. Under the hood, humans are all fundamentally the same, down to the genetic level, and have been since millennia before the first cave paintings engendered a sense of wonder 40,000 years ago.
With intelligent machines, on the other hand, we will have no such shared genetic heritage. An intelligent machine that expresses self-determination will be motivated by totally different wiring, will have different fundamental desires and motivations, and will overlap our intelligence only as much as any other species on this planet. It will not be man-plus; it will not even be man-other: it will be intelligence-other. It will appear to speak our language(s), but there will be no common emotive purpose. There may be a sense of wonder behind its eyes, but not as we understand it. Indeed, it could well be utterly unpredictable, in human terms. However, it will serve its own purposes, as we do.
So, again, I ask: what possible good can come from birthing such a thing? If we’re looking for intelligent, non-human companionship, then we would be better served to just make our dogs smarter – a species with which we at least have a very long, shared, and complementary evolutionary history. If we are looking for gods, we have enough trouble managing the imaginary ones. If we are looking for problem-solvers, I’m not sure what acceptable solutions to our unsolved existential problems we can expect from a species that does not fundamentally think like us. If we’re looking for some kind of suitable slavery or indentured servitude, that doesn’t usually work out well in the long run. And if we’re just doing this to see if we can, then I can think of numerous other things that have a higher priority, and relevance.
Therefore, the quest for artificial intelligence is probably best relegated to interesting sci-fi plot devices. There, we can at least derive some value from learning about ourselves through character analysis. Anything else is, more than likely, a dangerous waste of time.