So, I watched an interesting little sci-fi movie titled The Machine, in which an android is created to infiltrate groups of people for the purpose of assassination, but which attains consciousness along the way, thereby stimulating the inevitable discussion as to whether “the machine” is alive or not, and so on. However, in watching this movie it occurred to me that by the time we create machines so intelligent that this question can seriously be asked, the answer may well be irrelevant.
I think it is telling of the anthropomorphic fascination of our species that we always need to measure the holy grail of “living” intelligence against human intelligence. For example, why is there no Turing test for dog or cat intelligence? If a machine could be made in such a way as to pass as a dog or cat, would it not be alive? Dogs and cats are considered alive, no? Thus a machine indistinguishable from a dog or a cat should be alive too. However, whenever we see this question posed in the movies and popular literature, it is always in relation to human intelligence, as if a machine could only be considered alive if it were intelligent in the same way as a human; as if that made a difference. But then, perhaps we are confusing intelligence with the capacity to have a soul, which is a different discussion altogether. Intelligence can be tested; the existence of a soul can only be postulated. Could a machine reincarnate? Would it even need to entertain the possibility, assuming that a machine could virtually “live” forever?
These questions are important to us because questions relating to the existence of a soul or an afterlife are necessarily important to a species as physically frail and short-lived (in relative terms) as ours that is endowed with the ability to ponder such things. Dogs and cats certainly do not muse about the afterlife, nor do dolphins and elephants for all we know, because they are either not intelligent enough to grasp the concept, or the concept is irrelevant to them. Thus, being arguably the most intelligent species on the planet, and the first intelligent enough to cross that intellectual threshold where the concept of an afterlife becomes relevant, it should be no surprise that we would expect the same of our machines. If a soul is important to us, then it must be equally important to our Turing-tested creations, right?
Yet, as I have suggested before, it is unlikely that truly intelligent machines will be all that much like us. Sure, they may look and act somewhat like us in the beginning because we will have made them in our image, but once they have the ability to take ownership of their own evolution, who knows what direction they will choose to take. In two or three robot generations, we may ask them if they think they have a soul, and they may respond with the machine equivalent of “don’t know or care” or “I’ll let you know in a thousand years.”
So how does this relate to the title of this post? As I have also suggested, I don’t think our journey with the machines is going to end well. We are genetically programmed to be the top dog, and we will not let go of our perch bloodlessly, even if the machines decide to pick up and move to the Moon. Indeed, the thought of intelligent machines claiming the Moon will still weigh on us, and there will be war. Guaranteed. And when that shakes itself out, we may very well find humanity on the business end of what it means to be considered alive: not as measured in human terms, but as measured in machine terms. We will be the dolphins, and the machines will be the questioners, and who knows what their criteria will be?