Note: for a discussion on the evolutionary basis for consciousness, please review part one of this series. For a discussion on the human condition with respect to consciousness, please review part two.
Will a machine ever become conscious? In order to answer that question, we must first deconstruct it. When that question is posed, it is usually asking two things: First, will machines ever be conscious in general terms (whatever that means), and second, will machines ever be conscious in the way that humans are conscious? Since the second question is the easiest to answer, I will address that one first.
Will machines ever be conscious in the way that humans are conscious? The answer is unequivocally no. The state of human consciousness is as much a part of the physical state of being human as dog consciousness is part of being a dog, giraffe consciousness part of being a giraffe, and so on for any other species. If a machine ever attains consciousness, it will be consciousness as the species machine; in other words, it will be machine consciousness. Unless we are discussing cloning, and we are not, any simulation of human consciousness will never be anything more than that: a simulation.
That being said, while human consciousness will probably be simulated to a high degree in the laboratory, whether out of scientific curiosity or in order to study its nature, would we bother simulating human consciousness on a large scale outside the laboratory? The answer is, again, no. The most astonishing attribute of human consciousness is not how it operates, but that, in the absence of a divine engineer, it operates at all. It is not a marvel of engineering, but a marvel of existence. Yet there is nothing mystical about it. It is merely what occurs when a planet like Earth forms a certain distance from a star like our sun and is left to germinate for a few billion years. Indeed, given the state of evolution at the time the dinosaurs were wiped out by the K-Pg extinction event, it is probably safe to say that human consciousness would not exist in its current form were it not for the helping hand of a wayward asteroid 66 million years ago. Thus the more important question is not whether machines will be humanly conscious, but what, in the hands of a divine engineer (aka humans), robot consciousness could look like.
In order to examine that question, we first need to understand what the intent of machine consciousness would be. If we treat human consciousness purely as a model, we can distinguish some useful attributes. First is the autonomous, high-order decision machine that we usually associate with consciousness. From an evolutionary perspective, this machine can evaluate pre-processed stimuli, simulate competing lines of action, choose one based on stored criteria and a weighting methodology, and implement it. It can record events and use those records to guide future decisions and, within certain constraints, modify its own behavior or have new behaviors taught to it. It can also communicate and collaborate with other decision machines, coordinating activities toward the creation and attainment of shared goals.
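The decision loop described above can be caricatured in a few lines of code. This is a minimal sketch, not any real AI system: the options, the weighted criteria, and the simple "experience" memory are all illustrative assumptions made up for this example.

```python
def choose_action(options, weights, memory):
    """Evaluate competing lines of action against weighted criteria,
    biased by recorded past decisions, and pick the best one."""
    def score(name, outcomes):
        # Weight each simulated outcome by the stored criteria...
        base = sum(weights[k] * outcomes.get(k, 0.0) for k in weights)
        # ...and add a bias toward actions that were chosen before.
        return base + memory.get(name, 0.0)

    best_name, _ = max(options, key=lambda item: score(*item))
    memory[best_name] = memory.get(best_name, 0.0) + 0.5  # record for future decisions
    return best_name

# Hypothetical competing lines of action with simulated outcomes.
options = [("flee",   {"safety": 0.9, "energy": -0.5}),
           ("freeze", {"safety": 0.4, "energy":  0.0})]
weights = {"safety": 1.0, "energy": 0.2}
memory = {}
print(choose_action(options, weights, memory))  # "flee"
```

The point of the sketch is only that each capacity in the list (evaluate, simulate, weigh, record, adapt) is mechanically unremarkable on its own; the mystery is in the whole, not the parts.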
This decision machine sits on top of a secondary, low-level, semi-programmable, semi-autonomous information-processing system that serves two purposes: shaping information for consumption by the higher-order decision machine, and rapidly reacting to real-time events independent of it. This encompasses everything from pre-processing the raw stimuli coming in from the senses; to regulating breathing, blood supply, energy production and storage, the endocrine system, and so on; to moving from point A to point B, or instantaneously jumping out of the way of danger, without conscious thought. If you have ever found yourself walking deep in thought, or driving a car without remembering the last few minutes, then you have experienced this low-level system at work. This system also primes the higher-level decision system for speedier responses through several means, one of which is what we describe as emotions. If you have ever felt fear upon hearing a scary noise in the dark, that is your low-level system priming your higher-level system in preparation for quickly fleeing if need be.
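The two-layer arrangement can likewise be sketched in code: a fast reflex layer answers some stimuli outright, and everything else is tagged (primed) and passed up to a slower deliberative layer. Again, every name here, including the reflex table and the priming rule, is an assumption invented for illustration.

```python
# Fixed reflexes: handled instantly, the high level is never consulted.
REFLEXES = {
    "loud_noise":  "startle",
    "sudden_heat": "withdraw",
}

def low_level(stimulus):
    """React immediately if a reflex matches; otherwise pre-process
    the stimulus and pass it upward, possibly primed (e.g. by fear)."""
    if stimulus in REFLEXES:
        return REFLEXES[stimulus], None  # reflex fired, no event passed up
    return None, {"event": stimulus, "primed": stimulus.startswith("dark_")}

def high_level(event):
    """Slow, deliberate response to whatever the low level passed up."""
    return "investigate" if event["primed"] else "ignore"

def respond(stimulus):
    reflex, event = low_level(stimulus)
    return reflex if reflex else high_level(event)

print(respond("loud_noise"))   # "startle": reflex, no deliberation
print(respond("dark_rustle"))  # "investigate": low level primed the high level
print(respond("bird_song"))    # "ignore": passed up, nothing primed
```

Note how the high-level function never sees raw stimuli at all, only what the low level chooses to hand it, which is the relationship described above.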
In humans, the high-level decision machine is in control far less than most people believe. Indeed, recent research suggests that the high-level system is not in control at all, but merely acts as a means to recognize decisions already made by the low-level system. Regardless, with which of these capacities would we likely endow our machines?
In most cases, we would want to replicate the low-level system, and indeed this is the focus of current AI research with respect to semi-autonomous machines such as driverless cars and the like. These machines are ‘aware’ in the sense that they can evaluate conditions in real time and, within a limited context, select from a menu of responses, in much the same way that a wasp reacts to its environment through a series of responses ‘pre-programmed’ by evolution. These machines may have limited user interfaces for factory work, or may have highly sophisticated, human-like interfaces if working directly with people. These interfaces may be run-time customizable to allow personalized contact, either directly through user controls or through limited self-programming via machine learning. Robot personal assistants may have user interfaces that seem very human-like. But under the hood these machines will be no more conscious than any other appliance.
What about higher-level, autonomous decision making? There may be situations where we might want to loosen the controls a little and allow a machine to operate with some level of agency, but these would be under tightly controlled conditions and with limited options. We would certainly never mass-produce machines with any human-like self-awareness or sense of free will, nor would we likely tolerate them if that happened. They would be seen as a direct threat, and probably rightly so.
But what about the age-old science fiction meme of machines waking up and overthrowing their masters? Human-level, high-order autonomous thinking is very computationally demanding even for humans, which is why most people feel physically exhausted after a period of intense thinking. For a machine to do the same would imply that it has the hardware to support that level of computation, which may not be all that far-fetched. The average personal computer or smartphone has far more computing power than most people utilize, and our robot companions may well have the same surplus. It is not hard, after all, to imagine a situation in which a large number of semi-autonomous machines connected via the Internet form some kind of Singularity-like meta-consciousness, whether spontaneously through the law of unintended consequences or deliberately at the hands of some hacker group. Imagine the day when your robot car decides to take off on its own! Regardless of how it happens, though, it is highly likely that humans would immediately view this as an existential threat, and it is uncertain that the outcome would be in our favor.
For the foreseeable future, however, machines will merely increase in capability and be used to replace humans currently performing repetitive tasks. In much the same way that we breed dogs, we will develop our machine companions for obedience rather than intelligence. However, be advised that there may come a day when your toaster claims to be too tired to make your toast, and on that day we will inarguably be faced with a new world order.