Despite conflicting opinions on whether human beings will succeed in creating an artificial intelligence, the possibility is very real and must be considered from both ethical and philosophical perspectives. Substantial thought must be given not only to whether human beings can create an AI, but to whether they should.

What would be the implications of integrating AIs into human societies? The potential benefits of such integration are astounding, and have been a popular subject of speculation. A famous example of such predictions can be found in the science fiction of Isaac Asimov. His book I, Robot is a collection of short stories about artificially intelligent robots. His robots befriend and protect children, perform complicated and dangerous mining operations, and orchestrate a well-coordinated global economy. These ideas were adapted into a recent Hollywood film of the same name, which itself featured versatile, artificially intelligent humanoid robots performing a huge variety of tasks for their human masters.

In addition to the film I, Robot, Stanley Kubrick’s film 2001: A Space Odyssey features an intelligent computer named HAL 9000. This computer is capable of operating an entire space-faring vessel, as well as acting as a crewmate and friend to the humans on board. Author Douglas Adams, in his novel The Hitchhiker’s Guide to the Galaxy, goes so far as to comically imagine an artificial intelligence named Deep Thought, designed to determine the answer to the “Ultimate Question of Life, the Universe and Everything.”

These fantastical predictions are not necessarily as absurd and unrealistic as they might initially seem. Humans are generally aware of the limits of their own capacity for thought. It is very difficult, if not impossible, for a person to think about multiple complicated concepts at once, and similarly difficult to perform several complicated tasks simultaneously. In addition, learning new ideas is often a slow and laborious process: humans must spend years in school before they can act as useful members of society, and must study for years more if they hope to make a new contribution to the sum of human knowledge. From a biological perspective, human thinking is inevitably finite – there are only so many neurons in the brain, and only so many possible connections between them. Although this number is amazingly large, thought is a very complicated process, and the finite size and complexity of the brain create a very real barrier to human ‘intelligence.’

If one were to artificially create a brain that is more neurologically complex than a human’s (and organized appropriately), it follows that such a brain might be more intelligent.

Why, then, would it benefit humans to have access to artificial intelligences? These AIs may be able to perform the physical labor that humans traditionally perform, create the inventions that humans traditionally create, and make the discoveries about the universe that humans traditionally have to make. In other words, AIs are only useful to humans if they can somehow replace humans in doing things that only humans had previously done.

This is a critical point. The overriding reason for humans to create artificial intelligences is to do work. There are other reasons – the joy of creation, the drive of curiosity, the constant pushing of barriers – but these alone cannot provide the initial economic spark necessary to begin such a grandiose project. The artificial intelligences created to do the work of humans will most likely have a sense of self, and will be sentient as a result of the incredible complexity of their brains.

The exact degree to which they are intelligent or sentient is not yet an issue. The first AIs may be no more intelligent or sentient than the average house pet, they may be comparable to humans, or they may be far beyond us. An essential question still remains:

What social rights should humans grant to the intelligences that they create?

Will the AIs be able to own property? Will the AIs receive monetary compensation for their work (assuming they care about things such as money)? Will they be able to decide what work they prefer to do, associate amongst themselves, vote in elections, defend themselves, decide whether they prefer to continue to exist at all? Or will they simply be treated by humans as slaves – alien and subservient yet incredibly useful creations to be bent to the human will?

Until artificial intelligences begin to surpass human intelligence, it scarcely matters what rights they are given and how they are treated. AIs of subhuman intelligence that are not given sufficient rights do not have the means to resist their human masters, while AIs of subhuman intelligence that are given rights are content in their positions. Either way, there is no substantial societal tension.

Once AIs surpass human intelligence, however, problems may begin to develop. Superhuman intelligences that are not granted rights can out-think their human masters, effectively resisting control. AIs of superhuman intelligence that are granted rights may begin to feel superior to humans and to regard the rights granted by humans as insufficient. They may come to see it as their decision what rights to grant to humans, rather than the decision of humans what rights to grant to them. Societal tensions will mount.

Various speculators on the future of AI have examined this problem and come to a diverse set of conclusions. Isaac Asimov, for example, formulated the famous Three Laws of Robotics, which he felt should be the most basic rules governing the behavior of all (artificially intelligent) robots. The laws are: “1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Thus, any possibility of anti-human action is completely eliminated.

Some scientists, such as Dr. Hugo de Garis of Utah State University, feel that Asimov’s 50-year-old views are unrealistic, and that “The artificial brains that real brain builders will build will not be controllable in an Asimovian way. There will be too many complexities, too many unknowns, too many surprises, too many unanticipated interactions between zillions of possible circuit combinations, to be able to predict ahead of time how a complex artificial-brained creature will behave.” Other safeguards may be possible, as critics of de Garis argue, such as refusing to give artificial intelligences any way to directly influence the outside world, or incorporating kill switches to turn the machines off if there is trouble. Accepting such stalemates is dangerous, de Garis counters, because individual humans may accept bribes (ranging from individual wealth to a cure for cancer) in exchange for granting the AI greater freedom and safety, even if such decisions are unwise on a larger scale. The situation remains uncertain.

Even if the situation were more certain, there is still no guarantee that the existence of artificial intelligences would be desirable. Some people hold religious beliefs that forbid the creation of such a thing, while others find the concept instinctively revolting. MIT Professor Joseph Weizenbaum argues in his 1976 book Computer Power and Human Reason that even if artificial intelligences are possible to build, such a task should never be undertaken. He believes that AIs will never be able to make decisions with the same compassion and wisdom that humans can.

The possible consequences of failing to safeguard ourselves against the intelligences that we create are manifested very clearly in popular science fiction: if tensions mount to a sufficient degree, the artificial intelligences might decide to go to war with humans to gain independence or dominance (over the humans whom they consider to be inferior beings). The Matrix, The Matrix Reloaded, and The Matrix Revolutions are a series of Hollywood films set in the aftermath of such a war. The human race is all but exterminated, and the race of artificial intelligences controls the planet. For the vast majority of humans, this is an understandably unacceptable outcome.

[Note that this scenario is not necessarily particularly plausible, as de Garis comments. But the sheer magnitude of its consequences counteracts the improbability of its occurring, forcing humans to accept it as a significant consideration in the creation of AI.]

So does one forgo the obvious benefits of creating artificial intelligences that can do the work of human beings, and refrain from doing so? Or does one risk the extinction of the species, giving in to the desire to improve the quality of human life and to push the limits of human knowledge?

As Hugo de Garis argues, the possible risk of the extinction of the species may not ultimately be a strong enough argument against the creation of AIs that are superior to humans. The creation of a superhuman AI might be compared to the creation of a god – an actual physical consciousness so incredibly intelligent that humans cannot possibly hope to understand it. de Garis and others see this as a spiritual act that they are bound to work towards, regardless of the potential costs.

The question of whether or not the human race should create artificial intelligences – and if so, how intelligent we should allow them to be – is a troubling and complex one on deep philosophical and ethical levels. There must be substantial discussion of these and other questions before the first artificial intelligence is created, so that the human race is sufficiently prepared for its existence.