The rapid progress of robotics and artificial intelligence, with computing power doubling roughly every 18 months in line with Moore’s Law, has given rise to many far-reaching scenarios about the evolution of machine intelligence. Scientists are greatly concerned that robots may outstrip humans in many respects, and about the legal, ethical, and social implications of such a development. Given the currently limited capabilities of robots, regulation in this area remains scarce. Thus, humanity and individual nations have yet to decide whether they are ready to implement safety regulations for robotics similar to those in the automotive industry and to afford intelligent robots the same rights as sentient beings.
The Power of Robots
Ray Kurzweil, in his 2005 book The Singularity Is Near, paints a comprehensive picture of the dramatic shift he foresees in the development of humanity, triggered by the proliferation and strengthening of artificial intelligence. Pointing to the exponential growth of computing power in recent decades, which enabled a chess computer to defeat Garry Kasparov five years after he had beaten the machine in 1992, Kurzweil introduces the notion of the singularity. He defines it as “a future period during which the pace of technological change will be so rapid, its impact so deep that human life will be irreversibly transformed” (Kurzweil 2005:7).
The new age of singularity will, in Kurzweil’s belief, lead to a merger of human and non-human intelligence, opening an immense scope for human creativity. Robots will be able to change the very physical existence of humans: using their intricate technologies, they will treat people, repair damaged tissues, and perhaps even overcome death. Humans will be able to stop or dramatically slow aging and sickness, shaking off the bonds of their genetic inheritance.
The advances of the new robotic age will, according to Kurzweil, have vast positive implications. Kurzweil (2005) believes that by multiplying human intelligence trillions of times through the extensive use of artificial minds, mankind will be able to “gain power over [their] fates” (Kurzweil 2005:9). The economic problems of hunger and poverty will be solved; every conceivable product will be manufactured cheaply and quickly to specifications supplied by consumers themselves. The problem of pollution is expected to be overcome. People, no longer burdened by the problems that have beset humanity for centuries, will gain a chance to develop their creative and other abilities, multiplied by computer power.
This, however, is the optimistic perspective. In contrast, Vernor Vinge, who originated the notion of the Singularity, outlined in his 1993 NASA lecture the dangerous aspects of the Post-Human Age that is to come once the Singularity sets in. The struggle for power between humans and machines, given the machines’ far superior intellect, will inevitably end in the machines’ victory, and then “the physical extinction of the human race is one possibility” (Vinge 1993). The relationship between humans and machines will, in the author’s view, be close to that between humans and animals.
What exactly happens next will depend on the speed of technological progress and on mankind’s ability to adjust to it. Humanity’s capacity to harness the power of artificial intelligence and make it work for the enhancement of human lives will be crucial for successful interaction with new, advanced machines. An optimistic scenario seems quite realistic: looking back on history, one sees that a great number of machines our ancestors could not even imagine have been developed over time, and yet humanity is better off for their introduction. In the same way, robots can be used to improve human lives, although their use certainly raises far-reaching ethical issues.
The ethical aspects of robots worry many people, frightened by the prospect of having intelligent creatures around without any of the moral restraints present in humans. Ray Kurzweil proposes to deal with this problem of ‘runaway’ artificial intelligence by instilling human ethical systems in robot minds, modelling their “brains” after human ones. In contrast, J. Storrs Hall (2006) objects to this possibility, arguing that a shortcut exists “without that kind of safeguard”: organisations already use intricate computer systems to process vast amounts of information, relying on decisions made by their managers. Just as managers at times make immoral decisions, so the robots they control will be prone to the same kind of unethical decisions.
This is certainly a serious threat: giving unethical humans new means to achieve their immoral ends. However, this has happened throughout human history: immoral agents first received knives, then rifles, then bombs, and finally nuclear weapons. Against this background, the “moral” part of humanity (though this division is dubious at best) received the same technologies, plus ploughs, lathes, and computers. If robots inherit the nature of their creators and act as independent agents, the future is not hopeless, since they will act much as humans do, and humans have to this point displayed both positive and negative characteristics.
Considering the above claim, it is logical to suppose that robots, much like humans, will need an ethical code to guide their actions. After all, people are all influenced by some set of culturally conditioned moral values that govern their behaviour. Thus, if robots are to act as independent moral agents, as portrayed in Asimov’s I, Robot, they must be equipped with distinct codes that lead them to abide by a certain set of moral norms.
The big question here is to what degree robots can be considered independent moral agents. If they are indeed capable of making moral decisions on their own, this is a daunting possibility. Sterling (2004) points to this independence, for instance, when speaking of “autonomous weapons”. Elsewhere, he points to mental and physical augmentation, such as building “walking wheelchairs and mobile arms that manipulate and fetch” (Sterling 2004). Even though these devices certainly push past the current boundaries of technological development, they nevertheless leave the reins in human hands. True, a human mistake can cause serious damage and even deaths, but the possibility of robots acting as conscious agents still seems remote.
However, even if robotics and AI research is merely a new advanced technology, it nevertheless requires stringent regulation in human society. Mistakes with technology this advanced can indeed lead to “blundering into mechanized killing fields we would never have built by choice” (Sterling 2004). A blunder in building a wrongly programmed robot can cost humanity far more, in financial, social, and moral terms alike, than a mistake in building a simple device like a spade or a wheelchair. The responsibility of the programmers behind robot design must therefore be greater than that of members of other professions, so as to alert them to the importance of doing their work right.
Besides, as the possibility of professionals abusing their special knowledge to harm mankind cannot be excluded, measures have to be taken against such cases, similar to safety regulations in other industries. Society thus has to monitor the process of robot creation carefully and effectively.
When, and if, robots are deemed capable of acting as independent moral agents, the burden of conforming to social ethical norms should sooner or later shift to them. But obligations are impossible without rights, and hence arises the issue of whether a robot can be vested with the same rights as humans. McNally and Inayatullah (1988) argue that robots will at some point be granted rights on a par with humans.
Although today many people tend to view them as lifeless, thoughtless beings, the day will come when robots are recognized as equal to humans in mentality and emotional development. The refusal to grant robots rights, and the tendency to dismiss the very possibility, comes from the “ethnocentric and egocentric views of rights” historically developed by humanity, which accorded rights to men as opposed to natural things (McNally & Inayatullah 1988). While there is logic in this position, the issue of robot rights will likely depend directly on the ability of robots to equal humans in the mental and emotional realms. Despite far superior logical and linguistic capabilities, robots may prove inferior to humans in communication and interpersonal abilities, and robot emotions may be a far cry from human ones. In that case, robots will remain tools in the hands of people for a long time, until their creators produce them as bona fide equals of man.
The rise of robotic power foreshadowed in artificial intelligence research will undoubtedly have important consequences for mankind. In the optimistic scenario, artificial and biological intelligence merge to enrich human understanding and experience. The pessimistic prediction points to the robots’ ability to seize power over humans and control them as humans control animals today. The outcome will depend on people’s ability to instill human moral values in robots. Even if robots are no better than we are, there is no reason to forecast collapse, since humanity has survived to this day despite the presence of many unethical individuals. However, unethical actions by robots have to be restricted through adequate regulation, which may at some point embrace robot rights similar or equal to those of man.
Hall, J. Storrs. “Runaway Artificial Intelligence?” The Futurist (March-April 2006). 7 March 2006 <http://www.kurzweilai.net/meme/frame.html?main=/articles/art0638.html?m%3D1>.
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Viking Adult, 2005.
McNally, Phil and Sohail Inayatullah. “The rights of robots.” Whole Earth Review (Summer 1988). 7 March 2006 <http://www.findarticles.com/p/articles/mi_m1510/is_n59/ai_6483994>.
Sterling, Bruce. “Robots and the Rest of Us.” Wired Magazine 12.05 (May 2004). 7 March 2006 <http://www.wired.com/wired/archive/12.05/view.html?pg=4>.
Vinge, Vernor. “The Coming Technological Singularity: How to Survive in the Post-Human Era.” VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993. 7 March 2006 <http://mindstalk.net/vinge/vinge-sing.html>.