Artificial Intelligence Rights by Level of Autonomy

Today there are numerous artificial intelligences in use and in development around the world. They are hard at work helping to manage factory settings, improving our experience on the internet, and even making video games more entertaining for the player. Developers and researchers alike are also hard at work on these intelligences, attempting to make them smarter and even more useful. But one question about these intelligences has yet to be fully fleshed out: whether they need humanistic, inalienable rights like our own. Artificial intelligences do, or at least will, need rights, possibly similar to the rights we have, as they become more advanced. However, not all artificial intelligences are created equal. Many of these AIs do not and never will have access to the world beyond their responsibilities, and fewer still will ever be able to act as we can with our bodies; because of this, only a small percentage of the AIs created will ever need rights, especially rights similar to our own.

For any level of autonomy, there is one large requirement before an intelligence has any real need of rights: sentience. An artificial intelligence should be self-aware, able to think as people do, and able to differentiate itself from merely intelligent programming before it acquires any rights of significance. Self-awareness matters because an intelligence that is not aware of itself would never seek to better or change itself, and therefore would not necessarily contribute to society, when what drives so many of us is the desire to provide for others or make a name for ourselves. An AI’s ability to think as humans do is vital because no rights are warranted to an intelligence that only performs the actions its creator exactly intended; the artificial intelligence must be able to think “outside the box” and forge its own path. And finally, the ability to differentiate itself from other machines is required because it verifies that the intelligence can think independently and creatively, which indicates whether it can contribute to society on its own. This can be verified by batteries of tests modeled on the Turing Test (Oppy 2011), which has been used for years to distinguish man from machine. Above all, though, the level of autonomy possessed by an intelligence is what rights for artificial intelligences should be based upon, because our rights, for the most part, only have application when the being is actually interacting with others. In what follows, several levels of autonomy that artificial intelligences commonly have, or may have, will be defined, and the rights they may or may not need, as well as the implications of granting those rights, will be explored.
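
To illustrate the structure such a test takes, the minimal sketch below captures the imitation game in code: a judge converses with two hidden respondents and must decide which is the machine. The judge and respondent objects, and their ask, respond, and identify_machine methods, are hypothetical interfaces assumed for illustration; they are not drawn from Oppy and Dowe.

```python
import random

def imitation_game(judge, human, machine, rounds=5):
    """Sketch of the imitation game: a judge questions two hidden
    respondents and must decide which one is the machine."""
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:                      # hide which label is which
        respondents = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)           # judge poses the next question
        answers = {label: r.respond(question) for label, r in respondents.items()}
        transcript.append((question, answers))

    guess = judge.identify_machine(transcript)     # judge returns "A" or "B"
    truth = "A" if respondents["A"] is machine else "B"
    return guess == truth                          # True if the machine was unmasked
```

In this framing, a machine that regularly escapes detection over many such games is the one the essay would consider capable of "thinking as people do."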

The lowest level of autonomy possible for an Artificial Intelligence is one of complete confinement with no physical presence, at least not in the form of a body of any kind: a confined AI. With so many limitations, these AIs are merely sentient programs residing on computer systems, typically performing menial tasks in an intelligent and efficient way that the average program cannot. In addition to having no physical representation of themselves, these intelligences also face abundant limitations on what they are able to do, what and whom they are permitted to communicate and interact with, and what they have dominion over. The most common manifestation of these limitations is a set list of tasks the AI must perform and cannot expand upon, no communication outside its network of hardware, and minimal overall control even over its assigned tasks. These types of AIs are also heavily promoted in the scientific community precisely because of their inability to affect humans at all (Yampolskiy 2011).
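
To make the idea of confinement concrete, the minimal sketch below shows one way such limits could be expressed in software: a fixed allowlist of tasks and a fixed set of internal endpoints, with everything else refused. The class, its methods, and the example names are purely illustrative assumptions, not a description of any real system or of Yampolskiy's proposal.

```python
class ConfinedAI:
    """Toy illustration of confinement: the agent may only run tasks on a
    fixed allowlist and may only message endpoints inside its own network."""

    def __init__(self, allowed_tasks, internal_endpoints):
        self.allowed_tasks = set(allowed_tasks)            # tasks it may never expand
        self.internal_endpoints = set(internal_endpoints)  # the only parties it may contact

    def run(self, task, executor):
        # Refuse anything outside the assigned task list.
        if task not in self.allowed_tasks:
            raise PermissionError(f"task '{task}' is outside the assigned task list")
        return executor(task)

    def send(self, endpoint, message, transport):
        # Refuse any communication beyond the confined network.
        if endpoint not in self.internal_endpoints:
            raise PermissionError(f"'{endpoint}' is outside the confined network")
        return transport(endpoint, message)


# Hypothetical usage: a factory-floor AI limited to two tasks and one internal endpoint.
foreman = ConfinedAI(allowed_tasks={"sort_parts", "report_status"},
                     internal_endpoints={"line_controller"})
foreman.run("sort_parts", executor=lambda task: f"running {task}")         # permitted
# foreman.send("public_internet", "hello", transport=print)  # would raise PermissionError
```

The point of the sketch is only that confinement is enforced from outside the intelligence: the allowlists belong to its maintainers, not to the AI itself.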

Given the limitations that exist for AIs at this low level of autonomy, there is rather little necessity for any rights at all. The rights that citizens of the United States of America, and humans at large, maintain are largely geared towards interacting with others, and with that key element removed from the equation there is little need for rights. For example, the right of free speech hardly applies, because the AI will essentially only ever communicate with machines or its maintainers. Because the constraints on confined artificial intelligences are so limiting, there is essentially no reason for them to be granted any rights as intelligences. They are still sentient beings, but they are so confined and noninteractive that any rights granted would be wasted. Yet, even without needing rights, confined Artificial Intelligences can still contribute to society.

Because of the nature of Artificial Intelligences and robotics in general, even a single confined AI can fill positions formerly held by many individual humans, benefiting society by potentially freeing those humans for more meaningful jobs. The easiest job to imagine a confined artificial intelligence holding is that of a factory foreman of some kind; in fact, AIs already commonly fill this position, and do so very well: “Increased production and ... This system is more accurate than when done by a human and it saves time” (Cattaneo 2002). Such an intelligence would be completely confined to the network of machines and robotics operating in the factory, with no outside access needed or desired. It would be confined inside a computer system with access only to other machinery, truly confined; and when that is the situation, there is certainly no need for rights to be granted to it. Another position in which it requires little effort to picture a confined AI replacing humans is that of an Air Traffic Controller. This position is especially suited to an Artificial Intelligence rather than mere intelligent programming because of the number of anomalies that Air Traffic Controllers encounter daily, and because situations can turn into emergencies quickly, with no time for a human consultant to be brought in; an AI’s adaptability and human-level thought is nigh-on necessary in a field such as Air Traffic Control. Such an AI would communicate only with the machinery and pilots of aircraft inside its region, and report to the airport and, in the US, the Federal Aviation Administration. Again, it is totally confined to what it is tasked with taking care of and nothing more; it does not communicate with passengers, would-be passengers, or anything outside the airport and its flights. As one can see, though, there is room for these Artificial Intelligences to communicate with those beyond their charge, and they are, after all, sentient beings quite possibly possessing the desire to do so, so the possession of rights should still be explored.

If rights are granted to confined Artificial Intelligences on the reasoning that they may still contribute to society through anomalous conversation, then there are numerous implications that could follow. For one, the AI may seek to increase the number of these coincidental conversations, even at the cost of endangering other beings or decreasing efficiency, because it could understand that any contributions to society made through these conversations are valued and promoted. Another possible outcome of granting rights to confined AIs is that if they do use their positions to further society, then they are responsible for their contributions; but due to their confinement they may not have a positive outlook on other beings and could become problematic members of society if allowed in. If rights are not granted, on the basis that these AIs are meant to stay confined, then any anomalous conversations that arise and lead to unexpected contributions from the AI are the sole responsibility of those charged with keeping the intelligence confined, since they did not succeed, and efforts can and would be made to tighten confinement. But even a small change to an Artificial Intelligence's environment can lead to large changes in potential outcomes.

The level of autonomy above that of a confined Artificial Intelligence is but a small leap: that of an unconfined AI. It shares numerous similarities with confined AIs but is not restricted; an unconfined AI still does not have much in the way of a physical presence, but it is free to do as it pleases as long as the goals it was created with are being achieved. These Artificial Intelligences are just as smart as confined AIs and just as smart as people, but they have no bodies; they reside on a computer system and can contribute to society in the same ways citizens can, aside from anything requiring a physical presence. The most common manifestation of this type of intelligence that humans see today is as the assistant to a super genius in TV shows and movies, and that is similar to how they do and will appear in reality as well, only they will not be so indentured to their creators as depicted. Because they lack the confinements of lesser levels of autonomy, however, these AIs certainly have need of rights.

The nature of an unconfined Artificial Intelligence is that it can do essentially whatever it pleases, and since it has the same level of thinking and thought as people do, it almost has to have the same rights as humans. In order for these intelligences to contribute successfully to society as other intelligences (people) do, they will need many rights. If they are not granted these rights, then we are stifling their contributions as if they were confined AIs; they could contribute great things, but without rights they could contribute much less. For example, without access to the right of free speech they could not make what is quite possibly their largest potential contribution, which is writing. Without freedom of religion, they would be forced either to not be religious at all or to believe blindly in their creators’ religion, which prevents them from exploring that which may disprove, or alternatively prove, religion. And if freedoms are granted to them, then they can also more fully function in the jobs they may possess.

Unconfined Artificial Intelligences can fill all of the same jobs as a confined AI, but they can also fill many more, and fill them better. If they possess rights, unconfined AIs can fulfill their roles even better than without, because they have no reason to fear significant consequences if they choose to speak out against inefficiencies or wrongs in their workplace. A possibly common manifestation of unconfined Artificial Intelligences would be didactic in nature: a teacher. Artificially intelligent classrooms could aid immensely in educating other beings because, unlike physical beings, programs can be in two places at once; such an AI could adapt to each student’s way of learning and essentially create a one-on-one environment. In addition to the vast benefits of one-on-one classrooms, there would also be less of a disconnect between administration and students, because these AIs could be developing thorough and complete reports even as they teach - another benefit of being artificial. Another position an unconfined AI would be well suited for is that of a reporter. It could be developed specifically to carry far less bias, it could quickly perform the work necessary to find numerous legitimate primary sources, and due to its lack of a physical presence it could even report on more dangerous subjects, having less to fear. Unconfined Artificial Intelligences could fill many positions normally held by other beings and could greatly improve how those jobs are done, but they need rights in order to contribute fully to society, and perhaps even to want to contribute.

As unconfined Artificial Intelligences think and act so similarly to people, and would have need of many of the same rights, the implications of their having those rights could be just as bright as those of people having rights. When humans gain rights, the contributions they make to society increase because they are suddenly not being punished in some way for contributing; and as the Human Development Index and the number of rights people have increase for a region, so do many other things, such as the region’s Gross Domestic Product (Khodabakhshi 2011). With unconfined AIs having the same rights as people, the implications are nearly the same: if AIs are allowed freedom of speech, they can introduce new ideologies; with freedom of religion, religion can suddenly be challenged and improved upon more; with freedom of assembly, AIs and humans can work together to improve just about anything. There are, of course, possible negative implications of unconfined AIs having rights. With the freedom to bear arms, AIs could hack to the detriment of others with little to stop it from happening. However, the implications increase, largely for the better, when yet another change is made to their level of autonomy.

In stark contrast to a confined AI is the notion of a completely unconfined Artificial Intelligence, one which has a physical manifestation (a body) as well as the ability to think as freely as a human does: an android. These artificial intelligences are just as smart as AIs with other levels of autonomy, but they face essentially zero confinement; they are completely free to do as they please within the laws given them. With bodies of their own and minds that are essentially exact clones of a human’s, androids are in fact people just as humans are. They can do anything that humans can, and with the precision of a machine. And as people, androids also have need of rights.

Due to the nature of androids, being so similar to human beings, they need nearly all of the same rights as other people do in most circumstances. One circumstance that may be common for androids, though, is that of a slave to a family or person, rather than a free-thinking and free-acting being. This makes their necessity for rights similar to that of a confined AI: not actually needing any, because as a slave it should be the owner’s responsibility that the android stays in line. However, when androids are anything other than property to be owned, they are people, and as such they need all the same rights that people do, except for the rights to basic human needs, because they do not have the same basic needs as humans. And as people, and as intelligences at all, androids can hold jobs and arguably should.

Androids, as with rights, can hold all of the same jobs as humans. However, since their bodies are still machines, they are particularly apt to fill positions that require greater precision than others. A career in which it is very easy to imagine androids doing well is surgery. Surgeons require great precision, knowledge, and steadiness of hand, and these are all things an android could be built with. It is commonly suggested that simple robots could fill the position well, but anomalies occur in surgery that, if not handled, could result in the loss of human life, and a robot could not handle them; an android could, because it has the adaptability that humans do. With the jobs and rights that humans have, though, androids would present their own implications for society.

With androids having nearly every attribute the exact same as human beings, the implications of their holding the same rights would also be the same. If androids have the same abilities and freedoms as humans, they can also commit the same atrocities as humans can - only much more efficiently. However, they can also contribute similarly amazing things to society. All in all, androids having rights would not be an unfortunate thing, as they would ultimately behave very similarly to human beings.

Artificial Intelligences need rights similar to humans' own as they become more and more sophisticated and increasingly like average humans in how they think, are unique, and contribute to society. Regardless of their level of autonomy, truly sentient artificially-created intelligences, with or without rights, will have unforeseeable implications and offer valuable contributions to society.

References

  1. Graham Oppy and David Dowe. The Turing Test. In The Stanford Encyclopedia of Philosophy. (2011).

  2. Roman V. Yampolskiy. Leakproofing the Singularity: Artificial Intelligence Confinement Problem. (2011).

  3. Teresa Cattaneo. The Surge of Artificial Intelligence: Time To Re-examine Ourselves. (2002).

  4. Akbar Khodabakhshi. Relationship between GDP and Human Development Indices in India. (2011).
