Today there are numerous artificial intelligences in use and in development around the world. They are hard at work helping to manage factory settings, improving our experience on the internet, and even making video games more entertaining for the player. Developers and researchers alike are hard at work on these intelligences, attempting to make them smarter and even more useful. But one question about these intelligences has yet to be fully fleshed out: whether they need humanistic, inalienable rights such as we have. Artificial intelligences do, or at least will, need rights, possibly similar to our own, as they become more advanced. However, not all artificial intelligences are created equally. Many AIs do not and never will have access to the world beyond their responsibilities, and fewer still will ever be able to act as we can with our bodies; because of this, only a small percentage of the AIs created will need rights at all, especially rights similar to our own.
For any level of autonomy, there is one large requirement before an intelligence has any real need of rights: sentience. An artificial intelligence should be self-aware, able to think as people do, and able to differentiate itself from merely intelligent programming before it acquires any rights of significance. Self-awareness matters to the question of rights because an intelligence that is not aware of itself will never seek to better or change itself, and therefore will not necessarily contribute to society, when what drives so many of us is the desire to provide or to make a name for ourselves. The ability to think as humans do is vital because no rights are warranted to an intelligence that only performs the actions its creator exactly intended; the artificial intelligence must be able to think “outside the box” and forge its own path. Finally, the ability to differentiate itself from other machines is required because it verifies independent and creative thought, which indicates whether the intelligence can contribute to society on its own; this can be assessed by batteries of tests known as Turing Tests (Oppy 2011), which have been used for years to distinguish man from machine. Above all, though, rights for artificial intelligences should be based on the level of autonomy an intelligence possesses, because our rights, for the most part, only apply when the being is actually interacting with others. In the following pages, several levels of autonomy that artificial intelligences commonly have, or may have, will be defined, and the rights they may or may not need, as well as the implications of their having those rights, will be explored.
The lowest level of autonomy possible in artificial intelligences is one of complete confinement and no physical presence, at least not in the form of a body of any kind: a confined AI. Given how many limitations these AIs have, they are merely sentient programming residing on computer systems, typically performing menial tasks in an intelligent and efficient way that the average program cannot. In addition to having no physical representation of themselves, these intelligences also face abundant limitations on what they are able to do, what and whom they are permitted to communicate and interact with, and what they have dominion over. Most commonly, these limitations take the form of a set list of tasks the AI must perform and cannot expand upon, no communication outside its network of hardware, and minimal overall control even over its assigned tasks. AIs of this type are heavily promoted in the scientific community precisely because of their inability to affect humans at all (Yampolskiy 2011).
Because of the nature of artificial intelligences and robotics in general, even a single confined AI can fill positions formerly held by many individual humans, thereby benefiting society by potentially freeing those humans for more meaningful jobs. The easiest job to imagine a confined artificial intelligence filling is that of a factory foreman of some kind; in fact, AIs already commonly fill this position, and do so very well: “Increased production and ... This system is more accurate than when done by a human and it saves time” (Cattaneo 2002). Such an AI is completely confined to the network of machines and robotics operating in a factory, with no outside access needed or desired. It is confined inside a computer system with access only to other machinery, truly confined; and when this is the situation, there is certainly no need for rights to be granted to it. Another position in which a confined AI could easily replace humans is that of an air traffic controller. This position is especially suited to an artificial intelligence, rather than mere intelligent programming, because of the anomalies air traffic controllers encounter daily, and because situations can turn into emergencies quickly, with no time for a human consultant to be brought in; an AI’s adaptability combined with a human level of thought is nigh-on necessary in a field such as air traffic control. Such an AI would communicate only with the machinery and pilots of aircraft inside its region, and would report to the airport and, in the US, to the Federal Aviation Administration. Again, it is totally confined to what it is tasked with taking care of and nothing more; it does not communicate with passengers, would-be passengers, or anyone outside the airport and its flights.
As one can see, though, there is room for these artificial intelligences to communicate with those beyond the ones they are tasked over, and they are, after all, sentient beings that may well desire such communication, so the possession of rights should still be explored.