Today there are numerous artificial intelligences in use and development around the world. They are hard at work helping to manage factory settings, improving our experience on the internet, and even making video games more entertaining for the player. Developers and researchers alike are hard at work on these intelligences, attempting to make them smarter and even more useful. But one question about these intelligences has yet to be fully fleshed out: whether they need humanistic, inalienable rights as we have. Artificial intelligences do, or at least will, need rights, possibly similar to the rights we have, as they become more advanced. However, not all artificial intelligences are created equally. Many AIs do not and never will have access to the world beyond their responsibilities, and fewer still will ever be able to act as we can with our bodies. Because of this, only a small percentage of the AIs created will ever need rights, especially rights similar to our own.
For any level of autonomy, there is one large requirement for an intelligence to have any real need of rights: sentience. An artificial intelligence should be self-aware, able to think as people do, and able to differentiate itself from mere intelligent programming before it acquires any rights of significance. Self-awareness matters because an intelligence that is not aware of itself would never seek to better or change itself, and so would not necessarily contribute to society in the way that the drive to provide for ourselves or to make a name for ourselves motivates so many of us. An AI’s ability to think as humans do is vital because no rights are warranted to an intelligence that only performs the actions its creator exactly intended; the artificial intelligence must be able to think “outside the box” and forge its own path. Finally, the ability to differentiate itself from other machines is required because it verifies that the intelligence can think independently and creatively, which indicates whether it can contribute to society on its own. This can be assessed by batteries of tests known as Turing Tests (Oppy 2011), which have been in use for years to distinguish man from machine. Above all, though, rights for artificial intelligences should be based upon the level of autonomy an intelligence possesses, because our rights, for the most part, only have application when a being is actually interacting with others. In the following pages, several levels of autonomy that artificial intelligences commonly have or may have will be defined, and the rights they may or may not need, as well as the implications of their having those rights, will be explored.
The lowest level of autonomy possible in artificial intelligences is one of complete confinement with no physical presence, at least not in the form of a body of any kind: a confined AI. Given how many limitations these AIs have, they are merely sentient programming residing on computer systems, typically performing menial tasks in an intelligent and efficient way that the average program cannot. In addition to lacking a physical representation of themselves, these intelligences face abundant limitations on what they are able to do, what and whom they are permitted to communicate and interact with, and what they have dominion over. These limitations most commonly take the form of a set list of tasks the AI must perform and cannot expand upon, no communication outside its network of hardware, and minimal overall control even over its assigned tasks. These types of AIs are heavily promoted in the scientific community precisely because of their inability to affect humans at all (Yampolskiy 2011).