Ethan Henderson edited For_any_level_of_autonomy__.tex  over 8 years ago

Commit id: 109ae4e7531f0d4ed29005ab0d6d291b136e1795

For any level of autonomy controlled by an artificial intelligence, however, there is one overarching requirement before it has any real need of rights, especially as many rights as we maintain, and that requirement is this: sentience. An artificial intelligence should be self-aware, able to think as we do, and distinguishable from merely intelligent programming before it acquires any rights of significance. Self-awareness matters to the question of rights because an intelligence that is not aware of itself would never seek to better or change itself, and therefore would not necessarily contribute to society, when what drives so many of us is the desire to provide for others or to make a name for ourselves. An AI's ability to think as we do, as humans, is vital because no rights are warranted to an intelligence that only performs the actions its creator exactly intended; the artificial intelligence must be able to think "outside the box" and forge its own path. Finally, being distinguishable from other machines is required because it verifies the intelligence's capacity for independent and creative thought, which indicates whether the intelligence can contribute to society on its own. Above all, though, rights for artificial intelligences should be based on their level of autonomy, because our rights, for the most part, only have real application when a being is actually interacting with others.