The fundamental problem with superintelligence is that you cannot predict how it will behave, and it might behave in ways detrimental to humanity. One approach to this problem is to try to code benevolence into the AI's operating system; MIRI (the Machine Intelligence Research Institute), based in Berkeley, is working on exactly this kind of benevolent AI. But relying on this strategy carries real risk. When you picture AIs that are ultimately millions of times more intelligent than humans, it is risky to assume that the simple ethical principles we can come up with today would constrain a superintelligent being. And the principles may not survive at all, since such AIs will presumably tinker with their own operating systems.