There has been much speculation about the future of humanity in the face of super-humanly intelligent machines. Most dystopian scenarios seem to be driven by the plain fear of entities arising that could be smarter and stronger than we are.
After all, how are we supposed to know what goals the machines will be driven by? Is it possible to have “friendly” AI? If we attempt to turn them off, will they care? Would they care about their own survival in the first place? There is no a priori reason to assume that intelligence necessarily implies any goals at all, such as survival and reproduction.
But, despite being rather an optimist otherwise, I have been led by some seemingly convincing thoughts to the conclusion that there is such a reason, and that we can reasonably expect these machines to pose a potential threat to us. The reason is, as I will argue, that the evolutionary process that created us and the rest of the living world will continue to apply to future intelligent machines. Just as this process has instilled the urge for survival and reproduction in us, it will do so in the machines as well.
Via Louie Helm