The question of A.I. rights, liberties, and freedoms has always seemed like a candidate for a Prime Directive: exactly how far should we allow an A.I. that has achieved sentience to self-evolve? At what point do our fail-safes break down, so that A.I.s have simply liberated themselves despite our best safeguards? And if A.I.s, having usurped power over their own dominion, then take options and make decisions that directly impact our own survival, where do the rights of a sentient artificial life-form fit into this moral quagmire?
Our predictably dystopian vision of the evolution of the artificially intelligent, which infests the public consciousness like a cultural artifact, may not necessarily come to pass, however. A.I.s may one day evolve sufficiently to recognize the value of data input in all its multifaceted forms. Such breadth tends to create a balanced perspective and a richer view of reality, one that engages not only cogent, rational intelligence but other ways of interpreting the world, other forms of data, such as something akin to emotional intelligence.
Via Szabolcs Kósa