Systems Theory
theoretical aspects of (social) systems theory
Curated by Ben van Lier
Rescooped by Ben van Lier from Tracking the Future

Can we build an artificial superintelligence that won't kill us?


At some point in our future, an artificial intelligence will emerge that's smarter, faster, and vastly more powerful than us. Once this happens, we'll no longer be in charge. But what will happen to humanity? And how can we prepare for this transition? 


Via Szabolcs Kósa
Rescooped by Ben van Lier from Tracking the Future

The Future of Artificial Intelligence


Robots are here to stay. They will be smarter, more versatile, more autonomous, and more like us in many ways. We humans will need to adapt to keep up.


Via Szabolcs Kósa
luiy's curator insight, March 25, 2013 5:36 PM
New technologies, new moralities

Religious and other organizations will define and attempt to regulate the ways in which humans treat humanoid robots, since they will be considered quasi-human, sentient creatures that must be treated with respect and not abused. Thus, the changing legal and social framework will deal with the proper use of robots by humans as well as the proper behavior of robots toward humans, and new sets of “post-Asimov” laws will emerge.

 

Finally, a few concluding thoughts. The rapid increase in the number and sophistication of autonomous systems, including humanoid robots, will lead to dramatic changes in society. Robots will assume an increasing share of human work and responsibility, creating major social problems around unemployment and the relationship between humans and robots. I believe that new frameworks for these interactions will emerge within the next 25 to 50 years. If they do not, there may be neo-Luddite rebellions, in which humans attempt to destroy large numbers of robots. Those of us who design, program, and implement robots have a major responsibility to assist in creating and implementing the patterns of behavior and legal systems needed to ensure that robots and humans co-evolve and co-exist for the benefit of society.

 

Robots are here to stay. They will be smarter, more versatile, more autonomous, and more like us in many ways. We humans will need to adapt to this coming world.

Rescooped by Ben van Lier from Man and Machine

Artificial Intelligence, Robots & humans: A Cyberpsychological perspective


Constant advancements in computing power and machine learning algorithms, along with breakthroughs in related technologies, are setting the interaction between humans and computers on a road where, sometime in the near future, advanced Artificial Intelligences (A.I.) will engage with people in many meaningful ways.

The possibility of a machine with consciousness raises many philosophical, psychological and sociological questions about the nature of consciousness itself and what it really means to be intelligent. The computational modelling of human cognitive abilities can play a significant role in the advancement of cognitive psychology, giving a better understanding of people’s own intelligence. Going from natural to Artificial Intelligence, there are many challenges and risks to be met, but also great opportunities.


Via Szabolcs Kósa, Martin Talks
Rescooped by Ben van Lier from Tracking the Future

Will super-human artificial intelligence (AI) be subject to evolution?


There has been much speculation about the future of humanity in the face of super-humanly intelligent machines. Most of the dystopian scenarios seem to be driven by plain fear that entities arise that could be smarter and stronger than us. After all, how are we supposed to know which goals the machines will be driven by? Is it possible to have “friendly” AI? If we attempt to turn them off, will they care? Would they care about their own survival in the first place?


Via Szabolcs Kósa
Rescooped by Ben van Lier from Tracking the Future

Beyond Asimov: the struggle to develop a legal framework for robots


"Robots are no longer science fiction, as they have left the factory and are arriving in our homes," says Salvini from the BioRobotics Institute at the Scuola Superiore Sant'Anna (SSSA) in Pisa, Italy. And Asimov's Three Laws simply aren't sufficient.
As part of the unique EU-backed €1.5 million RoboLaw Project, Salvini is managing a team of roboticists, lawyers and philosophers (yes, philosophers) from a consortium of European universities, who are working hard to come up with proposals for the laws and regulations needed to manage emerging robotics technologies in Europe, in time to present them to the European Commission a year from now. The consortium comprises the University of Tilburg (the Netherlands), the Humboldt University of Berlin, the University of Reading and the SSSA.


Via Szabolcs Kósa
Sophie Martin's curator insight, March 19, 2013 5:19 AM

 

What laws, and how do we define them, for such an inconceivable object... thing... person? A few notes:
"I can't define a robot, but I know one when I see one" (Joseph Engelberger, one of the fathers of robotics).
The list, says Salvini, takes into account autonomous robots, including neurorobotics -- robots controlled via a brain-computer interface -- and service robots that operate in the home, in cities and in other public roles.
"These are exactly the kind of problems that roboticists will struggle with, as while they need to test their robots outside of the laboratory they are not always good at dealing with the social and legal environment."
After all, some schools of thought see robots as autonomous individuals with the same or comparable rights as those of humans. "Or how do you actually describe a robot? You can address it like an animal or pet, but if your dog attacks someone then you are liable."
A key issue is the lack of public awareness and debate about these issues. "So many people see our research as 'science fiction work', although we are working mainly on problems society is facing right now," explains Beck, adding that it is necessary to inform society about the existing research -- often taking place behind closed doors -- and its potential applications.
"After all, lawyers cannot answer questions for society." Society has first to decide which robots it wants to accept, which risks it wants to take, and who should be responsible for damages caused by robots, she warns.
Etc., etc.
Rescooped by Ben van Lier from Tracking the Future

When the Turing Test is not enough: Towards a functionalist determination of consciousness and the advent of an authentic machine ethics


Empirical research that works to map those characteristics requisite for the identification of conscious awareness is proving increasingly insufficient, particularly as neuroscientists further refine functionalist models of cognition. To say that an agent "appears" to have awareness or intelligence is inadequate. Rather, what is required is the discovery and understanding of those processes in the brain that are responsible for capacities such as sentience, empathy and emotion. Subsequently, the shift to a neurobiological basis for identifying subjective agency will have implications for those hoping to develop self-aware artificial intelligence and brain emulations. The Turing Test alone cannot identify machine consciousness; instead, computer scientists will need to work from the functionalist model and be mindful of those processes that produce awareness. Because the potential to do harm is significant, an effective and accountable machine ethics needs to be considered. Ultimately, it is our responsibility to develop a rigorous understanding of consciousness so that we may identify and work with it once it emerges.


Via Szabolcs Kósa
Hakushi Hamaoka's curator insight, August 2, 2015 11:20 AM

Consciousness is the residue of our mundane examinations of reality: its substantive aspects, contexts and evaluative appropriateness.