Systems Theory
theoretical aspects of (social) systems theory
Curated by Ben van Lier
Rescooped by Ben van Lier from Amazing Science

First general learning system that can learn directly from experience to master a wide range of challenging tasks


The gamer punches in play after endless play of the Atari classic Space Invaders. Through an interminable chain of failures, the gamer adapts the gameplay strategy to reach for the highest score. But this is no human with a joystick in a 1970s basement. Artificial intelligence is learning to play Atari games. The Atari addict is a deep-learning algorithm called DQN.


This algorithm began with no previous information about Space Invaders—or, for that matter, the other 48 Atari 2600 games it is learning to play and sometimes master after two straight weeks of gameplay. In fact, it wasn't even designed to take on old video games; it is a general-purpose, self-teaching computer program. Yet after watching the Atari screen and fiddling with the controls for two weeks, DQN is playing at a level that would humiliate even a professional flesh-and-blood gamer.


Volodymyr Mnih and his team of computer scientists at Google, who have just unveiled DQN in the journal Nature, say their creation is more than just an impressive gamer. Mnih says the general-purpose DQN learning algorithm could be the first rung on a ladder to artificial intelligence.


"This is the first time that anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks," says Demis Hassabis, a member of Google's team. The algorithm runs on little more than a powerful desktop PC with a souped up graphics card. At its core, DQN combines two separate advances in machine learning in a fascinating way. The first advance is a type of positive-reinforcement learning method called Q-learning. This is where DQN, or Deep Q-Network, gets its middle initial. Q-learning means that DQN is constantly trying to make joystick and button-pressing decisions that will get it closer to a property that computer scientists call "Q." In simple terms, Q is what the algorithm approximates to be biggest possible future reward for each decision. For Atari games, that reward is the game score.


Knowing what decisions will lead it to the high scorer's list, though, is no simple task. Keep in mind that DQN starts with zero information about each game it plays. To understand how to maximize your score in a game like Space Invaders, you have to recognize a thousand different facts: how the pixelated aliens move, the fact that shooting them gets you points, when to shoot, what shooting does, the fact that you control the tank, and many more assumptions, most of which a human player understands intuitively. And then, if the algorithm changes to a racing game, a side-scroller, or Pac-Man, it must learn an entirely new set of facts.

That's where the second machine learning advance comes in. DQN is also built upon a vast artificial neural network, partially inspired by the human brain. Simply put, the neural network is a complex program built to process and sort information from noise. It tells DQN what is and isn't important on the screen.
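The article includes no code, but the network side can be sketched briefly. Below is a hypothetical PyTorch rendering of a DQN-style network: convolutional layers distil the raw screen into features, and fully connected layers map those features to one Q-value per action. The layer sizes follow the architecture reported in the Nature paper; the library choice and all names are assumptions for illustration.

```python
# Sketch of a DQN-style network (assumed PyTorch; layer sizes as
# reported in the Nature paper, everything else illustrative).
import torch
import torch.nn as nn

class DQN(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        # Convolutional layers filter the raw screen, separating the
        # gameplay-relevant signal (aliens, bullets, the player's tank)
        # from irrelevant pixels.
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Fully connected layers map those features to one Q-value
        # per joystick/button action.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, screens: torch.Tensor) -> torch.Tensor:
        # screens: a batch of 4 stacked 84x84 grayscale frames.
        return self.head(self.features(screens))
```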


Nature's video of the DQN AI


Via Dr. Stefan Gruenwald
Rescooped by Ben van Lier from Amazing Science

Humans With Amplified Intelligence Could Be More Powerful Than AI


With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It’s an open question as to which will come first, but a technologically boosted brain could be just as powerful — and just as dangerous — as AI.

 

As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store. Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (SAI), the human brain already presents us with a pre-existing intelligence to work with. Radically extending the abilities of a pre-existing human mind — whether it be through genetics, cybernetics or the integration of external devices — could result in something quite similar to how we envision advanced AI.

 

Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organizer of the Singularity Summit. He’s given this subject considerable thought — and warns that we need to be just as wary of IA as we are of AI. The real objective of IA, he says, is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.

 

The first step will be to create a direct neural link to information. Think of it as a "telepathic Google." The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to visualize a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of the sensory cortex, like the tactile and auditory cortices.

The third step involves the genuine augmentation of the prefrontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-MacGyvers, people who perform apparently impossible intellectual feats: mind-controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone-age human — but the possibility is real.

 

For it to be otherwise would require that there is some mysterious metaphysical ceiling on qualitative intelligence that miraculously exists at just above the human level. Given that mankind was the first generally intelligent organism to evolve on this planet, that seems highly implausible. We shouldn't expect version one to be the final version, any more than we should have expected the Model T to be the fastest car ever built.

 


Via Dr. Stefan Gruenwald
Dominic's curator insight, March 26, 2015 6:24 PM

Our brain is a powerful device with the potential to turn theories we once thought impossible into reality. This article explores the ways we humans can push past our cognitive limitations and use the power of the brain to pursue innovations we couldn't even dream of. It also explores how amplified human intelligence (IA) could become more powerful than artificial intelligence.

Rescooped by Ben van Lier from Amazing Science

‘Superorganisations’ – Learning from Nature’s Networks


Fritjof Capra, in his book ‘The Hidden Connections’, applies aspects of complexity theory, particularly the analysis of networks, to global capitalism and the state of the world, and eloquently argues the case that social systems such as organisations and networks are not just like living systems — they are living systems. The concept and theory of living systems (technically known as autopoiesis) were introduced in 1972 by Chilean biologists Humberto Maturana and Francisco Varela.

 

This is a complete version of a ‘long-blog’ written by Al Kennedy on behalf of ‘The Nature of Business’ blog and BCI: Biomimicry for Creative Innovation www.businessinspired...


Via Peter Vander Auwera, ddrrnt, Spaceweaver, David Hodgson, pdjmoo, Sakis Koukouvis, Dr. Stefan Gruenwald
Monica S Mcfeeters's curator insight, January 18, 2014 8:57 PM

A look at how to go organic with business models in a tech age...

Nevermore Sithole's curator insight, March 14, 2014 9:01 AM

Learning from Nature’s Networks


Rescooped by Ben van Lier from Amazing Science

Man vs. Machine: Will Computers Soon Become More Intelligent Than Us?


Computers might soon become more intelligent than us. Some of the best brains in Silicon Valley are now trying to work out what happens next.


Nate Soares, a former Google engineer, is weighing up the chances of success for the project he is working on. He puts them at only about 5 per cent. But the odds he is calculating aren’t for some new smartphone app. Instead, Soares is talking about something much more arresting: whether programmers like him will be able to save mankind from extinction at the hands of its own most powerful creation.


The object of concern – both for him and the Machine Intelligence Research Institute (Miri), whose offices these are – is artificial intelligence (AI). Super-smart machines with malicious intent are a staple of science fiction, from the soft-spoken HAL 9000 to the scarily violent Skynet. But the AI that people like Soares believe is coming mankind’s way, very probably before the end of this century, would be much worse.


Besides Soares, there are probably only four computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure AI remains “friendly”, says Luke Muehlhauser, Miri’s director. It isn’t unusual to hear people express big thoughts about the future in Silicon Valley these days – though most of the technology visions are much more benign. It sometimes sounds as if every entrepreneur, however trivial the start-up, has taken a leaf from Google’s mission statement and is out to “make the world a better place”.


Warnings have lately grown louder. Astrophysicist Stephen Hawking, writing earlier this year, said that AI would be “the biggest event in human history”. But he added: “Unfortunately, it might also be the last.”


Elon Musk – whose successes with electric cars (through Tesla Motors) and private space flight (SpaceX) have elevated him to almost superhero status in Silicon Valley – has also spoken up. Several weeks ago, he advised his nearly 1.2 million Twitter followers to read Superintelligence, a book about the dangers of AI, which has made him think the technology is “potentially more dangerous than nukes”. Mankind, as Musk sees it, might be like a computer program whose usefulness ends once it has started up a more complex piece of software. “Hope we’re not just the biological boot loader for digital superintelligence,” he tweeted. “Unfortunately, that is increasingly probable.”


Via Dr. Stefan Gruenwald
Rescooped by Ben van Lier from e-Xploration

Swarm intelligence - Wikipedia

Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.[1]

SI systems are typically made up of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of "intelligent" global behavior, unknown to the individual agents. Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
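The "simple local rules, no central controller" mechanism is easy to demonstrate. Below is a minimal boids-style sketch in Python; the rule weights, neighbourhood radius, and world size are arbitrary assumptions, not taken from any particular SI system. Each agent reacts only to nearby agents, yet the population as a whole begins to flock.

```python
# Minimal boids sketch: three local rules (separation, alignment,
# cohesion), no central control. All parameters are hypothetical.
import random

N, RADIUS, SPEED = 30, 10.0, 1.0

boids = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
         for _ in range(N)]

def neighbours(b):
    # Each agent sees only boids within a fixed local radius.
    return [o for o in boids
            if o is not b and (o["x"]-b["x"])**2 + (o["y"]-b["y"])**2 < RADIUS**2]

def step():
    for b in boids:
        near = neighbours(b)
        if near:
            n = len(near)
            cx = sum(o["x"] for o in near) / n    # cohesion: steer toward
            cy = sum(o["y"] for o in near) / n    # the neighbours' centre
            ax = sum(o["vx"] for o in near) / n   # alignment: match the
            ay = sum(o["vy"] for o in near) / n   # neighbours' heading
            sx = sum(b["x"]-o["x"] for o in near) # separation: avoid
            sy = sum(b["y"]-o["y"] for o in near) # crowding neighbours
            b["vx"] += 0.01*(cx - b["x"]) + 0.05*ax + 0.05*sx
            b["vy"] += 0.01*(cy - b["y"]) + 0.05*ay + 0.05*sy
        # Clamp speed so the rules stay small local adjustments.
        mag = (b["vx"]**2 + b["vy"]**2) ** 0.5 or 1.0
        b["vx"], b["vy"] = b["vx"]/mag*SPEED, b["vy"]/mag*SPEED
    for b in boids:
        b["x"] = (b["x"] + b["vx"]) % 100  # wrap around a 100x100 torus
        b["y"] = (b["y"] + b["vy"]) % 100

for _ in range(100):
    step()
```

No boid is told to flock; ordered group motion emerges from the local interactions alone, which is exactly the point of the definition above.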

The application of swarm principles to robots is called swarm robotics, while 'swarm intelligence' refers to the more general set of algorithms. 'Swarm prediction' has been used in the context of forecasting problems.



Via Sigalon, luiy