Systems Theory
theoretical aspects of (social) systems theory
Curated by Ben van Lier
Rescooped by Ben van Lier from Amazing Science

Man vs. Machine: Will Computers Soon Become More Intelligent Than Us?


Computers might soon become more intelligent than us. Some of the best brains in Silicon Valley are now trying to work out what happens next.


Nate Soares, a former Google engineer, is weighing up the chances of success for the project he is working on. He puts them at only about 5 per cent. But the odds he is calculating aren’t for some new smartphone app. Instead, Soares is talking about something much more arresting: whether programmers like him will be able to save mankind from extinction at the hands of its own most powerful creation.


The object of concern – both for him and the Machine Intelligence Research Institute (Miri), whose offices these are – is artificial intelligence (AI). Super-smart machines with malicious intent are a staple of science fiction, from the soft-spoken Hal 9000 to the scarily violent Skynet. But the AI that people like Soares believe is coming mankind’s way, very probably before the end of this century, would be much worse.


Besides Soares, there are probably only four computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure AI remains “friendly”, says Luke Muehlhauser, Miri’s director. It isn’t unusual to hear people express big thoughts about the future in Silicon Valley these days – though most of the technology visions are much more benign. It sometimes sounds as if every entrepreneur, however trivial the start-up, has taken a leaf from Google’s mission statement and is out to “make the world a better place”.


Warnings have lately grown louder. Astrophysicist Stephen Hawking, writing earlier this year, said that AI would be “the biggest event in human history”. But he added: “Unfortunately, it might also be the last.”


Elon Musk – whose successes with electric cars (through Tesla Motors) and private space flight (SpaceX) have elevated him to almost superhero status in Silicon Valley – has also spoken up. Several weeks ago, he advised his nearly 1.2 million Twitter followers to read Nick Bostrom's Superintelligence, a book about the dangers of AI, which has made him think the technology is “potentially more dangerous than nukes”. Mankind, as Musk sees it, might be like a computer program whose usefulness ends once it has started up a more complex piece of software. “Hope we’re not just the biological boot loader for digital superintelligence,” he tweeted. “Unfortunately, that is increasingly probable.”


Via Dr. Stefan Gruenwald
Rescooped by Ben van Lier from Amazing Science

Can Life Evolve from Wires and Plastic?


In a laboratory tucked away in a corner of the Cornell University campus, Hod Lipson’s robots are evolving. He has already produced a self-aware robot that is able to gather information about itself as it learns to walk.

 

Hod Lipson reports: "We wrote a trivial 10-line algorithm, ran it on a big gaming simulator, put it on a big computer and waited a week. In the beginning we got piles of junk. Then we got beautiful machines. Crazy shapes. Eventually a motor connected to a wire, which caused the motor to vibrate. Then a vibrating piece of junk moved infinitely better than any other… eventually we got machines that crawl. The evolutionary algorithm came up with a design, blueprints that worked for the robot."
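The loop Lipson describes – generate random designs, score them in a simulator, keep and mutate the best – can be sketched in miniature. Everything below is an illustrative assumption, not his actual code: the "genome" is just a list of numbers standing in for a robot blueprint, and the fitness function is a toy stand-in for the physics simulator.

```python
import random

GENOME_LEN = 10   # genes per design (hypothetical blueprint encoding)
POP_SIZE = 30
GENERATIONS = 50

def fitness(genome):
    # Toy stand-in for simulated travel distance: rewards designs
    # whose parts are large and coordinated (values close together).
    return sum(genome) - (max(genome) - min(genome))

def mutate(genome, rate=0.2):
    # Randomly perturb a fraction of the genes.
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Splice two parent blueprints at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve():
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]        # selection: keep the best half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)  # fixed seed so the run is reproducible
best = evolve()
```

In Lipson's lab the fitness function was a full physics simulation and the genome encoded rods, joints and motors, but the selection-mutation-crossover skeleton is the same; the "piles of junk" he mentions are simply the early generations before selection has anything good to amplify.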

 

The computer-bound creature was transferred from the virtual domain to our world by way of a 3D printer. And then it took its first steps. Was this arrangement of rods and wires the machine-world’s equivalent of the primordial cell? Not quite: Lipson’s robot still couldn’t operate without human intervention. ‘We had to snap in the battery,’ he told me, ‘but it was the first time evolution produced physical robots. Eventually, I want to print the wires, the batteries, everything. Then evolution will have so much freedom. Evolution will not be constrained.’

 

Not many people would call creatures bred of plastic, wires and metal beautiful. Yet to see them toddle deliberately across the laboratory floor, or bend and snap as they pick up blocks and build replicas of themselves, brings to mind the beauty of evolution and animated life.

 

One could imagine Lipson’s electronic menagerie lining the shelves at Toys R Us, if not the CIA, but they have a deeper purpose. Lipson hopes to illuminate evolution itself. Just recently, his team provided some insight into modularity—the curious phenomenon whereby biological systems are composed of discrete functional units.

 

Though inherently newsworthy, the fruits of the Creative Machines Lab are just small steps along the road towards new life. Lipson, however, maintains that some of his robots are alive in a rudimentary sense. ‘There is nothing more black or white than alive or dead,’ he said, ‘but beneath the surface it’s not simple. There is a lot of grey area in between.’

 

The robots of the Creative Machines Lab might fulfill many criteria for life, but they are not completely autonomous—not yet. They still require human handouts for replication and power. These, though, are just stumbling blocks, conditions that could be resolved some day soon—perhaps by way of a 3D printer, a ready supply of raw materials, and a human hand to flip the switch just the once.

 

According to Lipson, an evolvable system is ‘the ultimate artificial intelligence, the most hands-off AI there is, which means a double edge. All you feed it is power and computing power. It’s both scary and promising.’ What if the solution to some of our present problems requires the evolution of artificial intelligence beyond anything we can design ourselves? Could an evolvable program help to predict the emergence of new flu viruses? Could it create more efficient machines? And once a truly autonomous, evolvable robot emerges, how long before its descendants make a pilgrimage to Lipson’s lab, where their ancestor first emerged from a primordial soup of wires and plastic to take its first steps on Earth?


Via Dr. Stefan Gruenwald
Rescooped by Ben van Lier from Amazing Science

Humans With Amplified Intelligence Could Be More Powerful Than AI


With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It’s an open question as to which will come first, but a technologically boosted brain could be just as powerful, and just as dangerous, as AI.

 

As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store. Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (SAI), the human brain already presents us with a pre-existing intelligence to work with. Radically extending the abilities of a pre-existing human mind — whether it be through genetics, cybernetics or the integration of external devices — could result in something quite similar to how we envision advanced AI.

 

Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organizer of the Singularity Summit. He’s given this subject considerable thought — and warns that we need to be just as wary of IA as we are of AI. The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.

 

The first step will be to create a direct neural link to information. Think of it as a "telepathic Google." The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to visualize a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of the sensory cortex, like the tactile cortex and auditory cortex. The third step involves the genuine augmentation of the prefrontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-MacGyvers, people who perform apparently impossible intellectual feats. For instance, mind-controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone age human — but the possibility is real.

 

For it to be otherwise would require that there is some mysterious metaphysical ceiling on qualitative intelligence that miraculously exists at just above the human level. Given that mankind was the first generally intelligent organism to evolve on this planet, that seems highly implausible. We shouldn't expect version one to be the final version, any more than we should have expected the Model T to be the fastest car ever built.

 


Via Dr. Stefan Gruenwald
Dominic's curator insight, March 26, 2015 6:24 PM

Our brain is a powerful organ with the potential to turn ideas we once thought impossible into reality. This article explores ways that we humans might overcome our cognitive limitations and harness the power of the brain to pursue innovations we couldn't even dream of. It also considers how amplified human intelligence (IA) could become more advanced than unaided human intelligence.