Futurology
Rescooped by Pierre Duchaine from Amazing Science

A Wikipedia for robots allowing them to share knowledge and experience worldwide


European scientists from six institutes and two universities have developed an online platform where robots worldwide can learn new skills from each other, a kind of “Wikipedia for robots.” The objective is to help develop robots that are better at assisting the elderly with care and household tasks. “The problem right now is that robots are often developed specifically for one task,” says René van de Molengraft, TU/e researcher and RoboEarth project leader.


“RoboEarth simply lets robots learn new tasks and situations from each other. All their knowledge and experience are shared worldwide on a central, online database.” In addition, some computing and “thinking” tasks can be carried out by the system’s “cloud engine,” he said, “so the robot doesn’t need to have as much computing or battery power on‑board.”


For example, a robot can map a hospital room and upload the resulting map to RoboEarth. Another robot, which doesn’t know the room, can then use that map to locate a glass of water immediately, without having to search for it endlessly. In the same way, a task like opening a box of pills can be shared on RoboEarth, so other robots can perform it without having to be programmed for that specific type of box.
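To make the sharing pattern concrete, here is a minimal sketch of what such an exchange could look like. The endpoint URL, the RoboEarthClient class, and the map format are hypothetical illustrations of the idea, not RoboEarth's actual interface.

```python
import json
import urllib.request

# Hypothetical endpoint; this is NOT RoboEarth's real API, just an illustration.
BASE_URL = "https://example.org/roboearth/api"


class RoboEarthClient:
    """Toy client showing the upload/download pattern described above."""

    def upload_map(self, environment_id, occupancy_grid, annotations):
        """Share a mapped environment (e.g. a hospital room) with other robots."""
        payload = json.dumps({
            "environment": environment_id,
            "grid": occupancy_grid,        # e.g. a 2D list of free/occupied cells
            "annotations": annotations,    # e.g. {"glass_of_water": [3.2, 1.4]}
        }).encode("utf-8")
        req = urllib.request.Request(
            f"{BASE_URL}/maps/{environment_id}",
            data=payload,
            method="PUT",
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    def download_map(self, environment_id):
        """Fetch a map another robot has already built, instead of re-exploring."""
        with urllib.request.urlopen(f"{BASE_URL}/maps/{environment_id}") as resp:
            return json.loads(resp.read())


# A robot that has never seen the room could look up the glass of water directly:
#   client = RoboEarthClient()
#   room = client.download_map("hospital-room-204")
#   target = room["annotations"]["glass_of_water"]
```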

 

RoboEarth is based on four years of research by a team of scientists from six European research institutes (TU/e, Philips, ETH Zürich, TU München and the universities of Zaragoza and Stuttgart).


 

Robots learn from each other on 'Wiki for robots'


Via Dr. Stefan Gruenwald
Scooped by Pierre Duchaine

Measuring spirituality: Real-time data in daily life - The Missoulian

And he'd like the answers in real time: he is launching a website that texts participants' smartphones when it's time to take the twice-daily survey.
Scooped by Pierre Duchaine

Spirituality and Ecological Hope » The Ecology of Economic Growth

If toxic chemicals and invasive species are part of Lake Michigan's new ecology, that's because we use toxic chemicals and flush them into the lake, or bring invasive species in on ships because it's too expensive (say the ...
Scooped by Pierre Duchaine

'Spiritual but Not Religious': A Rising, Misunderstood Voting Bloc - The Atlantic

Spirituality is a big story in politics. Maybe as big a story as religion. It's been more than a decade since evangelicals helped George W.
Scooped by Pierre Duchaine

The EEB & flow: A multiplicity of communities for community ecology

Community ecologists have struggled with some fundamental issues for their discipline. A longstanding example is that we have failed to formally and consistently define our study unit – the ecological community.
Rescooped by Pierre Duchaine from Amazing Science

Facing the Intelligence Explosion: There is Plenty of Room Above


Why are AIs in movies so often of roughly human-level intelligence? One reason is that we almost always fail to see non-humans as non-human. We anthropomorphize. That’s why aliens and robots in fiction are basically just humans with big eyes or green skin or some special power. Another reason is that it’s hard for a writer to write characters that are smarter than the writer. How exactly would a superintelligent machine solve problem X?


The human capacity for efficient cross-domain optimization is not a natural plateau for intelligence. It’s a narrow, accidental, temporary marker created by evolution due to things like the slow rate of neuronal firing and how large a skull can fit through a primate’s birth canal. Einstein may seem vastly more intelligent than a village idiot, but this difference is dwarfed by the difference between the village idiot and a mouse.


As Vernor Vinge put it: The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”[1]  How could an AI surpass human abilities? Let us count the ways:


Speed. Our axons carry signals at seventy-five meters per second or slower. A machine can pass signals along about four million times more quickly.

Serial depth. The human brain can’t rapidly perform any computation that requires more than one hundred sequential steps; thus, it relies on massively parallel computation.[2] More is possible when both parallel and deep serial computations can be performed.

Computational resources. The brain’s size and neuron count are constrained by skull size, metabolism, and other factors. AIs could be built on the scale of buildings or cities or larger. When we can make circuits no smaller, we can just add more of them.

Rationality. As we explored earlier, human brains do nothing like optimal belief formation or goal achievement. Machines can be built from the ground up using (computable approximations of) optimal Bayesian decision networks, and indeed this is already a leading paradigm in artificial agent design (see the toy sketch after this list).

Introspective access/editability. We humans have almost no introspective access to our cognitive algorithms, and cannot easily edit and improve them. Machines can already do this (read about EURISKO and metaheuristics). A limited hack like the method of loci greatly improves human memory; machines can do this kind of thing in spades.
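As a concrete (and deliberately tiny) illustration of the rationality point, the sketch below applies Bayes' rule to update a belief from one piece of evidence and then picks the action with the highest expected utility. The scenario, probabilities, and utilities are invented for illustration; they are not from the essay.

```python
# Toy Bayesian belief update + expected-utility choice (illustrative only).

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' rule for a binary hypothesis H."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Hypothetical agent question: is the battery about to die (H)?
prior = 0.10                   # P(H) before any evidence
p_warning_given_h = 0.90       # P(warning light | H)
p_warning_given_not_h = 0.05   # P(warning light | not H)

posterior = bayes_update(prior, p_warning_given_h, p_warning_given_not_h)

# Expected utility of each action under the updated belief.
utilities = {
    "recharge_now": {"H": +10.0, "not_H": -1.0},   # small cost if battery was fine
    "keep_working": {"H": -50.0, "not_H": +5.0},   # large cost if it dies mid-task
}

def expected_utility(action):
    u = utilities[action]
    return posterior * u["H"] + (1.0 - posterior) * u["not_H"]

best = max(utilities, key=expected_utility)
print(f"P(H | warning) = {posterior:.2f}, best action: {best}")
```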

 

REFERENCES:

[1] Vernor Vinge, “Signs of the Singularity,” IEEE Spectrum, June 2008, http://spectrum.ieee.org/biomedical/ethics/signs-of-the-singularity.

[2] J. A. Feldman and Dana H. Ballard, “Connectionist Models and Their Properties,” Cognitive Science 6, no. 3 (1982): 205–254, doi:10.1207/s15516709cog0603_1.


Via Dr. Stefan Gruenwald
Steffi Tan's curator insight, March 24, 2015 5:43 AM

Vernor Vinge answered the question, "Will computers ever be as smart as humans?" with the simple reply: "Yes, but only briefly."

 

As technology develops, machines will share the same intellectual playing field as humans only briefly before surpassing us and growing exponentially in capability. This again emphasizes how carefully controlled the setting would need to be if an intelligence explosion were to occur. However, even if everyone agrees on the priority of safety, it only takes a single group blindly walking into such circumstances for the event to cause problems for everyone.

Scooped by Pierre Duchaine

'Protect Fragile Ecology of Kodagu' - The New Indian Express

The ecology of Kodagu is under serious threat due to development projects, unregulated urbanisation and invasive tourism.
Rescooped by Pierre Duchaine from Amazing Science

DWave’s updated quantum optimizer gets beaten by a classical computer


New Scientist reports that Matthias Troyer of ETH Zurich in Switzerland has tested a D-Wave Two computer against a conventional, "classical" machine running an optimised algorithm, and his team found no evidence of superior performance from the D-Wave machine.


Quantum computing promises a huge speedup for certain classes of problems, such as factoring numbers into primes. But so far, building a true quantum computer with more than a few bits of processing power has proven an insurmountable hurdle. A company called DWave initially confused matters by announcing that it had developed a quantum computer, but after a bit of back-and-forth, the company has settled on calling its machine a quantum optimizer. It can perform calculations that may rely on quantum effects, but it's not a general quantum computer.

 

With that settled, the obvious question became whether the quantum optimizer was worth the money—did it actually outperform classical computers for some problems? Some initial results published last year looked promising, as an early production machine outperformed classical computers on a number of tests. But that work came under fire because some of the algorithms run on the classical machine weren't as optimized as they could have been.

 

Now, a new team of computer scientists has taken DWave's latest creation, a 512-bit quantum optimizer, and put it through its paces on a single problem. And here, the results are pretty clear: a single classical processor handily beats the DWave machine in most circumstances.

 

The work was done by a large team that includes people from where the DWave 2 machine is housed (USC), a lone Google employee, and a handful of researchers from other academic institutions. The USC machine is an updated version of the one that ran the previous set of tests; 503 of its bits are functional, making it significantly more powerful than the previous version. In this case, rather than tackling a variety of problems, the team focused on a single one: resolving what's called a "spin glass," which starts with a collection of individual spins that are randomly oriented and then finds a low-energy state as those spins interact and reorient.

 

In theory, this is similar to how the DWave machine works, so (at least in a superficial analysis) you might expect the machine to perform well on the problem. Pitted against it is a single-processor classical computer, which gets its answer by simply simulating the same annealing process rather than taking an algorithmic shortcut.
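To make "simulating the same process" concrete, here is a minimal simulated-annealing sketch for a small random Ising spin glass. It is an illustrative toy with an arbitrary problem size and cooling schedule, not the optimised classical solver used in the study.

```python
import math
import random

# Toy simulated annealing on a small random Ising spin glass.
# Illustrative only: tiny instance, arbitrary cooling schedule, not the
# optimised classical solver used in the benchmark.

random.seed(0)
N = 16
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
J = {p: random.choice([-1.0, 1.0]) for p in pairs}    # random +/-1 couplings

def energy(spins):
    """E = -sum_{i<j} J_ij * s_i * s_j, with each spin s_i in {-1, +1}."""
    return -sum(J[(i, j)] * spins[i] * spins[j] for (i, j) in pairs)

spins = [random.choice([-1, 1]) for _ in range(N)]    # random initial orientation
E = energy(spins)
best_E = E

steps = 20000
for step in range(steps):
    T = max(0.01, 3.0 * (1 - step / steps))           # linear cooling schedule
    i = random.randrange(N)
    # Energy change from flipping spin i (only terms involving i change).
    dE = 2 * spins[i] * sum(J[(min(i, k), max(i, k))] * spins[k]
                            for k in range(N) if k != i)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]                          # accept the flip
        E += dE
        best_E = min(best_E, E)

print("lowest energy found:", best_E)
```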


You'd think that simulating a process would be rather inefficient compared to actually running a similar process. But you'd be wrong. If you only consider the time involved in performing the calculations, then DWave does show a considerable advantage, one that starts off rising as the complexity of the problem increases. But at some point, that trend reverses. By the time the problem size is approaching that of the number of bits in DWave's machine, the gains have largely vanished.

 

And that's only considering the time spent calculating. The DWave machine needs time to be set up to model the problem, and then it needs to expend time on error correction. When the full time involved in performing the calculation is considered, the classical computer outperforms the DWave machine on most problems, often by a wide margin. "We find that while the DW2 is sometimes up to 10× faster in pure annealing time," the authors say, "there are many cases where it is ≥ 100× slower."
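To see why those fixed overheads matter so much, a back-of-the-envelope comparison helps. The numbers below are invented purely to illustrate the effect the authors describe; they are not taken from the paper.

```python
# Hypothetical per-problem timings (in seconds) showing how setup and
# error-correction overhead can erase a lead in pure annealing time.
# All numbers are made up for illustration.

anneal_time_dwave = 0.002      # pure annealing time per problem
setup_time_dwave = 0.20        # programming the couplings onto the chip
error_correction_time = 0.30   # repeated runs, error correction, readout

classical_solve_time = 0.02    # optimised simulated annealing on one core

dwave_total = anneal_time_dwave + setup_time_dwave + error_correction_time

pure_speedup = classical_solve_time / anneal_time_dwave   # 10x, in D-Wave's favour
overall_ratio = dwave_total / classical_solve_time        # ~25x, classical's favour

print(f"pure annealing: D-Wave looks {pure_speedup:.0f}x faster")
print(f"wall clock:     classical is {overall_ratio:.0f}x faster overall")
```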

 

The researchers readily admit that the spin glass isn't the only problem the DWave machine can tackle, and there may be others that it handles better. It's also possible, they recognize, that better error correction could give DWave's quantum optimizer a boost. But it could also be that this particular optimizer just isn't as good as a classical computer, even if a better implementation of the same approach might be.

 

We may not have to wait too long to find out, as the authors say, "Future studies will probe these alternatives and aim to determine whether one can find a class of problem instances for which an unambiguous speedup over classical hardware can be observed."

 

Original paper: arxiv.org/pdf/1401.2910v1.pdf

 

Comments:


Scott Aaronson has written up his take and announced his "second retirement" as "chief D-Wave critic." However, I am expecting this to be like Michael Corleone: "Just when I thought I was out... they pull me back in."


D-Wave, Google, and Lockheed Martin remain optimistic about the usefulness of the machine and about future speedups. Troyer's team ran their tests on a D-Wave Two owned by Lockheed Martin and operated by the University of Southern California in Los Angeles. There were certain instances in which the D-Wave computer was up to 10 times faster at problem solving, but in other instances it was one-hundredth the speed of the classical computer. D-Wave's advantage also tended to disappear when the team added in the time needed to configure the D-Wave Two to solve the problem, a step that is not necessary on regular PCs.

The findings don't worry Google: "At this stage we're mainly interested in understanding better what limits and what enhances the performance of quantum hardware to inform future hardware designs," says Google spokesman Jason Freidenfelds. He says Google is also more focused on problems with different structures than the one used in Troyer's test, such as machine-learning problems like the Glass blink-detection algorithm. Google has also used the machine to improve machine learning for automatic image classification; it was able to improve the identification of cars in pictures, work that is applicable to its self-driving car project.


Via Dr. Stefan Gruenwald