Fear of Artificial Intelligence vs. the Ethics and Art of Creative Destruction (Wired): While it may be an interesting question whether the seasons are changing in artificial intelligence (AI), or to what extent the entertainment industry is herding...
The field of soft robotics has attracted a rush of attention in the last year. Down the road at Harvard, multiple groups are working on soft robotic hands, jumping legs, exosuits, and quadrupeds that can do the limbo. At Worcester Polytechnic Institute's Soft Robotics Lab, researchers are building a snake. In San Francisco, a startup called Otherlab is building inflatable robots that can shake hands, walk, and carry riders. In Italy, a group of researchers built a robotic tentacle modeled after an octopus.
Before the 1970s, car companies made cars safer by making them larger and heavier. Then along came the airbag: a lightweight safety device that folded up invisibly into the vehicle until it sensed a crash. Similar revolutions took place with body armor, bridges, and contact lenses, and these researchers believe something similar is happening with robots.
"It’s not a part of conventional robotics technologies," says Fumiya Iida, a professor of bio-inspired robotics at the Swiss Federal Institute of Technology-Zurich and a member of the IEEE committee on soft robotics. "They have to think completely differently, use different materials, different energy sources. Definitely this is the way we should go in the long run." One of the most impressive rigid robots in the world right now is Boston Dynamics’ 300-pound humanoid Atlas. If Atlas wants to pick up a ball, it needs to sense and compute the precise distance between its digits and the ball and figure out exactly where to place its hand and how much pressure to apply.
Robots like Atlas "are doing a lot of thinking," says Barry Trimmer, PhD, a professor at Tufts and the editor of a new journal, Soft Robotics, which launched last month. "There’s a lot of hesitancy. ‘Where do I put my foot next?’ Animals just don't do that. We need to get away from the idea that you have to control every variable."
By contrast, Harvard’s starfish-shaped soft gripper only needs to be told to inflate. As it’s pumped full of air, it conforms to the shape of an object until its "fingers" have enough pressure to lift it. Another example would be a human picking up a glass of water. We don’t have to compute the exact size and shape of the glass with our brains; our hand adapts to the object. Similarly, Bubbles doesn’t calculate the full length of its movement.
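That difference in control styles is easy to caricature in code. Below is a toy, self-contained sketch (invented numbers and a one-line physics model, not Harvard's actual controller): the only feedback is a pressure reading, and the loop simply inflates until that reading crosses a threshold, with no model of the object's size, shape, or distance at all.

class SoftGripper:
    """Toy physics: fingers touch the object at some inflation level;
    beyond that, contact pressure grows with further inflation."""
    def __init__(self, contact_at=10.0):
        self.inflation = 0.0
        self.contact_at = contact_at   # inflation level where fingers touch

    def inflate(self, step):
        self.inflation += step

    def contact_pressure(self):
        return max(0.0, self.inflation - self.contact_at)

def soft_grasp(gripper, hold_pressure=5.0, step=1.0):
    # No object model: just pump in small steps until the measured
    # contact pressure says the object is held.
    while gripper.contact_pressure() < hold_pressure:
        gripper.inflate(step)
    return gripper.inflation

print(soft_grasp(SoftGripper()))   # stops at inflation 15.0: "held"

An Atlas-style controller would instead have to estimate the object's exact geometry and compute finger placements and forces before acting; here all of that work is offloaded onto the material.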
There are technological challenges as well. In addition to air and fluid pressure actuators, soft roboticists are experimenting with dielectric elastomers, elastic materials that expand and contract in response to electric voltage; shape-memory alloys, metal alloys that can be programmed to change shape at certain temperatures; and springs that respond to light. These approaches are still rudimentary, as are the control systems that operate the robots. In the case of many of Harvard’s soft robots, it’s simply a syringe of air attached to a tube.
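To make one of those actuation approaches concrete: for a dielectric elastomer film, a standard first-order model (not specific to any one group's device) says the electrodes squeeze the film with an effective "Maxwell" pressure that grows with the square of the electric field,

    p = ε0 · εr · (V / t)²

where V is the applied voltage, t is the film thickness, εr is the elastomer's relative permittivity and ε0 is the vacuum permittivity. The squared dependence is why these actuators typically need kilovolt drive voltages across very thin films to produce useful strains.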
The field is so new, however, that no possibilities have yet been ruled out. Soft robotics technologies could theoretically be used in a wearable pair of human wings. More practically, soft robots could easily pack eggs or pick fruit — traditional hard robots, equipped with superhuman grips, are more likely to break yolks and inadvertently make applesauce. A mass of wormlike "meshworm" robots could be filled with water and dropped over a disaster area, where they would crawl to survivors. A soft robotic sleeve could be worn to eliminate tremors or supplement strength lost with age. Soft robots could be used in space exploration, where weight is hugely important; in prosthetics, where they would provide comfort and lifelikeness; in the home, where they can help out around the house without trampling the dog; and in surgical robots, where operators have inspired a few lawsuits after puncturing patients’ insides.
Drawing on the work of a clever cadre of academic researchers, the biggest names in tech—including Google, Facebook, Microsoft, and Apple—are embracing a more powerful form of AI known as “deep learning,” using it to improve everything from speech recognition and language translation to computer vision, the ability to identify images without human help.
It was one of the most tedious jobs on the internet. A team of Googlers would spend day after day staring at computer screens, scrutinizing tiny snippets of street photographs, asking themselves the same question over and over again: “Am I looking at an address or not?” Click. Yes. Click. Yes. Click. No. This was…
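Reading a house number from a photo crop is the textbook job for the deep networks described above. As a rough, generic illustration (a small PyTorch model with invented layer sizes, not Google's actual system), a convolutional network maps a cropped image straight to digit scores:

import torch
import torch.nn as nn

class DigitNet(nn.Module):
    """Tiny convolutional classifier: 32x32 RGB crop -> scores for 0-9."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

scores = DigitNet()(torch.randn(1, 3, 32, 32))   # one fake crop in
print(scores.shape)                              # torch.Size([1, 10])

Trained on enough labeled crops, a model like this answers the "address or not?" question automatically, which is exactly what made the manual clicking obsolete.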
Researchers from MIT’s Laboratory for Information and Decision Systems have developed an algorithm in which distributed agents — such as robots exploring a building — collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses.
In experiments involving several different data sets, the researchers’ distributed algorithm actually outperformed a standard algorithm that works on data aggregated at a single location, as described in an arXiv paper.
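As a toy illustration of the pairwise-exchange idea (this is generic "gossip averaging", not the MIT algorithm itself), let each agent summarize its local data as a sample mean, and let every chance encounter replace both agents' estimates with their average. Assuming equal-sized local datasets, repeated random encounters pull every agent toward the network-wide answer with no central aggregator:

import random

random.seed(0)

# Each agent summarizes its own local data; here, a local sample mean.
local_means = [sum(random.gauss(5.0, 2.0) for _ in range(100)) / 100
               for _ in range(10)]

# Pairwise exchanges: two agents "passing in the hall" average their
# current estimates. (Assumes equal-sized local datasets.)
for _ in range(200):
    i, j = random.sample(range(10), 2)
    avg = (local_means[i] + local_means[j]) / 2
    local_means[i] = local_means[j] = avg

print(local_means[0])   # every agent converges toward the global mean, ~5.0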
Google has set up an ethics board to oversee its work in artificial intelligence. The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans. One of DeepMind's founders has warned that artificial intelligence is the 'number 1 risk for this century' and believes it could play a part in human extinction. The ethics board, revealed by the website The Information, is to ensure the projects are not abused.
To further advance our understanding of the brain, new concepts and theories are needed. In particular, the ability of the brain to create information flows must be reconciled with its propensity for synchronization and mass action. The theoretical and empirical framework of Coordination Dynamics, a key aspect of which is metastability, is presented as a starting point for studying the interplay of integrative and segregative tendencies that are expressed in space and time during the normal course of brain and behavioral function. Some recent shifts in perspective that may ultimately lead to a better understanding of brain complexity are emphasized.
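For readers unfamiliar with the term, metastability in Coordination Dynamics is usually illustrated with the extended Haken-Kelso-Bunz (HKB) model of the relative phase φ between two coordinated components (quoted here only as an illustration of the concept, not as part of the paper above):

    dφ/dt = δω − a·sin(φ) − 2b·sin(2φ)

where δω is the difference between the components' natural frequencies and a, b set the coupling strength. When |δω| is small, the system settles into stable phase-locked states (integration); when |δω| is too large for any fixed point to exist, φ drifts but lingers near the "ghosts" of those states, a regime that is neither fully locked nor fully independent. That lingering without locking is metastability.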
Fifty years ago, John Bell made metaphysics testable, but quantum scientists still dispute the implications.
In 1964, Northern Irish physicist John Bell proved mathematically that certain quantum correlations, unlike all other correlations in the Universe, cannot arise from any local cause (ref. 1). This theorem has become central to both metaphysics and quantum information science. But 50 years on, the experimental verifications of these quantum correlations still have ‘loopholes’, and scientists and philosophers still dispute exactly what the theorem states.
Quantum theory does not predict the outcomes of a single experiment, but rather the statistics of possible outcomes. For experiments on pairs of ‘entangled’ quantum particles, Bell realized that the predicted correlations between outcomes in two well-separated laboratories can be profoundly mysterious (see ‘How entanglement makes the impossible possible’). Correlations of this sort, called Bell correlations, were verified experimentally more than 30 years ago (see, for example, ref. 2). As Bell proved in 1964, this leaves two options for the nature of reality. The first is that reality is irreducibly random, meaning that there are no hidden variables that “determine the results of individual measurements” (ref. 1). The second option is that reality is ‘non-local’, meaning that “the setting of one measuring device can influence the reading of another instrument, however remote” (ref. 1).
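The quantitative content of the theorem is often stated via the CHSH inequality, a later refinement of Bell's 1964 argument. For measurement settings a, a′ in one laboratory and b, b′ in the other, with E denoting the correlation between outcomes, define

    S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)

Any local hidden-variable theory obeys |S| ≤ 2, whereas quantum mechanics predicts values up to 2√2 ≈ 2.83 for suitably entangled pairs. Bell correlations are exactly the ones that break the classical bound.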
Most physicists are localists: they recognize the two options but choose the first, because hidden variables are, by definition, empirically inaccessible. Quantum information scientists embrace irreducible randomness as a resource for secure cryptography (ref. 3). Other physicists and philosophers (the ‘non-localist camp’) dispute that there are two options, and insist that Bell’s theorem mandates non-locality (ref. 4).
Bell himself was a non-localist, an opinion he first published in 1976 (ref. 6), after introducing a concept, “local causality”, that is subtly different from the locality of the 1964 theorem. Deriving local causality from Einstein’s principle requires an even stronger notion of causation: if two events are statistically correlated, then either one causes the other, or they have a common cause which, when taken into account, eliminates the correlation.
In 1976, Bell proved that his new concept of local causality (based implicitly on the principle of common cause) was ruled out by Bell correlations (ref. 6). Unlike the 1964 theorem, this 1976 theorem offered no second option of giving up hidden variables. Nature violates local causality.
Experiments in 1982 by a team led by French physicist Alain Aspect (ref. 2), using well-separated detectors with settings changed just before the photons were detected, suffered from an ‘efficiency loophole’ in that most of the photons were not detected. This allows the experimental correlations to be reproduced by (admittedly, very contrived) local hidden-variable theories.
In 2013, this loophole was closed in photon-pair experiments using high-efficiency detectors (refs 7, 8). But they lacked large separations and fast switching of the settings, opening the ‘separation loophole’: information about the detector setting for one photon could have propagated, at light speed, to the other detector, and affected its outcome.
There are several groups worldwide racing to do the first Bell experiment with large separation, efficient detection and fast switching. It will be a landmark achievement in physics. But would such an experiment really close all the loopholes? The answer depends on one’s attitude to causation.
The Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, has started to get ready for its second three-year run. Cool-down of the vast machine has already begun in preparation for research to resume early in 2015, following a long technical stop to ready the machine for running at almost double the energy of Run 1. The last LHC magnet interconnection was closed on 18 June 2014, and one sector (one-eighth of the machine) has already been cooled to operating temperature.
Without Claude Shannon, there would have been no Internet (Raw Story): It was the start of the science of “information theory”, a set of ideas that has allowed us to build the internet, digital computers and telecommunications systems.
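The core quantity of that science is Shannon's entropy, which measures the average information content of a source in bits:

    H(X) = − Σ p(x) · log2 p(x)

A fair coin carries H = 1 bit per flip; a biased coin carries less, which is why predictable data can be compressed. Every modern codec and every capacity limit on a communications link traces back to this formula.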