Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

Korean Go champ scores surprise victory over supercomputer

A South Korean Go grandmaster on Sunday scored his first win over a Google-developed supercomputer, in a surprise victory after three humiliating defeats in a high-profile showdown between man and machine.

 

Lee Se-Dol beat AlphaGo after a nail-biting match that lasted for nearly five hours—the fourth in the best-of-five series, in which the computer had already clinched victory by going 3-0 up on Saturday.

 

Lee struggled in the early phase of the fourth match but gained a lead towards the end, eventually prompting AlphaGo to resign.

The 33-year-old is one of the greatest players in the modern history of the ancient board game, with 18 international titles to his name—the second most in the world.

 

"I couldn't be happier today...this victory is priceless. I wouldn't trade it for the world," a smiling Lee said after the match to cheers and applause from the audience. "I can't say I wasn't hurt by the past three defeats...but I still enjoyed every moment of playing so it really didn't damage me greatly," he said.

 

Lee earlier predicted a landslide victory over Artificial Intelligence (AI) but was later forced to concede that AlphaGo was "too strong". Lee had vowed to try his best to win at least one game after his second defeat.

 

Described as the "match of the century" by local media, the game was closely watched by tens of millions of Go fans mostly in East Asia as well as AI scientists.

 

The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat the then world chess champion Garry Kasparov. But Go, played for centuries mostly in Korea, Japan and China, had long remained the holy grail for AI developers due to its complexity and near-infinite number of potential configurations.

Scooped by Dr. Stefan Gruenwald

Google is using machine learning to teach robots intelligent reactive behaviors


Using your hand to grasp a pen that’s lying on your desk doesn’t exactly feel like a chore, but for robots, that’s still a really hard thing to do. So to teach robots how to better grasp random objects, Google’s research team dedicated 14 robots to the task. The standard way to solve this problem would be for the robot to survey the environment, create a plan for how to grasp the object, then execute on it. In the real world, though, lots of things can change between formulating that plan and executing on it.

 

Google is now using these robots to train a deep convolutional neural network (a technique that’s all the rage in machine learning right now) to help its robots predict the outcome of their grasps based on the camera input and motor commands. It’s basically hand-eye coordination for robots.
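
To make the idea concrete, here is a minimal sketch (not Google's actual model) of a network that scores a candidate motor command against the current camera image, assuming PyTorch is available; the class name GraspNet, the input sizes, and the training data are illustrative stand-ins. At run time the robot could score many candidate commands per frame and keep adjusting as the camera feed changes, which is the "hand-eye coordination" described above.

```python
# Minimal sketch (not Google's actual model): a CNN that predicts grasp success
# from a camera image plus a candidate motor command. All names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self, command_dim=7):
        super().__init__()
        # Convolutional trunk encodes the camera view of the scene.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # The motor command (e.g., a gripper pose change) is fused with image features.
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + command_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit of grasp-success probability
        )

    def forward(self, image, command):
        features = self.encoder(image)
        return self.head(torch.cat([features, command], dim=1))

# One training step on a batch of logged grasp attempts (image, command, success label).
model = GraspNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 64, 64)            # stand-in camera frames
commands = torch.randn(8, 7)                  # stand-in sampled motor commands
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = the object was grasped

loss = loss_fn(model(images, commands), labels)
loss.backward()
optimizer.step()
```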

 

The team says that it took about 3,000 hours of practice (and 800,000 grasp attempts) before it saw “the beginnings of intelligent reactive behaviors.”

 

“The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group,” the team writes. “All of these behaviors emerged naturally from learning, rather than being programmed into the system.”

 

Google’s researchers say the average failure rate without training was 34 percent on the first 30 picking attempts. After training, that number was down to 18 percent. Still not perfect, but the next time a robot comes running after you and tries to grab you, remember that it now has an 80 percent chance of succeeding.

Scooped by Dr. Stefan Gruenwald

World record: First robot to solve a Rubik's Cube in under 1 second (0.887 s)

Prior to the world-record attempt, a WCA-conforming modified speed cube was scrambled with a computer-generated random sequence and positioned in the robot. Once the start button was hit, two webcam shutters were moved away. Thereafter a laptop took two pictures, each picture showing three sides of the cube. Then the laptop identified all colors of the cube and calculated a solution with Tomas Rokicki's extremely fast implementation of Herbert Kociemba's Two-Phase Algorithm. The solution was handed over to an Arduino-compatible microcontroller board that orchestrated the 20 moves of six high-performance stepper motors. Only 887 milliseconds after the start button had been hit, Sub1 broke a historic barrier and finished the last move in a new world-record time.
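
For readers who want to experiment with the software side of this pipeline, the sketch below uses the open-source kociemba Python package (a two-phase solver in the same family, not Tomas Rokicki's tuned implementation used by Sub1); capture_cube_state and send_to_steppers are hypothetical placeholders for the webcam color detection and the stepper interface.

```python
# Sketch of a Sub1-style software pipeline using the open-source `kociemba`
# package. The hardware-facing functions are hypothetical placeholders.
import kociemba

def capture_cube_state():
    # Would return the 54-sticker facelet string in URFDLB order, scanned from
    # two photos that each show three faces. This string is the illustrative
    # scramble from the kociemba package documentation.
    return "DRLUUBFBRBLURRLRUBLRDDFDLFUFUFFDBRDUBRUFLLFDDBFLUBLRBD"

def send_to_steppers(moves):
    # Would stream the move sequence to the microcontroller driving six steppers.
    print("executing:", moves)

facelets = capture_cube_state()
solution = kociemba.solve(facelets)   # e.g. "D2 R' D' F2 B D R2 ..."
send_to_steppers(solution.split())
```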

Having required several hundred working hours to construct, build, program and tune, Sub1 is the first robot that can independently inspect and solve a Rubik's Cube in under 1 second.

The world record was approved by Guinness World Records on 18 February 2016.
Scooped by Dr. Stefan Gruenwald

An evolutionary approach to AI learning

Researchers develop AI that can learn how best to achieve its goals, but do we need to step back a bit and think first about the implications?


AI guru Nick Bostrom frequently highlights the crucial importance of the goal-setting process when building a superintelligence. He outlines the complexities involved in doing this, with the many unintended consequences that lie in wait for us. Despite the perilous nature of such work, it’s increasingly likely that automated systems will be given such an end goal and then freedom over how they achieve it. Recent work by researchers at the University of California, Berkeley revolves around setting such a goal and devising algorithms that successfully attain it.


Bostrom suggests that this can be averted by giving the AI an overarching goal of friendliness, although even that is not without difficulty. “How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration,” he says.


We’re reaching the stage where AI is increasingly capable of doing fantastical things.  Now is perhaps the time to address just how we’d like it to do that before we reach a point where such deliberations are too late.

Scooped by Dr. Stefan Gruenwald

AI: Machine learning helps discover the most luminous supernova in history

Machine-learning technology developed at Los Alamos National Laboratory played a key role in the discovery of supernova ASASSN-15lh, an exceptionally powerful explosion that was 570 billion times brighter than the sun and more than twice as luminous as the previous record-holding supernova. This extraordinary event marking the death of a star was identified by the All Sky Automated Survey for SuperNovae (ASAS-SN) and is described in a new study published today in Science.


"This is a golden age for studying changes in astronomical objects thanks to rapid growth in imaging and computing technology," said Przemek Wozniak, the principal investigator of the project that created the software system used to spot ASASSN-15lh. "ASAS-SN is a leader in wide-area searches for supernovae using small robotic telescopes that repeatedly observe the same areas of the sky looking for interesting changes."


ASASSN-15lh was first observed in June 2015 by twin ASAS-SN telescopes (just 14 centimeters in diameter) located in Cerro Tololo, Chile. While supernovae already rank among the most energetic explosions in the universe, this one was 200 times more powerful than a typical supernova. The event appears to be an extreme example of a "superluminous supernova," a recently discovered class of rare cosmic explosions, most likely associated with gravitational collapse of dying massive stars. However, the record-breaking properties of ASASSN-15lh stretch even the most exotic theoretical models based on rapidly spinning neutron stars called magnetars.


“The grand challenge in this work is to select rare transient events from a deluge of imaging data in time to collect detailed follow-up observations with larger, more powerful telescopes," said Wozniak. "We developed an automated software system based on machine-learning algorithms to reliably separate real transients from bogus detections." This new technology will soon enable scientists to find ten or perhaps even a hundred times more supernovae and explore truly rare cases in great detail. Since January 2015 this capability has been deployed on a live data stream from ASAS-SN.
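
The real/bogus vetting step Wozniak describes can be illustrated with a very small stand-in (this is not the Los Alamos pipeline): a random forest trained on a handful of features measured from each candidate detection, assuming scikit-learn; the feature list and the data are invented for the example.

```python
# Minimal sketch of "real vs. bogus" transient vetting (not the Los Alamos code):
# a random forest trained on features extracted from difference-image cutouts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: stand-in features of one candidate detection, e.g. peak flux, FWHM,
# ellipticity, count of negative pixels near the peak, distance to nearest star.
X = rng.normal(size=(5000, 5))
y = rng.integers(0, 2, size=5000)   # 1 = real transient, 0 = artifact

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# In operation, candidates scoring above a threshold would be queued for
# follow-up with larger telescopes.
scores = clf.predict_proba(X_test)[:, 1]
print("fraction flagged as real:", (scores > 0.8).mean())
```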


Los Alamos is also developing high-fidelity computer simulations of shock waves and radiation generated in supernova explosions. As explained by Chris Fryer, a computational scientist at Los Alamos who leads the supernova simulation and modeling group, "By comparing our models with measurements collected during the onset of a supernova, we will learn about the progenitors of these violent events, the end stages of stellar evolution leading up to the explosion, and the explosion mechanism itself."


The next generation of massive sky-monitoring surveys is poised to deliver a steady stream of high-impact discoveries like ASASSN-15lh. The Large Synoptic Survey Telescope (LSST), expected to go on sky in 2022, will collect 100 petabytes (100 million gigabytes) of imaging data. The Zwicky Transient Facility (ZTF), planned to begin operations in 2017, is designed to routinely catch supernovae in the act of exploding. However, even with LSST and ZTF up and running, ASAS-SN will have the unique advantage of observing the entire visible sky on a daily cadence. Los Alamos is at the forefront of this field and well prepared to make important contributions in the future.

Scooped by Dr. Stefan Gruenwald

Google AI algorithm defeats human professional Go player for the first time


A computer has beaten a human professional for the first time at Go — an ancient board game that has long been viewed as one of the greatest challenges for artificial intelligence (AI). The best human players of chess, draughts and backgammon have all been outplayed by computers. But a hefty handicap was needed for computers to win at Go. Now Google’s London-based AI company, DeepMind, claims that its machine has mastered the game.


DeepMind’s program AlphaGo beat Fan Hui, the European Go champion, five times out of five in tournament conditions, the firm reveals in research published in Nature on 27 January. It also defeated its silicon-based rivals, winning 99.8% of games against the current best programs. The program has yet to play the Go equivalent of a world champion, but a match against South Korean professional Lee Sedol, considered by many to be the world’s strongest player, is scheduled for March. “We’re pretty confident,” says DeepMind co-founder Demis Hassabis.

“This is a really big result, it’s huge,” says Rémi Coulom, a programmer in Lille, France, who designed a commercial Go program called Crazy Stone. He had thought computer mastery of the game was a decade away. The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game. But AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games.


This means that similar techniques could be applied to other AI domains that require recognition of complex patterns, long-term planning and decision-making, says Hassabis. “A lot of the things we’re trying to do in the world come under that rubric.” Examples are using medical images to make diagnoses or treatment plans, and improving climate-change models.


In China, Japan and South Korea, Go is hugely popular and is even played by celebrity professionals. But the game has long interested AI researchers because of its complexity. The rules are relatively simple: the goal is to gain the most territory by placing and capturing black and white stones on a 19 × 19 grid. But the average 150-move game contains more possible board configurations — about 10^170 — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.
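
A quick back-of-the-envelope check of that scale, using nothing more than Python integers: a naive upper bound treats each of the 361 intersections as empty, black, or white, and even the smaller count of legal positions (the ~10^170 figure quoted above) dwarfs the roughly 10^80 atoms in the observable universe.

```python
# Back-of-the-envelope check of the scale quoted above. Each of the 361
# intersections on a 19x19 board is empty, black, or white, giving a naive
# upper bound of 3^361 states; the ~10^170 figure counts *legal* positions,
# which is smaller but still dwarfs the ~10^80 atoms in the observable universe.
import math

upper_bound = 3 ** 361
print(math.log10(upper_bound))        # ~172.2, i.e. about 1.7e172
print(math.log10(upper_bound) - 80)   # ~92 orders of magnitude beyond the atom count
```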


Chess is less complex than Go, but it still has too many possible configurations to solve by brute force alone. Instead, programs cut down their searches by looking a few turns ahead and judging which player would have the upper hand. In Go, recognizing winning and losing positions is much harder: stones have equal values and can have subtle impacts far across the board.


To interpret Go boards and to learn the best possible moves, the AlphaGo program applied deep learning in neural networks — brain-inspired programs in which connections between layers of simulated neurons are strengthened through examples and experience. It first studied 30 million positions from expert games, gleaning abstract information on the state of play from board data, much as other programs categorize images from pixels. Then it played against itself across 50 computers, improving with each iteration, a technique known as reinforcement learning.
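
A toy sketch of the first of those two stages is shown below (this is not DeepMind's code): a convolutional policy network trained to predict the expert's next move from a board position, assuming PyTorch; the three-plane board encoding and the random stand-in data are simplifications of AlphaGo's actual inputs. The self-play reinforcement-learning stage is only indicated in a comment.

```python
# Toy sketch of AlphaGo's first training stage (not DeepMind's code): a policy
# network trained to predict the expert's next move from a board position.
# The 3-plane encoding (own stones, opponent stones, empties) and the random
# data are illustrative simplifications.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1), nn.Flatten(),   # one logit per board point
        )

    def forward(self, board):                    # board: (N, 3, 19, 19)
        return self.net(board)                   # logits over 361 moves

policy = PolicyNet()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

boards = torch.randn(32, 3, 19, 19)              # stand-in expert positions
expert_moves = torch.randint(0, 361, (32,))      # stand-in expert move indices

loss = nn.functional.cross_entropy(policy(boards), expert_moves)
loss.backward()
optimizer.step()

# Stage two (not shown): copies of the network play each other and move
# probabilities are nudged toward choices that ended in wins (reinforcement
# learning), which is what "improving with each iteration" refers to above.
```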


The software was already competitive with the leading commercial Go programs, which select the best move by scanning a sample of simulated future games. DeepMind then combined this search approach with the ability to pick moves and interpret Go boards — giving AlphaGo a better idea of which strategies are likely to be successful. The technique is “phenomenal”, says Jonathan Schaeffer, a computer scientist at the University of Alberta in Edmonton, Canada, whose software Chinook solved draughts in 2007. Rather than follow the trend of the past 30 years of trying to crack games using computing power, DeepMind has reverted to mimicking human-like knowledge, albeit by training, rather than by being programmed, he says. The feat also shows the power of deep learning, which is going from success to success, says Coulom. “Deep learning is killing every problem in AI.”
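
The way search and learned knowledge reinforce each other can be illustrated with a generic Monte Carlo tree search on a trivial game (the "count to 21" game), shown below. This is not AlphaGo's search: the move prior here is uniform and positions are evaluated by random rollouts, which are exactly the two ingredients AlphaGo replaces with its policy and value networks.

```python
# Generic UCT-style Monte Carlo tree search on the "count to 21" game, as a toy
# stand-in for the search described above. Uniform priors and random rollouts
# play the roles that AlphaGo fills with its learned policy and value networks.
import math
import random

WIN_TOTAL = 21
MOVES = (1, 2, 3)   # players alternately add 1-3; whoever reaches 21 wins

def legal_moves(total):
    return [m for m in MOVES if total + m <= WIN_TOTAL]

class Node:
    def __init__(self, total, prior):
        self.total = total        # running total after the move into this node
        self.prior = prior        # move prior (uniform here)
        self.children = {}        # move -> child Node
        self.visits = 0
        self.value_sum = 0.0      # from the viewpoint of the player who just moved

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def rollout(total):
    # Random playout: +1 if the player to move from `total` eventually wins.
    player = 0
    while True:
        total += random.choice(legal_moves(total))
        if total == WIN_TOTAL:
            return 1 if player == 0 else -1
        player ^= 1

def mcts(root_total, simulations=2000, c_puct=1.4):
    root = Node(root_total, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        while node.children:                      # selection
            visits = sum(ch.visits for ch in node.children.values())
            _, node = max(
                node.children.items(),
                key=lambda kv: kv[1].value()
                + c_puct * kv[1].prior * math.sqrt(visits + 1) / (1 + kv[1].visits),
            )
            path.append(node)
        moves = legal_moves(node.total)           # expansion (empty if terminal)
        for m in moves:
            node.children[m] = Node(node.total + m, prior=1.0 / len(moves))
        if node.total == WIN_TOTAL:               # evaluation
            value = 1.0                           # the player who just moved won
        else:
            value = -rollout(node.total)          # next player's result, negated
        for n in reversed(path):                  # backpropagation, flipping sides
            n.visits += 1
            n.value_sum += value
            value = -value
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# From a total of 15 the game-theoretic winning reply is 2 (reaching 17);
# with enough simulations the search almost always converges on it.
print(mcts(15))
```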


Scooped by Dr. Stefan Gruenwald

Will computers ever truly understand what humans are saying?


If you think computers are quickly approaching true human communication, think again. Computers like Siri often get confused because they judge meaning by looking at a word's statistical regularity. This is unlike humans, for whom context is more important than the word or signal, according to a researcher who invented a communication game allowing only nonverbal cues, and used it to pinpoint regions of the brain where mutual understanding takes place.


From Apple's Siri to Honda's robot Asimo, machines seem to be getting better and better at communicating with humans.

But some neuroscientists caution that today's computers will never truly understand what we're saying because they do not take into account the context of a conversation the way people do.

Specifically, say University of California, Berkeley, postdoctoral fellow Arjen Stolk and his Dutch colleagues, machines don't develop a shared understanding of the people, place and situation -- often including a long social history -- that is key to human communication. Without such common ground, a computer cannot help but be confused.

"People tend to think of communication as an exchange of linguistic signs or gestures, forgetting that much of communication is about the social context, about who you are communicating with," Stolk said.

The word "bank," for example, would be interpreted one way if you're holding a credit card but a different way if you're holding a fishing pole. Without context, making a "V" with two fingers could mean victory, the number two, or "these are the two fingers I broke."

"All these subtleties are quite crucial to understanding one another," Stolk said, perhaps more so than the words and signals that computers and many neuroscientists focus on as the key to communication. "In fact, we can understand one another without language, without words and signs that already have a shared meaning."

Babies and parents, not to mention strangers lacking a common language, communicate effectively all the time, based solely on gestures and a shared context they build up over even a short time.

Stolk argues that scientists and engineers should focus more on the contextual aspects of mutual understanding, basing his argument on experimental evidence from brain scans that humans achieve nonverbal mutual understanding using unique computational and neural mechanisms. Some of the studies Stolk has conducted suggest that a breakdown in mutual understanding is behind social disorders such as autism.

"This shift in understanding how people communicate without any need for language provides a new theoretical and empirical foundation for understanding normal social communication, and provides a new window into understanding and treating disorders of social communication in neurological and neurodevelopmental disorders," said Dr. Robert Knight, a UC Berkeley professor of psychology in the campus's Helen Wills Neuroscience Institute and a professor of neurology and neurosurgery at UCSF.

Stolk and his colleagues discuss the importance of conceptual alignment for mutual understanding in an opinion piece appearing Jan. 11 in the journal Trends in Cognitive Sciences.

Scooped by Dr. Stefan Gruenwald

AI: Deep-learning algorithm predicts photos’ memorability at ‘near-human’ levels


Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a deep-learning algorithm that can predict how memorable or forgettable an image is almost as accurately as humans, and they plan to turn it into an app that tweaks photos to make them more memorable. For each photo, the “MemNet” algorithm also creates a “heat map” (a color-coded overlay) that identifies exactly which parts of the image are most memorable. You can try it out online by uploading your own photos to the project’s “LaMem” dataset.


The research is an extension of a similar algorithm the team developed for facial memorability. The team fed its algorithm tens of thousands of images from several different datasets developed at CSAIL, including LaMem and the scene-oriented SUN and Places. The images had each received a “memorability score” based on the ability of human subjects to remember them in online experiments.


The team then pitted its algorithm against human subjects by having the model predict how memorable a group of people would find a new, never-before-seen image. It performed 30 percent better than existing algorithms and was within a few percentage points of the average human performance. By emphasizing different regions, the algorithm can also potentially increase the image’s memorability.
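
In spirit, the prediction task can be set up as a straightforward regression problem; the sketch below (not the actual MemNet model) fine-tunes a small standard image network from torchvision to output a single memorability score per photo, assuming PyTorch and torchvision, with stand-in data. MemNet's heat maps would come from an additional step that asks which image regions most influence that score.

```python
# Sketch of memorability regression in the spirit of MemNet (not the CSAIL
# model): fine-tune a standard image network to output one score per photo.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)              # pretrained weights in practice
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single-output regression head

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

images = torch.randn(16, 3, 224, 224)   # stand-in photos
scores = torch.rand(16, 1)              # stand-in human memorability scores in [0, 1]

pred = torch.sigmoid(backbone(images))  # keep predictions in [0, 1]
loss = loss_fn(pred, scores)
loss.backward()
optimizer.step()
```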


“CSAIL researchers have done such manipulations with faces, but I’m impressed that they have been able to extend it to generic images,” says Alexei Efros, an associate professor of computer science at the University of California at Berkeley. “While you can somewhat easily change the appearance of a face by, say, making it more ‘smiley,’ it is significantly harder to generalize about all image types.”


LaMem is the world’s largest image-memorability dataset. With 60,000 images, each annotated with detailed metadata about qualities such as popularity and emotional impact, LaMem is the team’s effort to spur further research on what they say has often been an under-studied topic in computer vision.


Team members picture a variety of potential applications, from improving the content of ads and social media posts, to developing more effective teaching resources, to creating your own personal “health-assistant” device to help you remember things. The team next plans to try to update the system to be able to predict the memory of a specific person, as well as to better tailor it for individual “expert industries” such as retail clothing and logo design.


The work is supported by grants from the National Science Foundation, as well as the McGovern Institute Neurotechnology Program, the MIT Big Data Initiative at CSAIL, research awards from Google and Xerox, and a hardware donation from Nvidia.

Scooped by Dr. Stefan Gruenwald

Gary Marcus, A Deep Learning Researcher, Thinks He Has a More Powerful AI Approach

Like any proud father, Gary Marcus is only too happy to talk about the latest achievements of his two-year-old son. More unusually, he believes that the way his toddler learns and reasons may hold the key to making machines much more intelligent.

Sitting in the boardroom of a bustling Manhattan startup incubator, Marcus, a 45-year-old professor of psychology at New York University and the founder of a new company called Geometric Intelligence, describes an example of his boy’s ingenuity. From the backseat of the car, his son had seen a sign showing the number 11, and because he knew that other double-digit numbers had names like “thirty-three” and “seventy-seven,” he asked his father if the number on the sign was “onety-one.”

“He had inferred that there is a rule about how you put your numbers together,” Marcus explains with a smile. “Now, he had overgeneralized it, and he made a mistake, but it was a very sophisticated mistake.”

Marcus has a very different perspective from many of the computer scientists and mathematicians now at the forefront of artificial intelligence. He has spent decades studying the way the human mind works and how children learn new skills such as language and musicality. This has led him to believe that if researchers want to create truly sophisticated artificial intelligence—something that readily learns about the world—they must take cues from the way toddlers pick up new concepts and generalize. And that’s one of the big inspirations for his new company, which he’s running while on a year’s leave from NYU. With its radical approach to machine learning, Geometric Intelligence aims to create algorithms for use in an AI that can learn in new and better ways.


Nowadays almost everyone else trying to commercialize AI, from Google to Baidu, is focused on algorithms that roughly model the way neurons and synapses in the brain change as they are exposed to new information and experiences. This approach, known as deep learning, has produced some astonishing results in recent years, especially as more data and more powerful computer hardware have allowed the underlying calculations to grow in scale. Deep-learning methods have matched—or even surpassed—human accuracy in recognizing faces in images or identifying spoken words in audio recordings. Google, Facebook, and other big companies are applying the approach to just about any task in which it is useful to spot a pattern in huge amounts of data, such as refining search results or teaching computers how to hold a conversation (see “Teaching Machines to Understand Us”).


But is deep learning based on a model of the brain that is too simple? Geometric Intelligence—indeed, Marcus himself—is betting that computer scientists are missing a huge opportunity by ignoring many subtleties in the way the human mind works. In his writing, public appearances, and comments to the press, Marcus can be a harsh critic of the enthusiasm for deep learning. But despite his occasionally abrasive approach, he does offer a valuable counter-perspective. Among other things, he points out that these systems need to be fed many thousands of examples in order to learn something. Researchers who are trying to develop machines capable of conversing naturally with people are doing it by giving their systems countless transcripts of previous conversations. This might well produce something capable of simple conversation, but cognitive science suggests it is not how the human mind acquires language.

Rescooped by Dr. Stefan Gruenwald from Nostri Orbis

Complex Robot Behavior Emerges from a Master Algorithm

One researcher has developed a simple way to let robots generate remarkably sophisticated behaviors.


For all the talk of machines becoming intelligent, getting a sophisticated robot to do anything complex, like grabbing a heavy object and moving it from one place to another, still requires many hours of careful, patient programming. Igor Mordatch, a postdoctoral fellow at the University of California, Berkeley, is working on a different approach–one that could help hasten the arrival of robot helpers, if not overlords. He gives a robot an end goal and an algorithm that lets it figure out how to achieve the goal for itself. That’s the kind of independence that will be necessary for, say, a home assistance bot to reliably fetch you a cup of coffee from the counter.


Mordatch works in the lab of Pieter Abbeel, an associate professor of robotics at Berkeley. When I visited the lab this year, I saw all sorts of robots learning to perform different tasks. A large white research robot called PR2, which has an elongated head and two arms with pincer-like hands, was slowly figuring out how to pick up bright building blocks, through a painstaking and often clumsy process of trial and error.


As he works on a better teaching process, Mordatch is mainly using software that simulates robots. This virtual model, first developed with his PhD advisor at the University of Washington, Emo Todorov, and another professor at the school, Zoran Popović, has some understanding of how to make contact with the ground or with objects. The learning algorithm then uses these guidelines to search for the most efficient way to achieve a goal. “The only thing we say is ‘This is the goal, and the way to achieve the goal is to try to minimize effort,’” Mordatch says. “[The motion] then comes out of these two principles.”
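
The "specify the goal, minimize effort" recipe can be seen in miniature below: a 2-D point mass chooses a sequence of accelerations by gradient descent on a cost that adds control effort to the final distance from the goal, assuming PyTorch for the automatic differentiation. This is a toy stand-in, not Mordatch's contact-invariant optimization, but the motion it produces is likewise never specified directly.

```python
# Toy version of "specify the goal, minimize effort" (not Mordatch's method):
# a 2-D point mass picks accelerations by gradient descent on effort plus the
# final distance to the goal.
import torch

T, dt = 30, 0.1
goal = torch.tensor([1.0, 0.5])
controls = torch.zeros(T, 2, requires_grad=True)   # accelerations to optimize
optimizer = torch.optim.Adam([controls], lr=0.05)

for step in range(500):
    pos = torch.zeros(2)
    vel = torch.zeros(2)
    for t in range(T):                 # simple forward dynamics of the point mass
        vel = vel + controls[t] * dt
        pos = pos + vel * dt
    effort = (controls ** 2).sum()
    cost = effort + 100.0 * ((pos - goal) ** 2).sum()
    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

print("final position:", pos.detach(), "effort:", effort.item())
```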


Mordatch’s simulated robots come in all sorts of shapes and sizes, rendered in blocky graphics that look like something from an unfinished video game. He has tested his algorithm on humanoid shapes; headless, four-legged creatures with absurdly fat bodies; and even winged creations. In each case, after a period of learning, some remarkably complex behavior emerges.


Via Fernando Gil
Rescooped by Dr. Stefan Gruenwald from Conformable Contacts

Paired With AI and VR, Google Earth Will Change the Planet

When it debuted in 2005, Google Earth was a wonderfully intriguing novelty. From your personal computer, you could zoom in on the roof of your house or get a bird’s eye view of the park where you made out with your first girlfriend. But it proved to be more than just a party trick. And with the rapid rise of two other digital technologies—neural networks and virtual reality—the possibilities will only expand.

Via YEC Geo
YEC Geo's curator insight, December 12, 2015 10:53 AM

Well, maybe that's overstating the case a wee bit, but still interesting.

Rescooped by Dr. Stefan Gruenwald from Systems Theory

Preparing For The Cyber Battleground Of The Future


For space and cyber Airmen, tomorrow’s fight will be determined largely by the concept of cyberspace dependency. That term, as defined by the author, is the degree to which a military capability relies on supremacy over a portion of the cyberspace domain in order to cause or carry out its effects. Cyber dependency is rapidly growing due to the cyberspace domain’s exponential nature, the trajectory of market forces in the civilian world, and the strategic integration by the military of computer technology in the land, maritime, and air domains.


Unlike employment in the three traditional war-fighting domains, the present employment of capabilities in the space domain cannot be achieved without cyberspace. The recognition of this unique relationship between space and cyberspace has profound implications for recruitment; initial, intermediate, and advanced training; and development in the space and cyber career fields. A transition from the current force-development system towards one that acknowledges the unique relationship between space and cyberspace will have the additional benefit of informing the greater operational community as war fighters in the land, maritime, and air domains continue to become increasingly dependent upon cyberspace and space. This article discusses the implications of cyber dependency and proposes six recommendations to ensure that from recruitment to advanced training, space and cyber Airmen are prepared to excel in their interconnected domains.


Via Ben van Lier
Rescooped by Dr. Stefan Gruenwald from Research Workshop

Unsupervised, Mobile and Wireless Brain–Computer Interfaces on the Horizon

Juliano Pinto, a 29-year-old paraplegic, kicked off the 2014 World Cup in São Paulo with a robotic exoskeleton suit that he wore and controlled with his mind. The event was broadcast internationally and served as a symbol of the exciting possibilities of brain-controlled machines. Over the last few decades research into brain–computer interfaces (BCIs), which allow direct communication between the brain and an external device such as a computer or prosthetic, has skyrocketed. Although these new developments are exciting, there are still major hurdles to overcome before people can easily use these devices as a part of daily life.

Until now such devices have largely been proof-of-concept demonstrations of what BCIs are capable of. Currently, almost all of them require technicians to manage and include external wires that tether individuals to large computers. New research, conducted by members of the BrainGate group, a consortium that includes neuroscientists, engineers and clinicians, has made strides toward overcoming some of these obstacles. “Our team is focused on developing what we hope will be an intuitive, always-available brain–computer interface that can be used 24 hours a day, seven days a week, that works with the same amount of subconscious thought that somebody who is able-bodied might use to pick up a coffee cup or move a mouse,” says Leigh Hochberg, a neuroengineer at Brown University who was involved in the research. Researchers are opting for these devices to also be small, wireless and usable without the help of a caregiver.

Via Wildcat2030, Jocelyn Stoller
Lucile Debethune's curator insight, November 22, 2015 12:48 PM

An interesting approach to the human-machine interface; the BrainGate group contributes very good ideas on this subject. One to watch.

 

Scooped by Dr. Stefan Gruenwald

Project to reverse-engineer the brain to make computers think like humans


Three decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history.

 

Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the defense department’s famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led by David Cox, a biologist and computer scientist at Harvard University, Tai Sing Lee, a computer scientist at Carnegie Mellon University, and Andreas Tolias, a neuroscientist at the Baylor College of Medicine. Each team has proposed its own five-year approach to the problem.

 

“It’s a substantial investment because we think it’s a critical challenge, and [it’ll have a] transformative impact for the intelligence community as well as the world more broadly,” says Jacob Vogelstein at IARPA, who manages the MICrONS program.

 

MICrONS, as a part of President Obama’s BRAIN Initiative, is an attempt to push forward the status quo in brain-inspired computing. A great deal of technology today already relies on a class of algorithms called artificial neural networks, which, as their name would suggest, are inspired by the architecture (or at least what we know about the architecture) of the brain. Thanks to significant increases in computing power and the availability of vast amounts of data on the Internet, Facebook can identify faces, Siri can recognize voices, cars can self-navigate, and computers can beat humans at games like chess. These algorithms, however, are still primitive, relying on a highly simplified process of analyzing information for patterns.

Based on models dating back to the 1980s, neural networks tend to perform poorly in cluttered environments, where the object the computer is trying to identify is hidden among a large number of objects, many of which are overlapping or ambiguous. These algorithms do not generalize well, either. Seeing one or two examples of a dog, for instance, does not teach the computer how to identify all dogs.

 

Humans, on the other hand, seem to overcome these challenges effortlessly. We can make out a friend in a crowd, focus on a familiar voice in a noisy setting, and deduce patterns in sounds or an image based on just one or a handful of examples. We are constantly learning to generalize without the need for any instructions. And so the MICrONS researchers have turned to the brain to find what these models are missing. “That’s the smoking gun,” Cox says.

 

While neural networks retain elements of the architecture found in the brain, the computations they use are not copied directly from any algorithms that neurons use to process information. In other words, the ways in which current algorithms represent, transform, and learn from data are engineering solutions, determined largely by trial and error. They work, but scientists do not really know why—certainly not well enough to define a way to design a neural network. Whether this neural processing is similar to or different from corresponding operations in the brain remains unknown. “So if we go one level deeper and take information from the brain at the computational level and not just the architectural level, we can enhance those algorithms and get them closer to brain-like performance,” Vogelstein says.

Scooped by Dr. Stefan Gruenwald

Real or computer-generated: Can you tell the difference?


Which of these are photos vs. computer-generated images?


As computer-generated characters become increasingly photorealistic, people are finding it harder to distinguish between real and computer-generated images, a Dartmouth College-led study has found. This has introduced complex forensic and legal issues, such as how to distinguish between computer-generated and photographic images of child pornography, says Hany Farid, a professor of computer science and pioneering researcher in digital forensics at Dartmouth, and senior author of a paper in the journal ACM Transactions on Applied Perception. “This can be problematic when a photograph is introduced into a court of law and the jury has to assess its authenticity,” Farid says.


In their study, Farid’s team conducted perceptual experiments in which 60 high-quality computer-generated and photographic images of men’s and women’s faces were shown to 250 observers. Each observer was asked to classify each image as either computer generated or photographic. Observers correctly classified photographic images 92 percent of the time, but correctly classified computer-generated images only 60 percent of the time.


But in a follow-up experiment, when the researchers provided a second set of observers with some training before the experiment, their accuracy on classifying photographic images fell slightly to 85 percent but their accuracy on computer-generated images jumped to 76 percent. With or without training, observers performed much worse than Farid’s team observed five years ago in a study when computer-generated imagery was not as photorealistic.


“We expect that human observers will be able to continue to perform this task for a few years to come, but eventually we will have to refine existing techniques and develop new computational methods that can detect fine-grained image details that may not be identifiable by the human visual system,” says Farid.

Shafique Miraj Aman's curator insight, February 29, 2016 11:40 PM

This is a good article to show how computer graphics technology will slowly overcome the uncanny valley.

Babara Lopez's curator insight, March 4, 2016 8:46 PM
It's hard...
Taylah Mancey's curator insight, March 24, 2016 2:28 AM

With technology always changing and getting better, it is not always an advantage. While forensic science has improved due to technological advancements, this story shows that there can also be negative effects. Technology in this instance has made it difficult for justice to prevail.

Scooped by Dr. Stefan Gruenwald

This drone can automatically follow forest trails to track down lost hikers

A team of researchers at the University of Zurich just announced that they've developed drone software capable of identifying and following trails.


Leave the breadcrumbs at home, folks, because just this week, a group of researchers in Switzerland announced the development of a drone capable of recognizing and following man-made forest trails. A collaborative effort between the University of Zurich and the Dalle Molle Institute for Artificial Intelligence, the research was reportedly undertaken to address the increasing number of hikers who get lost each year.


According to the University of Zurich, an estimated 1,000 emergency calls are made each year regarding injured or lost hikers in Switzerland alone, an issue the group believes “inexpensive” drones could solve quickly.


Though the drone itself may get the bulk of the spotlight, it’s the artificial intelligence software developed by the partnership that deserves much of the credit. Run via a combination of AI algorithms, the software continuously scans its surroundings by way of two smartphone-like cameras built into the drone’s exterior. As the craft autonomously navigates a forested area, it consistently detects trails before piloting itself down open paths. However, the term “AI algorithms” is a deceptively simple way of describing something wildly complex. Before diving into the research, the team knew it would have to develop a supremely talented computing brain.
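
Published descriptions of the trail-following work frame the core perception problem as a three-way image classification: does the trail bend left, continue straight, or bend right? The sketch below shows that framing with a small PyTorch network; the architecture, input size, and class-to-steering mapping are illustrative assumptions, not the IDSIA/University of Zurich model.

```python
# Sketch of the trail-following idea as a 3-way image classifier (turn left,
# go straight, turn right); an illustration, not the published model.
import torch
import torch.nn as nn

class TrailNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * 16, 3)   # left / straight / right

    def forward(self, frame):
        return self.classifier(self.features(frame))

model = TrailNet()
frame = torch.randn(1, 3, 101, 101)               # stand-in camera frame
steer = model(frame).argmax(dim=1).item()         # 0=left, 1=straight, 2=right
print(["yaw left", "fly straight", "yaw right"][steer])
```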

Julie Cumming-Debrot's curator insight, February 15, 2016 7:14 AM

What a good idea........never be lost again.

Scooped by Dr. Stefan Gruenwald

Energy-friendly chip for mobile devices can perform powerful artificial-intelligence tasks

MIT researchers have developed a new chip designed to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.


In recent years, some of the most exciting advances in artificial intelligence have come courtesy of convolutional neural networks, large virtual networks of simple information-processing units, which are loosely modeled on the anatomy of the human brain.


Neural networks are typically implemented using graphics processing units (GPUs), special-purpose graphics chips found in all computing devices with screens. A mobile GPU, of the type found in a cell phone, might have almost 200 cores, or processing units, making it well suited to simulating a network of distributed processors.


At the International Solid State Circuits Conference in San Francisco this week, MIT researchers presented a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.


Neural nets were widely studied in the early days of artificial-intelligence research, but by the 1970s, they’d fallen out of favor. In the past decade, however, they’ve enjoyed a revival, under the name “deep learning.”


“Deep learning is useful for many applications, such as object recognition, speech, face detection,” says Vivienne Sze, the Emanuel E. Landsman Career Development Assistant Professor in MIT's Department of Electrical Engineering and Computer Science whose group developed the new chip. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”


The new chip, which the researchers dubbed “Eyeriss,” could also help usher in the “Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination. With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.

Scooped by Dr. Stefan Gruenwald

WIRED: Machine Learning Works Great — Mathematicians Just Don’t Know Why

In mathematical terms, these supervised-learning systems are given a large set of inputs and the corresponding outputs; the goal is for a computer to learn the function that will reliably transform a new input into the correct output. To do this, the computer breaks down the mystery function into a number of layers of unknown functions called sigmoid functions. These S-shaped functions look like a street-to-curb transition: a smoothened step from one level to another, where the starting level, the height of the step and the width of the transition region are not determined ahead of time.

Inputs enter the first layer of sigmoid functions, which spits out results that can be combined before being fed into a second layer of sigmoid functions, and so on. This web of resulting functions constitutes the “network” in a neural network. A “deep” one has many layers.
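
The description above can be made concrete with a few lines of NumPy (an illustration of the idea, not any particular library's internals): a parameterized sigmoid "step" whose starting level, height, center, and width are free parameters, and a second layer that combines the first layer's outputs before applying another sigmoid. In a real network these parameters are what training adjusts.

```python
# Illustration of the description above: a parameterized sigmoid "step"
# (starting level, step height, transition width) and two layers composed into
# a tiny network. The parameter values here are arbitrary, not learned.
import numpy as np

def sigmoid_step(x, level=0.0, height=1.0, center=0.0, width=1.0):
    return level + height / (1.0 + np.exp(-(x - center) / width))

x = np.linspace(-5, 5, 11)

# First layer: three sigmoid units applied to the input.
layer1 = np.stack([
    sigmoid_step(x, center=-2.0, width=0.5),
    sigmoid_step(x, center=0.0, width=1.0),
    sigmoid_step(x, height=2.0, center=2.0, width=0.3),
])

# Second layer: combine the first layer's outputs, then apply another sigmoid.
weights = np.array([0.5, -1.0, 0.8])
layer2 = sigmoid_step(weights @ layer1, height=3.0, width=2.0)
print(layer2.round(3))
```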


Decades ago, researchers proved that these networks are universal, meaning that they can generate all possible functions. Other researchers later proved a number of theoretical results about the unique correspondence between a network and the function it generates. But these results assume networks that can have extremely large numbers of layers and of function nodes within each layer. In practice, neural networks use anywhere between two and two dozen layers. Because of this limitation, none of the classical results come close to explaining why neural networks and deep learning work as spectacularly well as they do.


It is the guiding principle of many applied mathematicians that if something mathematical works really well, there must be a good underlying mathematical reason for it, and we ought to be able to understand it. In this particular case, it may be that we don’t even have the appropriate mathematical framework to figure it out yet. Or, if we do, it may have been developed within an area of “pure” mathematics from which it hasn’t yet spread to other mathematical disciplines.


Another technique used in machine learning is unsupervised learning, which is used to discover hidden connections in large data sets. Let’s say, for example, that you’re a researcher who wants to learn more about human personality types. You’re awarded an extremely generous grant that allows you to give 200,000 people a 500-question personality test, with answers that vary on a scale from one to 10. Eventually you find yourself with 200,000 data points in 500 virtual “dimensions”—one dimension for each of the original questions on the personality quiz. These points, taken together, form a lower-dimensional “surface” in the 500-dimensional space in the same way that a simple plot of elevation across a mountain range creates a two-dimensional surface in three-dimensional space.


What you would like to do, as a researcher, is identify this lower-dimensional surface, thereby reducing the personality portraits of the 200,000 subjects to their essential properties—a task that is similar to finding that two variables suffice to identify any point in the mountain-range surface. Perhaps the personality-test surface can also be described with a simple function, a connection between a number of variables that is significantly smaller than 500. This function is likely to reflect a hidden structure in the data.


In the last 15 years or so, researchers have created a number of tools to probe the geometry of these hidden structures. For example, you might build a model of the surface by first zooming in at many different points. At each point, you would place a drop of virtual ink on the surface and watch how it spread out. Depending on how the surface is curved at each point, the ink would diffuse in some directions but not in others. If you were to connect all the drops of ink, you would get a pretty good picture of what the surface looks like as a whole. And with this information in hand, you would no longer have just a collection of data points. Now you would start to see the connections on the surface, the interesting loops, folds and kinks. This would give you a map for how to explore it.
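
A scaled-down version of that whole exercise fits in a few lines of scikit-learn. The synthetic "questionnaire" below really depends on only two hidden traits; plain PCA is used for brevity, while the ink-diffusion picture above corresponds more closely to diffusion-map or spectral-embedding methods. The numbers and dimensions are stand-ins.

```python
# Scaled-down sketch of the "hidden surface" idea: synthetic questionnaire data
# that really depends on only two latent traits, recovered with scikit-learn.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_people, n_questions, n_traits = 2000, 500, 2

latent = rng.normal(size=(n_people, n_traits))          # the hidden traits
loadings = rng.normal(size=(n_traits, n_questions))     # how traits shape answers
answers = latent @ loadings + 0.1 * rng.normal(size=(n_people, n_questions))

pca = PCA(n_components=10).fit(answers)
print(pca.explained_variance_ratio_.round(3))
# The first two components carry almost all the variance, revealing that the
# 500-dimensional answer cloud is essentially a 2-D surface.
```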


These methods are already leading to interesting and useful results, but many more techniques will be needed. Applied mathematicians have plenty of work to do. And in the face of such challenges, they trust that many of their “purer” colleagues will keep an open mind, follow what is going on, and help discover connections with other existing mathematical frameworks. Or perhaps even build new ones.

Scooped by Dr. Stefan Gruenwald

AI Benchmark Will Ask Computers to Make Sense of the World

A few years ago, a breakthrough in machine learning suddenly enabled computers to recognize objects shown in photographs with unprecedented—almost spooky—accuracy. The question now is whether machines can make another leap, by learning to make sense of what’s actually going on in such images.

A new image database, called Visual Genome, could push computers toward this goal, and help gauge the progress of computers attempting to better understand the real world. Teaching computers to parse visual scenes is fundamentally important for artificial intelligence. It might not only spawn more useful vision algorithms, but also help train computers how to communicate more effectively, because language is so intimately tied to representation of the physical world.

Visual Genome was developed by Fei-Fei Li, a professor who specializes in computer vision and who directs the Stanford Artificial Intelligence Lab, together with several colleagues. “We are focusing very much on some of the hardest questions in computer vision, which is really bridging perception to cognition,” Li says. “Not just taking pixel data in and trying to makes sense of its color, shading, those sorts of things, but really turn that into a fuller understanding of the 3-D as well as the semantic visual world.”

Li and colleagues previously created ImageNet, a database containing more than a million images tagged according to their contents. Each year, the ImageNet Large Scale Visual Recognition Challenge tests the ability of computers to automatically recognize the contents of images.

In 2012, a team led by Geoffrey Hinton at the University of Toronto built a large and powerful neural network that could categorize images far more accurately than anything created previously. The technique used to enable this advance, known as deep learning, involves feeding thousands or millions of examples into a many-layered neural network, gradually training each layer of virtual neurons to respond to increasingly abstract characteristics, from the texture of a dog’s fur, say, to its overall shape.

The Toronto team’s achievement marked both a boom of interest in deep learning and a sort of a renaissance in artificial intelligence more generally. And deep learning has since been applied in many other areas, making computers better at other important tasks, such as processing audio and text.

The images in Visual Genome are tagged more richly than in ImageNet, including the names and details of various objects shown in an image; the relationships between those objects; and information about any actions that are occurring. This was achieved using a crowdsourcing approach developed by one of Li’s colleagues at Stanford, Michael Bernstein. The plan is to launch an ImageNet-style challenge using the data set in 2017.
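
As an illustration of what "tagged more richly" means in practice, the sketch below shows the kind of record such a dataset stores for one image: objects with bounding boxes and attributes, plus relationships between objects. The field names are simplified and are not Visual Genome's exact schema.

```python
# Illustrative sketch of the richer tagging described above: one image record
# with objects, attributes, and relationships. Field names are simplified and
# are not Visual Genome's exact JSON schema.
from dataclasses import dataclass, field

@dataclass
class Object:
    name: str
    bbox: tuple          # (x, y, width, height) in pixels
    attributes: list = field(default_factory=list)

@dataclass
class Relationship:
    subject: str
    predicate: str
    obj: str

@dataclass
class ImageRecord:
    image_id: int
    objects: list
    relationships: list

record = ImageRecord(
    image_id=42,
    objects=[
        Object("dog", (30, 40, 120, 90), ["brown", "running"]),
        Object("frisbee", (160, 35, 40, 40), ["red"]),
    ],
    relationships=[Relationship("dog", "chasing", "frisbee")],
)
print(record.relationships[0])
```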

Algorithms trained using examples in Visual Genome could do more than just recognize objects, and ought to have some ability to parse more complex visual scenes.
Scooped by Dr. Stefan Gruenwald

Why evolution may be intelligent, based on deep learning


A computer scientist and biologist propose to unify the theory of evolution with learning theories to explain the “amazing, apparently intelligent designs that evolution produces.”


The scientists — University of Southampton School of Electronics and Computer Science professor Richard Watson and Eötvös Loránd University (Budapest) professor of biology Eörs Szathmáry — say they’ve found that it’s possible for evolution to exhibit some of the same intelligent behaviors as learning systems — including neural networks.


Writing in an opinion paper published in the journal Trends in Ecology and Evolution, they use “formal analogies” and transfer specific models and results between the two theories in an attempt to solve several evolutionary puzzles.


The authors cite work by Pavlicev and colleagues showing that selection on relational alleles (gene variants) increases phenotypic (organism trait) correlation if the traits are selected together and decreases correlation if they are selected antagonistically, which is characteristic of Hebbian learning, they note.


“This simple step from evolving traits to evolving correlations between traits is crucial; it moves the object of natural selection from fit phenotypes (which ultimately removes phenotypic variability altogether) to the control of phenotypic variability,” the researchers say.


A simple analogy between learning and evolution is common and intuitive. But recently, work demonstrating a deeper unification has been expanding rapidly. Formal equivalences have been shown between learning and evolution in several different scenarios, including: selection in asexual and sexual populations with Bayesian learning, the evolution of genotype–phenotype maps with correlation learning, evolving gene regulation networks with neural network learning, and the evolution of ecological relationships with distributed memory models.


This unification suggests that evolution can learn in more sophisticated ways than previously realized and offers new theoretical approaches to tackling evolutionary puzzles such as the evolution of evolvability, the evolution of ecological organizations, and the evolution of Darwinian individuality.


In the picture above, the evolution of connections in a Recurrent Gene Regulation Network (GRN) shows associative learning behaviors. When a Hopfield network is trained on a set of patterns with Hebbian learning, it forms an associative memory of the patterns in the training set. When subsequently stimulated with random excitation patterns, the activation dynamics of the trained network will spontaneously recall the patterns from the training set or generate new patterns that are generalizations of the training patterns. (A–D) A GRN is evolved to produce first one phenotype (set of characteristics or traits — Charles Darwin in this example) and then another (Donald Hebb) in an alternating manner. The resulting phenotype is not merely an average of the two phenotypic patterns that were selected in the past. Rather, different embryonic phenotypes (e.g., random initial conditions C and D) developed into different adult phenotypes (with this evolved GRN) and match either A or B. These two phenotypes can be produced from genotypes (DNA sequences) that are a single mutation apart. In a separate experiment, selection iterates over a set of target phenotypes (E–H). In addition to developing phenotypes that match patterns selected in the past (e.g., I), this GRN also generalizes to produce new phenotypes that were not selected for in the past but belong to a structurally similar class, for example, by creating novel combinations of evolved modules (e.g., developmental attractors exist for a phenotype with all four “loops”) (J). This demonstrates a capability for evolution to exhibit phenotypic novelty in exactly the same sense that learning neural networks can generalize from past experience. (credit: Richard A. Watson and Eörs Szathmáry/Trends in Ecology and Evolution)
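
The Hopfield-network behavior the caption leans on can be reproduced in a few lines of NumPy (this is the generic textbook model, not the paper's gene-regulation-network simulations): Hebbian weights store two patterns, and a corrupted starting state settles back onto the stored pattern it most resembles, much as the evolved GRN "develops" one of its remembered phenotypes from a random embryonic state.

```python
# Minimal Hopfield-network demo of the analogy in the caption (textbook model,
# not the paper's GRN simulations): Hebbian weights store two patterns, and a
# noisy starting state settles back onto one of them.
import numpy as np

rng = np.random.default_rng(1)
n = 64
patterns = np.sign(rng.normal(size=(2, n)))          # two stored "phenotypes"

# Hebbian learning: strengthen connections between co-active units.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    for _ in range(steps):                           # synchronous updates
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

noisy = patterns[0].copy()
flip = rng.choice(n, size=12, replace=False)         # corrupt 12 of the 64 units
noisy[flip] *= -1

recovered = recall(noisy)
print("overlap with stored pattern:", int(np.sum(recovered == patterns[0])), "/", n)
```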

Scooped by Dr. Stefan Gruenwald

Robotic Toddler Learns to Stand by “Imagining” How to Do It

Instead of being programmed, a robot uses brain-inspired algorithms to “imagine” doing tasks before trying them in the real world.


Like many toddlers, Darwin sometimes looks a bit unsteady on its feet. But with each clumsy motion, the humanoid robot is demonstrating an important new way for androids to deal with challenging or unfamiliar environments. The robot learns to perform a new task by using a process somewhat similar to the neurological processes that underpin childhood learning.


Darwin lives in the lab of Pieter Abbeel, an associate professor at the University of California, Berkeley. When I saw the robot a few weeks ago, it was suspended from a camera tripod by a piece of rope, looking a bit tragic. A little while earlier, Darwin had been wriggling around on the end of the rope, trying to work out how best to move its limbs in order to stand up without falling over.


Darwin’s motions are controlled by several simulated neural networks—algorithms that mimic the way learning happens in a biological brain, as the connections between neurons strengthen and weaken over time in response to input. The approach relies on deep-learning networks: very large neural networks with many layers of simulated neurons.


For the robot to learn how to stand and twist its body, for example, it first performs a series of simulations in order to train a high-level deep-learning network how to perform the task—something the researchers compare to an “imaginary process.” This provides overall guidance for the robot, while a second deep-learning network is trained to carry out the task while responding to the dynamics of the robot’s joints and the complexity of the real environment. The second network is required because when the first network tries, for example, to move a leg, the friction experienced at the point of contact with the ground may throw it off completely, causing the robot to fall.
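A small sketch may help picture this two-network split. The code below is a hypothetical illustration, not the Berkeley group's implementation: a high-level policy, assumed to have been trained in simulation, proposes joint targets, and a low-level network combines those targets with real sensor feedback to produce motor torques. All layer sizes, dimensions, and names are invented.

```python
import torch
import torch.nn as nn

class HighLevelPolicy(nn.Module):
    """Trained offline in simulation (the "imaginary" rehearsal of the task)."""
    def __init__(self, state_dim=24, target_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, target_dim))

    def forward(self, state):
        return self.net(state)                      # desired joint positions

class LowLevelController(nn.Module):
    """Adapts high-level targets to the real joint dynamics and contact forces."""
    def __init__(self, target_dim=12, sensor_dim=12, torque_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(target_dim + sensor_dim, 64), nn.Tanh(),
            nn.Linear(64, torque_dim))

    def forward(self, targets, sensors):
        return self.net(torch.cat([targets, sensors], dim=-1))   # motor torques

high, low = HighLevelPolicy(), LowLevelController()
state = torch.zeros(1, 24)           # placeholder robot state (joint angles, posture)
sensors = torch.zeros(1, 12)         # placeholder encoder / contact readings
torques = low(high(state), sensors)  # low level corrects what the high level proposes
```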


The researchers had the robot learn to stand, to move its hand to perform reaching motions, and to stay upright when the ground beneath it tilts. “It practices in simulation for about an hour,” says Igor Mordatch, a postdoctoral researcher at UC Berkeley who carried out the study. “Then at runtime it’s learning on the fly how not to slip.”


“We’re trying to be able to deal with more variability,” says Abbeel. “Just even a little variability beyond what it was designed for makes it really hard to make it work.” The new technique could prove useful for any robot working in real-world environments, and it might be especially valuable for achieving more graceful legged locomotion.


The current approach is to design an algorithm that takes into account the dynamics of a process such as walking or running (see “The Robots Walking This Way”). But such models can struggle to deal with variation in the real world, as many of the humanoid robots involved in the DARPA Robotics Challenge demonstrated by falling over when walking on sand, or when unbalancing themselves by reaching out to grasp something (see “Why Robots, and Humans, Struggled with DARPA’s Challenge”). “It was a bit of a reality check,” Abbeel says. “That’s what happens in the real world.”


Dieter Fox, a professor in the computer science and engineering department at the University of Washington who specializes in robot perception and control, says neural network learning has huge potential in robotics. “I’m very excited about this whole research direction,” Fox says. “The problem is always if you want to act in the real world. Models are imperfect. Where machine learning, and especially deep learning comes in, is learning from the real-world interactions of the system.”



Scientists Ponder How to Create Artificial Intelligence That Won’t Destroy Us Later

The creators of artificially intelligent machines are often depicted in popular fiction as myopic Dr. Frankensteins who are oblivious to the apocalyptic technologies they unleash upon the world. In real life, they tend to wring their hands over the big questions: good versus evil and the impact the coming wave of robots and machine brains will have on human workers.

Scientists, recognizing their work is breaking out of the research lab and into the real world, grappled during a daylong summit on Dec. 10 in Montreal with such ethical issues as how to prevent computers that are smarter than humans from putting people out of work, adding complications to legal proceedings, or, even worse, seeking to harm society. Today’s AI can learn how to play video games, help automate e-mail responses, and drive cars under certain conditions. That has already provoked concerns about the effect it may have on workers.

"I think the biggest challenge is the challenge to employment," said Andrew Ng, the chief scientist for Chinese search engine Baidu Inc., which announced last week that one of its cars had driven itself on a 30 kilometer (19 mile) route around Beijing with no human required. The speed with which AI advances may change the workplace means "huge numbers of people in their 20s and 40s and 50s" would need to be retrained in a way that’s never happened before, he said.

"There’s no doubt that there are classes of jobs that can be automated today that could not be automated before," said Erik Brynjolfsson, an economist at the Massachusetts Institute of Technology, citing workers such as junior lawyers tasked with e-discovery or people manning the checkout aisles in self- checkout supermarkets.

"You hope that there are some new jobs needed in this economy," he said. "Entrepreneurs and managers haven’t been as creative in inventing the new jobs as they have been in automating some of the existing jobs."

Yann LeCun, Facebook’s director of AI research, isn’t as worried, saying that society has adapted to change in the past. "It’s another stage in the progress of technology," LeCun said. "It’s not going to be easy, but we’ll have to deal with it."

There are other potential quandaries, like how the legal landscape will change as AI starts making more decisions independent of any human operator. "It would be very difficult in some cases to bring an algorithm to the fore in the context of a legal proceeding," said Ian Kerr, the Canada Research Chair in Ethics, Law & Technology at the University of Ottawa Faculty of Law. "I think it would be a tremendous challenge."

Artificial Intelligence as a Supplement to the Human Immune System


The human immune system is a protective force honed by millions of years of evolution to provide a broad yet thorough opposition to entities that harm the human body. It is capable of responding to microscopic antagonists such as bacteria, fungi, viruses, and parasites quickly enough to prevent the body from being overwhelmed, while tailoring its approach to maximize the effectiveness of the response. It is also capable of recognizing malfunctions within its own forces by locating faulty cells and taking the appropriate actions to fix the problem or simply eliminate the cells through apoptosis. This impressive range and power is due to the two-part structure of the immune system, which separates the strategy into two segments called innate and adaptive immunity. Innate immunity is responsible for the very rapid response seen after the onset of infections and relies on recognizing molecular patterns found in microorganisms. However, it cannot mount attacks on lone compounds such as proteins, carbohydrates, and fats, which may be byproducts of an infection.


It also struggles with the evasive nature and highly evolved defensive mechanisms that microscopic agents have developed in the evolutionary battle between them and the human body. This is where the highly specific and flexible adaptive immune system comes in. While it takes longer to kick into action than the innate immune system, the adaptive branch has a very wide “library” of antigen receptors with great specificity and has ways of “learning” from the antigens it encounters. It tailors its specificity to its targets and, upon later encounters with the same antigen, mounts a stronger and faster response (Abbas & Lichtman, 2011). Most of the time this two-fold system does a remarkable job of countering any antagonists it encounters.


There are cases, however, in which this system falls short or is simply unable to mount an effective response. Some microscopic foes have developed ways of incapacitating the system by hijacking the “police force” itself, such as the notorious HIV, which attacks the T cells of the adaptive immune system. Occasionally the human body is its own enemy, as is seen in cancers. While the immune system is capable of identifying and attacking rogue human cells, their antigens can be difficult to identify because of their human origin and the cancers’ own evasive mechanisms. By the time the body is ready to mount a defense, the number of cancerous cells may be too great to overcome. Finally, aging degrades the effectiveness of the immune system, decreasing the quantity and quality of the cells present to protect the human body. The failure of this protective force leads to a greater susceptibility to opportunistic infections and can lead to death from infections that would have been successfully countered at a younger age.


These failings of the biological protective force lend themselves to an inorganic solution: a supplementary “immune system” made up of microscopic machines that lack the limitations inherent in organic life. The question then is how to train this inorganic squadron to be an effective and up-to-date force against possible antagonists. A potential solution is to have an AI system act as the “brains” behind the approach these nanomachines take. AI is a good match because, like an immune system, it requires no higher consciousness and runs on a set of rules: genes in the case of the biological system, algorithms in the case of AI. Such an AI could learn as it goes, becoming progressively better in its responses with the experience it gains, mirroring the adaptive rather than the innate immune system. With successful training, an AI with nanobots acting as its proxy could enhance the human body’s response to cancer, HIV, and age-related immune decline.
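As a loose, purely hypothetical illustration of "learning as it goes" (not drawn from the article), the sketch below trains an online perceptron that updates its decision rule after every new encounter, roughly the way an adaptive response sharpens with each exposure to an antigen. The features and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

def encounter():
    """One synthetic 'encounter': a 2-D feature vector (e.g., surface markers),
    labeled 1 for harmful and 0 for harmless."""
    harmful = int(rng.integers(0, 2))
    center = np.array([2.0, 2.0]) if harmful else np.array([-2.0, -2.0])
    return center + rng.normal(size=2), harmful

w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(500):
    x, y = encounter()
    pred = 1 if x @ w + b > 0 else 0
    # Online update: adjust only when the response was wrong, so the rule
    # improves with every mistake it encounters.
    if pred != y:
        w += lr * (y - pred) * x
        b += lr * (y - pred)

# After enough encounters, the learned rule reliably flags the "harmful" pattern.
test_x, test_y = encounter()
print(test_y, int(test_x @ w + b > 0))
```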


AI and smart machines are about to run the world: Here's how to prepare


From self-driving cars to drones to smartphones, artificial intelligence is here. Here's how to prepare for a future with smart machines at the helm.


"The robots are here," Dr. Roman V. Yampolskiy told a packed room at IdeaFestival 2015 in Louisville, KY. "You may not see them every day, but we know how to make them. We fund research to design them. We have robots who can assist us and robot soldiers who can kill us."


Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville and author of Artificial Superintelligence: A Futuristic Approach, studies the implications of AI, the interface between machines and people, and the influence they have on our workplace. AI has formally been around since the 1950s, and a lot of it—spell-check, for example—is no longer called artificial intelligence. "In your head," Yampolskiy said, "those technologies aren't AI. But they really are."


AI, he said, is everywhere. "It's in your phones, your cars. It's Google. It's every bit of technology we're using." But AI has seen a recent explosion—a new Barbie doll, for example, will use AI to have conversations with children—and some worry advanced technology will begin to replace humans in the workplace. In Chengdu, China, Foxconn, a company that manufactures electronics for Apple and others, has just built a factory run entirely by robots.


What is next, Yampolskiy asked rhetorically. His answer: Superintelligence, intelligence beyond human. There are projects funded at unprecedented levels, conferences devoted to this, and private companies employing the brightest people in the world to solve these problems, Yampolskiy said. "Given the funding and intelligence, it would be surprising if they don't succeed."


Here is Yampolskiy's list of machine attributes to be aware of:

  • Superfast—These machines are not only super smart, but they're superfast. They can predict "ultrafast extreme events," such as stock market crashes, at a pace no human can keep up with.
  • Supercomplex—The intelligence that runs an airplane, for example, is made up of so many interconnected elements that the people operating it cannot fully comprehend them.
  • Supercontrolling—Once we cede power to the machines, Yampolskiy said, "we've lost it. We can't take it back."

What kind of devices will we see that have these abilities?

  • Supersoldiers—The military, Yampolskiy said, will be the first to use the advanced technology, in the form of drones, robot soldiers, and more.
  • Superviruses—We are only at the beginning of understanding how much damage can be done through computer viruses created with artificial intelligence.
  • Superworkers—We have been losing physical labor jobs for years due to automation. Now we're losing intellectual jobs. "Employers love robots," Yampolskiy said. "You don't have to deal with sick days, vacation, sexual harassment, 401k. There's a good chance a lot of us will be out of jobs."

How to make machines learn like humans


A team of scientists has developed an algorithm that captures human learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans.


The work by researchers at MIT, New York University, and the University of Toronto, which appears in the latest issue of the journal Science, marks a significant advance in the field — one that dramatically shortens the time it takes computers to “learn” new concepts and broadens their application to more creative tasks, according to the researchers.


“Our results show that by reverse-engineering how people think about a problem, we can develop better algorithms,” explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper’s lead author. “Moreover, this work points to promising methods to narrow the gap for other machine-learning tasks.”


The paper’s other authors are Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.


When humans are exposed to a new concept — such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet — they often need only a few examples to understand its make-up and recognize new instances. But machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.


“It has been very difficult to build machines that require as little data as humans when learning a new concept,” observes Salakhutdinov. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”


Salakhutdinov helped to launch recent interest in learning with “deep neural networks,” in a paper published in Science almost 10 years ago with his doctoral advisor Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts — the digits 0-9 — from 6,000 examples each, or a total of 60,000 training examples.


In the work appearing in Science this week, the researchers sought to shorten the learning process and make it more akin to the way humans acquire and apply new knowledge: learning from a small number of examples and performing a range of tasks, such as generating new examples of a concept or generating whole new concepts.
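The paper's actual approach, probabilistic program induction, is far richer than anything that fits in a few lines. Purely as an illustration of the one-shot setting the article describes (classifying a new instance after seeing a single example of each concept), here is a toy nearest-neighbour sketch with made-up 8×8 binary "characters" and a placeholder feature function; none of it comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(image):
    """Stand-in feature extractor; a real system would use learned or structured features."""
    return image.flatten().astype(float)

# One training example per concept (e.g., a single drawing of each unfamiliar letter).
concepts = {label: rng.integers(0, 2, size=(8, 8)) for label in "ABC"}
prototypes = {label: features(img) for label, img in concepts.items()}

def classify_one_shot(query_image):
    """Assign the query to the concept whose single stored example is closest."""
    q = features(query_image)
    return min(prototypes, key=lambda label: np.linalg.norm(q - prototypes[label]))

# A slightly corrupted copy of concept "B" should still be recognized from one example.
noisy_b = concepts["B"].copy()
noisy_b[0, 0] ^= 1                    # flip one pixel
print(classify_one_shot(noisy_b))     # expected: "B"
```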
