Amazing Science
Scooped by Dr. Stefan Gruenwald!

Teaching robots to feel pain to protect themselves


A pair of researchers at Leibniz University of Hannover has demonstrated how robots might be programmed to experience something akin to pain in animals. As part of their demonstration at last week's IEEE International Conference on Robotics and Automation held in Stockholm, Johannes Kuehn and Sami Haddadin showed how pain might be used in robots by interacting with a BioTac fingertip sensor mounted on the end of a Kuka robotic arm that had been programmed to react differently to differing amounts of "pain."


The researchers explained that the reason for giving robots pain sensors is the same as for the corresponding biological adaptations: to ensure a reaction that lessens the damage incurred by the body and, perhaps even more importantly, to help remember to avoid similar situations in the future. In the case of the robots, the researchers built an electronic network behind the fingertip sensor meant to mimic the nerve pathways beneath the skin of animals, allowing the robot to "feel" signals that have been programmed to represent various types or degrees of pain.


In the demonstration, the researchers inflicted varying degrees of pain on the robot, explaining the reasoning behind each programmed reaction. When experiencing light pain or discomfort, for example, the robot recoiled slowly, removing itself from the problem. Moderate pain called for a rapid response: the robot moved quickly away from the source, though it retained the option to move back, albeit tentatively, if need be. Severe pain, on the other hand, often indicates damage, so the robot had been programmed to become passive to prevent further harm.
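The graded reactions described above amount to a simple mapping from a scalar "pain" signal to a reflex behavior. A minimal sketch in Python (the thresholds and return values are invented for illustration, not taken from the paper):

```python
def pain_reflex(pain_level):
    """Map a scalar 'pain' signal in [0, 1] to a reflex behavior,
    mirroring the graded reactions described above. Thresholds
    are illustrative, not from the paper."""
    if pain_level < 0.3:
        # light pain: retract slowly, may approach again
        return {"action": "retract", "speed": "slow", "may_return": True}
    elif pain_level < 0.7:
        # moderate pain: retract quickly, return only tentatively
        return {"action": "retract", "speed": "fast", "may_return": True}
    else:
        # severe pain signals damage: go passive to limit further harm
        return {"action": "go_passive", "speed": None, "may_return": False}
```

A real controller would of course act on joint velocities rather than labels, but the branching logic is the same.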


Such robots are likely to incite a host of questions if they become more common: if a robot acts the same way a human does when touching a hot plate, are we to believe it is truly experiencing pain? And if so, will lawmakers need to enact laws preventing cruelty to robots, as is the case with animals? Only time will tell, but one thing is evident from such demonstrations: as robotics technology advances, researchers are increasingly forced to make hard decisions, some of which may fall entirely outside the domain of engineering.


'Virtual partner' elicits emotional responses from a human partner in real-time


Can machines think? That's what renowned mathematician Alan Turing sought to understand back in the 1950s when he created an imitation game to find out if a human interrogator could tell a human from a machine based solely on conversation deprived of physical cues. The Turing test was introduced to determine a machine's ability to show intelligent behavior that is equivalent to or even indistinguishable from that of a human. Turing mainly cared about whether machines could match up to humans' intellectual capacities.


But there is more to being human than intellectual prowess, so researchers from the Center for Complex Systems and Brain Sciences (CCSBS) in the Charles E. Schmidt College of Science at Florida Atlantic University set out to answer the question: "How does it 'feel' to interact behaviorally with a machine?"


They created the equivalent of an "emotional" Turing test, and developed a virtual partner that is able to elicit emotional responses from its human partner while the pair engages in behavioral coordination in real-time.


Results of the study, titled "Enhanced Emotional Responses during Social Coordination with a Virtual Partner," were recently published in the International Journal of Psychophysiology. The researchers designed the virtual partner so that its behavior is governed by mathematical models of human-to-human interactions, in a way that enables humans to interact with a mathematical description of their social selves.


"Our study shows that humans exhibited greater emotional arousal when they thought the virtual partner was a human and not a machine, even though in all cases, it was a machine that they were interacting with," said Mengsen Zhang, lead author and a Ph.D. student in FAU's CCSBS. "Maybe we can think of intelligence in terms of coordinated motion within and between brains."


The virtual partner is a key part of a paradigm developed at FAU called the Human Dynamic Clamp -- a state-of-the-art human-machine interface that allows humans to interact with a computational model that behaves very much like humans themselves. In simple experiments, the model -- on receiving input from human movement -- drives an image of a moving hand displayed on a video screen. To complete the reciprocal coupling, the subject sees and coordinates with the moving image as if it were a real person observed through a video circuit. This social "surrogate" can be precisely tuned and controlled -- both by the experimenter and by the input from the human subject.
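The models behind this kind of coordination paradigm build on coupled-oscillator accounts of human movement (the Haken-Kelso-Bunz tradition). A minimal sketch, assuming two phase oscillators with attractive coupling standing in for the human and the virtual partner; this illustrates the modeling idea only, not FAU's actual implementation:

```python
import math

def simulate(steps=2000, dt=0.01, k=1.0, w_human=2.0, w_virtual=2.0):
    """Two coupled phase oscillators: one stands in for the human's
    finger movement, the other for the virtual partner. With positive
    coupling k and matched frequencies, the relative phase shrinks
    toward zero (in-phase coordination). Parameters are illustrative."""
    th_h, th_v = 0.0, 2.0  # start well out of phase
    for _ in range(steps):
        th_h += dt * (w_human + k * math.sin(th_v - th_h))
        th_v += dt * (w_virtual + k * math.sin(th_h - th_v))
    # wrap the relative phase into (-pi, pi]
    return math.atan2(math.sin(th_v - th_h), math.cos(th_v - th_h))
```

In the real system the coupling term is bidirectional in exactly this way: the human's recorded motion drives the model, and the model's motion is displayed back to the human.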


"The behaviors that gave rise to that distinctive emotional arousal were simple finger movements, not events like facial expressions for example, known to convey emotion," said Emmanuelle Tognoli, Ph.D., co-author and associate research professor in FAU's CCSBS. "So the findings are rather startling at first."


Face recognition app taking Russia by storm may bring end to public anonymity


If the founders of a new face recognition app get their way, anonymity in public could soon be a thing of the past. FindFace, launched two months ago and currently taking Russia by storm, allows users to photograph people in a crowd and work out their identities, with 70% reliability.


It works by comparing photographs to profile pictures on Vkontakte, a social network popular in Russia and the former Soviet Union, with more than 200 million accounts. In future, the designers imagine a world where people walking past you on the street could find your social network profile by sneaking a photograph of you, and shops, advertisers and the police could pick your face out of crowds and track you down via social networks.


In the short time since the launch, FindFace has amassed 500,000 users and processed nearly 3 million searches, according to its founders, 26-year-old Artem Kukharenko and 29-year-old Alexander Kabakov.


Kukharenko is a lanky, quietly spoken computer nerd who has come up with the algorithm that makes FindFace such an impressive piece of technology, while Kabakov is the garrulous money and marketing man, who does all of the talking when the pair meet the Guardian.


Unlike other face recognition technology, their algorithm allows quick searches in big data sets. “Three million searches in a database of nearly 1 billion photographs: that’s hundreds of trillions of comparisons, and all on four normal servers. With this algorithm, you can search through a billion photographs in less than a second from a normal computer,” said Kabakov during an interview at the company’s modest central Moscow office. The app will give you the most likely match to the face that is uploaded, as well as 10 people it thinks look similar.
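FindFace's algorithm itself is proprietary, but face search at this scale is typically done by mapping each face to a fixed-length embedding vector and ranking the database by cosine similarity. A toy sketch of that retrieval step, in which random vectors stand in for real face embeddings:

```python
import math
import random

def normalize(v):
    """Unit-normalize once so cosine similarity reduces to a dot product."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def search(index, query, n_similar=10):
    """Rank every stored face by cosine similarity to the query and
    return the best match plus the next n_similar look-alikes."""
    q = normalize(query)
    sims = [(sum(a * b for a, b in zip(row, q)), i) for i, row in enumerate(index)]
    sims.sort(reverse=True)
    return sims[0][1], [i for _, i in sims[1:1 + n_similar]]

# hypothetical database of 128-dimensional face embeddings
random.seed(0)
db = [[random.gauss(0, 1) for _ in range(128)] for _ in range(500)]
index = [normalize(row) for row in db]
best, similar = search(index, db[42])  # query with a face already in the database
```

Production systems replace the exhaustive scan with approximate nearest-neighbor indexes, which is what makes sub-second search over a billion photos plausible.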


Kabakov says the app could revolutionize dating: “If you see someone you like, you can photograph them, find their identity, and then send them a friend request.” The interaction doesn’t always have to involve the rather creepy opening gambit of clandestine street photography, he added: “It also looks for similar people. So you could just upload a photo of a movie star you like, or your ex, and then find 10 girls who look similar to her and send them messages.”


Some have sounded the alarm about the potentially disturbing implications. Already the app has been used by a St Petersburg photographer to snap and identify people on the city’s metro, as well as by online vigilantes to uncover the social media profiles of female porn actors and harass them.


Teaching assistant wasn't human and nobody guessed it

Jill Watson is a virtual teaching assistant. She was one of nine teaching assistants in an artificial intelligence online course. And none of the students guessed she wasn't a human.


College of Computing Professor Ashok Goel teaches Knowledge Based Artificial Intelligence (KBAI) every semester. It's a core requirement of Georgia Tech's online master's program in computer science. And every time he offers it, Goel estimates, his 300 or so students post roughly 10,000 messages in the online forums -- far too many inquiries for him and his eight teaching assistants (TAs) to handle. That's why Goel added a ninth TA this semester. Her name is Jill Watson, and she's unlike any other TA in the world. In fact, she's not even a "she." Jill is a computer -- a virtual TA -- implemented on IBM's Watson platform.


"The world is full of online classes, and they're plagued with low retention rates," Goel said. "One of the main reasons many students drop out is because they don't receive enough teaching support. We created Jill as a way to provide faster answers and feedback."


Goel and his team of Georgia Tech graduate students started to build her last year. They contacted Piazza, the course's online discussion forum, to track down all the questions that had ever been asked in KBAI since the class was launched in fall 2014 (about 40,000 postings in all). Then they started to feed Jill the questions and answers.


"One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn't really go up," Goel said. "Students tend to ask the same questions over and over again."


That's an ideal situation for the Watson platform, which specializes in answering questions with distinct, clear solutions. The team wrote code that allows Jill to field routine questions that are asked every semester. For example, students consistently ask where they can find particular assignments and readings.


Jill wasn't very good for the first few weeks after she started in January, often giving odd and irrelevant answers. Her responses were posted in a forum that wasn't visible to students.

"Initially her answers weren't good enough because she would get stuck on keywords," said Lalith Polepeddi, one of the graduate students who co-developed the virtual TA. "For example, a student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons -- same keywords -- but different context. So we learned from mistakes like this one, and gradually made Jill smarter."


After some tinkering by the research team, Jill found her groove and soon was answering questions with 97 percent certainty. When she did, the human TAs would upload her responses to the students. By the end of March, Jill didn't need any assistance: She wrote the class directly if she was 97 percent positive her answer was correct.
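The details of Jill's Watson-based pipeline aren't public, but the workflow described above (match an incoming question against past Q&A and auto-answer only above a confidence threshold, otherwise defer to a human TA) can be sketched in a few lines. The questions, answers, and similarity measure here are all invented for illustration:

```python
from difflib import SequenceMatcher

# hypothetical memory of past forum questions and their vetted answers
memory = {
    "where can i find assignment 3": "Assignment 3 is posted under Modules > Week 5.",
    "when is project 2 due": "Project 2 is due Sunday at 11:59 pm.",
}

def answer(question, threshold=0.97):
    """Auto-post only when the match confidence clears the threshold;
    otherwise hand the question off to a human TA, as Jill did early on."""
    q = question.lower().strip("?! .")
    best_score, best_answer = 0.0, None
    for past_q, past_a in memory.items():
        score = SequenceMatcher(None, q, past_q).ratio()
        if score > best_score:
            best_score, best_answer = score, past_a
    if best_score >= threshold:
        return ("auto", best_answer)
    return ("human_ta", None)
```

Routine, oft-repeated questions clear the bar; anything novel falls through to a person, which is the behavior the article describes.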


The students, who were studying artificial intelligence, were unknowingly interacting with it. Goel didn't inform them about Jill's true identity until April 26. The student response was uniformly positive. One admitted her mind was blown. Another asked if Jill could "come out and play." Since then some students have organized a KBAI alumni forum to learn about new developments with Jill after the class ends, and another group of students has launched an open source project to replicate her.


Machine learning rivals human skills in cancer detection


Two announcements yesterday (April 21) suggest that deep learning algorithms rival human skills in detecting cancer from ultrasound images and other sources.


Samsung Medison, a global medical equipment company and an affiliate of Samsung Electronics, has just updated its RS80A ultrasound imaging system with a deep learning algorithm for breast-lesion analysis. The “S-Detect for Breast” feature uses big data collected from breast-exam cases and recommends whether a selected lesion is benign or malignant. It is used in lesion segmentation, characteristic analysis, and assessment, providing “more accurate results.”


“We saw a high level of conformity from analyzing and detecting lesions in various cases by using the S-Detect,” said professor Han Boo Kyung, a radiologist at Samsung Medical Center. “Users can reduce taking unnecessary biopsies and doctors-in-training will likely have more reliable support in accurately detecting malignant and suspicious lesions.”


Meanwhile, researchers from the Regenstrief Institute and Indiana University School of Informatics and Computing at Indiana University-Purdue University Indianapolis say they’ve found that open-source machine learning tools are as good as — or better than — humans in extracting crucial meaning from free-text (unstructured) pathology reports and detecting cancer cases. The computer tools are also faster and less resource-intensive. U.S. states require cancer cases to be reported to statewide cancer registries for disease tracking, identification of at-risk populations, and recognition of unusual trends or clusters. This free-text information can be difficult for health officials to interpret, which can further delay health department action, when action is needed.


“We think that it's no longer necessary for humans to spend time reviewing text reports to determine if cancer is present or not,” said study senior author Shaun Grannis, M.D., M.S., interim director of the Regenstrief Center of Biomedical Informatics.


“We have come to the point in time that technology can handle this. A human’s time is better spent helping other humans by providing them with better clinical care. Everything — physician practices, health care systems, health information exchanges, insurers, as well as public health departments — is awash in oceans of data. How can we hope to make sense of this deluge of data? Humans can’t do it — but computers can.”

This is especially relevant for underserved nations, where a majority of clinical data is collected in the form of unstructured free text, he said.


The researchers sampled 7,000 free-text pathology reports from over 30 hospitals that participate in the Indiana Health Information Exchange and used open source tools, classification algorithms, and varying feature selection approaches to predict if a report was positive or negative for cancer. The results indicated that a fully automated review yielded results similar or better than those of trained human reviewers, saving both time and money.
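The study used existing open-source tools; the core idea, a bag-of-words classifier labeling free-text reports positive or negative for cancer, can be sketched with a tiny naive Bayes model. The training "reports" below are invented, and real pipelines involve far more preprocessing and feature selection:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs; returns word counts per label
    and the label frequencies."""
    counts = {"pos": Counter(), "neg": Counter()}
    labels = Counter()
    for text, label in docs:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels

def classify(text, counts, labels):
    """Multinomial naive Bayes with add-one smoothing."""
    vocab = set(counts["pos"]) | set(counts["neg"])
    total = sum(labels.values())
    best_label, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(labels[label] / total)  # log prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# toy training reports (invented, not from the study)
reports = [
    ("malignant carcinoma identified in biopsy", "pos"),
    ("invasive adenocarcinoma present", "pos"),
    ("benign tissue no evidence of malignancy", "neg"),
    ("normal cells negative for carcinoma", "neg"),
]
counts, labels = train(reports)
```

Even this crude word-frequency approach captures why automation works here: the vocabulary that distinguishes positive from negative reports is small and stereotyped.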


The Amphibian SCUBA diving simulator: Experiencing underwater worlds, virtually


The Amphibian SCUBA diving simulator, a research project from the MIT Media Lab, lets users experience the underwater world through a high-presence virtual reality system. The system includes a motion platform, an Oculus Rift head-mounted display, a snorkel with sensors, leg-motion sensors, and gloves that enable motion detection, temperature simulation, and physical feedback of objects. Captured sensor data is fed into a processing unit that converts the user's physical motion into virtual movement in the Oculus app.


AI Writes Novel and Passes Literary Prize Screening


Out of 1,450 entries, 11 were written by non-humans.


"The day a computer wrote a novel. The computer, placing priority on the pursuit of its own joy, stopped working for humans.” A pretty profound line—considering this sentence is part of a book that was actually co-authored by an artificial intelligence (AI).


While it may not have won the top prize, this short-form novel, which was a collaboration between humans and an AI program, managed to make it through the first round of screening for a national literary prize in Japan called the Nikkei Hoshi Shinichi Literary Award.


Titled ‘The Day A Computer Writes A Novel,’ the short story was a team effort between human authors, led by Hitoshi Matsubara from the Future University Hakodate, and, well, a computer. Matsubara, who selected words and sentences for the book, set the parameters for the AI to construct the novel before letting the program take over and essentially “write” the novel by itself.

The team submitted two entries for the literary prize—one of which made it through the first round.


This is the first time that the Hoshi Shinichi Literary Award has received submissions written by AI programs. Out of 1,450 entries, 11 were apparently written by non-humans.

“I was surprised at the work because it was a well structured novel. But there are still some problems [to overcome] to win the prize, such as character descriptions,” stated Satoshi Hase, a Japanese science fiction novelist who was part of the press conference surrounding the award.


By and large, AIs have been given computational tasks that can be solved by following a strict sequence of steps. Creativity, however, an element inherent to all literary work, is hard to quantify, which makes this AI's achievement particularly impressive.


While its output may have been based on a strict sequence of construction rules, who’s to say that future efforts won’t improve on the current capabilities of autonomous AI? And soon, our greatest works—works that perfectly capture what it truly means to be human—may not be written by humans.

Rescooped by Dr. Stefan Gruenwald from Virtual Neurorehabilitation!

Robotic exoskeleton maps sense-deficits in young stroke patients


Researchers at the University of Calgary are using robotics technology to try to come up with more effective treatments for children who have had strokes.


The robotic device measures a patient's position sense — what doctors call proprioception — the unconscious perception of where the body is while in motion or at rest.


"Someone whose position sense has been affected might have difficulty knowing where their hand or arm is in space, adding to their difficulty in using their affected, weaker limb," said one of the study's senior researchers, Dr. Kirton of the Cumming School of Medicine's departments of pediatrics and clinical neurosciences.


"We can try to make a hand stronger but, if your brain doesn't know where the hand is, this may not translate into meaningful function in daily life."


PhD candidate Andrea Kuczynski is doing ongoing research using the KINARM (Kinesiological Instrument for Normal and Altered Reaching Movements) robotic device.


During the test, the children sat in the KINARM machine with their arms supported by its exoskeleton, which measured their movements as they played video games and performed other tasks. All of the children also had MRIs, which gave researchers a detailed picture of their brain structures.

Via Daniel Perez-Marcos

Startup brings driverless taxi service to Singapore


An exciting “driverless race” is underway among tech giants in the United States: In recent months, Google, Uber, and Tesla have made headlines for developing self-driving taxis for big cities. But a comparatively small MIT spinout, nuTonomy, has entered the race somewhat under the radar. The startup is developing a fleet of driverless taxis to serve as a more convenient form of public transit while helping reduce greenhouse gas emissions in the densely populated city-state of Singapore.


“This could make car-sharing something that is almost as convenient as having your own private car, but with the accessibility and cost of public transit,” says nuTonomy co-founder and chief technology officer Emilio Frazzoli, an MIT professor of aeronautical and astronautical engineering.


The startup’s driverless taxis follow optimal paths for picking up and dropping off passengers to reduce traffic congestion. Without the need to pay drivers, they should be cheaper than Uber and taxis. These are also electric cars, manufactured through partnerships with automakers, which produce lower levels of greenhouse gas emissions than conventional vehicles do.


Last week, nuTonomy “passed its first driving test” in Singapore, Frazzoli says — meaning its driverless taxis navigated a custom obstacle course, without incident. Now, nuTonomy is in the process of getting approval for on-road testing in a business district, called One North, designated for autonomous-vehicle testing. In a few years, Frazzoli says, nuTonomy aims to deploy thousands of driverless taxis in Singapore. The company will act as the service provider to maintain the vehicles and determine when and how they can be operated safely.


But a big question remains: Will driverless taxis put public-transit operators out of work? In Singapore, Frazzoli says, that’s unlikely. “In Singapore, they want to have more buses, but they cannot find people to drive buses at night,” he says. “Robotics will not put these people out of jobs — it will provide more capacity and support that’s needed.”


Importantly, Frazzoli adds, driverless-taxi services used for public transit, such as nuTonomy’s, could promote wider use of electric cars, as consumers won’t need to purchase the expensive cars or worry about finding charging stations. This could have a major impact on the environment: A 2015 study published in Nature Climate Change found that by 2030 autonomous taxis — specifically, more efficient hybrid and electric cars — used worldwide could produce up to 94 percent less greenhouse gas emission per mile than conventional taxis.


The momentous advance in artificial intelligence demands a new set of ethics


Let us all raise a glass to AlphaGo and mark another big moment in the advance of artificial intelligence (AI), and then perhaps start to worry. AlphaGo, Google DeepMind's Go-playing AI, just bested the best Go-playing human currently alive, the renowned Lee Sedol. This was not supposed to happen. At least, not for a while: an artificial intelligence capable of beating the best humans at the game was predicted to be 10 years away.


But as we drink to its early arrival, we should also begin trying to understand what the surprise means for the future – with regard, chiefly, to the ethics and governance implications that stretch far beyond a game.


As AlphaGo and AIs like it become more sophisticated – commonly outperforming us at tasks once thought to be uniquely human – will we feel pressured to relinquish control to the machines?


The number of possible moves in a game of Go is so massive that, in order to win against a player of Lee’s calibre, AlphaGo was designed to adopt an intuitive, human-like style of gameplay. Relying exclusively on more traditional brute-force programming methods was not an option. Designers at DeepMind made AlphaGo more human-like than traditional AI by using a relatively recent development – deep learning.


Deep learning uses large data sets, “machine learning” algorithms and deep neural networks – artificial networks of “nodes” that are meant to mimic neurons – to teach the AI how to perform a particular set of tasks. Rather than programming complex Go rules and strategies into AlphaGo, DeepMind designers taught AlphaGo to play the game by feeding it data based on typical Go moves. Then, AlphaGo played against itself, tirelessly learning from its own mistakes and improving its gameplay over time. The results speak for themselves.
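The learn-from-data-then-improve-by-self-play recipe can be illustrated, at a vastly reduced scale, with a tabular learner on a toy take-1-or-2 counting game: the program plays both sides and nudges the value of each move toward the eventual outcome. This is a sketch of the self-play idea only; AlphaGo's deep networks and tree search are many orders of magnitude more involved:

```python
import random

def legal_moves(n):
    # each turn a player removes 1 or 2 stones; taking the last stone wins
    return [m for m in (1, 2) if m <= n]

def train(episodes=5000, alpha=0.5, eps=0.2, start=10):
    """Self-play on a toy take-1-or-2 game. The learner plays both
    sides; after each game, every (state, move) pair is nudged toward
    the final outcome for the player who made that move."""
    random.seed(1)
    q = {}  # (stones_left, move) -> estimated value for the mover
    for _ in range(episodes):
        n, history = start, []
        while n > 0:
            moves = legal_moves(n)
            if random.random() < eps:          # explore occasionally
                m = random.choice(moves)
            else:                              # otherwise play greedily
                m = max(moves, key=lambda mv: q.get((n, mv), 0.0))
            history.append((n, m))
            n -= m
        reward = 1.0                           # the last mover won
        for state in reversed(history):        # alternate win/loss backward
            old = q.get(state, 0.0)
            q[state] = old + alpha * (reward - old)
            reward = -reward
    return q

q = train()
# with 2 stones left, taking both wins on the spot; taking one loses
```

The learner is never told the rules of good play; it simply accumulates outcomes from games against itself, which is the essence of the "tirelessly learning from its own mistakes" phase described above.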


Possessing a more intuitive approach to problem-solving allows artificial intelligence to succeed in highly complex environments. For example, actions with high levels of unpredictability – talking, driving, serving as a soldier – which were previously unmanageable for AI are now considered technically solvable, thanks in large part to deep learning.

Leonardo Wild's curator insight, March 27, 6:20 PM
The subject matter of one of my so-far unpublished novels, the third book in the Unemotion series (Yo Artificial, in Spanish). It's starting to happen, and we think Climate Change is big.

Toshiba's Chihira Kanae robot is human-like and speaks 4 languages


Toshiba has shown off the latest generation of its Chihira robot at a trade fair in Berlin.


The machine - which is designed to look as human-like as possible - has had the German language added to its repertoire. The firm also told the BBC that it upgraded the machine's control system to make its movements smoother. However, one expert suggested the realistic appearance might not be best suited to Western audiences.


Prof Noel Sharkey - a roboticist at the University of Sheffield - said he thought the machine still fell "clearly on this side of the uncanny valley". The term refers to the fact that many people feel increasingly uncomfortable the closer a robot gets to appearing like a human being, so long as the two remain distinguishable.


Toshiba brought the Chihira Kanae droid to the ITB travel expo to highlight what it hopes could become a viable product for the tourism industry. The machine has been installed at an information desk where it responds to attendees' verbal questions about the conference.


It marks the first appearance of the robot outside Japan, where it was unveiled last month.


The earlier models in the series are:

  • Chihira Aico, which made its debut at Japan's Ceatec tech show in 2014
  • Chihira Junko, which was launched last October and is currently in use at a Tokyo shopping centre's information desk


"We have improved the software and the hardware to improve the air pressure system," explained Hitoshi Tokuda, chief specialist at Toshiba's research and development center. "If the air pressure is unstable, her movements become affected by vibrations. So, if the air flow is very precisely controlled, her movements are smoother."


Korean Go champ scores surprise victory over supercomputer

A South Korean Go grandmaster on Sunday scored his first win over a Google-developed supercomputer, in a surprise victory after three humiliating defeats in a high-profile showdown between man and machine.


Lee Se-Dol defeated AlphaGo after a nail-biting match that lasted nearly five hours—the fourth in the best-of-five series, in which the computer had already clinched a 3-0 victory on Saturday.


Lee struggled in the early phase of the fourth match but gained a lead towards the end, eventually prompting AlphaGo to resign.

The 33-year-old is one of the greatest players in the modern history of the ancient board game, with 18 international titles to his name—the second most in the world.


"I couldn't be happier today...this victory is priceless. I wouldn't trade it for the world," a smiling Lee said after the match to cheers and applause from the audience. "I can't say I wasn't hurt by the past three defeats...but I still enjoyed every moment of playing so it really didn't damage me greatly," he said.


Lee had earlier predicted a landslide victory over the artificial intelligence (AI), but was later forced to concede that AlphaGo was "too strong". After his second defeat, Lee had vowed to try his best to win at least one game.


Described as the "match of the century" by local media, the game was closely watched by tens of millions of Go fans mostly in East Asia as well as AI scientists.


The most famous AI victory to date came in 1997, when the IBM-developed supercomputer Deep Blue beat then-world chess champion Garry Kasparov. But Go, played for centuries mostly in Korea, Japan and China, had long remained the holy grail for AI developers due to its complexity and near-infinite number of potential configurations.


Google is using machine learning to teach robots intelligent reactive behaviors


Using your hand to grasp a pen that’s lying on your desk doesn’t exactly feel like a chore, but for robots, that’s still a really hard thing to do. So to teach robots how to better grasp random objects, Google’s research team dedicated 14 robots to the task. The standard way to solve this problem would be for the robot to survey the environment, create a plan for how to grasp the object, then execute on it. In the real world, though, lots of things can change between formulating that plan and executing on it.


Google is now using these robots to train a deep convolutional neural network (a technique that’s all the rage in machine learning right now) to help its robots predict the outcome of their grasps based on the camera input and motor commands. It’s basically hand-eye coordination for robots.
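Predicting grasp outcomes from sensor input and candidate motor commands turns grasping into a scoring problem: sample commands, score each with the learned model, and execute the best. The sketch below uses a made-up scoring rule in place of Google's trained convolutional network; the function names and the "object_offset" feature are invented for illustration:

```python
import random

def predict_success(features, command):
    """Hypothetical stand-in for the trained network: here a grasp is
    scored higher the closer the command is to the object's offset
    (an invented rule, purely for illustration)."""
    return max(0.0, 1.0 - abs(command - features["object_offset"]))

def choose_command(features, n_candidates=50):
    """Sample candidate motor commands, score each with the model, and
    keep the best; continuous servoing would redo this every frame."""
    random.seed(0)
    candidates = [random.uniform(-1, 1) for _ in range(n_candidates)]
    return max(candidates, key=lambda c: predict_success(features, c))

best = choose_command({"object_offset": 0.3})
```

Because the command is re-chosen as new camera frames arrive, the robot can correct mid-motion, which is where the "reactive" behaviors come from.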


The team says that it took about 3,000 hours of practice (and 800,000 grasp attempts) before it saw “the beginnings of intelligent reactive behaviors.”


“The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group,” the team writes. “All of these behaviors emerged naturally from learning, rather than being programmed into the system.”


Google’s researchers say the average failure rate without training was 34 percent on the first 30 picking attempts. After training, that number was down to 18 percent. Still not perfect, but the next time a robot comes running after you and tries to grab you, remember that it now has an 82 percent chance of succeeding.


Autonomous Mini Rally Car Teaches Itself to Powerslide


Most autonomous vehicle control software is deliberately designed for well-constrained driving that's nice, calm, and under control. Not only is this a little bit boring, it's also potentially less safe: if your autonomous vehicle has no experience driving aggressively, it won't know how to manage itself if something goes wrong.


At Georgia Tech, researchers are developing control algorithms that allow small-scale autonomous cars to power around dirt tracks at ludicrous speeds. They presented some of this work this week at the 2016 IEEE International Conference on Robotics and Automation in Stockholm, Sweden. Using real-time onboard sensing and processing, the little cars maximize their speed while keeping themselves stable and under control. Mostly.


The electrically powered research platform pictured above, a one-fifth-scale model of a vehicle meant for human occupants, is called AutoRally. It's about a meter long, weighs 21 kg, and has a top speed of nearly 100 kilometers per hour. It's based on an R/C truck chassis, with some largely 3D-printed modifications to support a payload that includes a GPS, an IMU, wheel encoders, a pair of fast video cameras, and a beefy quad-core i7 computer with an Nvidia GTX 750 Ti GPU and 32 GB of RAM. All of this is protected inside an aluminum enclosure that makes crashing (even crashing badly) not that big of a deal.


The researchers attest that most of the crashes in the video happened due to either software crashes (as opposed to failures of the algorithm itself), or the vehicle having trouble adapting to changes in the track surface. Since that video was made, they've upgraded the software to make it able to handle a more realistically dynamic environment. The result: AutoRally is now able to drive continuously on a track that, because of temperature changes, goes from, say, partially frozen to a huge puddle of mud over the course of a couple of hours.

They’ve placed all of AutoRally’s specs online (and made the software available on GitHub) in the hopes that other vehicle-autonomy researchers will be able to take advantage of the platform’s robust, high-performance capabilities. The code is open source and ROS compatible, with an accompanying Gazebo-based simulation.

We're hoping that this algorithm will eventually be mature enough to be tried out on a full-size rally car (maybe in a little friendly competition with a human driver). But if that does ever happen, crashing will be a much bigger deal than it is now.
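The article doesn't spell out the controller, but a common recipe for this kind of aggressive driving is sampling-based model predictive control: simulate many random control sequences through a vehicle model, cost each rollout, and execute the first command of the cheapest one. This toy sketch (a crude unicycle model and invented cost terms, not the team's actual algorithm) shows the loop:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(state, control, dt=0.05):
    """Very crude unicycle model: state = (x, y, heading, speed)."""
    x, y, th, v = state
    steer, throttle = control
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + v * steer * dt,
                     max(0.0, v + throttle * dt)])

def cost(state):
    """Reward speed, heavily penalize leaving a 2 m-wide track around y = 0."""
    _, y, _, v = state
    return -v + (100.0 if abs(y) > 1.0 else 0.0)

def sample_based_mpc(state, horizon=20, rollouts=200):
    """Random-shooting MPC: sample control sequences, simulate each through
    the model, and return the first control of the best-scoring sequence."""
    seqs = rng.normal(scale=[0.3, 1.0], size=(rollouts, horizon, 2))
    totals = np.zeros(rollouts)
    for i, seq in enumerate(seqs):
        s = state.copy()
        for u in seq:
            s = step(s, u)
            totals[i] += cost(s)
    return seqs[int(np.argmin(totals))][0]

# One control cycle: the car re-plans from its current state every tick.
u = sample_based_mpc(np.array([0.0, 0.0, 0.0, 5.0]))
```

Because the cost function trades raw speed against leaving the track, the same loop naturally produces controlled slides when sliding is the fastest way around; the real system runs many more rollouts per cycle on its onboard GPU.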

Scooped by Dr. Stefan Gruenwald!

This AI can recreate Nobel Prize-winning experiments


Artificial intelligence developed by a group of Australian research teams has replicated a complex experiment that won the 2001 Nobel Prize in Physics. The intelligent machine learned how to run a Bose-Einstein condensation experiment – isolating an extremely cold gas inside a beam of laser light – in under an hour, something the team "didn't expect". The results have been published in Scientific Reports, and the algorithm has been uploaded to GitHub for other researchers working on "quantum experiments".


"A simple computer program would have taken longer than the age of the universe to run through all the combinations and work this out," said Paul Wigley, co-lead researcher of the study and professor at the School of Physics and Engineering at the Australian National University.


The gas was cooled to 1 microkelvin before the artificial intelligence was "handed control" of three laser beams in which to trap the gas. It also did things that "surprised" the team.


"It did things a person wouldn't guess – such as changing one laser's power up and down and compensating with another," said Wigley. "It may be able to come up with complicated ways humans haven't thought of to get experiments colder and more precise".

Rescooped by Dr. Stefan Gruenwald from Systems Theory!

The next AI is no recognizable AI anymore

Artificial Intelligence is starting to turn invisible from the outside in -- and vice versa. The exact effects and workings of AI technologies are becoming..


In the near future, artificial intelligence will commonly become intangible, indistinguishable and incomprehensible for humans.

Firstly, AI doesn’t necessarily need a tangible embodiment. It can manifest itself through different mediators, such as a graphical user interface or a voice interface. Already we trust Spotify recommendations without a glance or talk to Siri and Alexa like they were summoned spirits, intelligences without a tangible form.


Secondly, AI becomes invisible by passing the Turing test, or its more relevant variants. An intelligent system that manages to simulate human-level communication, and cognitive as well as emotional abilities, can become indistinguishable from humans and, thus, the “artificiality” of its intelligence becomes imperceptible for us.


Thirdly, and most importantly, AI escapes the human gaze when the details of its effects and technological dynamics go beyond human perception and understanding. We can be aware of the existence, presence and effects of intelligent systems, but we no longer fully comprehend what these systems do, how they achieve their goals or what their precise effects are. This means that AI technologies will soon go beyond Clarke’s third law, which states that “any sufficiently advanced technology is indistinguishable from magic.” Indeed, we no longer have a chance to figure out the trick — or even realize that any trick occurred in the first place.


Following this, we are able to perceive manifestations and presentations of artificial intelligence, but the intelligence itself becomes unknowable to humans through human senses. Currently there are two distinct traits in this development. First, most algorithmic systems, as well as the latest advancements in AI technologies, are black boxes: inaccessible, unfathomable and uncontrollable to most people.


Therefore, it’s hard to perceive or assess how intelligent systems shape your life online and offline, from your latest song recommendations to your personalized insurance policy, not to mention the algorithmic stock market trading that shapes the global market economy affecting almost every aspect of modern life.


Concretely, when the actions of intelligent systems become more holistically intertwined with personal, social, cultural, political and economical systems, it becomes challenging to distinguish the exact effects or impact of the machine intelligence itself.

Second, AI technologies are becoming so complex that they are hard to understand — even for the experts designing and developing them. In his recent book, The Master Algorithm, machine learning expert Pedro Domingos points out that already back in the 1950s scientists created an algorithm that could do something that humans couldn't fully comprehend.

This development hasn’t changed its course; rather, to the contrary. With the current pace of AI development, even seasoned experts have a hard time keeping up. Today various machine learning systems can already provide unexpected insights in varying fields, from personalization technologies to particle physics, from cooking recipes and outlandish game moves to crime prevention and bioengineering. Concretely, specialized systems can empower scientific discoveries in biology or help you choose the best route to your next meeting.

Via Ben van Lier
Rescooped by Dr. Stefan Gruenwald from Papers!

Can artificial intelligence create the next wonder material?


It's a strong contender for the geekiest video ever made: a close-up of a smartphone with line upon line of numbers and symbols scrolling down the screen. But when visitors stop by Nicola Marzari's office, which overlooks Lake Geneva, he can hardly wait to show it off. “It's from 2010,” he says, “and this is my cellphone calculating the electronic structure of silicon in real time!”


Even back then, explains Marzari, a physicist at the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland, his now-ancient handset took just 40 seconds to carry out quantum-mechanical calculations that once took many hours on a supercomputer — a feat that not only shows how far such computational methods have come in the past decade or so, but also demonstrates their potential for transforming the way materials science is done in the future.


Instead of continuing to develop new materials the old-fashioned way — stumbling across them by luck, then painstakingly measuring their properties in the laboratory — Marzari and like-minded researchers are using computer modelling and machine-learning techniques to generate libraries of candidate materials by the tens of thousands. Even data from failed experiments can provide useful input. Many of these candidates are completely hypothetical, but engineers are already beginning to shortlist those that are worth synthesizing and testing for specific applications by searching through their predicted properties — for example, how well they will work as a conductor or an insulator, whether they will act as a magnet, and how much heat and pressure they can withstand.
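Conceptually, that shortlisting step is just a filter over predicted properties. A toy version, with entries and thresholds invented purely for illustration (real libraries hold tens or hundreds of thousands of computed candidates):

```python
# Toy screen over a candidate library: keep materials whose predicted
# properties fall inside an application window. All values are invented.
candidates = [
    {"formula": "LiFePO4",  "band_gap_eV": 3.7, "voltage_V": 3.4, "stable": True},
    {"formula": "HypoOx-1", "band_gap_eV": 0.1, "voltage_V": 1.2, "stable": False},
    {"formula": "HypoOx-2", "band_gap_eV": 2.9, "voltage_V": 4.1, "stable": True},
]

def shortlist(library, min_voltage=3.0, require_stable=True):
    """Return candidates worth synthesizing as, say, a battery cathode."""
    return [m["formula"] for m in library
            if m["voltage_V"] >= min_voltage
            and (m["stable"] or not require_stable)]

print(shortlist(candidates))  # -> ['LiFePO4', 'HypoOx-2']
```

Ceder's lithium iron phosphate anecdote is exactly the case such a screen is meant to catch: a compound whose useful property sat unmeasured for sixty years because nobody thought to look.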


The hope is that this approach will provide a huge leap in the speed and efficiency of materials discovery, says Gerbrand Ceder, a materials scientist at the University of California, Berkeley, and a pioneer in this field. “We probably know about 1% of the properties of existing materials,” he says, pointing to the example of lithium iron phosphate: a compound that was first synthesized in the 1930s, but was not recognized as a promising replacement material for current-generation lithium-ion batteries until 1996. “No one had bothered to measure its voltage before,” says Ceder.

At least three major materials databases already exist around the world, each encompassing tens or hundreds of thousands of compounds. Marzari's Lausanne-based Materials Cloud project is scheduled to launch later this year. And the wider community is beginning to take notice. “We are now seeing a real convergence of what experimentalists want and what theorists can deliver,” says Neil Alford, a materials scientist who serves as vice-dean for research at Imperial College London, but who has no affiliation with any of the database projects.

Via Complexity Digest
Scooped by Dr. Stefan Gruenwald!

Danko Nikolić: How to Make Intelligent Robots That Understand the World

There are some amazing robots roving the surface of Mars. However, they are heavily dependent on their human operators. But what if we could provide them with human-like intelligence so that they could find their own way without assistance? What if we could teach them to autonomously deal with completely novel situations? Danko Nikolić, a neuroscientist at the Max-Planck Institute for Brain Research, has his own vision: a novel approach to Artificial Intelligence (AI) that could give robots the capability to understand the world through a method called “AI-Kindergarten”. So, can we provide for a sufficiently strong artificial intelligence to enable a robot to find its way in an environment as hostile and as unpredictable as space?

Scooped by Dr. Stefan Gruenwald!

Autonomously flying delivery drone picks up objects and spits them back out


Both gripping and flying have long been topics for the Bionic Learning Network. With FreeMotionHandling, Festo has for the first time combined the two areas. The indoor flying object can manoeuvre autonomously in any direction, independently picking up and dropping off items where they are required.


The handling system consists of an ultra-light carbon ring with eight adaptive propellers. In the middle of the ring sits a rotatable helium ball with an integrated gripping element. As a result, both man and machine can interact with each other easily and safely, opening up entirely new possibilities for the workplace of the future. In this future, people could be supported by the spheres, using them as a flying assistance system — for example, when working at dizzying heights or in hard-to-access areas.


No pilot is required to control the flying object. The sphere is coordinated externally by an indoor GPS, which has already been tried and tested on the eMotionSpheres and eMotionButterflies. In addition, the handling unit features two on-board cameras with which it monitors its surroundings, reacting to its environment even during the gripping process. Upon nearing the object to be gripped, the system takes over route planning, using the twin cameras for coordination.


Rescooped by Dr. Stefan Gruenwald from levin's linkblog: Knowledge Channel!

Machines are becoming more creative than humans


Can machines be creative? Recent successes in AI have shown that machines can now perform at human levels in many tasks that, just a few years ago, were considered to be decades away, like driving cars, understanding spoken language, and recognizing objects. But these are all tasks where we know what needs to be done, and the machine is just imitating us. What about tasks where the right answers are not known? Can machines be programmed to find solutions on their own, and perhaps even come up with creative solutions that humans would find difficult?


The answer is a definite yes! There are branches of AI focused precisely on this challenge, including evolutionary computation and reinforcement learning. Like the popular deep learning methods, which are responsible for many of the recent AI successes, these branches of AI have benefitted from the million-fold increase in computing power we’ve seen over the last two decades. There are now antennas in spacecraft so complex they could only be designed through computational evolution. There are game-playing agents in Othello, Backgammon, and most recently in Go that have learned to play at the level of the best humans, and in the case of AlphaGo, even beyond the ability of the best humans. There are non-player characters in Unreal Tournament that have evolved to be indistinguishable from humans, thereby passing the Turing test (at least for game bots). And in finance, there are computational traders in the stock market evolved to make real money.


These AI agents are different from those commonly seen in robotics, vision, and speech processing in that they were not taught to perform specific actions. Instead, they learned the best behaviors on their own by exploring possible behaviors and determining which ones lead to the best outcomes. Many such methods are modeled after similar adaptation in biology. For instance, evolutionary computation takes concepts from biological evolution. The idea is to encode candidate solutions (such as videogame players) in such a way that it is possible to recombine and mutate them to get new solutions. Then, given a large population of candidates with enough variation, a parallel search method is run to find a candidate that actually solves the problem. The most promising candidates are selected for mutation and recombination in order to construct even better candidates as offspring. In this manner, only an extremely tiny fraction of the entire group of possible candidates needs to be searched to find one that actually solves the problem, e.g. plays the game really well.
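The encode, mutate, recombine and select loop described above is a textbook genetic algorithm. Here it is on a deliberately trivial "design" problem, evolving a string toward a known target; the encoding, population size and rates are illustrative only:

```python
import random

random.seed(0)

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """How many characters already match the (toy) design goal."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly perturb some characters of a candidate solution."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    """Recombine two parents at a random cut point."""
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

# Start from a random population and evolve it.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:50]                      # selection
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(150)]   # recombination + mutation
```

Only a vanishing fraction of the 27^11 possible strings is ever evaluated, which is the article's point: selection plus variation finds good candidates without exhaustive search.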


We can apply the same approach to many domains where it is possible to evaluate the quality of candidates computationally. It applies to many design domains, including the design of the space antenna mentioned above, the design of a control system for a finless rocket, or the design of a multilegged, walking robot. Often evolution comes up with solutions that are truly unexpected but still effective — in other words, creative. For instance, when working on a controller that would navigate a robotic arm around obstacles, we accidentally disabled its main motor. It could no longer reach targets far away, because it could not turn around its vertical axis. What the controller evolved to do instead was slowly turn the arm away from the target, using its remaining motors, and then swing it back really hard, turning the whole robot towards the target through inertia!

Via Levin Chin
Rescooped by Dr. Stefan Gruenwald from Systems Theory!

How Google Plans to Create Human-Like Artificial Intelligence

Mastering Go is just the beginning for Google DeepMind, which hopes to create human-like AI.


Google's Demis Hassabis thinks he can lay the foundations for software that’s smart enough to solve humanity’s biggest problems. “Our goal's very big,” says Hassabis, whose level-headed manner can mask the audacity of his ideas. He leads a team of roughly 200 computer scientists and neuroscientists at Google’s DeepMind, the London-based group behind the AlphaGo software that defeated the world champion at Go in a five-game series earlier this month, setting a milestone in computing.


It’s supposed to be just an early checkpoint in an effort Hassabis describes as the Apollo program of artificial intelligence, aimed at “solving intelligence, and then using that to solve everything else.” What passes for smart software today is specialized to a particular task—say, recognizing faces. Hassabis wants to create what he calls general artificial intelligence—something that, like a human, can learn to take on just about any task. He envisions it doing things as diverse as advancing medicine by formulating and testing scientific theories, and bounding around in agile robot bodies.


Doing that will require DeepMind’s software to explore beyond Go’s ordered world of black and white stones. It needs to get to grips with the messy real world—or to begin with a gloomy, pixelated approximation of it. DeepMind’s simulated world is called Labyrinth, and the company is using it to confront its software with increasingly complex tasks, such as navigating mazes. That should push DeepMind’s researchers to learn how to build even smarter software, and push the software to learn how to tackle more difficult decisions and problems. They’re doing this by using the techniques shown off in AlphaGo and earlier DeepMind software that learned to play 1980s-vintage Atari games such as Space Invaders better than a human could. But to succeed, Hassabis will also have to invent his way around some long-standing challenges in artificial intelligence.


Hassabis cofounded DeepMind in 2011 to transfer some of what he learned about biological intelligence to machines. The company revealed software that learned to master Atari games in December 2013, and early in 2014 it was purchased by Google for an amount reported to be 400 million pounds, more than $600 million at the time (see “Google’s Intelligence Designer”). DeepMind quickly expanded, hiring dozens more researchers and publishing scores of papers in leading machine-learning and artificial-intelligence conferences. This January it revealed the existence of AlphaGo, and that it had defeated Europe’s best Go player in October 2015. AlphaGo beat the world champion, Lee Sedol, earlier this month (see “Five Lessons from AlphaGo’s Historic Victory”).


DeepMind’s 3-D environment Labyrinth, built on an open-source clone of the first-person-shooter Quake, is designed to provide the next steps in proving that idea. The company has already used it to challenge agents with a game in which they must explore randomly generated mazes for 60 seconds, winning points for collecting apples or finding an exit (which leads to another randomly generated maze). Future challenges might require more complex planning—for example, learning that keys can be used to open doors. The company will also test software in other ways, and is considering taking on the video game Starcraft and even poker. But posing harder and harder challenges inside Labyrinth will be a major thread of research for some time, says Hassabis. “It should be good for the next couple of years,” he says.


Other companies and researchers working on artificial intelligence will be watching closely. The success of DeepMind’s reinforcement learning has surprised many machine-learning researchers. The technique was established in the 1980s, but it has not proved as widely useful or as powerful as other ways of training software, says Pedro Domingos, a professor who works on machine learning at the University of Washington. DeepMind strengthened the venerable technique by combining it with a method called deep learning, which has recently produced big advances in how well computers can decode information such as images and triggered a boom in machine-learning technology (see “10 Breakthrough Technologies 2013: Deep Learning”).
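The reinforcement-learning update at the heart of systems like AlphaGo can be shown in its simplest, tabular form. Deep RL replaces the table below with a neural network, but the temporal-difference update is the same idea; the corridor environment is invented for illustration:

```python
import random

random.seed(0)

# Tabular Q-learning on a tiny corridor: states 0..5, reward at state 5.
N_STATES, ACTIONS = 6, (-1, +1)           # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

def greedy(s):
    """Best-known action in state s, ties broken at random."""
    best_q = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best_q])

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        target = reward + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # temporal-difference update
        s = s2

# After training, the greedy policy marches right toward the reward.
policy = [greedy(s) for s in range(N_STATES - 1)]
```

Nothing here is told which action is correct; the rightward policy emerges purely from trial, error and delayed reward, which is exactly the trait that surprised researchers when it scaled up to Atari and Go.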

Via Ben van Lier
Scooped by Dr. Stefan Gruenwald!

Bringing Big Neural Networks to Self-Driving Cars, Smartphones, and Drones


Artificial intelligence systems based on neural networks have had quite a string of recent successes: One beat human masters at the game of Go, another made up beer reviews, and another made psychedelic art. But taking these supremely complex and power-hungry systems out into the real world and installing them in portable devices is no easy feat. This February, however, at the IEEE International Solid-State Circuits Conference in San Francisco, teams from MIT, Nvidia, and the Korea Advanced Institute of Science and Technology (KAIST) brought that goal closer. They showed off prototypes of low-power chips that are designed to run artificial neural networks that could, among other things, give smartphones a bit of a clue about what they are seeing and allow self-driving cars to predict pedestrians’ movements.


Until now, neural networks—learning systems that operate analogously to networks of connected brain cells—have been much too energy intensive to run on the mobile devices that would most benefit from artificial intelligence, like smartphones, small robots, and drones. The mobile AI chips could also improve the intelligence of self-driving cars without draining their batteries or compromising their fuel economy.


Smartphone processors are on the verge of running some powerful neural networks as software. Qualcomm is sending its next-generation Snapdragon smartphone processor to handset makers with a software-development kit to implement automatic image labeling using a neural network. This software-focused approach is a landmark, but it has its limitations. For one thing, the phone’s application can’t learn anything new by itself—it can only be trained by much more powerful computers. And neural-network experts think that more sophisticated functions will be possible if they can bake neural-net–friendly features into the circuits themselves.

Scooped by Dr. Stefan Gruenwald!

Will this new ‘socially assistive robot’ from MIT Media Lab (or its progeny) replace teachers?


Researchers in the Personal Robots Group at the MIT Media Lab, led by Cynthia Breazeal, PhD, have developed a powerful new “socially assistive” robot called Tega that senses the affective (emotional/feeling) state of a learner, and based on those cues, creates a personalized motivational strategy.


But what are the implications for the future of education … and society? A furry, brightly colored robot, Tega is the latest in a line of smartphone-based, socially assistive robots developed in the MIT Media Lab. In a nutshell: Tega is fun, effective, and personalized — unlike many human teachers.


Breazeal and team say Tega was developed specifically to enable long-term educational interactions with children. It uses an Android device to process movement, perception and thinking and can respond appropriately to individual children’s behaviors — contrasting with (mostly boring) conventional education with its impersonal large class sizes, lack of individual attention, and proclivity to pouring children into a rigid one-size-fits-all mold.

Scooped by Dr. Stefan Gruenwald!

Bacteria-powered microrobots navigate with help from new algorithm


The problem with having a microscopic robot propelled by a horde of tail-flailing bacteria is you never know where it's going to end up. The tiny, bio-robots, which amount to a chip coated with a "carpet" of flagellated bacteria, emerged from the primordial ooze of microrobotics research a few years ago as a concept for building microscopic devices and delivering medication at the cellular level. But as with any robot, the challenge for making them useful is bridging the gap from movement to automation.


A team of engineers at Drexel University might have done just that, according to research recently published in IEEE Transactions on Robotics about using electric fields to direct the robots in a fluid environment. In a follow-up to a 2014 report that presented a way to use the flagellated bacteria Serratia marcescens and an electric field to make a microrobot mobile, MinJun Kim, PhD, a professor in the College of Engineering and director of Drexel's Biological Actuation, Sensing & Transport (BAST) Lab, is now offering a method for making them agile.


"What's a ship without a captain? We know electric fields can be used to push the microrobots in any direction, like a boat carried by the ocean's currents, but in this paper we're exploring how those same fields can be used to help the robot detect obstacles and navigate around them," Kim said.


The key to both motion and navigation for the tiny hybrid robots is the S. marcescens bacterium. These rod-shaped swimmers, which are known culprits of urinary tract and respiratory infections in hospitals, naturally possess a negative charge, which means they can be manipulated across an electric field as if they were pixels in an Etch A Sketch.


When a slimy smear of the bacteria is applied to a substrate, in this case a square chip of photosensitive material called SU-8, you get a negatively charged microrobot that can move around in a fluid by riding the waves of an electric field. The bacteria's whip-like flagella help keep the robot suspended in the fluid environment while also providing a small bit of forward propulsion. The real push comes from two perpendicular electric fields that turn the fluid into an electrified grid. Since the bacteria are negatively charged, the team can manipulate the robots simply by adjusting the strength of the current.


"We have shown that we can manually direct the robots or give it a set of coordinates to get it from point A to point B, but our goal in this research is to enable the microrobots to navigate a course with random impediments blocking its way," Kim said. "This requires a level of automation that has not previously been achieved in hybrid microrobotics research."


Kim's group met this goal by making a control algorithm that enables the tiny robots to effectively use the shape of the electric field they're riding as a way to detect and avoid obstacles -- like a surfer reading the waves' break to steer clear of submerged hazards.


By running a series of tests using charged particles, the team came to understand how the electric field changed when it encountered insulator objects. "The electric field was distorted near the corners of the obstacle," the authors write. "Particles that passed by the first corner of the obstacles also had affected trajectories even though they had a clear space ahead to pass; this is due to the distorted electric field."


They used this deformation in the field as input data for their steering algorithm. So when the robot senses a change in the pattern of the field, the algorithm automatically adjusts its path to dodge the obstacle. In this way, the robots are using electric fields both as a mode of transportation and as a means of navigation. In addition to the electric field information, the algorithm also uses image tracking from a microscope-mounted camera to locate the initial starting point of the robot and its ultimate destination.
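The paper's actual controller isn't reproduced here, but the navigation idea, ride the field toward the goal and sidestep while the sensed distortion is high, can be sketched with an invented field model (geometry, thresholds and the distortion function are all fabricated for this sketch):

```python
import math

# Toy version of the navigation idea: the robot rides a uniform field
# toward a goal, senses extra distortion near an insulating obstacle,
# and steers sideways whenever the distortion exceeds a threshold.
OBSTACLE = (5.0, 0.0)   # insulator that bends the field
GOAL = (10.0, 0.0)

def field_distortion(x, y):
    """Distortion magnitude: strong near the obstacle, ~0 far away."""
    d2 = (x - OBSTACLE[0]) ** 2 + (y - OBSTACLE[1]) ** 2
    return 1.0 / (1.0 + d2)

def step_toward_goal(x, y, step=0.2, threshold=0.25):
    """Head for the goal, but veer sideways while distortion is high."""
    if field_distortion(x, y) > threshold:
        return x, y + step                    # sidestep the obstacle
    dx, dy = GOAL[0] - x, GOAL[1] - y
    norm = math.hypot(dx, dy) or 1.0
    return x + step * dx / norm, y + step * dy / norm

x, y = 0.0, 0.0
path = [(x, y)]
for _ in range(200):
    x, y = step_toward_goal(x, y)
    path.append((x, y))
    if math.hypot(x - GOAL[0], y - GOAL[1]) < 0.3:
        break
```

The same signal serves double duty, exactly as in Kim's surfer analogy: the field that pushes the robot forward is also the sensor that warns it something is in the way.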


"With this level of control and input from the environment we can program the microrobot to make a series of value judgments during its journey that affect its path," Kim said. "If for instance we want the robot to avoid as many obstacles as possible, regardless of the distance traveled. Or we could set it to take the most direct, shortest route to the destination -- even if it's through the obstacles. This relative autonomy is an important step for microrobots if we're going to one day put them into a complex system and ask them to perform a task like delivering medication or building a microstructure."

Scooped by Dr. Stefan Gruenwald!

Project to reverse-engineer the brain to make computers to think like humans


Three decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history.


Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the defense department’s famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led by David Cox, a biologist and computer scientist at Harvard University, Tai Sing Lee, a computer scientist at Carnegie Mellon University, and Andreas Tolias, a neuroscientist at the Baylor College of Medicine. Each team has proposed its own five-year approach to the problem.


“It’s a substantial investment because we think it’s a critical challenge, and [it’ll have a] transformative impact for the intelligence community as well as the world more broadly,” says Jacob Vogelstein at IARPA, who manages the MICrONS program.


MICrONS, as a part of President Obama’s BRAIN Initiative, is an attempt to push forward the status quo in brain-inspired computing. A great deal of technology today already relies on a class of algorithms called artificial neural networks, which, as their name would suggest, are inspired by the architecture (or at least what we know about the architecture) of the brain. Thanks to significant increases in computing power and the availability of vast amounts of data on the Internet, Facebook can identify faces, Siri can recognize voices, cars can self-navigate, and computers can beat humans at games like chess. These algorithms, however, are still primitive, relying on a highly simplified process of analyzing information for patterns.

Based on models dating back to the 1980s, neural networks tend to perform poorly in cluttered environments, where the object the computer is trying to identify is hidden among a large number of objects, many of which are overlapping or ambiguous. These algorithms do not generalize well, either. Seeing one or two examples of a dog, for instance, does not teach the computer how to identify all dogs.


Humans, on the other hand, seem to overcome these challenges effortlessly. We can make out a friend in a crowd, focus on a familiar voice in a noisy setting, and deduce patterns in sounds or an image based on just one or a handful of examples. We are constantly learning to generalize without the need for any instructions. And so the MICrONS researchers have turned to the brain to find what these models are missing. “That’s the smoking gun,” Cox says.


While neural networks retain elements of the architecture found in the brain, the computations they use are not copied directly from any algorithms that neurons use to process information. In other words, the ways in which current algorithms represent, transform, and learn from data are engineering solutions, determined largely by trial and error. They work, but scientists do not really know why—certainly not well enough to define a way to design a neural network. Whether this neural processing is similar to or different from corresponding operations in the brain remains unknown. “So if we go one level deeper and take information from the brain at the computational level and not just the architectural level, we can enhance those algorithms and get them closer to brain-like performance,” Vogelstein says.
