Amazing Science
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video

Probabilistic programming does in 50 lines of code what used to take thousands

Most recent advances in artificial intelligence—such as mobile apps that convert speech to text—are the result of machine learning, in which computers are turned loose on huge data sets to look for patterns.


To make machine-learning applications easier to build, computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts. In 2013, the U.S. Defense Advanced Research Projects Agency, an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research.


At the Computer Vision and Pattern Recognition conference in June, MIT researchers will demonstrate that on some standard computer-vision tasks, short programs—less than 50 lines long—written in a probabilistic programming language are competitive with conventional systems with thousands of lines of code. "This is the first time that we're introducing probabilistic programming in the vision area," says Tejas Kulkarni, an MIT graduate student in brain and cognitive sciences and first author on the new paper. "The whole hope is to write very flexible models, both generative and discriminative models, as short probabilistic code, and then not do anything else. General-purpose inference schemes solve the problems."


By the standards of conventional computer programs, those "models" can seem absurdly vague. One of the tasks that the researchers investigate, for instance, is constructing a 3-D model of a human face from 2-D images. Their program describes the principal features of the face as being two symmetrically distributed objects (eyes) with two more centrally positioned objects beneath them (the nose and mouth). It requires a little work to translate that description into the syntax of the probabilistic programming language, but at that point, the model is complete. Feed the program enough examples of 2-D images and their corresponding 3-D models, and it will figure out the rest for itself.
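
To make the flavor of this concrete, here is a minimal, hypothetical Python sketch (not the MIT system): the "model" is a short forward program that renders noisy 2-D landmarks from a latent face width, and a general-purpose importance sampler, which knows nothing about faces, inverts it from the observations.

import math
import random

# Hypothetical toy in the probabilistic-programming style described above
# (not the MIT system): a short generative model plus generic inference.

def render_landmarks(width, noise=2.0):
    """Forward model: latent face width -> noisy 2-D landmark positions."""
    eye_left = -width / 2 + random.gauss(0, noise)
    eye_right = width / 2 + random.gauss(0, noise)
    mouth = 0.0 + random.gauss(0, noise)           # centrally positioned
    return (eye_left, eye_right, mouth)

def likelihood(observed, width, noise=2.0):
    """Unnormalized Gaussian likelihood of observed landmarks given a width."""
    expected = (-width / 2, width / 2, 0.0)
    sq_err = sum((o - e) ** 2 for o, e in zip(observed, expected))
    return math.exp(-sq_err / (2 * noise ** 2))

def infer_width(observed, n_samples=20000):
    """General-purpose importance sampling; no face-specific inference code."""
    total_weight, weighted_sum = 0.0, 0.0
    for _ in range(n_samples):
        width = random.uniform(20, 80)             # prior over face widths
        w = likelihood(observed, width)
        total_weight += w
        weighted_sum += w * width
    return weighted_sum / total_weight

observed = render_landmarks(width=55.0)            # pretend this came from an image
print("inferred width:", round(infer_width(observed), 1))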


"When you think about probabilistic programs, you think very intuitively when you're modeling," Kulkarni says. "You don't think mathematically. It's a very different style of modeling." The new work, Kulkarni says, revives an idea known as inverse graphics, which dates from the infancy of artificial-intelligence research. Even though their computers were painfully slow by today's standards, the artificial intelligence pioneers saw that graphics programs would soon be able to synthesize realistic images by calculating the way in which light reflected off of virtual objects. This is, essentially, how Pixar makes movies.


Brain in your pocket: Smartphone replaces thinking, study shows


In the ancient world — circa, say, 2007 — terabytes of information were not available on sleekly designed devices that fit in our pockets. While we now can turn to iPhones and Samsung Galaxys to quickly access facts both essential and trivial — the fastest way to grandmother’s house, how many cups are in a gallon, the name of the actor who played Newman on “Seinfeld” — we once had to keep such tidbits in our heads or, perhaps, in encyclopedia sets.


With the arrival of the smartphone, such dusty tomes are unnecessary. But new research suggests our devices are more than a convenience — they may be changing the way we think. In “The brain in your pocket: Evidence that Smartphones are used to supplant thinking,” forthcoming from the journal Computers in Human Behavior, lead authors Nathaniel Barr and Gordon Pennycook of the psychology department at the University of Waterloo in Ontario said those who think more intuitively and less analytically are more likely to rely on technology.


“That people typically forego effortful analytic thinking in lieu of fast and easy intuition suggests that individuals may allow their Smartphones to do their thinking for them,” the authors wrote.


What’s the difference between intuitive and analytical thinking? In the paper, the authors cite this problem: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”


The brain-teaser evokes an intuitive response: The ball must cost 10 cents, right? This response, unfortunately, is obviously wrong — if the ball cost 10 cents, the bat would cost $1.10, and 10 cents plus $1.10 equals $1.20, not $1.10. Only through analytic thinking can one arrive at the correct response: The ball costs 5 cents. (Confused? Five cents plus $1.05 equals $1.10.)
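
For completeness, the analytic answer drops out of one line of algebra: if the ball costs b, then b + (b + 1.00) = 1.10, so b = 0.05. A trivial check in Python, for illustration only:

# bat + ball = 1.10 and bat = ball + 1.00, so 2*ball + 1.00 = 1.10
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(round(ball, 2), round(bat, 2), round(ball + bat, 2))   # 0.05 1.05 1.1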


It’s just this sort of analytical thinking that avid smartphone users seem to avoid. For the paper, researchers asked subjects how much they used their smartphones, then gave them tests to measure not just their intelligence, but how they processed information.


Dr. Google joins Mayo Clinic

The deal to produce clinical summaries under the Mayo Clinic name for Google searches symbolizes the medical priesthood's acceptance that information technology has reshaped the doctor-patient relationship. More disruptions are already on the way.


If information is power, digitized information is distributed power. While “patient-centered care” has been directed by professionals towards patients, collaborative health – what some call “participatory medicine” or “person-centric care” – shifts the perspective from the patient outwards.


Collaboration means sharing. At places like Mayo and Houston’s MD Anderson Cancer Center, the doctor’s detailed notes, long seen only by other clinicians, are available through a mobile app for patients to see when they choose and share how they wish. mHealth makes the process mundane, while the content makes it an utterly radical act.


About 5 million patients nationwide currently have electronic access to open notes. Boston’s Beth Israel Deaconess Medical Center and a few other institutions are planning to allow patients to make additions and corrections to what they call “OurNotes.” Not surprisingly, many doctors remain mortified by this medical sacrilege.


Even more threatening is an imminent deluge of patient-generated health data churned out by a growing list of products from major consumer companies. Sensors are being incorporated into wearables, watches, smartphones and (in a Ford prototype) even a “car that cares” with biometric sensors in the seat and steering wheel. Sitting in your car suddenly becomes telemedicine.


To be sure, traditional information channels remain. For example, a doctor-prescribed, Food and Drug Administration-approved app uses sensors and personalized analytics to prevent severe asthma attacks. Increasingly common, though, is digitized data that doesn’t need a doctor at all. For example, a Microsoft fitness band not only provides constant heart rate monitoring, according to a New York Times review, but is part of a health “platform” employing algorithms to deliver “actionable information” and contextual analysis. By comparison, “Dr. Google” belongs in a Norman Rockwell painting.


Human-machine symbiosis: Software that augments human thinking


The IBM computer Deep Blue’s 1997 defeat of world champion Garry Kasparov is one of the most famous events in chess history. But Kasparov himself and some computer scientists believe a more significant result occurred in 2005—and that it should guide how we use technology to make decisions and get work done.


In an unusual online tournament, two U.S. amateurs armed with three PCs snatched a $20,000 prize from a field of supercomputers and grandmasters. The victors’ technology and chess skills were plainly inferior. But they had devised a way of working that created a greater combined intelligence—one in which humans provided insight and intuition, and computers supplied brute-force predictions.


Some companies are now designing software to foster just such man-machine combinations. One that owes its success to this approach is Palantir, a rapidly growing software company in Palo Alto, California, known for its close connections to intelligence agencies. Shyam Sankar, director of forward deployed engineering at the company, says Palantir’s founders became devotees while at PayPal, where they designed an automated system to flag fraudulent transactions. “It catches 80 percent of the fraud, the dumb fraud, but it’s not clever enough for the most sophisticated criminals,” says Sankar.


PayPal ended up creating software to enable humans to hunt for that toughest 20 percent themselves, in the form of a suite of analysis tools that allowed them to act on their own insights about suspicious activity in vast piles of data rather than wait for automated systems to discover it. Palantir, which received funding from the CIA, now sells similar data-analysis software to law enforcement, banks, and other industries.
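
As a rough illustration of that division of labor, here is a hypothetical Python sketch (not PayPal's or Palantir's actual system): crude automated rules catch the obvious cases, and everything else lands in a queue for a human analyst to investigate with full context.

from dataclasses import dataclass

# Hypothetical triage sketch: automated rules flag the obvious "dumb fraud";
# ambiguous transactions go to a human review queue instead.

@dataclass
class Transaction:
    txn_id: str
    amount: float
    account_age_days: int

def auto_flag(t: Transaction) -> bool:
    """Crude rules standing in for the automated 80 percent."""
    return t.amount > 10_000 or (t.account_age_days < 2 and t.amount > 500)

def triage(transactions):
    flagged, review_queue = [], []
    for t in transactions:
        (flagged if auto_flag(t) else review_queue).append(t)
    return flagged, review_queue

txns = [
    Transaction("t1", 15_000, 400),   # obvious: large amount, auto-flagged
    Transaction("t2", 800, 1),        # obvious: brand-new account
    Transaction("t3", 240, 90),       # subtle: left for the human analyst
]
auto, humans = triage(txns)
print([t.txn_id for t in auto], [t.txn_id for t in humans])   # ['t1', 't2'] ['t3']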


Sankar describes Palantir’s goal as fostering “human-computer symbiosis,” a term adapted from J.C.R. Licklider, a psychologist and computer scientist who published a prescient essay on the topic in 1960. Sankar contrasts that with what he calls the “AI bias” now dominant in the tech industry. “We focus on helping humans investigate hypotheses,” says Sankar. That’s only possible if analysts have tools that let them creatively examine data from every angle in search of those “aha” moments.


In practice, Palantir’s software gives the user tools to explore interconnected data and tries to present the information visually, often as maps that mirror how people think. One bank bought the software in order to detect rogue employees stealing or leaking sensitive information. The detective work was guided by when and where employees badged into buildings, and by records of their digital activities on the company’s network. “This is contrary to automated decision making, when an algorithm figures everything out based on past data,” says Ari Gesher, a Palantir engineer. “That works great. Except when the adversary is changing. And many classes of modern problems do have this adaptive adversary in the mix.”


Palantir’s devotion to human–computer symbiosis seems to be working. The nine-year-old company now has 1,200 employees and is expanding into new industries such as health care. Forbes estimated that it was on course for revenues of $450 million in 2013.


Four ways healthcare is putting artificial intelligence and machine learning to use


Several startups are applying AI to make healthcare delivery more efficient and automated. It’s worth a look at the diverse applications for AI across healthcare, including biotech and health IT, since these are areas where it is having a significant impact, from informing healthcare decisions to speeding up the selection of targets for drug development.


Medication adherence: AiCure uses mobile technology and facial recognition to determine whether the right person is taking a given drug at the right time. It uses mobile devices to capture patient data from an application, and automated algorithms to identify patients, the medication and the process of medication ingestion. That data gets transmitted in real time back to a clinician through a HIPAA-compliant network. Clinicians can confirm that patients are taking their medication as directed, and the technology can also be used to flag adverse events.


Next IT, a relative newcomer to healthcare, developed Alme Health Coach to dig deeper into why people aren’t taking their meds. The company previously developed “virtual assistants” to guide and better understand consumer problems in areas like banking, retail and money management. Part of the AI component involves repeating what users say to verify and clarify the thoughts they transmit back and forth. The health coach is designed to be configured for specific diseases, medications and treatments. It may be synched with the user’s sleep alarm so it can ask how they slept, which in turn can prompt questions about their medication. The idea is to collect actionable data that doctors can use to work better with patients, provided the patient agrees to share the data.


Healthy behavior: Welltok tapped IBM’s Watson superbrain to support its vision of connecting consumers with personalized activities. Its CaféWell Concierge app uses Watson’s natural language processing abilities to understand users’ goals and provide the right balance of nudges and alerts so users can meet those targets and be rewarded. Watson is also part of a broader mission in healthcare to provide more targeted care, such as guiding oncologists on the most appropriate cancer treatment options based on the patient’s medical history and other data.


Support for caregivers: Automated Insights put its natural language generation platform Wordsmith to work in a collaboration with GreatCall, a mobile app developer. GreatCall Link is an app that lets friends and family members keep up with what’s going on with the person carrying the GreatCall device the app connects with. The app notifies them when a connected device is used to call for help, and it is equipped with patented GPS technology, so it also shows the location of the device (and the user). Underscoring the level of interest in AI, Automated Insights was acquired this week by Vista Equity Partners and the sports data company STATS.


Drug development: Biotech companies such as Cloud Pharmaceuticals and Berg are also combining artificial intelligence and big data to identify new drug compounds. Johnson & Johnson and Sanofi are using Watson to find new targets for FDA-approved drugs.

Dagmawi hailu's curator insight, March 27, 2015 5:13 AM

One of the areas seeing great benefit from the advancement of AI is medicine. The field has used the technology to its advantage by assigning it to tasks and projects that make medical services smoother and more reliable. Support for caregivers, healthy behavior, medication adherence and drug development are the areas where AI is proving invaluable to the medical world.

Dagmawi hailu's curator insight, March 27, 2015 5:15 AM

Medication adherence, support for caregivers, drug development and healthy behavior are areas where healthcare is putting the advancement of AI to good use.

Josh Oj's curator insight, March 27, 2015 6:16 AM

A.I. is already being used in a variety of medical situations, so what does the future hold?


AI Has Arrived, and That Really Worries Some Of The World's Brightest Minds


On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.


That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.


Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.


AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.


“Things like computer vision are starting to work; speech recognition is starting to work There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”


Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”


Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, however. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI and Tallinn soon invested in DeepMind, and last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.


That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played it with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.


Steffi Tan's curator insight, March 23, 2015 8:23 AM

By voluntarily taking on the task of building successful artificial intelligence, those who are involved need to consider the social and ethical implications that AI may carry. The responsibility which comes with leading technology can be very daunting. But taking a pledge to dedicate time and effort to "good", rather than to selfish curiosity about where this new technology can take you, provides a better initial outlook on where AI can take us.


2Smart4Mankind: Google Has An Internal Committee To Discuss Its Fears About The Power Of Artificial Intelligence


Robots are coming. Google has assembled a team of experts in London who are working to "solve intelligence." They make up Google DeepMind, the US tech giant's artificial intelligence (AI) company, which it acquired in 2014. In an interview with MIT Technology Review, published recently, Demis Hassabis, the man in charge of DeepMind, spoke out about some of the company's biggest fears about the future of AI.


Hassabis and his team are creating opportunities to apply AI to Google services. The AI firm's work is about teaching computers to think like humans, and improved AI could help forge breakthroughs in loads of Google's services. It could enhance YouTube recommendations for users, for example, or make the company's mobile voice search better.


But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: "Eventually, I think human extinction will probably occur, and technology will likely play a part in this". He adds he thinks AI is the "no.1 risk for this century". It's ominous stuff. (Read about Elon Musk discussing his concerns about AI here.)

People like Stephen Hawking and Elon Musk are worried about what might happen as a result of advancements in AI. They're concerned that robots could grow so intelligent that they could independently decide to exterminate humans. And if Hawking and Musk are fearful, you probably should be too.


Hassabis showcased some DeepMind software in a video back in April. In it, a computer learns how to beat Atari video games — it wasn't programmed with any information about how to play, just given the controls and an instinct to win. AI specialist Stuart Russell of the University of California says people were "shocked".


Google is also concerned about the "other side" of developing computers in this way. That's why it set up an "ethics board". It's tasked with making sure AI technology isn't abused. As Hassabis explains: "It's (AI) something that we or other people at Google need to be cognizant of." Hassabis does concede that "we're still playing Atari games currently" — but as AI moves forward, the fear sets in.


What Happens to a Society when Robots Replace Workers?


The technologies of the past, by replacing human muscle, increased the value of human effort – and in the process drove rapid economic progress. Those of the future, by substituting for man’s senses and brain, will accelerate that process – but at the risk of creating millions of citizens who are simply unable to contribute economically, and with greater damage to an already declining middle class.


Estimates of general rates of technological progress are always imprecise, but it is fair to say that, in the past, progress came more slowly. Henry Adams, the historian, measured technological progress by the power generated from coal, and estimated that power output doubled every ten years between 1840 and 1900, a compounded rate of progress of about 7% per year. The reality was probably much less. For example, in 1848, the world record for rail speed reached 60 miles per hour. A century later, commercial aircraft could carry passengers at speeds approaching 600 miles per hour, a rate of progress of only about 2% per year.


By contrast, progress today comes rapidly. Consider the numbers for information storage density in computer memory. Between 1960 and 2003, those densities increased by a factor of five million, at times progressing at a rate of 60% per year. At the same time, true to Moore’s Law, semiconductor technology has been progressing at a 40% rate for more than 50 years. These rates of progress are embedded in the creation of intelligent machines, from robots to automobiles to drones, that will soon dominate the global economy – and in the process drive down the value of human labor with astonishing speed.
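
The quoted figures are all compound annual growth rates, which are easy to sanity-check in a few lines of Python (the inputs below come from the paragraphs above; the storage-density figure averages out to roughly 43% per year, consistent with bursts of 60%):

# Sanity check of the compound annual growth rates cited above.
def annual_rate(total_factor, years):
    """Total growth factor over a period -> equivalent compound annual rate."""
    return total_factor ** (1.0 / years) - 1.0

print(f"power doubling every 10 years : {annual_rate(2, 10):.1%}")                   # ~7.2% per year
print(f"rail 60 mph -> air 600 mph    : {annual_rate(600 / 60, 100):.1%}")            # ~2.3% per year
print(f"storage density x5,000,000    : {annual_rate(5_000_000, 2003 - 1960):.1%}")   # ~43% per year on average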


This is why we will soon be looking at hordes of citizens of zero economic value. Figuring out how to deal with the impacts of this development will be the greatest challenge facing free market economies in this century. If you doubt the march of worker-replacing technology, look at Foxconn, the world’s largest contract manufacturer. It employs more than one million workers in China. In 2011, the company installed 10,000 robots, called Foxbots. Today, the company is installing them at a rate of 30,000 per year. Each robot costs about $20,000 and is used to perform routine jobs such as spraying, welding, and assembly. On June 26, 2013, Terry Gou, Foxconn’s CEO, told his annual meeting that “We have over one million workers. In the future we will add one million robotic workers.” This means, of course, that the company will avoid hiring those next million human workers.


Just imagine what a Foxbot will soon be able to do if Moore’s Law holds steady and we continue to see performance leaps of 40% per year. Baxter, a $22,000 robot that just got a software upgrade, is being produced in quantities of 500 per year. A few years from now, a much smarter Baxter produced in quantities of 10,000 might cost less than $5,000. At that price, even the lowest-paid workers in the least developed countries might not be able to compete.

Tomasz Bienko's curator insight, January 19, 2015 12:29 PM

Above all, machines can replace people as a labor force, but that is partly what the research and new technologies are aimed at; we can already see it in the mechanization of particular sectors of the economy (agriculture, for example). People try to simplify their lives, but they may end up eating their own tail. That is probably a nearer-term problem we will have to face as we develop this technology further, rather than, say, the more distant prospect of artificial intelligence rebelling. Given how Moore's Law is revised from year to year, the changes will be observable soon, and it is rising unemployment that may be the first problem we notice in the development of artificial intelligence. A machine will not replace a human in everything, in every job, but perhaps that, too, is only a matter of time?


Does rampant AI spell the end of humanity?


Prof Stephen Hawking has joined a roster of experts worried about what follows when humans build a device or write some software that can properly be called intelligent. Such an artificial intelligence (AI), he fears, could spell the end of humanity. Similar worries were voiced by Tesla boss Elon Musk in October. He declared rampant AI to be the "biggest existential threat" facing mankind. He wonders if we will find our end beneath the heels of a cruel and calculating artificial intelligence. So too does the University of Oxford's Prof Nick Bostrom, who has said an AI-led apocalypse could engulf us within a century.


Google's director of engineering, Ray Kurzweil, is also worried about AI, albeit for more subtle reasons. He is concerned that it may be hard to write an algorithmic moral code strong enough to constrain and contain super-smart software. “We're already living in the early days of the post-AI world.”


Many films, such as The Terminator movies, 2001, The Matrix, Blade Runner, to mention a few, pit puny humans against AI-driven enemies. More recently, Spike Jonze's Her involved a romance between man and operating system, Alex Garland's forthcoming Ex Machina debates the humanity of an android and the new Avengers movie sees superheroes battle Ultron - a super-intelligent AI intent on extinguishing mankind. Which it would do with ease were it not for Thor, Iron Man and their super-friends.


Even today we are getting hints about how paltry human wits can be when set against computers that throw all their computational horsepower at a problem. Chess computers now routinely beat all but the best human players. Complicated mathematics is a snap for a device as lowly as the smartphone in your pocket.


But is the risk real? Once humans code the first genuinely smart computer program that then goes on to develop its smarter successors, is the writing on the wall for humans? Maybe, said Neil Jacobstein, AI and robotics co-chairman at California's Singularity University. "I don't think that ethical outcomes from AI come for free," he said, adding that work now will significantly improve our chances of surviving the rise of rampant AI. "It's best to do that before the technologies are fully developed and AI and robotics are certainly not fully developed yet," he said. "The possibility of something going wrong increases when you don't think about what those potential wrong things are."



Alternative to the Turing test: The Lovelace 2.0 Test


Georgia Tech associate professor Mark Riedl has developed a new kind of “Turing test” — a test proposed in 1950 by computing pioneer Alan Turing to determine whether a machine or computer program exhibits human-level intelligence. Most Turing test designs require a machine to engage in dialogue and convince (trick) a human judge that it is an actual person. But creating certain types of art also requires intelligence, leading Riedl to consider whether that approach might offer a better gauge of a machine’s ability to replicate human thought.


“It’s important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human,” Riedl said. “And yet it has, and it has proven to be a weak measure because it relies on deception. This proposal suggests that a better measure would be a test that asks an artificial agent to create an artifact requiring a wide range of human-level intelligent capabilities.”


Here are the basic test rules:


  • The artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator.
  • The human evaluator must determine that the object is a valid representative of the creative subset and that it meets the criteria. (The created artifact needs only meet these criteria — it does not need to have any aesthetic value.)
  • A human referee must determine that the combination of the subset and criteria is not an impossible standard.


The Lovelace 2.0 Test stems from the original Lovelace Test as proposed by Bringsjord, Bello and Ferrucci in 2001. The original test required that an artificial agent produce a creative item in such a way that the agent’s designer cannot explain how it developed the creative item. The item, thus, must be created in a way that is valuable, novel and surprising.


Riedl contends that the original Lovelace test does not establish clear or measurable parameters. Lovelace 2.0, however, enables the evaluator to work with defined constraints without making value judgments such as whether the created artistic object is surprising.

Riedl’s paper will be presented at Beyond the Turing Test, an Association for the Advancement of Artificial Intelligence (AAAI) workshop to be held January 25–29, 2015, in Austin, Texas.


Carlos Garcia Pando's comment, November 21, 2014 3:59 AM
Is it a better machine for the tasks of machines if it passes the Lovelace 2.0 test? It proves the machine can imitate certain human abilities, but I expect machines to surpass us humans in our weaknesses and limitations.

AI breaking ground: building a natural description of images


People can summarize a complex scene in a few words without thinking twice. It’s much more difficult for computers. But we’ve just gotten a bit closer — we’ve developed a machine-learning system that can automatically produce captions that accurately describe images the first time it sees them. This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images.

Recent research has greatly improved object detection, classification, and labeling. But accurately describing a complex scene requires a deeper representation of what’s going on in the scene, capturing how the various objects relate to one another and translating it all into natural-sounding language.


Many efforts to construct computer-generated natural descriptions of images propose combining current state-of-the-art techniques in both computer vision and natural language processing to form a complete image description approach. But what if we instead merged recent computer vision and language models into a single jointly trained system, taking an image and directly producing a human readable sequence of words to describe it?

This idea comes from recent advances in machine translation between languages, where a Recurrent Neural Network (RNN) transforms, say, a French sentence into a vector representation, and a second RNN uses that vector representation to generate a target sentence in German.

Now, what if we replaced that first RNN and its input words with a deep Convolutional Neural Network (CNN) trained to classify objects in images? Normally, the CNN’s last layer is used in a final Softmax among known classes of objects, assigning a probability that each object might be in the image. But if we remove that final layer, we can instead feed the CNN’s rich encoding of the image into an RNN designed to produce phrases. We can then train the whole system directly on images and their captions, so it maximizes the likelihood that descriptions it produces best match the training descriptions for each image.
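
Here is a minimal PyTorch-style sketch of that encoder-decoder idea (an illustrative toy, not Google's actual model): a small CNN encodes the image, there is no final classification softmax, and the image encoding is fed as the first input to an LSTM that is trained to maximize the likelihood of the caption tokens.

import torch
import torch.nn as nn

class ToyCaptioner(nn.Module):
    """Toy image-captioning model: CNN encoder feeds an LSTM decoder.
    In the system described above, the encoder is a large pretrained
    object-classification CNN with its final softmax layer removed."""
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),                  # rich image encoding
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_vec = self.encoder(images).unsqueeze(1)     # (batch, 1, embed)
        words = self.embed(captions[:, :-1])            # teacher forcing
        states, _ = self.decoder(torch.cat([img_vec, words], dim=1))
        return self.to_vocab(states)                    # word logits per position

# Train to maximize the likelihood of the reference captions (fake batch here):
model = ToyCaptioner()
images = torch.randn(4, 3, 64, 64)
captions = torch.randint(0, 1000, (4, 12))
logits = model(images, captions)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), captions.reshape(-1))
loss.backward()
print("toy caption loss:", round(float(loss), 3))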

Natural Language Careers's curator insight, November 19, 2014 8:53 AM

Google making progress towards automatic captioning.  Cool stuff.


AI cameras can talk to each other to identify and track people


University of Washington electrical engineers have developed a way to automatically track people across moving and still cameras by using an algorithm that trains the networked cameras to learn one another’s differences. The cameras first identify a person in a video frame, then follow that same person across multiple camera views.


“Tracking humans automatically across cameras in a three-dimensional space is new,” said lead researcher Jenq-Neng Hwang, a UW professor of electrical engineering. “As the cameras talk to each other, we are able to describe the real world in a more dynamic sense.” Hwang and his research team presented their results last month in Qingdao, China, at the Intelligent Transportation Systems Conference sponsored by the Institute of Electrical and Electronics Engineers (IEEE).


With the new technology, a car with a mounted GPS display could take video of a scene, then identify and track humans and overlay them on a virtual 3D map on the display. The UW researchers are developing this to work in real time, which could track a specific person who is dodging the police.


Real-time tracking by Google Earth:  “Our idea is to enable the dynamic visualization of the realistic situation of humans walking on the road and sidewalks, so eventually people can see the animated version of the real-time dynamics of city streets on a platform like Google Earth,” Hwang said.


Hwang’s research team in the past decade has developed a way for video cameras — from the most basic models to high-end devices – to talk to each other as they record different places in a common location. The problem with tracking a human across cameras of non-overlapping fields of view is that a person’s appearance can vary dramatically in each video because of different perspectives, angles and color hues produced by different cameras. The researchers overcame this by building a link between the cameras. Cameras first record for a couple of minutes to gather training data, systematically calculating the differences in color, texture and angle between a pair of cameras for a number of people who walk into the frames in a fully unsupervised manner without human intervention.
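
One simple way to picture that pairwise calibration step is a per-channel quantile mapping learned from whatever people the two cameras happened to see during the training window; the hypothetical Python sketch below (not the UW algorithm) translates colors as seen by camera A into roughly what camera B would record.

import numpy as np

def color_transfer_map(samples_cam_a, samples_cam_b, n_quantiles=32):
    """Per-channel quantile mapping learned from unlabeled co-observations.
    samples_cam_*: (N, 3) arrays of pixel colors gathered by each camera
    during the unsupervised training period described above."""
    qs = np.linspace(0, 100, n_quantiles)
    return [(np.percentile(samples_cam_a[:, c], qs),
             np.percentile(samples_cam_b[:, c], qs)) for c in range(3)]

def translate(colors, maps):
    """Re-render camera-A colors roughly as camera B would see them."""
    out = np.empty_like(colors, dtype=float)
    for c, (src, dst) in enumerate(maps):
        out[:, c] = np.interp(colors[:, c], src, dst)
    return out

# Fake data: camera B is darker than camera A and adds a blue cast.
rng = np.random.default_rng(0)
cam_a = rng.uniform(0, 255, size=(5000, 3))
cam_b = cam_a * 0.8 + np.array([0.0, 0.0, 20.0])

maps = color_transfer_map(cam_a, cam_b)
print(np.round(translate(np.array([[200.0, 120.0, 60.0]]), maps)))  # ~[160.  96.  68.]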


Man vs. Machine: Will Computers Soon Become More Intelligent Than Us?


Computers might soon become more intelligent than us. Some of the best brains in Silicon Valley are now trying to work out what happens next.


Nate Soares, a former Google engineer, is weighing up the chances of success for the project he is working on. He puts them at only about 5 per cent. But the odds he is calculating aren’t for some new smartphone app. Instead, Soares is talking about something much more arresting: whether programmers like him will be able to save mankind from extinction at the hands of its own most powerful creation.


The object of concern – both for him and the Machine Intelligence Research Institute (Miri), whose offices these are – is artificial intelligence (AI). Super-smart machines with malicious intent are a staple of science fiction, from the soft-spoken Hal 9000 to the scarily violent Skynet. But the AI that people like Soares believe is coming mankind’s way, very probably before the end of this century, would be much worse.


Besides Soares, there are probably only four computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure AI remains “friendly”, says Luke Muehlhauser, Miri’s director. It isn’t unusual to hear people express big thoughts about the future in Silicon Valley these days – though most of the technology visions are much more benign. It sometimes sounds as if every entrepreneur, however trivial the start-up, has taken a leaf from Google’s mission statement and is out to “make the world a better place”.


Warnings have lately grown louder. Astrophysicist Stephen Hawking, writing earlier this year, said that AI would be “the biggest event in human history”. But he added: “Unfortunately, it might also be the last.”


Elon Musk – whose successes with electric cars (through Tesla Motors) and private space flight (SpaceX) have elevated him to almost superhero status in Silicon Valley – has also spoken up. Several weeks ago, he advised his nearly 1.2 million Twitter followers to read Superintelligence, a book about the dangers of AI, which has made him think the technology is “potentially more dangerous than nukes”. Mankind, as Musk sees it, might be like a computer program whose usefulness ends once it has started up a more complex piece of software. “Hope we’re not just the biological boot loader for digital superintelligence,” he tweeted. “Unfortunately, that is increasingly probable.”


Could AI and Self-Flying Airplanes Have Prevented The Germanwings Crash?


No level of security screening short of mind-reading could have prevented the crash of Germanwings flight 9525. But what can be done? The New York Times editorial calls for the American standard that requires two crew members be in the cockpit at all times to be adopted by “all airlines.” This suggestion is reasonable, but would not prevent a team of two pilots from accomplishing a similarly evil deed. The Times correctly asserts, “Air travel over all remains incredibly safe.”


The plane in question, the Airbus A320, has among the world’s best safety records and was the first commercial airliner to have an all-digital fly-by-wire control system. Much of the criticism over the years of these fly-by-wire systems has focused on the problem of pilots becoming too dependent on technology, but these systems could also be a means of preventing future tragedies. In fly-by-wire planes, a story on a previous Airbus crash in Popular Mechanics reports, “The vast majority of the time, the computer operates within what’s known as normal law, which means that the computer will not enact any control movements that would cause the plane to leave its flight envelope. The flight control computer under normal law will not allow an aircraft to stall, aviation experts say.” If autopilot is disconnected or reset, as the New York Times reports it was on the Germanwings plane, it can be switched to alternate law, “a regime with far fewer restrictions on what a pilot can do.”
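
In code, the normal-law idea amounts to clamping pilot commands to the flight envelope. The deliberately simplified Python sketch below (illustrative placeholder limits, nothing like certified flight-control software) makes the contrast with alternate law concrete.

# Illustrative only: a toy "envelope protection" clamp, loosely inspired by
# the normal-law / alternate-law distinction described above. The numeric
# limits are placeholders, not certified Airbus values.
def limit_command(pitch_deg, bank_deg, law="normal"):
    if law == "normal":
        pitch_deg = max(-15.0, min(30.0, pitch_deg))   # keep inside the envelope
        bank_deg = max(-67.0, min(67.0, bank_deg))
    return pitch_deg, bank_deg                         # alternate law: pass through

print(limit_command(-40.0, 10.0, law="normal"))        # (-15.0, 10.0) nose-down limited
print(limit_command(-40.0, 10.0, law="alternate"))     # (-40.0, 10.0) no protection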


AI pioneer Jeff Hawkins has addressed the recent upswell of fear about AI and “superintelligence” in a post on Re/code. “The Terminator Is Not Coming,” his title announces. “The Future Will Thank Us.” We are so concerned, it seems, about giving machines too much power that we appear to miss the fact that the largest existential threat to humans is other humans. Such seems to be the case with Germanwings 4U9525.


Hawkins is the inventor of the Palm Pilot (the first personal digital assistant or PDA) and the Palm Treo (one of the first smartphones). He is also the co-founder, with Donna Dubinsky, of the machine intelligence company Numenta. Grok, the company’s first commercial product, sifts through massive amounts of server activity data on Amazon Web Services (AWS) to identify anomalous patterns of events. This same approach could easily be used to monitor flight data from airplanes and alert ground control in real time of the precise nature of unexpected activity. Numenta open sources its software (Numenta.org) and is known to DARPA and other government research agencies, so multiple parties could already be at work on such a system.
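
Grok's internals are not described here, but the general shape of flagging "anomalous patterns of events" in a stream can be sketched with something as simple as a rolling z-score in Python (an assumption-laden toy, not Numenta's HTM):

import math
import random
from collections import deque

class StreamingAnomalyDetector:
    """Toy rolling z-score detector: flags values far from the recent baseline."""
    def __init__(self, window=100, threshold=4.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, x):
        anomalous = False
        if len(self.buf) >= 10:                         # need some history first
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            anomalous = abs(x - mean) / (math.sqrt(var) + 1e-9) > self.threshold
        self.buf.append(x)
        return anomalous

detector = StreamingAnomalyDetector()
stream = [random.gauss(500.0, 10.0) for _ in range(300)] + [900.0]   # sudden spike at the end
print("anomalies at:", [i for i, x in enumerate(stream) if detector.update(x)])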


Hawkins’ approach to machine intelligence, Hierarchical Temporal Memory (HTM), has some distinct advantages over the highly-publicized technique of deep learning (DL). Both use hierarchies of matrices to learn patterns from large data sets. HTM takes its inspiration from biology and uses the layering of neurons in the brain as a model for its architecture. DL is primarily mathematical and projects the abstraction of the brain’s hierarchy to deeper and deeper levels. HTM uses larger matrices and flatter hierarchies to store patterns than DL and the data in these matrices is characterized by sparse distributions. Most important, HTM processes time-based data whereas DL trains mostly on static data sets.


For the emerging Internet of Things (IoT), time-based and real-time data is incredibly important. Systems that can learn continuously from these data streams, like Numenta’s, will be particularly valuable for keeping track of all of those things—including errant airplanes. Could machine intelligence have prevented this tragedy? Hawkins thinks so but notes, “All the intelligence in the world in the cockpit won’t solve any problem if the pilot decides to turn it off.” There will need to be aviation systems “designed for potential override from ground.” What are we the most scared of, individual agency or systematic control? Based on the Germanwings evidence so far, lack of override control from the ground is the greater threat.



Researchers Create A Simulated Mouse Brain in a Virtual Mouse Body


Scientist Marc-Oliver Gewaltig and his team at the Human Brain Project (HBP) built a model mouse brain and a model mouse body, integrating them both into a single simulation and providing a simplified but comprehensive model of how the body and the brain interact with each other. "Replicating sensory input and motor output is one of the best ways to go towards a detailed brain model analogous to the real thing," explains Gewaltig.


As computing technology improves, their goal is to build the tools and the infrastructure that will allow researchers to perform virtual experiments on mice and other virtual organisms. This virtual neurorobotics platform is just one of the collaborative interfaces being developed by the HBP. A first version of the software will be released to collaborators in April. The HBP scientists used biological data about the mouse brain collected by the Allen Brain Institute in Seattle and the Biomedical Informatics Research Network in San Diego. These data contain detailed information about the positions of the mouse brain's 75 million neurons and the connections between different regions of the brain. They integrated this information with complementary data on the shapes, sizes and connectivity of specific types of neurons collected by the Blue Brain Project in Geneva.


A simplified version of the virtual mouse brain (just 200,000 neurons) was then mapped to different parts of the mouse body, including the mouse's spinal cord, whiskers, eyes and skin. For instance, touching the mouse's whiskers activated the corresponding parts of the mouse sensory cortex. And they expect the models to improve as more data comes in and gets incorporated. For Gewaltig, building a virtual organism is an exercise in data integration. By bringing together multiple sources of data of varying detail into a single virtual model and testing this against reality, data integration provides a way of evaluating – and fostering – our own understanding of the brain. In this way, he hopes to provide a big picture of the brain by bringing together separated data sets from around the world. Gewaltig compares the exercise to the 15th century European data integration projects in geography, when scientists had to patch together known smaller scale maps. These first attempts were not to scale and were incomplete, but the resulting globes helped guide further explorations and the development of better tools for mapping the Earth, until reaching today's precision.
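
The scale of the HBP model is far beyond a snippet, but the body-to-brain loop it describes, a whisker touch driving activity in the corresponding sensory cortex, can be caricatured with a toy leaky integrate-and-fire population in Python (an illustration, not the HBP simulator):

import numpy as np

# Toy "whisker -> barrel cortex" loop: a whisker deflection injects current
# into a small leaky integrate-and-fire population, which starts to spike.
rng = np.random.default_rng(1)
n_neurons, dt, steps = 200, 1.0, 300                   # milliseconds
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0

v = np.full(n_neurons, v_rest)
whisker_touch = np.zeros(steps)
whisker_touch[100:200] = 1.0                            # whisker deflected from 100 to 200 ms

spikes_per_step = np.zeros(steps)
for t in range(steps):
    drive = 20.0 * whisker_touch[t] + rng.normal(0.0, 2.0, n_neurons)
    v += (dt / tau) * (v_rest - v + drive)              # leaky integration
    fired = v >= v_thresh
    spikes_per_step[t] = fired.sum()
    v[fired] = v_reset                                  # reset after a spike

print("spikes before touch:", int(spikes_per_step[:100].sum()))
print("spikes during touch:", int(spikes_per_step[100:200].sum()))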


Read more: https://www.humanbrainproject.eu
Human Brain Project: http://www.humanbrainproject.eu
NEST simulator software: http://nest-simulator.org/
Largest neuronal network simulation using NEST: http://bit.ly/173mZ5j

Open Source Data Sets:
Allen Institute for Brain Science: http://www.brain-map.org
Bioinformatics Research Network (BIRN): http://www.birncommunity.org

The Behaim Globe : 
Germanisches National Museum, http://www.gnm.de/
Department of Geodesy and Geoinformation, TU Wien, http://www.geo.tuwien.ac.at


First general learning system that can learn directly from experience to master a wide range of challenging tasks


The gamer punches in play after endless play of the Atari classic Space Invaders. Through an interminable chain of failures, the gamer adapts the gameplay strategy to reach for the highest score. But this is no human with a joystick in a 1970s basement. Artificial intelligence is learning to play Atari games. The Atari addict is a deep-learning algorithm called DQN.


This algorithm began with no previous information about Space Invaders—or, for that matter, the other 48 Atari 2600 games it is learning to play and sometimes master after two straight weeks of gameplay. In fact, it wasn't even designed to take on old video games; it is a general-purpose, self-teaching computer program. Yet after watching the Atari screen and fiddling with the controls over two weeks, DQN is playing at a level that would humiliate even a professional flesh-and-blood gamer.


Volodymyr Mnih and his team of computer scientists at Google, who have just unveiled DQN in the journal Nature, say their creation is more than just an impressive gamer. Mnih says the general-purpose DQN learning algorithm could be the first rung on a ladder to artificial intelligence.


"This is the first time that anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks," says Demis Hassabis, a member of Google's team. The algorithm runs on little more than a powerful desktop PC with a souped up graphics card. At its core, DQN combines two separate advances in machine learning in a fascinating way. The first advance is a type of positive-reinforcement learning method called Q-learning. This is where DQN, or Deep Q-Network, gets its middle initial. Q-learning means that DQN is constantly trying to make joystick and button-pressing decisions that will get it closer to a property that computer scientists call "Q." In simple terms, Q is what the algorithm approximates to be biggest possible future reward for each decision. For Atari games, that reward is the game score.


Knowing what decisions will lead it to the high scorer's list, though, is no simple task. Keep in mind that DQN starts with zero information about each game it plays. To understand how to maximize your score in a game like Space Invaders, you have to recognize a thousand different facts: how the pixelated aliens move, the fact that shooting them gets you points, when to shoot, what shooting does, the fact that you control the tank, and many more assumptions, most of which a human player understands intuitively. And then, if the algorithm changes to a racing game, a side-scroller, or Pac-Man, it must learn an entirely new set of facts. That's where the second machine learning advance comes in. DQN is also built upon a vast and partially human brain-inspired artificial neural network. Simply put, the neural network is a complex program built to process and sort information from noise. It tells DQN what is and isn't important on the screen.
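
DQN pairs this Q-learning idea with a deep network and raw screen pixels; stripped down to a lookup table and a toy five-state corridor, the core update it builds on looks like this in Python (an illustrative sketch, not the DeepMind system):

import random

# Tabular Q-learning on a toy corridor: move left or right, reward of +1 only
# at the rightmost state. DQN replaces the table with a deep neural network
# and learns the same kind of value estimates from raw screen pixels.
n_states, actions = 5, [0, 1]                 # 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:                         # explore sometimes
            a = random.choice(actions)
        else:                                                 # otherwise exploit
            a = max(actions, key=lambda act: Q[s][act])
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Move Q(s, a) toward: immediate reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([round(max(q), 2) for q in Q])          # values rise toward the rewarding state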


Nature Video of DQN AI


Three Men have their hands amputated and replaced with bionic ones


Bionic hands are go. Three men with serious nerve damage had their hands amputated and replaced by prosthetic ones that they can control with their minds.


The procedure, dubbed "bionic reconstruction", was carried out by Oskar Aszmann at the Medical University of Vienna, Austria.

The men had all suffered accidents which damaged the brachial plexus – the bundle of nerve fibers that runs from the spine to the hand. Despite attempted repairs to those nerves, the arm and hand remained paralyzed.

"But still there are some nerve fibers present," says Aszmann. "The injury is so massive that there are only a few. This is just not enough to make the hand alive. They will never drive a hand, but they might drive a prosthetic hand."


This approach works because the prosthetic hands come with their own power source. Aszmann's patients plug their hands in to charge every night. Relying on electricity from the grid to power the hand means all the muscles and nerves need do is send the right signals to a prosthetic.


First they practiced activating the muscle using an armband of sensors that picked up on the electrical activity. Then they moved on to controlling a virtual arm. Finally, Aszmann amputated their hands, and replaced them with a standard prosthesis under the control of the muscle and sensors.


"I was impressed and first struck with the surgical innovation," says Dustin Tyler of the Louis Stokes Veterans Affairs Medical Center in Cleveland, Ohio. "There's something very personal about having a hand; most people will go to great lengths to recover one, even if it's not very functional. It's interesting that people are opting for this."


While Aszmann's approach uses a grafted muscle to relay signals from the brain to a prosthesis, others are taking a more direct route, reading brain waves directly and using them to control the hand. A team at the University of Pittsburgh, Pennsylvania, has used a brain implant to allow a paralysed woman to control a robotic arm using her thoughts alone.


Supporting the elderly: A caring robot with ‘emotions’ and memory


Researchers at the University of Hertfordshire have developed a prototype of a social robot that supports independent living for the elderly, working in partnership with their relatives or carers.


Farshid Amirabdollahian, a senior lecturer in Adaptive Systems at the university, led a team of nine partner institutions from five European countries as part of the €4,825,492 project called ACCOMPANY (Acceptable Robotics Companions for Ageing Years).


“This project proved the feasibility of having companion technology, while also highlighting different important aspects such as empathy, emotion, social intelligence as well as ethics and its norm surrounding technology for independent living,” Amirabdollahian said.



Madison & Morgan's curator insight, February 11, 2015 1:31 PM

This article is about a robot that can help the elderly in their daily lives. The robot is capable of human emotions and has moral ethics. This shows the technological advances that Europe has made and relates to its economy.

olyvia Schaefer and Rachel Shaberman's curator insight, February 11, 2015 5:09 PM

Europe Arts

Europe has created many inventions, but the most interesting to me is the robot that has emotions and memory. This robot is supposed to help the elderly with their carers and daily life. The Europeans were able to create technology that has empathy, emotions, and social intelligence in what is just a robot. The Europeans were able to accomplish something amazing.

ToKTutor's curator insight, February 21, 2015 12:06 PM

Title 5: If a robot can have emotion and memory, can it also be programmed to have instinctive judgment?


Researchers show a machine learning network for connected devices


Researchers at Ohio State University have developed a method for building a machine learning algorithm from data gathered from a variety of connected devices. There are two cool things about their model worth noting. The first is that the model is distributed and second, it can keep data private.


The researchers call their model Crowd-ML and the idea is pretty basic. Each device runs a version of a necessary app, much like one might run a version of SETI@home or other distributed computing application, and grabs samples of data to send to a central server. The server can tell when enough of the right data has been gathered to “teach” the computer and only grabs the data it needs, ensuring a relative amount of privacy.


The model uses a variant of stochastic (sub)gradient descent instead of batch processing to grab data for machine learning, which is what makes the Crowd-ML effort different. Stochastic gradient descent is the basis for a lot of machine learning and deep learning efforts. It uses knowledge gleaned from previous computations to inform the next ones, making it iterative, as opposed to something processed all at once.
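
A stripped-down Python sketch of that pattern (hypothetical, not the Crowd-ML code): each device computes a gradient on one of its own samples and shares only that gradient with the server, which applies small iterative updates instead of waiting to process a full batch.

import random

# Hypothetical sketch of distributed stochastic gradient descent: devices keep
# their raw data and send only per-sample gradients to the shared model.
def make_device_data(n=50, true_w=3.0, true_b=-1.0):
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    return [(x, true_w * x + true_b + random.gauss(0.0, 0.1)) for x in xs]

devices = [make_device_data() for _ in range(20)]      # twenty "phones"
w, b, lr = 0.0, 0.0, 0.1                               # server's shared model

def local_gradient(sample, w, b):
    """Runs on the device: gradient of squared error for one local sample."""
    x, y = sample
    err = (w * x + b) - y
    return 2.0 * err * x, 2.0 * err                    # d/dw, d/db

for step in range(3000):
    device = random.choice(devices)                    # whichever device reports in
    grad_w, grad_b = local_gradient(random.choice(device), w, b)
    w -= lr * grad_w                                   # stochastic, iterative update
    b -= lr * grad_b

print(round(w, 2), round(b, 2))                        # close to the true 3.0 and -1.0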


The paper goes on to describe how one can tweak the Crowd-ML model to ensure more or less privacy and process information faster or in greater amounts. It tries to achieve the happy medium between protecting privacy and gathering the right amount of data to generate a decent sample size to train the machine learning algorithm.

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

The Neural Turing Machine (NTM) — Google's DeepMind AI project

The Neural Turing Machine (NTM) — Google's DeepMind AI project | Amazing Science | Scoop.it

The mission of Google’s DeepMind Technologies startup is to “solve intelligence.” Now, researchers there have developed an artificial intelligence system that can mimic some of the brain’s memory skills and even program like a human. The researchers developed a kind of neural network that can use external memory, allowing it to learn and perform tasks based on stored data.


The so-called Neural Turing Machine (NTM) that DeepMind researchers have been working on combines a neural network controller with a memory bank, giving it the ability to learn to store and retrieve information. The system’s name refers to computer pioneer Alan Turing’s formulation of computers as machines having working memory for storage and retrieval of data.


The researchers put the NTM through a series of tests including tasks such as copying and sorting blocks of data. Compared to a conventional neural net, the NTM was able to learn faster and copy longer data sequences with fewer errors. They found that its approach to the problem was comparable to that of a human programmer working in a low-level programming language.
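The part of that memory bank easiest to sketch is content-based addressing: the controller emits a key vector, and the machine reads back a blend of memory rows weighted by how closely each row matches the key. The snippet below illustrates that mechanism under simplified assumptions and is not DeepMind's implementation; the controller network, write heads and location-based addressing are left out.

```python
# Content-based read from an external memory matrix, NTM-style (illustrative).
import numpy as np

def read_by_content(memory, key, beta=5.0):
    """memory: (N, M) array of N slots; key: (M,) query; beta: focus sharpness."""
    # Cosine similarity between the key and every memory row.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    # Turn similarities into attention weights over the slots.
    weights = np.exp(beta * sims)
    weights /= weights.sum()
    # The read vector is a weighted blend of the memory rows.
    return weights @ memory, weights

memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.9, 0.1]])
read_vec, weights = read_by_content(memory, key=np.array([0.0, 1.0, 0.0]))
print(np.round(weights, 3))   # most of the weight lands on the rows resembling the key
```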


The NTM “can infer simple algorithms such as copying, sorting and associative recall from input and output examples,” DeepMind’s Alex Graves, Greg Wayne and Ivo Danihelka wrote in a research paper available on the arXiv repository.


“Our experiments demonstrate that it is capable of learning simple algorithms from example data and of using these algorithms to generalize well outside its training regime.”


A spokesman for Google declined to provide more information about the project, saying only that the research is “quite a few layers down from practical applications.” In a 2013 paper, Graves and colleagues showed how they had used a technique known as deep reinforcement learning to get DeepMind software to learn to play seven classic Atari 2600 video games, some better than a human expert, with the only input being information visible on the game screen.


Google confirmed earlier this year that it had acquired London-based DeepMind Technologies, founded in 2011 as an artificial intelligence company. The move is expected to have a major role in advancing the search giant’s research into robotics, self-driving cars and smart-home technologies.


More recently, DeepMind co-founder Demis Hassabis wrote in a blog post that Google is partnering with artificial intelligence researchers from Oxford University to study topics including image recognition and natural language understanding.

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

There's Now A Computer Program Playing Perfect Poker, Even Knows How To Bluff

There's Now A Computer Program Playing Perfect Poker, Even Knows How To Bluff | Amazing Science | Scoop.it

No program had played heads-up limit Texas Hold’em essentially perfectly, until Cepheus came along. Michael Bowling and his team at the University of Alberta instructed the computer to play billions of poker games against itself. Initially, they taught Cepheus only the basic rules of Texas Hold’em. The computer started off playing randomly, but eventually it began to learn. Cepheus started compiling lists of “regrets”: situations in which it could have folded or bluffed or bet differently, and won more money by doing so. The researchers then programmed Cepheus to begin acting on its most serious regrets while ignoring its more minor ones.


Ultimately, Cepheus whittled its list of regrets nearly down to zero. Now the program can bet and bluff with the best. “If you do this in a precise mathematical way, you can prove your regrets are guaranteed to go down to zero,” Bowling says. “And in the process of approaching zero, you must be approaching perfect play.”
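The recipe behind those shrinking regrets is regret matching, the building block of the counterfactual-regret methods used for poker. The toy below runs it on rock-paper-scissors in self-play, just to show how acting on accumulated regrets pulls the average strategy toward equilibrium; Cepheus itself runs a vastly larger counterfactual variant (CFR+) over the whole game tree.

```python
# Regret matching on rock-paper-scissors in self-play (illustrative toy, not Cepheus).
import numpy as np

ACTIONS = 3                                    # rock, paper, scissors
PAYOFF = np.array([[ 0, -1,  1],               # PAYOFF[my_action, opponent_action]
                   [ 1,  0, -1],
                   [-1,  1,  0]])

def strategy_from_regret(regret):
    positive = np.maximum(regret, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(ACTIONS, 1.0 / ACTIONS)

rng = np.random.default_rng(0)
regret = [np.zeros(ACTIONS), np.zeros(ACTIONS)]
strategy_sum = [np.zeros(ACTIONS), np.zeros(ACTIONS)]

for _ in range(50000):
    strategies = [strategy_from_regret(r) for r in regret]
    actions = [rng.choice(ACTIONS, p=s) for s in strategies]
    for p in range(2):
        me, opp = actions[p], actions[1 - p]
        # Regret of each alternative: what it would have earned minus what we earned.
        regret[p] += PAYOFF[:, opp] - PAYOFF[me, opp]
        strategy_sum[p] += strategies[p]

print(np.round(strategy_sum[0] / strategy_sum[0].sum(), 2))   # approaches (0.33, 0.33, 0.33)
```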


Cepheus isn’t perfect, but it is guaranteed not to lose in the long run. That’s about as good as it gets for a game that still relies partially on chance. Cepheus’ performance has other experts in the field of artificial intelligence excited. “It’s a really interesting paper, with a convincing argument that a particular form of poker has been essentially solved,” says Howard Williams, a computer scientist and doctoral student at Queen Mary University of London, who was not involved in the study.


Beyond poker, Bowling envisions a new set of algorithms that could help security officers optimize checkpoints, random searches and placement of air marshals on flights. In these situations, a program like Cepheus could be taught to view potential terrorists as other players in a high-stakes game rife with variables. “That’s very close to what we have achieved here for the game of poker. It’s a strategy guaranteed not to lose,” he says.


If, however, you find yourself tempted (I know I am), Bowling and his team have set up a website where you can try your luck against Cepheus itself—the one computer program that always knows when to hold ‘em and when to fold ‘em.

more...
Lucile Debethune's curator insight, February 10, 2015 1:18 AM

It might seem "strange" to show interest in this article here, but an AI's behavior is one of the most difficult parts of robotics research: being able to create something like a bluff. And, as recent AI research shows, this behavior is not entirely programmed, but a response to experience and creativity, a sort of real learning... really impressive.

 

___

 

I don't usually share articles this far from my own interests on this Scoop.it page (I generally prefer to share them directly on Twitter... and I can only recommend you go take a look at Amazing Science). However, this article deals with one of the most important challenges in creating artificial intelligence: how their "personality" and reactions have been made to evolve toward behaviors that might seem irrational, such as bluffing here. And, as in recent years, this is not a directly programmed behavior but a way of learning, of building a behavior in response to experiences, trials and failures...

Scooped by Dr. Stefan Gruenwald
Scoop.it!

Deep-learning AI algorithm shines new light on mutations in once obscure areas of the genome

Deep-learning AI algorithm shines new light on mutations in once obscure areas of the genome | Amazing Science | Scoop.it

The so-called “streetlight effect” has often fettered scientists who study complex hereditary diseases. The term refers to an old joke about a drunk searching for his lost keys under a streetlight. A cop asks, "Are you sure this is where you lost them?" The drunk says, "No, I lost them in the park, but the light is better here."


For researchers who study the genetic roots of human diseases, most of the light has shone down on the 2 percent of the human genome that includes protein-coding DNA sequences. “That’s fine. Lots of diseases are caused by mutations there, but those mutations are low-hanging fruit,” says University of Toronto (U.T.) professor Brendan Frey, who studies genetic networks. “They’re easy to find because the mutation actually changes one amino acid to another one, and that very much changes the protein.”


The trouble is, many disease-related mutations also happen in noncoding regions of the genome—the parts that do not directly make proteins but that still regulate how genes behave. Scientists have long been aware of how valuable it would be to analyze the other 98 percent but there has not been a practical way to do it.


Now Frey has developed a “deep-learning” algorithm that effectively shines a light on the entire genome. A paper appearing December 18 in Science describes how this algorithm can identify patterns of mutation across coding and noncoding DNA alike. The algorithm can also predict how likely each variant is to contribute to a given disease. “Our method works very differently from existing methods,” says Frey, the study’s lead author. “GWAS-, QTL- and ENCODE-type approaches can't figure out causal relationships. They can only correlate. Our system can predict whether or not a mutation will cause a change in RNA splicing that could lead to a disease phenotype.”


RNA splicing is one of the major steps in turning genetic blueprints into living organisms. Splicing determines which bits of DNA code get included in the messenger-RNA strings that build proteins. Different configurations yield different proteins. Misregulated splicing contributes to an estimated 15 to 60 percent of human genetic diseases.
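The underlying "splicing code" idea can be caricatured as follows: extract features around an exon, predict its inclusion level for the reference and the mutated sequence, and flag variants whose predicted change is large. Everything in this sketch, from the toy features to the stand-in predictor and example sequences, is hypothetical for illustration and is not Frey's actual model.

```python
# Caricature of scoring a variant by its predicted effect on exon inclusion.
# Features and the "trained" predictor are hypothetical stand-ins.
import numpy as np

def sequence_features(seq):
    """Toy features: nucleotide composition of the sequence around a splice site."""
    return np.array([seq.count(base) / len(seq) for base in "ACGT"])

# Stand-in for a learned model mapping features to percent-spliced-in (PSI);
# in practice this would be a network trained on RNA-seq measurements.
weights = np.array([0.2, 0.9, 0.7, -0.5])
def predict_psi(features):
    return float(np.clip(features @ weights + 0.5, 0.0, 1.0))

reference = "AGGTAAGTCCATGCA"
mutant    = "AGGTCAGTCCATGCA"      # single-nucleotide variant near the splice site

delta = predict_psi(sequence_features(mutant)) - predict_psi(sequence_features(reference))
print(f"predicted change in exon inclusion: {delta:+.3f}")
# A large |delta| would flag the variant as a candidate splicing disruptor.
```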


The combination of whole-genome analysis and predictive models for RNA splicing makes Frey’s method a major contribution to the field, according to Stephan Sanders, an assistant professor at the University of California, San Francisco, School of Medicine. “I’m looking forward to using this tool in larger data sets and really getting [a] sense of how important splicing is,” he says. Sanders, who researches the genetic causes of diseases, notes that Frey’s approach complements, rather than replaces, other methods of genetic analysis. “I think any genomist [sic] would agree that noncoding [areas of the genome] are hugely important. This method is a really novel way of getting at that,” he says.

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Researchers print out self-learning robots

Researchers print out self-learning robots | Amazing Science | Scoop.it

When the robots of the future are set to extract minerals from other planets, they need to be both self-learning and self-repairing. Researchers at Oslo University have already succeeded in producing self-instructing robots on 3D printers.


“In the future, robots must be able to solve tasks in deep mines on distant planets, in radioactive disaster areas, in hazardous landslip areas and on the sea bed beneath the Antarctic. These environments are so extreme that no human being can cope. Everything needs to be automatically controlled. Imagine that the robot is entering the wreckage of a nuclear power plant. It finds a staircase that no-one has thought of. The robot takes a picture. The picture is analyzed. The arm of one of the robots is fitted with a printer. This produces a new robot, or a new part for the existing robot, which enables it to negotiate the stairs,” hopes Associate Professor Kyrre Glette, who is part of the Robotics and Intelligent Systems research team at Oslo University’s Department of Informatics.


Even if Glette’s ideas remain visions of the future, the robotics team in the Informatics Building have already developed three generations of self-learning robots.


Professor Mats Høvin was the man behind the first model, the chicken-robot named “Henriette”, which received much media coverage when it was launched ten years ago. Henriette had to teach itself how to walk, and to jump over obstacles. And if it lost a leg, it had to learn, unaided, how to hop on the other leg.


A few years later, Master’s student Tønnes Nygaard launched the second-generation robot. At the same time, the Informatics team developed a simulation program that was able to calculate what the body should look like. Just as for Henriette, the number of legs was pre-determined, but the computer program was at liberty to design the length of the legs and the distance between them.


The third generation of robots brings even greater flexibility. The simulation program takes care of the complete design and suggests the optimal number of legs and joints.
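The kind of search such a simulation program performs can be sketched as a simple evolutionary loop over morphology parameters. The fitness function below is a made-up stand-in for a physics simulation, not the Oslo team's actual simulator.

```python
# Toy evolutionary search over robot morphology (leg length and leg spacing).
# The fitness function is an invented stand-in for a walking simulation.
import random

def simulated_fitness(leg_length, leg_spacing):
    # Hypothetical: pretend the simulator rewards legs near 0.3 m spaced 0.2 m apart.
    return -((leg_length - 0.3) ** 2 + (leg_spacing - 0.2) ** 2)

def mutate(design, sigma=0.02):
    return tuple(max(0.05, gene + random.gauss(0, sigma)) for gene in design)

random.seed(1)
population = [(random.uniform(0.05, 0.6), random.uniform(0.05, 0.6)) for _ in range(20)]

for generation in range(100):
    population.sort(key=lambda d: simulated_fitness(*d), reverse=True)
    parents = population[:5]                                   # keep the best designs
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=lambda d: simulated_fitness(*d))
print(f"best design: leg length {best[0]:.2f} m, leg spacing {best[1]:.2f} m")
```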


Simulation is not enough. In order to test the functionality of the robots, they need to undergo trials in the real world. The robots are produced as printouts from a 3D printer. “Once the robots have been printed, their real-world functionalities quite often prove to be different from those of the simulated versions. We are talking of a reality gap. There will always be differences. Perhaps the floor is more slippery in reality, meaning that the friction coefficient will have to be changed. We are therefore studying how the robots deteriorate from simulation to laboratory stage,” says Mats Høvin.

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

Stanford project brings machine learning right to your browser and will make your tabs a lot smarter

Stanford project brings machine learning right to your browser and will make your tabs a lot smarter | Amazing Science | Scoop.it

Deep learning is one of the buzziest topics in technology at the moment, and for good reason: this subset of machine learning can unearth all kinds of useful new insights in data. It uses artificial neural networks to teach computers skills such as speech recognition, computer vision, and natural language processing. In the last few years, deep learning has helped forge advances in areas like object perception and machine translation, research topics that have long proven difficult for AI researchers to crack.


Trouble is, deep learning takes a ton of computational power, so its use is limited to companies that have the resources to throw at it. But what if you could achieve this kind of heavy-duty artificial intelligence in the browser? That's exactly the aim of a project out of Stanford called ConvNetJS. It's a JavaScript framework that brings deep learning models to the browser without the need for all that computing muscle. In time, it could make your tabs a lot smarter.


"Deep Learning has relatively recently achieved quite a few really nice results on big, important datasets," says Andrej Karpathy, the Standard PhD student behind ConvNetJS. "However, the majority of the available libraries and code base are currently geared primarily towards efficiency." Caffe, a popular convolutional neural network framework used by Flickr for image recognition (among many others) is written in C++ and is slow to compile. "ConvNetJS is designed with different trade-offs in mind," says Karpathy.



So how does this actually play out in the real world? Right now, Karpathy's website points to a couple of basic, live demos: classifying digits and other data using a neural network and using deep learning to dynamically "paint" an image based on any photo you upload. They are admittedly geeky—and not especially practical—examples, but what's important is the computation that's happening on the front end and how that's likely to evolve in the future. One likely usage is the development of browser extensions that run neural networks directly on websites. This could allow for more easily implemented image recognition or tools that can quickly parse and summarize long articles and perform sentiment analysis on their text. As the client-side technology evolves, the list of possibilities for machine learning in the browser will only grow.


Because it's JavaScript-based, the framework can't pull off quite the computational heavy lifting that other tools can, but it nonetheless raises the interesting prospect of bringing machine learning directly into the browser. "The idea is that a website could train a network on their end and then distribute the weights to the client, so the compute all happens on client and not server side, perhaps significantly improving latencies and significantly simplifying the necessary codebase," Karpathy explains. "ConvNetJS is not where I want it to be," admits Karpathy. "I work on it on a side of my PhD studies and with many deadlines it's hard to steal time. But I am slowly working on cleaner API, more docs, and WebGL support. Regardless, I think the cool trend here more generally is the possibility of running (or training) neural nets in the browser. Making our tabs smarter."
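That deployment pattern, training on the server and shipping the weights to the client, can be sketched roughly as below. The tiny network, the training loop and the JSON layout are illustrative assumptions and not ConvNetJS's actual API; the point is only that the trained parameters end up as a small file a browser-side runtime could fetch and evaluate.

```python
# Server-side sketch: train a tiny two-layer network, then export its weights as
# JSON for a browser-side runtime to load. Sizes and file layout are invented.
import json
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)            # toy binary labels

W1, b1 = 0.1 * rng.normal(size=(4, 8)), np.zeros(8)  # hidden layer
W2, b2 = 0.1 * rng.normal(size=(8, 1)), np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                          # forward pass
    p = sigmoid(h @ W2 + b2).ravel()
    grad_out = (p - y)[:, None] / len(X)              # gradient of cross-entropy loss
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)       # backpropagate through tanh
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

with open("model_weights.json", "w") as f:            # the client would fetch this file
    json.dump({"W1": W1.tolist(), "b1": b1.tolist(),
               "W2": W2.tolist(), "b2": b2.tolist()}, f)
```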

more...
No comment yet.
Scooped by Dr. Stefan Gruenwald
Scoop.it!

A brain-inspired chip lets drones learn during flight

A brain-inspired chip lets drones learn during flight | Amazing Science | Scoop.it

There isn’t much space between your ears, but what’s in there can do many things that a computer of the same size never could. Your brain is also vastly more energy efficient at interpreting the world visually or understanding speech than any computer system.


That’s why academic and corporate labs have been experimenting with “neuromorphic” chips modeled on features seen in brains. These chips have networks of “neurons” that communicate in spikes of electricity (see “Thinking in Silicon”). They can be significantly more energy-efficient than conventional chips, and some can even automatically reprogram themselves to learn new skills.


Now a neuromorphic chip has been untethered from the lab bench, and tested in a tiny drone aircraft that weighs less than 100 grams. In the experiment, the prototype chip, with 576 silicon neurons, took in data from the aircraft’s optical, ultrasound, and infrared sensors as it flew between three different rooms.


The first time the drone was flown into each room, the unique pattern of incoming sensor data from the walls, furniture, and other objects caused a pattern of electrical activity in the neurons that the chip had never experienced before. That triggered it to report that it was in a new space, and also caused the ways its neurons connected to one another to change, in a crude mimic of learning in a real brain. Those changes meant that next time the craft entered the same room, it recognized it and signaled as such.
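Stripped of the spiking silicon, the behavior described above boils down to storing activity patterns and flagging any pattern that matches nothing in memory. The toy below illustrates only that loop; it is not the drone chip or its actual learning rule.

```python
# Toy "new room" detector: binarize a sensor-driven activity pattern, compare it
# with stored patterns, and memorize it when nothing matches closely enough.
import numpy as np

class RoomMemory:
    def __init__(self, threshold=0.8):
        self.patterns = []            # one stored activity pattern per known room
        self.threshold = threshold    # overlap needed to count as "seen before"

    def observe(self, sensor_vector):
        activity = (sensor_vector > sensor_vector.mean()).astype(float)  # which units "fire"
        for i, stored in enumerate(self.patterns):
            overlap = (activity == stored).mean()        # fraction of matching units
            if overlap >= self.threshold:
                return f"recognized room {i} (overlap {overlap:.2f})"
        self.patterns.append(activity)                   # crude stand-in for synaptic change
        return f"new room, stored as room {len(self.patterns) - 1}"

rng = np.random.default_rng(0)
memory = RoomMemory()
room_a = rng.normal(size=32)
room_b = rng.normal(size=32)
print(memory.observe(room_a))                            # flagged as new
print(memory.observe(room_b))                            # flagged as new
print(memory.observe(room_a + 0.1 * rng.normal(size=32)))  # should be recognized as room 0
```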


The chip involved is far from ready for practical deployment, but the test offers empirical support for the ideas that have motivated research into neuromorphic chips, says Narayan Srinivasa, who leads the Center for Neural and Emergent Systems at HRL Laboratories, which built the chip. “This shows it is possible to do learning literally on the fly, while under very strict size, weight, and power constraints,” he says.

more...
No comment yet.