Amazing Science
Tag: AI
Amazing science facts - 3D_printing • aging • AI • anthropology • art • astronomy • bigdata • bioinformatics • biology • biotech • chemistry • computers • cosmology • education • environment • evolution • future • genetics • genomics • geosciences • green_energy • history • language • map • material_science • math • med • medicine • microscopy • nanotech • neuroscience • paleontology • photography • photonics • physics • postings • robotics • science • technology • video
Scooped by Dr. Stefan Gruenwald

IBM wants to put the power of Watson in your smartphone


Watson, IBM's Jeopardy!-conquering supercomputer, has set its sights on mobile apps. Not long ago, the recently created Watson Business Group announced that it would offer APIs to developers to create cloud-based apps built around cognitive computing. Now IBM is launching a competition to lure mobile app creators to its new platform. Over the next three months the company will be taking submissions that leverage Watson's unique capabilities, like deep data analysis and natural language processing, to put impossibly powerful tools in the palm of your hand. IBM is hoping for apps that "change the way consumers and businesses interact with data on their mobile devices." It's an ambitious goal, but considering the way Watson spanked Ken Jennings, it seems well within reach. The machine has already changed the way we view computers and artificial intelligence, not only by winning Jeopardy!, but by making cancer treatment decisions and attending college. Now it wants to make your smartphone smarter than you could ever hope to be.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Geordie Rose (D-wave) Interview: Machine Learning is Progressing Faster Than You Think


D-wave CTO Geordie Rose talks about quantum computing, AI and the technological singularity.


Via Szabolcs Kósa
Scooped by Dr. Stefan Gruenwald

A Wikipedia for robots allowing them to share knowledge and experience worldwide


European scientists from six institutes and two universities have developed an online platform where robots can learn new skills from each other worldwide — a kind of "Wikipedia for robots." The objective is to help develop robots that are better at assisting the elderly with care and household tasks. "The problem right now is that robots are often developed specifically for one task," says René van de Molengraft, TU/e researcher and RoboEarth project leader.


“RoboEarth simply lets robots learn new tasks and situations from each other. All their knowledge and experience are shared worldwide on a central, online database.” In addition, some computing and “thinking” tasks can be carried out by the system’s “cloud engine,” he said, “so the robot doesn’t need to have as much computing or battery power on‑board.”


For example, a robot can image a hospital room and upload the resulting map to RoboEarth. Another robot, which doesn’t know the room, can use that map on RoboEarth to locate a glass of water immediately, without having to search for it endlessly. In the same way a task like opening a box of pills can be shared on RoboEarth, so other robots can also do it without having to be programmed for that specific type of box.
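To make the sharing idea concrete, here is a minimal sketch of a RoboEarth-style central knowledge base. The class and method names are hypothetical illustrations for this article, not RoboEarth's actual interface:

```python
# Minimal sketch of a RoboEarth-like shared knowledge base: one robot uploads
# a room map and a task recipe, another robot reuses them without re-learning.
# All names here are illustrative, not RoboEarth's real API.

class SharedKnowledgeBase:
    """A central store where robots publish maps and task recipes."""

    def __init__(self):
        self.maps = {}      # room id -> semantic map of object locations
        self.recipes = {}   # task name -> list of action steps

    def upload_map(self, room_id, room_map):
        self.maps[room_id] = room_map

    def download_map(self, room_id):
        return self.maps.get(room_id)

    def upload_recipe(self, task, steps):
        self.recipes[task] = steps

    def download_recipe(self, task):
        return self.recipes.get(task)


# Robot A explores a hospital room and shares what it learned.
kb = SharedKnowledgeBase()
kb.upload_map("ward_3_room_12", {"glass_of_water": (2.4, 1.1), "bed": (0.5, 3.0)})
kb.upload_recipe("open_pill_box", ["locate_box", "grip_lid", "twist_lid", "lift_lid"])

# Robot B, which has never seen the room, reuses that knowledge directly.
room = kb.download_map("ward_3_room_12")
print("Glass of water is at", room["glass_of_water"])
print("Steps to open the pill box:", kb.download_recipe("open_pill_box"))
```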


RoboEarth is based on four years of research by a team of scientists from six European research institutes (TU/e, Philips, ETH Zürich, TU München and the universities of Zaragoza and Stuttgart).


[Video: Robots learn from each other on "Wiki for robots"]

Scooped by Dr. Stefan Gruenwald

MIT: Big Data and Business Decision Making

What’s the point of all that data, anyway? It’s to make decisions.


In a recent business report, MIT Technology Review explores a big question: how are data and the analytical tools to manipulate it changing decision making today? On Nasdaq, trading bots exchange a billion shares a day. Online, advertisers bid on hundreds of thousands of keywords a minute, in deals greased by heuristic solutions and optimization models rather than two-martini lunches. The number of variables and the speed and volume of transactions are just too much for human decision makers.


When there's a person in the loop, technology takes a softer approach (see "Software That Augments Human Thinking"). Think of recommendation engines on the Web that suggest products to buy or friends to catch up with. This works because Internet companies maintain statistical models of each of us, our likes and habits, and use them to decide what we see. In this report, we check in with LinkedIn, which maintains the world's largest database of résumés — more than 200 million of them. One of its newest offerings is University Pages, which crunches résumé data to offer students predictions about where they'll end up working depending on what college they go to (see "LinkedIn Offers College Choices by the Numbers").
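As a toy illustration of those "statistical models of each of us" (not LinkedIn's or any other company's actual system), a bare-bones recommender can score unseen items by the ratings of users with similar histories:

```python
# Toy user-based collaborative filtering: recommend items a user has not yet
# rated, weighted by how similar other users' tastes are. Purely illustrative;
# real recommendation engines are far larger and more sophisticated.
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / norm) if norm else 0.0

def recommend(user_idx, ratings):
    scores = np.zeros(ratings.shape[1])
    for other in range(ratings.shape[0]):
        if other == user_idx:
            continue
        sim = cosine(ratings[user_idx], ratings[other])
        scores += sim * ratings[other]          # similar users vote harder
    unseen = ratings[user_idx] == 0             # only suggest unrated items
    ranked = np.argsort(-scores)
    return [int(i) for i in ranked if unseen[i]]

print("Suggested items for user 0:", recommend(0, ratings))
```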


These smart systems, and their impact, are prosaic next to what’s planned. Take IBM. The company is pouring $1 billion into its Watson computer system, the one that answered questions correctly on the game show Jeopardy! IBM now imagines computers that can carry on intelligent phone calls with customers, or provide expert recommendations after digesting doctors’ notes. IBM wants to provide “cognitive services”—computers that think, or seem to: Facing Doubters, IBM Expands Plans for Watson.


Andrew Jennings, chief analytics officer for FICO, says automating human decisions is only half the story. Credit scores had another major impact. They gave lenders a new way to measure the state of their portfolios — and to adjust them by balancing riskier loan recipients with safer ones. Now, as other industries get exposed to predictive data, their approach to business strategy is changing, too. In this report, we look at one technique that's spreading on the Web, called A/B testing. It's a simple tactic — put up two versions of a Web page and see which one performs better (see "Seeking Edge, Websites Turn to Experiments" and "Startups Embrace a Way to Fail Fast").
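For readers who want the mechanics spelled out, the sketch below runs a two-proportion z-test on a hypothetical A/B experiment; the visitor and conversion numbers are invented for illustration:

```python
# Minimal A/B test: did page variant B convert better than variant A?
# Uses a two-proportion z-test; the counts below are made up for illustration.
from math import sqrt, erf

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # One-sided p-value: probability of seeing a lift this large by chance.
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return p_a, p_b, z, p_value

p_a, p_b, z, p = ab_test(conversions_a=200, visitors_a=10_000,
                         conversions_b=240, visitors_b=10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  one-sided p={p:.3f}")
```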

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Brainlike Computers Are Learning From Experience


Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.


The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.


The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.


In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.


Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.


“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.


Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.


The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.


The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.


They are not "programmed." Rather, the connections between the circuits are "weighted" according to correlations in data that the processor has already "learned." Those weights are then altered as data flows into the chip, causing them to change their values and to "spike." That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
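A drastically simplified version of that weight-spike-adjust loop is sketched below: a single leaky integrate-and-fire neuron with a crude Hebbian-style weight update. It illustrates the principle only and is not the design of IBM's or anyone else's neuromorphic chip:

```python
# Toy "neuromorphic" loop: inputs arrive, a neuron integrates them through
# weighted connections, spikes when a threshold is crossed, and the weights
# of inputs that contributed to the spike are strengthened (Hebbian-style).
import random

weights = [0.2, 0.5, 0.3]   # connection strengths ("synapses")
threshold = 1.0
potential = 0.0
leak = 0.9                  # membrane potential decays each step
learning_rate = 0.05

for step in range(20):
    inputs = [random.choice([0, 1]) for _ in weights]   # incoming events
    potential = leak * potential + sum(w * x for w, x in zip(weights, inputs))
    if potential >= threshold:
        print(f"step {step:2d}: spike!  weights = {[round(w, 2) for w in weights]}")
        # Strengthen the connections that were active when the neuron fired.
        weights = [min(1.0, w + learning_rate * x) for w, x in zip(weights, inputs)]
        potential = 0.0       # reset after firing
```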


“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”


Via Szabolcs Kósa
VendorFit's curator insight, December 31, 2013 3:27 PM

Artificial intelligence is the holy grail of technological achievement: creating an entity that can learn from its own mistakes and can, independently of programmer intervention, develop new routines and programs. The New York Times claims that the first ever "learning" computer chip is to be released in 2014, an innovation that has profound consequences for the tech market. When these devices become cheaper, this should allow for robotics and device manufacture that incorporates more detailed sensory input and can parse real objects, like faces, from background noise.

Laura E. Mirian, PhD's curator insight, January 10, 2014 1:16 PM

The Singularity is not far away

Rescooped by Dr. Stefan Gruenwald from Human Interest

NYT: Michio Kaku interviews 300 of the world's top scientists and predicts the future


Michio Kaku: When making predictions, I have two criteria: the laws of physics must be obeyed and prototypes must exist that demonstrate “proof of principle.” I’ve interviewed more than 300 of the world’s top scientists, and many allowed me into laboratories where they are inventing the future. Their accomplishments and dreams are eye-opening. From my conversations with them, here’s a glimpse of what to expect in the coming decades:


1.     Computers Will Disappear

2.     Augmented Reality Will Be Everyday Reality

3.     The Brain-Net Will Augment the Internet

4.     Capitalism Will Be Perfected

5.     Robots Will Be Commonplace

6.     Aged Body Parts Will Be Replaced

7.     Parents Will Design Their Offspring based on Genomics

8.     Cybermedicine Will Extend Lives

9.     Dictators Will Be Big Losers

10.   Intellectual Capitalism Will Replace Commodity Capitalism



Via Pierre Tran, Amy Cross, Mike Busarello's Digital Storybooks, Jukka Melaranta
Teresa Lima's curator insight, January 10, 2014 4:38 AM

#Not 

I think the future is unpredictable, and no one can predict the future!

Carlos Polaino Jiménez's curator insight, January 16, 2014 7:38 AM

Scientific prediction of the future: this is a topic worth reading about, at the very least.


Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Processors That Work Like Brains Will Accelerate Artificial Intelligence


A new breed of computer chips that operate more like the brain may be about to narrow the gulf between artificial and natural computation—between circuits that crunch through logical operations at blistering speed and a mechanism honed by evolution to process and act on sensory input from the real world. Advances in neuroscience and chip technology have made it practical to build devices that, on a small scale at least, process data the way a mammalian brain does. These “neuromorphic” chips may be the missing piece of many promising but unfinished projects in artificial intelligence, such as cars that drive themselves reliably in all conditions, and smartphones that act as competent conversational assistants.


Via Szabolcs Kósa
Scooped by Dr. Stefan Gruenwald

Neuroengineering - Engineering Memories - The Future is Now

Dr. Theodore Berger's research is currently focused primarily on the hippocampus, a neural system essential for learning and memory functions.


Theodore Berger leads a multi-disciplinary collaboration with Drs. Marmarelis, Song, Granacki, Heck, and Liu at the University of Southern California, Dr. Cheung at City University of Hong Kong, Drs. Hampson and Deadwyler at Wake Forest University, and Dr. Gerhardt at the University of Kentucky, that is developing a microchip-based neural prosthesis for the hippocampus, a region of the brain responsible for long-term memory. Damage to the hippocampus is frequently associated with epilepsy, stroke, and dementia (Alzheimer's disease), and is considered to underlie the memory deficits characteristic of these neurological conditions.


The essential goals of Dr. Berger's multi-laboratory effort include:

(1) experimental study of neuron and neural network function during memory formation — how does the hippocampus encode information?

(2) formulation of biologically realistic models of neural system dynamics — can that encoding process be described mathematically to realize a predictive model of how the hippocampus responds to any event?

(3) microchip implementation of neural system models — can the mathematical model be realized as a set of electronic circuits to achieve parallel processing, rapid computational speed, and miniaturization?

(4) creation of conformal neuron-electrode interfaces — can cytoarchitectonic-appropriate multi-electrode arrays be created to optimize bi-directional communication with the brain?

By integrating solutions to these component problems, the team is realizing a biomimetic model of hippocampal nonlinear dynamics that can perform the same function as part of the hippocampus.
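Goal (2), a predictive input-output model of neural dynamics, can be illustrated with a toy sketch: fit a model that predicts an output spike train from the recent history of an input spike train. Berger's group uses multi-input, multi-output nonlinear (Volterra-type) models; the simple least-squares linear filter below, on synthetic data, is only a stand-in for that idea:

```python
# Toy predictive input-output model: learn to predict an output spike train
# from the recent history of an input spike train. A stand-in for the
# nonlinear MIMO models used in the actual hippocampal prosthesis work.
import numpy as np

rng = np.random.default_rng(0)
T, H = 2000, 5                                       # time steps, history length
x = (rng.random(T) < 0.2).astype(float)              # synthetic input spikes
true_kernel = np.array([0.8, 0.4, 0.2, 0.1, 0.05])   # "ground truth" dynamics
drive = np.convolve(x, true_kernel)[:T]
y = (drive + 0.1 * rng.standard_normal(T) > 0.6).astype(float)  # output spikes

# Design matrix holding the last H input values at each time step.
X = np.column_stack([np.concatenate([np.zeros(k), x[:T - k]]) for k in range(H)])

# Least-squares fit of a linear filter mapping input history -> output.
kernel_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ kernel_hat

print("estimated kernel:", np.round(kernel_hat, 2))
print("prediction correlation:", round(float(np.corrcoef(y, y_hat)[0, 1]), 2))
```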

Scooped by Dr. Stefan Gruenwald

AIXI: To create a super-intelligent machine, start with an equation

Intelligence is a very difficult concept and, until recently, no one has succeeded in giving it a satisfactory formal definition.

 

Most researchers have given up grappling with the notion of intelligence in full generality, and instead focus on related but more limited concepts – but Marcus Hutter argues that mathematically defining intelligence is not only possible, but crucial to understanding and developing super-intelligent machines. From this, his research group has even successfully developed software that can learn to play computer games from scratch.

 

But first, how do we define "intelligence"? Hutter's group has sifted through the psychology, philosophy and artificial intelligence literature and searched for definitions individual researchers and groups came up with. The characterizations are very diverse, but there seems to be a recurrent theme which we have aggregated and distilled into the following definition: Intelligence is an agent's ability to achieve goals or succeed in a wide range of environments.
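Legg and Hutter have formalized essentially that verbal definition as a "universal intelligence" measure: an agent's intelligence is its expected performance summed over all computable environments, with simpler environments weighted more heavily. Written out (roughly, in their notation):

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}$$

where $\pi$ is the agent, $E$ is the set of computable reward-bounded environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_{\mu}^{\pi}$ is the expected total reward the agent earns in $\mu$.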

 

The emerging scientific field is called universal artificial intelligence, with AIXI being the resulting super-intelligent agent. AIXI has a planning component and a learning component. The goal of AIXI is to maximise its reward over its lifetime – that's the planning part.

 

In summary, every interaction cycle consists of observation, learning, prediction, planning, decision, action and reward, followed by the next cycle. If you're interested in exploring further, AIXI integrates numerous philosophical, computational and statistical principles:

 

  • Ockham's razor (simplicity) principle for model selection
  • Epicurus principle of multiple explanations as a justification of model averaging
  • Bayes rule for updating beliefs
  • Turing machines as universal description language
  • Kolmogorov complexity to quantify simplicity
  • Solomonoff's universal prior, and
  • Bellman equations for sequential decision making.
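Putting the planning and learning components together, the AIXI agent is usually written as a single expectimax expression: at cycle k it chooses the action that maximizes expected future reward, where the expectation runs over every program q (candidate environment) consistent with the interaction history, weighted by the Solomonoff prior 2 to the minus the program's length. Reproduced here from memory, so the notation may differ slightly from Hutter's papers:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here $a$, $o$ and $r$ are actions, observations and rewards, $U$ is a universal Turing machine, $\ell(q)$ is the length of program $q$, and $m$ is the agent's horizon.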

 

AIXI's algorithm rigorously and uniquely defines a super-intelligent agent that learns to act optimally in arbitrary unknown environments. One can prove amazing properties of this agent – in fact, one can prove that in a certain sense AIXI is the most intelligent system possible. Note that this is a rather coarse translation and aggregation of the mathematical theorems into words, but that is the essence.

 

Since AIXI is incomputable, it has to be approximated in practice. In recent years, we have developed various approximations, ranging from provably optimal to practically feasible algorithms.

 

The point is not that AIXI is able to play these games (they are not hard) — the remarkable fact is that a single agent can autonomously learn to handle this wide variety of environments. AIXI is given no prior knowledge about these games; it is not even told the rules of the games! It starts as a blank canvas, and just by interacting with these environments, it figures out what is going on and learns how to behave well. This is the really impressive feature of AIXI and its main difference from most other projects.

 

Even though IBM's Deep Blue played better chess than human grandmasters, it was specifically designed to do so and cannot play Jeopardy. Conversely, IBM's Watson beats humans at Jeopardy but cannot play chess — not even tic-tac-toe or Pac-Man. AIXI is not tailored to any particular application. If you interface it with any problem, it will learn to act well and indeed optimally.

Scooped by Dr. Stefan Gruenwald

The music video for 65daysofstatic’s new track “PRISMS” was entirely made by a computer algorithm


Matt Pearson decided to prove Alex Rutterford, the director of Autechre's "Gantz Graf," wrong: back in 2002, Rutterford said that a computer program could not make a music video on its own. More than 10 years later, this feat has become possible. Pearson wrote only the algorithm; the system made all the artistic decisions, such as the camera work, by itself, based on its interpretation of the audio track.


Matt Pearson insists that he does not want to be labeled as a designer but rather as a coder. If he is indeed part of the design process, his work only creates the environment in which the generative animation evolves on its own. What was unthinkable ten years ago has now become a reality, so what will it be ten years from now? Algorithms creating other algorithms? Probably so. I don’t know if this perspective is exciting or terrifying.

Scooped by Dr. Stefan Gruenwald

Humans With Amplified Intelligence Could Be More Powerful Than AI


With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It's an open question as to which will come first, but a technologically boosted brain could be just as powerful — and just as dangerous — as AI.


As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store. Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (SAI), the human brain already presents us with a pre-existing intelligence to work with. Radically extending the abilities of a pre-existing human mind — whether it be through genetics, cybernetics or the integration of external devices — could result in something quite similar to how we envision advanced AI.

 

Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organizer of the Singularity Summit. He's given this subject considerable thought — and warns that we need to be just as wary of IA as we are of AI. The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.


The first step will be to create a direct neural link to information. Think of it as a "telepathic Google." The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to imagine a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of sensory cortex, like tactile cortex and auditory cortex. The third step involves the genuine augmentation of pre-frontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-MacGyvers, people who perform apparently impossible intellectual feats. For instance, mind-controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone age human — but the possibility is real.


For it to be otherwise would require that there is some mysterious metaphysical ceiling on qualitative intelligence that miraculously exists at just above the human level. Given that mankind was the first generally intelligent organism to evolve on this planet, that seems highly implausible. We shouldn't expect version one to be the final version, any more than we should have expected the Model T to be the fastest car ever built.


Dominic's curator insight, March 26, 6:24 PM

Our brain is a powerful device with the potential to turn theories we thought impossible into reality. This article explores the ways we humans can overcome our cognitive limitations and use the power of the brain to develop innovations that we couldn't even dream of. It also explores how amplified human intelligence (IA) could become even more powerful than AI.

Scooped by Dr. Stefan Gruenwald

Taking human-machine interaction to the next level with cloned virtual applications for unlimited users


In 2007, Silicon Valley's SRI International formed Siri Inc. to commercialize a virtual personal assistant technology born out of the institute's DARPA-funded CALO (Cognitive Assistant that Learns and Organizes) artificial intelligence project. A free app for the iOS platform was subsequently launched as a public beta in early February 2010, and just a couple of months later, Apple acquired the company. Spin forward to October 2011, and a conversational search assistant called Siri was launched as a new feature for the iPhone 4S.

 

A little while later, Google premiered its own digital PA in Android 4.1 (Jelly Bean). In addition to providing Siri-like search and assistance using natural language, Google Now delivered information and suggestions based on actions or decisions that the user had previously taken. SRI's latest project, bRight, progresses beyond both systems as an answer to what's been dubbed cognitive overload, where the tidal wave of information that can flood in during emergency situations can prove to be just too much to deal with effectively and, perhaps more importantly, rapidly.

 

The research prototype uses face recognition (though more secure biometrics, such as iris scans, will likely be implemented in the future) and gaze monitoring systems, along with proximity, gesture and touch sensors, to build detailed user profiles. In a similar way that modern computers might make valuable performance gains by effectively taking a shortcut when certain conditions are met, bRight's powerful AI software uses this information to anticipate what might be needed so that only data that's relevant to the job in hand is presented to the user, necessary tools can be literally placed at a user's fingertips, and repetitive tasks can be fully or partly automated.

 

For example, at a fairly simple level, if a user highlights a word in a document, the system can guess which menu items might be needed next and present the user with likely choices. Or if someone's writing a specific kind of email, such as a staff newsletter or performance bulletin, bRight may be able to determine its recipients based on previous activity, and pre-populate the Send To field. It might also detect potential errors or breaches of standard protocol.
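As a toy illustration of that kind of anticipation (not SRI's actual bRight implementation), recipient prediction can be as simple as counting who was included on past emails with overlapping subject words:

```python
# Toy "anticipatory UI" sketch: suggest recipients for a new email based on
# which people were most often included on past emails with similar subjects.
# Illustrative only; not SRI's bRight implementation.
from collections import Counter

past_emails = [
    {"subject": "staff newsletter march", "to": ["team", "hr"]},
    {"subject": "staff newsletter april", "to": ["team", "hr", "interns"]},
    {"subject": "performance bulletin q1", "to": ["managers"]},
]

def suggest_recipients(new_subject, history, top_n=2):
    words = set(new_subject.lower().split())
    votes = Counter()
    for mail in history:
        overlap = words & set(mail["subject"].split())
        for person in mail["to"]:
            votes[person] += len(overlap)       # weight by subject similarity
    return [person for person, _ in votes.most_common(top_n)]

print(suggest_recipients("staff newsletter may", past_emails))
# -> ['team', 'hr']  (both appeared on every past "staff newsletter")
```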

 

"If bRight recognizes a user's action to be of a certain class, then it could provide corrective action," explains Dr. Grit Denker of SRI's Computer Science Laboratory. "Say I am writing an email about new bRight ideas and I am sending it to a bunch of people. bRight could recognize that I usually first send this to an internal team, before sending it to outside folks. Thus, if I am about to send such an email without having first sent it to my team, bRight could notify me whether this is on purpose."

 

 "bRight combines semantic markup in the application layer with sensors at the observation layer (e.g., touch, gaze, gesture, etc.)," says Denker. "This combination provides higher precision for prediction, especially in an environment where you do not necessarily have days or months of training data. In order to be useful, it has to have high accuracy. This can only be achieved if the cognitive models we intend to build are tuned to the applications. We are currently working on developing a cognitive model of users in the cyber domain using our tools. We are very interested in finding partners who would work with us to instantiate bRight for domains that meet at least two of the following criteria: information overload, rapid decision making and execution, and the need for collaboration."

Scooped by Dr. Stefan Gruenwald

The Ubi Ubiquitous Computer is Here: Talk to Your Wall and Your Wall will Talk Back


The Ubi is a WiFi-connected, voice-operated computer that plugs into a power outlet and makes the environment around it Internet enabled. Reminiscent of voice controlled computers depicted in science fiction, early uses of the Ubi include Internet search, messaging, and communications without the use of hands or screens. The Ubi also includes sensors that allow for remote monitoring of the environment around it.

Project Ubi Odyssey will allow early adopters of technology to get access to the Ubi, develop connectivity with home automation and Internet services, and create novel human computer interactions. Those interested can register for the program at www.theubi.com and selected candidates will be invited to participate in the program. The Beta Ubi cost is $299. The program is currently limited to 5,000 participants and to residents of the United States.

The Ubi relies on powerful server technology that processes natural language to infer requests from the user and then pulls data from various Internet sources. Users can easily build voice-driven interactions and connect devices and services through the Ubi Portal. The device is equipped with temperature, humidity, air pressure and ambient light sensors to provide feedback on the environment around it. Also onboard the Ubi are stereo speakers, two microphones, and bright multi-colored LED indicator lights.


Unified Computer Intelligence Corporation CEO Leor Grebler told me the device will also be able to sense devices that are openly connected to the Internet (eventually, the Nest “learning” thermostat and smart smoke/CO2 alarms), “but we’re not controlling devices outright yet. We will add a way to talk to devices/Internet services as well as for them to talk back to the user.”


Here are the impressive specs: Android OS 4.1, 1.6 GHz dual-core ARM Cortex-A9 processor, 1 GB RAM, 802.11 b/g/n Wi-Fi (WPA and WPA2 encryption), stereo speakers and dual microphones, Bluetooth capability, an ambient light sensor, cloud-based speech recognition (Google/Android libraries), and natural language understanding.


And you can program its user interface on a computer, or verbally on the Ubi, Grebler said. "We're slowly releasing apps on portal.theubi.com," he said. "We have the first blossoms of an API that will essentially allow any Internet service (such as email, calendar, Twitter, Facebook, etc.) to have its own voice and be interactive through the Ubi."



Tamika Garay's curator insight, March 25, 11:24 PM

#4 Most Important Technologies in the next 5-10 years

Voice operated computers

 

Voice operated computers and operating systems have captured the imaginations of sci-fi writers for years and have been included in recent works such as:

 

* Her (Movie, 2013 Director & Writer – Spike Jonze) - a movie about a man who falls in love with his interactive operating system.

 

* Extant (Series – 2014 Halle Berry) – there is a Siri-like talking computer device in every home and space station

 

Using voice commands to operate computers would make them more natural for humans to use by allowing the interface between user and computer to become invisible. With the popularity of Siri and Google voice recognition, voice operated computers and operating systems will be important in the next 5-10 years.

Scooped by Dr. Stefan Gruenwald

Companion robot speaks 19 languages and expresses emotional intelligence


Paris-based robotics company Aldebaran created Nao, a toddler-sized robot designed for companionship. It has the ability to hold a conversation in 19 different languages. Nao is built with basic artificial intelligence and can walk, catch, and brace itself when falling. With built-in language-learning software by the voice-technology company Nuance, the robot is designed to develop its own personality as it gets better at speaking through repetition. What sets it apart from other machines is the built-in emotional intelligence that allows it to sense the mood of the people surrounding it and react, according to Nuance's vice president Arnd Weil. "What they are trying to do with the robot is mainly capturing what is the mood of that other person. It's more like somebody's coming home and he had a bad day, he's angry, so the robot should capture that and react accordingly," he said.


The robot will be able to build its own vocabulary by accessing a cloud of data that will help it understand the flow of conversation and build its own personality. Weil said that this will allow Nao to interact better with its users. Nao's creators foresee their creation filling more roles and functions, like bringing out drinks, providing company for the elderly or disabled, or reading your children their bedtime story.


"People do need somebody to talk to. A lot of people have animals or a dog or something, and this is just a new way of engaging with people," said Weil.

Cheryl Palmer's curator insight, February 15, 11:03 PM

Blog style article & video clip introducing Nao - a companion robot.  Superficial, but gives a starting point for more research.

Scooped by Dr. Stefan Gruenwald

Facing the Intelligence Explosion: There is Plenty of Room Above


Why are AIs in movies so often of roughly human-level intelligence? One reason is that we almost always fail to see non-humans as non-human. We anthropomorphize. That's why aliens and robots in fiction are basically just humans with big eyes or green skin or some special power. Another reason is that it's hard for a writer to write characters that are smarter than the writer. How exactly would a superintelligent machine solve problem X?


The human capacity for efficient cross-domain optimization is not a natural plateau for intelligence. It’s a narrow, accidental, temporary marker created by evolution due to things like the slow rate of neuronal firing and how large a skull can fit through a primate’s birth canal. Einstein may seem vastly more intelligent than a village idiot, but this difference is dwarfed by the difference between the village idiot and a mouse.


As Vernor Vinge put it: The best answer to the question, "Will computers ever be as smart as humans?" is probably "Yes, but only briefly." [1] How could an AI surpass human abilities? Let us count the ways:


  • Speed. Our axons carry signals at seventy-five meters per second or slower. A machine can pass signals along about four million times more quickly.
  • Serial depth. The human brain can't rapidly perform any computation that requires more than one hundred sequential steps; thus, it relies on massively parallel computation. [2] More is possible when both parallel and deep serial computations can be performed.
  • Computational resources. The brain’s size and neuron count are constrained by skull size, metabolism, and other factors. AIs could be built on the scale of buildings or cities or larger. When we can make circuits no smaller, we can just add more of them.
  • Rationality. As we explored earlier, human brains do nothing like optimal belief formation or goal achievement. Machines can be built from the ground up using (computable approximations of) optimal Bayesian decision networks, and indeed this is already a leading paradigm in artificial agent design.
  • Introspective access/editability. We humans have almost no introspective access to our cognitive algorithms, and cannot easily edit and improve them. Machines can already do this (read about EURISKO and metaheuristics). A limited hack like the method of loci greatly improves human memory; machines can do this kind of thing in spades.


REFERENCES:

[1] Vernor Vinge, "Signs of the Singularity," IEEE Spectrum, June 2008, http://spectrum.ieee.org/biomedical/ethics/signs-of-the-singularity.

[2] J. A. Feldman and Dana H. Ballard, "Connectionist Models and Their Properties," Cognitive Science 6, no. 3 (1982): 205–254, doi:10.1207/s15516709cog0603_1.

Steffi Tan's curator insight, March 24, 5:43 AM

Vernor Vinge answered the question, "Will computers ever be as smart as humans?" with the simple sentence of "Yes, but only briefly."

 

As technology develops, it will be on the same intellectual playing field as humans only briefly before it surpasses us and grows exponentially in capability. This again emphasizes how carefully controlled conditions would be needed if an intelligence explosion were to occur. However, even if everyone agrees on the priority of safety, it only takes a single group of people blindly walking into such a circumstance for the event to cause problems for everyone.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

Computer science: The learning machines


Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence. Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats.


Google Brain's discovery that the Internet is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language.


Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again.
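The idea of "changing the strength of simulated neural connections on the basis of experience" fits in a few lines of code. The sketch below trains a single artificial neuron rather than a billion-connection network like Google Brain, but the principle of nudging weights whenever the output is wrong is the same:

```python
# A single simulated neuron learning the logical AND function by adjusting
# its connection weights from experience. Deep networks stack millions of
# such units and use more refined update rules, but the core idea --
# strengthen or weaken connections based on errors -- is the same.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]      # connection weights
b = 0.0             # bias
lr = 0.1            # learning rate

for epoch in range(20):
    for (x1, x2), target in examples:
        output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - output
        # Experience-driven update: shift weights toward the correct answer.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print("learned weights:", w, "bias:", b)
print("predictions:", [(x, 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0)
                       for x, _ in examples])
```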


Such advances make for exciting times in artificial intelligence (AI) — the often-frustrating attempt to get computers to think like humans. In the past few years, companies such as Google, Apple and IBM have been aggressively snapping up start-up companies and researchers with deep-learning expertise. For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages. For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.



Via Szabolcs Kósa
R Schumacher & Associates LLC's curator insight, January 15, 2014 1:43 PM

The monikers such as "deep learning" may be new, but Artificial Intelligence has always been the Holy Grail of computer science.  The applications are many, and the path is becoming less of an uphill climb.  


Rescooped by Dr. Stefan Gruenwald from Systems Theory

Forbes: Machine Learning (CS 229) is Stanford's Most Popular Course


Why is Machine Learning (CS 229) the most popular course at Stanford? It turns out that artificial intelligence (AI), and the robotics tied to it, consists of two primary systems: control and perception.


Via Ben van Lier
Scooped by Dr. Stefan Gruenwald

The brain’s visual data-compression algorithm

Data compression in the brain: the primary visual cortex processes sequences of complete images and images with missing elements — here, vertical contours have been removed.


Researchers have assumed that visual information in the brain was transmitted almost in its entirety from its entry point, the primary visual cortex (V1). “We intuitively assume that our visual system generates a continuous stream of images, just like a video camera,” said Dr. Dirk Jancke from the Institute for Neural Computation at Ruhr University.

“However, we have now demonstrated that the visual cortex suppresses redundant information and saves energy by frequently forwarding image differences,” similar to methods used for video data compression in communication technology.


Using recordings in cat visual cortex, Jancke and associates recorded the neurons’ responses to natural image sequences such as vegetation, landscapes, and buildings. They created two versions of the images: a complete one, and one in which they had systematically removed vertical or horizontal contours.


If these individual images were presented at 33Hz (30 milliseconds per image), the neurons represented complete image information. But at 10Hz (100 milliseconds), the neurons represented only those elements that were new or missing, that is, image differences.
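Forwarding only image differences is exactly the delta-encoding trick used in video compression. A minimal sketch with toy one-dimensional "frames" (illustrative only, not the researchers' analysis code):

```python
# Delta encoding, the video-compression analogue of what the visual cortex
# appears to do at slower presentation rates: transmit a full frame once,
# then transmit only what changed. Toy example with tiny 1-D "frames".
import numpy as np

frames = [
    np.array([0, 0, 5, 5, 0, 0]),
    np.array([0, 0, 5, 5, 0, 0]),   # identical -> nothing to send
    np.array([0, 0, 5, 7, 0, 0]),   # one element changed
]

encoded = [frames[0]]                           # key frame: sent in full
encoded += [frames[i] - frames[i - 1] for i in range(1, len(frames))]

# The receiver reconstructs each frame by accumulating the differences.
decoded = [encoded[0].copy()]
for delta in encoded[1:]:
    decoded.append(decoded[-1] + delta)

sent_values = sum(int(np.count_nonzero(e)) for e in encoded[1:])
print("non-zero difference values sent after the key frame:", sent_values)
print("reconstruction matches originals:",
      all(np.array_equal(a, b) for a, b in zip(frames, decoded)))
```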


To monitor the dynamics of neuronal activities in the brain in the millisecond range, the scientists used voltage-dependent dyes. Those substances fluoresce when neurons receive electrical impulses and become active, measured across a surface of several square millimeters. The result is a temporally and spatially precise record of transmission processes within the neuronal network.


Scooped by Dr. Stefan Gruenwald

Machine-learning algorithms could make chemical reactions intelligent leading to "smart drugs"


Computer scientists at the Harvard School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University have joined forces to put powerful probabilistic reasoning algorithms in the hands of bioengineers.


In a new paper presented at the Neural Information Processing Systems conference on December 7, Ryan P. Adams and Nils Napp have shown that an important class of artificial intelligence algorithms could be implemented using chemical reactions.


These algorithms, which use a technique called “message passing inference on factor graphs,” are a mathematical coupling of ideas from graph theory and probability. They represent the state of the art in machine learning and are already critical components of everyday tools ranging from search engines and fraud detection to error correction in mobile phones.
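To give a flavor of what message passing on a factor graph computes, here is the smallest possible textbook example, unrelated to the chemical implementation in the paper: two binary variables joined by a pairwise factor, where the marginal of one variable is obtained by summing the factor against the message arriving from its neighbor.

```python
# Sum-product message passing on the smallest possible factor graph:
# two binary variables A and B, unary factors fa(A), fb(B), and a pairwise
# factor g(A, B). Generic textbook illustration, not the chemical version.
import numpy as np

fa = np.array([0.6, 0.4])            # prior-like factor on A
fb = np.array([0.7, 0.3])            # prior-like factor on B
g = np.array([[0.9, 0.1],            # g[a, b]: compatibility of A=a with B=b
              [0.2, 0.8]])

# Message from B's side into the pairwise factor, then on to A.
msg_b_to_g = fb                       # variable-to-factor message
msg_g_to_a = g @ msg_b_to_g           # factor-to-variable: sum_b g(a,b)*msg(b)

# Belief (unnormalized marginal) at A, then normalize.
belief_a = fa * msg_g_to_a
marginal_a = belief_a / belief_a.sum()

# Sanity check against brute-force enumeration of the joint distribution.
joint = fa[:, None] * g * fb[None, :]
brute_a = joint.sum(axis=1) / joint.sum()

print("marginal of A via message passing:", np.round(marginal_a, 3))
print("marginal of A via enumeration:    ", np.round(brute_a, 3))
```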


Adams’ and Napp’s work demonstrates that some aspects of artificial intelligence (AI) could be implemented at microscopic scales using molecules. In the long term, the researchers say, such theoretical developments could open the door for “smart drugs” that can automatically detect, diagnose, and treat a variety of diseases using a cocktail of chemicals that can perform AI-type reasoning.


“We understand a lot about building AI systems that can learn and adapt at macroscopic scales; these algorithms live behind the scenes in many of the devices we interact with every day,” says Adams, an assistant professor of computer science at SEAS whose Intelligent Probabilistic Systems group focuses on machine learning and computational statistics. “This work shows that it is possible to also build intelligent machines at tiny scales, without needing anything that looks like a regular computer. This kind of chemical-based AI will be necessary for constructing therapies that sense and adapt to their environment. The hope is to eventually have drugs that can specialize themselves to your personal chemistry and can diagnose or treat a range of pathologies.”


Adams and Napp designed a tool that can take probabilistic representations of unknowns in the world (probabilistic graphical models, in the language of machine learning) and compile them into a set of chemical reactions that estimate quantities that cannot be observed directly. The key insight is that the dynamics of chemical reactions map directly onto the two types of computational steps that computer scientists would normally perform in silico to achieve the same end.


This insight opens up interesting new questions for computer scientists working on statistical machine learning, such as how to develop novel algorithms and models that are specifically tailored to tackling the uncertainty molecular engineers typically face. In addition to the long-term possibilities for smart therapeutics, it could also open the door for analyzing natural biological reaction pathways and regulatory networks as mechanisms that are performing statistical inference. Just like robots, biological cells must estimate external environmental states and act on them; designing artificial systems that perform these tasks could give scientists a better understanding of how such problems might be solved on a molecular level inside living systems.

Rescooped by Dr. Stefan Gruenwald from Tracking the Future

The Bio-intelligence Explosion

How recursively self-improving organic robots will modify their own source code and bootstrap our way to full-spectrum superintelligence.
Via Szabolcs Kósa
Scooped by Dr. Stefan Gruenwald

Carnegie Mellon's NEIL program searches web 24/7 to analyze images and teach itself common sense


A computer program called the Never Ending Image Learner (NEIL) is now running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them. You can view NEIL’s findings at the project website (or help train it): http://www.neil-kb.com. And as it builds a growing visual database, it is gathering common sense on a massive scale.

 

NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.

 

But NEIL also makes associations between these things to obtain common sense information: cars often are found on roads, buildings tend to be vertical, and ducks look sort of like geese.
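That kind of common-sense association can be approximated by simple co-occurrence counting over labeled images. A toy sketch with made-up labels, not NEIL's actual data or pipeline:

```python
# Toy version of learning "common sense" associations from labeled images:
# count how often two labels appear in the same image and report pairs that
# co-occur more often than chance. Made-up labels; not NEIL's real pipeline.
from collections import Counter
from itertools import combinations

image_labels = [
    {"car", "road", "tree"},
    {"car", "road"},
    {"duck", "water"},
    {"goose", "water"},
    {"building", "road"},
]

pair_counts = Counter()
label_counts = Counter()
for labels in image_labels:
    label_counts.update(labels)
    pair_counts.update(frozenset(p) for p in combinations(sorted(labels), 2))

n = len(image_labels)
for pair, c in pair_counts.items():
    a, b = tuple(pair)
    # Lift > 1 means the pair co-occurs more often than chance would predict.
    lift = (c / n) / ((label_counts[a] / n) * (label_counts[b] / n))
    if c >= 2 and lift > 1:
        print(f"{a} is often found with {b}  (lift = {lift:.1f})")
```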

 

“Images are the best way to learn visual properties,” said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute. “Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well.”

 

Since late July, the NEIL program has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.

 

One motivation for the NEIL project is to create the world’s largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued. “What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes,” Gupta said.

 

Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast — Facebook alone holds more than 200 billion images — that the only hope to analyze it all is to teach computers to do it largely by themselves.

Scooped by Dr. Stefan Gruenwald

Cortical processor development: DARPA wants computers that fuse with higher human brain functions


In the never-ending quest to get computers to process, really understand and actually reason, scientists at Defense Advanced Research Projects Agency want to look more deeply into how computers can mimic a key portion of our brain.

 

The military's advanced research group recently put out a call, or Request for Information (RFI), on how it could develop systems that go beyond machine learning, Bayesian techniques, and graphical technology to solve "extraordinarily difficult recognition problems in real-time."

 

Current systems offer partial solutions to this problem, but are limited in their ability to efficiently scale to larger, more complex datasets, DARPA said. "They are also compute intensive, exhibit limited parallelism, require high precision arithmetic, and, in most cases, do not account for temporal data."

What DARPA is interested in is mimicking a portion of the brain known as the neocortex, which is utilized in higher brain functions such as sensory perception, motor commands, spatial reasoning, conscious thought and language. Specifically, DARPA said it is looking for information that provides new concepts and technologies for developing what it calls a "Cortical Processor" based on Hierarchical Temporal Memory.

 

"Although a thorough understanding of how the cortex works is beyond current state of the art, we are at a point where some basic algorithmic principles are being identified and merged into machine learning and neural network techniques. Algorithms inspired by neural models, in particular neocortex, can recognize complex spatial and temporal patterns and can adapt to changing environments. Consequently, these algorithms are a promising approach to data stream filtering and processing and have the potential for providing new levels of performance and capabilities for a range of data recognition problems," DARPA stated.  "The cortical computational model should be fault tolerant to gaps in data, massively parallel, extremely power efficient, and highly scalable. It should also have minimal arithmetic precision requirements, and allow ultra-dense, low power implementations."

 

The new RFI is only part of the research and development DARPA has been doing to build what it calls a new kind of computer with similar form and function to the mammalian brain. Such artificial brains would be used to build robots whose intelligence matches that of mice and cats, DARPA says.

 

Recently IBM said it created DARPA-funded prototype chips that could mimic brain-like actions. The prototype chips will give computers mind-like abilities to make decisions by collating and analyzing immense amounts of data, similar to humans gathering and understanding a series of events, Dharmendra Modha, project leader for IBM Research, told the IDG News Service. The experimental chips, modeled around neural systems, mimic the brain's structure and operation through silicon circuitry and advanced algorithms.

 

IBM hopes reverse-engineering the brain into a chip could forge computers that are highly parallel, event-driven and passive on power consumption, Modha said. The machines will be a sharp departure from modern computers, which have scaling limitations and require set programming by humans to generate results.

 

Like the brain, IBM's prototype chips can dynamically rewire to sense, understand and act on information fed via sight, hearing, taste, smell and touch, or through other sources such as weather and water-supply monitors. The chips will help discover patterns based on probabilities and associations, all while rivaling the brain's compact size and low power usage, Modha said.

Scooped by Dr. Stefan Gruenwald

Amazing Science: Artificial Intelligence (AI) Postings


Artificial intelligence (AI) is a technology and a branch of computer science that studies and develops intelligent machines and software. AI research is divided into subfields organized around specific technical problems. The central goals of AI research include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI.

Scooped by Dr. Stefan Gruenwald

Navy Completes First Unmanned Carrier Landing, Relying Only on Computers and Artificial Intelligence


The Navy successfully landed a drone the size of a fighter jet aboard an aircraft carrier for the first time on July 10, 2013, showcasing the military's capability to have a computer program perform one of the most difficult tasks that a pilot is asked to do.

 

The landing of the X-47B experimental aircraft means the Navy can move forward with its plans to develop another unmanned aircraft that will join the fleet alongside traditional airplanes to provide around-the-clock surveillance while also possessing a strike capability. It also would pave the way for the U.S. to launch unmanned aircraft without the need to obtain permission from other countries to use their bases.

 

"It is not often that you get a chance to see the future, but that's what we got to do today. This is an amazing day for aviation in general and for naval aviation in particular," Navy Secretary Ray Mabus said after watching the landing.

 

The X-47B experimental aircraft took off from Naval Air Station Patuxent River in Maryland before approaching the USS George H.W. Bush, which was operating about 70 miles off the coast of Virginia. The tail-less drone landed by deploying a hook that caught a wire aboard the ship and brought it to a quick stop, just like normal fighter jets do. The maneuver is known as an arrested landing and had previously only been done by the drone on land at Patuxent River. Landing on a ship that is constantly moving while navigating through turbulent air behind the aircraft carrier is seen as a more difficult maneuver, even on a clear day with low winds like Wednesday.

 

Rear Adm. Mat Winter, the Navy's program executive officer for unmanned aviation and strike weapons, said everything about the flight — including where on the flight deck the plane would first touch and how many feet its hook would bounce — appeared to go exactly as planned.

 

"This is a historic day. This is a banner day. This is a red-flag letter day," Winter said. "You can call it what you want, but the fact of the matter is that you just observed history — history that your great-grandchildren, my great grandchildren, everybody's great grandchildren are going to be reading in our history books."

Matt's curator insight, October 27, 2014 11:25 AM

This is an incredible article that shows what the future of naval aviation holds for the United States. It shows how far technology has come and presents the question of what will come next. 

Austin Cataudella's comment, October 27, 2014 11:32 AM
This is incredible that we have been able to program something to do this task. Pilots make about a million (give or take) decisions a second when having to land on a carrier. No telling where this will lead naval flight in the future.
Scott Lanier's comment, November 30, 2014 5:41 PM
Just like Austin said that's really incredible that we have the technology to land planes on aircraft carriers. Landing planes is really difficult but then landing on an aircraft carrier is a different story. The carrier is constantly moving, there's a lot of wind out on the ocean so it makes it harder to land, and carriers aren't that big so you don't have any room for error when landing. This technology is amazing for pilots it ensures their safety when landing but the military is recruiting less and less pilots each year. The military wants people to be able to fly drones which are cheaper. So this technology may be beneficial for manned planes but since there isn't a huge demand for manned aircraft anymore this is kind of harmful for pilots. Think about Top Gun. You won't really need anymore fighter pilots if the plane can fly itself, and land itself. These are just my thoughts on the issue.