Amazing Science
Rescooped by Dr. Stefan Gruenwald from Post-Sapiens, les êtres technologiques

Evolving AI: Data Will Be Born From Artificial Worms

AI's next great species, the artificial worm, will link neuroscience with computing and catapult us into an age of Star Trek-like intelligent systems.


Via Jean-Philippe BOCQUENET
Scooped by Dr. Stefan Gruenwald

Is it a robot pretending to be human or a human pretending to be a robot?


It's not a fake: the latest Geminoid is incredibly realistic. This is the latest iteration of the Geminoid series of ultra-realistic androids, from Japanese firm Kokoro and Osaka University roboticist Hiroshi Ishiguro. Specifically, this is Geminoid DK, which was constructed to look exactly like associate professor Henrik Scharfe of Aalborg University in Denmark. Geminoid DK is the first Geminoid based on a non-Japanese person, and also the first bearded one.

 

When we contacted Prof. Scharfe inquiring about the android, he confirmed: "No, it is not a hoax," adding that he and colleagues in Denmark and Japan have been working on the project for about a year now. His Geminoid, which cost some US $200,000, was built by Kokoro in Tokyo and is now at Japan's Advanced Telecommunications Research Institute International (ATR) in Nara for setup and testing.


"In a couple of weeks I will go back to Japan to participate in the experiments," he says. "After that, the robot is shipped to Denmark to inhabit a newly designed lab."

Scooped by Dr. Stefan Gruenwald

Google Now: An AI smartphone helper that answers questions before you even ask them


Siri, the virtual assistant built into iPhones, launched to great fanfare last October and soon inspired a crowd of copycat apps, heated online arguments about its effectiveness, and an Apple ad campaign in which it played the starring role. Almost a year later, Google's vision of how a smartphone can become a trusted, all-knowing assistant is rolling out to consumers in the form of Google Now. A feature of the newest iteration of Android, Jelly Bean, which is so far available on only a handful of smartphones, Google Now suggests that Google's ambitions go well beyond what Siri has shown so far.

 

Google Now doesn't have a pretend personality like Apple's sassy assistant, instead just appearing as a familiar search box. But just like Siri, it can take voice commands related to phone functions such as setting reminders or sending messages, and field requests for information such as "How old is the Eiffel Tower?" and "Where can I find a good Chinese restaurant?"

 

Also like Siri, Google Now responds with speech. However, rather than passing along queries to third-party services such as Yelp for answers, Google's helper makes use of the company's recently launched Knowledge Graph, a database that categorizes information in useful ways (see "Google's New Brain Could Have a Big Impact").

 

Google Now also introduces a new trick. It combines the constant stream of data a smartphone collects on its owner with clues about the person's life that Google can sift from Web searches and e-mails to guess what he or she would ask it for next. This enables Google Now not only to meet a user's needs but also, in some cases, to preëmpt them. Virtual index cards appear offering information it thinks you need to know at a particular time.
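As a toy illustration of that pre-emptive ranking, the sketch below scores a few candidate cards by a history-derived relevance weight times a context boost. Every card name, number, and the scoring rule itself is hypothetical; this is a minimal sketch of the idea, not Google's actual method.

```python
from dataclasses import dataclass

@dataclass
class Card:
    name: str
    relevance: float  # stand-in for a weight mined from searches/e-mail (hypothetical)

def rank_cards(cards, context_boost):
    """Order cards by relevance times a context multiplier
    (time of day, location); purely illustrative scoring."""
    return sorted(cards,
                  key=lambda c: c.relevance * context_boost.get(c.name, 1.0),
                  reverse=True)

cards = [Card("commute traffic", 0.9),
         Card("flight status", 0.4),
         Card("nearby restaurants", 0.6)]

# At 8 a.m. on a weekday, context favors the commute card.
print(rank_cards(cards, {"commute traffic": 2.0})[0].name)  # commute traffic
```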

Scooped by Dr. Stefan Gruenwald

How artificial intelligence is changing our lives


In a sense, AI has become almost mundanely ubiquitous, from the intelligent sensors that set the aperture and shutter speed in digital cameras, to the heat and humidity probes in dryers, to the automatic parking feature in cars. And more applications are tumbling out of labs and laptops by the hour.


“It’s an exciting world,” says Colin Angle, chairman and cofounder of iRobot, which has brought a number of smart products, including the Roomba vacuum cleaner, to consumers in the past decade.


What may be most surprising about AI today, in fact, is how little amazement it creates. Perhaps science-fiction stories with humanlike androids, from the charming Data (“Star Trek“) to the obsequious C-3PO (“Star Wars”) to the sinister Terminator, have raised unrealistic expectations. Or maybe human nature just doesn’t stay amazed for long.


“Today’s mind-popping, eye-popping technology in 18 months will be as blasé and old as a 1980 pair of double-knit trousers,” says Paul Saffo, a futurist and managing director of foresight at Discern Analytics in San Francisco. “Our expectations are a moving target.”

 

The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.

 

Entrepreneurs like iRobot’s Mr. Angle aren’t fussing over whether today’s clever gadgets represent “true” AI, or worrying about when, or if, their robots will ever be self-aware. Starting with Roomba, which marks its 10th birthday this month, his company has produced a stream of practical robots that do “dull, dirty, or dangerous” jobs in the home or on the battlefield. These range from smart machines that clean floors and gutters to the thousands of PackBots and other robot models used by the US military for reconnaissance and bomb disposal.


While robots in particular seem to fascinate humans, especially if they are designed to look like us, they represent only one visible form of AI. Two other developments are poised to fundamentally change the way we use the technology: voice recognition and self-driving cars.

oliviersc's comment, October 3, 2012 11:19 AM
A quick tour through my private Circles on Google+. Thanks for this article!
Scooped by Dr. Stefan Gruenwald

How Many Computers to Identify a Cat? 16,000 Cores

A neural network of computer processors, fed millions of YouTube videos, taught itself to recognize cats, a feat of significance for fields like speech recognition.

 

Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.

 

There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.

 

The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.

Scooped by Dr. Stefan Gruenwald

In future: DNA analysis alone might generate a picture of your face


It might be time to get Gattaca-level paranoid about leaving your DNA all over the place, as geneticists are getting closer to being able to determine what your face looks like simply by analyzing your genetic code.

 

Whether you believe in nature or nurture, the physical structure of your body is defined almost entirely by your genes. There will be some variation, of course, depending on your age, your weight, how well you take care of yourself, and how many times you've gotten punched in the face, but things like the space between your eyes, the height of your cheekbones, and the size of your nose are all preset and encoded in your DNA. This is why twins can look identical, and also why siblings can look similar: it's shared genetics.

 

Methods for figuring out eye color, hair color and skin color from DNA are fairly well established, and geneticists are now working on the next step, which is extracting the locations of "facial landmarks" from a DNA sample and using them to reconstruct the shape of someone's face from their genetic code alone. We should stress that this research is very, very preliminary, but we should also stress that there were some results, albeit small effects correlated with a limited number of genes.

 

For example, the researchers found that a variant of the gene TP63 predicted a narrowing of about nine millimeters in the gap between the centers of the eye sockets. A gene called PRDM16 is associated with nose width and nose height, while a gene called PAX3 influences the position of the bridge of the nose. All of these things are measurable and predictable, and require nothing more than (say) a sample of blood from a crime scene.
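To make the arithmetic of such predictions concrete, here is a minimal additive-model sketch. The roughly 9 mm TP63 effect is taken from the article; the baseline distance, the genotype encoding, and the function names are assumptions for illustration only.

```python
# Hypothetical additive model: landmark distance = baseline + sum of
# per-gene effects weighted by how many effect alleles are carried.
BASELINE_EYE_GAP_MM = 63.0            # assumed population average
EFFECTS_MM = {"TP63": -9.0}           # narrowing reported in the article

def predict_eye_gap(genotype):
    """genotype maps gene name -> copies of the effect allele (0, 1, 2)."""
    return BASELINE_EYE_GAP_MM + sum(
        EFFECTS_MM[gene] * copies
        for gene, copies in genotype.items() if gene in EFFECTS_MM)

print(predict_eye_gap({"TP63": 0}))   # 63.0 mm
print(predict_eye_gap({"TP63": 1}))   # 54.0 mm
```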

Scooped by Dr. Stefan Gruenwald

Researchers develop technique to remotely control cockroaches

Researchers from North Carolina State University have developed a technique that uses an electronic interface to remotely control, or steer, cockroaches.

 

The new technique developed by Alper Bozkurt's team works by fitting each roach (they used Madagascar hissing cockroaches) with a low-cost, lightweight, commercially available chip with a wireless receiver and transmitter. Weighing 0.7 grams, the cockroach backpack also contains a microcontroller that monitors the interface between the implanted electrodes and the tissue to avoid potential neural damage. The microcontroller is wired to the roach's antennae and cerci.

The cerci are sensory organs on the roach's abdomen, normally used to detect movement in the air that could indicate an approaching predator, causing the roach to scurry away. But the researchers use the wires attached to the cerci to spur the roach into motion: the roach thinks something is sneaking up behind it and moves forward. The wires attached to the antennae serve as electronic reins, injecting small charges into the roach's neural tissue. The charges trick the roach into thinking that its antennae are in contact with a physical barrier, which effectively steers it in the opposite direction.
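The steering logic amounts to a very small control loop. The sketch below mimics the described behavior in simulated form; the channel names, pulse length, and print-based "stimulation" are invented stand-ins, not the team's actual firmware.

```python
import time

def pulse(channel, duration_s=0.05):
    """Stand-in for delivering a small charge on one electrode channel."""
    print(f"stimulating {channel} for {duration_s * 1000:.0f} ms")
    time.sleep(duration_s)

def forward():
    # Stimulating the cerci mimics a predator sneaking up from behind,
    # so the roach scurries forward.
    pulse("cerci")

def steer(direction):
    # Stimulating one antenna mimics a barrier on that side, so the
    # roach turns the opposite way: left antenna -> turn right.
    pulse("left_antenna" if direction == "right" else "right_antenna")

for command in ["forward", "left", "forward", "right"]:
    forward() if command == "forward" else steer(command)
```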

Scooped by Dr. Stefan Gruenwald

IBM Envisions Watson as Supercharged Siri for Businesses


International Business Machines Corp. (IBM) researchers spent four years developing Watson, the computer smart enough to beat the champions of the quiz show “Jeopardy!” Now they’re trying to figure out how to get those capabilities into the phone in your pocket.

 

Finding additional uses for Watson is part of IBM’s plan to tap new markets and boost revenue from business analytics to $16 billion by 2015. After mastering history and pop culture for its “Jeopardy!” appearance, the system is crunching financial information for Citigroup Inc. and cancer data for WellPoint Inc. The next version, dubbed “Watson 2.0,” would be energy-efficient enough to work on smartphones and tablets. IBM expects to generate billions in sales by putting Watson to work in finance, health care, telecommunications and other areas. The computer, which 15 million people saw beat former “Jeopardy!” champions Ken Jennings and Brad Rutter, is the company’s most high-profile product since it sold its personal-computer unit to Lenovo Group Ltd. (992) seven years ago.

 

The challenge for IBM is overcoming the technical obstacles to making Watson a handheld product, and figuring out how to price and deliver it. Watson’s nerve center is 10 racks of IBM Power750 servers running in Yorktown Heights, New York, that have the same processing power as 6,000 desktop computers. Even though most of the computations occur at the data center, a Watson smartphone application would still consume too much power for it to be practical today. Researchers also need to add voice and image recognition to the service so that it can respond to real-world input, said Katharine Frase, vice president of industry research at Armonk, New York-based IBM.

 

Apple made Siri the focus of its marketing of the iPhone 4S, which debuted last year. The software is touted as a personal assistant that can answer a wide range of spoken questions -- “Do I need an umbrella tomorrow?” -- and put appointments in a calendar.

Siri has become a defining characteristic of the iPhone, though it’s also drawn complaints. In a June survey by Minneapolis-based Piper Jaffray & Co., Siri was found to resolve requests correctly less than 70 percent of the time.

Rescooped by Dr. Stefan Gruenwald from Science News

‘Superorganisations’ – Learning from Nature’s Networks


Fritjof Capra, in his book ‘The Hidden Connections’ applies aspects of complexity theory, particularly the analysis of networks, to global capitalism and the state of the world; and eloquently argues the case that social systems such as organisations and networks are not just like living systems – they are living systems. The concept and theory of living systems (technically known as autopoiesis) was introduced in 1972 by Chilean biologists Humberto Maturana and Francisco Varela.

 

This is a complete version of a ‘long-blog’ written by Al Kennedy on behalf of ‘The Nature of Business’ blog and BCI: Biomimicry for Creative Innovation www.businessinspired...


Via Peter Vander Auwera, ddrrnt, Spaceweaver, David Hodgson, pdjmoo, Sakis Koukouvis
Lorien Pratt's curator insight, January 4, 11:29 PM

A great resource in the Decision Intelligence for Sustainability space.

Monica S Mcfeeters's curator insight, January 18, 8:57 PM

A look at how to go organic with business models in a tech age...

Nevermore Sithole's curator insight, March 14, 9:01 AM

Learning from Nature’s Networks

Scooped by Dr. Stefan Gruenwald

Robot learns self-awareness for the first time


“Only humans can be self-aware”: another myth bites the dust. Yale roboticists have programmed Nico, a robot, to be able to recognize itself in a mirror. Why is this important? Because robots will need to learn about themselves and how they affect the world around them — especially people.

 

Using knowledge that it has learned about itself, Nico is able to use a mirror as an instrument for spatial reasoning, allowing it to accurately determine where objects are located in space based on their reflections, rather than naively believing them to exist behind the mirror.
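The geometry behind that spatial reasoning is simple: an object seen in a mirror appears to sit behind the glass, and its true position is that apparent point reflected back across the mirror plane. Below is a minimal sketch of the calculation (not Nico's actual code).

```python
import numpy as np

def true_location(apparent, mirror_point, normal):
    """Reflect an apparent (behind-the-mirror) point across the mirror
    plane, given any point on the plane and the plane's normal vector."""
    p = np.asarray(apparent, dtype=float)
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    signed_dist = np.dot(p - mirror_point, n)
    return p - 2.0 * signed_dist * n

# Mirror is the plane x = 0; an object appears 1 m behind the glass.
print(true_location([-1.0, 0.5, 0.2], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
# -> [1.  0.5 0.2]: the object actually sits 1 m in front of the mirror.
```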

 

Nico’s programmer, roboticist Justin Hart, a member of the Social Robotics Lab, focuses his thesis research primarily on “robots autonomously learning about their bodies and senses,” but he also explores human-robot interaction, “including projects on social presence, attributions of intentionality, and people’s perception of robots.”

 

Recently, the lab (along with MIT, Stanford, and USC) won a $10 million grant from the National Science Foundation to create “socially assistive” robots that can serve as companions for children with special needs. These robots will help with everything from cognitive skills to getting the right amount of exercise.

Scooped by Dr. Stefan Gruenwald

This is what Wall Street’s terrifying bot invasion looks like


This is what high-frequency trading looks like, when specially programmed computers make massive bets at lightning speed, Motherboard reports. Created by Nanex, the GIF charts the rise of HFT trading volumes across all U.S. stock exchanges between 2007 and 2012. The initial murmur, the brewing storm, the final detonation: Not just unsettling, it’s terrifying.

 

As Motherboard notes, we don’t know what the long-term consequences of all this hyper-volume, as depicted in the Nanex GIF, will be, nor what systemic risks are created by the market’s ongoing evolution from human traders to rapid-fire AI. Sometimes things go wrong: a software glitch, an algorithm gone rogue, and the music stops, as it did last week when Knight Capital lost $10 million a minute after its trading platform went haywire, or during the infamous Flash Crash, when the Dow dropped 1,000 points in mere minutes.

Rescooped by Dr. Stefan Gruenwald from Science News

Musical Turing test: which audio clip was composed by a computer?


Were you fooled by the machine? Listen to five audio clips and try to guess which piece of music was dreamed up inside the brain of a computer.


Via Mário Florido, Sakis Koukouvis
Scooped by Dr. Stefan Gruenwald

Stanford researchers produce first complete computer model of an organism


In a breakthrough effort for computational biology, the world's first complete computer model of an organism has been completed. A team led by Stanford bioengineering Professor Markus Covert used data from more than 900 scientific papers to account for every molecular interaction that takes place in the life cycle of Mycoplasma genitalium – the world's smallest free-living bacterium.

 

By encompassing the entirety of an organism in silico, the paper fulfills a longstanding goal for the field. Not only does the model allow researchers to address questions that aren't practical to examine otherwise, it represents a stepping-stone toward the use of computer-aided design in bioengineering and medicine.

Rescooped by Dr. Stefan Gruenwald from anti dogmanti

Google simulates brain networks to recognize speech and images


How can high-level, class-specific feature detectors be built from unlabeled data alone? For example, is it possible to learn a face detector using only unlabeled images? This summer Google set a new landmark in the field of artificial intelligence with software that taught itself to recognize cats, people, and other things simply by watching YouTube videos (so-called "Unsupervised Self-Taught Software"). That technology, modeled on how brain cells operate, is now being put to work making Google’s search smarter, with speech recognition being the first service to benefit.

 

Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind — and the network is said to have learned something. Neural networks have been used for decades in areas where machine learning is applied, such as chess-playing software or face recognition. Google’s engineers have found ways to put more computing power behind this approach than was previously possible, creating neural networks that can learn without human assistance and are robust enough to be used commercially, not just as research demonstrations. The company’s neural networks decide for themselves which features of data to pay attention to, and which patterns matter, rather than having humans decide that, say, colors and particular shapes are of interest to software trying to identify objects.
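Learning "without human assistance" can be shown at toy scale. The sketch below uses Oja's rule, a textbook Hebbian learning rule that is vastly simpler than Google's networks but illustrates the same principle: mere exposure to data changes a neuron's connection weights until a feature detector emerges.

```python
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled "data": 2-D points stretched along the x-axis (variance 9 vs 1).
data = rng.normal(size=(5000, 2)) * [3.0, 1.0]

w = rng.normal(size=2)            # connection weights of a single neuron
eta = 0.01                        # learning rate
for x in data:
    y = w @ x                     # the neuron's response to one input
    w += eta * y * (x - y * w)    # Oja's rule: Hebbian growth + normalization

# The weights align with the dominant direction in the data (up to sign).
print(w / np.linalg.norm(w))      # approximately [±1, 0]
```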

 

Google is now using these neural networks to recognize speech more accurately, a technology increasingly important to Google’s smartphone operating system, Android, as well as the search app it makes available for Apple devices. “We got between 20 and 25 percent improvement in terms of words that are wrong,” says Vincent Vanhoucke, a leader of Google’s speech-recognition efforts. “That means that many more people will have a perfect experience without errors.” The neural net is so far only working on U.S. English, and Vanhoucke says similar improvements should be possible when it is introduced for other dialects and languages.

 

Other Google products will likely improve over time with help from the new learning software. The company’s image search tools, for example, could become better able to understand what’s in a photo without relying on surrounding text. And Google’s self-driving cars and mobile computer built into a pair of glasses could benefit from software better able to make sense of more real-world data.

 

The new technology grabbed headlines back in June of this year, when Google engineers published results of an experiment that threw 10 million images taken from YouTube videos at their simulated brain cells, running 16,000 processors across a thousand computers for 10 days without pause. A next step could be to have the same model learn the sounds of words as well. Being able to relate different forms of data like that could lead to speech recognition that gathers extra clues from video, for example, and it could boost the capabilities of Google’s self-driving cars by helping them understand their surroundings by combining the many streams of data they collect, from laser scans of nearby obstacles to information from the car’s engine.

 

Google’s work on neural networks brings us a small step closer to one of the ultimate goals of AI — creating software that can match animal or perhaps even human intelligence, says Yoshua Bengio, a professor at the University of Montreal who works on similar machine-learning techniques. “This is the route toward making more general artificial intelligence — there’s no way you will get an intelligent machine if it can’t take in a large volume of knowledge about the world,” he says. In fact, Google’s neural networks operate in similar ways to what neuroscientists know about the visual cortex in mammals, the part of the brain that processes visual information, says Bengio. “It turns out that the feature learning networks being used [by Google] are similar to the methods used by the brain that are able to discover objects that exist.”

However, he is quick to add that even Google’s neural networks are much smaller than the brain, and that they can’t perform many things necessary to intelligence, such as reasoning with information collected from the outside world.


Via Sue Tamani
Scooped by Dr. Stefan Gruenwald

An artificially intelligent gamer bot, UT^2, passes a Turing test and rates as more human-like than its human opponents


An AI virtual gamer has won the BotPrize by convincing a panel of judges that it was more human-like than half of its human opponents. The competition was sponsored by 2K Games and was set inside the virtual world of “Unreal Tournament 2004,” a first-person shooter video game.

 

“The idea is to evaluate how we can make game bots, which are non-player characters (NPCs) controlled by AI algorithms, appear as human as possible,” says Risto Miikkulainen, professor of computer science at the University of Texas at Austin. Miikkulainen created the bot, called the UT^2 game bot, with doctoral students Jacob Schrum and Igor Karpov.

Scooped by Dr. Stefan Gruenwald

Why Artificial Intelligence Has Failed And How To Fix It. The Laws Of Physics Imply 'AI' Must Be Possible

The very laws of physics imply that artificial intelligence must be possible. What's holding us up?

 

 It is uncontroversial that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. It is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

 

But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence. Why? Because, as an unknown sage once remarked, ‘it ain’t what we don’t know that causes trouble, it’s what we know for sure that just ain’t so’ (and if you know that sage was Mark Twain, then what you know ain’t so either). I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

 

Chris Daniel's curator insight, March 29, 8:29 AM

My primary acquisition from this lengthy article is knowledge of some of the barriers that AI research and development will need to overcome should it hope to progress to points that, whilst theoretically possible, are far from the current position of existing technology. AI research primarily aims to emulate human intelligence. I am not convinced that this is the only method of formulating intelligence though I see the advantage of this approach as human intelligence is the most advanced available variety. Regardless, the next decade will likely see further progress into the understanding of human intelligence, thought, and learning, and these findings will potentially have great impact on the development of artificial intelligence.

Scooped by Dr. Stefan Gruenwald

Videos of machine learning, artificial intelligence and playful machines

Scooped by Dr. Stefan Gruenwald

Interactive 3D protein structures on a virtual reality wall


How do you get to know a protein? How about from the inside out? If you ask chemistry professor James Hinton, “It’s really important that scientists as well as students are able to touch, feel, see … embrace, if you like, these protein structures.” For decades, with funding from the National Science Foundation (NSF), Hinton has used nuclear magnetic resonance (NMR) to look at protein structure and function. But he wanted to find a way to educate and engage students about his discoveries.

 

The picture above shows an example of the interactive visualization of proteins from the Protein Data Bank (PDB), using PDB browser software on the C-Wall (virtual reality wall) at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. The work was performed by Jürgen P. Schulze, project scientist, in collaboration with Jeff Milton, Philip Weber and Professor Philip Bourne of the University of California, San Diego. The software supports collaborative viewing of proteins at multiple sites on the Internet.

Sandys VR's curator insight, March 27, 2013 6:12 PM

Heard about this before, very cool use of VR!

Luis Carlos Peña Gordillo's curator insight, November 4, 2013 1:45 AM

Virtual reality for chemical visualization.

Scooped by Dr. Stefan Gruenwald

The advent of the global brain


Get ready for the global brain. That was the grand finale of a presentation on the next generation of the Internet I heard last week from Yuri Milner. G-8 leaders had a preview of Milner’s predictions a few months earlier, when he was among the technology savants invited to brief the world’s most powerful politicians in Deauville, France.

 

Milner is the technology guru most of us have never heard of. He was an early outside investor in Facebook, sinking $200 million in the company in 2009 for a 1.96 percent stake, a decision that was widely derided as crazy at the time. He was also early to spot the potential of Zynga, the gaming company, and of Groupon, the daily deals site.

 

His investing savvy propelled Milner this year onto the Forbes Rich List, with an estimated net worth of $1 billion. One reason he is not yet a household name is that he does his tech spotting from Moscow, not a city most of us look to for innovative economic ideas.

Scooped by Dr. Stefan Gruenwald

Human, AI and Robotics: Man Walks With Aid of Brain-Controlled Robotic Legs for the First Time


Walking on a treadmill is no great feat, unless your legs are being moved by a robotic device connected to your brain. A new brain-computer interface allows a person to walk using a pair of mechanical leg braces controlled by brain signals, as reported on arXiv. The device has only been tested on able-bodied people, and while it has limitations, it lays a foundation for helping people with paralysis walk again. The new device — developed by researchers at the Long Beach Veterans Affairs Medical Center and the University of California, Irvine — is controlled by electroencephalogram, or EEG, signals generated by small voltage fluctuations in the brain. The method is completely noninvasive, as the signals are measured by a cap worn on the scalp.
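Decoding intent from scalp EEG typically starts with something as simple as frequency-band power. The sketch below is a generic illustration of that first step, not the Long Beach VA / UC Irvine pipeline: power in the 8-12 Hz mu band drops when movement is imagined (mu suppression), and that drop can toggle a walk state. The sampling rate, band edges, and threshold are all assumptions.

```python
import numpy as np

FS = 256  # assumed sampling rate, Hz

def mu_band_power(window):
    """Mean spectral power in the 8-12 Hz mu band of one EEG window."""
    spectrum = np.fft.rfft(window * np.hanning(len(window)))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(np.mean(np.abs(spectrum[band]) ** 2))

def decode(window, threshold):
    # Mu suppression (low band power) is taken as "imagining walking".
    return "WALK" if mu_band_power(window) < threshold else "IDLE"

# One second of synthetic resting EEG: a strong 10 Hz rhythm plus noise.
t = np.arange(FS) / FS
rest = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(FS)
print(decode(rest, threshold=1.0))  # strong mu power -> "IDLE"
```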

Hayden Theuerkauf's curator insight, March 21, 2013 10:04 PM

This article/website shows the use of robotic limbs already being experimented with. This technology will change lives drastically: soon people will be able to have robotic legs, arms, you name it, and control these instruments with their brains. The technology is slowly starting to emerge now but should take off in five to ten years, changing the lives of people who have lost their limbs or have a disability.

cassian bulger's curator insight, March 22, 2013 8:17 AM

The article shows the use of robotic limbs that are currently being experimented with. This is an amazing technology for people who have lost their arms or legs, where previously nothing could be done to restore function to the severed limb. The release of medical robotics such as this will surely affect many lives around the world, now and in the future, as the advances in medical robotics can only improve.

Scooped by Dr. Stefan Gruenwald

Up and Down the Ladder of Abstraction


How can we design systems when we don't know what we're doing?

The most exciting engineering challenges lie on the boundary of theory and the unknown. Not so unknown that they're hopeless, but not enough theory to predict the results of our decisions. Systems at this boundary often rely on emergent behavior — high-level effects that arise indirectly from low-level interactions.

 

When designing at this boundary, the challenge lies not in constructing the system, but in understanding it. In the absence of theory, we must develop an intuition to guide our decisions. The design process is thus one of exploration and discovery.

How do we explore? If you move to a new city, you might learn the territory by walking around. Or you might peruse a map. But far more effective than either is both together — a street-level experience with higher-level guidance.

 

Likewise, the most powerful way to gain insight into a system is by moving between levels of abstraction. Many designers do this instinctively. But it's easy to get stuck on the ground, experiencing concrete systems with no higher-level view. It's also easy to get stuck in the clouds, working entirely with abstract equations or aggregate statistics.

 

This interactive essay presents the ladder of abstraction, a technique for thinking explicitly about these levels, so a designer can move among them consciously and confidently.
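In code, the bottom rung of the ladder is a single concrete run, and the next rung up abstracts over a parameter to see every run at once. The toy system below (a damped, spring-like update rule) is invented purely to show the move between levels.

```python
import numpy as np

def simulate(gain, steps=100):
    """One concrete run of a toy damped system; returns the final distance
    from rest, a crude summary of how well this gain settles it."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        v = 0.98 * (v - gain * x)   # restoring force plus damping
        x += 0.1 * v
    return abs(x)

# Ground level: experience one concrete run.
print(simulate(gain=0.5))

# One rung up the ladder: abstract over the gain, seeing every run at once.
for g in np.linspace(0.1, 1.0, 10):
    print(f"gain={g:.1f}  final |x|={simulate(g):.4f}")
```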

Scooped by Dr. Stefan Gruenwald

Immortal Avatars: Russian project seeks to create robot with human brain

Inspired by director James Cameron’s idea, Russian businessman Dmitry Itskov has launched his own Avatar project. Hundreds of researchers are involved in creating a prototype of a human-like robot that would be able to contain a human consciousness.

 

The immortality project consists of four stages, and a team of researchers in the Moscow suburb of Zelenograd is currently working on the first one. About 100 scientists are already involved in the initiative, striving to bring the concept together, and Itskov is planning to hire even more during the upcoming stages.


So far, scientists are struggling to complete the Avatar-A, a human-like robot controlled through a brain-computer interface. Itskov served as the model for the machine, and thus the robot was nicknamed Dima.

Scooped by Dr. Stefan Gruenwald

How Google's Self-Driving Car Works


Once a secret project, Google's autonomous vehicles are now out in the open, quite literally, with the company test-driving them on public roads and, on one occasion, even inviting people to ride inside one of the robot cars as it raced around a closed course.

 

Google's fleet of robotic Toyota Priuses has now logged more than 190,000 miles (about 300,000 kilometers), driving in city traffic, busy highways, and mountainous roads with only occasional human intervention. The project is still far from becoming commercially viable, but Google has set up a demonstration system on its campus, using driverless golf carts, which points to how the technology could change transportation even in the near future.

Scooped by Dr. Stefan Gruenwald

End of Chinese manufacturing and rebirth of US industry based on robotics, AI, 3D printing and nanotech


There is great concern about China’s real-estate and infrastructure bubbles. But these are just short-term challenges that China may be able to spend its way out of. The real threat to China’s economy is bigger and longer term: its manufacturing bubble.

 

Rising costs and political pressure aren’t what’s going to rapidly change the equation. The disruption will come from a set of technologies that are advancing at exponential rates and converging. These technologies include robotics, artificial intelligence, 3D printing, and nanotechnology. China has many reasons to worry, and manufacturing will undoubtedly return to the U.S. — if not in this decade then early in the next. But the same jobs that left the U.S. won’t come back: they won’t exist.

industry-index.com's curator insight, January 17, 2013 8:23 AM

Will new technologies be the solution for the rebirth of western manufacturing? 

Rescooped by Dr. Stefan Gruenwald from Science News

Artificial Cerebellum in Robotics Developed


University of Granada researchers have developed an artificial cerebellum (a biologically-inspired adaptive microcircuit) that controls a robotic arm with human-like precision. The cerebellum is the part of the human brain that controls the locomotor system and coordinates body movements.
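A classic cerebellum-inspired scheme for this kind of control is feedback-error learning (after Kawato), in which an adaptive feedforward term gradually learns the arm's dynamics from the feedback controller's corrections. The sketch below illustrates that general idea on a one-joint arm with unknown mass; it is not the Granada group's microcircuit, and all gains and constants are assumptions.

```python
import numpy as np

dt, true_mass = 0.01, 2.0
mass_hat = 0.0                   # adaptive parameter: the "cerebellum"
x = v = 0.0
errors = []

for step in range(3000):
    t = step * dt
    xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)  # desired trajectory
    fb = 400.0 * (xd - x) + 40.0 * (vd - v)        # crude feedback torque
    ff = mass_hat * ad                             # learned feedforward torque
    a = (fb + ff) / true_mass                      # arm dynamics: F = m a
    v += a * dt
    x += v * dt
    mass_hat += 0.005 * fb * ad    # feedback error trains the feedforward
    errors.append(abs(xd - x))

print(f"learned mass ~ {mass_hat:.2f} (true mass {true_mass})")
print(f"mean |tracking error|: first 300 steps {np.mean(errors[:300]):.5f}, "
      f"last 300 steps {np.mean(errors[-300:]):.5f}")
```

As the adaptive term takes over, the feedback correction (and with it the tracking error) shrinks, which is exactly the division of labor attributed to the cerebellum in this family of models.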


Via Sakis Koukouvis