Amazing Science

Project to reverse-engineer the brain to make computers to think like humans


In 1990, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history.

 

Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the defense department’s famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led by David Cox, a biologist and computer scientist at Harvard University, Tai Sing Lee, a computer scientist at Carnegie Mellon University, and Andreas Tolias, a neuroscientist at the Baylor College of Medicine. Each team has proposed its own five-year approach to the problem.

 

“It’s a substantial investment because we think it’s a critical challenge, and [it’ll have a] transformative impact for the intelligence community as well as the world more broadly,” says Jacob Vogelstein at IARPA, who manages the MICrONS program.

 

MICrONS, as a part of President Obama’s BRAIN Initiative, is an attempt to push forward the status quo in brain-inspired computing. A great deal of technology today already relies on a class of algorithms called artificial neural networks, which, as their name would suggest, are inspired by the architecture (or at least what we know about the architecture) of the brain. Thanks to significant increases in computing power and the availability of vast amounts of data on the Internet, Facebook can identify faces, Siri can recognize voices, cars can self-navigate, and computers can beat humans at games like chess. These algorithms, however, are still primitive, relying on a highly simplified process of analyzing information for patterns.

Based on models dating back to the 1980s, neural networks tend to perform poorly in cluttered environments, where the object the computer is trying to identify is hidden among a large number of objects, many of which are overlapping or ambiguous. These algorithms do not generalize well, either. Seeing one or two examples of a dog, for instance, does not teach the computer how to identify all dogs.

 

Humans, on the other hand, seem to overcome these challenges effortlessly. We can make out a friend in a crowd, focus on a familiar voice in a noisy setting, and deduce patterns in sounds or an image based on just one or a handful of examples. We are constantly learning to generalize without the need for any instructions. And so the MICrONS researchers have turned to the brain to find what these models are missing. “That’s the smoking gun,” Cox says.

 

While neural networks retain elements of the architecture found in the brain, the computations they use are not copied directly from any algorithms that neurons use to process information. In other words, the ways in which current algorithms represent, transform, and learn from data are engineering solutions, determined largely by trial and error. They work, but scientists do not really know why—certainly not well enough to define a way to design a neural network. Whether this neural processing is similar to or different from corresponding operations in the brain remains unknown. “So if we go one level deeper and take information from the brain at the computational level and not just the architectural level, we can enhance those algorithms and get them closer to brain-like performance,” Vogelstein says.


Implantable ‘stentrode’ allows paralyzed patients to mind-control an exoskeleton


A DARPA-funded research team has created a novel minimally invasive brain-machine interface and recording device that can be implanted into the brain through blood vessels, reducing the need for invasive surgery and the risks associated with breaching the blood-brain barrier when treating patients for physical disabilities and neurological disorders.


The new technology, developed by University of Melbourne medical researchers under DARPA’s Reliable Neural-Interface Technology (RE-NET) program, promises to give people with spinal cord injuries new hope to walk again.


The brain-machine interface consists of a stent-based electrode (stentrode), which is implanted within a blood vessel next to the brain, and records the type of neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton or to control bionic limbs.


The new device is the size of a small paperclip and will be implanted in the first in-human trial at The Royal Melbourne Hospital in 2017. The research results, published Monday Feb. 8 in Nature Biotechnology, show the device is capable of recording high-quality signals emitted from the brain’s motor cortex without the need for open brain surgery.


“We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high risk open brain surgery,” said Thomas Oxley, principal author and neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neurosciences and the University of Melbourne.


Stroke and spinal cord injuries are leading causes of disability, affecting 1 in 50 people. There are 20,000 Australians with spinal cord injuries, with the typical patient being a 19-year-old male, and about 150,000 Australians left severely disabled after stroke.


Can We Decipher the Language of the Brain?


Understanding how brains work is one of the greatest scientific challenges of our times, but despite the impression sometimes given in the popular press, researchers are still a long way from some basic levels of understanding. A project recently funded by the Obama administration's BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative is one of several approaches promising to deliver novel insights by developing new tools that involve a marriage of nanotechnology and optics.


There are close to 100 billion neurons in the human brain. Researchers know a lot about how these individual cells behave, primarily through “electrophysiology,” which involves sticking fine electrodes into cells to record their electrical activity. We also know a fair amount about the gross organization of the brain into partially specialized anatomical regions, thanks to whole-brain imaging technologies like functional magnetic resonance imaging (fMRI), which measure how blood oxygen levels change as regions that work harder demand more oxygen to fuel metabolism. We know little, however, about how the brain is organized into distributed “circuits” that underlie faculties like memory or perception. And we know even less about how, or even if, cells are arranged into “local processors” that might act as components in such networks.


We also lack knowledge regarding the “code” large numbers of cells use to communicate and interact. This is crucial, because mental phenomena likely emerge from the simultaneous activity of many thousands, or millions, of interacting neurons. In other words, neuroscientists have yet to decipher the “language” of the brain. “The first phase is learning what the brain's natural language is. If your resolution [in a hypothetical language detector] is too coarse, so you're averaging over paragraphs, or chapters, you can't hear individual words or discern letters,” says physicist Michael Roukes of the California Institute of Technology, one of the authors of the “Brain Activity Map” (BAM) paper published in 2012 in Neuron that inspired the BRAIN Initiative. “Once we have that, we could talk to the brain in complete sentences.”


This is the gap BRAIN aims to address. Launched in 2014 with an initial pot of more than $100 million, the idea is to encourage the development of new technologies for interacting with massively greater numbers of neurons than has previously been possible. The hope is that once researchers understand how the brain works (with cellular detail but across the whole brain), they'll have a better understanding of neurodegenerative diseases like Alzheimer's, and psychiatric disorders like schizophrenia or depression.


Today’s state-of-the-art technology in the field is optical imaging, mainly using calcium indicators—fluorescent proteins introduced into cells via genetic tweaks, which emit light in response to the calcium-level changes caused by neurons firing. These signals are recorded using special microscopes that also deliver light, since the indicators must absorb photons before they can emit them. This can be combined with optogenetics, a technique that genetically modifies cells so they can be activated using light, allowing researchers to both observe and control neural activity.


Some incredible advances have already been made using these tools. For example, researchers at the Howard Hughes Medical Institute’s Janelia Farm Research Campus, led by Misha Ahrens, published a study in 2013 in Nature Methods in which they recorded activity from almost all of the neurons of zebra fish larvae brains. Zebra fish larvae are used because they are easily genetically tweaked, small and, crucially, transparent. The researchers refined a technique called light-sheet microscopy, which uses lasers to produce planes of light that illuminate the brain one cross-section at a time. The fish were genetically engineered with calcium indicators so the researchers were able to generate two-dimensional pictures of neural activity, which they then stacked into three-dimensional images, capturing 90 percent of the activity of the zebra fish’s 100,000 brain cells.
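
As a rough, hypothetical sketch of the reconstruction step described above, the snippet below stacks the 2D planes a light-sheet microscope produces into a 3D volume and converts a raw calcium-indicator trace into the ΔF/F signal commonly used as a proxy for neural activity. It is an illustration only, not the Janelia pipeline; the array shapes and the baseline definition are assumptions.

```python
import numpy as np

def stack_planes(planes):
    """Stack per-depth 2D images (y, x) into one 3D volume (z, y, x)."""
    return np.stack(planes, axis=0)

def delta_f_over_f(trace, baseline_frames=100):
    """Convert a raw fluorescence time series into dF/F using an early-frame baseline."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

# Toy example: 20 depth planes of 64x64 pixels, then a fake, slowly brightening cell
volume = stack_planes([np.zeros((64, 64)) for _ in range(20)])
print(volume.shape)                            # (20, 64, 64)
trace = 1.0 + np.linspace(0.0, 0.5, 500)
print(round(delta_f_over_f(trace)[-1], 3))     # positive dF/F at the last frame
```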



Memory capacity of brain is 10 times more than previously thought


Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient, and could help engineers build computers that are incredibly powerful but also conserve energy.


“This is a real bombshell in the field of neuroscience,” says Terry Sejnowski, Salk professor and co-senior author of the paper, which was published in eLife. “We discovered the key to unlocking the design principle for how hippocampal neurons function with low energy but high computation power. Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte (1 quadrillion or 10^15 bytes), in the same ballpark as the World Wide Web.”


“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”


The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.


At first, the researchers didn’t think much of this duplication, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.


“We were amazed to find that the difference in the sizes of the pairs of synapses was very small, on average, only about eight percent different in size. No one thought it would be such a small difference. This was a curveball from nature,” says Bartol. Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.


It was known before that the range in sizes between the smallest and largest synapses was a factor of 60, and that most synapses are small. But armed with the knowledge that synapse sizes could differ in increments as small as eight percent across that 60-fold range, the team determined there could be about 26 categories of synapse sizes, rather than just a few.


“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.
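
As a quick check of the arithmetic above, 26 distinguishable synapse sizes correspond to log2(26) ≈ 4.7 bits. The sketch below reproduces that number and then shows, under an assumed whole-brain synapse count on the order of 10^15 (a figure not given in the article), how estimates of this kind land in the petabyte range.

```python
import math

size_categories = 26
bits_per_synapse = math.log2(size_categories)   # ~4.70 bits, as quoted in the article
print(f"bits per synapse: {bits_per_synapse:.2f}")

# Back-of-envelope scaling (assumption for illustration, not from the study):
# roughly 10**15 synapses in a human brain.
assumed_synapses = 1e15
total_petabytes = assumed_synapses * bits_per_synapse / 8 / 1e15
print(f"rough whole-brain capacity: {total_petabytes:.2f} petabytes")
```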



UCSD spinoffs create lab-quality portable 64-channel BCI headset


The first dry-electrode, portable 64-channel wearable brain-computer interface (BCI) has been developed by bioengineers and cognitive scientists associated with the UCSD Jacobs School. The system is comparable to state-of-the-art equipment found in research laboratories, but with portability, allowing for tracking brain states throughout the day and augmenting the brain’s capabilities, the researchers say. Current BCI devices require gel-based electrodes or offer fewer than 64 channels.


The dry EEG sensors are easier to apply than wet sensors, while still providing high-density/low-noise brain activity data, according to the researchers. The headset includes a Bluetooth transmitter, eliminating the usual array of wires. The system also includes a sophisticated software suite for data interpretation and analysis for applications including research, neuro-feedback, and clinical diagnostics.


“This is going to take neuroimaging to the next level by deploying on a much larger scale,” including use in homes and even while driving, said Mike Yu Chi, a Jacobs School alumnus and CTO of Cognionics who led the team that developed the headset.


The researchers also envision a future when neuroimaging can be used to bring about new therapies for neurological disorders. “We will be able to prompt the brain to fix its own problems,” said Gert Cauwenberghs, a bioengineering professor at the Jacobs School and a principal investigator on a National Science Foundation grant. “We are trying to get away from invasive technologies, such as deep brain stimulation and prescription medications, and instead start up a repair process by using the brain’s synaptic plasticity.”


“In 10 years, using a brain-machine interface might become as natural as using your smartphone is today,” said Tim Mullen, a UC San Diego alumnus, lead author on the study and a former researcher at the Swartz Center for Computational Neuroscience at UC San Diego.


Researchers find a way to decipher words in the mind before they are even spoken


A team of researchers working at the Kyushu Institute of Technology in Japan, led by Yamazaki Toshimasa, has, according to the Japanese newspaper Nishinippon, found a way to read certain brain waves and match them against a database, allowing recognition of words before a person speaks them. The newspaper reported that the team also presented a paper describing their findings at a recent conference organized by the Institute of Electronics, Information and Communication Engineers.


Over the past half-century, scientists have tried many approaches to read the mind, whether for good or nefarious purposes—thus far, none has led to success, though some have claimed some progress has been made. In this latest bit of news, the team in Japan enlisted the assistance of 12 volunteers of various ages, asking each to undergo EEG scans while they thought about words and then spoke them out loud. The initial stages resulted in the buildup of brain patterns in a database—later, as each person recited words, the researchers read their brain waves to see if they could identify the words they were about to speak by comparing current brain wave patterns with those in the database. Similar research has been done before, but this time, the researchers looked specifically at brain waves emanating from the Broca area—which is a part of the brain where formation of words occurs before they are sent to other parts of the brain that are used to actually speak them. By limiting the vocabulary, the researchers found they could correctly interpret the words a person was about to speak (up to 2 seconds beforehand), approximately 25 percent of the time. They also found they could up that percentage to near 90 percent if they focused instead on just (Japanese) characters, or syllables.
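
The article describes the approach only at a high level: record EEG over the Broca area while a word is being prepared, store the resulting patterns in a database, then classify new recordings by comparing them against that database. The sketch below is a minimal, hypothetical version of such template matching; the team's actual features and classifier are not described here, so the per-word averaged templates and the correlation-based similarity are assumptions made purely for illustration.

```python
import numpy as np

def build_templates(trials):
    """Average the EEG epochs recorded for each word into one template per word.
    `trials` maps a word to a list of (channels x samples) arrays."""
    return {word: np.mean(np.stack(epochs), axis=0) for word, epochs in trials.items()}

def classify(epoch, templates):
    """Return the word whose stored template correlates best with the new epoch."""
    def similarity(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]
    return max(templates, key=lambda word: similarity(epoch, templates[word]))

# Toy usage with random data standing in for real Broca-area recordings
rng = np.random.default_rng(0)
trials = {w: [rng.standard_normal((8, 256)) for _ in range(5)] for w in ("hai", "iie")}
templates = build_templates(trials)
print(classify(trials["hai"][0], templates))   # most likely prints "hai"
```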


The paper noted that the research benefited from using Japanese-speaking volunteers, because every character in that language contains a vowel; the lack of such structure in many Western languages, it has been noted, makes it more difficult for researchers in the field working with volunteers who speak English, for example.


Toshimasa and his team are reportedly optimistic about improving both their accuracy and the number of words they will be able to recognize as their research continues, perhaps going so far as developing a system capable of deciphering the entirety of the Japanese language, making it possible, for example, for people in a coma to speak, or for conversations to take place in environments where sound cannot travel, such as outer space.



Brain scans show compulsive gamers have hyperconnected neural networks


Brain scans from nearly 200 adolescent boys provide evidence that the brains of compulsive video game players are wired differently. Chronic video game play is associated with hyperconnectivity between several pairs of brain networks. Some of the changes are predicted to help game players respond to new information.


Other changes are associated with distractibility and poor impulse control. The research, a collaboration between the University of Utah School of Medicine, and Chung-Ang University in South Korea, was published online in Addiction Biology on Dec. 22, 2015.


“Most of the differences we see could be considered beneficial. However, the good changes could be inseparable from problems that come with them,” says senior author Jeffrey Anderson, M.D., Ph.D., associate professor of neuroradiology at the University of Utah School of Medicine.


Those with Internet gaming disorder are obsessed with video games, often to the extent that they give up eating and sleeping to play. This study reports that in adolescent boys with the disorder, certain brain networks that process vision or hearing are more likely to have enhanced coordination to the so-called salience network.


The job of the salience network is to focus attention on important events, poising that person to take action. In a video game, enhanced coordination could help a gamer to react more quickly to the rush of an oncoming fighter. And in life, to a ball darting in front of a car, or an unfamiliar voice in a crowded room.


Neuroscientists Can Now Predict Intelligence From Brain Activity


Humans have a love/hate relationship with the cliques, clades, and classes that compartmentalize their world. That tension forms the backbone of so much dystopian sci-fi: The protagonist of Divergent is special because she doesn’t fit into her society’s rigid castes of personality traits; Minority Report is all about the follies of judging people before they act. These stories are fun to think about in part because they’re fiction, not fact. But now that neuroscientists have used maps of people’s brains to accurately predict intelligence, reality creeps ever so much closer to fiction.


By intelligence, in this case, the scientists mean abstract reasoning ability, which they inferred by mapping and analyzing the connections within people’s brains. But the study, published today in Nature Neuroscience, is compelling because it gets at a fundamental and very uncomfortable truth: Some brains are better than others at certain things, simply because of the way they’re wired. And now, scientists are closer to being able to determine precisely which brains those are, and how they got that way.



Pulsed laser light turns whole-brain activity on and off


By flashing high-frequency (40 to 100 pulses per second) optogenetic lasers at the brain’s thalamus, scientists were able to wake up sleeping rats and cause widespread brain activity. In contrast, flashing the laser at 10 pulses per second suppressed the activity of the brain’s sensory cortex and caused rats to enter a seizure-like state of unconsciousness.


“We hope to use this knowledge to develop better treatments for brain injuries and other neurological disorders,” said Jin Hyung Lee, Ph.D., assistant professor of neurology, neurosurgery, and bioengineering at Stanford University, and a senior author of the study, published in the open-access journal eLife.


Located deep inside the brain, the thalamus regulates arousal, acting as a relay station to the cortex for neural signals from the body. Damage to neurons in the central part of the thalamus may lead to problems with sleep, attention, and memory.


The observations used a combination of optogenetics and whole-brain functional MRI (fMRI) — known as “ofMRI” — to detect overall effects on the brain, along with EEG and single-unit cell recordings. The researchers noted in the paper that “using targeted, temporally precise optogenetic stimulation in the current study allowed us to selectively excite a single group of neuronal elements and identify their specific role in creating distinct modes of network function.” That could not be achieved with conventional electrode stimulation, the researchers say.

They explain that this method may allow direct-brain stimulation (DBS) therapeutic methods to be optimized in the clinic “for a wide range of neurological disorders that currently lack such treatment.” “This study takes a big step towards understanding the brain circuitry that controls sleep and arousal,” said Yejun (Janet) He, Ph.D., program director at NIH’s National Institute of Neurological Disorders and Stroke (NINDS), which partially funded the study.


First language wires brain for later language-learning


Research also demonstrates brain's plasticity and ability to adapt to new language environments


You may believe that you have forgotten the Chinese you spoke as a child, but your brain hasn’t. Moreover, that “forgotten” first language may well influence what goes on in your brain when you speak English or French today.


In a paper published today in Nature Communications, researchers from McGill University and the Montreal Neurological Institute describe their discovery that even brief, early exposure to a language influences how the brain processes sounds from a second language later in life, even when the first language learned is no longer spoken.


It is an important finding because this research tells scientists not only how the brain becomes wired for language, but also how that hardwiring can change and adapt over time in response to new language environments. The research has implications for our understanding of how brain plasticity functions, and may also inform educational practices geared to different types of learners.


The researchers asked three groups of children (aged 10 - 17) with very different linguistic backgrounds to perform a task that involved identifying French pseudo-words (such as vapagne and chansette). One group was born and raised in unilingual French-speaking families. The second group were adopted from China into a French-speaking family before age three, stopped speaking Chinese, and from that point on heard and used only French. The third group were fluently bilingual in Chinese and French. As the children responded to the words they heard, the researchers used functional magnetic resonance imaging (fMRI) to look at which parts of their brains were being activated.


Although all groups performed the tasks equally well, the areas of the brain that were activated differed between the groups. In monolingual French children with no exposure to Chinese, the areas expected to be involved in processing language-associated sounds, notably the left inferior frontal gyrus and anterior insula, were activated. However, among both the children who were bilingual (Chinese/French) and those who had been exposed to Chinese as young infants and had then stopped speaking it, additional areas of the brain were activated, particularly the right middle frontal gyrus, left medial frontal cortex, and bilateral superior temporal gyrus.


The researchers found that the Chinese children who had been adopted into unilingual French families and no longer spoke Chinese, and so were functionally unilingual at the time of testing, still had brains that processed language in a way that is similar to bilingual children.


Shady Science: How the Brain Remembers Colors


A new study finds that while the human brain can distinguish between millions of colors, it has difficulty remembering specific shades. For example, most people can easily tell the difference between azure, navy and ultramarine, but when it comes to remembering these shades, people tend to label them all as blue, the study found. This tendency to lump colors together could explain why it's so hard to match the color of house paint based on memory alone, the researchers said.

 

Many cultures have the same color words or categories, said Jonathan Flombaum, a cognitive psychologist at Johns Hopkins University in Baltimore. "But at the same time, there's a lot of debate around the role those categories play in the perception of color," he said.

 

In the study, Flombaum and his colleagues conducted four experiments on four different groups of people. In the first experiment, they asked people to look at a color wheel with 180 different hues, and asked them to find the best name for each color. The exercise was designed to find the perceived boundaries between colors, the researchers said. In a second experiment, the scientists showed different people the same colors, but this time they asked them to find the "best example" of a particular color.

 

For a third experiment, the researchers showed participants colored squares, and asked them to select the best match on the color wheel. In a fourth experiment, another group of participants completed the same task, but there was a delay of 90 milliseconds between when each color square was displayed and when they were asked to select the best match on the color wheel.

 

The results revealed that categories are indeed important in how people identify and remember colors. The participants who were asked to name the colors reliably saw five hues: blue, yellow, pink, purple and green. Most of the colors were given one name, but ambiguous colors got two labels, such as blue and green. "Where that fuzzy naming happened, those are the boundaries" between colors, Flombaum explained. In addition, people tended to choose the same shades as the best example of each color.

 

But what was really striking was how the people in the memory experiment remembered the colors they saw, the scientists said. The researchers expected that the participants' responses for what colors they had seen would reflect a bell curve centered on the correct color. But instead, they found that the distribution of responses was skewed toward the "best example" of the color they had seen, not the true color.

 

The findings suggest that the brain remembers colors as discrete categories as well as a continuum of shades, and combines these representations to produce a memory. There could be many reasons for this, but it likely boils down to efficiency, Flombaum said. "Most of the time, what we care about is the category," he said.
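
A toy way to picture that combination is a weighted pull of the remembered hue toward the category's best example, which reproduces the skew the researchers observed. The sketch below is only an illustration of that idea, not the authors' model; the hue values and the weight are invented.

```python
def remembered_hue(true_hue, prototype_hue, category_weight=0.3):
    """Toy model: memory mixes the seen hue with the category prototype,
    biasing recall toward the prototype rather than centering on the truth."""
    return (1 - category_weight) * true_hue + category_weight * prototype_hue

# A greenish-blue at hue 200 (on a 0-360 wheel), remembered against a 'blue'
# prototype assumed to sit at hue 220: recall drifts toward the prototype.
print(remembered_hue(200.0, 220.0))   # 206.0
```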

 



Taste comes from the brain, not the tongue, scientists discover


Researchers in the US have turned taste on and off in mice simply by activating and silencing certain brain cells. This demonstrates for the first time that taste is hardwired in the brain, and not dictated by our tastebuds, flipping our previous understanding of how taste works on its head.

It was previously thought that the taste receptors on our tongue perceived the five basic tastes – sweet, salty, sour, bitter, and umami – and then passed these messages onto our brain, where it registered what we'd just tasted. But the new study shows that although our tongues do detect the presence of certain chemicals, it's our brains that perceive flavor.


“Taste, the way you and I think of it, is ultimately in the brain,” said lead researcher Charles S. Zuker from Columbia University Medical Center. “Dedicated taste receptors in the tongue detect sweet or bitter and so on, but it’s the brain that affords meaning to these chemicals.”


Previous work by Zuker's lab discovered that our tongue has dedicated receptors for each taste, and that each class of receptors sends a specific signal to the brain. More recently, the team built on this by showing that in addition to dedicated receptors, there are unique sets of brain cells – each in different locations – that receive these signals. (In the study's accompanying image, the bitter neurons appear in red and the sweet neurons in aqua.)


In this study, they decided to play with these brain cells and see if they could activate or deactivate them in order to trick mice into thinking they were tasting something sweet or bitter, without them actually tasting either.


"In this study, we wanted to know if specific regions in the brain really represent sweet and bitter. If they do, silencing these regions would prevent the animal from tasting sweet or bitter, no matter how much we gave them," said Zuker. "And if we activate these fields, they should taste bitter or sweet, even though they’re only getting plain water."


What they observed was exactly as they'd expected – when the sweet neurons were silenced using an injectable drug, the mice couldn't taste anything sweet, but they could still detect bitter flavors. And when the researchers activated the sweet neurons using laser light, the mice tasted sweet flavors, even though they were only drinking plain water. The same thing happened when they stimulated or silenced the bitter brain cells. The team was able to tell what the mice were tasting by their obvious reactions – they licked their lips when they tasted real or simulated sweet flavors, and gagged and looked disgusted when they tasted bitter.



Brain implant lets rats ‘see’ infrared light

Aside from a few animals—like pythons and vampire bats—that can sense infrared light, the world of this particular electromagnetic radiation has been off-limits to most creatures. But now, researchers have engineered rodents to see infrared light by implanting sensors in their visual cortex—a first-ever feat announced at the annual meeting of the Society for Neuroscience.

Before they wired rats to see infrared light, Duke University neuroscientist Miguel Nicolelis and his postdoc Eric Thomson engineered them to feel it. In 2013, they surgically implanted a single infrared-detecting electrode into an area of the rat’s brain that processes touch called the somatosensory cortex. The other end of the sensor, outside the rat’s head, surveyed the environment for infrared light. When it picked up infrared, the sensor sent electrical messages to the rats’ brains that seemed to give them a physical sensation. At first, the rats would groom and rub their whiskers repeatedly whenever the light went on. But after a short while, they stopped fidgeting. They even learned to associate infrared with a reward-based task in which they followed the light to a bowl of water.

In the new experiment, the team inserted three additional electrodes, spaced out equally so that the rats could have 360 degrees of infrared perception. When they were primed to perform the same water-reward task, they learned it in just 4 days, compared with 40 days with the single implant. “Frankly, this was a surprise,” Thomson says. “I thought it would be really confusing for [the rats] to have so much stimulation all over their brain, rather than [at] one location.”

Next, the researchers began redirecting infrared traffic: Instead of the somatosensory cortex, they stuck the electrode into the rats’ visual cortex. And here’s the kicker: Rats receiving “visual” stimulus of infrared learned the same water-reward task in a single day. Thomson speculates that the visual cortex adjusted so well because the wavelength of infrared light is very close to that of visible light.

Monkeys Drive Wheelchairs Using Only Their Thoughts


Neuroscientists at Duke Health have developed a brain-machine interface (BMI) that allows primates to use only their thoughts to navigate a robotic wheelchair. 

 

A computer in the lab of Miguel Nicolelis, M.D., Ph.D., monitors brain signals from a rhesus macaque. The BMI uses signals from hundreds of neurons recorded simultaneously in two regions of the monkeys’ brains that are involved in movement and sensation. As the animals think about moving toward their goal -- in this case, a bowl containing fresh grapes -- computers translate their brain activity into real-time operation of the wheelchair.

 

The interface, described in the March 3 issue of the online journal Scientific Reports, demonstrates the future potential for people with disabilities who have lost most muscle control and mobility due to quadriplegia or ALS, said senior author Miguel Nicolelis, M.D., Ph.D., co-director for the Duke Center for Neuroengineering.

 

“In some severely disabled people, even blinking is not possible,” Nicolelis said. “For them, using a wheelchair or device controlled by noninvasive measures like an EEG (a device that monitors brain waves through electrodes on the scalp) may not be sufficient. We show clearly that if you have intracranial implants, you get better control of a wheelchair than with noninvasive devices.”

 

Scientists began the experiments in 2012, implanting hundreds of hair-thin microfilaments in the premotor and somatosensory regions of the brains of two rhesus macaques. They trained the animals by passively navigating the chair toward their goal, the bowl containing grapes. During this training phase, the scientists recorded the primates’ large-scale electrical brain activity. The researchers then programmed a computer system to translate brain signals into digital motor commands that controlled the movements of the wheelchair.
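
The summary says only that a computer was programmed to translate recorded population activity into digital motor commands. A common way to prototype that kind of mapping is a linear decoder fit by least squares from binned firing rates to translational and rotational velocity; the sketch below shows that generic approach, not the decoder actually used in the Scientific Reports paper, and every shape and name in it is an assumption.

```python
import numpy as np

# Hypothetical training data: binned firing rates from N recorded neurons over T time
# bins, paired with the wheelchair's (translational, rotational) velocity in each bin.
rng = np.random.default_rng(1)
T, N = 2000, 150
rates = rng.poisson(5.0, size=(T, N)).astype(float)
true_weights = rng.standard_normal((N, 2))
velocity = rates @ true_weights + rng.standard_normal((T, 2))   # toy ground truth

# Fit the decoder by least squares: velocity ~ rates @ W
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# At run time, each new vector of firing rates is turned into a velocity command.
new_rates = rng.poisson(5.0, size=(1, N)).astype(float)
command = new_rates @ W
print("translational, rotational command:", command.ravel())
```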

 

As the monkeys learned to control the wheelchair just by thinking, they became more efficient at navigating toward the grapes and completed the trials faster, Nicolelis said. In addition to observing brain signals that corresponded to translational and rotational movement, the Duke team also discovered that primates’ brain signals showed signs they were contemplating their distance to the bowl of grapes.


Now you can learn to fly a plane from expert-pilot brainwave patterns - improvement 33%


You can learn how to improve your novice pilot skills by having your brain zapped with recorded brain patterns of experienced pilots via transcranial direct current stimulation (tDCS), according to researchers at HRL Laboratories.


“We measured the brain activity patterns of six commercial and military pilots, and then transmitted these patterns into novice subjects as they learned to pilot an airplane in a realistic flight simulator,” says Matthew Phillips, PhD.


The study, published in an open-access paper in the February 2016 issue of the journal Frontiers in Human Neuroscience, found that novice pilots who received brain stimulation via electrode-embedded head caps improved their piloting abilities, with a 33 percent increase in skill consistency, compared to those who received sham stimulation. “We measured the average g-force of the plane during the simulated landing and compared it to control subjects who received a mock brain stimulation,” says Phillips.


“Pilot skill development requires a synthesis of multiple cognitive faculties, many of which are enhanced by tDCS and include dexterity, mental arithmetic, cognitive flexibility, visuo-spatial reasoning, and working memory,” the authors note.


The study focused on a working-memory area — the right dorsolateral prefrontal cortex (DLPFC) — and the left motor cortex (M1), using continuous electroencephalography (EEG) to monitor midline frontal theta-band oscillatory brain activity and functional near infrared spectroscopy (fNIRS) to monitor blood oxygenation to infer neuronal activity.


The researchers used the XForce Dream Simulator package from X-Force PC and the X-plane 10 flight simulator software from Laminar Research for flight simulation training. Previous research has demonstrated that tDCS can both help patients more quickly recover from a stroke and boost a healthy person’s creativity; HRL’s new study is one of the first to show that tDCS is effective in accelerating practical learning.


Phillips speculates that the potential to increase learning with brain stimulation may make this form of accelerated learning commonplace. “As we discover more about optimizing, personalizing, and adapting brain stimulation protocols, we’ll likely see these technologies become routine in training and classroom environments,” he says. “It’s possible that brain stimulation could be implemented for classes like drivers’ training, SAT prep, and language learning.”


Faciotopy—A face-feature map with face-like topology is engraved in our brain


The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organization and status as cortical areas with well-defined boundaries remains unclear. Here a team of scientists tests these regions for “faciotopy”, a particular hypothesis about their intrinsic functional organization. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighboring patches represent features that are physically neighboring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience.


Faces have a prototypical configuration of features and are usually perceived in a canonical upright orientation, frequently fixated in particular locations. To test the faciotopy hypothesis, the scientists presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses obtained in V1 were best explained by low-level image properties of the stimuli. The OFA, and to a lesser degree the FFA, showed evidence for faciotopic organization. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distances between the features in a face.
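
One way to make that claim concrete is to compare the pairwise distances between the cortical feature patches with the pairwise distances between the corresponding features on a face, for example with a rank correlation. The sketch below does exactly that on made-up coordinates; it illustrates the logic only and is not the statistical procedure reported in the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical 2D coordinates (arbitrary units): feature positions on a canonical
# face, and the estimated centers of the corresponding cortical patches.
face_xy = np.array([[0.0, 3.0], [2.0, 3.0], [1.0, 2.0], [1.0, 0.0]])    # eyes, nose, mouth
cortex_xy = np.array([[0.1, 2.8], [1.9, 3.2], [1.0, 1.8], [1.2, 0.2]])

rho, p = spearmanr(pdist(face_xy), pdist(cortex_xy))
print(f"rank correlation between the two distance matrices: {rho:.2f} (p = {p:.3f})")
```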


Faciotopy would be the first example of a cortical map reflecting the topology, not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance. It remains to be explored whether there are also other areas with cortical maps reflecting topology of external objects.


WUSTL Team develops wireless, dissolvable sensors to monitor brain


A team of neurosurgeons and engineers has developed wireless brain sensors that monitor intracranial pressure and temperature and then are absorbed by the body, negating the need for surgery to remove the devices. Such implants, developed by scientists at Washington University School of Medicine in St. Louis and engineers at the University of Illinois at Urbana-Champaign, potentially could be used to monitor patients with traumatic brain injuries, but the researchers believe they can build similar absorbable sensors to monitor activity in organ systems throughout the body. Their findings are published online Jan. 18, 2016 in the journal Nature.


"Electronic devices and their biomedical applications are advancing rapidly," said co-first author Rory K. J. Murphy, MD, a neurosurgery resident at Washington University School of Medicine and Barnes-Jewish Hospital in St. Louis. "But a major hurdle has been that implants placed in the body often trigger an immune response, which can be problematic for patients. The benefit of these new devices is that they dissolve over time, so you don't have something in the body for a long time period, increasing the risk of infection, chronic inflammation and even erosion through the skin or the organ in which it's placed. Plus, using resorbable devices negates the need for surgery to retrieve them, which further lessens the risk of infection and further complications." Murphy is most interested in monitoring pressure and temperature in the brains of patients with traumatic brain injury.


About 50,000 people die of such injuries annually in the United States. When patients with such injuries arrive in the hospital, doctors must be able to accurately measure intracranial pressure in the brain and inside the skull because an increase in pressure can lead to further brain injury, and there is no way to reliably estimate pressure levels from brain scans or clinical features in patients.


"However, the devices commonly used today are based on technology from the 1980s," Murphy explained. "They're large, they're unwieldy, and they have wires that connect to monitors in the intensive care unit. They give accurate readings, and they help, but there are ways to make them better."


Singing in the brain: Songbirds sing like human opera singers


A songbirds' vocal muscles work like those of human speakers and singers, finds a study recently published in the Journal of Neuroscience. The research on Bengalese finches showed that each of their vocal muscles can change its function to help produce different parameters of sounds, in a manner similar to that of a trained opera singer.


"Our research suggests that producing really complex song relies on the ability of the songbirds' brains to direct complicated changes in combinations of muscles," says Samuel Sober, a biologist at Emory University and lead author of the study. "In terms of vocal control, the bird brain appears as complicated and wonderful as the human brain."


Pitch, for example, is important to songbird vocalization, but there is no single muscle devoted to controlling it. "They don't just contract one muscle to change pitch," Sober says. "They have to activate a lot of different muscles in concert, and these changes are different for different vocalizations. Depending on what syllable the bird is singing, a particular muscle might increase pitch or decrease pitch."


Previous research has revealed some of the vocal mechanisms within the human "voice box," or larynx. The larynx houses the vocal cords and an array of muscles that help control pitch, amplitude and timbre.


Instead of a larynx, birds have a vocal organ called the syrinx, which holds their vocal cords deeper in their bodies. While humans have one set of vocal cords, a songbird has two sets, enabling it to produce two different sounds simultaneously, in harmony with itself.


"Lots of studies look at brain activity and how it relates to behaviors, but muscles are what translates the brain's output into behavior," Sober says. "We wanted to understand the physics and biomechanics of what a songbird's muscles are doing while singing."


The researchers devised a method involving electromyography (EMG) to measure how the neural activity of the birds activates the production of a particular sound through the flexing of a particular vocal muscle.


The results showed the complex redundancy of the songbird's vocal muscles. "It tells us how complicated the neural computations are to control this really beautiful behavior," Sober says, adding that songbirds have a network of brain regions that non-songbirds do not.


Revolutionary Neuroscience Technique - Optogenetics - Slated for Human Clinical Trials


A technique called optogenetics has transformed neuroscience during the past 10 years by allowing researchers to turn specific neurons on and off in experimental animals. By flipping these neural switches, it has provided clues about which brain pathways are involved in diseases like depression and obsessive-compulsive disorder.


“Optogenetics is not just a flash in the pan,” says neuroscientist Robert Gereau of Washington University in Saint Louis. “It allows us to do experiments that were not doable before. This is a true game changer like few other techniques in science.”


Since the first papers on optogenetics were published in the mid-aughts, some researchers have mused about one day using optogenetics in patients, imagining the possibility of an off-switch for depression, for instance.


The technique, however, would require that a patient submit to a set of highly invasive medical procedures: genetic engineering of neurons to insert molecular switches to activate or switch off cells, along with threading of an optical fiber into the brain to flip those switches.


Spurred on by a set of technical advances, optogenetics pioneer Karl Deisseroth, together with other Stanford University researchers, has formed a company to pursue optogenetics trials in patients within the next several years—one of several start-ups that are now contemplating clinical trials of the technique.


Circuit Therapeutics, founded in 2010, is moving forward with specific plans to treat neurological diseases. It also partners with pharmaceutical companies to help them use optogenetics in animal research to develop novel drug targets for human diseases. Circuit wants to begin clinical trials for optogenetics to treat chronic pain, a therapy that would be less invasive than applications requiring implantation deep inside the brain. Neurons affected by chronic pain are relatively accessible, because they reside in and just outside the spinal cord, an easier target than the brain.


Even nerve endings in the skin might be targeted, making them much easier to reach. “In animal models it works incredibly well,” says Scott Delp, a neuroscientist at Stanford, who collaborates with Deisseroth. The firm is also working to develop treatments for Parkinson’s and other neurological disorders.


Interest in optogenetics and closely related therapies in patients is growing. RetroSense Therapeutics, a Michigan-based company, has reported plans to soon begin human trials of optogenetics for a genetic condition that causes blindness. The new technology relies on opsins, a type of ion channel consisting of proteins that conduct neurons’ electrical signaling. Neurons contain hundreds of different types of ion channels but opsins open in response to light. Some opsins are found in the human retina but those used in optogenetics are derived from algae and other organisms.


The first opsins used in optogenetics, called channelrhodopsins, open to allow positively charged ions to enter the cell when activated by a flash of blue light, which causes the neuron to fire an electrical impulse. Other opsin proteins pass inhibitory, negatively charged ions in response to light, making it possible to silence neurons as well. Researchers have widely expanded the arsenal of available opsins with genetic engineering, for example making ones that stay open in response to a short burst of light.


Rhythmic expression of many genes falls out of sync in older human brains


Circadian cycles shift as humans get older—sleep and body temperature patterns change, for instance. The rhythmic cycling of numerous genes’ expression in the brain also shifts as people age, researchers reported this week (December 21) in PNAS. The levels of many transcripts became less robust in their daily ups and downs, while another set of mRNAs emerged with a rhythmicity not seen in younger counterparts.


“You can imagine that things actually get weaker with age, but that things can get stronger with age is really exciting,” Doris Kretzschmar, a neuroscientist at the Oregon Institute of Occupational Health Sciences who was not involved in the study, told NPR’s Shots.


The researchers, led by Colleen McClung at the University of Pittsburgh School of Medicine in Pennsylvania, collected cortical tissue from people whose hour of death was known. Comparing gene expression levels between 31 subjects under 40 years old and 37 subjects over age 60, the researchers found 1,063 transcripts in one part of the prefrontal cortex that lost rhythmicity altogether in the older group. In this same part of the brain, 434 genes gained a rhythm that was not seen among younger individuals. In another part of the prefrontal cortex, 588 genes lost their daily cycling with age, while 533 became rhythmic.
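
Rhythmicity in post-mortem expression data of this kind is commonly assessed by fitting a 24-hour sinusoid to each transcript's level as a function of the donor's time of death and asking whether the fitted amplitude is substantial. The sketch below shows that generic cosinor-style fit; it illustrates the idea only and is not the statistical pipeline used in the PNAS study.

```python
import numpy as np

def fit_24h_rhythm(time_of_death_h, expression):
    """Least-squares fit of expression ~ mesor + a*cos(w*t) + b*sin(w*t), w = 2*pi/24.
    Returns (mesor, amplitude, peak_hour). Illustrative only."""
    w = 2.0 * np.pi / 24.0
    X = np.column_stack([np.ones_like(time_of_death_h),
                         np.cos(w * time_of_death_h),
                         np.sin(w * time_of_death_h)])
    mesor, a, b = np.linalg.lstsq(X, expression, rcond=None)[0]
    amplitude = np.hypot(a, b)
    peak_hour = (np.arctan2(b, a) / w) % 24.0
    return mesor, amplitude, peak_hour

# Toy transcript peaking around hour 8 with amplitude 2, sampled at random death times
rng = np.random.default_rng(2)
t = rng.uniform(0.0, 24.0, 60)
expr = 10.0 + 2.0 * np.cos(2.0 * np.pi / 24.0 * (t - 8.0)) + rng.normal(0.0, 0.3, 60)
print(fit_24h_rhythm(t, expr))   # roughly (10, 2, 8)
```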


It’s not clear what these changes in expression cycles might mean for health and aging. “Since depression is associated with accelerated molecular aging, and with disruptions in daily routines, these results also may shed light on molecular changes occurring in adults with depression,” coauthor Etienne Sibille of the University of Toronto said in a press release.


Study links epigenetic processes to the development of brain circuitry


From before birth through childhood, connections form between neurons in the brain, ultimately making us who we are. So far, scientists have gained a relatively good understanding of how neural circuits become established, but they know less about the genetic control at play during this crucial developmental process.


Now, a team led by researchers at The Rockefeller University has described for the first time the so-called epigenetic mechanisms underlying the development of the cerebellum, the portion of the brain that allows us to learn and execute complex movements. The term epigenetics refers to changes to DNA that influence how and when genes are expressed.


“Until now, nobody knew what genes control the steps needed to make the circuitry that in turn controls the behavior the cerebellum is famous for,” says Mary E. Hatten, Frederick P. Rose Professor and head of the Laboratory of Developmental Neurobiology. “Our study shows that pivotal changes in the levels of all epigenetic pathway genes are needed to form the circuits.” These epigenetic pathway genes modify chromatin, which is DNA packaged with protein. Alterations to chromatin are an important type of epigenetic change because they affect which genes are available for translation into proteins.


Further investigation revealed that one of these epigenetic regulators was specifically responsible for processes crucial to forming connections between these neurons and other parts of the brain and for the expression of ion channels that transmit signals across synapses, which are gaps between neurons.


Two developments in technology made the current study possible. The first, TRAP, was developed at Rockefeller. It enables researchers to map gene expression in specific types of neurons. Hatten’s team applied this method to identify the genes expressed in granule cells, one of the two cell types that make up the cerebellum, in the mouse brain from birth through adulthood. They focused on changes in gene expression 12 to 21 days after birth, since this is the main period during which the circuitry of the cerebellum is formed.


The second key method used in the study is metagene analysis, a mathematical model developed at the Broad Institute of MIT and Harvard that allows researchers to study large sets of genes and see changes in patterns that would be too difficult to interpret by looking at individual genes. Three investigators from Broad collaborated on the current study. “Using this analytical tool, we showed that during this crucial period of time in development, all the pathways that control the remodeling of chromatin changed,” Hatten says.


Reference:

Xiaodong Zhu, David Girardo, Eve-Ellen Govek, Keisha John, Marian Mellén, Pablo Tamayo, Jill P. Mesirov, and Mary E. Hatten. Role of Tet1/3 Genes and Chromatin Remodeling Genes in Cerebellar Circuit Formation. Neuron 89, 1–13.

Scooped by Dr. Stefan Gruenwald
Scoop.it!

How is a developing brain assembled?

How is a developing brain assembled? | Amazing Science | Scoop.it
New open-source software that can help track the embryonic development and movement of neuronal cells throughout the body of the worm is now available to scientists. The software is described in a paper published December 3rd in the open-access journal eLife by researchers at the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and the Center for Information Technology (CIT), along with Memorial Sloan-Kettering Institute, New York City; Yale University, New Haven, Connecticut; Zhejiang University, China; and the University of Connecticut Health Center, Farmington. NIBIB is part of the National Institutes of Health.

As far as biologists have come in understanding the brain, much remains to be revealed. One significant challenge is working out how the brain's complex neuronal structures, made up of billions of cells in humans, are formed. As with many biological challenges, researchers are first examining this question in simpler organisms, such as worms.

Although scientists have identified a number of important proteins that determine how neurons navigate during brain formation, it is largely unknown how all of these proteins interact in a living organism. Model animals, despite their differences from humans, have already revealed much about human physiology because they are much simpler and easier to understand. In this case, researchers chose Caenorhabditis elegans (C. elegans) because it has only 302 neurons, 222 of which form while the worm is still an embryo. Some of these neurons go to the worm's nerve ring (its brain), while others spread along the ventral nerve cord, which is broadly analogous to the spinal cord in humans. The worm even has its own versions of many of the same proteins used to direct brain formation in more complex organisms such as flies, mice, or humans.

"Understanding why and how neurons form and the path they take to reach their final destination could one day give us valuable information about how proteins and other molecular factors interact during neuronal development," said Hari Shroff, Ph.D., head of the NIBIB research team. "We don't yet understand neurodevelopment even in the context of the humble worm, but we're using it as a simple model of how these factors work together to drive the development of the worm brain and neuronal structure. We're hoping that by doing so, some of the lessons will translate all the way up to humans."


However, following neurons as they travel through the worm during its embryonic development is not as simple as it might seem. The first challenge was to create new microscopes that could record the embryogenesis of these worms without damaging them through too much light exposure while still getting the resolution needed to clearly see individual cells. Shroff and his team at NIBIB, in collaboration with Daniel Colon-Ramos at Yale University and Zhirong Bao at Sloan-Kettering, tackled this problem by developing new microscopes that improved the speed and resolution at which they could image worm embryonic development.
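The paper's software itself is not reproduced here, but the core computational task it addresses, linking each cell seen in one imaging frame to the same cell in the next frame, can be illustrated with a toy nearest-neighbor linker. The centroid coordinates, distance cutoff, and function name below are hypothetical placeholders, not the published tool's algorithm.

```python
# Minimal sketch of the core tracking problem such software has to solve:
# linking each labeled cell nucleus in one imaging frame to its most likely
# counterpart in the next frame. Real tools use far more robust models
# (lineage constraints, division detection); this greedy nearest-neighbor
# linker on hypothetical 3-D centroids is only an illustration.
import numpy as np

def link_frames(centroids_t, centroids_t1, max_dist=5.0):
    """Return a list of (index_t, index_t1) pairs linking cells between frames."""
    links = []
    taken = set()
    for i, c in enumerate(centroids_t):
        d = np.linalg.norm(centroids_t1 - c, axis=1)   # distance to every cell in next frame
        for j in np.argsort(d):
            if d[j] > max_dist:
                break                                  # nothing close enough: cell lost or divided
            if int(j) not in taken:
                links.append((i, int(j)))
                taken.add(int(j))
                break
    return links

# Hypothetical centroids (x, y, z) in micrometers for two consecutive frames.
frame_a = np.array([[10.0, 5.0, 2.0], [20.0, 8.0, 3.0]])
frame_b = np.array([[10.5, 5.2, 2.1], [19.4, 8.3, 3.2], [30.0, 1.0, 0.0]])
print(link_frames(frame_a, frame_b))   # -> [(0, 0), (1, 1)]
```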


Read more at: http://tinyurl.com/ngy9nvt

Scooped by Dr. Stefan Gruenwald
Scoop.it!

First evidence of a generalized active network for cognitive brain functions found

First evidence of a generalized active network for cognitive brain functions found | Amazing Science | Scoop.it

Norwegian researchers have recently found evidence of a generalized active network for cognitive functions of the brain.


“I experienced a kind of moment that may be more common for theoretical physicists: the idea that something just has to be there, even though you cannot see it,” says neuroscientist Kenneth Hugdahl from the Bergen fMRI Group in an interview with the University of Bergen’s newspaper På Høyden.


Initially, Hugdahl thought he was simply mistaken. But while preparing a lecture, he sat with nine fMRI images in front of him and suddenly noticed that the active red and yellow regions in the brain maps appeared in almost the same places in all of them. The neuroscientist had to ask himself: could there be a network in the brain that overlapped across all cognitive functions?
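The overlap Hugdahl noticed across his nine task maps can be made explicit with a simple conjunction analysis: threshold each task's activation map and keep only the voxels that survive in every one. The sketch below does this on synthetic data; the grid size, z threshold, and injected "active" region are placeholders rather than the Bergen group's actual data or method.

```python
# Minimal sketch of a conjunction analysis: given activation maps from several
# different cognitive tasks, keep only the voxels that are active in all of them.
# The threshold and the random "maps" below are placeholders, not real fMRI data.
import numpy as np

rng = np.random.default_rng(1)
n_tasks, shape = 9, (16, 16, 8)                       # nine task maps, toy voxel grid
z_maps = rng.normal(0, 1, size=(n_tasks, *shape))
z_maps[:, 6:10, 6:10, 3:5] += 4.0                     # a region driven by every task

threshold = 3.1                                       # typical z threshold, illustrative only
active = z_maps > threshold                           # boolean map per task
conjunction = active.all(axis=0)                      # voxels active in ALL tasks

print("Voxels active in every task:", int(conjunction.sum()))
```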


The article, “On the existence of a generalized non-specific task-dependent network,” was published in the online journal Frontiers in Human Neuroscience.


Although the idea has been mentioned before, no brain researchers had previously been able to demonstrate empirically that there is a cognitive network “for everything”. The idea of a single network that underlies all cognitive functions, a kind of common wiring diagram for the brain, is therefore quite revolutionary.


Traditionally, this kind of brain research has focused on localizing individual cognitive functions, such as problem solving, to specific areas of the brain. Hugdahl and his colleagues' article could be the first step in a new direction, toward something that could become the neuroscientific version of the “theory of everything” – one single explanation for all active cognitive functions.

Rescooped by Dr. Stefan Gruenwald from Research Workshop
Scoop.it!

Unsupervised, Mobile and Wireless Brain–Computer Interfaces on the Horizon

Unsupervised, Mobile and Wireless Brain–Computer Interfaces on the Horizon | Amazing Science | Scoop.it
Juliano Pinto, a 29-year-old paraplegic, kicked off the 2014 World Cup in São Paulo with a robotic exoskeleton suit that he wore and controlled with his mind. The event was broadcast internationally and served as a symbol of the exciting possibilities of brain-controlled machines. Over the last few decades, research into brain–computer interfaces (BCIs), which allow direct communication between the brain and an external device such as a computer or prosthetic, has skyrocketed. Although these new developments are exciting, there are still major hurdles to overcome before people can easily use these devices as a part of daily life.

Until now, such devices have largely been proof-of-concept demonstrations of what BCIs are capable of. Almost all of them require technicians to manage, and they rely on external wires that tether individuals to large computers. New research, conducted by members of the BrainGate group, a consortium that includes neuroscientists, engineers and clinicians, has made strides toward overcoming some of these obstacles. “Our team is focused on developing what we hope will be an intuitive, always-available brain–computer interface that can be used 24 hours a day, seven days a week, that works with the same amount of subconscious thought that somebody who is able-bodied might use to pick up a coffee cup or move a mouse,” says Leigh Hochberg, a neuroengineer at Brown University who was involved in the research. The researchers also want these devices to be small, wireless, and usable without the help of a caregiver.
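At the heart of any such interface is a decoder that turns recorded neural activity into a device command. BrainGate's own decoders (for example, Kalman-filter-based velocity decoders) are not reproduced here; the sketch below uses synthetic firing rates and a plain ridge regression purely to illustrate the idea of a learned mapping from brain signals to cursor velocity.

```python
# Minimal sketch of the decoding step at the heart of a BCI: map a vector of
# neural firing rates to an intended cursor velocity. Real systems use more
# sophisticated, user-calibrated decoders; this synthetic ridge-regression
# example only illustrates the idea of a learned brain-to-device mapping.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_samples, n_channels = 2000, 96                 # e.g. a 96-channel microelectrode array
true_mapping = rng.normal(0, 1, (n_channels, 2)) # hidden linear relation to (vx, vy)

firing_rates = rng.poisson(5, (n_samples, n_channels)).astype(float)
velocity = firing_rates @ true_mapping + rng.normal(0, 5, (n_samples, 2))

decoder = Ridge(alpha=1.0).fit(firing_rates, velocity)
print("Decoding R^2 on training data:", round(decoder.score(firing_rates, velocity), 3))

# At run time the decoder turns each new snapshot of firing rates into a cursor command.
new_rates = rng.poisson(5, (1, n_channels)).astype(float)
vx, vy = decoder.predict(new_rates)[0]
print(f"Predicted cursor velocity: ({vx:.2f}, {vy:.2f})")
```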

Via Wildcat2030, Jocelyn Stoller
Lucile Debethune's curator insight, November 22, 2015 12:48 PM

An interesting approach to the human-machine interface; the BrainGate group brings some very good ideas to this subject. One to watch.

 

Scooped by Dr. Stefan Gruenwald
Scoop.it!

Neuroscientists identify brain region that holds objects in memory until they are spotted

Neuroscientists identify brain region that holds objects in memory until they are spotted | Amazing Science | Scoop.it



Imagine you are looking for your wallet on a cluttered desk. As you scan the area, you hold in your mind a mental picture of what your wallet looks like.


MIT neuroscientists have now identified a brain region that stores this type of visual representation during a search. The researchers also found that this region sends signals to the parts of the brain that control eye movements, telling individuals where to look next.


This region, known as the ventral pre-arcuate (VPA), is critical for what the researchers call “feature attention,” which allows the brain to seek objects based on their specific properties. Most previous studies of how the brain pays attention have investigated a different type of attention known as spatial attention — that is, what happens when the brain focuses on a certain location.


“The way that people go about their lives most of the time, they don’t know where things are in advance. They’re paying attention to things based on their features,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “In the morning you’re trying to find your car keys so you can go to work. How do you do that? You don’t look at every pixel in your house. You have to use your knowledge of what your car keys look like.”
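One way to picture feature attention computationally is as template matching: a remembered feature vector for the target is compared against the features at each candidate location, and the best match becomes the next place to look. The sketch below does exactly that on synthetic feature vectors; it is a toy illustration of the concept, not a model of VPA or of the study's data.

```python
# Minimal sketch of feature-based search: hold a feature "template" of the
# target in memory, compare it with the features at each candidate location in
# the scene, and direct the next eye movement to the best match. The feature
# vectors here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_locations, n_features = 25, 8                  # a 5x5 grid of candidate locations
scene = rng.normal(0, 1, (n_locations, n_features))

target_template = rng.normal(0, 1, n_features)   # remembered features of the sought object
scene[17] = target_template + rng.normal(0, 0.1, n_features)   # hide the target at location 17

# Similarity of the template to each location's features (cosine similarity).
norms = np.linalg.norm(scene, axis=1) * np.linalg.norm(target_template)
similarity = scene @ target_template / norms
next_fixation = int(np.argmax(similarity))

print("Look next at location:", next_fixation)   # should recover location 17
```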
