Amazing Science
Scooped by Dr. Stefan Gruenwald!

Chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’

Technique to make tissue transparent offers three-dimensional view of neural networks.


A chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’ — the push to map the brain’s fiendishly complicated wiring. Scientists could use the technique to view large networks of neurons with unprecedented ease and accuracy. The technology also opens up new research avenues for old brains preserved from patients and healthy donors.


“This is probably one of the most important advances for doing neuroanatomy in decades,” says Thomas Insel, director of the US National Institute of Mental Health in Bethesda, Maryland, which funded part of the work. Existing technology allows scientists to see neurons and their connections in microscopic detail — but only across tiny slivers of tissue. Researchers must reconstruct three-dimensional data from images of these thin slices. Aligning hundreds or even thousands of these snapshots to map long-range projections of nerve cells is laborious and error-prone, rendering fine-grained analysis of whole brains practically impossible.


The new method instead allows researchers to see directly into optically transparent whole brains or thick blocks of brain tissue. Called CLARITY, it was devised by Karl Deisseroth and his team at Stanford University in California. “You can get right down to the fine structure of the system while not losing the big picture,” says Deisseroth, who adds that his group is in the process of rendering an entire human brain transparent.


The technique, published online in Nature on 10 April, turns the brain transparent using the detergent SDS, which strips away lipids that normally block the passage of light (K. Chung et al., Nature, 2013). Other groups have tried to clarify brains in the past, but many lipid-extraction techniques dissolve proteins and thus make it harder to identify different types of neurons. Deisseroth’s group solved this problem by first infusing the brain with acrylamide, which binds proteins, nucleic acids and other biomolecules. When the acrylamide is heated, it polymerizes and forms a tissue-wide mesh that secures the molecules. The resulting brain–hydrogel hybrid showed only 8% protein loss after lipid extraction, compared to 41% with existing methods.


Applying CLARITY to whole mouse brains, the researchers viewed fluorescently labelled neurons in areas ranging from outer layers of the cortex to deep structures such as the thalamus. They also traced individual nerve fibres through 0.5-millimeter-thick slabs of formalin-preserved autopsied human brain — orders of magnitude thicker than slices currently imaged.

Scooped by Dr. Stefan Gruenwald!

Japanese scientists can 'read' dreams, study claims


Scientists in Japan said Friday they had found a way to "read" people's dreams, using MRI scanners to unlock some of the secrets of the unconscious mind.

Researchers have managed what they said was "the world's first decoding" of night-time visions, a subject of speculation that has captivated humanity since ancient times.

In the study, published in the journal Science, researchers at the ATR Computational Neuroscience Laboratories, in Kyoto, western Japan, used magnetic resonance imaging (MRI) scans to locate exactly which part of the brain was active during the first moments of sleep. The scientists then woke up the dreamers and asked them what images they had seen, a process that was repeated 200 times.

These answers were compared with the brain maps produced by the MRI scanner, the researchers said, and from the results they built a database. On subsequent attempts they were able to predict what images the volunteers had seen with 60 percent accuracy, rising to more than 70 percent for around 15 specific items including men, words and books, they said.
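The pipeline described above (build a per-subject database pairing scans with reported imagery, then match new scans against it) can be sketched as a toy nearest-template classifier. Everything below is synthetic and invented for illustration; the actual study used fMRI voxel patterns and machine-learning decoders, not this simple scheme:

```python
import random

random.seed(0)

CATEGORIES = ["man", "word", "book"]
DIM = 20  # toy "voxel" count; real fMRI decoders use thousands of voxels

# Each category gets a characteristic activity pattern; an observation is
# that pattern plus noise -- a stand-in for one scan taken just before waking.
prototypes = {c: [random.gauss(0, 1) for _ in range(DIM)] for c in CATEGORIES}

def observe(category):
    """One noisy 'scan' for a given dream-content category."""
    return [x + random.gauss(0, 0.8) for x in prototypes[category]]

# Build the database: average repeated observations into per-category templates,
# analogous to the repeated wake-and-report cycles in the study.
templates = {
    c: [sum(col) / 50 for col in zip(*(observe(c) for _ in range(50)))]
    for c in CATEGORIES
}

def decode(scan):
    """Predict the category whose template is closest to the scan."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CATEGORIES, key=lambda c: dist(scan, templates[c]))

# Evaluate on fresh observations.
trials = [(c, decode(observe(c))) for c in CATEGORIES for _ in range(30)]
accuracy = sum(true == pred for true, pred in trials) / len(trials)
print(f"decoding accuracy: {accuracy:.0%}")
```

With well-separated synthetic patterns the toy decoder scores far above the one-in-three chance level; real dream decoding is much harder, hence the 60–70 percent figures reported.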

"We have concluded that we successfully decoded some kinds of dreams with a distinctively high success rate," said Yukiyasu Kamitani, a senior researcher at the laboratories and head of the study team. "Dreams have fascinated people since ancient times, but their function and meaning has remained closed," Kamitani told AFP. "I believe this result was a key step towards reading dreams more precisely."

His team is now trying to predict other dream experiences such as smells, colours and emotions, as well as entire stories in people's dreams. "We would like to introduce a more accurate method so that we can work on a way of visualising dreams," he said.

Kamitani, however, admits that there is still a long way to go before they are anywhere near understanding a whole dream. He said the decoding patterns differ between individuals, and the database they have developed cannot be applied generally; rather, it has to be generated for each person.

The experiment also only used the images the subjects were seeing right before they were woken up. Deep sleep, where subjects have more vivid dreams, remains a mystery. "There are still a lot of things that are unknown," he added.

Kamitani's experiment is the latest in a government-led brain study programme aimed at applying the science to medical and welfare services, government officials said.

Scooped by Dr. Stefan Gruenwald!

Neuroscientists Discover New Molecular Mechanism for Long-Term Memory Formation


The team investigated the role of a gene called Baf53b, also known as ACTL6B or actin-like 6B, in long-term memory formation. Baf53b is one of several proteins making up a molecular complex called nBAF.


Mutations in the proteins of the nBAF complex have been linked to several intellectual disorders, including Coffin-Siris syndrome, Nicolaides-Baraitser syndrome and sporadic autism. One of the key questions the team addressed is how mutations in components of the nBAF complex lead to cognitive impairments.


The team used mice bred with mutations in Baf53b. While this genetic modification did not affect the mice’s ability to learn, it did notably inhibit long-term memories from forming and severely impaired synaptic function.

“These findings present a whole new way to look at how long-term memories form,” said Prof Wood, who co-authored a paper in Nature Neuroscience.


“They also provide a mechanism by which mutations in the proteins of the nBAF complex may underlie the development of intellectual disability disorders characterized by significant cognitive impairments.”


How does this mechanism regulate gene expression required for long-term memory formation? Most genes are tightly packaged by a chromatin structure – chromatin being what compacts DNA so that it fits inside the nucleus of a cell. That compaction mechanism represses gene expression. Baf53b, and the nBAF complex, physically open the chromatin structure so specific genes required for long-term memory formation are turned on. The mutated forms of Baf53b did not allow for this necessary gene expression.


“The results from this study reveal a powerful new mechanism that increases our understanding of how genes are regulated for memory formation,” Prof Wood said.


The researchers believe the discovery of this mechanism adds another piece to the puzzle in the ongoing effort to uncover the mysteries of memory and, potentially, certain intellectual disabilities.


“Our next step is to identify the key genes the nBAF complex regulates. With that information, we can begin to understand what can go wrong in intellectual disability disorders, which paves a path toward possible therapeutics.”

Scooped by Dr. Stefan Gruenwald!

Bees Buzz Each Other through Changes in the Electric Field Surrounding Them


The electric fields that build up on honey bees as they fly, flutter their wings, or rub body parts together may allow the insects to talk to each other, a new study suggests. Tests show that the electric fields, which can be quite strong, deflect the bees' antennae, which, in turn, provide signals to the brain through specialized organs at their bases.


Scientists have long known that flying insects gain an electrical charge when they buzz around. That charge, typically positive, accumulates as the wings zip through the air—much as electrical charge accumulates on a person shuffling across a carpet. And because an insect's exoskeleton has a waxy surface that acts as an electrical insulator, that charge isn't easily dissipated, even when the insect lands on objects, says Randolf Menzel, a neurobiologist at the Free University of Berlin in Germany.


Although researchers have suspected for decades that such electrical fields aid pollination by helping pollen grains stick to insects visiting a flower, only more recently have they investigated how insects sense and respond to such fields. Just last month, for example, a team reported that bumblebees may use electrical fields to distinguish flowers recently visited by other insects from those that may still hold lucrative stores of nectar and pollen. A flower that a bee had recently landed on might have an altered electrical field, the researchers speculated.


Now, in a series of lab tests, Menzel and colleagues have studied how honey bees respond to electrical fields. In experiments conducted in small chambers with conductive walls that isolated the bees from external electrical fields, the researchers showed that a small, electrically charged wand brought close to a honey bee can cause its antennae to bend. Other tests, using antennae removed from honey bees, indicated that electrically induced deflections triggered reactions in a group of sensory cells, called the Johnston's organ, located near the base of the antennae. In yet other experiments, honey bees learned that a sugary reward was available when they detected a particular pattern of electrical field.

Rescooped by Dr. Stefan Gruenwald from Science News!

Which Came First, the Head or the Brain?


The sea anemone, a cnidarian, has no brain. It does have a nervous system, and its body has a clear axis, with a mouth on one side and a basal disk on the other. However, there is no organized collection of neurons comparable to the kind of brain found in bilaterians, animals that have both a bilateral symmetry and a top and bottom. Most animals except sponges, cnidarians, and a few other phyla are bilaterians. So an interesting evolutionary question is, which came first, the head or the brain? Do animals such as sea anemones, which lack a brain, have something akin to a head?


Chiara Sinigaglia and colleagues reported recently that at least some developmental pathways seen in cnidarians share a common lineage with head and brain development in bilaterians. It might seem intuitive to expect to find genes involved in brain development around the mouth of the anemone, and previous work has suggested that the oral region in cnidarians corresponds to the head region of bilaterians. However, there has been debate over whether the oral or aboral pole of cnidarians is analogous to the anterior pole of bilaterians. At the start of its life cycle, the animal exists as a free-swimming planula, which then attaches to a surface and develops into the adult anemone. The free-swimming phase carries an apical tuft, a sensory structure at the front of the swimming animal's body. The apical tuft is the part that attaches and becomes the aboral pole, the part distal from the mouth, of the adult anemone.


To test whether genetic expression in the aboral pole of cnidarians does in fact resemble the head patterning seen in bilaterians, the researchers analyzed gene expression in Nematostella vectensis, a sea anemone found in estuaries and bays. They focused on the six3 and FoxQ2 transcription factors, as these genes are known to regulate development of the anterior-posterior axis in bilaterian species. six3 knockout mice, for example, fail to develop a forebrain, and in humans, six3 is known to regulate the development of the forebrain and eyes.

The N. vectensis genome contains one gene from the six3/6 group and four foxQ2 genes. Sinigaglia and colleagues found that NvSix3/6 and one of the foxQ2 genes, NvFoxQ2a, were expressed predominantly at the aboral pole of the developing cnidarian but, after gastrulation, were excluded from a small spot in that region (NvSix3/6 was also expressed in a small number of other cells of the planula that resembled neurons). Because of this, the authors call NvSix3/6 and NvFoxQ2a “ring genes”, and genes that are then expressed in that spot “spot genes.” The spot then develops into the apical tuft.

Through knockdown and rescue experiments, the researchers demonstrated that NvSix3/6 is required for the development of the aboral region; without it, the expression of spot genes is reduced or eliminated and the apical tuft of the planula doesn't form. Development of the region distal from the cnidarian mouth thus appears to parallel the development of the bilaterian head.

This research demonstrates that at least a subset of the genes that drive head and brain formation in bilaterians are also differentially expressed in the aboral region of the sea anemone. The expression patterns are not identical to those in all bilaterians; however, the similarities suggest that the patterns of gene expression arose in an ancestor common to bilaterians and cnidarians, and that the process was then modified in bilaterians to produce a brain. So to answer the evolutionary question posed above, it seems that the developmental module that produces a head came first.

Via Sakis Koukouvis
Scooped by Dr. Stefan Gruenwald!

Switching night vision on or off


Neurobiologists at the Friedrich Miescher Institute have dissected a mechanism in the retina that facilitates our ability to see both in the dark and in the light. They identified a cellular switch that activates distinct neuronal circuits at a defined light level. The switch cells of the retina act quickly and reliably to turn on and off computations suited specifically for vision at low and high light levels, facilitating the transition from night to day vision. The scientists have published their results online in Neuron.


"It was fascinating to see how modern neurobiological methods allowed us to answer a question about vision that has been controversially discussed for the last 50 years", said Karl Farrow, postdoctoral fellow in Botond Roska's group at the Friedrich Miescher Institute for Biomedical Research. Since the late 1950s, scientists have debated how the retina handles the different visual processes at low and high light intensities, in starlight and in daylight. Farrow and his colleagues have now identified a cellular switch in the retina that controls perception in these two settings.


At first glance, everything seems clear. The interplay of two photoreceptor types in the retina, the rods and the cones, allows us to see across a wide range of light intensities. The rods are highly sensitive and spring into action in the dark; the cones are activated during the day and, in humans, come in three varieties, allowing us to see color. The rods help us detect objects during the night, while the cones allow us to discriminate the fine details of those objects during the day. The plethora of signals originating from the photoreceptors is funneled into a system of only about 20 neuronal channels that transport information to the brain. These relay stations are the roughly 20 types of ganglion cells in the retina. How they manage the transition from light to dark and enable vision across the different light regimes has remained unclear.


In the retina several cell layers are stacked on top of each other. The photoreceptors are the first to be activated by light; they relay the information to bipolar cells, which in turn activate ganglion cells. The different types of ganglion cells take on distinct tasks during vision. These ganglion cells are embedded in a mesh of amacrine cells that modulate their activity. "Here is where our new genetic tools proved very helpful," said Farrow, "because they allowed us to look at individual ganglion cell types and to specifically measure their activities at different light intensities." Farrow and colleagues could thus show that the activity of one particular type of ganglion cell, called PV1, is modulated like a switch by amacrine cells. The amacrine cells inhibit the ganglion cell strongly at high light intensities and weakly at low ambient light levels. This switch is abrupt and reversible, and it occurs at the light intensities where cones start to be activated. "We were surprised to see how fast this switch occurs and how reliably we were able to switch between the two states at defined light intensities", comments Farrow.
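The switch described above can be caricatured as a hard threshold on ambient light: weak amacrine inhibition of PV1 below the cone-activation level, strong inhibition above it. The threshold and inhibition values below are invented placeholders, not the paper's measurements:

```python
def pv1_inhibition(light_intensity, cone_threshold=1.0):
    """Toy model of amacrine-cell inhibition of the PV1 ganglion cell:
    inhibition jumps from weak to strong once ambient light crosses the
    (assumed) intensity at which cones become active."""
    return 0.9 if light_intensity >= cone_threshold else 0.1

# The switch is abrupt and reversible: sweeping the ambient intensity up
# and then back down toggles cleanly between the two states.
sweep = [0.01, 0.5, 1.5, 10.0, 0.5]
states = ["day" if pv1_inhibition(i) > 0.5 else "night" for i in sweep]
print(states)  # ['night', 'night', 'day', 'day', 'night']
```

The step function captures the "abrupt and reversible" behavior reported; the real circuit presumably also shows some hysteresis and noise near threshold, which this sketch omits.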

Rescooped by Dr. Stefan Gruenwald from Neuroscience_technics!

Whole brain cellular-level activity mapping in a second


It is now possible to map the activity of nearly all the neurons in a vertebrate brain at cellular resolution. What does this mean for neuroscience research and projects like the Brain Activity Map proposal?

In an Article in Nature Methods, Misha Ahrens and Philipp Keller from the HHMI’s Janelia Farm Research Campus used high-speed light sheet microscopy to image the activity of 80% of the neurons in the brain of a fish larva at speeds of a whole brain every 1.3 seconds. This represents—to our knowledge—the first technology that achieves whole brain imaging of a vertebrate brain at cellular resolution with speeds that approximate neural activity patterns and behavior.



Interestingly, the paper comes out at a time when much is being discussed and written about mapping brain activity at the cellular level. This is one of the main proposals of the Brain Activity Map—a project that is being discussed at the White House and could be NIH’s ‘big science’ project for the next 10-15 years. [Just for clarity, the authors of this work are not formally associated with the BAM proposal].


The details of BAM’s exact goals and a clear roadmap and timeline to achieve them have yet to be presented, but from what its proponents have described in a recent Science paper the main aspiration of the project is to improve our understanding of how whole neuronal circuits work at the cellular level. The project seeks to monitor the activity of whole circuits as well as manipulate them to study their functional role. To reach these goals, first and foremost one must have technology capable of measuring the activity of individual neurons throughout the entire brain in a way that can discriminate individual circuits. The most obvious way to do this is by imaging the activity as it is occurring.


With improvements in the speed and resolution of existing microscopy setups and in the probes for monitoring activity, exhaustive imaging of neuronal function across a small transparent organism was bound to be possible—as this study has now shown.


The study has also made interesting discoveries. The authors saw correlated activity patterns measured at the cellular level that spanned large areas of the brain—pointing to the existence of broadly distributed functional circuits. The next steps will be to determine the causal role that these circuits play in behavior—something that will require improvements in the methods for 3D optogenetics. Obtaining the detailed anatomical map of these circuits will also be key to understanding the brain’s organization at its deepest level.

Via Julien Hering, PhD
Scooped by Dr. Stefan Gruenwald!

Whole brain cellular-level activity mapping in one second


Neuroscientists at Howard Hughes Medical Institute have mapped the activity of nearly all the neurons in a vertebrate brain at cellular resolution, with significant implications for neuroscience research and projects like the proposed Brain Activity Map (BAM).


Fast volumetric imaging of the larval zebrafish brain with light-sheet microscopy (credit: Misha B Ahrens, Philipp J Keller/Nature Methods)

The researchers used high-speed light sheet microscopy to image the activity of 80% of the neurons in the brain (which is composed of ~100,000 neurons) of a fish larva at 0.8 Hz (an image every 1.3 seconds), with single-cell resolution.
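The reported figures imply a striking measurement throughput; a quick back-of-envelope check, using only the numbers quoted above:

```python
# Reported figures: ~100,000 neurons in the larval zebrafish brain,
# 80% of them imaged, one whole-brain volume every 1.3 seconds.
total_neurons = 100_000
fraction_imaged = 0.80
volume_period_s = 1.3

neurons_per_volume = int(total_neurons * fraction_imaged)
neurons_per_second = neurons_per_volume / volume_period_s
print(f"{neurons_per_volume:,} neurons per volume, "
      f"about {neurons_per_second:,.0f} single-cell measurements per second")
```

That is on the order of 60,000 single-neuron activity measurements per second, which is why the authors can describe the method as approximating the timescales of neural activity and behavior across the whole brain.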


This represents the first technology that achieves whole brain imaging of a vertebrate brain at cellular resolution with speeds that approximate neural activity patterns and behavior, as the Nature Methods methagora blog noted.

The authors saw correlated activity patterns at the cellular level that spanned large areas of the brain — pointing to the existence of broadly distributed functional circuits.


The next steps will be to determine the causal role that these circuits play in behavior — something that will require improvements in the methods for 3D optogenetics, the blog said. Obtaining the detailed anatomical map of these circuits will also be key to understanding the brain’s organization at its deepest level.

Scooped by Dr. Stefan Gruenwald!

Brain Researchers Can Detect Who We Are Thinking About


Scientists scanning the human brain can now tell whom a person is thinking of, the first time researchers have been able to identify whom people are imagining from imaging technology alone.


Work to visualize thought is starting to pile up successes. Recently, scientists have used brain scans to decode imagery directly from the brain, such as what number people have just seen and what memory a person is recalling. They can now even reconstruct videos of what a person has watched based on their brain activity alone. Cornell University cognitive neuroscientist Nathan Spreng and his colleagues wanted to carry this research one step further by seeing if they could deduce the mental pictures of people that subjects conjure up in their heads.


“We are trying to understand the physical mechanisms that allow us to have an inner world, and a part of that is how we represent other people in our mind,” Spreng says. His team first gave 19 volunteers descriptions of four imaginary people they were told were real. Each of these characters had different personalities. Half the personalities were agreeable, described as liking to cooperate with others; the other half were less agreeable, depicted as cold and aloof or having similar traits. In addition, half these characters were described as outgoing and sociable extroverts, while the others were less so, depicted as sometimes shy and inhibited. The scientists matched the genders of these characters to each volunteer and gave them popular names like Mike, Chris, Dave or Nick, or Ashley, Sarah, Nicole or Jenny.


The researchers then scanned volunteers’ brains using functional magnetic resonance imaging (fMRI), which measures brain activity by detecting changes in blood flow. During the scans, the investigators asked participants to predict how each of the four fictitious people might behave in a variety of scenarios — for instance, if they were at a bar and someone else spilled a drink, or if they saw a homeless veteran asking for change.


“Humans are social creatures, and the social world is a complex place,” Spreng says. “A key aspect to navigating the social world is how we represent others.” The scientists discovered that each of the four personalities was linked to a unique pattern of brain activity in a part of the brain known as the medial prefrontal cortex. In other words, researchers could tell whom their volunteers were thinking about.


“This is the first study to show that we can decode what people are imagining,” Spreng says. The medial prefrontal cortex helps people deduce traits about others. These findings suggest this region is also where personality models are encoded, assembled and updated, helping people understand and predict the likely behavior of others and prepare for the future.

Phillip Heath's curator insight, March 18, 2013 4:55 PM

Always interested in how brain research might impact education. The more we know, the more we affirm instinctively good practice. We know more about why it works. We also better know what we don't know. 

Scooped by Dr. Stefan Gruenwald!

Human Connectome Project releases major dataset on brain connectivity


The Human Connectome Project, a five-year endeavor to link brain connectivity to human behavior, has released a set of high-quality imaging and behavioral data to the scientific community. The project has two major goals: to collect vast amounts of data using advanced brain imaging methods on a large population of healthy adults, and to make the data freely available so that scientists worldwide can make further discoveries about brain circuitry.


The initial data release includes brain imaging scans plus behavioral information — individual differences in personality, cognitive capabilities, emotional characteristics and perceptual function — obtained from 68 healthy adult volunteers. Over the next several years, the number of subjects studied will increase steadily to a final target of 1,200. The initial release is an important milestone because the new data have much higher resolution in space and time than data obtained by conventional brain scans.


The Human Connectome Project (HCP) consortium is led by David C. Van Essen, PhD, Alumni Endowed Professor at Washington University School of Medicine in St. Louis, and Kamil Ugurbil, PhD, Director of the Center for Magnetic Resonance Research and the McKnight Presidential Endowed Chair Professor at the University of Minnesota.


“By making this unique data set available now, and continuing with regular data releases every quarter, the Human Connectome Project is enabling the scientific community to immediately begin exploring relationships between brain circuits and individual behavior,”  says Van Essen. “The HCP will have a major impact on our understanding of the healthy adult human brain, and it will set the stage for future projects that examine changes in brain circuits underlying the wide variety of brain disorders afflicting humankind.”


The consortium includes more than 100 investigators and technical staff at 10 institutions in the United States and Europe. It is funded by 16 components of the National Institutes of Health via the Blueprint for Neuroscience Research.

Scooped by Dr. Stefan Gruenwald!

How to tell who a person is thinking about?


It is possible to tell who a person is thinking about by analyzing images of his or her brain. Our mental models of people produce unique patterns of brain activation, which can be detected using advanced imaging techniques, according to a study by Cornell University neuroscientist Nathan Spreng and his colleagues.

"When we looked at our data, we were shocked that we could successfully decode who our participants were thinking about based on their brain activity," said Spreng, assistant professor of human development in Cornell's College of Human Ecology.


Understanding and predicting the behavior of others is a key to successfully navigating the social world, yet little is known about how the brain actually models the enduring personality traits that may drive others' behavior, the authors say. Such ability allows us to anticipate how someone will act in a situation that may not have happened before.


To learn more, the researchers asked 19 young adults to learn about the personalities of four people who differed on key personality traits. Participants were given different scenarios (e.g., sitting on a bus when an elderly person gets on and there are no seats) and asked to imagine how a specified person would respond. During the task, their brains were scanned using functional magnetic resonance imaging (fMRI), which measures brain activity by detecting changes in blood flow.


They found that different patterns of brain activity in the medial prefrontal cortex (mPFC) were associated with each of the four different personalities. In other words, which person was being imagined could be accurately identified based solely on the brain activation pattern.


The results suggest that the brain codes the personality traits of others in distinct brain regions and this information is integrated in the medial prefrontal cortex (mPFC) to produce an overall personality model used to plan social interactions, the authors say.


"Prior research has implicated the anterior mPFC in social cognition disorders such as autism and our results suggest people with such disorders may have an inability to build accurate personality models," said Spreng. "If further research bears this out, we may ultimately be able to identify specific brain activation biomarkers not only for diagnosing such diseases, but for monitoring the effects of interventions."

Scooped by Dr. Stefan Gruenwald!

Researchers at Brown University create first wireless, implantable brain-computer interface


Researchers at Brown University have succeeded in creating the first wireless, implantable, rechargeable, long-term brain-computer interface. The wireless BCIs have been implanted in pigs and monkeys for over 13 months without issue, and human subjects are next.


A wired tether limits the mobility of the patient, and also the real-world testing that can be performed by the researchers. Brown’s wireless BCI allows the subject to move freely, dramatically increasing the quantity and quality of data that can be gathered — instead of watching what happens when a monkey moves its arm, scientists can now analyze its brain activity during complex activity, such as foraging or social interaction. Obviously, once the wireless implant is approved for human testing, being able to move freely — rather than being strapped to a chair in the lab — would be rather empowering.


Inside the device, there’s a li-ion battery, an inductive (wireless) charging loop, a chip that digitizes the signals from your brain, and an antenna for transmitting those neural spikes to a nearby computer. The BCI is connected to a small chip with 100 electrodes protruding from it, which, in this study, was embedded in the somatosensory cortex or motor cortex. These 100 electrodes produce a lot of data, which the BCI transmits at 24Mbps over the 3.2 and 3.8GHz bands to a receiver that is one meter away. The BCI’s battery takes two hours to charge via wireless inductive charging, and then has enough juice to last for six hours of use.
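The quoted figures can be sanity-checked with some back-of-envelope arithmetic. The input numbers below come from the article; the per-electrode rate, the implied sample rate (assuming 12-bit samples, a common choice the article does not state), and the per-session energy are derived figures, not reported ones.

```python
# Figures quoted in the article:
electrodes = 100
link_rate_bps = 24e6      # 24 Mbps aggregate radio link
power_w = 0.100           # 100 mW total consumption
runtime_h = 6             # hours of use per 2-hour charge

# Derived: bandwidth available per electrode.
per_electrode_bps = link_rate_bps / electrodes
print(f"{per_electrode_bps / 1e3:.0f} kbps per electrode")   # 240 kbps

# Derived (assuming 12-bit samples): ~240 kbps per channel corresponds
# to ~20 kS/s, fast enough to resolve individual neural spikes.
samples_per_s = per_electrode_bps / 12
print(f"~{samples_per_s / 1e3:.0f} kS/s at 12 bits/sample")  # 20 kS/s

# Derived: energy drawn per session, ignoring radio and charging losses.
energy_wh = power_w * runtime_h
print(f"~{energy_wh:.1f} Wh per 6 h session")                # 0.6 Wh
```

The 0.6 Wh figure puts the implanted battery in the range of a small hearing-aid cell, which is consistent with the emphasis on frugal power consumption in the next paragraph.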

One of the features that the Brown researchers seem most excited about is the device’s power consumption, which is just 100 milliwatts. For a device that might eventually find its way into humans, frugal power consumption is a key factor that will enable all-day, highly mobile usage. Amusingly, though, the research paper notes that the wireless charging does cause significant warming of the device, which was “mitigated by liquid cooling the area with chilled water during the recharge process and did not notably affect the animal’s comfort.” Another important factor is that the researchers were able to extract high-quality, “rich” neural signals from the wireless implant — a good indicator that it will also help human neuroscience, if and when the device is approved.

Benjamin Johnson's curator insight, March 21, 2013 10:36 PM

Let science open the doors for gaming!



NATURE: First Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information


A brain-to-brain interface (BTBI) enabled a real-time transfer of behaviorally meaningful sensorimotor information between the brains of two rats. In this BTBI, an “encoder” rat performed sensorimotor tasks that required it to select from two choices of tactile or visual stimuli. While the encoder rat performed the task, samples of its cortical activity were transmitted to matching cortical areas of a “decoder” rat using intracortical microstimulation (ICMS). The decoder rat learned to make similar behavioral selections, guided solely by the information provided by the encoder rat's brain. These results demonstrated that a complex system was formed by coupling the animals' brains, suggesting that BTBIs can enable dyads or networks of animals' brains to exchange, process, and store information and, hence, serve as the basis for studies of novel types of social interaction and for biological computing devices.

In his seminal study on information transfer between biological organisms, Ralph Hartley wrote that “in any given communication the sender mentally selects a particular symbol and by some bodily motion, as his vocal mechanism, causes the receiver to be directed to that particular symbol”.


Brain-machine interfaces (BMIs) have emerged as a new paradigm that allows brain-derived information to control artificial actuators and communicate the subject's motor intention to the outside world without the interference of the subject's body. For the past decade and a half, numerous studies have shown how brain-derived motor signals can be utilized to control the movements of a variety of mechanical, electronic and even virtual external devices. Recently, intracortical microstimulation (ICMS) has been added to the classical BMI paradigm to allow artificial sensory feedback signals, generated by these brain-controlled actuators, to be delivered back to the subject's brain simultaneously with the extraction of cortical motor commands.


In the present study, a research group took the BMI approach in a new direction altogether and tested whether it could be employed to establish a new artificial communication channel between animals; one capable of transmitting behaviorally relevant sensorimotor information in real-time between two brains that, for all purposes, would from now on act together towards the fulfillment of a particular behavioral task. Previously, the same team reported that specific motor and sensory parameters can be extracted from populations of cortical neurons using linear or nonlinear decoders in real-time. Here, the scientists tested the hypothesis that a similar decoding performed by a “recipient brain” was sufficient to guide behavioral responses in sensorimotor tasks, therefore constituting a Brain-to-Brain Interface (BTBI). To test this hypothesis, they conducted three experiments in which different patterns of cortical sensorimotor signals, coding a particular behavioral response, were recorded in one rat (hereafter named the “encoder” rat) and then transmitted directly to the brain of another animal (i.e. the “decoder” rat), via intracortical microstimulation (ICMS). All BTBI experiments described below were conducted in awake, behaving rats chronically implanted with cortical microelectrode arrays capable of both neuronal ensemble recordings and intracortical microstimulation. The scientists demonstrated that pairs of rats could cooperate through a BTBI to achieve a common behavioral goal.


Wearable Gesture Control from Thalmic Labs Senses Your Muscles


With visions of Minority Report, many a user has hoped to control gadgets by wildly waving at a Kinect like a symphony conductor. Now there's another way to make your friends laugh at you thanks to Thalmic Labs' MYO armband, which senses motion and electrical activity in your muscles to let you control your computer or other device via Bluetooth 4.0. The company says its proprietary sensor can detect signals right down to individual fingers before you even move them, which -- coupled with an extremely sensitive 6-axis motion detector -- makes for a highly responsive experience. Feedback to the user is given through haptics in the device, which also packs an ARM processor and onboard lithium-ion batteries. MYO is now up for a limited pre-order with Thalmic saying you won't be charged until it ships near year's end, while developers can also grab the API. If you're willing to risk some ridicule to be first on the block to grab one, hit the source.


Computers and Neurons Unite: Scientists Deepen Understanding of Brain


Combining neurons and computers in a new way could let scientists listen in on these cells talking to one another, deepening our understanding of the brain and paving the way for thought-controlled prosthetic limbs.


The University of Wisconsin researchers constructed nanoscale tubes of silicon and germanium, common materials used to make computer chips. They then placed mouse neurons next to these tiny, straw-like tubes and watched as the cells’ axons – branches that carry information from the neuron – grew through the tubes. While this is not the first time that axons have been grown in the lab, it is the first time that they’ve been grown in semiconductor tubes that could potentially interface with electronics.


“Can we make devices that once are implanted can entice neurons to integrate and re-grow into them?” asked study co-author Justin Williams, an associate professor of biomedical engineering at the University of Wisconsin, Madison. “I don’t know whether this exact approach will be directly applicable to [implantation], but at the very least I think the things we can learn from these types of studies will inform the future development of implantable devices.”


The significance of this advance is twofold. First, these semiconductor-based tubes have properties similar to the insulating layer that surrounds the axons, creating a more realistic environment for studying neurons.


Second, because the simulated myelin sheath is made out of semiconductors – the basic building block of computers – other electronic devices such as sensors and probes can be easily integrated with the tubes, which will allow scientists to watch and listen as the cells communicate with one another.

It’s not clear how these findings will be applied to the development of future brain implants, which include brain-computer interfaces.


Using processes typical of the computer industry, the researchers were able to make tiny semiconductor tubes. These tubes were modeled after their biological counterpart, in the hopes that the axons would feel right at home in this environment and behave as they would in the body.


The result: The axons took to the tubes and grew through them with gusto.

The researchers hope that this attraction between the tubes and the neuron cells will let them create customized networks of these cells.


“Normally when you throw neurons in culture, they kind of bunch up with one another, they send out [axons], they connect to every other neuron in this random way and that’s not how the brain is formed, that’s not how the brain works,” Williams said. “If we can use the tubes to make predefined connections we may be able to make small circuits that would be better models of certain in vivo functions.” The next step will be to integrate sensors into the tubes, Williams said.


Connecting a Connectome to Behavior: An Ensemble of Neuroanatomical Models of C. elegans Klinotaxis


Increased efforts in the assembly and analysis of connectome data are providing new insights into the principles underlying the connectivity of neural circuits. However, despite these considerable advances in connectomics, neuroanatomical data must be integrated with neurophysiological and behavioral data in order to obtain a complete picture of neural function. Due to its nearly complete wiring diagram and large behavioral repertoire, the nematode worm Caenorhabditis elegans is an ideal organism in which to explore in detail this link between neural connectivity and behavior. In this paper, researchers developed a neuroanatomically grounded model of salt klinotaxis, a form of chemotaxis in which changes in orientation are directed towards the source through gradual continual adjustments. They identified a minimal klinotaxis circuit by systematically searching the C. elegans connectome for pathways linking chemosensory neurons to neck motor neurons, then pruned the resulting network based on both experimental considerations and several simplifying assumptions. They then used an evolutionary algorithm to find possible values for the unknown electrophysiological parameters in the network such that the behavioral performance of the entire model is optimized to match that of the animal. Multiple runs of the evolutionary algorithm produce an ensemble of such models.
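The parameter search described above can be sketched as a bare-bones evolutionary algorithm. Everything below is a stand-in: the fitness function, parameter count, and population settings are invented for illustration, whereas the study's real objective scores a simulated klinotaxis circuit against the worm's behavior.

```python
import random

# Stand-in for the unknown electrophysiological parameters of the circuit.
N_PARAMS = 8
TARGET = [0.5] * N_PARAMS  # invented optimum for this toy fitness function

def fitness(params):
    # Placeholder objective: in the real study this would simulate the
    # klinotaxis circuit and score how closely it matches worm behavior.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(pop_size=50, generations=200, sigma=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(N_PARAMS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]  # truncation selection: keep the fittest
        # Next generation: Gaussian mutations of randomly chosen parents.
        pop = [[p + rng.gauss(0, sigma) for p in rng.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(round(-fitness(best), 4))  # close to 0: parameters near the optimum
```

Repeating `evolve()` with different seeds yields different near-optimal parameter sets, which is how multiple runs produce the paper's "ensemble" of models.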


Babies' brains may be tuned to language even before birth


Despite having brains that are still largely under construction, babies born up to three months before full term can already distinguish between spoken syllables in much the same way that adults do, an imaging study has shown.


Full-term babies — those born after 37 weeks' gestation — display remarkable linguistic sophistication soon after they are born: they recognize their mother’s voice, can tell apart two languages they’d heard before birth and remember short stories read to them while in the womb. 


But exactly how these speech-processing abilities develop has been a point of contention. “The question is: what is innate, and what is due to learning immediately after birth?” asks neuroscientist Fabrice Wallois of the University of Picardy Jules Verne in Amiens, France. 


To answer that, Wallois and his team needed to peek at neural processes already taking place before birth. It is tough to study fetuses, however, so they turned to their same-age peers: babies born 2–3 months premature. At that point, neurons are still migrating to their final destinations; the first connections between upper brain areas are snapping into place; and links have just been forged between the inner ear and cortex.


To test these neural pathways, the researchers played soft voices to premature babies while they were asleep in their incubators a few days after birth, then monitored their brain activity using a non-invasive optical imaging technique called functional near-infrared spectroscopy. They were looking for the tell-tale signals of surprise that brains display — for example, when they suddenly hear male and female voices intermingled after hearing a long run of only female voices.


The young brains were able to distinguish between male and female voices, as well as between the trickier sounds ‘ga’ and ‘ba’, which demands even faster processing. What is more, the parts of the cortex used were the same as those used by adults for sophisticated understanding of speech and language. 


The results show that linguistic connections inside the cortex are already “present and functional” and did not need to be gradually acquired through repeated exposure to sound, Wallois says. This suggests at least part of these speech-processing abilities is innate. The work could also lead to better techniques for caring for the most vulnerable brains, Wallois adds, including premature babies.

Miro Svetlik's curator insight, March 28, 2013 6:16 AM

This may prove really interesting: babies can surely learn a lot of new languages quickly in early life, but I think they will retain a preference (liking) for a language of some type, which might answer this (just a wild guess :)


Brain-Computer Interface Goes Wireless


Engineers at Brown University have improved on their original and groundbreaking brain-computer interface by creating a wireless device that has successfully been implanted into the brains of monkeys and pigs. The device houses its own internal operating system, complete with a lithium ion battery, ultralow-power circuits for processing and conversion, wireless radio, infrared transmitters, and a copper coil for recharging. "A pill-sized chip of electrodes implanted on the cortex sends signals through uniquely designed electrical connections into the device's laser-welded, hermetically sealed 2.2 inches-long, 9 mm thick titanium 'can.'"

The device, recently written about in the Journal of Neural Engineering, has been functioning well in animals for over a year. Now scientists expect to move closer to testing the device on humans, for which the device was originally intended. "Brain-computer interfaces could help people with severe paralysis control devices with their thoughts. ... Brain-computer interfaces (BCIs) are used to assess the feasibility of people with severe paralysis being able to move assistive devices like robotic arms or computer cursors by thinking about moving their arms and hands."


Researchers generate nerve cells directly in the brain from transplanted fibroblasts and astrocytes


Cellular reprogramming is a new and rapidly emerging field in which somatic cells can be turned into pluripotent stem cells or other somatic cell types simply by the expression of specific combinations of genes. By viral expression of neural fate determinants, it is possible to directly reprogram mouse and human fibroblasts into functional neurons, also known as induced neurons. The resulting cells are nonproliferating and present an alternative to induced pluripotent stem cells for obtaining patient- and disease-specific neurons to be used for disease modeling and for development of cell therapy.

In addition, because the cells do not pass through a stem cell intermediate, direct neural conversion has the potential to be performed in vivo. In a new study, a team of researchers shows that transplanted human fibroblasts and human astrocytes, which are engineered to express inducible forms of neural reprogramming genes, convert into neurons when reprogramming genes are activated after transplantation. Using a transgenic mouse model to specifically direct expression of reprogramming genes to parenchymal astrocytes residing in the striatum, they also show that endogenous mouse astrocytes can be directly converted into neuronal nuclei (NeuN)-expressing neurons in situ. These experiments provide proof of principle that direct neural conversion can take place in the adult rodent brain when using transplanted human cells or endogenous mouse cells as starting cells for neural conversion.


When mind is at rest, brain sends backward signals to clear useless old information


When the mind is at rest, the electrical signals by which brain cells communicate appear to travel in reverse, wiping out unimportant information in the process, but sensitizing the cells for future sensory learning, according to a study of rats conducted by researchers at the National Institutes of Health.


The finding has implications not only for studies seeking to help people learn more efficiently, but also for attempts to understand and treat post-traumatic stress disorder — in which the mind has difficulty moving beyond a disturbing experience.


During waking hours, brain cells, or neurons, communicate via high-speed electrical signals that travel the length of the cell. These communications are the foundation for learning. As learning progresses, these signals travel across groups of neurons with increasing rapidity, forming circuits that work together to recall a memory.


It was previously known that, during sleep, these impulses were reversed, arising from waves of electrical activity originating deep within the brain. In the current study, the researchers found that these reverse signals weakened circuits formed during waking hours, apparently so that unimportant information could be erased from the brain. But the reverse signals also appeared to prime the brain to relearn at least some of the forgotten information. If the animals encountered the same information upon awakening, the circuits re-formed much more rapidly than when they originally encountered the information.


“The brain doesn’t store all the information it encounters, so there must be a mechanism for discarding what isn’t important,” said senior author R. Douglas Fields, Ph.D., head of the Section on Nervous System Development and Plasticity at the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the NIH institute where the research was conducted. “These reverse brain signals appear to be the mechanism by which the brain clears itself of unimportant information.”


Why makeup matters - Brain uses contrast of facial features to judge a person's age


The contrasting nature of facial features is one of the signals that people unconsciously use to decipher how old someone looks, says Psychology Prof. Richard Russell, who has been collaborating with researchers from CE.R.I.E.S. (Epidermal and Sensory Research and Investigation Center), a department of Chanel Research and Technology dedicated to skin related issues and facial appearance.


"Unlike with wrinkles, none of us are consciously aware that we're using this cue, even though it stares us in the face every day," said Russell. The discovery of this cue to facial age perception may partly explain why cosmetics are worn the way they are, and it lends more evidence to the idea that makeup use reflects our biological as well as our cultural heritage, according to Russell. In one study, Russell and his team measured images of 289 faces ranging in age from 20 to 70 years old, and found that through the aging process, the color of the lips, eyes and eyebrows change, while the skin becomes darker. This results in less contrast between the features and the surrounding skin – leaving older faces to have less contrast than younger faces.


The difference in redness between the lips and the surrounding skin decreases, as does the luminance difference between the eyebrow and the forehead, as the face ages. Although not consciously aware of this sign of aging, the mind uses it as a cue for perceiving how old someone is.

In another study involving more than a hundred subjects in Gettysburg and Paris, the scientists artificially increased these facial contrasts and found that the faces were perceived as younger. When they artificially decreased the facial contrasts, the faces were perceived as older.

The image shows the same face twice: the facial contrast has been increased in the left image and decreased in the right image. The face on the left appears younger than the one on the right.

Cosmetics are commonly used to increase aspects of facial contrast, such as the redness of lips. Scientists propose that this can partly explain why makeup is worn the way that it is – shades of lipstick that increase the redness of the lips are making the face appear younger, which is related to healthiness and beauty.
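As a toy illustration of the cue, one standard way to quantify a local feature-versus-skin difference is Weber contrast. The redness values below are invented for illustration (the study worked from calibrated face photographs), but the direction matches its finding that lip-versus-skin contrast declines with age.

```python
def weber_contrast(feature, surround):
    """(feature - surround) / surround: one standard local-contrast measure."""
    return (feature - surround) / surround

# Hypothetical redness values (e.g. CIELAB a*) for lips vs. surrounding skin.
young_lip, young_skin = 28.0, 14.0
old_lip, old_skin = 20.0, 14.0

print(weber_contrast(young_lip, young_skin))            # 1.0  -> high contrast, reads younger
print(round(weber_contrast(old_lip, old_skin), 2))      # 0.43 -> lower contrast, reads older
```

Lipstick that raises lip redness pushes the older value back toward the younger one, increasing the contrast; that is the mechanism the study links to a younger appearance.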

Rescooped by Dr. Stefan Gruenwald from NetBiology!

Mice Learn Faster with Human Glia


Mice that received transplants of human glial progenitor cells learned much more quickly than normal mice, according to a study published today (March 7) in Cell Stem Cell. The findings support the theory that glial cells made a significant contribution to the evolution of our own enhanced cognitive abilities.


“This work is very exciting and surprising because it demonstrates that there may be something special about human glial progenitor cells that contribute to the amazing complexity and computational abilities of the human brain,” said Robert Malenka, a neuroscientist at Stanford University who was not involved in the study, in an email to The Scientist.


For many years, glial cells, non-neuronal cells present in the same numbers as neurons in the brain, were thought to play only a supporting role, providing structure, insulation, and nutrients for neurons. But in the past 20 years it has become clear that glia also participate in the transmission of electrical signals. Specifically, astrocytes—a type of glial cell with thousands of tendrils that reach and encase synapses—can modulate signals passing between neurons and affect the strength of those connections over time.

Recent studies have also demonstrated that human astrocytes are very different from those found in mouse and rat brains, on which most previous studies of astrocyte physiology were based. Human astrocytes are more numerous, larger, and more complex, and they are capable of far more rapid signaling responses than rodent astrocytes.


Together, these results suggest that astrocytes may have been critical to the evolution of enhanced neural processing in humans. Having already transplanted human glial progenitor cells (GPCs) to restore myelination in myelin-deficient mice, Steven Goldman of University of Rochester Medical Center in New York and colleagues realized that they could repeat the trick in normal mice to assess the contribution of human-specific astrocytes to synaptic plasticity and learning.


Goldman’s team grafted human GPCs into the brains of baby mice and waited until the mice reached adulthood, by which time a large proportion of their forebrain glia had been replaced by human cells differentiated from the GPCs, including astrocytes with the same structure and functional capabilities as in humans. The researchers then looked at long-term potentiation (LTP)—the strengthening of synaptic connections and a key mechanism underlying learning—in the hippocampus, and found that it was significantly enhanced in mice with human GPCs compared with normal mice and mice engrafted with mouse GPCs. Goldman and colleagues also assessed the performance of the mice on several behavioral tasks that measure learning and memory—including auditory fear conditioning, a maze test, and object-location memory—and found across the board that mice with human GPCs learned significantly more quickly than normal mice.

Via Luisa Meira

Flip of a Single Molecular Switch Recreates a Youthful Brain that Facilitates Learning and Healing in Mice


The flip of a single molecular switch helps create the mature neuronal connections that allow the brain to bridge the gap between adolescent impressionability and adult stability.


Scientists have long known that young and old brains are very different. Adolescent brains are more malleable, or plastic, which allows them to learn languages more quickly than adults and speeds recovery from brain injuries. The comparative rigidity of the adult brain results in part from the function of a single gene that slows the rapid change in synaptic connections between neurons.


The Nogo Receptor 1 gene is required to suppress high levels of plasticity in the adolescent brain and create the relatively quiescent levels of plasticity in adulthood. In mice without this gene, juvenile levels of brain plasticity persist throughout adulthood. When researchers blocked the function of this gene in old mice, they reset the old brain to adolescent levels of plasticity.


"These are the molecules the brain needs for the transition from adolescence to adulthood," said Stephen Strittmatter. Vincent Coates Professor of Neurology, Professor of Neurobiology and senior author of the paper. "It suggests we can turn back the clock in the adult brain and recover from trauma the way kids recover."


Rehabilitation after brain injuries like strokes requires that patients re-learn tasks such as moving a hand. Researchers found that adult mice lacking Nogo Receptor recovered from injury as quickly as adolescent mice and mastered new, complex motor tasks more quickly than adults with the receptor.


"This raises the potential that manipulating Nogo Receptor in humans might accelerate and magnify rehabilitation after brain injuries like strokes," said Feras Akbik, Yale doctoral student who is first author of the study.


Researchers also showed that Nogo Receptor slows loss of memories. Mice without Nogo receptor lost stressful memories more quickly, suggesting that manipulating the receptor could help treat post-traumatic stress disorder.


"We know a lot about the early development of the brain," Strittmatter said, "But we know amazingly little about what happens in the brain during late adolescence."


Awake? Are Patients Under Anesthesia Really Unconscious?


The prospect of undergoing surgery while not fully "under" may sound like the stuff of horror movies. But one patient in a thousand remembers moments of awareness while under general anesthesia, physicians estimate. The memories are sometimes neutral images or sounds of the operating room, but occasionally patients report being fully aware of pain, terror, and immobility. Though surgeons scrupulously monitor vital signs such as pulse and blood pressure, anesthesiologists have no clear signal of whether the patient is conscious. But a new study finds that the brain may produce an early-warning signal that consciousness is returning—one that's detectable by electroencephalography (EEG), the recording of neural activity via electrodes on the skull.


"We've known since the 1930s that brain activity changes dramatically with increasing doses of anesthetic," says the study's corresponding author, anesthesiologist Patrick Purdon of Massachusetts General Hospital in Boston. "But monitoring a patient's brain with EEG has never become routine practice."


Beginning in the 1990s, some anesthesiologists began using an approach called the bispectral (BIS) index, in which readings from a single electrode are fed into a device that calculates, and displays, a single number indicating where the patient's brain activity falls on a scale of 100 (fully conscious) to zero (a "flatline" EEG). Anything between 40 and 60 is considered the target range for unconsciousness. But this index and other similar ones are only indirect measurements, Purdon explains. In 2011, a team led by anesthesiologist Michael Avidan at the Washington University School of Medicine in St. Louis, Missouri, found that monitoring with the BIS index was slightly less successful at preventing awareness during surgery than the non-brain-based method of measuring exhaled anesthesia in the patient's breath. Of the 2861 patients monitored with the BIS index, seven had memories of the surgery, whereas only two of 2852 patients whose breath was analyzed remembered anything.
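The awareness rates implied by those trial numbers are worth spelling out. This is simple arithmetic on the figures quoted above; the closing caveat about small event counts is an added observation, not a claim from the article.

```python
# Trial numbers quoted above (Avidan's 2011 study):
bis_aware, bis_total = 7, 2861    # BIS-index monitoring arm
gas_aware, gas_total = 2, 2852    # exhaled-anesthetic monitoring arm

bis_rate = bis_aware / bis_total
gas_rate = gas_aware / gas_total

print(f"BIS monitoring: {bis_rate:.3%} with recall")   # 0.245%
print(f"Exhaled gas:    {gas_rate:.3%} with recall")   # 0.070%
print(f"Ratio: {bis_rate / gas_rate:.1f}x")            # 3.5x

# Both rates are the same order as the one-in-a-thousand estimate cited
# at the top of the article; with only 7 and 2 events, though, the
# apparent difference between the two methods carries wide uncertainty.
```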


Despite that, Purdon and his co-workers were hopeful that an "unconsciousness signature" in the brain could be found. Last year, the team worked with three epilepsy patients who'd had electrodes implanted in their brains in preparation for surgery to reduce their seizures. Recording from single neurons in the cortex, where awareness is thought to reside, the researchers gave the patients an injection of the anesthetic propofol. They asked the volunteers to press a button whenever they heard a tone, recording the activity of the neurons. Loss of consciousness, defined as the point when the patients stopped pressing the button, was immediate—40 seconds after injection. Just as immediately, groups of neurons began to emit a characteristic slow oscillation, a kind of ripple in the cells' electrical field. The neurons weren't entirely inactive, but bursts of activity occurred only at specific points in this oscillation, resulting in inconsistent brain cell activity.



Igor Stakhnyuk's curator insight, August 8, 2013 12:50 AM

I chose this article because many people who have had surgery depend on it being painless since you are asleep while the procedure is being done, but many people ask the question of whether they are really unconscious during surgery. This article answers that question.


Brain Plasticity: Can Ectopic Eyes See Outside of the Head?


Recently, we have witnessed remarkable, fictional-sounding advancements in science and medicine. Using embryos from the African clawed frog (Xenopus laevis), scientists at Tufts’ Center for Regenerative and Developmental Biology were able to transplant eye primordia—cells that will eventually grow into an eye—from one tadpole’s head to another’s posterior, flank, or tail. They didn't connect nerve endings or “wiring” or anything like that. They just cut the cells out of the head, sliced open the side or the tail, and jammed them in.


As the eyes grow, they send out nerve fibers, or axons. We know this because the “tissue donor” tadpoles were labeled with tdTomato, a red fluorescent protein. This allowed the researchers to watch innervation, or nerve growth, as it happened. Of those eye primordia that sent out axons, nearly half hardwired directly into the spine, while the other half built connections to the nearby stomach. None of the tadpoles grew tdTomato-marked pathways to the brain, however.


Before they could test the ectopic eyes for functionality, the native ones had to be severed and removed. Otherwise, how would the scientists know which of the tadpole’s three eyes was truly seeing? Finally, it was time to put the aberrant eyes to the test. Using an underwater arena rigged with blue and red LEDs and electric shock, the scientists ran through an exhaustive array of controls and variables. Interestingly, the tadpoles with no eyes at all could still react to LED changes, revealing that they may have other ways of sensing light. However, they proved woefully inadequate at avoiding electric shocks, showing that whatever information they were getting was ultimately flawed or unusable. On the other end of the spectrum were the control tadpoles with normal eyes, which quickly learned to avoid the shocks through the scientists’ regimen of aversive conditioning. Amazingly, a statistically significant portion of the tadpoles with a single transplanted eye not only detected LED changes but also showed learning behavior when confronted with electric shock. Though eyes have been placed on or near rat brains with success in previous studies, this result marked the first time a vertebrate eye had been able to send visual information to the brain without a direct connection—and from as far away as the other end of the organism.


Obviously, many questions remain. For instance, how does the brain know information coming up the spine from the tail is visual? It should have no idea what that aberrant eye is blinking about—and yet it seems to take the information in stride. The paper suggests perhaps different types of data are somehow marked, not altogether different from the way we demarcate files and commands in a computer.


Ahead lies everything from better brain-computer interfaces to bioengineered organ systems. If we can understand the limits of the brain’s plasticity, we might one day be able to create cybernetic devices that don’t just do what we program, but discover on their own what is required.
