Amazing Science
Scooped by Dr. Stefan Gruenwald!

Reconstructing 3D neural tissue with biocompatible nanofiber scaffolds and hydrogels


Damage to neural tissue is typically permanent and causes lasting disability, but a newly discovered approach holds remarkable potential to reconstruct neural tissue at high resolution in three dimensions. Work recently published in the Journal of Neural Engineering demonstrated a method for embedding scaffolds of patterned nanofibers within three-dimensional (3D) hydrogel structures. Neurite outgrowth from neurons in the hydrogel followed the scaffolding, tracking directly along the nanofibers, particularly when the fibers were coated with laminin, a cell adhesion molecule. The coated nanofibers also significantly increased the length of growing neurites, and the type of hydrogel significantly affected the extent to which the neurites tracked the fibers.

“Neural stem cells hold incredible potential for restoring damaged cells in the nervous system, and 3D reconstruction of neural tissue is essential for replicating the complex anatomical structure and function of the brain and spinal cord,” said Dr. McMurtrey, author of the study and director of the research institute that led this work. “So it was thought that the combination of induced neuronal cells with micropatterned biomaterials might enable unique advantages in 3D cultures, and this research showed that not only can neuronal cells be cultured in 3D conformations, but the direction and pattern of neurite outgrowth can be guided and controlled using relatively simple combinations of structural cues and biochemical signaling factors.”


Hooking Up The Brain To A Computer: Human Cyborgs Reveal How We Learn

Hooking the brain up to a computer can do more than let the severely disabled move artificial limbs. It is also revealing the secrets of how we learn

When the patient Scheuermann began losing control of her muscles in 1996, due to her genetic disorder, spinocerebellar degeneration, she gave up her successful business as a planner of murder-mystery-themed events. By 2002 her disease had confined her to a wheelchair, which she now operates by flexing her chin up and down. She retains control of the muscles only in her head and neck. “The signals are not getting from my brain to my nerves,” she explains. “My brain is saying, ‘Lift up!’ to my arm, and my arm is saying, ‘I caaaan't heeeear you.’”

Yet technology now exists to extract those brain commands and shuttle them directly to a robotic arm, bypassing the spinal cord and limbs. Inside Scheuermann's brain are two grids of electrodes roughly the size of a pinhead that were surgically implanted in her motor cortex, a band of tissue on the surface of the brain that controls movement. The electrodes detect the rate at which about 150 of her neurons fire. Thick cables plugged into her scalp relay their electrical activity to a lab computer. As Scheuermann thinks about moving the arm, she produces patterns of electrical oscillations that software on the computer can interpret and translate into digital commands to position the robotic limb. Maneuvering the arm and hand, she can clasp a bar of chocolate or a piece of string cheese before bringing the food to her mouth.
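The decoding step described above can be sketched in a few lines. What follows is a toy illustration under our own assumptions (simulated spike counts and a plain least-squares fit), not the lab's actual decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: firing rates of 150 motor-cortex units sampled over
# 1,000 time bins, plus the 3-D hand velocity we want to reconstruct.
n_units, n_bins = 150, 1000
true_weights = rng.normal(size=(n_units, 3))          # unknown tuning
rates = rng.poisson(lam=5.0, size=(n_bins, n_units))  # spike counts per bin
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_bins, 3))

# Calibration: fit a linear decoder (least squares) from rates to velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Online use: each new bin of spike counts becomes a velocity command.
new_bin = rng.poisson(lam=5.0, size=n_units)
command = new_bin @ decoder   # 3-D velocity sent to the robotic arm
print(command.shape)          # (3,)
```

In a real interface the same fit would be recomputed during calibration sessions, since, as the article explains, the neurons themselves keep changing.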

When neuroscientists first set out to develop brain-controlled prostheses, they assumed they would simply record neural activity passively, as if taping a speech at a conference. The transcript produced by the monitored neurons would then be translated readily into digital commands to manipulate a prosthetic arm or leg. “Early on there was this thought that you could really decode the mind,” says neuroscientist Karunesh Ganguly of the University of California, San Francisco.

Yet the brain is not static. This extraordinarily complex organ evolved to let its owner react swiftly to changing conditions related to food, mates and predators. The electrical activity whirring inside an animal's head morphs constantly to integrate new information as the external milieu shifts.

Ganguly's postdoctoral adviser, neuroscientist Jose M. Carmena of the University of California, Berkeley, wondered whether the brain might adapt to a prosthetic device as well. That an implant could induce immediate changes in brain activity—what scientists call neuroplasticity—was apparent even in 1969, when Eberhard Fetz, a young neuroscientist at the University of Washington, reported on an electrode placed in a monkey's brain to record a single neuron. Fetz decided to reward the animal with a banana-flavored pellet every time that neuron revved up. To his surprise, the creature quickly learned how to earn itself more bites of fake banana. This revelation—that a monkey could be trained to control the firing rate of an arbitrary neuron in its brain—is what Stanford University neuroscientist Krishna Shenoy calls the “Nobel Prize moment” in the field of brain-computer interfaces.

Scientists were beginning to discover, however, that neurons can adjust their tuning in response to the software. In a 2009 study Carmena and Ganguly detailed two key ways that neurons begin to learn. Two monkeys spent several days practicing with a robotic arm. As their dexterity improved, their neurons changed their preferred direction (to point down rather than to the right, for example) and broadened the range of firing rates they were capable of emitting. These tuning adjustments gave the neurons the ability to issue more precise commands when they dispatched their missives.
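The tuning changes Carmena and Ganguly reported can be illustrated with the classic cosine-tuning model of motor cortex; the specific baseline, modulation, and preferred-direction values below are invented for illustration:

```python
import numpy as np

# Cosine-tuning model: a neuron's firing rate varies with the cosine of
# the angle between the movement direction and the neuron's "preferred
# direction" (PD). Learning can be modeled as a shift in PD plus a
# broadening of the firing-rate range.
def firing_rate(theta, baseline, modulation, preferred):
    return baseline + modulation * np.cos(theta - preferred)

theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # 8 reach directions

before = firing_rate(theta, baseline=10, modulation=5, preferred=0.0)       # PD: right
after = firing_rate(theta, baseline=10, modulation=8, preferred=-np.pi/2)   # PD: down

# After practice the PD has rotated, and the range of emitted rates has
# broadened (16 vs 10 spikes/s between the best and worst direction).
print(theta[np.argmax(before)], theta[np.argmax(after)])
print(before.max() - before.min(), after.max() - after.min())
```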


Experiment demonstrates direct brain-to-brain interface between humans


University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE ("A Direct Brain-to-Brain Interface in Humans").

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

In this photo, UW students Darby Losey, left, and Jose Ceballos are positioned in two different buildings on campus as they would be during a brain-to-brain interface demonstration. The sender, left, thinks about firing a cannon at various points throughout a computer game. That signal is sent over the Web directly to the brain of the receiver, right, whose hand hits a touchpad to fire the cannon.



Imagination and reality flow in opposite directions in the brain


As real as that daydream may seem, its path through your brain runs opposite reality. Aiming to discern discrete neural circuits, researchers at the University of Wisconsin-Madison have tracked electrical activity in the brains of people who alternately imagined scenes or watched videos.

"A really important problem in brain research is understanding how different parts of the brain are functionally connected. What areas are interacting? What is the direction of communication?" says Barry Van Veen, a UW-Madison professor of electrical and computer engineering. "We know that the brain does not function as a set of independent areas, but as a network of specialized areas that collaborate."

Van Veen, along with Giulio Tononi, a UW-Madison psychiatry professor and neuroscientist, Daniela Dentico, a scientist at UW-Madison's Waisman Center, and collaborators from the University of Liege in Belgium, published results recently in the journal NeuroImage. Their work could lead to the development of new tools to help Tononi untangle what happens in the brain during sleep and dreaming, while Van Veen hopes to apply the study's new methods to understand how the brain uses networks to encode short-term memory.

During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe -- from a higher-order region that combines inputs from several of the senses out to a lower-order region. In contrast, visual information taken in by the eyes tends to flow from the occipital lobe -- which makes up much of the brain's visual cortex -- "up" to the parietal lobe.

"There seems to be a lot in our brains and animal brains that is directional, that neural signals move in a particular direction, then stop, and start somewhere else," says. "I think this is really a new theme that had not been explored."

The researchers approached the study as an opportunity to test the power of electroencephalography (EEG) -- which uses sensors on the scalp to measure underlying electrical activity -- to discriminate between different parts of the brain's network.

Brains are rarely quiet, though, and EEG tends to record plenty of activity not necessarily related to a particular process researchers want to study.

To zero in on a set of target circuits, the researchers asked their subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle -- focusing on the details of shapes, colors and textures -- before watching a short video of silent nature scenes.

Using an algorithm Van Veen developed to parse the detailed EEG data, the researchers were able to compile strong evidence of the directional flow of information.

"We were very interested in seeing if our signal-processing methods were sensitive enough to discriminate between these conditions," says Van Veen, whose work is supported by the National Institute of Biomedical Imaging and Bioengineering. "These types of demonstrations are important for gaining confidence in new tools."

Vloasis's curator insight, November 22, 2014 11:10 AM

So imagination input flows from the parietal to the occipital lobe, while visual input flows vice versa.

Diane Johnson's curator insight, November 23, 2014 8:46 AM

Interesting findings from electrical and computer engineering studies. Useful connections to the information-processing DCIs.


Major brain pathway rediscovered


A team of neuroscientists in America say they have rediscovered an important neural pathway that was first described in the late nineteenth century but then mysteriously disappeared from the scientific literature until very recently. In a study published today in Proceedings of the National Academy of Sciences, they confirm that the prominent white matter tract is present in the human brain, and argue that it plays an important and unique role in the processing of visual information. The vertical occipital fasciculus (VOF) is a large, flat bundle of nerve fibres that forms long-range connections between sub-regions of the visual system at the back of the brain. It was originally discovered by the German neurologist Carl Wernicke, who had by then published his classic studies of stroke patients with language deficits, and was studying neuroanatomy in Theodor Meynert's laboratory at the University of Vienna. Wernicke saw the VOF in slices of monkey brain, and included it in his 1881 brain atlas, naming it the senkrechte Occipitalbündel, or ‘vertical occipital bundle’.

Meynert - himself a pioneering neuroanatomist and psychiatrist, whose other students included Sigmund Freud and Sergei Korsakov - refused to accept Wernicke's discovery, however. He had already described the brain's white matter tracts, and had arrived at the general principle that they are oriented horizontally, running mostly from front to back within each hemisphere. But the pathway Wernicke had described ran vertically. Another of Meynert's students, Heinrich Obersteiner, identified the VOF in the human brain and mentioned it in his 1888 textbook, calling it the senkrechte Occipitalbündel in one illustration and the fasciculus occipitalis perpendicularis in another. So, too, did Heinrich Sachs, a student of Wernicke's, who labeled it the stratum profundum convexitatis in his 1892 white matter atlas.

The VOF appeared again in a number of other textbooks in the following decades, including the 1918 edition of Gray's Anatomy, but eventually fell into obscurity. This may have been due to early confusion over the nomenclature; to Meynert, who remained influential but refused to acknowledge Wernicke's discovery up until his death in 1892; and to changes in neuroanatomical methods, which gradually moved from brain dissections, which exposed the white matter tracts, to brain tissue slices, which did not. Jason Yeatman and his colleagues at Stanford University came across the VOF by chance several years ago. They have been visualizing the brain's long-range connections using state-of-the-art neuroimaging techniques in order to investigate the neural mechanisms underlying language processing and reading, and in 2012 reported that the growth pattern of the white matter tracts predicts how a child's reading skills will develop over time.

“I stumbled upon it while studying the visual word form area,” says Yeatman. “In every subject, I found this large, vertically-oriented fibre bundle terminating in that region of the brain.” He searched for it in the literature, but found no mention of it, so his then Ph.D. supervisor sent the scans to colleagues in the neurosurgery department.

Rescooped by Dr. Stefan Gruenwald from Popular Science!

New System Lets Humans Control Mouse Genes With Their Thoughts


Scientists have been able to tinker with the genes of other organisms for some time now—that's nothing new. But controlling genes in another animal using only your thoughts? It sounds like an idea that wouldn't be out of place in a sci-fi movie, but it turns out it's now possible, thanks to a newly developed mind-controlled system.

As described in the journal Nature Communications, the system works by using brain waves from human participants to activate a light inside a mouse's brain, which then switches on a particular set of genes. This marks the first time that synthetic biology has been linked to the mind, and the authors believe this work could lead to novel ways to treat medical conditions. For example, the technology could one day be used to deliver drugs instantly when epileptic patients are about to experience a seizure. However, the authors note that the study is very much a proof of concept at the moment.

To create the system, scientists from ETH Zurich married up two different technologies that were already in existence. The first is a brain computer interface (BCI) device that is capable of processing brain waves recorded by an electroencephalography (EEG) headset. Recently, this system allowed paralysed people to power a robotic arm using their thoughts. The second is a method called optogenetics which uses light to control specific events within cells.

The researchers started off by inserting a gene from a species of bacteria that uses light as a source of energy into designer human kidney cells. This gene is responsible for the production of a protein that is responsive to near-infrared light. The cells were engineered in such a way that when this protein is activated, a cascade of events are triggered that ultimately switch on a different gene that encodes a specific human protein. Alongside an infrared LED light that can be activated wirelessly, these cells were put inside a tiny implant that was inserted into the brain of a mouse.

Next, the researchers recorded the brain waves of eight volunteers while they were either meditating or concentrating. These activities produce different signatures of brain activity, which can then be recognized and processed by the EEG headset they were wearing. This information was then fed wirelessly into the brain implant, and if a particular threshold of brain activity was reached, the LED was switched on.
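The thresholding step can be sketched as follows. The band choice, threshold value, and simulated signal are our assumptions for illustration, not the published pipeline: compute band power from a short EEG window and switch the LED on when it crosses a calibrated threshold.

```python
import numpy as np

# Power in a frequency band, estimated from a window's FFT spectrum.
def band_power(signal, fs, low=8.0, high=12.0):
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return spectrum[band].sum() / len(signal)

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(fs) / fs        # one-second window
# Simulated "relaxed" window: a strong 10 Hz alpha rhythm plus noise.
eeg = 20 * np.sin(2 * np.pi * 10 * t) + np.random.default_rng(2).normal(size=fs)

THRESHOLD = 100.0             # would be set during per-subject calibration
led_on = band_power(eeg, fs) > THRESHOLD
print(led_on)
```

In the actual system the classifier distinguishing meditation from concentration is richer than a single band-power threshold, but the decision it feeds to the implant is the same kind of binary trigger.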

Via Neelima Sinha
Neelima Sinha's curator insight, November 12, 2014 5:29 PM

Mind control of gene expression, Wild!


Brain-to-brain interface via Internet replicated and improved


University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago, reported on KurzweilAI.

In the newly published study, which involved six people (instead of two), researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.

In the 2013 study, the UW team was the first to demonstrate two human brains communicating in this way. The recent more-comprehensive study was published Wednesday (Nov. 5) in the open-access journal PLOS ONE.

“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”

Collaborator Rajesh Rao, a UW professor of computer science and engineering, is the lead author on this work.


New brain decoder algorithm can eavesdrop on your inner voice


As you read this, your neurons are firing – that brain activity can now be decoded to reveal the silent words in your head. Talking to yourself used to be a strictly private pastime. That's no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world.

"If you're reading text in a newspaper or a book, you hear a voice in your own head," says Brian Pasley at the University of California, Berkeley. "We're trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak."

When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words.

In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brains to treat epilepsy while they listened to speech. The team found that certain neurons in the brain's temporal lobe were active only in response to particular aspects of sound, such as a specific frequency. One set of neurons might react only to sound waves with a frequency of 1000 hertz, for example, while another set responds only to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode heard words from neural activity alone (PLoS Biology).
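The tuning-based decoding idea can be sketched as a toy template-matching exercise. The tuning curves and frequencies below are invented, and the published model reconstructs full spectrograms rather than identifying single tones:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each unit responds most strongly to one sound frequency (Hz).
tuning_freqs = np.array([500, 1000, 2000, 4000])

def population_response(stimulus_hz):
    # Gaussian tuning on a log-frequency (octave) axis, plus noise.
    octaves = np.log2(tuning_freqs / stimulus_hz)
    return np.exp(-octaves**2) + 0.05 * rng.normal(size=len(tuning_freqs))

def decode(response):
    # Which candidate tone best explains the observed population response?
    templates = [np.exp(-np.log2(tuning_freqs / c)**2) for c in tuning_freqs]
    scores = [np.dot(response, t) for t in templates]
    return tuning_freqs[int(np.argmax(scores))]

heard = 2000
print(decode(population_response(heard)))   # 2000
```

The same principle scales up: with enough frequency-tuned units recorded at once, the algorithm can reconstruct an estimate of the sound, or the inner speech, that produced the activity.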


Scientists have found “hidden” brain activity that can indicate if a vegetative patient is aware

The new research could help doctors quickly identify patients who are aware despite appearing unresponsive and unable to communicate.

Researchers from the University of Cambridge in the UK have identified hidden networks in vegetative patients that could support consciousness, even when a patient appears to be unresponsive. There has been a lot of interest lately in how much patients in vegetative states are aware of their surroundings. Recently, research involving functional magnetic resonance imaging (fMRI) has shown that even patients who are unable to respond or move can carry out mental tasks, such as imagining playing a game of tennis.

Now the team of scientists has used high-density electroencephalography (EEG) and a branch of mathematics known as graph theory to study the networks of activity in the brains of 32 patients diagnosed as being in a vegetative state.
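The graph-theory step can be sketched like this (the pipeline details are our assumption): treat each electrode as a node, link channels whose signals are strongly correlated, and summarize the resulting network with simple metrics.

```python
import numpy as np

rng = np.random.default_rng(4)

# Eight simulated EEG channels; the first four share a common rhythm,
# forming a "connected" sub-network, while the rest are independent noise.
n_channels, n_samples = 8, 1000
shared = rng.normal(size=n_samples)
eeg = rng.normal(size=(n_channels, n_samples))
eeg[:4] += shared

corr = np.corrcoef(eeg)                       # channel-by-channel correlation
adjacency = (np.abs(corr) > 0.3).astype(int)  # edge if strongly correlated
np.fill_diagonal(adjacency, 0)                # no self-connections

degree = adjacency.sum(axis=1)                # connections per electrode
print(degree)  # the first four channels form a densely connected cluster
```

In the study, metrics like these computed over a patient's whole scalp are compared against the healthy-adult pattern; a well-preserved network is the "hidden" signature of possible awareness.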

They also compared these EEG recordings with those of healthy adults. Their findings reveal that the interconnected networks that support awareness in the healthy brain are usually - but, importantly, not always - impaired in patients in a vegetative state.

Amazingly, the research showed that some of these “vegetative” patients have well-preserved consciousness networks that look similar to those of healthy adults - and these are the same patients who had been able to imagine playing tennis in the previous study.

"Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question,” said Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge in a press release.

“But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question – it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative," he added.

This is a huge breakthrough, as it will help scientists develop a relatively simple way of identifying “vegetative” patients who might still be aware.

And unlike the tennis test, which was quite a difficult task that required expensive and often unavailable fMRI scanners, this new technique uses simple EEG technology and could be administered at a patient’s bedside.


Astrocytes can repair the brain after stroke

A previously unknown mechanism, through which astrocytes in the brain produce new nerve cells after a stroke, has been discovered by researchers at Karolinska Institutet and Lund University. The findings are published in the journal Science.

Neurogenesis is restricted in the adult mammalian brain; most neurons are neither exchanged during normal life nor replaced in pathological situations. Scientists now report that stroke elicits a latent neurogenic program in striatal astrocytes in mice. Notch1 signaling is reduced in astrocytes after stroke, and attenuated Notch1 signaling is necessary for neurogenesis by striatal astrocytes. Blocking Notch signaling triggers astrocytes in the striatum and the medial cortex to enter a neurogenic program, even in the absence of stroke, resulting in 850 ± 210 (mean ± SEM) new neurons in a mouse striatum. Thus, under Notch signaling regulation, astrocytes in the adult mouse brain parenchyma carry a latent neurogenic program that may potentially be useful for neuronal replacement strategies.

Principal Investigator of this study has been Jonas Frisén, professor at the Department of Cell and Molecular Biology, Karolinska Institutet. First study-author is Jens Magnusson, a doctoral student in Jonas Frisén’s lab. The team also identified the signaling mechanism that regulates the conversion of the astrocytes to nerve cells. In a healthy brain, this signaling mechanism is active and inhibits the conversion and, consequently, the astrocytes do not generate nerve cells. Following a stroke, the signaling mechanism is suppressed and astrocytes can start the process of generating new cells.



This Device Lets Fully Paralyzed Rats Walk Again, and Human Trials Are Planned


In the past few years, there have been some impressive breakthroughs for those suffering from partial paralysis, but a frustrating lack of success for those who are fully paralyzed. Now scientists working on project NEUWalk at the Swiss Federal Institute of Technology in Lausanne (EPFL) have figured out a way to reactivate the severed spinal cords of fully paralyzed rats, allowing them to walk again via remote control. And, the researchers say, their system is just about ready for human trials.

Previous studies have had some success in using epidural electrical stimulation (EES) to improve motor control in rodents and humans with spinal cord injuries. However, stimulating neurons so as to produce natural walking is no easy task; it requires extremely fast and precise control of the stimulation.

As the researchers wrote in a study published in Science Translational Medicine, "manual adjustment of pulse width, amplitude, and frequency" of the electrical signal being supplied to the spinal cord was required in EES treatment, until now. 

Manual adjustments don't exactly work when you're trying to walk.

The team developed algorithms that can generate and accommodate feedback in real-time during leg movement, making motion natural. Well, sort of. We’re talking about rats with severed spinal cords hooked up to electrodes being controlled by advanced algorithms, after all.
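The kind of real-time adjustment that replaces manual tuning can be sketched as a simple proportional feedback loop. The variable names, gains, and units below are our invention for illustration, not NEUWalk's published controller:

```python
# Each gait cycle, compare the measured foot lift against a target and
# proportionally correct the stimulation amplitude, clamped to safe limits.
def adjust_stimulation(amplitude, measured_step_height, target_height,
                       gain=0.5, lo=0.0, hi=10.0):
    error = target_height - measured_step_height
    amplitude += gain * error                 # proportional correction
    return max(lo, min(hi, amplitude))        # stay within safe bounds

amp = 3.0                                     # initial amplitude (arbitrary units)
for height in [1.0, 1.5, 1.9, 2.0]:           # leg lift converging on target
    amp = adjust_stimulation(amp, height, target_height=2.0)
print(round(amp, 2))
```

The published system closes this loop continuously over pulse width, amplitude, and frequency during movement, which is what turns jerky stimulation into something resembling natural gait.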

"We have complete control of the rat's hind legs," EPFL neuroscientist Grégoire Courtine said in a statement. "The rat has no voluntary control of its limbs, but the severed spinal cord can be reactivated and stimulated to perform natural walking. We can control in real-time how the rat moves forward and how high it lifts its legs."

Rescooped by Dr. Stefan Gruenwald from Brain Imaging and Neuroscience: The Good, The Bad, & The Ugly!

Rapid whole-brain imaging with single cell resolution


A major challenge of systems biology is understanding how phenomena at the cellular scale correlate with activity at the organism level. A concerted effort has been made especially in the brain, as scientists are aiming to clarify how neural activity is translated into consciousness and other complex brain activities.

One example of the technologies needed is whole-brain imaging at single-cell resolution. This imaging normally involves preparing a highly transparent sample that minimizes light scattering and then imaging neurons tagged with fluorescent probes in successive slices to produce a 3D representation. However, limitations in current methods prevent comprehensive study of this relationship. A new high-throughput method, CUBIC (Clear, Unobstructed Brain Imaging Cocktails and Computational Analysis), published in Cell, is a great leap forward: it offers unprecedentedly rapid whole-brain imaging at single-cell resolution together with a simple, amino-alcohol-based protocol for clearing brain samples to make them transparent.

In combination with light sheet fluorescence microscopy, CUBIC was tested for rapid imaging of a number of mammalian systems, such as mouse and primate, showing its scalability for brains of different size. Additionally, it was used to acquire new spatial-temporal details of gene expression patterns in the hypothalamic circadian rhythm center. Moreover, by combining images taken from opposite directions, CUBIC enables whole brain imaging and direct comparison of brains in different environmental conditions.

CUBIC overcomes a number of obstacles that hampered previous methods. First, the clearing and transparency protocol involves serially immersing fixed tissues in just two reagents for a relatively short time. Second, CUBIC is compatible with many fluorescent probes because of its low quenching, which permits probes with longer wavelengths, reduces concern about scattering during whole-brain imaging, and allows multi-color imaging. Finally, it is highly reproducible and scalable. While other methods have achieved some of these qualities, CUBIC is the first to realize all of them.

CUBIC provides information on previously unattainable 3D gene expression profiles and neural networks at the systems level. Because of its rapid and high-throughput imaging, CUBIC offers extraordinary opportunity to analyze localized effects of genomic editing. It also is expected to identify neural connections at the whole brain level. In fact, last author Hiroki Ueda is optimistic about further application to even larger mammalian systems. “In the near future, we would like to apply CUBIC technology to whole-body imaging at single cell resolution”.

Via Donald J Bolger
Donald J Bolger's curator insight, August 13, 2014 11:15 AM

This sounds too good to be true!



Researchers demonstrate first direct brain-to-brain communication in human subjects over a distance of 5,000 miles


In a first-of-its-kind study, an international team of neuroscientists and robotics engineers has demonstrated the viability of direct brain-to-brain communication in humans. Recently published in PLOS ONE, the findings describe the successful transmission of information via the internet between the intact scalps of two human subjects located 5,000 miles apart.

"We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways," explains coauthor Alvaro Pascual-Leone, MD, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at Beth Israel Deaconess Medical Center (BIDMC) and Professor of Neurology at Harvard Medical School. "One such pathway is, of course, the internet, so our question became, 'Could we develop an experiment that would bypass the talking or typing part of internet and establish direct brain-to-brain communication between subjects located far away from each other in India and France ?'" It turned out the answer was "yes."

In the neuroscientific equivalent of instant messaging, Pascual-Leone, together with Giulio Ruffini and Carles Grau leading a team of researchers from Starlab Barcelona, Spain, and Michel Berg, leading a team from Axilum Robotics, Strasbourg, France, successfully transmitted the words "hola" and "ciao" in a computer-mediated brain-to-brain transmission from a location in India to a location in France using internet-linked electroencephalogram (EEG) and robot-assisted and image-guided transcranial magnetic stimulation (TMS) technologies.

Using EEG, the research team first translated the greetings "hola" and "ciao" into binary code and then emailed the results from India to France. There a computer-brain interface transmitted the message to the receiver's brain through noninvasive brain stimulation. The subjects experienced this as phosphenes, flashes of light in their peripheral vision. The light appeared in numerical sequences that enabled the receiver to decode the information in the message, and while the subjects did not report feeling anything, they did correctly receive the greetings.

A second similar experiment was conducted between individuals in Spain and France, with the end result a total error rate of just 15 percent, 11 percent on the decoding end and five percent on the initial coding side.

"By using advanced precision neuro-technologies including wireless EEG and robotized TMS, we were able to directly and noninvasively transmit a thought from one person to another, without them having to speak or write," says Pascual-Leone. "This in itself is a remarkable step in human communication, but being able to do so across a distance of thousands of miles is a critically important proof-of-principle for the development of brain-to-brain communications. We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or motor-based communication."

No comment yet.
Scooped by Dr. Stefan Gruenwald!

A Worm's Mind In A Lego Body: Scientists Map Brain Connectome of C.elegans and Upload it to a Lego Robot

A Worm's Mind In A Lego Body: Scientists Map Brain Connectome of C.elegans and Upload it to a Lego Robot | Amazing Science |

Take the connectome of a worm and transplant it as software in a Lego Mindstorms EV3 robot - what happens next? It is a deep and long standing philosophical question. Are we just the sum of our neural networks. Of course, if you work in AI you take the answer mostly for granted, but until someone builds a human brain and switches it on we really don't have a concrete example of the principle in action.

The nematode worm Caenorhabditis elegans (C. elegans) is tiny and only has 302 neurons. These have been completely mapped and the OpenWorm project is working to build a complete simulation of the worm in software. One of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented an object oriented neuron program.

The model is accurate in its connections and makes use of UDP packets to fire neurons. If two neurons have three synaptic connections then when the first neuron fires a UDP packet is sent to the second neuron with the payload "3". The neurons are addressed by IP and port number. The system uses an integrate and fire algorithm. Each neuron sums the weights and fires if it exceeds a threshold. The accumulator is zeroed if no message arrives in a 200ms window or if the neuron fires. This is similar to what happens in the real neural network, but not exact.

The software works with sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired as the worm's nose. If anything comes within 20cm of the "nose" then UDP packets are sent to the sensory neurons in the network.

The same idea is applied to the 95 motor neurons but these are mapped from the two rows of muscles on the left and right to the left and right motors on the robot. The motor signals are accumulated and applied to control the speed of each motor.  The motor neurons can be excitatory or inhibitory and positive and negative weights are used. 

And the result? It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.

More Information: The Robotic Worm (Biocoder pdf - free on registration)
No comment yet.
Scooped by Dr. Stefan Gruenwald!

The smart mouse with the partially human brain

The smart mouse with the partially human brain | Amazing Science |

Mice have been created whose brains are half human. As a result, the animals are smarter than their siblings. The idea is not to mimic fiction, but to advance our understanding of human brain diseases by studying them in whole mouse brains rather than in dishes. The altered mice still have mouse neurons – the "thinking" cells that make up around half of all their brain cells. But practically all the glial cells in their brains, the ones that support the neurons, are human.

"It's still a mouse brain, not a human brain," says Steve Goldman of the University of Rochester Medical Center in New York. "But all the non-neuronal cells are human." Goldman's team extracted immature glial cells from donated human fetuses. They injected them into mouse pups where they developed into astrocytes, a star-shaped type of glial cell.

Within a year, the mouse glial cells had been completely usurped by the human interlopers. The 300,000 human cells each mouse received multiplied until they numbered 12 million, displacing the native cells.

"We could see the human cells taking over the whole space," says Goldman. "It seemed like the mouse counterparts were fleeing to the margins."

Human astrocytes are 10 to 20 times the size of mouse astrocytes and carry 100 times as many tendrils. This means they can coordinate all the neural signals in an area far more adeptly than mouse astrocytes can. "It's like ramping up the power of your computer," says Goldman.

A battery of standard tests for mouse memory and cognition showed that the mice with human astrocytes are much smarter than their mousy peers. In one test that measures ability to remember a sound associated with a mild electric shock, for example, the humanized mice froze for four times as long as other mice when they heard the sound, suggesting their memory was about four times better. "These were whopping effects," says Goldman. "We can say they were statistically and significantly smarter than control mice."

Goldman first reported last year that mice with human glial cells are smarter. But the human cells his team injected then were mature so they simply integrated into the mouse brain tissue and stayed put. This time, he injected the precursors of these cells, glial progenitor cells, which were able to divide and multiply. That, he says, explains how they were able to take over the mouse brains so completely, stopping only when they reached the physical limits of the space.

To explore further how the human astrocytes affect intelligence, memory and learning, Goldman is already grafting the cells into rats, which are more intelligent than mice. "We've done the first grafts, and are mapping distributions of the cells," he says.

Although this may sound like the work of science fiction – think Deep Blue Sea, where researchers searching for an Alzheimer's cure accidently create super-smart sharks, or Algernon, the lab mouse who has surgery to enhance his intelligence, or even the pigoons, Margaret Atwood's pigs with human stem cells – and human thoughts – Goldman is quick to dismiss any idea that the added cells somehow make the mice more human.

"This does not provide the animals with additional capabilities that could in any way be ascribed or perceived as specifically human," he says. "Rather, the human cells are simply improving the efficiency of the mouse's own neural networks. It's still a mouse."

However, the team decided not to try putting human cells into monkeys. "We briefly considered it but decided not to because of all the potential ethical issues," Goldman says. "It could be difficult to decide which animals to put human brain cells into. If you make animals more human-like, where do you stop?" he says.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Babies' brains adjust to listening to a language, even if they never learn it.

Babies' brains adjust to listening to a language, even if they never learn it. | Amazing Science |

Our brains start soaking in details from the languages around us from the moment we can hear them. One of the first things infants learn of their native languages is the system of consonants and vowels, as well as other speech sound characteristics, like pitch. In the first year of life, a baby’s ear tunes in to the particular set of sounds being spoken in its environment, and the brain starts developing the ability to tell subtle differences among them—a foundation that will make a difference in meaning down the line, allowing the child to learn words and grammar.

But what happens if that child gets shifted into a different culture after laying the foundations of its first native language? Does it forget everything about that first language, or are there some remnants that remain buried in the brain?

According to a recent PNAS paper, the effects of very early language learning are permanently etched into the brain, even if input from that language stops and it’s replaced by another language. To identify this lasting influence, the researchers used functional magnetic resonance imaging (fMRI) scans on children who had been adopted to see what neural patterns could be identified years after adoption.

Because not all linguistic features have easily identifiable effects on the brain, the researchers decided to focus on lexical tone. This is a feature found in some languages that allows a single arrangement of consonants and vowels to have different meanings that are distinguished by a change in pitch. For example, in Mandarin Chinese, the word “ma” with a rising tone means “hemp”—the same syllable with a falling tone means “scold.”

People who speak tone languages have differences in brain activity in a certain region of the brain’s left hemisphere. This region activates in response to pitch differences that are used to convey a difference in linguistic meaning; non-linguistic pitch is processed in the right hemisphere. Tone information is learned very early in life: infants learning Chinese languages (including Mandarin and Cantonese) show signs of recognizing tonal contrasts as early as four months.

Peter Rettig's curator insight, November 30, 2014 12:10 PM

A very good reason to expose our young children to the sounds of different languages ...

Abagail Celsky's curator insight, September 18, 2015 8:20 AM

its like saying you cant teach an old dog new tricks. But you can teach a puppy new tricks. 

Scooped by Dr. Stefan Gruenwald!

Researchers Provide First Peek at How Neurons Multitask

Researchers Provide First Peek at How Neurons Multitask | Amazing Science |

Researchers at the University of Michigan have shown how a single neuron can perform multiple functions in a model organism, illuminating for the first time this fundamental biological mechanism and shedding light on the human brain.

Investigators in the lab of Shawn Xu at the Life Sciences Institute found that a neuron in C. elegans, a tiny worm with a simple nervous system used as a model for studying sensation, movement and other neurological function, regulates both the speed and direction in which the worm moves. The individual neurons can route information through multiple downstream neural circuits, with each circuit controlling a specific behavioral output.

The findings are scheduled for online publication in the journal Cell on Nov. 6. The research is also featured on the cover.

"Understanding how the nervous system and genes lead to behavior is a fundamental question in neuroscience, and we wanted to figure out how C. elegans are able to perform a wide range of complex behaviors with their small nervous systems," Xu said.

The C. elegans nervous system contains 302 neurons.

"Scientists think that even though humans have billions of neurons, some perform multiple functions. Seeing the mechanism in worms will help to understand the human brain," Xu said.

The model neuron studied, AIY, regulates at least two distinct motor outputs: locomotion speed and direction-switch. AIY interacts with two circuits, one that is inhibitory and controls changes in the direction of the worm's movement, and a second that is excitatory and controls speed.

"It's important to note that these two circuits have connections with other neurons and may cross-talk with each other," Xu said. "Neuronal control of behavior is very complex."

Xu is a faculty member in the U-M Life Sciences Institute, where his laboratory is located and research conducted. He is also a professor of molecular and integrative physiology at the U-M Medical School.

Other authors on the paper were Zhaoyu Li, Jie Liu and Maohua Zheng, also of the Life Sciences Institute and Department of Molecular and Integrative Physiology in the U-M Medical School.
The research was supported by the National Institutes of Health.
Shawn Xu: 

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Removing the brake: How to increase brain activity and memory

Removing the brake: How to increase brain activity and memory | Amazing Science |

Is it possible to rapidly increase (or decrease) the amount of information the brain can store? A new international study led by the Research Institute of the McGill University Health Centre (RI-MUHC) suggests is may be. Their research has identified a molecule that improves brain function and memory recall is improved. Published in the latest issue of Cell Reports, the study has implications for neurodevelopmental and neurodegenerative diseases, such as autism spectral disorders and Alzheimer’s disease.

“Our findings show that the brain has a key protein called FXR1P (Fragile X Related Protein 1) that limits the production of molecules necessary for memory formation,” says RI-MUHC neuroscientist Keith Murai, the study’s senior author and Associate Professor in the Department of Neurology and Neurosurgery at McGill University. “When this brake-protein is suppressed, the brain is able to store more information.”

Murai and his colleagues used a mouse model to study how changes in brain cell connections produce new memories. When FXR1P was selectively removed from certain parts of the brain, new molecules were produced. They strengthened connections between brain cells, which correlated with improved memory and recall in the mice.

“The role of FXR1P was a surprising result,” says Dr. Murai. “Previous to our work, no-one had identified a role for this regulator in the brain. Our findings have provided fundamental knowledge about how the brain processes information. We’ve identified a new pathway that directly regulates how information is handled and this could have relevance for understanding and treating brain diseases.” 

“Future research in this area could be very interesting,” he adds. “If we can identify compounds that control the braking potential of FXR1P, we may be able to alter the amount of brain activity or plasticity. For example, in autism, one may want to decrease certain brain activity and in Alzheimer’s disease, we may want to enhance the activity. By manipulating FXR1P, we may eventually be able to adjust memory formation and retrieval, thus improving the quality of life of people suffering from brain diseases.” 

Carlos Rodrigues Cadre's curator insight, November 17, 2014 4:28 PM

adicionar a sua visão ...

Diane Johnson's curator insight, November 18, 2014 9:21 AM

NGSS includes opportunities for students to understand and apply learning about information processing in biological systems

Lucile Debethune's curator insight, November 21, 2014 5:45 AM

Parmi les nombreuses proteines du cerveau, cette recherche se concentre sur la proteines FXR1P, qui agit comme un frein à la production de molécule nécessaire à la formation de molécules. Travailler sur cette protéine pourait être un élément clef dans le traitement du fonctionnement anormal du cerveau.

Scooped by Dr. Stefan Gruenwald!

Optical control for simultaneous stimulation and neural recording of motor functions in the spinal cord

Optical control for simultaneous stimulation and neural recording of motor functions in the spinal cord | Amazing Science |

MIT researchers have demonstrated a highly flexible neural probe made entirely of polymers that can both optically stimulate and record neural activity in a mouse spinal cord—a step toward developing prosthetic devices that can restore functionality to damaged nerves.

"Our goal was to create a tool that would enable neuroscientists and physicians to investigate spinal-cord function on both cellular and systems levels with minimal impact on the tissue integrity," notes Polina Anikeeva, the AMAX Assistant Professor in Materials Science and Engineering and a senior author of the paper published Nov. 7 in Advanced Functional Materials.

Although optogenetics, a method that makes mammalian nerve cells sensitive to light via genetic modification, has been applied extensively in investigation of brain function over the past decade, spinal-cord research has lagged. Earlier this year Caggiano and Bizzi have demonstrated inhibition of motor functions using optogenetics, and now the collaboration between the two groups yielded a device suitable for spinal optical excitation of muscle activity, while giving the researchers an electrical readout.

"Laser pulses ... delivered through the [polycarbonate] core of the fiber probe robustly evoked neural activity in the spinal cord, as recorded with the ... electrodes integrated within the same device," the researchers report.

The fiber was inserted into the proximal lumbar section of the spinal cord in mice, and light delivered through it triggered activity in one of the calf muscles, the gastrocnemius muscle. The results in the optically-sensitive mice were validated by comparison with results in wild type mice, which showed no response to the optical trigger. A toe pinch showed the device could still record mechanically stimulated neuronal activity in the wild-type mice. The researchers monitored muscle activity through electromyographical (EMG) recording, while the conductive polyethylene electrodes in the new device recorded neuronal activity in the spinal cord.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Transplant of stem-cell-derived dopamine neurons shows promise for Parkinson's disease

Transplant of stem-cell-derived dopamine neurons shows promise for Parkinson's disease | Amazing Science |
Parkinson's disease is an incurable movement disorder that affects millions of people around the world, but current treatment options can cause severe side effects and lose effectiveness over time. In a new study, researchers showed that transplantation of neurons derived from human embryonic stem cells, hESCs, can restore motor function in a rat model of Parkinson's disease, paving the way for the use of cell replacement therapy in human clinical trials.

"Our study represents an important milestone in the preclinical assessment of hESC-derived dopamine neurons and provides essential support for their usefulness in treating Parkinson's disease," says senior study author Malin Parmar of Lund University.

Parkinson's disease is caused, in part, by the death of neurons that release a brain chemical called dopamine, leading to the progressive loss of control over dexterity and the speed of movement. Currently available drug and surgical treatment options can lose effectiveness over time and cause serious side effects such as involuntary movements and psychiatric problems. Meanwhile, another approach involving the transplantation of human fetal cells has produced long-lasting clinical benefits; however, the positive effects were only seen in some individuals and can also cause involuntary movements driven by the graft itself. Moreover, the use of tissue from aborted human fetuses presents logistical issues such as the limited availability of cells, hampering the effective translation of fetal tissue transplantation as a realistic therapeutic option.

To rigorously assess an alternative hESC-based treatment approach, Parmar and lead study author Shane Grealish of Lund University transplanted hESC-derived dopamine neurons into brain regions that control movement in a rat model of Parkinson's disease. The transplanted cells survived the procedure, restored dopamine levels back to normal within five months, and established the correct pattern of long-distance connections in the brain. As a result, this therapy restored normal motor function in the animals. Importantly, the hESC-derived neurons show efficacy and potency similar to fetal neurons when transplanted in the rat model of Parkinson's disease, suggesting that the hESC-based approach may be a viable alternative to the approaches that have already been established with fetal cells in Parkinson's patients.

In a related Forum article published in the same issue, Roger Barker of Addenbrooke's Hospital and the University of Cambridge laid out the roadmap for taking stem-cell-derived dopamine neurons to the clinic for treating Parkinson's disease. "This involves understanding the history of the whole field of cell-based therapies for Parkinson's disease and some of the mistakes that have happened," he says. "It also requires a knowledge of what the final product should look like and the need to get there in a collaborative way without being tempted to take shortcuts, because a premature clinical trial could impact negatively on the whole field of regenerative medicine."

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Human skin cells reprogrammed directly into brain cells with combination of microRNAs and transcription factors

Human skin cells reprogrammed directly into brain cells with combination of microRNAs and transcription factors | Amazing Science |

Scientists have described a way to convert human skin cells directly into a specific type of brain cell affected by Huntington’s disease, an ultimately fatal neurodegenerative disorder. Unlike other techniques that turn one cell type into another, this new process does not pass through a stem cell phase, avoiding the production of multiple cell types, the study’s authors report.

The researchers, at Washington University School of Medicine in St. Louis, demonstrated that these converted cells survived at least six months after injection into the brains of mice and behaved similarly to native cells in the brain.

“Not only did these transplanted cells survive in the mouse brain, they showed functional properties similar to those of native cells,” said senior author Andrew S. Yoo, PhD, assistant professor of developmental biology. “These cells are known to extend projections into certain brain regions. And we found the human transplanted cells also connected to these distant targets in the mouse brain. That’s a landmark point about this paper.”

The work appears Oct. 22 in the journal Neuron.

The investigators produced a specific type of brain cell called medium spiny neurons, which are important for controlling movement. They are the primary cells affected in Huntington’s disease, an inherited genetic disorder that causes involuntary muscle movements and cognitive decline usually beginning in middle-adulthood. Patients with the condition live about 20 years following the onset of symptoms, which steadily worsen over time. 

The research involved adult human skin cells, rather than more commonly studied mouse cells or even human cells at an earlier stage of development. In regard to potential future therapies, the ability to convert adult human cells presents the possibility of using a patient’s own skin cells, which are easily accessible and won’t be rejected by the immune system.

To reprogram these cells, Yoo and his colleagues put the skin cells in an environment that closely mimics the environment of brain cells. They knew from past work that exposure to two small molecules of RNA, a close chemical cousin of DNA, could turn skin cells into a mix of different types of neurons.

In a skin cell, the DNA instructions for how to be a brain cell, or any other type of cell, is neatly packed away, unused. In past research published in Nature, Yoo and his colleagues showed that exposure to two microRNAs called miR-9 and miR-124 altered the machinery that governs packaging of DNA. Though the investigators still are unraveling the details of this complex process, these microRNAs appear to be opening up the tightly packaged sections of DNA important for brain cells, allowing expression of genes governing development and function of neurons.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

From decisions to disorders: how neuroscience is changing what we know about ourselves

From decisions to disorders: how neuroscience is changing what we know about ourselves | Amazing Science |

People have wanted to understand our motivations, thoughts and behaviors since the ancient Greeks inscribed “know thyself” on the Temple of Apollo at Delphi. And understanding the brain’s place in health and disease is one of this century’s greatest challenges – take Alzheimer’s, dementia and depression for example.

There are many exciting contributions from neuroscience that have given insight into our thoughts and actions. Three neuroscientists have just been awarded the 2014 Nobel Prize for their discoveries of cells that act as a positioning system in the brain – in other words, the mechanism that allows us to navigate spaces using spatial information and memory at a cellular level.

There are many exciting contributions from neuroscience that have given insight into our thoughts and actions. For example, the neural basis of how we make fast and slow decisions and decision-making under conditions of uncertainty. There is also an understanding how the brain is affected by stress and how these stresses might switch our brains into habit mode, for example operating on “automatic pilot” and forgetting to carry out planned tasks, or the opposite goal-directed system, which would see you going out of your usual routine, for example, popping into a different supermarket to get special ingredients for a recipe.

Disruption in the balance between the two is evident in neuro-psychiatric disorders, such as obsessive compulsive disorder, and recent evidence suggests that lower grey matter volumes in the brain can bias towards habit formation. Neuroscience is also demonstrating commonalities in disorders of compulsivity, methamphetamine abuse and obese subjects with eating disorders.

Neuroscience can challenge previously accepted views. For example, major abnormalities in dopamine function were thought the main cause of adult attention deficit hyperactivity disorder (ADHD). However, recent work suggests that the main cause of the disorder may instead be associated with structural differences in grey matter in the brain.

What neuroscience has made evidently clear is that changes in the brain cause changes in your thinking and actions, but the relationship is two-way. Environmental stressors, including psychological and substance abuse, can also change the brain. We also now know our brains continue developing into late adolescence or early young adulthood, it is not surprising that these environmental influences are particularly potent in a number of disorders during childhood and adolescence including autism.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Erasing memory with light: Cortical Representations Are Reinstated by the Hippocampus during Memory Retrieval

Erasing memory with light: Cortical Representations Are Reinstated by the Hippocampus during Memory Retrieval | Amazing Science |

Just look into the light: not quite, but researchers at the UC Davis Center for Neuroscience and Department of Psychology have used light to erase specific memories in mice, and proved a basic theory of how different parts of the brain work together to retrieve episodic memories.

Optogenetics, pioneered by Karl Diesseroth at Stanford University, is a new technique for manipulating and studying nerve cells using light. The techniques of optogenetics are rapidly becoming the standard method for investigating brain function.

Kazumasa Tanaka, Brian Wiltgen and colleagues at UC Davis applied the technique to test a long-standing idea about memory retrieval. For about 40 years, Wiltgen said, neuroscientists have theorized that retrieving episodic memories -- memories about specific places and events -- involves coordinated activity between the cerebral cortex and the hippocampus, a small structure deep in the brain.

"The theory is that learning involves processing in the cortex, and the hippocampus reproduces this pattern of activity during retrieval, allowing you to re-experience the event," Wiltgen said. If the hippocampus is damaged, patients can lose decades of memories.

But this model has been difficult to test directly, until the arrival of optogenetics.

Wiltgen and Tanaka used mice genetically modified so that when nerve cells are activated, they both fluoresce green and express a protein that allows the cells to be switched off by light. They were therefore able both to follow exactly which nerve cells in the cortex and hippocampus were activated in learning and memory retrieval, and switch them off with light directed through a fiber-optic cable.

They trained the mice by placing them in a cage where they got a mild electric shock. Normally, mice placed in a new environment will nose around and explore. But when placed in a cage where they have previously received a shock, they freeze in place in a "fear response."

Tanaka and Wiltgen first showed that they could label the cells involved in learning and demonstrate that they were reactivated during memory recall. Then they were able to switch off the specific nerve cells in the hippocampus, and show that the mice lost their memories of the unpleasant event. They were also able to show that turning off other cells in the hippocampus did not affect retrieval of that memory, and to follow fibers from the hippocampus to specific cells in the cortex.

"The cortex can't do it alone, it needs input from the hippocampus," Wiltgen said. "This has been a fundamental assumption in our field for a long time and Kazu’s data provides the first direct evidence that it is true."

They could also see how the specific cells in the cortex were connected to the amygdala, a structure in the brain that is involved in emotion and in generating the freezing response.

No comment yet.
Scooped by Dr. Stefan Gruenwald!

Nerve impulses can collide and but still continue unaffected

Nerve impulses can collide and but still continue unaffected | Amazing Science |

According to the traditional theory of nerves, two nerve impulses sent from opposite ends of a nerve annihilate when they collide. New research from the Niels Bohr Institute now shows that two colliding nerve impulses simply pass through each other and continue unaffected. This supports the theory that nerves function as sound pulses. The results are published in the scientific journal Physical Review X.

Nerve signals control the communication between the billions of cells in an organism and enable them to work together in neural networks. But how do nerve signals work?

In 1952, Hodgkin and Huxley introduced a model in which nerve signals were described as an electric current along the nerve produced by the flow of ions. The mechanism is produced by layers of electrically charged particles (ions of sodium and potassium) on either side of the nerve membrane that change places when stimulated. This change in charge creates an electric current.

This model has enjoyed general acceptance. For more than 60 years, all medical and biology textbooks have taught that nerve function is due to an electric current along the nerve pathway. However, this model cannot explain a number of known phenomena of nerve function. Researchers at the Niels Bohr Institute at the University of Copenhagen have now conducted experiments that raise doubts about this well-established model of electrical impulses along the nerve pathway.

"According to the theory of this ion mechanism, the electrical signal leaves an inactive region in its wake, and the nerve can only support new signals after a short recovery period of inactivity. Therefore, two electrical impulses sent from opposite ends of the nerve should be stopped after colliding and running into these inactive regions," explains Thomas Heimburg, Professor and head of the Membrane Biophysics Group at the Niels Bohr Institute at the University of Copenhagen.
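The refractory "wake" argument can be illustrated with a toy model. Below is a hypothetical three-state excitable-medium automaton (an illustration of the general principle, not the authors' model): each cell is resting, excited, or refractory, and because an excited cell leaves a refractory zone behind it, two counter-propagating pulses annihilate where they meet.

```python
REST, FIRE, WAIT = 0, 1, 2  # resting, excited, refractory

def step(cells):
    """One synchronous update of the 1D excitable medium."""
    new = []
    for i, s in enumerate(cells):
        if s == FIRE:
            new.append(WAIT)   # an excited cell becomes refractory
        elif s == WAIT:
            new.append(REST)   # a refractory cell recovers
        else:
            # a resting cell fires if any neighbour is currently excited
            nbrs = cells[max(0, i - 1):i + 2]
            new.append(FIRE if FIRE in nbrs else REST)
    return new

# launch one impulse from each end of the "nerve"
cells = [REST] * 21
cells[0] = cells[-1] = FIRE

for t in range(30):
    if FIRE not in cells and WAIT not in cells:
        print(f"pulses annihilated by step {t}")
        break
    cells = step(cells)
```

Each pulse travels inward trailing a refractory wake; when the pulses meet in the middle, neither can advance into the other's wake, so all activity dies out. This is exactly the behaviour the ion-current theory predicts for colliding impulses, and the behaviour the Copenhagen experiments did not observe.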

Thomas Heimburg and his research group conducted experiments in the laboratory using nerves from earthworms and lobsters. The nerves were removed and placed in a setup that allowed the researchers to stimulate the nerve fibres with electrodes at both ends. Then they measured the signals en route.

"Our study showed that the signals passed through each other completely unhindered and unaltered. That's how sound waves work. A sound wave doesn't stop when it meets another sound wave. Both waves continue on unimpeded. The nerve impulse can therefore be explained by the fact that the pulse is a mechanical wave in the form of a sound pulse, a soliton, that moves along the nerve membrane," explains Thomas Heimburg.

When the sound pulse moves through the nerve pathway, the membrane changes locally from a liquid to a more solid form. The membrane is compressed slightly, and this change leads to an electrical pulse as a consequence of the piezoelectric effect. "The electrical signal is thus not based on an electric current but is caused by a mechanical force," points out Thomas Heimburg.
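The pass-through behaviour of sound waves can be reproduced with a minimal 1D linear wave-equation simulation (a generic sketch of wave superposition, not Heimburg's soliton model): two counter-propagating pulses cross and emerge with their shapes intact.

```python
import math

# finite-difference solution of u_tt = c^2 * u_xx with Courant number 1,
# for which the leapfrog scheme propagates pulses on the grid exactly
N, steps = 400, 150

def gauss(x, x0):
    return math.exp(-((x - x0) / 8.0) ** 2)

# rightward pulse centred at x=100, leftward pulse centred at x=300
u      = [gauss(i, 100) + gauss(i, 300) for i in range(N)]
# u at t = -dt: the rightward pulse was one cell left, the leftward one cell right
u_prev = [gauss(i + 1, 100) + gauss(i - 1, 300) for i in range(N)]

for _ in range(steps):
    u_next = [0.0] * N
    for i in range(1, N - 1):
        u_next[i] = u[i + 1] + u[i - 1] - u_prev[i]
    u_prev, u = u, u_next

# after 150 steps the pulses have crossed: now near x=250 and x=150
print(f"peak after crossing: {max(u):.3f}")  # ~1.0: each pulse survives intact
```

The pulses meet at the midpoint around step 100, superpose, and separate again at full amplitude, which is the wave behaviour the collision experiment observed in real nerves.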

Scooped by Dr. Stefan Gruenwald!

Human skin neurons perform advanced calculations previously believed possible only in the brain

Human skin neurons perform advanced calculations previously believed possible only in the brain | Amazing Science |

A fundamental characteristic of neurons that extend into the skin and record touch, so-called first-order neurons in the tactile system, is that they branch in the skin so that each neuron reports touch from many highly-sensitive zones on the skin.

According to researchers at the Department of Integrative Medical Biology, IMB, Umeå University, this branching allows first-order tactile neurons not only to send signals to the brain that something has touched the skin, but also to process geometric data about the object touching the skin.

"Our recent work has shown that two types of first-order tactile neurons that supply the sensitive skin at our fingertips not only signal information about when and how intensely an object is touched, but also information about the touched object's shape," says Andrew Pruszynski, who is one of the researchers behind the study.

The study also shows that the sensitivity of individual neurons to the shape of an object depends on the layout of the neuron’s highly-sensitive zones in the skin.

"Perhaps the most surprising result of our study is that these peripheral neurons, which are engaged when a fingertip examines an object, perform the same type of calculations done by neurons in the cerebral cortex. Somewhat simplified, it means that our touch experiences are already processed by neurons in the skin before they reach the brain for further processing," says Andrew Pruszynski.
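The idea that the layout of a neuron's sensitive zones determines its sensitivity to shape can be illustrated with a toy receptive-field model (entirely hypothetical zones and stimuli, not the study's data): each model neuron sums the stimulation falling on its zones, so a neuron whose zones lie along a row responds more strongly to an edge of matching orientation.

```python
# a fingertip patch modelled as a 5x5 grid; each stimulus is the set of
# grid positions where an edge contacts the skin
horizontal_edge = [(2, x) for x in range(5)]   # edge lying along row 2
vertical_edge   = [(y, 2) for y in range(5)]   # edge lying along column 2

def response(zones, contact):
    """Firing drive of a first-order neuron: number of sensitive zones touched."""
    return sum(1 for z in zones if z in contact)

# two hypothetical neurons with differently laid-out sensitive zones
row_neuron = [(2, 0), (2, 2), (2, 4)]   # zones spread along a row
col_neuron = [(0, 2), (2, 2), (4, 2)]   # zones spread along a column

print(response(row_neuron, horizontal_edge), response(row_neuron, vertical_edge))
print(response(col_neuron, horizontal_edge), response(col_neuron, vertical_edge))
```

The row-layout neuron is driven hardest by the horizontal edge and the column-layout neuron by the vertical edge, so the population's firing already carries information about edge orientation before any signal reaches the brain, which is the kind of peripheral geometric computation the study describes.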
