Amazing Science
Scooped by Dr. Stefan Gruenwald!

Brain scans reveal how LSD affects consciousness


Researchers have published the first images showing the effects of LSD on the human brain, as part of a series of studies to examine how the drug causes its characteristic hallucinogenic effects.


David Nutt, a neuropsychopharmacologist at Imperial College London who has previously examined the neural effects of mind-altering drugs such as the hallucinogen psilocybin, found in magic mushrooms, was one of the study's leaders. He tells Nature what the research revealed, and how he hopes LSD (lysergic acid diethylamide) might ultimately be useful in therapies.


Scientists create largest map of brain connections to date

Map of mouse visual cortex shows some striking functional connections


This tangle of wiry filaments is not a bird’s nest or a root system. Instead, it’s the largest map to date of the connections between brain cells—in this case, about 200 neurons from a mouse’s visual cortex. To map the roughly 1300 connections, or synapses, between the cells, researchers used an electron microscope to take millions of nanoscopic pictures from a speck of tissue not much bigger than a dust mite, carved into nearly 3700 slices.


Then, teams of “annotators” traced the spindly projections of the synapses, digitally stitching stacked slices together to form the 3D map. The completed map reveals some interesting clues about how the mouse brain is wired: Neurons that respond to similar visual stimuli, such as vertical or horizontal bars, are more likely to be connected to one another than to neurons that carry out different functions, the scientists report online today in Nature.


In the image above, some neurons are color-coded according to their sensitivity to various line orientations. Ultimately, by speeding up and automating the process of mapping such networks in both mouse and human brain tissue, researchers hope to learn how the brain’s structure enables us to sense, remember, think, and feel.


DARPA’s ‘Targeted Neuroplasticity Training’ program aims to accelerate learning ‘beyond normal levels’


DARPA has announced a new program called Targeted Neuroplasticity Training (TNT) aimed at exploring how to use peripheral nerve stimulation and other methods to enhance learning.


DARPA already has research programs underway to use targeted stimulation of the peripheral nervous system as a substitute for drugs to treat diseases and accelerate healing, to control advanced prosthetic limbs, and to restore tactile sensation.


But now DARPA plans to take an even more ambitious step: It aims to enlist the body’s peripheral nerves to achieve something that has long been considered the brain’s domain alone: facilitating learning — specifically, training in a wide range of cognitive skills.


The goal is to reduce the cost and duration of the Defense Department’s extensive training regimen, while improving outcomes. If successful, TNT could accelerate learning and reduce the time needed to train foreign language specialists, intelligence analysts, cryptographers, and others.


“Many of these skills, such as understanding and speaking a new foreign language, can be challenging to learn,” says the DARPA statement. “Current training programs are time consuming, require intensive study, and usually require evidence of a more-than-minimal aptitude for eligibility. Thus, improving cognitive skill learning in healthy adults is of great interest to our national security.”


Babies know when they don’t know something


Thoughts don't just flit around in our heads unobserved: humans know when something's going on in our own brains, and we evaluate our own thoughts. For example, we can judge when we're not certain about something, and act accordingly. This ability, called metacognition (thinking about thinking), has been found in a number of species, but humans are unusual in our ability to communicate what we know about our own thoughts and knowledge.


How early in life do we develop metacognition? Children under the age of four, who confidently proclaim knowledge of things they can’t possibly know, seem to be pretty bad at it. Babies, on the other hand, point at things to ask questions about them. They shouldn't be able to do this unless they've worked out that they don't know something.


It’s possible that previous experiments haven’t found evidence of metacognition in younger children because they just weren’t testing it in the right way. After all, other species have metacognition, and experimenters have found ways to test that even though the animals can’t talk about what they know. What if children under four years old experience and use metacognition but are just bad at realizing it and letting anyone know?


Researchers at a cognitive science lab at Paris Sciences et Lettres Research University investigated this question by setting up an experiment that tested babies’ metacognition without forcing them to talk about it. They found evidence that the babies looked to their parents for help when they didn't know the answer to something.


Here's how the researchers adapted an experiment that had previously been used with rhesus monkeys: show a toy to an infant before hiding it. After waiting 3 to 12 seconds for the baby's mind to wander, instruct the baby to point to where the toy is hidden in order to get it back.


The researchers did this with 80 infants, all between 19 and 21 months. Each baby did this four times as practice (what professionals call "training trials"), then a further 10 times. Half of those times the babies saw where the toy was hidden—that is, they knew where it had gone—and half of the time they didn’t get to watch, meaning that they couldn’t know where the toy was.


Two different groups were set up. All the babies had a parent or other caregiver present, but for half of them (the control group) the parent was told to be unresponsive if the baby looked at them for help. During the training trials, the parents in the other group demonstrated that they knew where the toy was all the time and would help the baby find it—but only if the baby looked at them for help.


The researchers found that the babies who were able to ask for help performed more accurately than the control group, finding the toy 66 percent of the time compared to the control group’s 56 percent. Although this result isn’t statistically significant, there are reasons to think these babies experienced metacognition. Fourteen of the babies in the help group never actually asked for it, and their performance suffered: their accuracy rate was similar to that of the babies who weren’t able to ask for help. The babies who did ask for help tended to be those that had faced a longer delay or had not seen where the toy was hidden.
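As a rough back-of-the-envelope check, the reported accuracy gap (66% vs. 56%) can be run through a standard two-proportion z-test. The trial counts below are assumptions (roughly 40 infants per group with 10 test trials each), and the test naively treats all trials as independent, which they are not: trials from the same baby are correlated, which is one reason a trial-level test can look significant even when the infant-level analysis is not.

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 66% vs. 56% accuracy; 400 trials per group is an assumption
# (~40 babies x 10 test trials), treated here as independent draws.
z, p = two_proportion_z(0.66, 400, 0.56, 400)
```

Under these (over-optimistic) independence assumptions the gap looks large relative to sampling noise, which illustrates why clustered data need a subject-level analysis rather than a pooled trial count.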


Monkeys Drive Wheelchairs Using Only Their Thoughts


Neuroscientists at Duke Health have developed a brain-machine interface (BMI) that allows primates to use only their thoughts to navigate a robotic wheelchair. 


A computer in the lab of Miguel Nicolelis, M.D., Ph.D., monitors brain signals from a rhesus macaque. The BMI uses signals from hundreds of neurons recorded simultaneously in two regions of the monkeys’ brains that are involved in movement and sensation. As the animals think about moving toward their goal -- in this case, a bowl containing fresh grapes -- computers translate their brain activity into real-time operation of the wheelchair.


The interface, described in the March 3 issue of the online journal Scientific Reports, demonstrates the future potential for people with disabilities who have lost most muscle control and mobility due to quadriplegia or ALS, said senior author Miguel Nicolelis, M.D., Ph.D., co-director for the Duke Center for Neuroengineering.


“In some severely disabled people, even blinking is not possible,” Nicolelis said. “For them, using a wheelchair or device controlled by noninvasive measures like an EEG (a device that monitors brain waves through electrodes on the scalp) may not be sufficient. We show clearly that if you have intracranial implants, you get better control of a wheelchair than with noninvasive devices.”


Scientists began the experiments in 2012, implanting hundreds of hair-thin microfilaments in the premotor and somatosensory regions of the brains of two rhesus macaques. They trained the animals by passively navigating the chair toward their goal, the bowl containing grapes. During this training phase, the scientists recorded the primates’ large-scale electrical brain activity. The researchers then programmed a computer system to translate brain signals into digital motor commands that controlled the movements of the wheelchair.


As the monkeys learned to control the wheelchair just by thinking, they became more efficient at navigating toward the grapes and completed the trials faster, Nicolelis said. In addition to observing brain signals that corresponded to translational and rotational movement, the Duke team also discovered that primates’ brain signals showed signs they were contemplating their distance to the bowl of grapes.


Now you can learn to fly a plane from expert-pilot brainwave patterns - 33% improvement


You can learn how to improve your novice pilot skills by having your brain zapped with recorded brain patterns of experienced pilots via transcranial direct current stimulation (tDCS), according to researchers at HRL Laboratories.

“We measured the brain activity patterns of six commercial and military pilots, and then transmitted these patterns into novice subjects as they learned to pilot an airplane in a realistic flight simulator,” says Matthew Phillips, PhD.

The study, published in an open-access paper in the February 2016 issue of the journal Frontiers in Human Neuroscience, found that novice pilots who received brain stimulation via electrode-embedded head caps improved their piloting abilities, with a 33 percent increase in skill consistency, compared to those who received sham stimulation. “We measured the average g-force of the plane during the simulated landing and compared it to control subjects who received a mock brain stimulation,” says Phillips.

“Pilot skill development requires a synthesis of multiple cognitive faculties, many of which are enhanced by tDCS and include dexterity, mental arithmetic, cognitive flexibility, visuo-spatial reasoning, and working memory,” the authors note.

The study focused on a working-memory area — the right dorsolateral prefrontal cortex (DLPFC) — and the left motor cortex (M1), using continuous electroencephalography (EEG) to monitor midline frontal theta-band oscillatory brain activity and functional near infrared spectroscopy (fNIRS) to monitor blood oxygenation to infer neuronal activity.

The researchers used the XForce Dream Simulator package from X-Force PC and the X-plane 10 flight simulator software from Laminar Research for flight simulation training. Previous research has demonstrated that tDCS can both help patients more quickly recover from a stroke and boost a healthy person’s creativity; HRL’s new study is one of the first to show that tDCS is effective in accelerating practical learning.

Phillips speculates that the potential to increase learning with brain stimulation may make this form of accelerated learning commonplace. “As we discover more about optimizing, personalizing, and adapting brain stimulation protocols, we’ll likely see these technologies become routine in training and classroom environments,” he says. “It’s possible that brain stimulation could be implemented for classes like drivers’ training, SAT prep, and language learning.”


Faciotopy—A face-feature map with face-like topology is engraved in our brain


The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organization and status as cortical areas with well-defined boundaries remain unclear. Here a team of scientists tests these regions for “faciotopy”, a particular hypothesis about their intrinsic functional organization. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighboring patches represent features that are physically neighboring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience.

Faces have a prototypical configuration of features and are usually perceived in a canonical upright orientation, frequently fixated in particular locations. To test the faciotopy hypothesis, the scientists presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses obtained in V1 are best explained by low-level image properties of the stimuli, whereas OFA, and to a lesser degree FFA, showed evidence for faciotopic organization. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distance between the features in a face.

Faciotopy would be the first example of a cortical map reflecting the topology, not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance. It remains to be explored whether there are also other areas with cortical maps reflecting topology of external objects.


WUSTL Team develops wireless, dissolvable sensors to monitor brain


A team of neurosurgeons and engineers has developed wireless brain sensors that monitor intracranial pressure and temperature and then are absorbed by the body, negating the need for surgery to remove the devices. Such implants, developed by scientists at Washington University School of Medicine in St. Louis and engineers at the University of Illinois at Urbana-Champaign, potentially could be used to monitor patients with traumatic brain injuries, but the researchers believe they can build similar absorbable sensors to monitor activity in organ systems throughout the body. Their findings are published online Jan. 18, 2016 in the journal Nature.

"Electronic devices and their biomedical applications are advancing rapidly," said co-first author Rory K. J. Murphy, MD, a neurosurgery resident at Washington University School of Medicine and Barnes-Jewish Hospital in St. Louis. "But a major hurdle has been that implants placed in the body often trigger an immune response, which can be problematic for patients. The benefit of these new devices is that they dissolve over time, so you don't have something in the body for a long time period, increasing the risk of infection, chronic inflammation and even erosion through the skin or the organ in which it's placed. Plus, using resorbable devices negates the need for surgery to retrieve them, which further lessens the risk of infection and further complications." Murphy is most interested in monitoring pressure and temperature in the brains of patients with traumatic brain injury.

About 50,000 people die of such injuries annually in the United States. When patients with such injuries arrive in the hospital, doctors must be able to accurately measure intracranial pressure, because an increase in pressure can lead to further brain injury, and there is no way to reliably estimate pressure levels from brain scans or clinical features.

"However, the devices commonly used today are based on technology from the 1980s," Murphy explained. "They're large, they're unwieldy, and they have wires that connect to monitors in the intensive care unit. They give accurate readings, and they help, but there are ways to make them better."


Singing in the brain: Songbirds sing like human opera singers


Songbirds’ vocal muscles work like those of human speakers and singers, finds a study recently published in the Journal of Neuroscience. The research on Bengalese finches showed that each of their vocal muscles can change its function to help produce different parameters of sound, in a manner similar to that of a trained opera singer.

"Our research suggests that producing really complex song relies on the ability of the songbirds' brains to direct complicated changes in combinations of muscles," says Samuel Sober, a biologist at Emory University and lead author of the study. "In terms of vocal control, the bird brain appears as complicated and wonderful as the human brain."

Pitch, for example, is important to songbird vocalization, but there is no single muscle devoted to controlling it. "They don't just contract one muscle to change pitch," Sober says. "They have to activate a lot of different muscles in concert, and these changes are different for different vocalizations. Depending on what syllable the bird is singing, a particular muscle might increase pitch or decrease pitch."

Previous research has revealed some of the vocal mechanisms within the human "voice box," or larynx. The larynx houses the vocal cords and an array of muscles that help control pitch, amplitude and timbre.

Instead of a larynx, birds have a vocal organ called the syrinx, which holds their vocal cords deeper in their bodies. While humans have one set of vocal cords, a songbird has two sets, enabling it to produce two different sounds simultaneously, in harmony with itself.

"Lots of studies look at brain activity and how it relates to behaviors, but muscles are what translates the brain's output into behavior," Sober says. "We wanted to understand the physics and biomechanics of what a songbird's muscles are doing while singing."

The researchers devised a method involving electromyography (EMG) to measure how the neural activity of the birds activates the production of a particular sound through the flexing of a particular vocal muscle.

The results showed the complex redundancy of the songbird's vocal muscles. "It tells us how complicated the neural computations are to control this really beautiful behavior," Sober says, adding that songbirds have a network of brain regions that non-songbirds do not.


Revolutionary Neuroscience Technique - Optogenetics - Slated for Human Clinical Trials


A technique called optogenetics has transformed neuroscience during the past 10 years by allowing researchers to turn specific neurons on and off in experimental animals. By flipping these neural switches, it has provided clues about which brain pathways are involved in diseases like depression and obsessive-compulsive disorder.

“Optogenetics is not just a flash in the pan,” says neuroscientist Robert Gereau of Washington University in Saint Louis. “It allows us to do experiments that were not doable before. This is a true game changer like few other techniques in science.”

Since the first papers on optogenetics were published in the mid-aughts, some researchers have mused about one day using the technique in patients, imagining the possibility of an off-switch for depression, for instance.

The technique, however, would require that a patient submit to a set of highly invasive medical procedures: genetic engineering of neurons to insert molecular switches to activate or switch off cells, along with threading of an optical fiber into the brain to flip those switches.

Spurred on by a set of technical advances, optogenetics pioneer Karl Deisseroth, together with other Stanford University researchers, has formed a company to pursue optogenetics trials in patients within the next several years—one of several start-ups that are now contemplating clinical trials of the technique.

Circuit Therapeutics, founded in 2010, is moving forward with specific plans to treat neurological diseases. It also partners with pharmaceutical companies to help them use optogenetics in animal research to develop novel drug targets for human diseases. Circuit wants to begin clinical trials for optogenetics to treat chronic pain, a therapy that would be less invasive than applications requiring implantation deep inside the brain. Neurons affected by chronic pain are relatively accessible, because they reside in and just outside the spinal cord, an easier target than the brain.

Even nerve endings in the skin might be targeted, making them much easier to reach. “In animal models it works incredibly well,” says Scott Delp, a neuroscientist at Stanford, who collaborates with Deisseroth. The firm is also working to develop treatments for Parkinson’s and other neurological disorders.

Interest in optogenetics and closely related therapies in patients is growing. RetroSense Therapeutics, a Michigan-based company, has reported plans to soon begin human trials of optogenetics for a genetic condition that causes blindness. The new technology relies on opsins, a type of ion channel consisting of proteins that conduct neurons’ electrical signaling. Neurons contain hundreds of different types of ion channels, but opsins open specifically in response to light. Some opsins are found in the human retina, but those used in optogenetics are derived from algae and other organisms.

The first opsins used in optogenetics, called channelrhodopsins, open to allow positively charged ions to enter the cell when activated by a flash of blue light, which causes the neuron to fire an electrical impulse. Other opsin proteins pass inhibitory, negatively charged ions in response to light, making it possible to silence neurons as well. Researchers have widely expanded the arsenal of available opsins with genetic engineering, for example making ones that stay open in response to a short burst of light.


Rhythmic expression of many genes falls out of sync in older human brains


Circadian cycles shift as humans get older—sleep and body temperature patterns change, for instance. The rhythmic cycling of numerous genes’ expression in the brain also shifts as people age, researchers reported this week (December 21) in PNAS. The levels of many transcripts became less robust in their daily ups and downs, while another set of mRNAs emerged with a rhythmicity not seen in younger counterparts.

“You can imagine that things actually get weaker with age, but that things can get stronger with age is really exciting,” Doris Kretzschmar, a neuroscientist at the Oregon Institute of Occupational Health Sciences who was not involved in the study, told NPR’s Shots.

The researchers, led by Colleen McClung at the University of Pittsburgh School of Medicine in Pennsylvania, collected cortical tissue from people whose hour of death was known. Comparing gene expression levels between 31 subjects under 40 years old and 37 subjects over age 60, the researchers found 1,063 transcripts in one part of the prefrontal cortex that lost rhythmicity altogether in the older group. In this same part of the brain, 434 genes gained a rhythm that was not seen among younger individuals. In another part of the prefrontal cortex, 588 genes lost their daily cycling with age, while 533 became rhythmic.
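Rhythmicity in this kind of data is commonly assessed with a cosinor-style fit: regress each transcript’s expression (across subjects binned by hour of death) on a 24-hour cosine and sine, then examine the fitted amplitude and peak time. The sketch below illustrates that general idea, not the authors’ actual pipeline; the synthetic data, sampling times, and parameter values are all assumptions.

```python
import numpy as np

def cosinor_fit(times_h, expression):
    """Least-squares fit of a 24-hour cosine to expression levels.

    Returns (mesor, amplitude, acrophase_h): baseline level, rhythm
    strength, and the hour of peak expression.
    """
    w = 2 * np.pi / 24.0
    X = np.column_stack([np.ones_like(times_h),
                         np.cos(w * times_h),
                         np.sin(w * times_h)])
    m, a, b = np.linalg.lstsq(X, expression, rcond=None)[0]
    amplitude = np.hypot(a, b)
    acrophase_h = (np.arctan2(b, a) / w) % 24  # hour of peak expression
    return m, amplitude, acrophase_h

# synthetic transcript peaking at hour 8 (illustrative data, not the study's)
t = np.arange(0, 48, 3.0)
y = 5.0 + 2.0 * np.cos(2 * np.pi / 24.0 * (t - 8.0))
mesor, amp, peak_h = cosinor_fit(t, y)
```

A transcript whose fitted amplitude is near zero (or fails a significance test against noise) would be called arrhythmic; "losing rhythmicity with age" corresponds to the amplitude dropping below such a threshold in the older group.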

It’s not clear what these changes in expression cycles might mean for health and aging. “Since depression is associated with accelerated molecular aging, and with disruptions in daily routines, these results also may shed light on molecular changes occurring in adults with depression,” coauthor Etienne Sibille of the University of Toronto said in a press release.


Study links epigenetic processes to the development of brain circuitry


From before birth through childhood, connections form between neurons in the brain, ultimately making us who we are. So far, scientists have gained a relatively good understanding of how neural circuits become established, but they know less about the genetic control at play during this crucial developmental process.

Now, a team led by researchers at The Rockefeller University has described for the first time the so-called epigenetic mechanisms underlying the development of the cerebellum, the portion of the brain that allows us to learn and execute complex movements. The term epigenetics refers to changes to DNA that influence how and when genes are expressed.

“Until now, nobody knew what genes control the steps needed to make the circuitry that in turn controls the behavior the cerebellum is famous for,” says Mary E. Hatten, Frederick P. Rose Professor and head of the Laboratory of Developmental Neurobiology. “Our study shows that pivotal changes in the levels of all epigenetic pathway genes are needed to form the circuits.” These epigenetic pathway genes modify chromatin, which is DNA packaged with protein. Alterations to chromatin are an important type of epigenetic change because they affect which genes are available for translation into proteins.

Further investigation revealed that one of these epigenetic regulators was specifically responsible for processes crucial to forming connections between these neurons and other parts of the brain and for the expression of ion channels that transmit signals across synapses, which are gaps between neurons.

Two developments in technology made the current study possible. The first, TRAP (translating ribosome affinity purification), was developed at Rockefeller. It enables researchers to map gene expression in specific types of neurons. Hatten’s team applied this method to identify the genes expressed in granule cells, one of the two cell types that make up the cerebellum, in the mouse brain from birth through adulthood. They focused on changes in gene expression 12 to 21 days after birth, since this is the main period during which the circuitry of the cerebellum is formed.

The second key method used in the study is metagene analysis, a mathematical model developed at the Broad Institute of MIT and Harvard that allows researchers to study large sets of genes and see changes in patterns that would be too difficult to interpret by looking at individual genes. Three investigators from Broad collaborated on the current study. “Using this analytical tool, we showed that during this crucial period of time in development, all the pathways that control the remodeling of chromatin changed,” Hatten says.


Role of Tet1/3 Genes and Chromatin Remodeling Genes in Cerebellar Circuit Formation. Neuron 89, 1–13.
Xiaodong Zhu, David Girardo, Eve-Ellen Govek, Keisha John, Marian Mellén, Pablo Tamayo, Jill P. Mesirov, and Mary E. Hatten


How is a developing brain assembled?

New open-source software that can help track the embryonic development and movement of neuronal cells throughout the body of the worm is now available to scientists. The software is described in a paper published December 3rd in the open-access journal eLife by researchers at the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and the Center for Information Technology (CIT), along with Memorial Sloan-Kettering Institute, New York City; Yale University, New Haven, Connecticut; Zhejiang University, China; and the University of Connecticut Health Center, Farmington. NIBIB is part of the National Institutes of Health.

As far as biologists have come in understanding the brain, much remains to be revealed. One significant challenge is determining the formation of complex neuronal structures made up of billions of cells in the human brain. As with many biological challenges, researchers are first examining this question in simpler organisms, such as worms.

Although scientists have identified a number of important proteins that determine how neurons navigate during brain formation, it's largely unknown how all of these proteins interact in a living organism. Model animals, despite their differences from humans, have already revealed much about human physiology because they are much simpler and easier to understand. In this case, researchers chose Caenorhabditis elegans (C. elegans), because it has only 302 neurons, 222 of which form while the worm is still an embryo. While some of these neurons go to the worm nerve ring (brain) they also spread along the ventral nerve cord, which is broadly analogous to the spinal cord in humans. The worm even has its own versions of many of the same proteins used to direct brain formation in more complex organisms such as flies, mice, or humans.

"Understanding why and how neurons form and the path they take to reach their final destination could one day give us valuable information about how proteins and other molecular factors interact during neuronal development," said Hari Shroff, Ph.D., head of the NIBIB research team. "We don't yet understand neurodevelopment even in the context of the humble worm, but we're using it as a simple model of how these factors work together to drive the development of the worm brain and neuronal structure. We're hoping that by doing so, some of the lessons will translate all the way up to humans."

However, following neurons as they travel through the worm during its embryonic development is not as simple as it might seem. The first challenge was to create new microscopes that could record the embryogenesis of these worms without damaging them through too much light exposure while still getting the resolution needed to clearly see individual cells. Shroff and his team at NIBIB, in collaboration with Daniel Colon-Ramos at Yale University and Zhirong Bao at Sloan-Kettering, tackled this problem by developing new microscopes that improved the speed and resolution at which they could image worm embryonic development.



Brain’s Location-tracking Cells Use Transcendental Number System


Animals use specialized neurons in their brain known as grid cells to keep track of their physical location. The subject of the 2014 Nobel Prize in Physiology or Medicine, they earned that name because, when researchers monitored an individual grid cell in a moving rat, mapping the places where the neuron fired produced a regular triangular grid. However, the way grid cells encode and decode this information to produce a usable mental map is still a mystery. Individual grid cells don’t differentiate between the points on their corresponding grids; to the brain, each point looks like the same burst of electricity. How can the brain translate those signals into something that says, “you are here?”


Penn researchers now have a theory for how grid cells work together to allow a rat—or a person—to accomplish this task.


By digging into the fundamental mathematics of the grids they encode, Vijay Balasubramanian, the Cathy and Marc Lasry Professor in the Department of Physics and Astronomy, and grad student Xue-Xin Wei, have shown that grid cells form a kind of number system, with different-sized grids acting as the equivalent of the tens, hundreds, and thousands place in a decimal number.


They used this theory to make a key prediction—that different sized grids would be found in ratios based on the “transcendental” mathematical constant e—which has been borne out by evidence gleaned from earlier experiments.
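
The decimal-place analogy can be made concrete with a toy sketch. In this illustration (the scale values, the starting period of 120 cm, and the coarse-to-fine decoding scheme are all invented for exposition, not taken from the eLife paper), each grid module reports position only modulo its own period, and a coarse-to-fine pass recovers the absolute position much as reading off digits does:

```python
import math

def encode(x, scales):
    """Firing phase of each grid module: position modulo that module's period."""
    return [x % s for s in scales]

def decode(phases, scales):
    """Coarse-to-fine refinement, analogous to reading the digits of a number.

    Assumes scales are sorted from coarsest to finest and the true position
    lies in [0, scales[0]).
    """
    est = phases[0]
    for s, p in zip(scales[1:], phases[1:]):
        # Snap the running estimate to the nearest position consistent
        # with this module's phase.
        est = p + s * round((est - p) / s)
    return est

ratio = math.e                                  # the predicted scale ratio
scales = [120.0 / ratio**i for i in range(4)]   # hypothetical periods in cm
x = 42.7
assert abs(decode(encode(x, scales), scales) - x) < 1e-9
```

No single module can distinguish positions one period apart, but jointly the modules pin the position down, just as the tens digit disambiguates repeats of the ones digit.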


In collaboration with Jason Prentice of Princeton University, they published their theory in the journal eLife. Partly for his work on this subject, Wei received the 2015 Louis B. Flexner Award for Outstanding Thesis Work in Neurosciences.

Scooped by Dr. Stefan Gruenwald!

Long-term memory in Bonobos: Apes never forget an old friend

Researchers from University of St Andrews made the discovery after recording the calls of individual bonobos (pictured) and playing them to those they had known years before.


The team concluded that the primates are therefore capable of remembering the voice of a former group member, even after five years of separation.


Sumir Keenan, of the School of Psychology and Neuroscience at the University of St Andrews, said: 'Members of a bonobo community separate regularly into small groups for hours or even days and often use loud calls to communicate with one another. 


'Moreover, females leave their original community but may continue to interact with their old companions in subsequent meetings between communities. So, effective social navigation depends on the ability to recognise social partners past and present.


'It is fascinating to discover that this knowledge of familiar voices in the long term is another characteristic we share with our closest relatives.'


The researchers were able to use recordings of bonobos from zoos across Europe, taking advantage of the fact that some bonobos had experienced several zoos and had formed links past and present with members of their species in different places. Mimicking the events characteristic of the arrival of a new bonobo, the scientists played the recorded bonobo calls using carefully hidden speakers.

Scooped by Dr. Stefan Gruenwald!

In order to learn, we must forget: A neural pathway that erases memories


In order to remember, we must forget. Recent research shows that when your brain retrieves newly encoded information, it suppresses older related information so that it does not interfere with the process of recall. Now a team of European researchers has identified a neural pathway that induces forgetting by actively erasing memories. The findings could eventually lead to novel treatments for conditions such as post-traumatic stress disorder (PTSD).


We’ve known since the early 1950s that a brain structure called the hippocampus is critical for memory formation and retrieval, and subsequent work using modern techniques has revealed a great deal of information about the underlying cellular mechanisms. The hippocampus contains neural circuits that loop through three of its sub-regions – the dentate gyrus and the CA3 and CA1 areas – and it’s widely believed that memories form by the strengthening and weakening of synaptic connections within these circuits.

The dentate gyrus gives rise to so-called mossy fibres, which form the main ‘input’ to the hippocampus, relaying sensory information from an upstream region called the entorhinal cortex first to CA3 and then on to CA1. It’s thought that the CA3 region integrates the information to encode, store, and retrieve new memories before transferring them to the cerebral cortex for long-term storage. Exactly how each of these hippocampal sub-regions contributes to memory formation, storage, and retrieval is still not entirely clear, however.

Previously, Cornelius Gross of the European Molecular Biology Laboratory (EMBL) in Monterotondo, Italy, and his colleagues used genetic engineering to develop a new way of inhibiting the activity of specific cell types in the brain. When they used the technique to inhibit granule cells in the dentate gyrus of live mice, they found that the animals could not learn to avoid a part of their cage that gave them mild electric shocks.

Their latest study, led by Noelia Madroñal, combined this technique with several others to learn more about the function of the dentate gyrus. First, they trained some of their genetically engineered mice to associate mild electric shocks with particular sounds, and found that inhibiting the activity of dentate gyrus granule cells during, but not after, the learning procedure prevented them from learning the associations.
Scooped by Dr. Stefan Gruenwald!

Project to reverse-engineer the brain to make computers to think like humans


Three decades ago, the U.S. government launched the Human Genome Project, a 13-year endeavor to sequence and map all the genes of the human species. Although initially met with skepticism and even opposition, the project has since transformed the field of genetics and is today considered one of the most successful scientific enterprises in history.


Now the Intelligence Advanced Research Projects Activity (IARPA), a research organization for the intelligence community modeled after the defense department’s famed DARPA, has dedicated $100 million to a similarly ambitious project. The Machine Intelligence from Cortical Networks program, or MICrONS, aims to reverse-engineer one cubic millimeter of the brain, study the way it makes computations, and use those findings to better inform algorithms in machine learning and artificial intelligence. IARPA has recruited three teams, led by David Cox, a biologist and computer scientist at Harvard University, Tai Sing Lee, a computer scientist at Carnegie Mellon University, and Andreas Tolias, a neuroscientist at the Baylor College of Medicine. Each team has proposed its own five-year approach to the problem.


“It’s a substantial investment because we think it’s a critical challenge, and [it’ll have a] transformative impact for the intelligence community as well as the world more broadly,” says Jacob Vogelstein at IARPA, who manages the MICrONS program.


MICrONS, as a part of President Obama’s BRAIN Initiative, is an attempt to push forward the status quo in brain-inspired computing. A great deal of technology today already relies on a class of algorithms called artificial neural networks, which, as their name would suggest, are inspired by the architecture (or at least what we know about the architecture) of the brain. Thanks to significant increases in computing power and the availability of vast amounts of data on the Internet, Facebook can identify faces, Siri can recognize voices, cars can self-navigate, and computers can beat humans at games like chess. These algorithms, however, are still primitive, relying on a highly simplified process of analyzing information for patterns.

Based on models dating back to the 1980s, neural networks tend to perform poorly in cluttered environments, where the object the computer is trying to identify is hidden among a large number of objects, many of which are overlapping or ambiguous. These algorithms do not generalize well, either. Seeing one or two examples of a dog, for instance, does not teach the computer how to identify all dogs.


Humans, on the other hand, seem to overcome these challenges effortlessly. We can make out a friend in a crowd, focus on a familiar voice in a noisy setting, and deduce patterns in sounds or an image based on just one or a handful of examples. We are constantly learning to generalize without the need for any instructions. And so the MICrONS researchers have turned to the brain to find what these models are missing. “That’s the smoking gun,” Cox says.


While neural networks retain elements of the architecture found in the brain, the computations they use are not copied directly from any algorithms that neurons use to process information. In other words, the ways in which current algorithms represent, transform, and learn from data are engineering solutions, determined largely by trial and error. They work, but scientists do not really know why—certainly not well enough to define a way to design a neural network. Whether this neural processing is similar to or different from corresponding operations in the brain remains unknown. “So if we go one level deeper and take information from the brain at the computational level and not just the architectural level, we can enhance those algorithms and get them closer to brain-like performance,” Vogelstein says.

Scooped by Dr. Stefan Gruenwald!

Implantable ‘stentrode’ allows paralyzed patients to mind-control an exoskeleton


A DARPA-funded research team has created a novel minimally invasive brain-machine interface and recording device that can be implanted into the brain through blood vessels, reducing the need for invasive surgery and the risks associated with breaching the blood-brain barrier when treating patients for physical disabilities and neurological disorders.

The new technology, developed by University of Melbourne medical researchers under DARPA’s Reliable Neural-Interface Technology (RE-NET) program, promises to give people with spinal cord injuries new hope to walk again.

The brain-machine interface consists of a stent-based electrode (stentrode), which is implanted within a blood vessel next to the brain, and records the type of neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton or to control bionic limbs.

The new device is the size of a small paperclip and will be implanted in the first in-human trial at The Royal Melbourne Hospital in 2017. The research results, published Monday Feb. 8 in Nature Biotechnology, show the device is capable of recording high-quality signals emitted from the brain’s motor cortex without the need for open brain surgery.

“We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high risk open brain surgery,” said Thomas Oxley, principal author and neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neurosciences and the University of Melbourne.

Stroke and spinal cord injuries are leading causes of disability, affecting 1 in 50 people. There are 20,000 Australians with spinal cord injuries, the typical patient being a 19-year-old male, and about 150,000 Australians left severely disabled after stroke.

Rescooped by Dr. Stefan Gruenwald from Fragments of Science!

Can We Decipher the Language of the Brain?


Understanding how brains work is one of the greatest scientific challenges of our times, but despite the impression sometimes given in the popular press, researchers are still a long way from some basic levels of understanding. A project recently funded by the Obama administration's BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative is one of several approaches promising to deliver novel insights by developing new tools that involves a marriage of nanotechnology and optics.

There are close to 100 billion neurons in the human brain. Researchers know a lot about how these individual cells behave, primarily through “electrophysiology,” which involves sticking fine electrodes into cells to record their electrical activity. We also know a fair amount about the gross organization of the brain into partially specialized anatomical regions, thanks to whole-brain imaging technologies like functional magnetic resonance imaging (fMRI), which measure how blood oxygen levels change as regions that work harder demand more oxygen to fuel metabolism. We know little, however, about how the brain is organized into distributed “circuits” that underlie faculties like memory or perception. And we know even less about how, or even if, cells are arranged into “local processors” that might act as components in such networks.

We also lack knowledge regarding the “code” large numbers of cells use to communicate and interact. This is crucial, because mental phenomena likely emerge from the simultaneous activity of many thousands, or millions, of interacting neurons. In other words, neuroscientists have yet to decipher the “language” of the brain. “The first phase is learning what the brain's natural language is. If your resolution [in a hypothetical language detector] is too coarse, so you're averaging over paragraphs, or chapters, you can't hear individual words or discern letters,” says physicist Michael Roukes of the California Institute of Technology, one of the authors of the “Brain Activity Map” (BAM) paper published in 2012 in Neuron that inspired the BRAIN Initiative. “Once we have that, we could talk to the brain in complete sentences.”

This is the gap BRAIN aims to address. Launched in 2014 with an initial pot of more than $100 million, the idea is to encourage the development of new technologies for interacting with massively greater numbers of neurons than has previously been possible. The hope is that once researchers understand how the brain works (with cellular detail but across the whole brain) they'll have a better understanding of neurodegenerative diseases like Alzheimer's, and of psychiatric disorders like schizophrenia or depression.

Today’s state-of-the-art technology in the field is optical imaging, mainly using calcium indicators—fluorescent proteins introduced into cells via genetic tweaks, which emit light in response to the calcium level changes caused by neurons firing. These signals are recorded using special microscopes that produce light, as the indicators need to absorb photons in order to then emit these light particles. This can be combined with optogenetics, a technique that genetically modifies cells so they can be activated using light, allowing researchers to both observe and control neural activity.

Some incredible advances have already been made using these tools. For example, researchers at the Howard Hughes Medical Institute’s Janelia Farm Research Campus, led by Misha Ahrens, published a study in 2013 in Nature Methods in which they recorded activity from almost all of the neurons of zebra fish larvae brains. Zebra fish larvae are used because they are easily genetically tweaked, small and, crucially, transparent. The researchers refined a technique called light-sheet microscopy, which uses lasers to produce planes of light that illuminate the brain one cross-section at a time. The fish were genetically engineered with calcium indicators so the researchers were able to generate two-dimensional pictures of neural activity, which they then stacked into three-dimensional images, capturing 90 percent of the activity of the zebra fish’s 100,000 brain cells.
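
In software terms, the reconstruction step the researchers describe amounts to stacking the per-plane images along a new depth axis to form a volume. A minimal sketch with NumPy (the plane count and pixel dimensions below are invented for illustration, not the study's actual acquisition parameters):

```python
import numpy as np

# Hypothetical acquisition: 40 optical sections, each a 256x256-pixel
# fluorescence image of calcium-indicator activity in one light-sheet plane.
rng = np.random.default_rng(0)
planes = [rng.random((256, 256)) for _ in range(40)]

# Stack the 2-D cross-sections into a single 3-D image of the brain.
volume = np.stack(planes, axis=0)
assert volume.shape == (40, 256, 256)
```

Repeating this for every time point yields a 4-D record (time × depth × height × width) from which per-neuron activity traces can be extracted.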

Via Mariaschnee
Scooped by Dr. Stefan Gruenwald!

Memory capacity of brain is 10 times more than previously thought


Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient, and could help engineers build computers that are incredibly powerful but also conserve energy.

“This is a real bombshell in the field of neuroscience,” says Terry Sejnowski, Salk professor and co-senior author of the paper, which was published in eLife. “We discovered the key to unlocking the design principle for how hippocampal neurons function with low energy but high computation power. Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte (1 quadrillion or 10^15 bytes), in the same ballpark as the World Wide Web.”

“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”

The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.

At first, the researchers didn’t think much of this duplication, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.

“We were amazed to find that the difference in the sizes of the pairs of synapses was very small, on average only about eight percent different in size. No one thought it would be such a small difference. This was a curveball from nature,” says Bartol. Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.

It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small. But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few.

“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short- and long-term memory storage in the hippocampus.
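
The conversion from distinguishable size categories to bits is just a base-2 logarithm; a quick sanity check of the figures quoted above (category counts taken from the article):

```python
import math

n_sizes = 26                       # distinguishable synaptic size categories
bits = math.log2(n_sizes)          # information per synapse, in bits
assert round(bits, 1) == 4.7       # matches the ~4.7 bits quoted above

# The earlier "small, medium, large" classification (3 categories) yields
# only log2(3) ~ 1.6 bits, consistent with the old one-to-two-bit estimate.
assert 1.0 < math.log2(3) < 2.0
```

The same identity (bits = log2 of the number of distinguishable states) underlies the jump from "26 sizes" to "10 times more capacity" in the petabyte estimate.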

Scooped by Dr. Stefan Gruenwald!

UCSD spinoffs create lab-quality portable 64-channel BCI headset


The first dry-electrode, portable 64-channel wearable brain-computer interface (BCI) has been developed by bioengineers and cognitive scientists associated with the UCSD Jacobs School. The system is comparable to state-of-the-art equipment found in research laboratories, but with portability, allowing for tracking brain states throughout the day and augmenting the brain’s capabilities, the researchers say. Current BCI devices require gel-based electrodes or offer fewer than 64 channels.

The dry EEG sensors are easier to apply than wet sensors, while still providing high-density/low-noise brain activity data, according to the researchers. The headset includes a Bluetooth transmitter, eliminating the usual array of wires. The system also includes a sophisticated software suite for data interpretation and analysis for applications including research, neuro-feedback, and clinical diagnostics.

“This is going to take neuroimaging to the next level by deploying on a much larger scale,” including use in homes and even while driving, said Mike Yu Chi, a Jacobs School alumnus and CTO of Cognionics who led the team that developed the headset.

The researchers also envision a future when neuroimaging can be used to bring about new therapies for neurological disorders. “We will be able to prompt the brain to fix its own problems,” said Gert Cauwenberghs, a bioengineering professor at the Jacobs School and a principal investigator on a National Science Foundation grant. “We are trying to get away from invasive technologies, such as deep brain stimulation and prescription medications, and instead start up a repair process by using the brain’s synaptic plasticity.”

“In 10 years, using a brain-machine interface might become as natural as using your smartphone is today,” said Tim Mullen, a UC San Diego alumnus, lead author on the study and a former researcher at the Swartz Center for Computational Neuroscience at UC San Diego.

Scooped by Dr. Stefan Gruenwald!

Researchers find a way to decipher words in the mind before they are even spoken


A team of researchers working at the Kyushu Institute of Technology in Japan, led by Yamazaki Toshimasa, has, according to the Japanese newspaper Nishinippon, found a way to read certain brain waves and match them against a database, allowing recognition of words before a person speaks them. The newspaper reported that the team also presented a paper describing their findings at a recent conference organized by the Institute of Electronics, Information and Communication Engineers.

Over the past half-century, scientists have tried many approaches to reading the mind, whether for good or nefarious purposes—thus far, none has led to success, though some have claimed progress. In this latest bit of news, the team in Japan enlisted 12 volunteers of various ages, asking each to undergo EEG scans while they thought about words and then spoke them out loud. The initial stage built up a database of brain patterns; later, as each person recited words, the researchers read their brain waves to see if they could identify the words they were about to speak by comparing current brain-wave patterns with those in the database.

Similar research has been done before, but this time the researchers looked specifically at brain waves emanating from the Broca area—the part of the brain where words are formed before being sent to the other parts of the brain that actually produce speech. By limiting the vocabulary, the researchers found they could correctly identify the word a person was about to speak (up to 2 seconds beforehand) approximately 25 percent of the time. They also found they could raise that figure to nearly 90 percent if they focused instead on individual (Japanese) characters, or syllables.
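
At heart, the matching step described is a nearest-template search against the database. A minimal sketch, with an invented similarity measure and made-up word templates (not the team's actual method or signal representation):

```python
def similarity(a, b):
    """Plain dot product as a crude similarity measure between two signals."""
    return sum(x * y for x, y in zip(a, b))

def predict(signal, database):
    """Return the word whose stored template best matches the incoming signal."""
    return max(database, key=lambda word: similarity(signal, database[word]))

# Hypothetical templates built during the training phase, one per word.
database = {
    "haru":  [0.9, 0.1, 0.3],
    "natsu": [0.2, 0.8, 0.5],
}
assert predict([0.85, 0.15, 0.3], database) == "haru"
```

Restricting the database to syllables rather than whole words shrinks the set of candidate templates, which is one plausible reading of why per-character accuracy was so much higher.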

The paper noted that the research benefited from using Japanese-speaking volunteers, because every character in that language has a vowel in it; the absence of this property in many Western languages, it has been noted, has made things more difficult for researchers in the field working with volunteers who speak English, for example.

Toshimasa and his team are reportedly optimistic about improving both their accuracy and the number of words they can recognize as their research continues, perhaps going so far as to develop a system capable of deciphering the entirety of the Japanese language—making it possible, for example, for people in a coma to speak, or for conversations to take place in environments where sound cannot travel, such as outer space.

Scooped by Dr. Stefan Gruenwald!

Brain scans show compulsive gamers have hyperconnected neural networks


Brain scans from nearly 200 adolescent boys provide evidence that the brains of compulsive video game players are wired differently. Chronic video game play is associated with hyperconnectivity between several pairs of brain networks. Some of the changes are predicted to help game players respond to new information.

Other changes are associated with distractibility and poor impulse control. The research, a collaboration between the University of Utah School of Medicine, and Chung-Ang University in South Korea, was published online in Addiction Biology on Dec. 22, 2015.

“Most of the differences we see could be considered beneficial. However, the good changes could be inseparable from problems that come with them,” says senior author Jeffrey Anderson, M.D., Ph.D., associate professor of neuroradiology at the University of Utah School of Medicine.

Those with Internet gaming disorder are obsessed with video games, often to the extent that they give up eating and sleeping to play. This study reports that in adolescent boys with the disorder, certain brain networks that process vision or hearing are more likely to have enhanced coordination to the so-called salience network.

The job of the salience network is to focus attention on important events, poising that person to take action. In a video game, enhanced coordination could help a gamer to react more quickly to the rush of an oncoming fighter. And in life, to a ball darting in front of a car, or an unfamiliar voice in a crowded room.

Flurries Unlimited's curator insight, January 4, 2016 11:44 AM

See, they really do think differently..  #gamers

Rescooped by Dr. Stefan Gruenwald from Limitless learning Universe!

Neuroscientists Can Now Predict Intelligence From Brain Activity


Humans have a love/hate relationship with the cliques, clades, and classes that compartmentalize their world. That tension forms the backbone of so much dystopian sci-fi: The protagonist of Divergent is special because she doesn’t fit into her society’s rigid castes of personality traits; Minority Report is all about the follies of judging people before they act. These stories are fun to think about in part because they’re fiction, not fact. But now that neuroscientists have used maps of people’s brains to accurately predict intelligence, reality creeps ever so much closer to fiction.

By intelligence, in this case, the scientists mean abstract reasoning ability, which they inferred by mapping and analyzing the connections within people’s brains. But the study, published today in Nature Neuroscience, is compelling because it gets at a fundamental and very uncomfortable truth: Some brains are better than others at certain things, simply because of the way they’re wired. And now, scientists are closer to being able to determine precisely which brains those are, and how they got that way.

Via CineversityTV
Scooped by Dr. Stefan Gruenwald!

Pulsed laser light turns whole-brain activity on and off


By flashing high-frequency (40 to 100 pulses per second) optogenetic lasers at the brain’s thalamus, scientists were able to wake up sleeping rats and cause widespread brain activity. In contrast, flashing the laser at 10 pulses per second suppressed the activity of the brain’s sensory cortex and caused rats to enter a seizure-like state of unconsciousness.

“We hope to use this knowledge to develop better treatments for brain injuries and other neurological disorders,” said Jin Hyung Lee, Ph.D., assistant professor of neurology, neurosurgery, and bioengineering at Stanford University, and a senior author of the study, published in the open-access journal eLife.

Located deep inside the brain, the thalamus regulates arousal, acting as a relay station to the cortex for neural signals from the body. Damage to neurons in the central part of the thalamus may lead to problems with sleep, attention, and memory.

The observations used a combination of optogenetics and whole-brain functional MRI (fMRI) — known as “ofMRI” — to detect overall effects on the brain, along with EEG and single-unit cell recordings. The researchers noted in the paper that “using targeted, temporally precise optogenetic stimulation in the current study allowed us to selectively excite a single group of neuronal elements and identify their specific role in creating distinct modes of network function.” That could not be achieved with conventional electrode stimulation, the researchers say.

They explain that this method may allow direct-brain stimulation (DBS) therapeutic methods to be optimized in the clinic “for a wide range of neurological disorders that currently lack such treatment.” “This study takes a big step towards understanding the brain circuitry that controls sleep and arousal,” said Yejun (Janet) He, Ph.D., program director at NIH’s National Institute of Neurological Disorders and Stroke (NINDS), which partially funded the study.
