Amazing Science
Scooped by Dr. Stefan Gruenwald

Neuroscientists find the brain can identify images seen for as little as 13 milliseconds


Imagine seeing a dozen pictures flash by in a fraction of a second. You might think it would be impossible to identify any images you see for such a short time. However, a team of neuroscientists from MIT has found that the human brain can process entire images that the eye sees for as little as 13 milliseconds — the first evidence of such rapid processing speed.

That speed is far faster than the 100 milliseconds suggested by previous studies. In the new study, which appears in the journal Attention, Perception, and Psychophysics, researchers asked subjects to look for a particular type of image, such as “picnic” or “smiling couple,” as they viewed a series of six or 12 images, each presented for between 13 and 80 milliseconds.

“The fact that you can do that at these high speeds indicates to us that what vision does is find concepts. That’s what the brain is doing all day long — trying to understand what we’re looking at,” says Mary Potter, an MIT professor of brain and cognitive sciences and senior author of the study.


This rapid-fire processing may help direct the eyes, which shift their gaze three times per second, to their next target, Potter says. “The job of the eyes is not only to get the information into the brain, but to allow the brain to think about it rapidly enough to know what you should look at next. So in general we’re calibrating our eyes so they move around just as often as possible consistent with understanding what we’re seeing,” she says.


After visual input hits the retina, the information flows into the brain, where information such as shape, color, and orientation is processed. In previous studies, Potter has shown that the human brain can correctly identify images seen for as little as 100 milliseconds. In the new study, she and her colleagues decided to gradually increase the speeds until they reached a point where subjects’ answers were no better than if they were guessing. All images were new to the viewers.

The researchers expected they might see a dramatic decline in performance around 50 milliseconds, because other studies have suggested that it takes at least 50 milliseconds for visual information to flow from the retina to the “top” of the visual processing chain in the brain and then back down again for further processing by so-called “re-entrant loops.” These processing loops were believed necessary to confirm identification of a particular scene or object.

However, the MIT team found that although overall performance declined, subjects continued to perform better than chance as the researchers dropped the image exposure time from 80 milliseconds to 53 milliseconds, then 40 milliseconds, then 27, and finally 13 — the fastest possible rate with the computer monitor being used.
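The statistics behind "better than chance" aren't spelled out here, but the logic can be sketched with an exact binomial test, assuming a two-alternative (target present/absent) judgment with 50% chance performance. The trial counts below are invented for illustration, not taken from the paper.

```python
from math import comb

def binomial_p_above_chance(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: the probability of getting at least
    `correct` answers out of `trials` by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical subject: 60 correct out of 100 trials at a 13 ms exposure
p = binomial_p_above_chance(60, 100)
print(f"p = {p:.4f}")  # a small p means the score is unlikely under pure guessing
```

A small p-value at a given exposure time would indicate above-chance detection at that speed; the paper's actual analysis may of course differ from this sketch.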


Contextual Computing: Our Sixth, Seventh And Eighth Senses


As mobile computing becomes increasingly pervasive, so do our expectations of the devices we use and interact with in our everyday lives. Looking at the advances in computing technology in 2013, one idea stands out: context-based computing.


First, and unsurprisingly, the desktop is no longer the center of the computing experience. Instead, a variety of Internet-connected peripheral devices are increasingly central to our daily lives. These devices range from the wearable to the embedded, yet regardless of the form they take, they have begun to augment how we as humans interact with both the virtual and physical worlds around us. Thanks to recent advances, devices will soon be able to see and perceive the world as humans do, providing a kind of contextual sixth sense. Yes, computers that are contextually aware.


Context awareness did not originate in computer science: the word “context” stems from the study of human “text,” and the idea of “situated cognition” (that context changes the interpretation of text) goes back many thousands of years. In computing, contextual awareness was first described by Georgia Tech researchers Anind Dey and Gregory Abowd more than a decade ago. The idea is that computers can both sense and react to their environment, in much the same way our brains interpret various stimuli. Context-aware devices are given information about the circumstances under which they operate and, based on rules or an intelligent stimulus, react accordingly.


Although we’re in the earliest days, the future generation of connected things will become smarter, may anticipate our needs, and share our perception of the world so we can interact with them more naturally.  In a recent article published on fastcodesign, Pete Mortensen, a senior strategist at Jump Associates, described contextual computing as “our Sixth, Seventh And Eighth Senses.”


Picture Gallery of the Human Connectome Project


Navigate the brain in a way that was never before possible; fly through major brain pathways, compare essential circuits, zoom into a region to explore the cells that comprise it, and the functions that depend on it.


The Human Connectome Project aims to provide an unparalleled compilation of neural data, an interface to graphically navigate this data, and the opportunity to reach conclusions about the living human brain that were never before possible.


Anne Fleischman's curator insight, January 9, 2014 9:16 AM

A feast for the eyes #bigdata


Human-Specific Genes: How a Gene Duplication Helped our Brains Become ‘Human’


What genetic changes account for the vast behavioral differences between humans and other primates? Researchers so far have catalogued only a few, but now it seems that they can add a big one to the list. A team led by scientists at The Scripps Research Institute has shown that an extra copy of a brain-development gene, which appeared in our ancestors’ genomes about 2.4 million years ago, allowed maturing neurons to migrate farther and develop more connections.


Surprisingly, the added copy doesn’t augment the function of the original gene, SRGAP2, which makes neurons sprout connections to neighboring cells. Instead it interferes with that original function, effectively giving neurons more time to wire themselves into a bigger brain.


“This appears to be a major example of a genomic innovation that contributed to human evolution,” said Franck Polleux, a professor at The Scripps Research Institute. “The finding that a duplicated gene can interact with the original copy also suggests a new way to think about how evolution occurs and might give us clues to human-specific developmental disorders such as autism and schizophrenia.”


Polleux is the senior author of the new report, which was published online ahead of print on May 3, 2012 by the journal Cell. The same issue features a related paper on SRGAP2’s recent evolution by the laboratory of Evan E. Eichler at the University of Washington, Seattle.


Polleux specializes in the study of human brain development, and several years ago his lab began researching the function of the newly discovered SRGAP2. He and his colleagues found that in mice, the gene’s protein product plays a key role during brain development: it deforms the membranes of young neurons outward, forcing the growth of root-like appendages called filopodia. As young neurons sprout these filopodia, they migrate more slowly through the expanding brain; eventually they reach their final position, where they form connections. Most excitatory connections made on pyramidal neurons in the cortex are formed on spines, microscopic protrusions from the dendrite that play a critical role in integrating synaptic signals from other neurons.


Shortly after beginning the project, Polleux learned from other labs’ work that SRGAP2 was among the few genes (approximately 30) that had been duplicated in the human genome less than six million years ago after separation from other apes. “These evolutionarily recent gene duplications are so nearly identical to the original genes that they aren’t detectable by traditional genome sequencing methods,” said Polleux. “Only in the last five years have scientists developed methods to reliably map these hominid-specific duplications.”



MIT: How human language could have evolved from birdsong

Linguistics and biology researchers propose a new theory on the deep roots of human speech.


“The sounds uttered by birds offer in several respects the nearest analogy to language,” Charles Darwin wrote in “The Descent of Man” (1871), while contemplating how humans learned to speak. Language, he speculated, might have had its origins in singing, which “might have given rise to words expressive of various complex emotions.” 

Now researchers from MIT, along with a scholar from the University of Tokyo, say that Darwin was on the right path. The balance of evidence, they believe, suggests that human language is a grafting of two communication forms found elsewhere in the animal kingdom: first, the elaborate songs of birds, and second, the more utilitarian, information-bearing types of expression seen in a diversity of other animals.

“It’s this adventitious combination that triggered human language,” says Shigeru Miyagawa, a professor of linguistics in MIT’s Department of Linguistics and Philosophy and co-author of a new paper published in the journal Frontiers in Psychology.

The idea builds upon Miyagawa’s conclusion, detailed in his previous work, that there are two “layers” in all human languages: an “expression” layer, which involves the changeable organization of sentences, and a “lexical” layer, which relates to the core content of a sentence. His conclusion is based on earlier work by linguists including Noam Chomsky, Kenneth Hale and Samuel Jay Keyser.

Based on an analysis of animal communication, and using Miyagawa’s framework, the authors say that birdsong closely resembles the expression layer of human sentences — whereas the communicative waggles of bees, or the short, audible messages of primates, are more like the lexical layer. At some point, between 50,000 and 80,000 years ago, humans may have merged these two types of expression into a uniquely sophisticated form of language.


UC Berkeley Researchers Propose "Neural Dust" Brain-Computer Interface


Advances in brain imaging and neural activity detection technologies, such as fMRI and EEG, have allowed us to learn much about the brain over the years, and neural implants have offered the ability to stimulate and all but control activity in certain parts of the brain. However, these brain-computer interfaces are limited in that they offer finite resolution, are hard to apply to many brain regions, and usually can only stay directly connected to the brain for a short period of time due to their invasiveness.


Engineers at the University of California, Berkeley, have proposed an ultra-small, ultrasound-based neural recording system that they call “neural dust”. Neural dust consists of thousands of sensors that are 10-100 micrometers in size containing CMOS circuits and sensors to detect and report local extracellular electrophysiological data. The neural dust is powered by ultrasonic waves via a transducer that is implanted just below the dura. The sub-dural unit also interrogates the neural dust and sends information to another receiver outside the body.


If neural dust becomes a reality, it could give us a much higher resolution look at what is going on inside the brain, as it will be able to record from thousands of sites within the brain, in contrast to the hundreds of channels that current technology allows. Moreover, because these tiny sensors are literally the size of dust particles, they could cause far less damage to the surrounding brain tissue and could stay embedded in the brain for long periods of time.


Journal article (arXiv): Neural Dust: An Ultrasonic, Low Power Solution for Chronic Brain-Machine Interfaces


Scientists Reconstruct Brains' Visions Into Digital Video In Historic Experiment


UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, this process will allow you to record and reconstruct your own dreams on a computer screen.

 

I just can't believe this is happening for real, but according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—"this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds."

 

Indeed, it's mindblowing. I'm simultaneously excited and terrified. This is how it works: the researchers used three different subjects for the experiments (incidentally, the subjects were part of the research team, because the work requires being inside a functional magnetic resonance imaging system for hours at a time). The subjects were exposed to two different groups of Hollywood movie trailers as the fMRI system recorded blood flow through the visual cortex of their brains.

 

The readings were fed into a computer program that divided them into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information from the movies to specific brain actions. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity.

 

After recording this information, another group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of potential brain activity for each clip. From all these videos, the software picked the one hundred clips whose predicted brain activity was most similar to the activity evoked by the clips the subject actually watched, and combined them into one final movie. Although the resulting video is low resolution and blurry, it clearly matched the actual clips watched by the subjects.

 

Think about those 18 million seconds of random videos as a painter's color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he's seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares it to the brain reactions to the 18-million-second palette, and picks what most closely matches those brain reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image.
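The palette analogy can be sketched in code: rank every library clip by how well its predicted voxel activity correlates with the observed activity, then keep the best matches for averaging. This is an illustrative toy, not the authors' actual algorithm; the clip names and activity values below are all invented.

```python
from math import sqrt

def correlation(a, b):
    """Pearson correlation between two equal-length activity vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def top_matching_clips(observed, predicted_by_clip, k=100):
    """Rank library clips by how well their *predicted* voxel activity
    matches the observed activity; return the k best clip ids."""
    scored = sorted(predicted_by_clip.items(),
                    key=lambda kv: correlation(observed, kv[1]),
                    reverse=True)
    return [clip_id for clip_id, _ in scored[:k]]

# Toy example with 3 "voxels" and a 4-clip library (all values invented)
observed = [0.9, 0.1, 0.4]
library = {"clip_a": [0.8, 0.2, 0.5],
           "clip_b": [0.1, 0.9, 0.2],
           "clip_c": [0.85, 0.15, 0.45],
           "clip_d": [0.2, 0.4, 0.35]}
print(top_matching_clips(observed, library, k=2))  # clip_c and clip_a rank highest
```

In the real study the "averaging" step then blends the frames of the top-ranked clips; here only the ranking step is shown.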

 

Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain. Right now, the resulting quality is not good, but the potential is enormous. Lead research author — and one of the lab test bunnies — Shinji Nishimoto thinks this is the first step to tap directly into what our brain sees and imagines: "Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences".


The brain recorders of the future - Imagine that. Capturing your visual memories, your dreams, the wild ramblings of your imagination into a video that you and others can watch with your own eyes.


This is the first time in history that we have been able to decode brain activity and reconstruct motion pictures on a computer screen. The path that this research opens boggles the mind. It reminds me of Brainstorm, the cult movie in which a group of scientists led by Christopher Walken develops a machine capable of recording the five senses of a human being and playing them back into the brain itself.

 

This new development brings us closer to that goal which, I have no doubt, will happen at one point. Given the exponential increase in computing power and our understanding of human biology, I think this will arrive sooner than most mortals expect. Perhaps one day you would be able to go to sleep wearing a flexible band labeled Sony Dreamcam around your skull. 


WIRED: A neuroscientist's radical theory of how highly complex networks become conscious

It’s a question that’s perplexed philosophers for centuries and scientists for decades: where does consciousness come from?

 

Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer. According to Koch, consciousness arises within any sufficiently complex, information-processing system. All animals, from humans on down to earthworms, are conscious; even the internet could be. That's just the way the universe works.


What Koch proposes is a scientifically refined version of an ancient philosophical doctrine called panpsychism -- and, coming from someone else, it might sound more like spirituality than science. But Koch has devoted the last three decades to studying the neurological basis of consciousness. His work at the Allen Institute now puts him at the forefront of the BRAIN Initiative, the massive new effort to understand how brains work, which will begin next year.

 

Koch's insights have been detailed in dozens of scientific articles and a series of books, including last year's Consciousness: Confessions of a Romantic Reductionist. Wired talked to Koch about his understanding of this age-old question.



Neuroengineering - Engineering Memories - The Future is Now

Dr. Theodore Berger's research is currently focused primarily on the hippocampus, a neural system essential for learning and memory functions.


Theodore Berger leads a multi-disciplinary collaboration with Drs. Marmarelis, Song, Granacki, Heck, and Liu at the University of Southern California, Dr. Cheung at City University of Hong Kong, Drs. Hampson and Deadwyler at Wake Forest University, and Dr. Gerhardt at the University of Kentucky, that is developing a microchip-based neural prosthesis for the hippocampus, a region of the brain responsible for long-term memory. Damage to the hippocampus is frequently associated with epilepsy, stroke, and dementia (Alzheimer's disease), and is considered to underlie the memory deficits characteristic of these neurological conditions.


The essential goals of Dr. Berger's multi-laboratory effort include: (1) experimental study of neuron and neural network function during memory formation -- how does the hippocampus encode information?, (2) formulation of biologically realistic models of neural system dynamics -- can that encoding process be described mathematically to realize a predictive model of how the hippocampus responds to any event?, (3) microchip implementation of neural system models -- can the mathematical model be realized as a set of electronic circuits to achieve parallel processing, rapid computational speed, and miniaturization?, and (4) creation of conformal neuron-electrode interfaces -- can cytoarchitectonic-appropriate multi-electrode arrays be created to optimize bi-directional communication with the brain? By integrating solutions to these component problems, the team is realizing a biomimetic model of hippocampal nonlinear dynamics that can perform the same function as part of the hippocampus.
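Berger's actual models are multi-input, multi-output nonlinear (Volterra-type) models fitted to real hippocampal recordings. As a much-simplified illustration of goal (2), predicting output activity from input history, here is a toy single-input model that convolves an input spike train with a temporal kernel and thresholds a sigmoid of the result; the kernel weights and spike train are invented.

```python
from math import exp

def predict_output(spikes, kernel, threshold=0.5):
    """Toy single-input, single-output predictive model: convolve an
    input spike train with a temporal kernel, pass the drive through a
    sigmoid nonlinearity, and threshold to get predicted output spikes.
    (A stand-in for the far richer MIMO Volterra models in the research.)"""
    out = []
    for t in range(len(spikes)):
        drive = sum(kernel[d] * spikes[t - d]
                    for d in range(len(kernel)) if t - d >= 0)
        p = 1.0 / (1.0 + exp(-(drive - 1.0)))  # sigmoid centered on a bias of 1.0
        out.append(1 if p > threshold else 0)
    return out

spikes = [1, 0, 1, 1, 0, 0, 1, 0]
kernel = [1.2, 0.6, 0.3]  # invented weights: most recent input counts most
print(predict_output(spikes, kernel))
```

The prosthesis idea is that a fitted model of this general kind, realized in a microchip, could stand in for a damaged stretch of hippocampal circuitry by producing the outputs that circuitry would have produced.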


Memories may be passed down through generations in DNA, a process underlying cause of phobias


In a recent study, published in the journal Nature Neuroscience, researchers trained mice to fear the smell of cherry blossom using electric shocks before allowing them to breed. The offspring showed fearful responses to the odor of cherry blossom compared to a neutral odor, despite never having encountered it before. The following generation also showed the same behavior. The effect persisted even if the mice had been fathered through artificial insemination.

 

The researchers found the brains of the trained mice and their offspring showed structural changes in areas used to detect the odor. The DNA of the animals also carried chemical changes, known as epigenetic methylation, on the gene responsible for detecting the odor. This suggests that experiences are somehow transferred from the brain into the genome, allowing them to be passed on to later generations.

 

The researchers now hope to carry out further work to understand how the information comes to be stored on the DNA in the first place.

They also want to explore whether similar effects can be seen in the genes of humans.

 

Prof. Marcus Pembrey, a paediatric geneticist at University College London, said the work provided "compelling evidence" for the biological transmission of memory. He added: "It addresses constitutional fearfulness that is highly relevant to phobias, anxiety and post-traumatic stress disorders, plus the controversial subject of transmission of the ‘memory’ of ancestral experience down the generations.

 

"It is high time public health researchers took human transgenerational responses seriously. "I suspect we will not understand the rise in neuropsychiatric disorders or obesity, diabetes and metabolic disruptions generally without taking a multigenerational approach.”

 

Prof. Wolf Reik, head of epigenetics at the Babraham Institute in Cambridge, said, however, further work was needed before such results could be applied to humans. He said: "These types of results are encouraging as they suggest that transgenerational inheritance exists and is mediated by epigenetics, but more careful mechanistic study of animal models is needed before extrapolating such findings to humans.”

 

Another study in mice has shown that their ability to remember can be affected by the presence of immune system factors in their mother's milk. Dr. Miklos Toth, from Cornell University in New York, found that chemokines carried in a mother's milk caused changes in the brains of their offspring, affecting their memory in later life.


People with highly superior memory powers of recall are also vulnerable to false memories

People who can accurately remember details of their daily lives going back decades are as susceptible as everyone else to forming fake memories, psychologists and neurobiologists have found.

 

Persons with highly superior autobiographical memory (HSAM, also known as hyperthymesia) -- which was first identified in 2006 by scientists at UC Irvine's Center for the Neurobiology of Learning & Memory -- have the astounding ability to remember even trivial details from their distant past. This includes recalling daily activities of their life since mid-childhood with almost 100 percent accuracy.

 

Patihis, the lead researcher on the study, believes it's the first effort to test malleable reconstructive memory in HSAM individuals. Working with neurobiology and behavior graduate student Aurora LePort, Patihis asked 20 people with superior memory and 38 people with average memory to do word-association exercises, recall details of photographs depicting a crime, and discuss their recollections of video footage of the United Flight 93 crash on 9/11. (Such footage does not exist.) These tasks incorporated misinformation in an attempt to manipulate what the subjects thought they had remembered.

 

"While they really do have super-autobiographical memory, it can be as malleable as anybody else's, depending on whether misinformation was introduced and how it was processed," Patihis said. "It's a fascinating paradox. In the absence of misinformation, they have what appears to be almost perfect, detailed autobiographical memory, but they are vulnerable to distortions, as anyone else is."

 

He noted that there are still many mysteries about people with highly superior autobiographical memory that need further investigation. LePort, for instance, is studying forgetting curves (which involve how many autobiographical details people can remember from one day ago, one week ago, one month ago, etc., and how the number of details decreases over time) in both HSAM and control participants and will employ functional MRI to better understand the phenomenon.

 

"What I love about the study is how it communicates something that memory distortion researchers have suspected for some time: that perhaps no one is immune to memory distortion," Patihis said. "It will probably make some nonexperts realize, finally, that if even memory prodigies are susceptible, then they probably are too. This teachable moment is almost as important as the scientific merit of the study. It could help educate people -- including those who deal with memory evidence, such as clinical psychologists and legal professionals -- about false memories."


No such thing as ‘right-brained’ or ‘left-brained,’ new research finds

Individual differences don’t favor one brain hemisphere or the other.

 

The terms "left-brained" and "right-brained" have come to refer to personality types in popular culture, with an assumption that people who use the right side of their brains more are more creative, thoughtful and subjective, while those who tap the left side more are more logical, detail-oriented and analytical.

 

But there's no evidence for this, suggest findings from a two-year study led by University of Utah neuroscientists who conducted analyses of brain imaging (PLOS One, Aug. 14, 2013).

 

The researchers analyzed resting brain scans of 1,011 people ages 7 to 29, measuring their functional lateralization — the specific mental processes taking place in each side of the brain. Turns out, individual differences don't favor one hemisphere or the other, says lead author Jeff Anderson, MD, PhD.

 

"It's absolutely true that some brain functions occur in one or the other side of the brain," Anderson says. "Language tends to be on the left, attention more on the right. But people don't tend to have a stronger left- or right-sided brain network."

Belinda Suvaal's curator insight, November 12, 2013 1:18 PM

Sounds absolutely logical. I always thought it was an overblown issue and always said: I'm both, depending on what's needed.

So that's completely normal!


Brain decoding: Reading minds


By scanning blobs of brain activity, scientists may be able to decode people's thoughts, their dreams and even their intentions. Media reports have suggested that such techniques bring mind-reading “from the realms of fantasy to fact”, and “could influence the way we do just about everything”. The Economist in London even cautioned its readers to “be afraid”, and speculated on how long it will be until scientists promise telepathy through brain scans.

 

Although companies are starting to pursue brain decoding for a few applications, such as market research and lie detection, scientists are far more interested in using this process to learn about the brain itself. Gallant's group and others are trying to find out what underlies those different brain patterns and want to work out the codes and algorithms the brain uses to make sense of the world around it. They hope that these techniques can tell them about the basic principles governing brain organization and how it encodes memories, behaviour and emotion.

 

Brain decoding took off about a decade ago, when neuroscientists realized that there was a lot of untapped information in the brain scans they were producing using functional magnetic resonance imaging (fMRI). That technique measures brain activity by identifying areas that are being fed oxygenated blood, which light up as coloured blobs in the scans. To analyse activity patterns, the brain is segmented into little boxes called voxels — the three-dimensional equivalent of pixels — and researchers typically look to see which voxels respond most strongly to a stimulus, such as seeing a face. By discarding data from the voxels that respond weakly, they conclude which areas are processing faces.

 

Decoding techniques interrogate more of the information in the brain scan. Rather than asking which brain regions respond most strongly to faces, they use both strong and weak responses to identify more subtle patterns of activity. Early studies of this sort proved, for example, that objects are encoded not just by one small very active area, but by a much more distributed array.
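The contrast between thresholding away weak voxels and decoding from the full pattern can be sketched with a toy nearest-centroid decoder, in which every voxel, weak or strong, contributes to the decision. The centroids and test pattern below are invented.

```python
def nearest_centroid_decode(pattern, centroids):
    """Classify a voxel activity pattern by squared Euclidean distance
    to each condition's mean pattern (centroid). Unlike discarding
    weakly responding voxels, every voxel contributes to the decision."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(pattern, centroids[label]))

# Invented 4-voxel centroids: "face" and "house" differ mostly in the *weak* voxels
centroids = {"face":  [1.0, 0.9, 0.20, 0.10],
             "house": [1.0, 0.9, 0.05, 0.30]}
print(nearest_centroid_decode([0.95, 0.92, 0.18, 0.12], centroids))  # face
```

A threshold-based analysis would keep only the first two voxels, which are identical across conditions, and so could not tell the categories apart; the distributed pattern can.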

 


Study Shows Where Alzheimer's Starts and How It Spreads

Using high-resolution fMRI imaging in patients with Alzheimer's disease and in mouse models of the disease, researchers have clarified three fundamental issues about Alzheimer's: where it starts, why it starts there, and how it spreads.


In addition to advancing understanding of Alzheimer's, the findings could improve early detection of the disease, when drugs may be most effective. The study was published today in the online edition of the journal Nature Neuroscience.


“It has been known for years that Alzheimer's starts in a brain region known as the entorhinal cortex,” said co-senior author Scott A. Small, MD, Boris and Rose Katz Professor of Neurology, professor of radiology, and director of the Alzheimer's Disease Research Center.


“But this study is the first to show in living patients that it begins specifically in the lateral entorhinal cortex, or LEC. The LEC is considered to be a gateway to the hippocampus, which plays a key role in the consolidation of long-term memory, among other functions. If the LEC is affected, other aspects of the hippocampus will also be affected.”


The study also shows that, over time, Alzheimer's spreads from the LEC directly to other areas of the cerebral cortex, in particular, the parietal cortex, a brain region involved in various functions, including spatial orientation and navigation. The researchers suspect that Alzheimer's spreads “functionally,” that is, by compromising the function of neurons in the LEC, which then compromises the integrity of neurons in adjoining areas.


Bluebrain: Attempt to engineer a full brain, one neuron at a time

Henry Markram is attempting to reverse engineer an entire human brain, one neuron at a time. This piece is an introduction to director Noah Hutton's 10-year film-in-the-making that will chronicle the development of The Blue Brain Project, a landmark endeavor in modern neuroscience.

Year 2: vimeo.com/28040230
Year 3: vimeo.com/51685540
Year 4: vimeo.com/52664485


Further info: [1], [2], [3], [4], [5], [6], and [7]


Rescooped by Dr. Stefan Gruenwald from Brain Imaging and Neuroscience: The Good, The Bad, & The Ugly

Scientists Discover How Brains Keep Themselves Clean Of Waste


Every organ produces waste, and the brain is no exception. But unlike the rest of our body, it doesn’t have a lymphatic system, a network of vessels that filter out junk. Now, a new study of mouse brains suggests how ours handle waste: by rapidly pumping fluid along the outside of blood vessels, literally flushing waste away. The finding, reported in Science Translational Medicine, could hint at how diseases like Alzheimer’s develop and might be treated.


“If you look at a body-wide map of the lymphatic system, you see a great big void in the brain,” said neuroscientist Jeffrey Iliff of the University of Rochester Medical Center. He and his colleagues found that puzzling, given how active the brain is and how sensitive it is to waste buildup.


Scientists long suspected that the brain’s refuse ended up in the cerebrospinal fluid, which cushions the brain inside the skull. In the 1980s, some researchers proposed that the fluid might be pumped into the brain to wash it, then pumped out again. Other researchers weren’t convinced.


Thanks to new imaging techniques that made it possible to peer inside the brain of a living mouse, Iliff’s team saw the process in action. Cerebrospinal fluid flowed along the outside of blood vessels, carried through a network of pipe-like protein structures. The fluid picked up waste that accumulated between cells, then drained out through major veins.


Via Donald J Bolger

Brain training works, but just for the practiced task and not for general intelligence improvement


Search for "brain training" on the Web. You'll find online exercises, games, software, even apps, all designed to prepare your brain to do better on any number of tasks. Do they work? University of Oregon psychologists say, yes, but "there's a catch."


The catch, according to Elliot T. Berkman, a professor in the Department of Psychology and lead author on a study published in the Jan. 1 issue of the Journal of Neuroscience, is that training for a particular task does heighten performance, but that advantage doesn't necessarily carry over to a new challenge.


The training provided in the study caused a proactive shift in inhibitory control. However, it is not clear if the improvement attained extends to other kinds of executive function such as working memory, because the team's sole focus was on inhibitory control, said Berkman, who directs the psychology department's Social and Affective Neuroscience Lab.


"With training, the brain activity became linked to specific cues that predicted when inhibitory control might be needed," he said. "This result is important because it explains how brain training improves performance on a given task — and also why the performance boost doesn't generalize beyond that task."


Sixty participants (27 males, 33 females, aged 18 to 30) took part in a three-phase study. Changes in their brain activity were monitored with functional magnetic resonance imaging (fMRI).


Source: http://uonews.uoregon.edu/archive/news-release/2014/1/brain-training-works-just-practiced-task-say-oregon-researchers



Human Brain Cells Make Mice Smart


Study shows that intelligence derives from brain cells other than neurons


A team of neuroscientists has grafted human brain cells into the brains of mice and found that the rodents’ rate of learning and memory far surpassed that of ordinary mice. Remarkably, the transplanted cells were not neurons but glia, a class of brain cells incapable of electrical signaling. The new findings suggest that information processing in the brain extends beyond the mechanism of electrical signaling between neurons.


The experiments were motivated by a desire to understand the functions of glia and test the intriguing possibility that non-electric brain cells could contribute to information processing, cognitive ability, and perhaps even the unparalleled cognitive ability of the human brain, which far exceeds that of any other animal.


Current thinking about how the brain operates at a cellular level rests on a foundation established over a century ago by the great Spanish neuroanatomist and Nobel Prize winner Santiago Ramón y Cajal, who conceived the “Neuron Doctrine.” This doctrine states that all information processing and transmission in the nervous system takes place by electrical signals passing through neurons in one direction: entering through synapses on the neuron’s root-like dendrites, then passing out through its wire-like axon as high-speed electrical impulses that stimulate the next neuron in a circuit. All thinking about how the brain receives sensory input, performs computational analysis, and generates thoughts, emotions, and behaviors rests on the Neuron Doctrine.


The possibility that glia, which lack any of the tell-tale attributes of neurons (dendrites, synapses, or axons) could contribute to information processing and cognition is well beyond traditional thinking.  Glia are understood to be cells that support neurons physically and physiologically and respond to neuronal disease and injury. In recent years, however, some neuroscientists have begun to wonder whether these neuron support functions, together with other aspects of the poorly understood glial biology, could participate in learning, memory and other cognitive functions.


Human glia cells are different: Looking through a microscope at a type of glial cell called an astrocyte, neuroscientist Maiken Nedergaard was struck by a peculiar observation. “Steve [Goldman] and I were culturing human brain cells many years ago and noted that the cultured astrocytes were much, much larger than in cultures [of astrocytes] prepared from rodent brain,” she says, recalling the moment of inspiration for these human-mouse transplant experiments. Nedergaard is a pioneer in research on neuron-glia interactions, working together with Steven Goldman, an expert in neural stem cells. Both are members of the Center for Translational Medicine at the University of Rochester Medical Center. “Human glia, and astrocytes in particular, are substantially different from those of rodents,” Goldman explains. “Human astrocytes are larger and more varied in morphology, features that accompanied evolution of the human brain.”

Dr. Stefan Gruenwald's insight:

Interestingly, Albert Einstein's glia-to-neuron ratio was also much higher in certain brain areas when compared to "normal" human brains.


http://en.wikipedia.org/wiki/Albert_Einstein's_brain


Brains on Trial: Determine criminal fate based on high-tech images of the brain

What if we could peer into a brain and see guilt or innocence? Brain scanning technology is trying to break its way into the courtroom, but can we—and should we—determine criminal fate based on high-tech images of the brain?


Join a distinguished group of neuroscientists and legal experts who will debate how and if neuroscience should inform our laws and how we treat criminals. This World Science Festival program is based on a two-part PBS special, “Brains on Trial with Alan Alda,” which aired on September 11 and 18, 2013, supported by the Alfred P. Sloan Foundation.

Laura E. Mirian, PhD's curator insight, December 30, 2013 10:32 AM

Although this may be possible there is always the chance it could be wrong and then we have Vanilla Sky.


Scientists pinpoint brain’s math area, important for numeral recognition


Scientists at the Stanford University School of Medicine have determined the precise anatomical coordinates of a brain “hot spot,” measuring only about one-fifth of an inch across, that is preferentially activated when people view the ordinary numerals we learn early on in elementary school, like “6” or “38.”

Activity in this spot relative to neighboring sites drops off substantially when people are presented with numbers that are spelled out (“one” instead of “1”), homophones (“won” instead of “1”) or “false fonts,” in which a numeral or letter has been altered.

“This is the first-ever study to show the existence of a cluster of nerve cells in the human brain that specializes in processing numerals,” said Josef Parvizi, MD, PhD, associate professor of neurology and neurological sciences and director of Stanford’s Human Intracranial Cognitive Electrophysiology Program. “In this small nerve-cell population, we saw a much bigger response to numerals than to very similar-looking, similar-sounding and similar-meaning symbols.

“It’s a dramatic demonstration of our brain circuitry’s capacity to change in response to education,” he added. “No one is born with the innate ability to recognize numerals.” The finding pries open the door to further discoveries delineating the flow of math-focused information processing in the brain. It also could have direct clinical ramifications for patients with dyslexia for numbers and with dyscalculia, the inability to process numerical information.


Interestingly, said Parvizi, that numeral-processing nerve-cell cluster is parked within a larger group of neurons that is activated by visual symbols that have lines with angles and curves. “These neuronal populations showed a preference for numerals compared with words that denote or sound like those numerals,” he said. “But in many cases, these sites actually responded strongly to scrambled letters or scrambled numerals. Still, within this larger pool of generic neurons, the ‘visual numeral area’ preferred real numerals to the false fonts and to same-meaning or similar-sounding words.”

It seems, Parvizi said, that “evolution has designed this brain region to detect visual stimuli such as lines intersecting at various angles — the kind of intersections a monkey has to make sense of quickly when swinging from branch to branch in a dense jungle.” The adaptation of one part of this region in service of numeracy is a beautiful intersection of culture and neurobiology, he said.

Having nailed down a specifically numeral-oriented spot in the brain, Parvizi’s lab is looking to use it in tracing the pathways described by the brain’s number-processing circuitry. “Neurons that fire together wire together,” said Shum. “We want to see how this particular area connects with and communicates with other parts of the brain.”


Human memories are ‘geotagged’ with spatial information, researchers find


Neurons that encode spatial information form “geotags” for specific memories and these geotags are activated immediately before those memories are recalled, a team of neuroscientists from the University of Pennsylvania and Freiburg University has discovered. They used a video game in which people navigate through a virtual town delivering objects to specific locations.


“These findings provide the first direct neural evidence for the idea that the human memory system tags memories with information about where and when they were formed and that the act of recall involves the reinstatement of these tags,” said Michael Kahana, professor of psychology in Penn’s School of Arts and Sciences.

 

Kahana and his colleagues have long conducted research with epilepsy patients who have electrodes implanted in their brains as part of their treatment. The electrodes directly capture electrical activity from throughout the brain while the patients participate in experiments from their hospital beds.

 

As with earlier spatial memory experiments conducted by Kahana’s group, this study involved playing a simple video game on a bedside computer. The game in this experiment involved making deliveries to stores in a virtual city. The participants were first given a period where they were allowed to freely explore the city and learn the stores’ locations. When the game began, participants were only instructed where their next stop was, without being told what they were delivering.

 

After they reached their destination, the game would reveal the item that had been delivered, and then give the participant their next stop.

After 13 deliveries, the screen went blank and participants were asked to remember and name as many of the delivered items as they could, in the order they came to mind.

 

This allowed the researchers to correlate the neural activation associated with the formation of spatial memories (the locations of the stores) and the recall of episodic memories (the list of items that had been delivered).

 

“During navigation, neurons in the hippocampus and neighboring regions can often represent the patient’s virtual location within the town, kind of like a brain GPS device,” Kahana said. “These ‘place cells’ are perhaps the most striking example of a neuron that encodes an abstract cognitive representation.”
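The “geotag reinstatement” idea described above can be illustrated with a toy pattern-similarity analysis: the place-cell population vector active when a memory is formed serves as its tag, and pre-recall activity is matched against the stored tags. The population size, noise level, and item index below are entirely hypothetical; the sketch shows only the matching logic, not the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical geotags: each of 13 deliveries is encoded along with
# the place-cell population vector active at that store's location.
n_items, n_cells = 13, 100
geotags = rng.normal(0, 1, (n_items, n_cells))

# Just before an item is recalled, its spatial pattern is partially
# reinstated: the stored geotag plus substantial noise.
recalled_item = 7
pre_recall = geotags[recalled_item] + rng.normal(0, 1.0, n_cells)

# Reinstatement analysis: correlate the pre-recall activity with every
# stored geotag; the best match identifies where the memory was formed.
similarity = [np.corrcoef(pre_recall, g)[0, 1] for g in geotags]
best_match = int(np.argmax(similarity))
print(best_match)  # almost surely 7: the true geotag correlates far above chance
```

Even with noise as large as the signal, the correct geotag's correlation (roughly 0.7 here) stands well clear of the near-zero correlations with the other twelve tags.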


Neurobiologists investigate the neuronal basis of intelligence in birds

Scientists have long suspected that corvids – the family of birds including ravens, crows and magpies – are highly intelligent.

 

The Tübingen researchers are the first to investigate the brain physiology of crows' intelligent behavior. They trained crows to carry out memory tests on a computer. The crows were shown an image and had to remember it. Shortly afterwards, they had to select one of two test images on a touchscreen with their beaks, based on one of two behavioral rules that switched between trials. One of the test images was identical to the first image, the other different. Sometimes the rule was to select the same image, and sometimes it was to select the different one. The crows were able to carry out both tasks and to switch between them as appropriate. That demonstrates a high level of concentration and mental flexibility which few animal species can manage – and which is an effort even for humans.

 

The crows were quickly able to carry out these tasks even when given new sets of images. The researchers observed neuronal activity in the nidopallium caudolaterale, a brain region associated with the highest levels of cognition in birds. One group of nerve cells responded exclusively when the crows had to choose the same image – while another group of cells always responded when they were operating on the "different image" rule. By observing this cell activity, the researchers were often able to predict which rule the crow was following even before it made its choice.
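The rule-prediction result can be illustrated with a toy decoder: if one cell group fires more under the "same image" rule and another under the "different image" rule, comparing their mean firing rates on a trial predicts which rule is active. The cell counts, firing rates, and noise below are invented for illustration and are not the study's recorded values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical groups of nidopallium caudolaterale cells: one
# preferring the "same image" rule (0), one the "different image" rule (1).
n_same_cells, n_diff_cells, n_trials = 20, 20, 200
rules = rng.integers(0, 2, n_trials)  # the rule cued on each trial

# Firing rates (spikes/s): each group is boosted under its preferred rule.
base, boost, noise = 5.0, 3.0, 2.0
same_cells = base + boost * (rules == 0)[:, None] \
    + rng.normal(0, noise, (n_trials, n_same_cells))
diff_cells = base + boost * (rules == 1)[:, None] \
    + rng.normal(0, noise, (n_trials, n_diff_cells))

# Decoder: whichever group fires more on a trial predicts the rule the
# crow is applying -- in the study, readable before the choice was made.
predicted = (diff_cells.mean(1) > same_cells.mean(1)).astype(int)
accuracy = (predicted == rules).mean()
print(f"rule decoded on {accuracy:.0%} of trials")
```

Averaging across a group of cells suppresses the single-cell noise, which is why the population readout predicts the rule far more reliably than any one neuron could.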

 

The study published in Nature Communications provides valuable insights into the parallel evolution of intelligent behavior. "Many functions are realized differently in birds because a long evolutionary history separates us from these direct descendants of the dinosaurs," says Lena Veit. "This means that bird brains can show us an alternative solution for how intelligent behavior is produced with a different anatomy." Crows and primates have different brains, but the cells regulating decision-making are very similar. They represent a general principle which has re-emerged throughout the history of evolution. "Just as we can draw valid conclusions on aerodynamics from a comparison of the very differently constructed wings of birds and bats, here we are able to draw conclusions about how the brain works by investigating the functional similarities and differences of the relevant brain areas in avian and mammalian brains," says Professor Andreas Nieder.


How do you sense the passing of time? Your brain has two clocks


Did you make it to work on time this morning? Go ahead and thank the traffic gods, but also take a moment to thank your brain. The brain’s impressively accurate internal clock allows us to detect the passage of time, a skill essential for many critical daily functions. Without the ability to track elapsed time, our morning shower could continue indefinitely. Without that nagging feeling to remind us we’ve been driving too long, we might easily miss our exit. 

 

But how does the brain generate this finely tuned mental clock? Neuroscientists believe that we have distinct neural systems for processing different types of time, for example, to maintain a circadian rhythm, to control the timing of fine body movements, and for conscious awareness of time passage. Until recently, most neuroscientists believed that this latter type of temporal processing – the kind that alerts you when you’ve lingered over breakfast for too long – is supported by a single brain system. However, emerging research indicates that the model of a single neural clock might be too simplistic. A new study, recently published in the Journal of Neuroscience by neuroscientists at the University of California, Irvine, reveals that the brain may in fact have a second method for sensing elapsed time. What’s more, the authors propose that this second internal clock not only works in parallel with our primary neural clock, but may even compete with it.


Glowing Worms Illuminate Roots of Behavior in Animals


Researchers develop novel method to image worm brain activity and screen early stage compounds aimed at treating autism and anxiety.


A research team at Worcester Polytechnic Institute (WPI) and The Rockefeller University in New York has developed a novel system to image brain activity in multiple awake and unconstrained worms. The technology, which makes it possible to study the genetics and neural circuitry associated with animal behavior, can also be used as a high-throughput screening tool for drug development targeting autism, anxiety, depression, schizophrenia, and other brain disorders.

 

The team details their technology and early results in the paper "High-throughput imaging of neuronal activity in Caenorhabditis elegans," published online in advance of print by the journal Proceedings of the National Academy of Sciences.

 

"One of our major objectives is to understand the neural signals that direct behavior—how sensory information is processed through a network of neurons leading to specific decisions and responses," said Dirk Albrecht, PhD, assistant professor of biomedical engineering at WPI and senior author of the paper. Albrecht led the research team both at WPI and at Rockefeller, where he served previously as a postdoctoral researcher in the lab of Cori Bargmann, PhD, a Howard Hughes Medical Institute Investigator and a co-author of the new paper.


To study neuronal activity, Albrecht’s lab uses the tiny worm Caenorhabditis elegans (C. elegans), a nematode found in many environments around the world. A typical adult C. elegans is just 1 millimeter long and has 959 cells, of which 302 are neurons. Despite its small size, the worm is a complex organism able to do all of the things animals must do to survive. It can move, eat, mate, and process environmental cues that help it forage for food or react to threats. As a bonus for researchers, C. elegans is transparent. By using various imaging technologies, including optical microscopes, one can literally see into the worm and watch physiological processes in real time.


In addition to watching the head neurons light up as they picked up odor cues, the new system can trace signaling through "interneurons." These are pathways that connect external sensors to the rest of the network (the "worm brain") and send signals to muscle cells that adjust the worm's movement based on the cues. Numerous brain disorders in people are believed to arise when neural networks malfunction. In some cases the malfunction is dramatic overreaction to a routine stimulus, while in others it is a lack of appropriate reactions to important cues. Since C. elegans and humans share many of the same genes, discovering genetic causes for differing neuronal responses in worms could be applicable to human physiology. Experimental compounds designed to modulate the action of nerve cells and neuronal networks could be tested first on worms using Albrecht’s new system. The compounds would be infused in the worm arena, along with other stimuli, and the reaction of the worms’ nervous systems could be imaged and analyzed.


Patients in ‘vegetative state’ not just aware, but paying attention


A patient in a seemingly vegetative state, unable to move or speak, showed signs of attentive awareness that had not been detected before, a new study reveals. This patient was able to focus on words signalled by the experimenters as auditory targets as successfully as healthy individuals. If this ability can be developed consistently in certain patients who are vegetative, it could open the door to specialised devices in the future and enable them to interact with the outside world.

 

For the study, the researchers used electroencephalography (EEG), which non-invasively measures electrical activity over the scalp, to test 21 patients diagnosed as vegetative or minimally conscious and eight healthy volunteers. Participants heard a series of different words (one word per second, in 90-second blocks) while being asked to attend alternately to either the word ‘yes’ or the word ‘no’, each of which appeared 15% of the time. (Some examples of the distractor words used include moss, moth, worm and toad.) This was repeated several times over a period of 30 minutes to detect whether the patients were able to attend to the correct target word.
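A sketch of how such a stimulus stream might be assembled, assuming only the parameters stated above (90 words at one per second, each target word appearing 15% of the time). The distractor list and block structure are stand-ins, not the study's actual materials.

```python
import random

random.seed(3)

# Stimulus stream for one 90-second block: "yes" and "no" each make up
# 15% of presentations; the rest are monosyllabic distractor words.
distractors = ["moss", "moth", "worm", "toad"]
n_words = 90
n_yes = n_no = round(0.15 * n_words)  # 14 of each target word

stream = (["yes"] * n_yes + ["no"] * n_no
          + [random.choice(distractors) for _ in range(n_words - n_yes - n_no)])
random.shuffle(stream)

# On each block the participant attends to one target word. An attentive
# listener's EEG should show enhanced responses time-locked to exactly
# these presentations, and not to the other target or the distractors.
attended = "yes"
target_times = [t for t, w in enumerate(stream) if w == attended]
print(len(stream), len(target_times))  # prints: 90 14
```

Because the attended and ignored target words occur equally often, any EEG difference between them reflects selective attention rather than mere stimulus frequency.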

 

They found that one of the vegetative patients was able to filter out unimportant information and home in on relevant words they were being asked to pay attention to. Using brain imaging (fMRI), the scientists also discovered that this patient could follow simple commands to imagine playing tennis. They also found that three other minimally conscious patients reacted to novel but irrelevant words, but were unable to selectively pay attention to the target word.

 

These findings suggest that some patients in a vegetative or minimally conscious state might in fact be able to direct attention to the sounds in the world around them.
