Thoughts and Research on Neuroeducation Science

Curated by Huey O'Brien
Scooped by Huey O'Brien

The Human Brain Now Reacts to Emoticons Like Real Faces

The Human Brain Now Reacts to Emoticons Like Real Faces | LEARNING AND COGNITION | Scoop.it

We’re all familiar with the smiley emoticon, and its power to add levity, flirtation, and occasionally passive-aggression to our texts, chats, and e-mails. But according to researchers, our brains have started to take the cluster of punctuation one step further and actually respond to it like a real face.


According to a recent study published in the journal Social Neuroscience, looking at faces crafted from colons and parentheses can trigger the same facial recognition response in the occipitotemporal areas of the brain that takes place when we gaze into the meatspace visages of other humans.


Although the iconic grinning yellow sphere with two eyes and a mouth originated in the 1960s and other typographical depictions of emotion cropped up even earlier, the sideways smiley emoticon as we know it originated in 1982. Most people now instantly recognize :) as a smiling face. However, this response isn’t innate, but rather learned.


Nor are all smileys created equal. The neural reaction in the study changed significantly depending on whether or not people were looking at the most familiar version of the smiley emoticon. While the traditional :) and :-) symbols triggered the same face-specific mechanisms used for processing actual faces, the non-standard (-: did not. (If you’ve ever had an argument over whether the reverse smiley is valid, feel free to point to this study as evidence that even our brains reject them as abominations.)


“There is no innate neural response to emoticons that babies are born with. Before 1982 there would be no reason that ‘:-)’ would activate face sensitive areas of the cortex but now it does because we’ve learnt that this represents a face,” researcher Owen Churches told Australia’s ABC. “This is an entirely culturally-created neural response. It’s really quite amazing.”


Now if only our brains could determine if :) in a text from a new acquaintance means they’re flirting or just laughing at us.



How you practice matters for learning skill quickly


Practice alone doesn't make perfect, but learning can be optimized if you practice in the right way, according to new research based on online gaming data from more than 850,000 people.


The research, led by psychological scientist Tom Stafford of the University of Sheffield (UK), suggests that the way you practice is just as important as how often you practice when it comes to learning quickly.


The new findings are published in Psychological Science, a journal of the Association for Psychological Science.  Stafford and Michael Dewar from The New York Times Research and Development Lab analyzed data from 854,064 people playing an online game called Axon. Players are tasked with guiding a neuron from connection to connection by clicking on potential targets, testing participants' ability to perceive, make decisions, and move quickly.


Stafford and Dewar were interested to know how practice affected players' subsequent performance in the game.  Some Axon players achieved higher scores than others despite practicing for the same amount of time. Game play data revealed that those players who seemed to learn more quickly had either spaced out their practice or had more variable early performance -- suggesting they were exploring how the game works -- before going on to perform better.
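The pattern described here, relating practice spacing to score gains, can be illustrated with a toy analysis. The sketch below runs on invented session logs; it is not the authors' actual pipeline, and the numbers have nothing to do with the real Axon dataset:

```python
from datetime import datetime

# Hypothetical session logs for two players: (timestamp, score).
# All names and numbers are invented for illustration.
massed = [("2024-01-01 10:00", 110), ("2024-01-01 10:20", 130),
          ("2024-01-01 10:40", 135), ("2024-01-01 11:00", 140)]
spaced = [("2024-01-01 10:00", 105), ("2024-01-02 10:00", 150),
          ("2024-01-03 10:00", 180), ("2024-01-04 10:00", 210)]

def mean_gap_hours(sessions):
    """Average time between consecutive practice sessions, in hours."""
    times = [datetime.fromisoformat(stamp) for stamp, _ in sessions]
    gaps = [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

def gain(sessions):
    """Score improvement from the first session to the last."""
    return sessions[-1][1] - sessions[0][1]

for label, log in (("massed", massed), ("spaced", spaced)):
    print(f"{label}: mean gap {mean_gap_hours(log):.1f} h, gain {gain(log)}")
```

In a dataset like the one the researchers describe, the question would be whether players with larger average gaps between sessions tend to show larger gains for the same total practice time.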


"The study suggests that learning can be improved -- you can learn more efficiently or use the same practice time to learn to a higher level," says Stafford. "As we live longer, and as more of our lives become based around acquiring complex skills, optimal learning becomes increasingly relevant to everyone."


A novel look at how stories may change the brain


Many people can recall reading at least one cherished story that they say changed their life. Now researchers at Emory University have detected what may be biological traces related to this feeling: Actual changes in the brain that linger, at least for a few days, after reading a novel.


Their findings, that reading a novel may cause changes in resting-state connectivity of the brain that persist, were published by the journal Brain Connectivity. "Stories shape our lives and in some cases help define a person," says neuroscientist Gregory Berns, lead author of the study and the director of Emory's Center for Neuropolicy. "We want to understand how stories get into your brain, and what they do to it."


His co-authors included Kristina Blaine and Brandon Pye from the Center for Neuropolicy, and Michael Prietula, professor of information systems and operations management at Emory's Goizueta Business School.  Neurobiological research using functional magnetic resonance imaging (fMRI) has begun to identify brain networks associated with reading stories. Most previous studies have focused on the cognitive processes involved in short stories, while subjects are actually reading them as they are in the fMRI scanner.


The Emory study focused on the lingering neural effects of reading a narrative. Twenty-one Emory undergraduates participated in the experiment, which was conducted over 19 consecutive days. The results showed heightened connectivity in the left temporal cortex, an area of the brain associated with receptivity for language, on the mornings following the reading assignments. "Even though the participants were not actually reading the novel while they were in the scanner, they retained this heightened connectivity," Berns says. "We call that a 'shadow activity,' almost like a muscle memory." 


Heightened connectivity was also seen in the central sulcus of the brain, the primary sensory motor region of the brain. Neurons of this region have been associated with making representations of sensation for the body, a phenomenon known as grounded cognition. Just thinking about running, for instance, can activate the neurons associated with the physical act of running.


"The neural changes that we found associated with physical sensation and movement systems suggest that reading a novel can transport you into the body of the protagonist," Berns says. "We already knew that good stories can put you in someone else's shoes in a figurative sense. Now we're seeing that something may also be happening biologically."  The neural changes were not just immediate reactions, Berns says, since they persisted the morning after the readings, and for the five days after the participants completed the novel.


"It remains an open question how long these neural changes might last," Berns says. "But the fact that we're detecting them over a few days for a randomly assigned novel suggests that your favorite novels could certainly have a bigger and longer-lasting effect on the biology of your brain."


Reduce Cognitive Load in eLearning


The information processing capacity of learners is limited, so it's important that designers take this into account when creating eLearning courses by considering the three types of cognitive load. 


In our brains, we have two types of memory. One is our working memory, which we use to process new information. The capacity of our working memory is quite limited so it can only handle so much before it becomes overloaded. The second is our long-term memory, which is where we store information from our working memory and where we retrieve that information from later. Within our long-term memory, information is organized into schemas, which are organizational frameworks of storage (like filing cabinets). Not exceeding working memory capacity will result in greater transfer of information into long-term memory.


Cognitive Load Theory (CLT) proposes that there are three types of cognitive load:


1. Intrinsic

This is the level of complexity inherent in the material being studied. There isn’t much that we can do about intrinsic cognitive load; some tasks are more complex than others so will have different levels of intrinsic cognitive load.


2. Extraneous 

This is cognitive load imposed by non-relevant elements that require extra mental processing e.g. decorative pictures, animations etc. that add nothing to the learning experience.


3. Germane 

These are elements that allow cognitive resources to be put towards learning i.e. assist with information processing.


Instructional designers need to be aware of the cognitive requirements that learning designs impose and ensure that learners can meet those requirements. Learning designers must also ensure that all aspects of design focus on adding value to the learning experience.


Rats! Humans and rodents process their mistakes


PROVIDENCE, R.I. [Brown University] — People and rats may think alike when they've made a mistake and are trying to adjust their thinking.


That's the conclusion of a study published online Oct. 20 in Nature Neuroscience that tracked specific similarities in how human and rodent subjects adapted to errors as they performed a simple time estimation task. When members of either species made a mistake in the trials, electrode recordings showed that they employed low-frequency brainwaves in the medial frontal cortex (MFC) of the brain to synchronize neurons in their motor cortex. That action correlated with subsequent performance improvements on the task.


"These findings suggest that neuronal activity in the MFC encodes information that is involved in monitoring performance and could influence the control of response adjustments by the motor cortex," wrote the authors, who performed the research at Brown University and Yale University.


The importance of the findings extends beyond a basic understanding of cognition, because they suggest that rat models could be a useful analog for humans in studies of how such "adaptive control" neural mechanics are compromised in psychiatric diseases.




What Multitasking Does To Your Brain


In case we needed another reason to close the 15 extra browser tabs we have open, Clifford Nass, a communication professor at Stanford, has provided major motivation for monotasking: according to his research, the more you multitask, the less you're able to learn, concentrate, or be nice to people.


>>Our brains are plastic but they're not elastic.


For a case study, turn to your nearest broadcast news station (and don't say Fast Company didn't warn you): if the talking head on the screen is accompanied by a "crawler" at the bottom blurbing baseball scores and the day's tragedies, you'll be less likely to remember whatever the pundit is saying. Why? Because research shows that the more you're multitasking, the less you're able to filter out irrelevant information.


As Nass told NPR, if you think you're good at multitasking, you aren't:

. . . "We have scales that allow us to divide up people into people who multitask all the time and people who rarely do, and the differences are remarkable. People who multitask all the time can't filter out irrelevancy. They can't manage a working memory. They're chronically distracted. They initiate much larger parts of their brain that are irrelevant to the task at hand. And . . . they're even terrible at multitasking. When we ask them to multitask, they're actually worse at it. So they're pretty much mental wrecks."


>>Multitasking rewires our brains.


When we multitask all day, those scattered habits literally change the pathways in our brains. The consequence, according to Nass's research, is that sustaining your attention becomes impossible.


"If we [multitask] all the time--brains are remarkably plastic, remarkably adaptable," he says, referencing neuroplasticity, the way the structures of your brain literally re-form to the patterns of your thought. "We train our brains to a new way of thinking. And then when we try to revert our brains back, our brains are plastic but they're not elastic. They don't just snap back into shape."


>>How it affects our work


As James O'Toole notes on the strategy+business blog, the dangers of multitasking are as multifarious as they are nefarious.


- Multitasking stunts emotional intelligence:

Instead of addressing the person in front of you, you address a text message.


- Multitasking makes us worse managers:

The more we multitask, the worse we are at sorting through information--recall the broadcast news kerfuffle above.


- Multitasking makes us less creative:

Since attention is the midwife of creativity, if you can't focus, that thought-baby isn't coming out.


'Brain training' may boost working memory, but not intelligence

Brain training games, apps, and websites are popular and it's not hard to see why -- who wouldn't want to give their mental abilities a boost? New research suggests that brain training programs might strengthen your ability to hold information in mind, but they won't bring any benefits to the kind of intelligence that helps you reason and solve problems.


The findings are published in Psychological Science, a journal of the Association for Psychological Science. "It is hard to spend any time on the web and not see an ad for a website that promises to train your brain, fix your attention, and increase your IQ," says psychological scientist and lead researcher Randall Engle of Georgia Institute of Technology. "These claims are particularly attractive to parents of children who are struggling in school."


According to Engle, the claims are based on evidence that shows a strong correlation between working memory capacity (WMC) and general fluid intelligence. Working memory capacity refers to our ability to keep information either in mind or quickly retrievable, particularly in the presence of distraction. General fluid intelligence is the ability to infer relationships, do complex reasoning, and solve novel problems.


The correlation between WMC and fluid intelligence has led some to surmise that increasing WMC should lead to an increase in fluid intelligence as well, but "this assumes that the two constructs are the same thing, or that WMC is the basis for fluid intelligence," Engle notes.


To better understand the relationship between these two aspects of cognition, Engle and colleagues had 55 undergraduate students complete 20 days of training on certain cognitive tasks.  The researchers administered a battery of tests before and after training to gauge improvement and transfer of learning, including a variety of WMC measures and three measures of fluid intelligence.


The results suggest that the students improved in their ability to update and maintain information on multiple tasks as they switched between them, which could have important implications for real-world multitasking: "This work affects nearly everyone living in the complex modern world," says researcher Tyler Harrison, "but it particularly affects individuals that find themselves trying to do multiple tasks or rapidly switching between complex tasks, such as driving and talking on a cell phone, alternating between conversations with two different people, or cooking dinner and dealing with a crying child."


Despite the potential boost for multitasking, the benefits of training didn't transfer to fluid intelligence. Engle points out that just because WMC and fluid intelligence are highly correlated doesn't mean that they are the same.


Online time can hobble brain's important work


While you are browsing online, you could be squandering memories -- or losing important information.

Contrary to common wisdom, an idle brain is in fact doing important work -- and in the age of constant information overload, it's a good idea to go offline on a regular basis, says a researcher from Stockholm's KTH Royal Institute of Technology.


Erik Fransén, whose research focuses on short-term memory and ways to treat diseased neurons, says that a brain exposed to a typical session of social media browsing can easily become hobbled by information overload. The result is that less information gets filed away in your memory.


The problem begins in a system of the brain commonly known as the working memory, or what most people know as short-term memory. That's the system of the brain that we need when we communicate, Fransén says.


"Working memory enables us to filter out information and find what we need in the communication," he says. "It enables us to work online and store what we find online, but it's also a limited resource." Models show why it has limits. At any given time, the working memory can carry up to three or four items, Fransén says. When we attempt to stuff more information in the working memory, our capacity for processing information begins to fail.


"When you are on Facebook, you are making it harder to keep the things that are 'online' in your brain that you need," he says. "In fact, when you try to process sensory information like speech or video, you are going to need partly the same system of working memory, so you are reducing your own working memory capacity. 


"And when you try to store many things in your working memory, you get less good at processing information." You're also robbing the brain of time it needs to do some necessary housekeeping. The brain is designed for both activity and relaxation, he says. "The brain is made to go into a less active state, which we might think is wasteful; but probably memory consolidation, and transferring information into memory, takes place in this state. Theories of how memory works explain why these two different states are needed.


"When we max out our active states with technology equipment, just because we can, we remove from the brain part of the processing, and it can't work."


Tips To Improve Learners' Motivation for eLearning Courses


Motivation has been and continues to be a widely studied area across many of life’s domains. Many motivation theories focus on the amount of motivation, with a larger quantity said to result in improved outcomes. However, as educators we should not focus on generating more motivation from our learners but instead focus on creating conditions that facilitate the internalization of motivation from within our learners.


Self-determination theory (SDT), an empirical theory of motivation by Edward Deci and Richard Ryan, focuses on the degree to which behaviour is self-motivated and self-determined. SDT proposes that all humans require the satisfaction of three basic psychological needs, namely:


- Autonomy (a sense of being in control and having freedom),

- Competence (a sense of being able to do something, i.e. being competent),

- Relatedness (a sense of being associated or connected to others).


Research by Ryan, Rigby and Przybylski into the motivation to play video games (regardless of the game type) found that motivation to play is accounted for by how well the game satisfies our psychological needs:


1. Autonomy

the extent to which the game provides flexibility over movement and strategies, choice over tasks and goals, and rewards that provide feedback rather than control.

2. Competence

the extent to which tasks within the game provide ongoing challenges and opportunities for feedback.

3. Relatedness

the extent to which the game provides interactions between players.


In addition to need satisfaction, their research also found that:


Presence – the extent to which the player feels within the game environment as opposed to being outside the game manipulating the controls, and


Intuitive controls – the extent to which the controls make sense and don’t interfere with feelings of presence,


were also important, as they allow players to focus on game play and access the need satisfaction provided by the game.


Contexts that satisfy all three basic needs will help support people’s actions, resulting in more sustained motivation over time and positive outcomes. Therefore, if we can use strategies to support competence, autonomy and relatedness needs we can assist learners to internalize their motivation of externally regulated activities. 


The Top 3 Trends in e-Learning for Generation Z


Bernard Luskin, a pioneer of e-learning, advocates that the ‘e’ should be interpreted to mean “exciting, energetic, enthusiastic, emotional, extended and excellent” in addition to “electronic”. These words are particularly key when designing e-learning platforms for Generation Z. Generation Z are our future audiences: young people born after 1992. This is the first generation to have truly been brought up digital (they are sometimes referred to as Digital Natives), and they don’t see the Internet or social media as anything special; they just expect it to be there.


The top 3 attitudinal and behavioral trends to consider when designing e-Learning products for Generation Z:


> Online is as important as offline to Gen Z

With Gen Z, it is crucial to understand that the online experience is just as important as offline (real life) to them. 

 > Playing = Learning

Another key trend is in the relationship between playing and learning. Gen Z children have been found to learn more effectively if they are left to solve problems rather than being taught the answers, and their gaming experience means that they are happy to ‘work on a level’ as they know that even if they fail, they will learn something that they can use to progress further next time. 

 > Digital DNA

Generation Z really does have ‘digital DNA’ – researchers say that if you compare the brain of a Gen Z child with that of someone born 10 years earlier, you can see a physical difference: the part of the brain responsible for visual ability is more developed.

Huey O'Brien's insight:

IMPLICATION: Generational Differences


When we forget to remember: Failures in prospective memory range from annoying to lethal


A surgical team closes an abdominal incision, successfully completing a difficult operation. Weeks later, the patient comes into the ER complaining of abdominal pain and an X-ray reveals that one of the forceps used in the operation was left inside the patient. Why would highly skilled professionals forget to perform a simple task they have executed without difficulty thousands of times before?


These kinds of oversights occur in professions as diverse as aviation and computer programming, but research from psychological science reveals that these lapses may not reflect carelessness or lack of skill but failures of prospective memory.


In an article in the August issue of Current Directions in Psychological Science, a journal of the Association for Psychological Science, R. Key Dismukes, a scientist at the NASA Ames Research Center, reviews the rapidly growing field of research on prospective memory, highlighting the various ways in which characteristics of everyday tasks interact with normal cognitive processes to produce memory failures that sometimes have disastrous consequences.


Failures of prospective memory typically occur when we form an intention to do something later, become engaged with various other tasks, and lose focus on the thing we originally intended to do. Despite the name, prospective memory actually depends on several cognitive processes, including planning, attention, and task management. Common in everyday life, these memory lapses are mostly annoying, but can have tragic consequences. "Every summer several infants die in hot cars when parents leave the car, forgetting the child is sleeping quietly in the back seat," Dismukes points out.


To defend against prospective memory failures and their potentially disastrous consequences, professionals in aviation and medicine now rely on specific memory tools, including checklists. Research also reveals that implementation intentions, identifying when and where a specific intention will be carried out, can help guard against such failures in everyday life. Dismukes points out that having this kind of concrete plan has been shown to improve prospective memory performance by as much as two to four times in tasks such as exercising, medication adherence, breast self-examination, and homework completion.


Along with checklists and implementation intentions, Dismukes and others have highlighted several other measures that can help to remember and carry out intended actions:


    - Use external memory aids such as the alerting calendar on cell phones

    - Avoid multitasking when one of your tasks is critical

    - Carry out crucial tasks now instead of putting them off until later

    - Create reminder cues that stand out and put them in a difficult-to-miss spot

    - Link the target task to a habit that you have already established


Cracking a Secret Code to Learning: Hand Gestures

Research shows that the act of gesturing itself seems to accelerate learning, bringing nascent knowledge into consciousness and aiding the understanding of new concepts


A new conceptualization of intelligence is taking shape in the social and biological sciences. This conceptualization involves many lines of inquiry that can be loosely grouped under the title situated cognition: the idea that thinking doesn’t happen in some abstract, disembodied space, but always in a particular brain, in a particular body, located in a particular social and physical world. The moment-by-moment conditions that prevail in that brain, that body, and that world powerfully affect how well we think and perform.


One of the most interesting lines of inquiry within this perspective is known as embodied cognition: the recognition that our bodies play a big role in how we think. Physical gestures, for example, constitute a kind of back-channel way of expressing and even working out our thoughts. Research demonstrates that the movements we make with our hands when we talk constitute a kind of second language, adding information that’s absent from our words. It’s learning’s secret code: Gesture reveals what we know. It reveals what we don’t know. And it reveals (as Donald Rumsfeld might put it) what we know, but don’t yet know we know. What’s more, the congruence—or lack of congruence—between what our voices say and how our hands move offers a clue to our readiness to learn.

Huey O'Brien's insight:

IMPLICATION:  Embodied Cognition, Situated Cognition


Memory: Types, Facts, and Myths


Our memory system, according to cognitive psychology, is divided into the following 2 types: 


Short-term memory that stores sounds, images and words, allows for short computations and filters information that either goes to long-term memory or is discarded.

Long-term memory that allows us to store information based on meaning and importance for extended periods of time, affects our perception and constitutes a framework where new info is attached. 




Short-term memory has 3 main characteristics:


- Brief duration: information lasts only up to 20 seconds.

- Its capacity is limited to 7 ±2 chunks of independent information (Miller’s Law) and is vulnerable to interference and interruption.

- Its weakening (due to many reasons, such as medication, sleep deprivation, a stroke, or a head injury, for example) is the first step to memory loss.
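Miller’s 7 ±2 limit applies to chunks rather than raw items, which is why recoding items into larger chunks effectively stretches capacity. A minimal sketch of that recoding (the digit string and group size are invented for the example):

```python
def chunk(items, size):
    """Group a flat sequence into chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = "4915550176"      # 10 raw digits: beyond a comfortable 7 +/- 2 span
groups = chunk(digits, 3)  # recoded into 4 chunks, comfortably within it
print(len(digits), "items ->", len(groups), "chunks:", groups)
```

The same number of digits occupies far fewer of the available "slots" once it is grouped, which is how phone numbers and acronyms stay memorable.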

Short-term memory is responsible for 3 operations:


 - Iconic, which is the ability to store images.

 - Acoustic, which is the ability to store sounds. 

 - Working Memory, which is the ability to store information until it’s put to use.


For some scientists, working memory is synonymous with short-term memory, but the truth is that working memory is used not only for information storage but also for the manipulation of information. What’s important is that it’s flexible, dynamic, and makes all the difference in successful learning.




Information in Long-term memory is stored as a network of schemas, which then converts into knowledge structures. This is exactly why we recall relevant knowledge when we stumble upon similar information. The challenge for an instructional designer is to activate those existing structures before presenting new information and that can be achieved in a variety of ways, like with graphics, movies, curiosity-provoking questions, etc.  


2 Types of Long-term memory 


 - Explicit: Conscious memories that include our perception of the world, as well as our own personal experiences.

 - Implicit: Unconscious memories that we use without realizing it. 


Long-term memory is responsible for 3 operations


- Encoding, which is the ability to convert information into a knowledge structure.

- Storage, which is the ability to accumulate chunks of information.

- Retrieval, which is the ability to recall things we already know.

Huey O'Brien's insight:

IMPLICATION:  Lesson Design, Memory, Practice


Should you drink coffee before or after a learning task?


Popular wisdom holds that caffeine enhances learning, alertness and retention, leading millions to consume coffee or caffeinated drinks before a challenging learning task such as attending a business strategy meeting or a demanding scientific presentation.


However a new study in the journal Nature Neuroscience conducted by researchers from Johns Hopkins hints that when it comes to long-term memory and caffeine, timing may be everything; caffeine may enhance consolidation of memories only if it is consumed after a learning or memory challenge.


In the study the authors conducted a randomized, double-blind controlled experiment in which 160 healthy female subjects between the ages of 18 and 30 were asked to perform a series of learning tasks. The subjects were handed cards with pictures of various random indoor and outdoor objects (for instance leaves, ducks and handbags) on them and asked to classify the objects as indoor or outdoor. Immediately after the task the volunteers were handed pills, either containing 200 mg of caffeine or placebo. Saliva samples to test for caffeine and its metabolites were collected after 1, 3 and 24 hours.


After 24 hours the researchers tested the participants’ recollection of the previous day’s test. Along with the items from the test (‘old’), they were presented with new items (‘foils’) and similar-looking items (‘lures’), neither of which had been part of the task. They were then asked to again classify the items as old, new, or similar. Participants in the caffeinated group were significantly more likely to mark the ‘similar’ items as ‘similar’ rather than ‘old’. That is, caffeinated participants were better able to distinguish the old items from the merely similar ones, indicating that they retained the memory of the old items better than the people in the placebo group.
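One common way to score such old/similar/new tasks is a lure discrimination index: the rate of calling lures ‘similar’ minus the rate of calling foils ‘similar’. The sketch below illustrates that general scoring idea; it is not necessarily the paper’s exact metric, and the response data are invented:

```python
def discrimination_index(responses):
    """responses: (item_type, answer) pairs, item_type in {'old', 'lure', 'foil'}.

    Returns p('similar' | lure) - p('similar' | foil); higher values mean
    lures are recognized as similar-but-not-identical to studied items.
    """
    def similar_rate(item_type):
        answers = [ans for typ, ans in responses if typ == item_type]
        return sum(ans == "similar" for ans in answers) / len(answers)
    return similar_rate("lure") - similar_rate("foil")

# Invented responses for one participant: 7 of 10 lures and 2 of 10 foils
# were labeled "similar".
data = ([("lure", "similar")] * 7 + [("lure", "old")] * 3 +
        [("foil", "similar")] * 2 + [("foil", "new")] * 8)
print(f"discrimination index: {discrimination_index(data):.2f}")
```

A participant who merely recognizes general familiarity would call lures ‘old’, pushing the index toward zero; correctly flagging lures as ‘similar’ pushes it up.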


To rule out an effect of caffeine on memory retrieval rather than memory consolidation, the authors also ran a test in which caffeine was administered 1 hour before a learning task on the second day; this prior administration produced no statistical difference relative to placebo. Different doses of caffeine (100 mg, 200 mg, 300 mg) were also tested: there was a difference between the results of the 100 mg and 200 mg doses but not between the 200 mg and the higher dose. This suggests that while there is a minimum dose of caffeine that may help memory consolidation, the effects of higher doses will have to be further investigated (many people consume much higher doses of caffeine during their typical caffeinated week).


The authors acknowledge one obvious limitation of the study – the possibility that awareness of caffeine ingestion might have affected memory consolidation; however, they found an even split between participants who thought they had ingested caffeine and those who thought they had received a placebo.

Scooped by Huey O'Brien

SHY hypothesis explains that sleep is the price we pay for learning


Two leading sleep scientists from the University of Wisconsin School of Medicine and Public Health say that their synaptic homeostasis hypothesis of sleep, or "SHY," challenges the theory that sleep strengthens brain connections. The SHY hypothesis, which takes into account years of evidence from human and animal studies, says that sleep is important because it weakens the connections among brain cells to save energy, avoid cellular stress, and maintain the ability of neurons to respond selectively to stimuli.


"Sleep is the price the brain must pay for learning and memory," says Dr. Giulio Tononi, of the UW Center for Sleep and Consciousness. "During wake, learning strengthens the synaptic connections throughout the brain, increasing the need for energy and saturating the brain with new information. Sleep allows the brain to reset, helping integrate newly learned material with consolidated memories, so the brain can begin anew the next day."


Tononi and his co-author Dr. Chiara Cirelli, both professors of psychiatry, explain their hypothesis in a review article in today's issue of the journal Neuron. Their laboratory studies sleep and consciousness in animals ranging from fruit flies to humans; SHY takes into account evidence from molecular, electrophysiological and behavioral studies, as well as from computer simulations. "Synaptic homeostasis" refers to the brain's ability to maintain a balance in the strength of connections between its nerve cells.


Why would the brain need to reset? Suppose someone spent the waking hours learning a new skill, such as riding a bike. The circuits involved in learning would be greatly strengthened, but the next day the brain would need to pay attention to learning a new task. Those bike-riding circuits would therefore need to be damped down so they don't interfere with the new day's learning.


"Sleep helps the brain renormalize synaptic strength based on a comprehensive sampling of its overall knowledge of the environment," Tononi says, "rather than being biased by the particular inputs of a particular waking day."


The reason we don't also forget how to ride a bike after a night's sleep is that the circuits actively involved in learning are damped down less than those that were not. Indeed, there is evidence that sleep enhances important features of memory, including acquisition, consolidation, gist extraction, integration and "smart forgetting," which allows the brain to rid itself of the inevitable accumulation of unimportant details. A common belief, however, is that sleep helps memory by further strengthening the neural circuits built during waking learning. Tononi and Cirelli argue instead that consolidation and integration of memories, as well as the restoration of the ability to learn, all come from sleep's ability to decrease synaptic strength and enhance signal-to-noise ratios.

Scooped by Huey O'Brien

Improve learning by taming instructional complexity


From using concrete or abstract materials to giving immediate or delayed feedback, there are rampant debates over the best teaching strategies to use. But, in reality, improving education is not as simple as choosing one technique over another.

Carnegie Mellon University and Temple University researchers scoured the educational research landscape and found that, because improved learning depends on many different factors, there are actually more than 205 trillion instructional options available.


In the Nov. 22 issue of Science, the researchers break down exactly how complicated improving education really is when considering the combination of different dimensions -- spacing of practice, studying examples or practicing procedures, to name a few -- with variations in ideal dosage and in student needs as they learn. The researchers offer a fresh perspective on educational research by focusing on conclusive approaches that truly impact classroom learning.


The findings were published only a week after CMU launched the Simon Initiative to accelerate the use of learning science and technology to improve student learning. Named to honor the work of the late Nobel Laureate and CMU Professor Herbert Simon, the initiative will harness CMU's decades of learning data and research to improve educational outcomes for students everywhere.


"There are not just two ways to teach, as our education debates often seem to indicate," said lead author Ken Koedinger, professor of human-computer interaction at Carnegie Mellon, director of the Pittsburgh Science of Learning Center (PSLC) and co-coordinator of the Simon Initiative. "There are trillions of possible ways to teach. Part of the instructional complexity challenge is that education is not 'one size fits all,' and optimal forms of instruction depend on details, such as how much a learner already knows and whether a fact, concept, or thinking skill is being targeted."



Scooped by Huey O'Brien

Why Do Employees Forget Their Training?


On its own, the human brain does only an adequate job of retaining information: any knowledge retained will fade over time without reinforcement (don't even ask me to recall high school math!). Trainers have a responsibility to make sure that an employee truly knows a skill, and that goes further than checking your online training software to see if they've passed.




A good starting point to stem the leak of knowledge is to look at the steps that someone needs to take in order to become truly proficient at a task. This is described eloquently as the Four Stages Of Competence.


>> Unconscious Incompetent


In this stage you don’t know that you are unable to tie your shoes.


>> Conscious Incompetent


In this stage you are aware that you can't tie your shoes but haven't yet gained the skill.


>> Conscious Competent (skill)


In this stage you have gained a skill and are aware when you are using it to tie your shoes.


>> Unconscious Competent (habit)


In this stage, the skill has become habit and you are unaware that you are using that skill in order to tie your shoes.


Unconscious Competence is the goal of any training program, but that is rarely achieved through online training alone. It's accomplished by reinforcing the knowledge via spaced repetition of knowledge checks after the initial online training has been completed. These checks can be delivered via your learning management system (LMS) or in person through a blended learning program.


While a well-designed online training course can do wonders to impart knowledge, we are human after all. It’s the continued reinforcement of that knowledge which is key and transforms a skill into a habit.
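The spaced-repetition idea above can be sketched as a simple follow-up scheduler. This is an illustration only, not any particular LMS's algorithm; the expanding intervals are an assumed default, and real programs would tune them per learner:

```python
from datetime import date, timedelta

def knowledge_check_schedule(training_day, intervals_days=(1, 7, 30, 90)):
    """Return the dates for follow-up knowledge checks after initial training.

    The expanding intervals (1, 7, 30, 90 days) are illustrative defaults:
    each check arrives just as the previous reinforcement starts to fade."""
    return [training_day + timedelta(days=d) for d in intervals_days]

checks = knowledge_check_schedule(date(2014, 1, 6))
print(checks[0])  # 2014-01-07
```

An LMS could generate this list when a learner completes a course and queue one knowledge check per date, so reinforcement happens automatically rather than relying on the learner to revisit the material.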





Rescooped by Huey O'Brien from Collective Intelligence & Distance Learning

Is technology making things too easy?


Do you remember when you had to remember everything yourself? No?  Well, if you’ve forgotten those days, don’t worry, because they aren’t coming back.


If you’ve perfected your relationship with technology, then the memory on your smartphone means that you should never miss an appointment, lose someone’s contact details, or struggle to remember an important detail again. Mobile technology gives you perfect recall, freeing up your precious brainpower for other things.


But is all this advanced technology making things a little bit too easy? Is the fact that we almost always have an internet-connected device to hand making us lazy?


In a study conducted at Columbia University, subjects were asked to type facts and trivia into a computer. Half of the subjects were told that the information would be saved, while the other half were told it would be erased. The group who were told it would be erased were significantly more likely to remember the information.


In another test, subjects were asked to remember both a trivia statement and which of five computer folders it had been saved in; they found it easier to recall the folder than the fact.


The researchers concluded that the internet has become a primary form of external or transactive memory. (Transactive memory is a kind of collective external memory – it used to be the ‘group mind’ of a family, group, or team, but is increasingly being replaced by the web itself; our collective recollections stored on the omnipresent cloud.)



Scooped by Huey O'Brien

The Human Brain Project's Ambitious Mission


The Human Brain Project is an international effort to understand the human brain and to use that research to advance computer technologies. The goal is to create a computer simulation of the human brain; the research is funded, in part, by the European Union and will include more than 135 institutions, reports the BBC.


The Human Brain Project, HBP, is “A global, collaborative effort for neuroscience, medicine and computing to understand the brain, its diseases and its computational capabilities.” Over ten years, the institutions involved will develop new technology and conduct research on the human brain. The HBP will attempt to create an “exascale” supercomputer, 1,000 times faster than supercomputers currently available, at a cost of one billion pounds ($1.6 billion). Partners include Cray, HP, Olympus and GlaxoSmithKline.


As part of its neuroscience objective, the HBP will conduct experiments and develop brain models and simulations to map out the brain. According to the HBP, “Neuroscience has the potential to reveal the detailed mechanisms leading from genes to cells and circuits and ultimately to cognition and behavior – the very heart of that which makes us human.”


The project’s medicine objective will analyze data from hospitals to identify biological changes associated with neurological or psychiatric diseases. Through these efforts, the HBP hopes to create a human brain model that they can use to conduct disease simulations, a tool that lets researchers develop and test potential treatments.


Another objective of the HBP involves computing technologies. The human brain’s computational abilities remain a mystery and figuring out how the human brain can make different decisions, as well as how it communicates, could revolutionize computing technology. A new field, “Neuromorphic Computing Systems,” that mimics the human brain, including the ability to learn, could be created based on the work by institutions involved in the HBP. As BBC notes, the HBP is akin to the Human Genome Project but won't attempt to map the entire brain, instead focusing on creating brain simulations using new computer technology.

Scooped by Huey O'Brien

Brain may rely on computer-like mechanism to make sense of novel situations


Our brains give us the remarkable ability to make sense of situations we've never encountered before -- a familiar person in an unfamiliar place, for example, or a coworker in a different job role -- but the mechanism our brains use to accomplish this has been a longstanding mystery of neuroscience.


Now, researchers at the University of Colorado Boulder have demonstrated that our brains could process these new situations by relying on a method similar to the "pointer" system used by computers. A "pointer" tells a computer where to look for information stored elsewhere in the system when it needs the value of a variable.


For the study, published today in the Proceedings of the National Academy of Sciences, the research team relied on sentences with words used in unique ways to test the brain's ability to understand the role familiar words play in a sentence even when those words are used in unfamiliar, and even nonsensical, ways.


For example, in the sentence, "I want to desk you," we understand the word "desk" is being used as a verb even though our past experience with the word "desk" is as a noun.  "The fact that you understand that the sentence is grammatically well formed means you can process these completely novel inputs," said Randall O'Reilly, a professor in CU-Boulder's Department of Psychology and Neuroscience and co-author of the study.


This (study) shows that human brains are able to understand the sentence as a structure with variables -- a subject, a verb and often, an object -- and that the brain can assign a wide variety of words to those variables and still understand the sentence structure. But the way the brain does this has not been understood.
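A rough software analogy of that structure-with-variables idea (a sketch of the concept only, not the brain's actual mechanism, using a toy grammar invented for this example): a sentence frame holds role variables — subject, verb, object — and, like a pointer, each variable merely refers to whichever word fills the role, so even a noun like "desk" can be bound to the verb slot:

```python
def parse_frame(sentence):
    """Bind the words of a simple 'I want to X you' sentence to role
    variables. The roles act like pointers: each names a slot, not a
    fixed word, so any word can fill the verb position."""
    words = sentence.rstrip(".").split()
    # In "I want to desk you": subject = I, verb = desk, object = you.
    return {"subject": words[0], "verb": words[3], "object": words[4]}

frame = parse_frame("I want to desk you.")
print(frame["verb"])  # desk
```

The point of the analogy is that the frame is understood even though "desk" has never been seen as a verb — the slot's role, not the word's history, carries the grammatical meaning.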


While the results show that a pointer-like system could be at play in the brain, the function is not identical to the system used in computer science, the scientists said. It's similar to comparing an airplane's wing and a bird's wing, O'Reilly said. They're both used for flying but they work differently.


In the brain, for example, the pointer-like system must still be learned. The brain has to be trained to understand sentences, whereas a computer can be programmed to understand them immediately.


"As your brain learns, it gets better and better at processing these novel kinds of information," O'Reilly said.

John Mekrut's curator insight, September 25, 2013 12:18 PM

I would like to see this research applied to autism; it seems that many people with autism have difficulty processing data unless it is presented in exactly the same way every time. Deviations and rule breaking seem difficult to adjust to.

Scooped by Huey O'Brien

Old memories recombine to give a taste of the unknown


Ever tried beetroot custard? Probably not, but your brain can imagine how it might taste by reactivating old memories in a new pattern.


Helen Barron and her colleagues at University College London and Oxford University wondered if our brains combine existing memories to help us decide whether to try something new.


So the team used an fMRI scanner to look at the brains of 19 volunteers who were asked to remember specific foods they had tried.

Each volunteer was then given a menu of 13 unusual food combinations – including beetroot custard, tea jelly, and coffee yoghurt – and asked to imagine how good or bad they would taste, and whether or not they would eat them.


"Tea jelly was popular," says Barron. "Beetroot custard not so much."


When each volunteer imagined a new combination, they showed brain activity associated with each of the known ingredients at the same time. It is the first evidence to suggest that we use memory combination to make decisions, says Barron.


Journal reference: Nature Neuroscience, doi: 10.1038/nn.3515

Scooped by Huey O'Brien

Brain's flexible hub network helps humans adapt


One thing that sets humans apart from other animals is our ability to intelligently and rapidly adapt to a wide variety of new challenges -- using skills learned in much different contexts to inform and guide the handling of any new task at hand.


Now, research from Washington University in St. Louis offers new and compelling evidence that a well-connected core brain network based in the lateral prefrontal cortex and the posterior parietal cortex -- parts of the brain most changed evolutionarily since our common ancestor with chimpanzees -- contains "flexible hubs" that coordinate the brain's responses to novel cognitive challenges.


Acting as a central switching station for cognitive processing, this fronto-parietal brain network funnels incoming task instructions to those brain regions most adept at handling the cognitive task at hand, coordinating the transfer of information among processing brain regions to facilitate the rapid learning of new skills, the study finds. "Flexible hubs are brain regions that coordinate activity throughout the brain to implement tasks -- like a large Internet traffic router," suggests Michael Cole, PhD, a postdoctoral research associate in psychology at Washington University and lead author of the study published July 29 in the journal Nature Neuroscience.


"Like an Internet router, flexible hubs shift which networks they communicate with based on instructions for the task at hand and can do so even for tasks never performed before," he adds.  By tracking where and when these unique connection patterns occur in the brain, researchers were able to document flexible hubs' role in shifting previously learned and practiced problem-solving skills and protocols to novel task performance. Known as compositional coding, the process allows skills learned in one context to be re-packaged and re-used in other applications, thus shortening the learning curve for novel tasks.


What's more, by tracking the testing performance of individual study participants, the team demonstrated that the transfer of these processing skills helped participants speed their mastery of novel tasks, essentially using previously practiced processing tricks to get up to speed much more quickly for similar challenges in a novel setting.


"The flexible hub theory suggests this is possible because flexible hubs build up a repertoire of task component connectivity patterns that are highly practiced and can be reused in novel combinations in situations requiring high adaptivity," Cole explains.


Scooped by Huey O'Brien

UCSB study reveals that overthinking can be detrimental to human performance


Trying to explain riding a bike is difficult because it is an implicit memory. The body knows what to do, but thinking about the process can often interfere. So why is it that under certain circumstances paying full attention and trying hard can actually impede performance? A new UC Santa Barbara study, published today in the Journal of Neuroscience, reveals part of the answer.


There are two kinds of memory: implicit, a form of long-term memory not requiring conscious thought and expressed by means other than words; and explicit, another kind of long-term memory formed consciously that can be described in words. Scientists consider these distinct areas of function both behaviorally and in the brain.


Long-term memory is supported by various regions in the prefrontal cortex, the newest part of the brain in terms of evolution and the part of the brain responsible for planning, executive function, and working memory. "A lot of people think the reason we're human is because we have the most advanced prefrontal cortex," said the study's lead author, Taraz Lee, a postdoctoral scholar working in UCSB's Action Lab.


Lee and his colleagues decided to test whether the effects of the attentional control processes associated with explicit memory could directly interfere with implicit memory.  Lee's study used continuous theta-burst transcranial magnetic stimulation (TMS) to temporarily disrupt the function of two different parts of the prefrontal cortex, the dorsolateral and ventrolateral. The dorsal and ventral regions are close to each other but have slightly different functions. Disrupting function in two distinct areas provided a direct causal test of whether explicit memory processing exerts control over sensory resources -- in this case, visual information processing -- and in doing so indirectly harms implicit memory processes.


Participants were shown a series of kaleidoscopic images for about a minute, then had a one-minute break before being given memory tests containing two different kaleidoscopic images. They were then asked to distinguish images they had seen previously from the new ones. "After they gave us that answer, we asked whether they remembered a lot of rich details, whether they had a vague impression, or whether they were blindly guessing," explains Lee. "And the participants only did better when they said they were guessing."


The results of disrupting the function of the dorsolateral prefrontal cortex shed light on why paying attention can be a distraction and affect performance outcomes. "If we ramped down activity in the dorsolateral prefrontal cortex, people remembered the images better," said Lee.

Scooped by Huey O'Brien

Memory improves for older adults using computerized brain-fitness program


UCLA researchers have found that older adults who regularly used a brain-fitness program on a computer demonstrated significantly improved memory and language skills.


The researchers found that of the 69 participants, the 52 individuals who over a six-month period completed at least 40 sessions (of 20 minutes each) on the program showed improvement in both immediate and delayed memory skills, as well as language skills.


The findings suggest that older adults who participate in computerized brain training can improve their cognitive skills.  The study's findings add to a body of research exploring whether brain fitness tools may help improve language and memory and ultimately help protect individuals from the cognitive decline associated with aging and Alzheimer's disease.


Age-related memory decline affects approximately 40 percent of older adults. And while previous studies have shown that engaging in stimulating mental activities can help older adults improve their memory, little research had been done to determine whether the numerous computerized brain-fitness games and memory training programs on the market are effective in improving memory. This is one of the first studies to assess the cognitive effects of a computerized memory-training program.

Scooped by Huey O'Brien

Eight Ways Of Looking At Intelligence


We’re going to consider eight ways of looking at intelligence—eight perspectives provided by the science of learning. A few words about that term: The science of learning is a relatively new discipline born of an agglomeration of fields: cognitive science, psychology, philosophy, neuroscience. Its project is to apply the methods of science to human endeavors—teaching and learning—that have for centuries been mostly treated as an art.


As with anything to do with our idiosyncratic and unpredictable species, there is still a lot of art involved in teaching and learning. But the science of learning can offer some surprising and useful perspectives on how we guide and educate young people.


And so: Eight Ways Of Looking At Intelligence.
Huey O'Brien's insight:

Thought-provoking article summarizing science-of-learning touchpoints.
