Thoughts and analysis on social learning and the evolution of traditional e-learning methods. The paper also examines Learning 2.0 strategies and approaches, along with innovation in modern training systems.
IMPLICATION: Learning Content Design
From using concrete or abstract materials to giving immediate or delayed feedback, there are rampant debates over the best teaching strategies to use. But, in reality, improving education is not as simple as choosing one technique over another.
Carnegie Mellon University and Temple University researchers scoured the educational research landscape and found that because improved learning depends on many different factors, there are actually more than 205 trillion instructional options available.
In the Nov. 22 issue of Science, the researchers break down exactly how complicated improving education really is when considering the combination of different dimensions -- spacing of practice, studying examples or practicing procedures, to name a few -- with variations in ideal dosage and in student needs as they learn. The researchers offer a fresh perspective on educational research by focusing on conclusive approaches that truly impact classroom learning.
The findings were published only a week after CMU launched the Simon Initiative to accelerate the use of learning science and technology to improve student learning. Named to honor the work of the late Nobel Laureate and CMU Professor Herbert Simon, the initiative will harness CMU's decades of learning data and research to improve educational outcomes for students everywhere.
"There are not just two ways to teach, as our education debates often seem to indicate," said lead author Ken Koedinger, professor of human-computer interaction at Carnegie Mellon, director of the Pittsburgh Science of Learning Center (PSLC) and co-coordinator of the Simon Initiative. "There are trillions of possible ways to teach. Part of the instructional complexity challenge is that education is not 'one size fits all,' and optimal forms of instruction depend on details, such as how much a learner already knows and whether a fact, concept, or thinking skill is being targeted.
On its own, the human brain does only an adequate job of retaining information: any knowledge retained will fade over time without reinforcement (don’t even ask me to recall high school math!). Trainers have a responsibility to make sure that an employee truly knows the skills, and that goes further than checking your online training software to see if they’ve passed.
LATHER, RINSE, REPEAT
A good starting point to stem the leak of knowledge is to look at the steps that someone needs to take in order to become truly proficient at a task. This is described eloquently as the Four Stages Of Competence.
>> Unconscious Incompetent
In this stage you don’t know that you are unable to tie your shoes.
>> Conscious Incompetent
In this stage you are aware but haven’t gained the skill to tie your shoes.
>> Conscious Competent (skill)
In this stage you have gained a skill and are aware when you are using it to tie your shoes.
>> Unconscious Competent (habit)
In this stage, the skill has become habit and you are unaware that you are using that skill in order to tie your shoes.
Unconscious Competent is the goal of any training program, but rarely is that achieved through online training alone. It’s accomplished by reinforcing knowledge via spaced repetition of knowledge checks after the initial online training has been completed. These checks can be delivered via your learning management system (LMS) or in person through a blended learning program.
While a well-designed online training course can do wonders to impart knowledge, we are human after all. It’s the continued reinforcement of that knowledge which is key and transforms a skill into a habit.
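The spaced-repetition scheduling described above can be sketched as a simple Leitner-style system. The box count and review intervals (in days) below are illustrative assumptions, not values from the article:

```python
# A minimal Leitner-style spaced-repetition sketch of the knowledge checks
# described above. Box count and intervals are illustrative assumptions.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # days until next review, per box

def next_review(box, correct):
    """Return (new_box, days_until_next_check) after a knowledge check."""
    box = min(box + 1, 5) if correct else 1  # promote on success, reset on a miss
    return box, INTERVALS[box]

box, days = next_review(1, correct=True)
print(box, days)  # a correct answer moves the item to box 2, reviewed in 3 days
```

Each successful check pushes the next review further out, so reinforcement is spaced rather than crammed.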
Do you remember when you had to remember everything yourself? No? Well, if you’ve forgotten those days, don’t worry, because they aren’t coming back.
If you’ve perfected your relationship with technology, then the memory on your smartphone means that you should never miss an appointment, lose someone’s contact details, or struggle to remember an important detail again. Mobile technology gives you perfect recall, freeing up your precious brainpower for other things.
But is all this advanced technology making things a little bit too easy? Is the fact that we almost always have an internet-connected device to hand making us lazy?
In a study conducted at Columbia University, subjects were asked to type facts and trivia into a computer. Half of the subjects were told that the information would be saved, while the other half were told it would be erased. The group who were told it would be erased were significantly more likely to remember the information.
In another test, subjects were asked to remember both a trivia statement and which of five computer folders it was saved in; they found it easier to recall the folder than the fact.
The researchers concluded that the internet has become a primary form of external or transactive memory. (Transactive memory is a kind of collective external memory – it used to be the ‘group mind’ of a family, group, or team, but is increasingly being replaced by the web itself; our collective recollections stored on the omnipresent cloud.)
The Human Brain Project is an international effort to understand the human brain and use that research to advance computer technologies. The goal is to create a computer simulation of the human brain; the research is funded in part by the European Union and involves more than 135 institutions, the BBC reports.
The Human Brain Project, or HBP, is “A global, collaborative effort for neuroscience, medicine and computing to understand the brain, its diseases and its computational capabilities.” For ten years, institutions involved with the HBP will develop new technology and conduct research on the human brain. The HBP will attempt to create an “exascale” supercomputer, 1,000 times faster than supercomputers currently available. The HBP will cost one billion pounds (about $1.6 billion) and will run for 10 years. Partners include Cray, HP, Olympus and GlaxoSmithKline.
As part of its neuroscience objective, the HBP will conduct experiments and develop brain models and simulations to map out the brain. According to the HBP, “Neuroscience has the potential to reveal the detailed mechanisms leading from genes to cells and circuits and ultimately to cognition and behavior – the very heart of that which makes us human.”
The project’s medicine objective will analyze data from hospitals to identify biological changes associated with neurological or psychiatric diseases. Through these efforts, the HBP hopes to create a human brain model that they can use to conduct disease simulations, a tool that lets researchers develop and test potential treatments.
Another objective of the HBP involves computing technologies. The human brain’s computational abilities remain a mystery and figuring out how the human brain can make different decisions, as well as how it communicates, could revolutionize computing technology. A new field, “Neuromorphic Computing Systems,” that mimics the human brain, including the ability to learn, could be created based on the work by institutions involved in the HBP. As BBC notes, the HBP is akin to the Human Genome Project but won't attempt to map the entire brain, instead focusing on creating brain simulations using new computer technology.
Our brains give us the remarkable ability to make sense of situations we've never encountered before -- a familiar person in an unfamiliar place, for example, or a coworker in a different job role -- but the mechanism our brains use to accomplish this has been a longstanding mystery of neuroscience.
Now, researchers at the University of Colorado Boulder have demonstrated that our brains could process these new situations by relying on a method similar to the "pointer" system used by computers. "Pointers" are used to tell a computer where to look for information stored elsewhere in the system to replace a variable.
For the study, published today in the Proceedings of the National Academy of Sciences, the research team relied on sentences with words used in unique ways to test the brain's ability to understand the role familiar words play in a sentence even when those words are used in unfamiliar, and even nonsensical, ways.
For example, in the sentence, "I want to desk you," we understand the word "desk" is being used as a verb even though our past experience with the word "desk" is as a noun. "The fact that you understand that the sentence is grammatically well formed means you can process these completely novel inputs," said Randall O'Reilly, a professor in CU-Boulder's Department of Psychology and Neuroscience and co-author of the study.
This study shows that human brains are able to understand the sentence as a structure with variables -- a subject, a verb and, often, an object -- and that the brain can assign a wide variety of words to those variables and still understand the sentence structure. But how the brain does this has not been understood.
While the results show that a pointer-like system could be at play in the brain, the function is not identical to the system used in computer science, the scientists said. It's similar to comparing an airplane's wing and a bird's wing, O'Reilly said. They're both used for flying but they work differently.
In the brain, for example, the pointer-like system must still be learned. The brain has to be trained, in this case, to understand sentences, while a computer can be programmed to understand sentences immediately.
"As your brain learns, it gets better and better at processing these novel kinds of information," O'Reilly said.
I would like to see this research applied to autism; many people with autism seem to have difficulty processing data unless it is presented in exactly the same way every time, and deviations and rule breaking seem difficult to adjust to.
Ever tried beetroot custard? Probably not, but your brain can imagine how it might taste by reactivating old memories in a new pattern.
Helen Barron and her colleagues at University College London and Oxford University wondered if our brains combine existing memories to help us decide whether to try something new.
So the team used an fMRI scanner to look at the brains of 19 volunteers who were asked to remember specific foods they had tried.
Each volunteer was then given a menu of 13 unusual food combinations – including beetroot custard, tea jelly, and coffee yoghurt – and asked to imagine how good or bad they would taste, and whether or not they would eat them.
"Tea jelly was popular," says Barron. "Beetroot custard not so much."
When each volunteer imagined a new combination, they showed brain activity associated with each of the known ingredients at the same time. It is the first evidence to suggest that we use memory combination to make decisions, says Barron.
Journal reference: Nature Neuroscience, doi: 10.1038/nn.3515
One thing that sets humans apart from other animals is our ability to intelligently and rapidly adapt to a wide variety of new challenges -- using skills learned in much different contexts to inform and guide the handling of any new task at hand.
Now, research from Washington University in St. Louis offers new and compelling evidence that a well-connected core brain network based in the lateral prefrontal cortex and the posterior parietal cortex -- parts of the brain most changed evolutionarily since our common ancestor with chimpanzees -- contains "flexible hubs" that coordinate the brain's responses to novel cognitive challenges.
Acting as a central switching station for cognitive processing, this fronto-parietal brain network funnels incoming task instructions to those brain regions most adept at handling the cognitive task at hand, coordinating the transfer of information among processing brain regions to facilitate the rapid learning of new skills, the study finds. "Flexible hubs are brain regions that coordinate activity throughout the brain to implement tasks -- like a large Internet traffic router," suggests Michael Cole, PhD, a postdoctoral research associate in psychology at Washington University and lead author of the study published July 29 in the journal Nature Neuroscience.
"Like an Internet router, flexible hubs shift which networks they communicate with based on instructions for the task at hand and can do so even for tasks never performed before," he adds. By tracking where and when these unique connection patterns occur in the brain, researchers were able to document flexible hubs' role in shifting previously learned and practiced problem-solving skills and protocols to novel task performance. Known as compositional coding, the process allows skills learned in one context to be re-packaged and re-used in other applications, thus shortening the learning curve for novel tasks.
What's more, by tracking the testing performance of individual study participants, the team demonstrated that the transfer of these processing skills helped participants speed their mastery of novel tasks, essentially using previously practiced processing tricks to get up to speed much more quickly for similar challenges in a novel setting.
"The flexible hub theory suggests this is possible because flexible hubs build up a repertoire of task component connectivity patterns that are highly practiced and can be reused in novel combinations in situations requiring high adaptivity," Cole explains.
Trying to explain riding a bike is difficult because it is an implicit memory. The body knows what to do, but thinking about the process can often interfere. So why is it that under certain circumstances paying full attention and trying hard can actually impede performance? A new UC Santa Barbara study, published today in the Journal of Neuroscience, reveals part of the answer.
There are two kinds of memory: implicit, a form of long-term memory not requiring conscious thought and expressed by means other than words; and explicit, another kind of long-term memory formed consciously that can be described in words. Scientists consider these distinct areas of function both behaviorally and in the brain.
Long-term memory is supported by various regions in the prefrontal cortex, the newest part of the brain in terms of evolution and the part of the brain responsible for planning, executive function, and working memory. "A lot of people think the reason we're human is because we have the most advanced prefrontal cortex," said the study's lead author, Taraz Lee, a postdoctoral scholar working in UCSB's Action Lab.
Lee and his colleagues decided to test whether the effects of the attentional control processes associated with explicit memory could directly interfere with implicit memory. Lee's study used continuous theta-burst transcranial magnetic stimulation (TMS) to temporarily disrupt the function of two different parts of the prefrontal cortex, the dorsolateral and ventrolateral. The dorsal and ventral regions are close to each other but have slightly different functions. Disrupting function in two distinct areas provided a direct causal test of whether explicit memory processing exerts control over sensory resources -- in this case, visual information processing -- and in doing so indirectly harms implicit memory processes.
Participants were shown a series of kaleidoscopic images for about a minute, then had a one-minute break before being given memory tests containing two different kaleidoscopic images. They were then asked to distinguish images they had seen previously from the new ones. "After they gave us that answer, we asked whether they remembered a lot of rich details, whether they had a vague impression, or whether they were blindly guessing," explains Lee. "And the participants only did better when they said they were guessing."
The results of disrupting the function of the dorsolateral prefrontal cortex shed light on why paying attention can be a distraction and affect performance outcomes. "If we ramped down activity in the dorsolateral prefrontal cortex, people remembered the images better," said Lee.
UCLA Researchers have found that older adults who regularly used a brain-fitness program on a computer demonstrated significantly improved memory and language skills.
The researchers found that of the 69 participants, the 52 individuals who over a six-month period completed at least 40 sessions (of 20 minutes each) on the program showed improvement in both immediate and delayed memory skills, as well as language skills.
The findings suggest that older adults who participate in computerized brain training can improve their cognitive skills. The study's findings add to a body of research exploring whether brain fitness tools may help improve language and memory and ultimately help protect individuals from the cognitive decline associated with aging and Alzheimer's disease.
Age-related memory decline affects approximately 40 percent of older adults. And while previous studies have shown that engaging in stimulating mental activities can help older adults improve their memory, little research had been done to determine whether the numerous computerized brain-fitness games and memory training programs on the market are effective in improving memory. This is one of the first studies to assess the cognitive effects of a computerized memory-training program.
We’re going to consider eight ways of looking at intelligence—eight perspectives provided by the science of learning. A few words about that term: The science of learning is a relatively new discipline born of an agglomeration of fields: cognitive science, psychology, philosophy, neuroscience. Its project is to apply the methods of science to human endeavors—teaching and learning—that have for centuries been mostly treated as an art.
As with anything to do with our idiosyncratic and unpredictable species, there is still a lot of art involved in teaching and learning. But the science of learning can offer some surprising and useful perspectives on how we guide and educate young people.
And so: Eight Ways Of Looking At Intelligence.
1. SITUATIONS CAN MAKE US SMARTER.
2. BELIEFS CAN MAKE US SMARTER.
3. EXPERTISE CAN MAKE US SMARTER.
4. ATTENTION CAN MAKE US SMARTER.
5. EMOTIONS CAN MAKE US SMARTER.
6. TECHNOLOGY CAN MAKE US SMARTER.
7. OUR BODIES CAN MAKE US SMARTER.
8. RELATIONSHIPS CAN MAKE US SMARTER.
Thought provoking article summarizing Science of Learning touchpoints.
One of the main concepts that leads to successful e-Learning course design is Information Chunking. But what is chunking? Why is it embedded in the world of instructional design? And what kind of chunking strategies can an instructional designer use to enhance learning?
What Is Chunking?
Chunking refers to the strategy of making more efficient use of our short-term memory by organizing and grouping various pieces of information together. When information is chunked into groups, the brain can process it more easily and quickly, because our working memory can hold only a limited amount of data at the same time. So, chunking is useful when we try to memorize something complex, such as a 14-digit number, or large pieces of information, like a course’s content, since the smaller pieces are easier to retain and recall.
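The idea can be sketched in a few lines: regrouping a 14-digit number into chunks of four means working memory holds roughly four items instead of fourteen. The chunk size and example number are illustrative:

```python
# A minimal sketch of chunking: regroup a long digit string so working
# memory holds a few chunks instead of many digits. Chunk size is illustrative.
def chunk(digits, size=4):
    return [digits[i:i + size] for i in range(0, len(digits), size)]

number = "31415926535897"   # 14 separate digits: hard to hold in mind
print(chunk(number))        # ['3141', '5926', '5358', '97'] -- just 4 chunks
```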
Chunking For e-Learning
Even though chunking is a strategic concept, instructors shouldn’t forget that it does have its limitations. The more complex the concepts the brain attempts to absorb, the less our working memory will retain. If the working memory is full, the new information will just vanish. However, this doesn’t mean that you should just cut the length of the e-Learning course. It takes a more effective technique to make the process work, and it’s by structuring the e-Learning course’s content in a progressive way. This constitutes a challenge for e-Learning and consequently for the instructional designer.
Thus, before the creation of an e-Learning course, the Instructional Designer should ask himself a number of questions. What are the e-Learning course’s limits? How much and what exactly should appear on screen? What kind of strategies can I use to group information? All these questions fall into the concept of chunking and there are various strategies to successfully implement it and enhance the student’s learning potential.
*** Go to Article for 3 Chunking Strategies ***
IMPLICATION: Lesson Design, Working Memory
It's well known that synapses in the brain, the connections between neurons and other cells that allow for the transmission of information, grow when they're exposed to a stimulus. New research from the lab of Carnegie Mellon Associate Professor of Biological Sciences Alison L. Barth has shown that in the short term, synapses get even stronger than previously thought, but then quickly go through a transitional phase where they weaken.
"When you think of learning, you think that it's cumulative. We thought that synapses started small and then got bigger and bigger. This isn't the case," said Barth, who also is a member of the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition. "Based on our data, it seems like synapses that have recently been strengthened are peculiarly vulnerable -- more stimulation can actually wipe out the effects of learning.
"Psychologists know that for long-lasting memory, spaced training -- like studying for your classes after very lecture, all semester long -- is superior to cramming all night before the exam," Barth said. "This study shows why. Right after plasticity, synapses are almost fragile -- more training during this labile phases is actually counterproductive."
IMPLICATION: Spacing, Memory
Google Glass may allow users to do amazing things, but it does not abolish the limits on the human ability to pay attention. Intuitions about attention lead to wrong assumptions about what we’re likely to see; we are especially unaware of how completely our attention can be absorbed by the continual availability of compelling and useful information. Only by understanding the science of attention and the limits of the human mind and brain can we design new interfaces that are both revolutionary and safe.
IMPLICATION: Working Memory, Attention
The information processing capacity of learners is limited, so it's important that designers take this into account when creating eLearning courses by considering the three types of cognitive load.
In our brains, we have two types of memory. One is our working memory, which we use to process new information. The capacity of our working memory is quite limited so it can only handle so much before it becomes overloaded. The second is our long-term memory, which is where we store information from our working memory and where we retrieve that information from later. Within our long-term memory, information is organized into schemas, which are organizational frameworks of storage (like filing cabinets). Not exceeding working memory capacity will result in greater transfer of information into long-term memory.
Cognitive Load Theory (CLT) proposes that there are three types of cognitive load:
>> Intrinsic cognitive load
This is the level of complexity inherent in the material being studied. There isn’t much that we can do about intrinsic cognitive load; some tasks are more complex than others, so they will have different levels of intrinsic cognitive load.
>> Extraneous cognitive load
This is cognitive load imposed by non-relevant elements that require extra mental processing, e.g. decorative pictures and animations that add nothing to the learning experience.
>> Germane cognitive load
These are elements that allow cognitive resources to be put towards learning, i.e. that assist with information processing.
Instructional designers need to be aware of the cognitive requirements that learning designs impose and ensure that learners can meet those requirements. Learning designers must also ensure that all aspects of design focus on adding value to the learning experience.
PROVIDENCE, R.I. [Brown University] — People and rats may think alike when they've made a mistake and are trying to adjust their thinking.
That's the conclusion of a study published online Oct. 20 in Nature Neuroscience that tracked specific similarities in how human and rodent subjects adapted to errors as they performed a simple time estimation task. When members of either species made a mistake in the trials, electrode recordings showed that they employed low-frequency brainwaves in the medial frontal cortex (MFC) of the brain to synchronize neurons in their motor cortex. That action correlated with subsequent performance improvements on the task.
"These findings suggest that neuronal activity in the MFC encodes information that is involved in monitoring performance and could influence the control of response adjustments by the motor cortex," wrote the authors, who performed the research at Brown University and Yale University.
The importance of the findings extends beyond a basic understanding of cognition, because they suggest that rat models could be a useful analog for humans in studies of how such "adaptive control" neural mechanics are compromised in psychiatric diseases.
In case we needed another reason to close the 15 extra browser tabs we have open, Clifford Nass, a communication professor at Stanford, has provided major motivation for monotasking: according to his research, the more you multitask, the less you're able to learn, concentrate, or be nice to people.
>>Our brains are plastic but they're not elastic.
For a case study, turn to your nearest broadcast news station (and don't say Fast Company didn't warn you): if the talking head on the screen is accompanied by a "crawler" at the bottom blurbing baseball scores and the day's tragedies, you'll be less likely to remember whatever the pundit is saying. Why? Because, research shows that the more you're multitasking, the less you're able to filter out irrelevant information.
As Nass told NPR, if you think you're good at multitasking, you aren't:
. . . "We have scales that allow us to divide up people into people who multitask all the time and people who rarely do, and the differences are remarkable. People who multitask all the time can't filter out irrelevancy. They can't manage a working memory. They're chronically distracted.
They initiate much larger parts of their brain that are irrelevant to the task at hand. And . . . they're even terrible at multitasking. When we ask them to multitask, they're actually worse at it. So they're pretty much mental wrecks."
>>Multitasking rewires our brains.
When we multitask all day, those scattered habits literally change the pathways in our brains. The consequence, according to Nass's research, is that sustaining your attention becomes impossible.
"If we [multitask] all the time--brains are remarkably plastic, remarkably adaptable," he says, referencing neuroplasticity, the way the structures of your brain literally re-form to the patterns of your thought. "We train our brains to a new way of thinking. And then when we try to revert our brains back, our brains are plastic but they're not elastic. They don't just snap back into shape."
>>How it affects our work
As James O'Toole notes on the strategy+business blog, the dangers of multitasking are as multifarious as they are nefarious.
- Multitasking stunts emotional intelligence:
Instead of addressing the person in front of you, you address a text message.
- Multitasking makes us worse managers:
The more we multitask, the worse we are at sorting through information--recall the broadcast news kerfuffle above.
- Multitasking makes us less creative:
Since attention is the midwife of creativity, if you can't focus, that thought-baby isn't coming out.
Brain training games, apps, and websites are popular and it's not hard to see why -- who wouldn't want to give their mental abilities a boost? New research suggests that brain training programs might strengthen your ability to hold information in mind, but they won't bring any benefits to the kind of intelligence that helps you reason and solve problems.
The findings are published in Psychological Science, a journal of the Association for Psychological Science. "It is hard to spend any time on the web and not see an ad for a website that promises to train your brain, fix your attention, and increase your IQ," says psychological scientist and lead researcher Randall Engle of Georgia Institute of Technology. "These claims are particularly attractive to parents of children who are struggling in school."
According to Engle, the claims are based on evidence that shows a strong correlation between working memory capacity (WMC) and general fluid intelligence. Working memory capacity refers to our ability to keep information either in mind or quickly retrievable, particularly in the presence of distraction. General fluid intelligence is the ability to infer relationships, do complex reasoning, and solve novel problems.
The correlation between WMC and fluid intelligence has led some to surmise that increasing WMC should lead to an increase in fluid intelligence as well, but "this assumes that the two constructs are the same thing, or that WMC is the basis for fluid intelligence," Engle notes.
To better understand the relationship between these two aspects of cognition, Engle and colleagues had 55 undergraduate students complete 20 days of training on certain cognitive tasks. The researchers administered a battery of tests before and after training to gauge improvement and transfer of learning, including a variety of WMC measures and three measures of fluid intelligence.
The results suggest that the students improved in their ability to update and maintain information on multiple tasks as they switched between them, which could have important implications for real-world multitasking: "This work affects nearly everyone living in the complex modern world," says Harrison, "but it particularly affects individuals that find themselves trying to do multiple tasks or rapidly switching between complex tasks, such as driving and talking on a cell phone, alternating between conversations with two different people, or cooking dinner and dealing with a crying child."
Despite the potential boost for multitasking, the benefits of training didn't transfer to fluid intelligence. Engle points out that just because WMC and fluid intelligence are highly correlated doesn't mean that they are the same.
While you are browsing online, you could be squandering memories -- or losing important information.
Contrary to common wisdom, an idle brain is in fact doing important work -- and in the age of constant information overload, it's a good idea to go offline on a regular basis, says a researcher from Stockholm's KTH Royal Institute of Technology.
Erik Fransén, whose research focuses on short-term memory and ways to treat diseased neurons, says that a brain exposed to a typical session of social media browsing can easily become hobbled by information overload. The result is that less information gets filed away in your memory.
The problem begins in a system of the brain commonly known as the working memory, or what most people know as short-term memory. That's the system of the brain that we need when we communicate, Fransén says.
"Working memory enables us to filter out information and find what we need in the communication," he says. "It enables us to work online and store what we find online, but it's also a limited resource." Models show why it has limits. At any given time, the working memory can carry up to three or four items, Fransén says. When we attempt to stuff more information in the working memory, our capacity for processing information begins to fail.
"When you are on Facebook, you are making it harder to keep the things that are 'online' in your brain that you need," he says. "In fact, when you try to process sensory information like speech or video, you are going to need partly the same system of working memory, so you are reducing your own working memory capacity.
"And when you try to store many things in your working memory, you get less good at processing information." You're also robbing the brain of time it needs to do some necessary housekeeping. The brain is designed for both activity and relaxation, he says. "The brain is made to go into a less active state, which we might think is wasteful; but probably memory consolidation, and transferring information into memory takes place in this state. Theories of how memory works explain why these two different states are needed
"When we max out our active states with technology equipment, just because we can, we remove from the brain part of the processing, and it can't work."
Motivation has been and continues to be a widely studied area across many of life’s domains. Many motivation theories focus on the amount of motivation, with a larger quantity said to result in improved outcomes. However, as educators we should not focus on generating more motivation from our learners, but instead on creating conditions that facilitate the internalization of motivation within our learners.
Self-determination theory (SDT), an empirical theory of motivation developed by Edward Deci and Richard Ryan, focuses on the degree to which behaviour is self-motivated and self-determined. SDT proposes that all humans require the satisfaction of three basic psychological needs, namely:
- Autonomy (a sense of being in control and freedom),
- Competence (a sense of being able to do something i.e. being competent),
- Relatedness (a sense of being associated or connected to others).
Research by Ryan, Rigby and Przybylski into the motivation to play video games (regardless of the game type) found that motivation to play is accounted for by how well the game satisfies our psychological needs:
- Autonomy: the extent to which the game provides flexibility over movement and strategies, choice over tasks and goals, and rewards that provide feedback rather than control.
- Competence: the extent to which tasks within the game provide ongoing challenges and opportunities for feedback.
- Relatedness: the extent to which the game provides interactions between players.
In addition to need satisfaction, their research also found that:
- Presence: the extent to which the player feels immersed in the game environment, as opposed to being outside the game manipulating the controls, and
- Intuitive controls: the extent to which the controls make sense and don’t interfere with feelings of presence.
These were also important, as they allow players to focus on game play and access the need satisfaction provided by the game.
Contexts that satisfy all three basic needs support people’s actions, resulting in more sustained motivation over time and more positive outcomes. Therefore, if we use strategies that support competence, autonomy and relatedness, we can help learners internalize their motivation for externally regulated activities.
Bernard Luskin, a pioneer of e-learning, advocates that the ‘e’ should be interpreted to mean “exciting, energetic, enthusiastic, emotional, extended and excellent” in addition to “electronic”. These words are particularly important when designing e-learning platforms for Generation Z, our future audiences: young people born after 1992. This is the first generation to have been brought up truly digital (they are sometimes referred to as Digital Natives); they don’t see the Internet or social media as anything special, they simply expect it to be there.
The top 3 attitudinal and behavioral trends to consider when designing e-Learning products for Generation Z:
> Online is as important as offline to Gen Z
With Gen Z, it is crucial to understand that the online experience is just as important as offline (real life) to them.
> Playing = Learning
Another key trend is in the relationship between playing and learning. Gen Z children have been found to learn more effectively if they are left to solve problems rather than being taught the answers, and their gaming experience means that they are happy to ‘work on a level’ as they know that even if they fail, they will learn something that they can use to progress further next time.
> Digital DNA
Generation Z really does have ‘digital DNA’: researchers say that if you compare the brain of a Gen Z child with that of someone born 10 years earlier, you can see a physical difference. The part of the brain responsible for visual ability is more developed.
IMPLICATION: Generational Differences
A surgical team closes an abdominal incision, successfully completing a difficult operation. Weeks later, the patient comes into the ER complaining of abdominal pain and an X-ray reveals that one of the forceps used in the operation was left inside the patient. Why would highly skilled professionals forget to perform a simple task they have executed without difficulty thousands of times before?
These kinds of oversights occur in professions as diverse as aviation and computer programming, but research from psychological science reveals that these lapses may not reflect carelessness or lack of skill but failures of prospective memory.
In an article in the August issue of Current Directions in Psychological Science, a journal of the Association for Psychological Science, R. Key Dismukes, a scientist at the NASA Ames Research Center, reviews the rapidly growing field of research on prospective memory, highlighting the various ways in which characteristics of everyday tasks interact with normal cognitive processes to produce memory failures that sometimes have disastrous consequences.
Failures of prospective memory typically occur when we form an intention to do something later, become engaged with various other tasks, and lose focus on the thing we originally intended to do. Despite the name, prospective memory actually depends on several cognitive processes, including planning, attention, and task management. Common in everyday life, these memory lapses are mostly annoying, but can have tragic consequences. "Every summer several infants die in hot cars when parents leave the car, forgetting the child is sleeping quietly in the back seat," Dismukes points out.
To defend against prospective memory failures and their potentially disastrous consequences, professionals in aviation and medicine now rely on specific memory tools, including checklists. Research also reveals that implementation intentions, identifying when and where a specific intention will be carried out, can help guard against such failures in everyday life. Dismukes points out that having this kind of concrete plan has been shown to improve prospective memory performance by as much as two to four times in tasks such as exercising, medication adherence, breast self-examination, and homework completion.
Along with checklists and implementation intentions, Dismukes and others have highlighted several other measures that can help us remember and carry out intended actions:
- Use external memory aids such as the alerting calendar on cell phones
- Avoid multitasking when one of your tasks is critical
- Carry out crucial tasks now instead of putting them off until later
- Create reminder cues that stand out and put them in a difficult-to-miss spot
- Link the target task to a habit that you have already established
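The implementation-intentions idea described above, binding each intention to a concrete when/where cue so the cue itself triggers recall, can be sketched as a simple cue-to-action lookup. The cues and tasks below are purely illustrative:

```python
# A toy sketch of "implementation intentions": each intention is bound
# to a concrete when/where cue instead of being held as a vague goal.
intentions = {
    "after breakfast": "take medication",
    "when leaving the office": "pick up groceries",
    "before closing the incision": "count the forceps",
}

def on_cue(cue):
    """Return the action bound to a cue, if any (the external 'reminder')."""
    return intentions.get(cue)

print(on_cue("after breakfast"))  # → take medication
```

An external aid like this works precisely because it does not rely on prospective memory: the environment, not the busy brain, carries the intention.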
Research shows that the act of gesturing itself seems to accelerate learning, bringing nascent knowledge into consciousness and aiding the understanding of new concepts.
A new conceptualization of intelligence is taking shape in the social and biological sciences. It involves many lines of inquiry that can be loosely grouped under the title situated cognition: the idea that thinking doesn’t happen in some abstract, disembodied space, but always in a particular brain, in a particular body, located in a particular social and physical world. The moment-by-moment conditions that prevail in that brain, that body, and that world powerfully affect how well we think and perform.
One of the most interesting lines of inquiry within this perspective is known as embodied cognition: the recognition that our bodies play a big role in how we think. Physical gestures, for example, constitute a kind of back-channel way of expressing and even working out our thoughts. Research demonstrates that the movements we make with our hands when we talk constitute a kind of second language, adding information that’s absent from our words. It’s learning’s secret code: Gesture reveals what we know. It reveals what we don’t know. And it reveals (as Donald Rumsfeld might put it) what we know, but don’t yet know we know. What’s more, the congruence—or lack of congruence—between what our voices say and how our hands move offers a clue to our readiness to learn.
IMPLICATION: Embodied Cognition, Situated Cognition
Our memory system, according to cognitive psychology, is divided into the following 2 types:
- Short-term memory, which stores sounds, images and words, allows for short computations, and filters information that either goes to long-term memory or is discarded.
- Long-term memory, which allows us to store information based on meaning and importance for extended periods of time, affects our perception, and constitutes a framework to which new information is attached.
MAIN CHARACTERISTICS OF SHORT-TERM MEMORY
Short-term memory has 3 main characteristics:
- Brief duration, lasting only up to 20 seconds.
- Limited capacity of 7 ± 2 chunks of independent information (Miller’s Law), vulnerable to interference and interruption.
- Its weakening (due to many causes, such as medication, sleep deprivation, a stroke, or a head injury) is the first step to memory loss.
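Miller’s 7 ± 2 limit applies to chunks, not raw items, which is why regrouping information helps it fit into short-term memory. A minimal sketch of chunking a digit string (the digits and the group size of 4 are illustrative choices):

```python
def chunk(digits, size=4):
    """Group a digit string into chunks so the item count fits
    within short-term memory's 7 +/- 2 limit (Miller's Law)."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

# 12 independent digits exceed the limit; 3 chunks do not.
print(chunk("496813205718"))  # → ['4968', '1320', '5718']
```

This is the same trick phone numbers exploit: the digit count stays constant, but the number of independent items to hold drops.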
Short-term memory is responsible for 3 operations:
- Iconic, which is the ability to store images.
- Acoustic, which is the ability to store sounds.
- Working memory, which is the ability to store information until it’s put to use.
For some scientists, working memory is synonymous with short-term memory, but the truth is that working memory is used not only for storing information but also for manipulating it. What’s important is that it’s flexible and dynamic, and it makes all the difference in successful learning.
MAIN CHARACTERISTICS OF LONG-TERM MEMORY
Information in Long-term memory is stored as a network of schemas, which then converts into knowledge structures. This is exactly why we recall relevant knowledge when we stumble upon similar information. The challenge for an instructional designer is to activate those existing structures before presenting new information and that can be achieved in a variety of ways, like with graphics, movies, curiosity-provoking questions, etc.
2 Types of Long-term memory
- Explicit: Conscious memories that include our perception of the world, as well as our own personal experiences.
- Implicit: Unconscious memories that we use without realizing it.
Long-term memory is responsible for 3 operations:
- Encoding, which is the ability to convert information into a knowledge structure.
- Storage, which is the ability to accumulate chunks of information.
- Retrieval, which is the ability to recall things we already know.
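The three operations can be sketched as a minimal store. The class, the schema keys, and the example fact are illustrative only, not a model of real memory:

```python
class LongTermMemory:
    """Toy sketch of the encode / store / retrieve operations."""

    def __init__(self):
        self.schemas = {}  # knowledge structures, indexed by topic

    def encode(self, raw_info, topic):
        """Convert raw information into a (topic, fact) structure."""
        return (topic, raw_info.strip().lower())

    def store(self, encoded):
        """Attach an encoded fact to its schema, creating it if needed."""
        topic, fact = encoded
        self.schemas.setdefault(topic, []).append(fact)

    def retrieve(self, topic):
        """Recall everything attached to an existing schema."""
        return self.schemas.get(topic, [])

ltm = LongTermMemory()
ltm.store(ltm.encode("Paris is the capital of France", "geography"))
print(ltm.retrieve("geography"))  # → ['paris is the capital of france']
```

Note how retrieval depends on the schema existing: this mirrors the instructional-design point above, that activating an existing structure is what makes new information stick.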
IMPLICATION: Lesson Design, Memory, Practice
You've heard the expression “practice makes perfect” a million times, and you've probably read Malcolm Gladwell's popular “10,000 hours” theory. But how does practice actually affect the brain?
Learning Rewires Our Brains
When we learn a new skill, whether it’s programming in Ruby on Rails, providing customer support over the phone, playing chess, or doing a cartwheel, we're changing how our brain is wired on a deep level. Science has shown us that the brain is incredibly plastic, meaning it does not “harden” at age 25 and stay solid for the rest of our lives. While certain things, especially language, are more easily learned by children than adults, we have plenty of evidence that even older adults can see real transformations in their neurocircuitry.
But how does that really work? Well, in order to perform any kind of task, we have to activate various portions of our brain. We've talked about this before in the context of language learning, experiencing happiness, and exercising and food. Our brains coordinate a complex set of actions involving motor function, visual and audio processing, verbal language skills, and more. At first, the new skill might feel stiff and awkward. But as we practice, it gets smoother and feels more natural and comfortable. What practice is actually doing is helping the brain optimize for this set of coordinated activities, through a process called myelination.
Practice Makes Myelin, So Practice Carefully
Understanding the role of myelin means understanding not only why the quantity of practice matters for improving a skill (it takes repetition of the same nerve impulses, again and again, to activate the glial cells that myelinate axons), but also why the quality of practice matters. Just as the science of creativity emphasizes idle time rather than rushing through one task after another, practicing with a focus on quality is equally important.
IMPLICATION: Lesson Design, Memory, Practice
A good working memory is perhaps the brain's most important system when it comes to learning a new language. But it appears that working memory is first and foremost determined by our genes. Whether you struggle to learn a new language, or find it relatively easy, may be largely determined by "nature." That's the conclusion of researchers from the Norwegian University of Science and Technology (NTNU), who have studied language skills in Norwegian elementary school students.
Tested on ten-year-old students
Mila Vulchanova, a professor at NTNU's Department of Modern Foreign Languages, led a study of approximately one hundred ten-year-old elementary school students from Norway. Her research suggests that a good working memory is a decisive factor in developing good language skills and competency.
"Our results show a clear statistical correlation between a high level of language competence and a good working memory in the students we tested," she says.
While Vulchanova is a linguist, her team included colleagues and students from the university's Department of Psychology and the Department of Scandinavian Studies and Comparative Literature to help with the research.
Brain training can help
When we learn a new language, information that is stored in the brain's memory storage space must be constantly maintained. The brain does this by taking in new linguistic information in the form of new words and auditory strings, and then integrating this with information that is already stored in the "mental lexicon."
This might not sound like good news for people who struggle with their working memory, but Vulchanova says not to give up hope.
"It is possible to train the working memory system, but it is not easy -- especially since the capacity of our working memory is inherited. Mind exercises such as word games or practicing saying numbers in the opposite direction are useful and simple ways to train your working memory," she says.
IMPLICATION: Working Memory