The fields of psychology and cognitive neuroscience have had some rough sledding in recent years.
"Statistical power is essentially the probability that a study will detect an effect of a given size if the effect is really there. For a given significance threshold, it depends on two things: the sample size (the number of people in a study, for example) and the effect size (such as a difference in brain volume between healthy people and Alzheimer’s patients). The more people in the study and the bigger the size of the effect, the higher the statistical power.
Low statistical power is bad news. Underpowered studies are more likely to miss genuine effects, and as a group they’re more likely to include a higher proportion of false positives — that is, effects that reach statistical significance even though they are not real.
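The link between low power and a larger share of false positives among published findings can be made concrete with positive predictive value (PPV): the fraction of statistically significant results that reflect real effects. The sketch below is illustrative and not from the article; the prior probability that a tested effect is real is an assumed parameter.

```python
def ppv(power, alpha=0.05, prior=0.2):
    """Positive predictive value of a significant result.

    power: probability of detecting a real effect.
    alpha: significance threshold (false-positive rate for null effects).
    prior: assumed prior probability that a tested effect is real
           (0.2 is a purely illustrative choice).
    """
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects crossing threshold
    return true_positives / (true_positives + false_positives)
```

With these numbers, a field running studies at 80 percent power would see a PPV of 0.8, while the same field at 20 percent power would see a PPV of only 0.5: half of its significant findings would be false positives.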
Many researchers consider a statistical power of 80 percent to be a desirable goal in designing a study. At that level, if an effect of a particular size were genuine, the study would detect it 80 percent of the time.
But roughly half of the neuroscience studies Munafò and colleagues included in their analysis had a statistical power below 20 percent. Those studies would fail to detect a genuine effect at least 80 percent of the time."
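As a rough illustration of how power scales with sample size and effect size, here is a minimal sketch using a normal approximation to the two-sided, two-sample t-test. The function and the numbers in the usage note are illustrative assumptions, not figures from Munafò's analysis.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    d: standardized effect size (Cohen's d) assumed under the alternative.
    n_per_group: number of participants in each group.
    alpha: significance threshold.
    Uses a normal approximation, so values are slightly optimistic
    for small samples compared with an exact t-test calculation.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality under the alternative
    # Probability the test statistic lands beyond either critical value
    return (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)
```

For a medium effect (d = 0.5), about 64 participants per group are needed to reach the conventional 80 percent target, while 10 per group yields power of roughly 20 percent, the territory the article describes.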
"[Munafò] believes neuroscientists can take a cue from researchers in genetics and other fields who’ve combated problems with underpowered studies by creating ways for scientists to pool their data. The OpenfMRI project led by Poldrack is one example of an effort to do this in neuroscience.
Giving scientists an incentive and making it easier to replicate each other’s findings — generally considered a distinctly unglamorous pursuit — is another approach to increasing the collective statistical power of a body of research, Munafò and colleagues suggest. Two efforts to do this in psychology, the Open Science Framework and the related Reproducibility Project, were launched recently by Munafò’s co-author Brian Nosek of the University of Virginia."