Putting psychological research to the test with the Reproducibility Project
An ambitious new project is attempting to replicate every single study published in 2008 in three leading academic psychology journals.

 

'Calculations of the average statistical power of published psychology experiments hover at around 50%. This means that conducting an average psychology experiment is roughly equivalent to flipping a coin, in terms of whether you get a statistically significant result or not.

 

Many statistically non-significant results are therefore not good evidence of “no effect”, and many statistically significant results that get published are false positives...'
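
To make the coin-flip analogy concrete, here is a minimal simulation sketch (not from the article; the effect size, per-group sample size and significance level are illustrative assumptions chosen so that power lands near 50%). It repeatedly runs a two-sample t-test on data with a real underlying effect and counts how often the result reaches significance.

    # Simulate many two-group experiments with a real effect, at a sample size
    # where statistical power is roughly 50%, and count how often the result
    # comes out statistically significant. Effect size, n and alpha are
    # illustrative assumptions, not figures from the article.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    effect = 0.5        # assumed true effect size (Cohen's d)
    n = 32              # per-group sample size; gives roughly 50% power for d = 0.5
    alpha = 0.05
    trials = 10_000

    significant = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(effect, 1.0, n)
        _, p = stats.ttest_ind(treatment, control)
        if p < alpha:
            significant += 1

    print(f"Proportion significant: {significant / trials:.2f}")  # roughly 0.5

Even though every simulated experiment studies a genuine effect, only about half of them produce a statistically significant result, which is exactly the coin-flip situation described above.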

 

'...In 2005, John P. Ioannidis made headlines when he claimed that up to 90% of published medical findings may be false. Ioannidis described conditions of small sample sizes, small effect sizes, publication bias, pressure to publish and flexible stopping rules: all the problems we identify above. His quantitative conclusions about error rates and false positives were based on simulations, not “real” data.'
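
The arithmetic behind claims of this kind is a positive-predictive-value calculation: the proportion of significant findings that reflect true effects depends on the prior odds that a tested hypothesis is true, the power of the study, and the significance threshold. The sketch below uses assumed, illustrative inputs rather than figures from Ioannidis's paper.

    # Back-of-the-envelope positive predictive value (PPV) of a significant
    # finding. The inputs are assumed for illustration only.
    def positive_predictive_value(prior, power, alpha):
        """Probability that a statistically significant finding reflects a true effect."""
        true_positives = power * prior          # true effects that get detected
        false_positives = alpha * (1 - prior)   # null effects wrongly flagged as significant
        return true_positives / (true_positives + false_positives)

    # If 1 in 10 tested hypotheses is true, power is 50% and alpha is 0.05,
    # only about half of "significant" findings correspond to real effects.
    print(positive_predictive_value(prior=0.1, power=0.5, alpha=0.05))  # ~0.53

Under these assumed numbers, nearly half of published positive results would be false, and lowering the prior or the power pushes that share higher still.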

 

'Unfortunately, looking at real data is just as disheartening. Over the past decade, a group of researchers attempted to replicate 53 “landmark” cancer studies. They were interested in how many would again produce results deemed strong enough to drive a drug-development program (their definition of reproducibility). Of those 53 studies, the results of only six could be robustly reproduced.'