Software experiment raises prospect of extra peer review.
When ecologist Carl Boettiger wrote a blog post in June calling for more stringent peer review of the scientific software behind research papers, he hardly expected to stir up controversy. But in 54 comments on the post, researchers have debated how detailed such reviews should be; one said that it was a “trifle arrogant” of Boettiger, of the University of California, Santa Cruz, to insist that computer code meet his stringent standards before publication.
Now an offshoot of the Internet non-profit organization Mozilla has entered the debate, aiming to discover whether a review process could improve the quality of researcher-built software, which is now used in fields ranging from ecology and biology to social science. In an experiment run by the Mozilla Science Lab, software engineers have reviewed selected pieces of code from published papers in computational biology. “Scientific code does not have that comprehensive, off-the-shelf nature that we want to be associated with the way science is published and presented, and this is our attempt to poke at that issue,” says Mozilla Science Lab director Kaitlin Thaney.