On the Peer Reviewers’ Openness Initiative: Why should I care?
Science is the driving force behind what we consider reliable knowledge, and it usually works like this: a scientist has a theory, formulates a hypothesis derived from that theory, designs a study – a controlled experiment or a series of systematic observations – to test the hypothesis, and, in the end, draws a conclusion about whether the study supported the initial theory. As more scientists become interested in studying the same phenomenon, this process becomes a cycle: the initial experiment is replicated (or closely repeated), and modifications of existing theories are proposed that call for experiments of their own.
This cyclical process has led to the accumulation of knowledge that we now rely upon. A major role in it is played by the trust that scientists have in the validity and truthfulness of each other’s work. Trust allows one to pick up research at the edge of the current knowledge base and work towards extending its limits, on the critical assumption that it is a reliable platform on which to build one’s own research. A breakdown of trust, on the other hand, tends to feed societal polarization, with different social groups inhabiting intellectual discourses that do not seem reconcilable (Merton, 1972). Scandalous cases of research fraud, such as that of the Dutch psychologist Diederik Stapel, or, more curiously, instances in which scientists test the rigour of the publishing system by submitting (and having published) articles salted with nonsense (see the Sokal affair), become notorious among scientists and the wider public. Such cases threaten the trustworthiness of scientific findings, and of science itself, because they indicate a lack of solidarity in upholding the ethical standards of the scientific community.
In addition to such egregious – though rare – instances, there are various questionable research practices that undermine the perceived role of science, such as manipulating one’s statistical analyses in ways that produce significant results, deleting or adding participants so that the data fit one’s theory, or using small and unrepresentative samples. Scientists are motivated to engage in these practices by most journals’ tendency to publish only significant results, and by the pressure to publish in such journals in order to keep one’s position in academia.
The consequence is that reported results are sometimes not valid. For example, only 36% of the classical studies in social psychology were supported by a subsequent replication (Pashler & Wagenmakers, 2012). This suggests that much of what passes for the field’s knowledge base may consist of false positives. Given that the trustworthiness of the scientific endeavor is damaged by each such case, various practices are being introduced to protect trust in science by maximizing the quality of research that gets published in scientific journals.
A possible avenue to better research quality is increased transparency: researchers could grant the scientific community access to their data, analyses, and materials (e.g., Archives of Scientific Psychology). This would allow other researchers to assess the quality of their peers’ findings and reach an informed judgment about their validity. Even though most members of the scientific community acknowledge the benefits of greater transparency, the practice is still in its early stages. The reasons are various: the need to learn open practices, the delays they may introduce into the publication process, and the possibility that one’s competitors will use the shared materials without adhering to open research practices themselves.
To address this dilemma – the conflict between the possible benefits of open practices for research quality and their possible drawbacks for one’s professional activity – the Peer Reviewers’ Openness (PRO) Initiative was created (Morey et al., 2016). The initiative proposes changes to the peer review system so that it promotes open research practices.
Peer reviewers are scientists who assess the quality of their peers’ research papers before they are published in scientific journals. Their responsibilities include checking whether a study meets the field’s theoretical, experimental, and statistical standards. The PRO Initiative proposes that a standard of openness be added to the criteria peer reviewers assess. Researchers would then be expected either to grant open access to their data, materials, and analyses, or – importantly, in terms of promoting transparency as a kind of openness – to provide a justification for not doing so. Papers that meet neither condition would not receive a comprehensive review and would not be recommended for publication.
Signing on as a supporter of the PRO Initiative is a commitment to apply the openness standard in one’s work as a peer reviewer. Difficulties can be expected in implementing this standard – changing one’s research habits, educating younger generations of scientists in these matters, and addressing possible ethical issues around sharing data or materials – but these are outweighed by the benefits its implementation is expected to bring once openness becomes a well-established element of scientific conduct.
As consumers of science, whether encountered directly in the peer-reviewed literature or filtered through the media, we should be aware of the challenges the scientific community faces so that we can discern reliable findings from unreliable ones. The persuasive power that the scientific aegis confers on otherwise implausible claims is a widely used marketing tool, and many have fallen prey to it. In exploring what counts as knowledge, one has to realize that science is a social endeavor, not without flaws, and take its fruits with a grain of salt. Moreover, one has the epistemic responsibility of checking, whenever possible, the validity of what one learns. Open research practices acknowledge both the imperfections inherent in the system we have chosen for exploring the world and our responsibility and readiness to improve its mechanisms and to collaborate in creating a more reliable and comprehensive picture of reality. And that is why you should care.
References
Merton, R. K. (1972). Insiders and outsiders: A chapter in the sociology of knowledge. American Journal of Sociology, 78(1), 9-47.
Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., Lewandowsky, S., Morey, C. C., Newman, D. P., Schönbrodt, F. D., Vanpaemel, W., Wagenmakers, E. J., & Zwaan, R. A. (2016). The Peer Reviewers’ Openness Initiative: Incentivizing open research practices through peer review. Royal Society Open Science, 3(1), 150547.
Pashler, H., & Wagenmakers, E. J. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528-530.