Reflections of an open science convert 2: Some challenges to maintaining open research practices

(This is part 2 of a 3-part series. Part 1 can be found here and Part 3 here)

In part 1 of this blogpost series I wrote about why I adopted open science practices. However, while I find them beneficial and important, they are not always easy to maintain. In large part this is because I operate in an environment that does not perceive the need for change with the same urgency as I do. By ‘environment’ I mean my workplace in Groningen, as well as the larger community of researchers in experimental psychopathology/clinical psychology.

“the academic system rewards completed publications rather than preregistrations”

One of the challenges I encounter pertains to preregistration. The idea behind preregistration is to control flexibility in data analysis, also known as researcher degrees of freedom (Simmons, Nelson, & Simonsohn, 2011). Thus, the more detail, the more control. I tend to write detailed plans. Unfortunately, writing a detailed plan takes a lot of time. I choose to invest this time because I believe it will pay off in the later analysis and writing phases of a project. Even if it does not pay off, I think the investment is worthwhile because of the more general benefits (e.g., reduction of post-hoc bias). However, the academic system rewards completed publications, not preregistrations. For example, the criteria for tenure-track promotions at our university, or for membership of our Dutch-Flemish school for postgraduate training, depend (partly) on the number of completed publications on my list within a given time frame. Thus, in terms of academic success it is much better to invest time in end products than in detailed plans.

Perhaps it is because of this emphasis on completed publications in the academic system that some researchers decide to write preregistrations of, say, one page. Although this may seem efficient, I think a limited preregistration has even more disadvantages than not preregistering at all. For example, one of the papers I reviewed had a preregistered analysis plan that amounted to little more than “we will do ANOVAs”. The paper reported a decision to remove outliers that, judging from the open data, led to a different conclusion than when these outliers were left in. Of course, if the data are open, readers can check the robustness of a finding for themselves. But how many readers would take the time to actually redo the analyses of every paper they find interesting? A statement about preregistration in a publication might give a false sense of security if that preregistration was minimal: researcher degrees of freedom may seem more under control than they actually are.

“I have not yet seen many manuscripts come back with an appropriate open materials/data statement”

A second challenge I encounter is persisting in my commitment to the PRO (Peer Reviewers’ Openness) initiative. After accepting a review request, I first check whether the to-be-reviewed manuscript contains a statement about open data and materials. Usually I do not find such a statement – my impression is that most manuscripts in my area do not have one (yet). I then ask the editor to request it from the authors. The editors’ reactions vary. Some are sympathetic and take action. Mostly, however, they are reluctant to write to the authors before the reviews are in. In those cases, I offer a minimal review in which I explain the PRO concept and promise to provide a more comprehensive evaluation of a revised manuscript containing an open materials/data statement. I emphasize that providing such a statement in the manuscript does not mean that the data should be open at all costs: it may well be that privacy concerns preclude making the data public, and stating as much is valid too.

Sadly, I have not yet seen many manuscripts come back with an appropriate open materials/data statement. (One revised manuscript suggested that the authors did not understand that “open” means available from a public repository: a statement was added that readers can request the data from the author.) Usually, a manuscript is rejected based on the comprehensive reviews of the other reviewers. I then find myself wondering about the net effect of my individual actions – the authors will most likely find my review obnoxious. Will they make their materials/data open when they submit the rejected manuscript elsewhere? Or will they do so in future papers? Perhaps things will become easier if openness becomes more mainstream and more reviewers in my research area sign the PRO initiative. In that sense, it is a positive development that more and more journals explicitly encourage open materials/data. However, this encouragement is often restricted to a non-binding guideline in the instructions for authors. It would help if all journals actually enforced the inclusion of an open materials/data statement in submitted manuscripts.

“not all research will be suitable to be submitted as a Registered Report”

Indeed, journal publication policies may help boost the reproducibility and replicability of the research in our field. Journals can also help reduce publication bias, that is, the underrepresentation of null findings in published articles, by offering Registered Reports (RRs). This publication format allows the in-principle acceptance of a paper, based on a sound rationale and method, before any data are collected. Once the data are in and analyzed, a second review procedure follows. The crucial feature of RRs is that publication does not depend on the outcome of the study. All in all, this format tackles many problems: it eliminates post-hoc bias in analyses, its open character increases transparency, and the enhanced probability of publishing null results counteracts bias in the literature as a whole. However, not all research will be suitable for submission as an RR. In addition, it is less clear how RRs will solve problems associated with how researchers are rewarded by their universities. In part 3 of this blogpost series, I will write about this final and particularly persistent challenge to open science practices.

References:

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. doi:10.1177/0956797611417632

Note:

Image by Finn Terman Frederiksen, licensed under CC BY 2.0

Ineke Wessel (Twitter: @InekeWessel) received her PhD degree from Maastricht University. She studies (emotional) autobiographical memory. Her research interests include the involvement of memory in the origins and maintenance of psychopathology and the malleability of emotional memories themselves, including false/recovered memories. Her work applies to clinical psychology (e.g., memory processes in posttraumatic stress disorder) as well as forensic psychology (eyewitness memory). Relatively recently she became fascinated with the question of what the current replication crisis in psychology may mean for clinical psychology.


Websites

http://www.rug.nl/staff/j.p.wessel/
https://orcid.org/0000-0001-6312-376X

