What we can learn from the ‘promise and pitfalls of preregistration’ meeting

Dr Mandy Wigdorowitz, Open Research Community Manager, Cambridge University Libraries

The ‘Promise and pitfalls of preregistration’ meeting was held at the Royal Society in March 2024. It was organised to address the utility of preregistration and to initiate an interdisciplinary dialogue about its epistemic and pragmatic aims. The goal of the meeting was to explore the limitations associated with preregistration and to set out a practical way to guide future research so that it can make the most of preregistration’s implementation.

Preregistration is the practice of publicly declaring a study’s hypotheses, methods, and analyses before the study is conducted. Researchers are encouraged to be as specific as possible when writing preregistration plans, detailing every aspect of the methodology and analyses, including, for instance, the study design, sample size, the procedure for dealing with outliers, blinding and manipulation of conditions, and how multiple analyses will be controlled for. By doing so, researchers commit to a time-stamped study plan, which reduces the flexibility in analysis and interpretation that can lead to biased results. Preregistration is a community-led response to the replication crisis and aims to mitigate questionable research practices (QRPs) that have come to light in recent years, including HARKing (Hypothesising After Results are Known), p-hacking (the inappropriate manipulation of data analysis to enable a favoured result to be presented as statistically significant), and publication bias (the preferential publication of statistically significant or positive findings over null and/or unexpected findings) (Simmons et al., 2011; Stefan & Schönbrodt, 2023).

The meeting brought together scholars and publishers from a range of disciplines and institutions to discuss whether preregistration has indeed lived up to these aims and whether and to what extent it has solved the problems it was envisioned to address.

It became clear that the problems associated with QRPs have not simply disappeared with the uptake of preregistration. From a meta-research perspective, the success of preregistration appears to depend largely on discipline and legal requirements: some fields mandate and have normalised it (e.g., clinical trial registration in biomedical research), others strongly encourage and sometimes require it (e.g., psychological science), and others have no expectations about its use (e.g., economics). The effectiveness of preregistration was shown to be linked to these dependencies, but also to the quality and detail of the preregistration plan itself. Researchers are the arbiters of their research choices, and if they choose to write vague or ambiguous preregistration plans, the problems that preregistration is assumed to address will inevitably persist.

Various preregistration templates exist (such as those on the Open Science Framework, OSF), and some incentives are recognised, such as the preregistration badges awarded by some journals, making preregistration a systematic and straightforward exercise. In practice, however, sufficient information is not always provided, and even where preregistered plans are detailed, they are not always followed, for various pragmatic or other (not always nefarious) reasons. As such, the research community is cautioned not to assume that preregistration equates to better or more trustworthy research. Rather, the preregistration plan needs to be critically reviewed both as a standalone document and in conjunction with the published study. This is important because preregistration plans deposited in repositories (e.g., the OSF or the National Library of Medicine’s clinical trials registry) are seldom evaluated in their own right or against their corresponding research articles. Note that this is unlike registered reports, a type of journal article in which a study’s protocol is peer reviewed before data are collected and, if reviewed favourably, is given an in-principle acceptance regardless of the study outcomes.

Other discussions centred on the utility of preregistration in exploratory versus confirmatory research, whether preregistration can improve our theories, and how conducting multiple, slightly varied analyses and selecting the most desired outcome (also referred to as the ‘garden of forking paths’) affects the claims we make.

The overall sentiment from the meeting was that while preregistration does not solve all the issues arising from QRPs, it ultimately leads to greater transparency of the research process, accountability on the part of the researchers conducting the research, and deeper engagement with one’s own research prior to any data collection or analysis.

Since attending the meeting, I have taken away valuable insights that have prompted me to reflect critically on my own research choices, and, from a practical perspective, I have downloaded the OSF preregistration template and am documenting the plans for a research project.

Given the strides that have been taken toward improving the transparency, credibility and reproducibility of research, researchers at Cambridge need to consider whether preregistration plans should be included as another type of output that can be deposited in the institutional repository, Apollo. We have recently added Methods and preprints as output types, which has broadened the options for sharing and aligns with open research practices. Including preregistrations could be a valuable and timely addition.

References

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632

Stefan, A. M., & Schönbrodt, F. D. (2023). Big little lies: a compendium and simulation of p-hacking strategies. Royal Society Open Science, 10(2), 220346. https://doi.org/10.1098/rsos.220346