What Is a Registered Report and Why Should You Be Writing One for JEPS?

by Vin Arceneaux, editor, Journal of Experimental Political Science

Over a year ago, the Journal of Experimental Political Science joined hundreds of other journals in the psychological and biological sciences in accepting registered reports for submission. A registered report is a form of results-blind review. Researchers write a paper that, like a traditional submission, states a research question, reviews the literature, offers theory and hypotheses, details an experiment (or set of experiments) designed to test the hypotheses, and explains and justifies the statistical approach to analyzing the data. The only difference from a traditional paper is that the researchers stop there. Rather than going off and conducting the experiment, they submit the paper for review.

A registered report at JEPS undergoes a two-stage review process. In the first stage, reviewers assess whether the paper asks an important question, adequately reviews the literature, offers a strong theoretical explanation and sound hypotheses, and proposes an experimental design adequate for testing those hypotheses. As with most papers, even if reviewers are mostly pleased, some revisions will likely be in order. Registered reports that clear this stage are “accepted-in-principle,” and the author(s) are given time to conduct the study, write up the results, and resubmit the completed manuscript for the second stage of the review process. At this stage, reviewers are asked to assess whether the authors did what they said they were going to do in the registered report. If they did (or if they can justify any departures from their pre-analysis plan), the paper is formally accepted for publication. As long as the pre-analysis plan was followed, the paper is accepted for publication irrespective of the empirical results.

As Chris Chambers, Brian Nosek, and other proponents of the Open Science movement have argued elsewhere (e.g., Chambers et al. 2014; Nosek et al. 2015, 2018), registered reports are an important step towards making scientific findings more credible and reproducible. The traditional review process often leads editors and reviewers to place too much importance on the novelty and cleanness of a paper’s results (Tell it like it is 2020). Consequently, the traditional review process has produced a form of publication bias in which equally important null findings fail to be published in high-quality outlets (Franco, Malhotra, and Simonovits 2014), which, in turn, has encouraged many researchers to engage in practices that undermine the scientific enterprise (John et al. 2012). For example, p-values can only be taken at face value if they come from statistical models that were constructed before the results were observed. If a researcher incrementally adjusts a statistical model in an effort to lower the p-value for some parameter of interest (a.k.a. “p-hacking”), then the p-value is rendered meaningless. Likewise, if a researcher runs several models and then engages in hypothesizing after the results are known (a.k.a. “HARKing”), the p-value is equally meaningless. Registered reports make clear that the proposed statistical analyses were selected before observing the data, which renders the p-values interpretable. I should note that p-hacking and HARKing can happen even when researchers have good intentions, especially when reviewers in the traditional review process ask for additional analyses and offer advice on “reframing” the paper.
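To see concretely what specification shopping does to a p-value, consider the following simulation, a minimal sketch of my own in Python (using NumPy and SciPy), not an analysis drawn from any of the studies cited above. Every simulated experiment is a true null: the treatment affects nothing. An analyst who committed to a single outcome in advance rejects at roughly the nominal 5% rate; an analyst who quietly tests five outcomes and reports the smallest p-value rejects far more often.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def one_null_study(n=200, n_outcomes=5):
    """Simulate a null experiment: the treatment has NO effect on any outcome."""
    treat = rng.integers(0, 2, size=n)             # random assignment to two arms
    outcomes = rng.normal(size=(n, n_outcomes))    # outcomes unrelated to treatment
    return [stats.ttest_ind(outcomes[treat == 1, j],
                            outcomes[treat == 0, j]).pvalue
            for j in range(n_outcomes)]

n_sims = 2000
preregistered = 0  # analyst committed to outcome 0 before seeing the data
hacked = 0         # analyst tests all outcomes and reports the smallest p-value

for _ in range(n_sims):
    pvals = one_null_study()
    preregistered += pvals[0] < 0.05
    hacked += min(pvals) < 0.05

print(f"Preregistered false-positive rate: {preregistered / n_sims:.3f}")  # ~0.05
print(f"P-hacked false-positive rate:      {hacked / n_sims:.3f}")         # ~0.23
```

With five independent outcomes, the chance that at least one clears p < 0.05 under the null is 1 - 0.95^5 ≈ 0.23, more than four times the nominal rate, and the same logic applies to shopping among covariate adjustments, subgroups, or model specifications.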

I have heard several objections to registered reports. One objection is the worry that they kill researchers’ ability to explore their data and discover unexpected results. Quite to the contrary: researchers remain free to explore their data and to discover unexpected results. Registered reports simply make clear which results were predicted and which were discovered in the process of exploration. Making this distinction is vital for making science both more credible and more robust.

Another concern that I’ve heard, especially in connection with JEPS accepting registered reports, is that they will produce null findings and ambiguous results, on average, which will lower their impact. I am not very concerned about this possibility, for two reasons. First, registered reports go through the review process and are vetted on the basis of their importance before the results are observed. If anything, this places a greater burden on registered reports to feature questions that the research community finds important without the benefit of offering reviewers an arresting finding. Second, observational research suggests that registered reports receive more citations and have greater impact than comparable traditional articles (Hummer et al. 2019).

I hope I have convinced you, if you weren’t already, to write registered reports. The past few years of receiving them at JEPS, as well as trying my hand at writing them, have convinced me that this is a skill we all need to develop. Although writing a registered report is similar to writing a traditional paper, one has to think through many more steps than many of us were trained to do. I have also found it invaluable to mock up the study design, simulate the data that would result, and go through the act of analyzing the simulated data to diagnose errors in my experimental design, measurement approach, and often my thought process. I should have been doing this all along, no doubt, but a registered report provides both the structure and the incentive to do so.
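For readers who want to try this exercise, here is the kind of minimal sketch I have in mind, again in Python; the design, effect size, and sample size below are hypothetical placeholders of my own, not recommendations. The idea is to draw simulated data from the planned design, run exactly the analysis the pre-analysis plan specifies, and repeat to see how often the design detects the hypothesized effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def simulate_and_analyze(n=500, effect=0.2, sd=1.0):
    """Draw one fake data set from the planned design, then run the
    pre-specified analysis exactly as the pre-analysis plan describes."""
    treat = rng.integers(0, 2, size=n)                       # random assignment
    outcome = effect * treat + rng.normal(scale=sd, size=n)  # hypothesized effect
    return stats.ttest_ind(outcome[treat == 1], outcome[treat == 0]).pvalue

# Estimated power: the share of simulated studies in which the
# pre-specified test detects the hypothesized effect at the 5% level.
n_sims = 2000
power = np.mean([simulate_and_analyze() < 0.05 for _ in range(n_sims)])
print(f"Estimated power for a 0.2 SD effect with n=500: {power:.2f}")  # roughly 0.6
```

Even a toy run like this earns its keep: it would flag, before any subjects are recruited, that this hypothetical design detects a 0.2 standard-deviation effect only about 60% of the time, a problem far cheaper to catch in simulation than in the field.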

Because registered reports are not the norm (yet) in political science, I have created a FAQ page on the JEPS website (https://tinyurl.com/qtdm299) to aid authors in putting one together. I want to thank the many scholars who helped me vet these FAQs, including the JEPS editorial team. I am sure that we will edit and add to these from time to time as we learn more about registered reports in practice.

References

Chambers, Christopher D., Eva Feredoes, Suresh D. Muthukumaraswamy, and Peter J. Etchells. 2014. “Instead of ‘Playing the Game’ It Is Time to Change the Rules: Registered Reports at AIMS Neuroscience and Beyond.” AIMS Neuroscience 1(1): 4–17.

Franco, A., N. Malhotra, and G. Simonovits. 2014. “Publication Bias in the Social Sciences: Unlocking the File Drawer.” Science 345(6203): 1502–1505.

Hummer, L. T., F. Singleton Thorn, B. A. Nosek, and T. M. Errington. 2019. “Evaluating Registered Reports: A Naturalistic Comparative Study of Article Impact.” Preprint, Open Science Framework. https://doi.org/10.31219/osf.io/5y8w7

John, L. K., G. Loewenstein, and D. Prelec. 2012. “Measuring the Prevalence of Questionable Research Practices with Incentives for Truth Telling.” Psychological Science 23(5): 524–532.

Nosek, B. A., G. Alter, G. C. Banks, D. Borsboom, S. D. Bowman, S. J. Breckler, S. Buck, C. D. Chambers, G. Chin, G. Christensen, M. Contestabile, et al. 2015. “Promoting an Open Research Culture.” Science 348(6242): 1422–1425.

Nosek, Brian A, Charles R Ebersole, Alexander C DeHaven, and David T Mellor. 2018. “The Preregistration Revolution.” Proceedings of the National Academy of Sciences 115(11): 2600–2606.

“Tell it like it is.” 2020. Nature Human Behaviour 4(1). https://doi.org/10.1038/s41562-020-0818-9