Winners: Thad Dunning, UC Berkeley; Guy Grossman, University of Pennsylvania; Macartan Humphreys, Columbia University and WZB Berlin Social Science Center; Susan D. Hyde, UC Berkeley; Craig McIntosh, UC San Diego; Gareth Nellis, UC San Diego
Title of the winning book: “Information, Accountability, and Cumulative Learning: Lessons from Metaketa I”
Publishing information: Cambridge University Press, 2019
Committee Members: Jaime Settle (Chair), College of William & Mary; Kevin Esterling, UC Riverside; Laura Paler, University of Pittsburgh
Giving the section’s best book award to Dunning, Grossman, Humphreys, Hyde, McIntosh, and Nellis is an easy decision. The book is a landmark contribution to the methodology of experimental research and provides a new template for cumulative learning across RCTs. Although classified as an edited volume, it is very different from the usual edited volume that staples together related papers. Instead, it is a comprehensive report of the results of the EGAP Metaketa I initiative, which involved 31 investigators spanning 20 universities and included 10 projects that shared common design features developed through the collaboration. Substantively, Metaketa I evaluated the impact of information provision on voting behavior in developing democracies, a topic of wide interest across subfields of political science. More relevant to the experiments section are the methodological contributions for deploying and analyzing separate RCTs in a way that promotes cumulative learning.

Two main motivations drove the authors to undertake this massive project. First, although political science has become adept at using RCTs to ensure the internal validity of studies, internal validity by itself does not advance the accumulation of knowledge in a field: an internally valid finding in one setting (i.e., one place and time) does not tell others how the treatment effect would generalize to other contexts. Second, political science researchers are generally incentivized to conduct disparate and novel studies, which undermines the comparability of studies across contexts; so even in the rare circumstances where multiple RCTs exist on a topic in different contexts, there is no way to aggregate or compare them to facilitate generalization. Metaketa I overcomes these limitations by employing a comprehensive approach to research design.
In the design, each individual study met now widely accepted best practices for ensuring the credibility of findings, combining RCTs for internal validity with research-transparency practices such as a comprehensive pre-analysis plan, open data and code, and third-party replication, along with a strict requirement that all campus IRB committees approve all elements of the project. These best practices, however, are not new to political science. What is novel, and what makes the book worthy of our award, is its innovation in developing and implementing a comprehensive design that ensures comparability across studies. In particular, the authors collaborated to develop a “common arm” intervention of information provision that was implemented as identically as possible across all 10 contexts, along with comparable survey outcome measures. The author team is extremely thoughtful in developing standards for evaluating what counts as “comparable” in interventions and outcome measures implemented in diverse contexts and cultures spanning Africa, Asia, and Latin America; that is, they take care to ensure the construct validity of the interventions and outcomes across the different contexts. With this basis of comparability across the individual studies, the project demonstrates how to investigate external validity and establish which interventions do or do not generalize to different contexts. Finally, the authors aggregate the results of the individual studies using a planned meta-analysis, harnessing the power of the individual studies to answer the research questions about the role of information provision in democratic practice. Although the main results are null findings, the null here is meaningful in that it helps inform funders and agencies about what kinds of practices might not be worth supporting.
Overall, the book is a monumental contribution that advances the agenda of the “credibility revolution” beyond one-off, discrete studies and shows the way forward for social science to harness the power of RCTs to accumulate generalizable knowledge.