Subject Surprises

James N. Druckman, Northwestern University
for Lessons Learned the Hard Way in The Experimental Political Scientist, Spring 2021

My first experience implementing an experiment involved asking subjects to make strategic decisions in what was akin to an economic game. I presented the instructions and asked if anyone had questions. No one did. They then proceeded to play the games, and remarkably, everything went so smoothly that their behaviors nearly perfectly matched the theoretical predictions. I thought I had mastered implementation until the next day, when I learned that many of the subjects in that session were graduate students in economics (even though we had advertised for participants across the entire campus). Suffice it to say, it did not go quite as smoothly from there. I quickly learned to plan for the unexpected when it comes to subjects’ reactions.

Of course, what matters most when it comes to participants is that they engage with the manipulation and perceive it as intended. Recent work by Kane and Barabas (2019) and Mutz (n.d.) offers crucial guidance on manipulation checks. Here I briefly touch on a distinct topic – unpredictable reactions that impact basic data collection. I have found this often occurs when I purposively sample a targeted population. For example, I attempted to embed a survey experiment in an Election Day exit poll in Minnesota in 2002. Suffice it to say, Election Day in Minnesota means cold and snow. Given these conditions, I thought it might help to offer a $5 incentive to take the survey. Much to my surprise, however, the incentive turned people away! Apparently, social capital is so high in Minnesota that the response rate was higher when the student pollsters did not offer the $5. We stopped offering incentives once it became apparent everyone was declining the money and often subsequently declining to participate. The subjects seemed to feel they were contributing to the public good by taking the survey and helping students, but when money was involved, they felt insulted by our assumption that they would demand pay. The money seemed to make them suspicious!

By contrast, a similar embedded experiment in an Illinois exit poll did require incentives – there we paid $5 and virtually no one declined. I had anticipated this via a small pilot on a previous local Election Day, where I learned money was needed. Even so, we encountered a problem: extremely strict enforcement of election rules kept the pollsters at least 100 feet from the exit of the polling place. This made it difficult to find respondents, lowering our response rate and reducing our statistical power for the experiment.

Moving to different experimental topics, another surprising experience occurred in a study using vignettes to assess whether athletic trainers display racial bias in pain assessments of student-athletes. During the study, we referred to participants as “trainers.” Dozens responded that they stopped mid-study, feeling insulted that we had not used their official titles of “Athletic Trainer” or “Certified Athletic Trainer.” One respondent contacted my IRB to complain for an hour about the uselessness of my project. Undeterred, I continued to conduct work with those involved in sports. Yet again, I quickly learned that student-athletes and coaches demand clear confidentiality during these studies – when I asked for respondents’ institution in the first study I attempted, I received virtually no responses. My next attempt emphasized anonymity, noting that we would not ask for the school, and the response rate jumped by 10%.

In yet another project, I learned again that payment does not always help. In an effort to experimentally study the energy attitudes of scientists and members of Congress, we sent out surveys with $5 gift cards. Nearly 20% of the members of Congress returned the gift cards without doing the study, saying we had violated legal ethical guidelines (which, based on our understanding from talking to the Congressional Research Service, we had not). A final example comes from a recent audit experiment of college admissions counselors, in which we failed to anticipate some important behaviors. Specifically, we received more auto-responses than real responses, and distinguishing auto-responses from real responses required three individual coders combing through more than 1,000 e-mails. This turned out to be an unexpected drain on resources.

I am a strong advocate of finding targeted and interesting populations for experiments (e.g., Klar and Leeper 2019). Politics occurs in all kinds of settings, so moving beyond students, adults, and even probability samples to look at intriguing groups remains underexploited. Yet doing so requires anticipating how participants will respond to being in the study. My main advice here is three-fold: talk to members of the groups about the study before you start, anticipate the issues that may arise, and conduct pilots not just to test manipulations but also to assess response rates and responses in general. And then always be ready for surprises.

References

Kane, John V., and Jason Barabas. 2019. “No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments.” American Journal of Political Science 63: 234–249.

Klar, Samara, and Thomas J. Leeper. 2019. “Identities and Intersectionality: A Case for Purposive Sampling in Survey‐Experimental Research.” In Paul Lavrakas, Michael Traugott, Courtney Kennedy, Allyson Holbrook, Edith de Leeuw, and Brady West, eds., Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment. Hoboken, NJ: John Wiley & Sons, Inc.

Mutz, Diana C. N.d. “Improving Experimental Treatments in Political Science.” In James N. Druckman, and Donald P. Green, eds., Advances in Experimental Political Science. New York: Cambridge University Press.

Acknowledgments

I thank Nicolette Alayon and Jeremy Levy for helpful comments.