<p><strong>Subject Surprises</strong></p>

<p>James N. Druckman, Northwestern University<br>for <em>Lessons Learned the Hard Way</em> in <em>The Experimental Political Scientist</em>, Spring 2021</p>

<p>My first experience implementing an experiment involved asking subjects to make strategic decisions in what was akin to an economic game. I presented the instructions and asked if anyone had questions. No one did. They then proceeded to play the games, and, remarkably, everything went so smoothly that their behaviors nearly perfectly matched the theoretical predictions. I thought I had mastered implementation until the next day, when I learned that many of the subjects in that session were graduate students in economics (even though we had advertised for participants across the entire campus). Suffice it to say, it did not go quite as smoothly from there. I quickly learned to plan for the unexpected when it comes to subjects’ reactions.</p>

<p>Of course, what matters most when it comes to participants is that they engage with the manipulation and perceive it as intended. Recent work by Kane and Barabas (2019) and Mutz (n.d.) offers crucial guidance on manipulation checks. Here I briefly touch on a distinct topic: unpredictable reactions that impact basic data collection. I have found this often occurs when I purposively sample a targeted population. For example, I attempted to embed a survey experiment in an Election Day exit poll in Minnesota in 2002. Suffice it to say, Election Day in Minnesota means cold and snow.
Given these conditions, I thought it might help to offer a $5 incentive to take the survey. Much to my surprise, however, the incentive turned people away! Apparently, social capital is so high in Minnesota that the response rate was higher when the student pollsters did not offer the $5. We stopped offering incentives once it became apparent that everyone was declining the money and often subsequently declining to participate. The subjects seemed to feel they were contributing to the public good by taking the survey and helping students, but when money was involved, they felt insulted by our assumption that they would demand pay. The money seemed to make them suspicious!</p>

<p>By contrast, a similar embedded experiment in an Illinois exit poll did require incentives: here we paid $5 and virtually no one declined. I had anticipated this via a small pilot on a previous local Election Day, where I learned money was needed. Even so, we encountered a problem: extremely strict enforcement of election rules kept the pollsters at least 100 feet from the exit to the booths. This made it difficult to find respondents, lowering our response rate and reducing our statistical power for the experiment.</p>

<p>Moving to different experimental topics, another surprising experience occurred in a study using vignettes to assess whether athletic trainers display racial bias in pain assessments of student-athletes. During the study, we referred to participants as “trainers.” Dozens responded that they stopped mid-study, feeling insulted that we had not used their official titles of “Athletic Trainer” or “Certified Athletic Trainer.” One respondent contacted my IRB to complain for an hour about the uselessness of my project. Not deterred, I continued to conduct work with those involved in sports.
Yet again, I quickly learned that student-athletes and coaches demand clear confidentiality in these studies: when I asked for respondents’ institution in the first study I attempted, I received virtually no responses. My next attempt emphasized anonymity, noting that we would not ask for the school, and the response rate jumped by 10%.</p>

<p>In yet another project, I learned again that payment does not always help. In an effort to experimentally study the energy attitudes of scientists and members of Congress, we sent out surveys with $5 gift cards. Nearly 20% of the members of Congress returned the gift cards without doing the study, saying we had violated legal ethical guidelines (which, based on our understanding after talking to the Congressional Research Service, we had not). A final example comes from a recent audit experiment of college admissions counselors, in which we failed to anticipate some important behaviors. Specifically, we received more auto-responses than real responses, and distinguishing auto-responses from real responses required three individual coders combing through more than 1,000 e-mails. This turned out to be an unexpected drain on resources.</p>

<p>I am a strong advocate of finding targeted and interesting populations for experiments (e.g., Klar and Leeper 2019). Politics occurs in all kinds of settings, so moving beyond students, adults, and even probability samples to look at intriguing groups remains underexploited. Yet doing so requires anticipating how participants will respond to being in the study. My main advice here is three-fold: talk to members of the groups about the study before you start, anticipate the issues that may arise, and conduct pilots not just to test manipulations but also to assess response rates and responses in general. And then always be ready for surprises.</p>

<p><strong>References</strong></p>

<p>Kane, John V., and Jason Barabas. 2019.
“No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments.” <em>American Journal of Political Science</em> 63: 234-249.</p>

<p>Klar, Samara, and Thomas J. Leeper. 2019. “Identities and Intersectionality: A Case for Purposive Sampling in Survey-Experimental Research.” In Paul Lavrakas, Michael Traugott, Courtney Kennedy, Allyson Holbrook, Edith de Leeuw, and Brady West, eds., <a href="https://onlinelibrary.wiley.com/doi/book/10.1002/9781119083771"><em>Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment</em></a>. Hoboken, NJ: John Wiley &amp; Sons, Inc.</p>

<p>Mutz, Diana C. N.d. “Improving Experimental Treatments in Political Science.” In James N. Druckman and Donald P. Green, eds., <em>Advances in Experimental Political Science</em>. New York: Cambridge University Press.</p>

<p><strong>Acknowledgments</strong></p>

<p>I thank Nicolette Alayon and Jeremy Levy for helpful comments.</p>