Exit Poll Experiments in Local Elections

Cheryl Boudreau, UC Davis
for Lessons Learned the Hard Way in The Experimental Political Scientist, Spring 2021

At the beginning of my career, my research primarily used laboratory experiments with college undergraduates to study the effects of political information on decision making.  I was eager to examine the effects of information on actual voters in a real-world election.  Could political information help voters choose candidates whose policy views are similar to their own?  Local elections in San Francisco presented an opportunity to investigate this question because of a unique convention that facilitates the measurement of candidates’ policy positions.  Specifically, candidates for local offices regularly answer the questionnaires that political party organizations, newspapers, and interest groups distribute to assess their policy views.

Unfortunately, these questionnaires often use open-ended questions that allow candidates to obfuscate their views.  My co-authors and I had an idea:  If we could partner with an organization that would include the yes/no policy questions we developed on its questionnaire, we could reliably measure candidates’ local ideological positions.  If we could ask voters these same questions on a written exit poll survey, we could develop comparable measures of their ideological positions.  If we could randomly assign political information across the surveys, we could also examine whether information strengthens the relationship between voters’ policy views and those of the candidates they choose.   

At this point, you might be thinking that there are a lot of “ifs” in the preceding paragraph… We had the same thought at the time!  Ultimately, we were successful in collaborating with several organizations on candidate questionnaires and administering our exit poll experiments (Boudreau, Elmendorf, and MacKenzie 2015, 2018, 2019).  Nonetheless, there were some bumps along the way and lessons we definitely learned the hard way.

Lesson #1:  Persistence Pays Off when Finding a Research Partner

Our study hinged upon our ability to persuade an organization to include our yes/no policy questions on its candidate questionnaire.  This turned out to be more difficult than we imagined.  It took dozens of emails, follow-up emails, phone calls, and meetings before we finally found a few organizations that were willing to listen.  Of those that listened, one agreed to put a subset of our policy questions on its existing candidate questionnaire, and two others agreed to develop new questionnaires with us.

Lesson #2:  Be Willing to Work with (and for) your Research Partner

One of the organizations we partnered with requested data analyses and interviews about our research in return.  This involved a fair amount of work, but it was well worth it.  We also learned that the organization had different interests in the data than we did.  We spent hours estimating candidate ideal points, running regressions, and making fancy plots.  When we presented these results to its members, we received a mix of blank stares and head scratches.  They were confused, and in hindsight, I don’t blame them.  After talking past each other for the better part of an hour, we learned that the organization simply wanted us to calculate rates of agreement between candidates and voters.  We could have saved a lot of time by thinking about the project from the organization’s perspective and keeping things simple.
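For readers curious what that simpler calculation looks like, below is a minimal sketch in Python.  The data format and question IDs are hypothetical illustrations rather than our actual survey instruments:  each respondent is represented as a mapping from question IDs to yes/no answers, and the agreement rate is simply the share of shared questions answered identically.

    # Minimal sketch of the agreement-rate calculation the organization wanted.
    # The data structures and question IDs here are hypothetical, not our
    # actual survey format.

    def agreement_rate(candidate_answers, voter_answers):
        """Fraction of shared yes/no questions on which two respondents agree."""
        shared = [q for q in candidate_answers if q in voter_answers]
        if not shared:
            return None  # no overlapping questions to compare
        matches = sum(candidate_answers[q] == voter_answers[q] for q in shared)
        return matches / len(shared)

    # Hypothetical example:  one candidate, one voter, five policy questions.
    candidate = {"q1": "yes", "q2": "no", "q3": "yes", "q4": "no", "q5": "yes"}
    voter = {"q1": "yes", "q2": "yes", "q3": "yes", "q5": "no"}  # voter skipped q4

    print(agreement_rate(candidate, voter))  # 0.5: agrees on q1 and q3 of 4 shared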

Lesson #3:  The Real World is Messier than the Lab

At certain points, I found myself missing the control of the laboratory and the compliance of college undergraduates.  I learned that conducting experiments during a real-world election is challenging.  Candidates can refuse to answer the questionnaire or do so only after daily reminder emails.  Voters can get upset about survey questions, argue about their wording, or profess to know more about the issues than you do.  We even had to scrap an entire experiment at the last minute because the candidate of interest dropped out of the race due to a scandal.  My advice is to expect the unexpected and have back-up studies IRB-approved and ready to go. 

On a positive note, we learned that many voters are willing to sit and take a survey for academic research as they leave their polling places, especially when asked by polite undergraduates.  We were struck by the generosity of so many voters who took time to complete the survey, even using their cell phones as flashlights after dark.

Lesson #4:  Do Not Think that You Can Manage over 100 Undergraduate Research Assistants on Your Own

To implement our experiment, we stationed over 100 undergraduate research assistants at randomly selected polling places across the city.  Although the undergraduates had attended rigorous training sessions with us, nothing could have prepared them (or us!) for every contingency.  Some students encountered poll workers who (mistakenly) told them they were not allowed to be there.  Others faced angry voters who wanted to speak with the researchers.  Still others were unsure how to answer voters’ questions.  Some needed food; after all, the campus dining hall was not open at 5am when our students left to drive from Davis to San Francisco.  While we managed to get around the city to address these issues, it was stressful.

After the polls closed that night, my co-authors and I were left driving around the city to collect the tables and chairs we had placed at each polling place.  Miraculously, they were all still there (even at 11pm in the grittiest neighborhoods).  We celebrated the fact that we had not lost any equipment or, more importantly, any undergraduates!  However, in subsequent studies, we learned our lesson and enlisted graduate and law student supervisors to help advise, troubleshoot, and set up and tear down.