Taylor N. Carlson and Seth J. Hill
In a June 23, 2016 referendum, citizens of the United Kingdom voted 52 percent to 48 percent to leave the European Union. The result surprised pundits and journalists who, supported by contemporaneous opinion surveys, had been confident that Remain would win. The result surprised many voters, too. Anecdotes arose of voters who claimed they had wished to remain in the EU but voted Leave. Their electoral behavior depended upon the belief that their fellow citizens would vote to remain.
Five months later in the United States, pundits and journalists held a similar level of confidence, again supported by most polls, that Hillary Clinton would defeat Donald Trump in the contest for president. As in the UK, Trump’s victory led to surprise for journalists and voters and recrimination for pollsters. Anecdotes arose of Americans who acted on a belief that Clinton would win, famously including the Director of the Federal Bureau of Investigation.
In our work published in the Journal of Experimental Political Science, we present results from an experiment on beliefs about the politics of other citizens with three key results. First, we find that individuals hold reasonably accurate perceptions of other individuals' political choices, off by less than five percentage points on average. Second, we find that Americans learn almost as much from another's report of the Most Important Problem facing the nation as they do from another's reported party attachment. Third, we randomized whether our participants were paid monetary incentives to accurately report their beliefs about others. Those with monetary incentives were more accurate and learned more, but the magnitude of difference was not dramatic (see our Appendix Table A1).
Our research speaks to a long literature on how social networks affect political behavior. Polling information can provide valuable cues about what the public thinks overall, and these perceptions of the mass public can affect our individual behavior. Similarly, our perceptions of who our friends and family voted for or what they think about political issues can influence our own opinions and candidate preferences. But existing work has provided less evidence on how accurate these perceptions actually are.
In our paper, we introduce a new experimental design that allows us – and future researchers – to answer these and related questions. Current approaches to measuring second-order beliefs (perceptions of what others believe) often ask individuals to report their estimates of the percentage of the population that holds some belief. Other approaches ask individuals to directly report who they think specific people within their social networks voted for or which party they prefer. These measurement strategies can be cognitively taxing and do not allow for nuanced measures of uncertainty. In fact, recent work points out that some of these methods conflate inaccuracy with uncertainty. Our experimental approach addresses these concerns with three innovations: (1) respondents report their beliefs as probabilities, allowing for more nuanced measures of bias and uncertainty, (2) respondents are given micro-incentives for accurate reports to reduce expressive responding and shirking, and (3) respondents report their beliefs iteratively, allowing researchers to evaluate learning.
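The logic behind incentivizing probability reports can be sketched with a standard proper scoring rule. The snippet below is a hypothetical illustration using a quadratic (Brier-style) payout – the paper's exact payment scheme may differ – showing why a respondent maximizes expected earnings by reporting their true subjective probability:

```python
# Hypothetical sketch of incentive-compatible probability elicitation,
# assuming a quadratic (Brier-style) scoring rule; the paper's exact
# micro-incentive scheme may differ.

def quadratic_payout(report: float, outcome: int) -> float:
    """Payment for reporting probability `report` that outcome == 1.
    Ranges from 0 (maximally wrong) to 1 (maximally right)."""
    return 1.0 - (outcome - report) ** 2

def expected_payout(report: float, true_prob: float) -> float:
    """Expected payment if the event truly occurs with probability `true_prob`."""
    return (true_prob * quadratic_payout(report, 1)
            + (1 - true_prob) * quadratic_payout(report, 0))

# A participant who believes the matched ANES respondent voted for Trump
# with probability 0.7 maximizes expected payment by reporting exactly 0.7:
reports = [i / 100 for i in range(101)]
best = max(reports, key=lambda r: expected_payout(r, 0.7))
print(best)  # 0.7
```

Because the scoring rule is proper, shading one's report toward 0 or 1 (expressive responding) strictly lowers the expected payment.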
We illustrate our method with an application to measuring perceptions of how others voted in the 2016 presidential election. We randomly matched each participant in our study to four respondents from the ANES and asked them to report how likely they thought it was that each person voted for Donald Trump or Hillary Clinton in 2016. After reporting their belief on a scale illustrated below, participants repeated this task four more times, each time after being given an additional piece of information about the ANES respondent, such as their gender, race, income bracket, state of residence, party identification, or their report of the most important problem facing the nation.
Apart from the methodological innovation that we hope can be widely used to measure second-order beliefs, our application revealed several important substantive findings that address the questions raised above.
How accurate are our perceptions of how others voted in 2016? Our results reveal that individuals are more accurate than previous research might lead us to think. While operational definitions of accuracy can vary, we examine the correlation between the probability participants thought an ANES respondent with the presented characteristics would vote for Trump and the observed Trump vote share among ANES respondents who had those characteristics. We find a positive linear trend (Figure 2 in the paper).
What cues lead us to form these perceptions? We found that learning another's party identification was the most informative cue, leading to a 30-point jump toward the truth. Learning that someone was Black or Hispanic led to a 20-25 point improvement in accuracy. Also important was knowing what the respondent thought was the most important problem facing the country, which led to a 20-point improvement. In general, participants improved their accuracy with the presentation of information, but some cues were far more informative than others (Figure 3 in the paper). Future work could extend our framework to examine the influence of a new suite of cues, for example focusing on some of the non-political preferences that correlate with political views.
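One plausible way to operationalize these "jumps toward truth" – a sketch under our own assumptions, not necessarily the paper's exact estimator – is the percentage-point reduction in the distance between the reported probability and the true vote, measured before and after a cue is revealed:

```python
# Hypothetical sketch of a "movement toward truth" measure: how many
# percentage points a cue reduces the gap between a reported probability
# and the true outcome. An assumption for illustration, not the paper's
# exact specification.

def improvement(before: float, after: float, truth: float) -> float:
    """Percentage-point reduction in distance from truth after a cue."""
    return 100 * (abs(truth - before) - abs(truth - after))

# E.g., a cue that moves a belief from 0.50 to 0.80 about a person who in
# fact voted for Trump (truth = 1) is a 30-point jump toward the truth.
print(round(improvement(0.50, 0.80, 1.0), 1))  # 30.0
```

Negative values would indicate a cue that moved beliefs away from the truth.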
Finally, we explored sources of bias in beliefs. First, we find that participants overestimated Trump support by 2.3 points on average. This bias is notably smaller than what previous studies on second-order beliefs have uncovered. Next, looking at the mechanisms proposed by previous research, we find evidence of egocentric bias: Clinton voters had a 3.5 point bias toward Clinton and Trump voters had an 8.6 point bias toward Trump. Participants tended to assume others voted for their preferred candidate. As illustrated below, we also find evidence for Different Trait Bias: perceptions were less biased when participants had more in common with the person they were evaluating.