Matthew Nanes and Dotan Haim
“Should abortion be legal?” “Do you support the Taliban?” “Would you report anti-government insurgents to the police?” Social science researchers often want to know the answers to sensitive questions. Oftentimes, the impediment to collecting such information is not that people do not wish to reveal their answer, but that they are unwilling to link it with their identity. In our article, we demonstrate that simply allowing respondents to enter answers into tablets themselves rather than talking to an enumerator can substantially increase their willingness to answer sensitive questions. This approach provides an alternative to more complex survey techniques like list experiments and forced choice experiments that frequently confuse respondents and yield inaccurate information.
We wished to measure the willingness of individuals in the rural Philippines to report insurgent activity to the authorities. The area we studied hosts a long-running, low-intensity conflict between the Philippine Government and a communist rebel group called the New People’s Army (NPA). Given the importance of citizen-provided information to counterinsurgency operations, we wanted to know how likely citizens were to proactively report information to the authorities. Of course, due to the topic’s sensitivity, citizens may refuse to answer the question or give the answer they thought the enumerator wanted to hear.
To investigate the best way to get around this obstacle, we compared three different survey techniques during face-to-face surveys conducted by local enumerators. At the beginning of the survey, respondents were randomly assigned to answer a set of questions in one of three ways. One-third of respondents answered the questions verbally, and the enumerator input the responses into a tablet (“direct response”). For another one-third, the enumerator handed the tablet to the respondent, who input her own responses (“self-enumeration”); the respondent then advanced the screen before returning the tablet to the enumerator. The final third answered using a forced-choice experiment, a more complicated technique in which the respondent flips a coin and, depending on how it lands, either answers the question truthfully or gives a predetermined answer.
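The article does not spell out the exact coin rule, so the sketch below assumes one common forced-response design: heads forces a “yes,” tails means answer truthfully. The function and parameter names are hypothetical; the point is to show how the standard estimator recovers the true rate under full compliance, and how it breaks down when some respondents ignore the coin:

```python
import random

def simulate_forced_choice(true_rate, compliance, n=100_000, seed=0):
    """Simulate a forced-response design (assumed variant: heads -> forced
    "yes", tails -> truthful answer).

    `compliance` is the share of respondents who follow the coin protocol;
    as a simple assumption, non-compliers just answer truthfully.
    Returns the estimate a researcher would compute assuming full compliance.
    """
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        truthful_answer = rng.random() < true_rate
        # Compliant respondent who flips heads: forced "yes".
        if rng.random() < compliance and rng.random() < 0.5:
            yes += 1
        else:
            yes += truthful_answer
    p = yes / n
    # Under full compliance, p = 0.5 + 0.5*pi, so pi_hat = 2p - 1.
    return 2 * p - 1

# Full compliance: the estimator recovers the true rate (~0.5).
print(simulate_forced_choice(true_rate=0.5, compliance=1.0))
# Partial compliance: the estimate is badly biased downward.
print(simulate_forced_choice(true_rate=0.5, compliance=0.6))
```

This is only an illustration of the mechanism, not the authors' design: when respondents misunderstand or ignore the randomization step, the correction built into the estimator over- or under-adjusts, so even a non-sensitive quantity like high-school completion comes out wrong.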
We first asked each respondent a “placebo” question: whether they had completed high school. Not only is the question not sensitive, meaning respondents should answer willingly and accurately, but we could verify the accuracy of responses against population-level estimates of high school completion. We then asked each respondent the sensitive question, whether they would report the activities of an armed group to the authorities.
Our results were somewhat surprising. They indicate that there are significant benefits to self-enumeration and raise important questions about the effectiveness of more complicated measures like the forced-choice experiment. We first explored response rates. As we expected, practically every respondent willingly answered the question about high-school completion. About one-quarter of respondents refused to answer the sensitive question, and the rate of non-response varied depending on the way in which the question was asked. The percentage of respondents who refused to answer the question was 29.0% among those who answered verbally, 25.2% among those who self-recorded their answers, and 20.9% among those using the forced-choice experiment. Both the self-enumeration and forced-choice techniques significantly improved response rates.
Next, we looked at whether the techniques led respondents to answer the sensitive question more honestly. Of those who answered, expressed willingness to report insurgents was about 61% among those answering verbally or self-recording their answers, but only 36.5% among those using forced-choice. If we stopped there, we might conclude that far fewer people would actually report insurgents to the police than are willing to say so to an enumerator, and that the forced-choice experiment, by protecting respondents' anonymity, allows them to express their true preference.
However, answers to the placebo question reveal a problem with forced choice. Whereas 52.4% and 51.8% of respondents using verbal and self-enumeration, respectively, claimed to have finished high school, only 25.6% of those using forced-choice made that claim. Yet we know that about half of the adults in our study area completed high school. In other words, at least for the non-sensitive question, answers from respondents using forced-choice were wildly inaccurate. Our best guess is that the forced-choice experiment confused respondents and caused many not to follow the instructions, leading to inaccurate estimates.
Our primary takeaway is that researchers using surveys to measure sensitive topics should be more conservative in their use of complex techniques. Even with experienced and well-trained enumerators, surveys conducted in a calm and secure location, and no language barriers between enumerators and respondents, a large portion of respondents apparently misunderstood the instructions. Simply allowing respondents to self-enumerate their responses reduces the same threats to anonymity that complex techniques are designed to mitigate. We show that self-enumeration provides responses that are at least as accurate as verbal enumeration, while improving response rates on sensitive questions. Researchers and evaluators should seek out the simplest possible solution to meet their needs.