Abstract:
A previous DHS methodological report (MR24) examined the effects of interviewer characteristics on data quality in DHS surveys. That report examined whether variation in 25 indicators of data quality, across 15 DHS surveys, could be attributed to interviewers and their characteristics. According to MR24, interviewers who are older and better educated produce fewer problematic outcomes, while prior experience with a DHS survey or with other surveys is often associated with statistically significant effects, typically in the direction of better data quality. However, the results of MR24 did not account for interviewer assignments to sampling clusters, even though interviews are typically nested within a cross-classification of sampling clusters and interviewers, and they did not control for respondent characteristics.

As an extension of that effort, the current report uses multilevel models to estimate interviewer effects in DHS surveys while accounting for the structure of interviewer assignments and the characteristics of both respondents and interviewers. Based on data from 24 recent DHS surveys and more than 100 questions from the Woman’s Questionnaire in each survey, the report examines interviewer effects across countries and across question characteristics: length (longer versus shorter questions), sensitivity (questions on sensitive topics versus non-sensitive topics), social desirability (questions prone to social desirability bias versus those that are not), complexity or difficulty (complex or difficult questions versus those that are not), and question type (whether the information collected is factual or non-factual). Long questions, non-factual questions, and questions on complex or difficult topics were associated with larger interviewer effects than shorter questions, factual questions, and questions on less complex or difficult topics, and these differences were consistent across most surveys. The analysis in this report can be extended to additional questions and surveys in the future. Results from these analyses can help improve the quality of interviews and of the data collected by informing interviewer training before fieldwork and interviewer performance monitoring during fieldwork.
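For readers interested in the underlying specification, a minimal sketch of the kind of cross-classified multilevel model described above is the standard logistic formulation from the interviewer-effects literature; the notation here is illustrative and is not drawn from the report itself:

\[
\operatorname{logit}\Pr(y_{i(jk)} = 1) = \mathbf{x}_{i(jk)}'\boldsymbol{\beta} + u_j + v_k,
\qquad u_j \sim N(0, \sigma_u^2), \quad v_k \sim N(0, \sigma_v^2),
\]

where \(y_{i(jk)}\) is the response of respondent \(i\) interviewed by interviewer \(j\) in sampling cluster \(k\), \(\mathbf{x}_{i(jk)}\) holds respondent and interviewer characteristics, and \(u_j\) and \(v_k\) are crossed random effects for interviewers and clusters. Under this kind of specification, the interviewer effect for a given question is commonly summarized by the intraclass correlation \(\rho = \sigma_u^2 / (\sigma_u^2 + \sigma_v^2 + \pi^2/3)\), with \(\pi^2/3\) the assumed residual variance on the logistic scale.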