Presidential Campaign

Polling: Which one should you believe?

Political opinion polling is facing critical challenges. 

This past summer, many pre-election polls dramatically underestimated the level of support among British voters for leaving the European Union. “Brexit” was the latest in a series of embarrassments suffered by polling professionals over the last several years, following similar miscalculations in critical elections in Greece, Scotland, Israel, and in our own country’s 2014 midterm elections. These poorly predicted outcomes have prompted a lively debate among polling scientists.

Accurate survey results depend on representative samples, well-crafted questions, and the ability to reach participants who are willing to provide accurate answers. It is unsurprising that poll results can go very wrong in an era when most phone calls are diverted to voice mail, landline usage is in decline, spam clogs email accounts, and divisive elections split the population into bitterly opposing camps.

Human tendencies make polling all the more challenging. For instance, research shows people are more likely to be truthful about private topics, such as whom they plan to vote for, when they fill out a poll online in private. They are less likely to be as honest in a phone conversation with an interviewer who could potentially disapprove of their choice.

Internet polls may largely avoid this “social desirability bias,” but they also have their drawbacks. Many suffer from undesirable sampling methods that yield unrepresentative results, which cannot be reliably corrected with standard statistical techniques.

Another potential source of error in election polling is determining who is most likely to vote. Pollsters use various techniques, generally relying on a combination of past voting history and self-reported likelihood of voting in the current election, to select a subsample of “likely voters.”

While likely voter models allow researchers to report results among people who have the highest likelihood of voting, those models may exclude people who will register later in the season, potential first-time voters, or regular voters who choose to sit out this race.

A recent study documented that participants may be more inclined to respond to a survey when their candidate is doing well, which exaggerates predicted swings in voter preference. One can offset the effect by weighting with past votes: if too many voters who supported President Obama in 2012 end up in a poll after a very good day for Hillary Clinton, for instance, they are down-weighted and the Romney voters of four years ago are up-weighted.
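To make the mechanics concrete, here is a minimal sketch of that kind of past-vote weighting in Python. The target shares and field names are illustrative assumptions, not the Daybreak Poll’s actual weighting code.

```python
# A minimal sketch of past-vote weighting, assuming a simple
# post-stratification scheme; illustrative only.

# Target 2012 vote shares among reported categories
# (hypothetical values chosen for illustration).
TARGET_2012 = {"Obama": 0.51, "Romney": 0.47, "Other/none": 0.02}

def past_vote_weights(respondents):
    """Weight each respondent by the target share of their reported
    2012 vote divided by that group's share in the current sample."""
    n = len(respondents)
    sample_share = {}
    for r in respondents:
        sample_share[r["vote_2012"]] = sample_share.get(r["vote_2012"], 0) + 1 / n
    return [TARGET_2012[r["vote_2012"]] / sample_share[r["vote_2012"]]
            for r in respondents]

# If 60 percent of respondents say they voted for Obama in 2012, each of
# them gets weight 0.51 / 0.60 ≈ 0.85 (down-weighted), while the Romney
# voters of four years ago are up-weighted correspondingly.
sample = (
    [{"vote_2012": "Obama"}] * 60
    + [{"vote_2012": "Romney"}] * 38
    + [{"vote_2012": "Other/none"}] * 2
)
weights = past_vote_weights(sample)
print(round(weights[0], 2), round(weights[60], 2))  # 0.85 1.24
```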

Weighting by past votes cannot correct for faulty memories, though. Evidence suggests that people remember having voted for the winner even when they didn’t. If more respondents today say they voted for Obama than really did, then weighting by previous votes would give too little weight to likely Clinton voters, thereby underestimating her support.

Our USC Dornsife/Los Angeles Times Daybreak Poll attempts to minimize some of these polling pitfalls. Each week, we ask participants in an internet panel, the Understanding America Study (UAS), for their voting preferences. Participants have been drawn from postal addresses and we provide them with internet access if necessary.

This ensures that our sample is representative of Americans across ages, locations, and socioeconomic backgrounds. It also avoids the problems associated with most internet panels, since we have a well-defined sampling frame. Furthermore, rather than using a likely voter model, we weight our respondents’ answers by their self-reported percent chance of voting.

Another difference in our poll is that we ask respondents for the percent chance they will vote for a candidate, rather than forcing them to choose one or the other. This means that they do not have to make a hard choice even if they remain unsure. (At this point in the race, only half of our respondents say they are 100 percent certain to vote for a particular candidate.)
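As an illustration of how such probabilistic answers can be combined, the sketch below weights each respondent’s stated chance of supporting a candidate by their stated chance of voting at all. The field names and numbers are hypothetical; this is not the poll’s published estimation procedure.

```python
# A minimal sketch of aggregating probabilistic poll responses,
# with hypothetical field names; illustrative only.

def expected_vote_shares(respondents):
    """Weight each respondent's stated percent chance of supporting a
    candidate by their stated percent chance of voting at all."""
    totals = {"Clinton": 0.0, "Trump": 0.0}
    turnout_weight_sum = 0.0
    for r in respondents:
        p_vote = r["chance_of_voting"] / 100.0        # chance of voting at all
        turnout_weight_sum += p_vote
        for candidate in totals:
            # percent chance of voting for this candidate
            totals[candidate] += p_vote * r[f"chance_{candidate}"] / 100.0
    return {c: totals[c] / turnout_weight_sum for c in totals}

# An undecided respondent can answer 50/50 instead of being forced to pick.
sample = [
    {"chance_of_voting": 90, "chance_Clinton": 70, "chance_Trump": 30},
    {"chance_of_voting": 40, "chance_Clinton": 50, "chance_Trump": 50},
]
print(expected_vote_shares(sample))
# {'Clinton': 0.638..., 'Trump': 0.361...}
```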

Since we are essentially surveying the same respondents every week, observed changes are less likely to be the result of random differences in sample composition from one poll to the next, which tends to dampen week-to-week swings.

We weight with past voting behavior because we found it was successful in 2012, when we followed a very similar procedure in the “RAND Continuous Presidential Election Poll.” Our final prediction was a 3.32-point advantage for Obama; the final tally of the popular vote showed a 3.85-point advantage. Our prediction was at least a couple of points more in Obama’s favor than most other tracking polls.

The aim of our approach is to address a number of the most crucial challenges in today’s polling environment. Our procedures are fully described on our website, https://election.usc.edu, and the data are available to anyone who registers as a user of the UAS. Of course, none of this promises that we will be closer to the final tally in November than any other poll.

We did extremely well four years ago, but as the saying goes, past results are no guarantee of future performance.

Arie Kapteyn is a professor of economics and the executive director of the USC Dornsife College of Letters, Arts and Sciences Center for Economic and Social Research, where he also oversees the Understanding America Study. Dan Schnur is the director of the Jesse M. Unruh Institute of Politics at the University of Southern California, and is the founder and director of the USC Dornsife/Los Angeles Times poll series.

The views expressed by Contributors are their own and are not the views of The Hill.