The views expressed by contributors are their own and not the view of The Hill

Don’t believe the polls — just vote


Polling numbers reported on cable shows and in newspapers may miss the mark, and there are scientific reasons that explain why.

On election day, all we know for sure is what we don’t know.

That’s why Americans should not rely on polling when deciding whether to vote or stay home on election day.

In survey methodology, there’s something called “total survey error.” It is the framework that guides survey design and informs survey quality. Often, election poll reporting focuses only on sampling error and overlooks coverage, non-response and measurement errors. Giving prominence to only one kind of survey error risks putting faith in results that are just plain wrong.

Here’s why: Polling is based on estimates of historic voter turnout, and we have no idea what the 2018 electoral turnout model will look like. When the types of voters we survey end up being different from the voters we expected to turn out, coverage error results.

Take Nevada, for instance. Early, absentee and mail-in ballots have now exceeded the total number of ballots cast in the 2014 midterm election. At this pace, more than 1 million Nevadans may cast votes by the time polls close on Tuesday evening. That’s just shy of the 2016 presidential election turnout. Two questions remain: Who are these surge voters, and are they different from typical midterm voters?

Some indicators suggest they are different. Thirty-seven percent more young voters (ages 18-29) cast early ballots than in the 2014 midterm. Registered Democrats have voted at higher rates than in 2014 and 2016, and non-partisan registered voters have exceeded the share who voted in 2014, according to L2, a non-partisan voter database.

The New York Times Upshot/Siena College poll offers the most transparent view of the impact various turnout models can have on polling estimates. Its methodology also explicitly details the bias that comes from non-response error, which arises when some people refuse to talk to pollsters. They report: “People who respond to surveys are almost always too old, too white, too educated and too politically engaged to accurately represent everyone.”

When pollsters weight their results to adjust for this mismatch, the weighting creates a design effect that increases the margin of sampling error; yet not all polls include this adjustment.
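For readers who want the arithmetic, here is a minimal sketch of how a design effect inflates a poll’s margin of error; the sample size and design effect in the example are hypothetical, not drawn from any poll cited here.

```python
import math

def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    """95% margin of error for a proportion, inflated by a design effect.

    n    -- sample size
    deff -- design effect from weighting (1.0 = simple random sample)
    p    -- assumed proportion (0.5 is the conservative worst case)
    z    -- critical value (1.96 for a 95 percent confidence level)
    """
    simple_moe = z * math.sqrt(p * (1 - p) / n)
    return simple_moe * math.sqrt(deff)  # weighting scales the MoE by sqrt(deff)

# Hypothetical poll of 1,000 respondents with a design effect of 1.5:
print(f"{margin_of_error(1000):.1%}")       # ~3.1% unweighted
print(f"{margin_of_error(1000, 1.5):.1%}")  # ~3.8% after weighting
```

The same sample, in other words, can carry a noticeably wider margin of error once weighting is honestly accounted for.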

In the closely watched U.S. Senate race in New Jersey, Sen. Bob Menendez holds a narrow 5-point lead over his Republican challenger, according to the latest Rutgers-Eagleton Poll. Weighted to the New Jersey adult population, the simple sampling error is lower than the margin of error adjusted for the design effect (+/-3.6 percentage points).

When these estimates are weighted to New Jersey registered voters, the adjusted margin of error is +/-3.8 percentage points. It jumps to +/-5.1 percentage points when the estimates are weighted to “likely voters” in New Jersey.
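That matters because the margin of error on the gap between two candidates is roughly twice the margin on a single candidate’s share. Here is a quick check, a sketch using that common rule of thumb rather than the poll’s own calculation:

```python
def lead_is_significant(lead_pts, moe_pts):
    """Rough check: a lead is only meaningful if it exceeds the margin
    of error on the *difference* between the two candidates, which for
    a two-way race is close to twice the margin on a single share."""
    moe_on_lead = 2 * moe_pts
    return lead_pts > moe_on_lead

# Menendez's 5-point lead against the likely-voter margin of +/-5.1:
print(lead_is_significant(5.0, 5.1))  # False: the lead is within the noise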

Finally, measurement error occurs when respondents give us incorrect information or even tell us what they think we want to hear. In an Indiana study, only three-quarters of young voters knew whether they were registered to vote. In a Florida post-election analysis, 16 percent of respondents who told us they had voted early or were likely to vote did not vote in 2016.

Sometimes voters offer inaccurate responses because they want to please the interviewer, or because they “satisfice,” giving a good-enough answer with minimal effort. They may also alter their responses or skip socially sensitive questions based on the perceived race, ethnicity or gender of the interviewer. For example, a respondent may conceal his true feelings about immigration policy if he senses the interviewer is Hispanic. Similarly, he may voice support for pay equity mandates if the interviewer is female.

Scholars refer to this as the “interviewer effect,” which is more likely to occur in telephone surveys. Social desirability bias, the desire to be viewed favorably by others, will vary depending on the mode in which the survey is conducted.

Occasionally, measurement error arises from the way survey questions are asked. In an Ohio gubernatorial poll, the head-to-head horse-race question asked respondents their voting preference only between the two major-party candidates, ignoring the independent candidates on the ballot. Unless you drill down and evaluate whether question wording and order have influenced the poll results, forecasting who’s ahead in tight races is unreliable.

Make no mistake, random probability surveys are highly efficient, versatile and generalizable when pollsters minimize total survey error.

What we do know for certain is that Americans can only be accurately counted by voting on Tuesday.

Debbie Borie-Holtz is an assistant professor at the Bloustein School of Planning and Public Policy at Rutgers University, where she teaches methods courses and conducts survey research at the Eagleton Center for Public Interest Polling. Follow her on Twitter @borieholtz.
