Polling Point of Contention: Despite Stigma, Pre-Election Polls Remain Reliable

Since Hillary Clinton’s 2016 Electoral College defeat, public sentiment toward presidential polling has often centered on skepticism. Although many were blindsided by that year’s election results, exit polling data closely tracked the popular vote results.
Presidential polling leading up to the general election is meant to represent a snapshot of the electorate’s preferences at a particular point in time. Rather than predicting how many votes each candidate is destined to receive, polls are a tool for measuring a level of support that can change in the months that follow.
“I think it’s important to remember that things can change for multiple reasons,” David Sterrett, senior research scientist with NORC at the University of Chicago, said. “And obviously, the more time there is between a poll and election, the more things can change. And that could be obviously news events, campaign events — all of those things can impact people’s attitudes.”
Consequently, opinion polls aimed at gauging the public’s attitude toward a candidate or issue can be misconstrued as predicting the outcome of an election in advance, Sterrett said. An opinion poll, however, is not an election forecast, and a forecast is not an opinion poll.
Because no poll can guarantee 100% accuracy, the methodology behind it is crucial to consider. The Gallup U.S. Poll, which ran from 2008 until 2017, interviewed U.S. residents age 18 and older living in all 50 states and the District of Columbia.
Gauging the attitudes of anyone 18 or older is likely to yield different results than surveying only residents who are registered to vote. Each approach, however, can present constructive opportunities for academic research.
Gallup’s U.S. polling used a “dual-frame design,” relying on list-assisted random-digit-dial methods to reach both landline and cellphone numbers, according to the Gallup website. Gallup conducted the interviews over the phone in both English and Spanish.
To correct for nonresponse, the double coverage of people reachable by both landline and cellphone, and unequal selection probabilities, Gallup weighted its final samples to match U.S. population demographics. From 2008 through 2012, Gallup routinely surveyed 1,000 voting-age adults per day. Between 2013 and 2016, that figure was cut in half to 500 adults per day.
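To illustrate what weighting a sample to population demographics involves, the sketch below shows a simplified post-stratification step in Python. The age categories, benchmark shares and sample counts are invented for illustration only; Gallup’s actual procedure adjusts for several variables at once and is considerably more involved.

```python
# Simplified post-stratification: weight respondents so the sample's
# age-group shares match assumed population benchmarks.
# All figures are hypothetical, not Gallup's actual targets.

population_share = {"18-34": 0.30, "35-54": 0.34, "55+": 0.36}

# Hypothetical unweighted sample of 1,000 respondents by age group.
sample_count = {"18-34": 220, "35-54": 330, "55+": 450}
sample_size = sum(sample_count.values())

# Each respondent's weight = population share / sample share for their group.
weights = {
    group: population_share[group] / (sample_count[group] / sample_size)
    for group in sample_count
}

for group, weight in weights.items():
    print(f"{group}: weight {weight:.2f}")
# 18-34: weight 1.36  (underrepresented, weighted up)
# 35-54: weight 1.03
# 55+:   weight 0.80  (overrepresented, weighted down)
```

In practice, pollsters adjust for many variables simultaneously, often with iterative raking, but the principle is the same: underrepresented groups count for more, overrepresented groups count for less.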
Despite drawing criticism at times as a misleading representation of the electorate, pre-election polling has historically been an accurate tool for predicting election outcomes. Statistically, there is no evidence that polling errors have increased over time or that there is a crisis in the accuracy of polling, according to an academic research paper published in Nature.
While some polls focus on likely voters or registered voters, Gallup specifically targeted voting-age adults in order to gauge overall public sentiment. During this period, Gallup asked participants about their preference for president and incorporated additional questions covering topical political issues.
Because of the enormous range of factors that go into polling methodology, Sterrett said it is an oversimplification to say any one poll is “good” or “bad.” Despite constant efforts by public opinion researchers and pollsters to refine their approaches, election polls can still be flawed and inaccurate.
In the run-up to the 2012 presidential election, Gallup estimated that Republican nominee Mitt Romney held a three-percentage-point lead over incumbent President Barack Obama. However, the firm subsequently reviewed its surveying operations after Obama won the election by a margin of 51.1% to 47.2%.
These results triggered a six-month review of the firm’s polling methodology. All aspects of Gallup’s system for surveying voters were scrutinized.
Part of the reason the poll overstated Romney’s support was that its phone surveys were conducted by region rather than by time zone, Frank Newport, editor-in-chief of the Gallup Poll, told USA Today. Additionally, Gallup relied too heavily on listed phone numbers when conducting its landline surveys, causing an over-representation of older, Republican-leaning voters.
Gallup conducted daily surveys, 350 days per year, before discontinuing the practice at the end of 2017. The firm now conducts its political surveys by calling 1,500 voting-age adults each week rather than on a daily basis.
In contrast, the Quinnipiac University Poll conducts its surveys once every two weeks. This means a rough random sample of 1,000 registered voters is interviewed over the course of five to six days, allowing more careful consideration of the data.
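As a rough illustration of why samples in the 500-to-1,500 range are so common, the sketch below computes the textbook 95% margin of error for a simple random sample. The calculation is generic rather than specific to Gallup or Quinnipiac, and it ignores the design effects that weighting introduces in real surveys.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error, in percentage points, for a proportion."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (500, 1000, 1500):
    print(f"n = {n:>4}: about ±{margin_of_error(n):.1f} points")
# n =  500: about ±4.4 points
# n = 1000: about ±3.1 points
# n = 1500: about ±2.5 points
```

Doubling or tripling the sample shaves only a point or two off the margin of error, which is one reason pollsters focus as much on who gets sampled as on how many.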
Quinnipiac’s public opinion polling center conducts its independent surveys both as a public service and for academic research. The center has conducted more than 1,200 surveys since 1996 and, like Gallup, it asks about topical issues in addition to gauging the public’s preference for presidential candidates.
Peter Bergerson, professor of public affairs at Florida Gulf Coast University, said polls can mirror election results only if they sample crucial demographics. Further, a sample that reflects national preferences typically cannot represent the outcome of each state’s individual election.
“Election day is not just one election, it’s 51 elections,” Bergerson said. “So, the real value of a poll comes from how it represents actual, real voters. Not every registered voter actually votes in an election.”