Political Polling in 2020: A Story of Redemption?

November 2, 2020

Like many Americans, I have been spending far too much of my spare time recently checking out the latest polls, playing out various election scenarios, and monitoring battleground swing states. As someone who spends much of my day designing survey samples and weighting data to ensure representation, I find the subtle but often important differences in how various pollsters design their polls fascinating.

After the polls failed to predict Trump's victory in 2016, public trust in political polling took a major hit. In 2018, the polls took another hit after missed predictions in the Ohio and Florida gubernatorial races.

Political polling has always been a challenge, and the Electoral College makes polling in close presidential races particularly difficult. After each debacle, like The Literary Digest's 1936 poll that famously predicted Kansas Governor Alfred Landon would defeat President Franklin Delano Roosevelt, or the 1948 Gallup prediction that New York Governor Thomas Dewey would defeat incumbent President Harry Truman, the polling industry has made leaps forward. Random sampling became the industry standard in reaction to the first miss, and Gallup learned not to stop polling weeks before the actual election. In 2016, the national polls were broadly criticized despite predictions that came close to the actual national margin; Trump's chances of winning were estimated at slightly better than the odds of a coin flip coming up heads twice in a row, or roughly one in four. The issues lay primarily with state-level polling, in part because many colleges and polling firms had joined the fray as a way of generating earned media.

American Skepticism

The 2016 election results showed areas for improvement in polling, and many polling firms took note, but the public has been slow to buy back in. Heading into Election Day 2020, Biden holds an average 8.6-point national lead and is positioned as a 9-to-1 favorite over President Trump according to FiveThirtyEight, a high-profile poll aggregator. For comparison, at the same point in 2016, Hillary Clinton held a much smaller 3.5-point advantage and a 71% chance of victory according to FiveThirtyEight.

Despite signs pointing to a commanding Biden lead, confidence that Biden will win the presidency is notably lower than it was for Clinton in 2016, with pundits hedging against the potential for another high-profile miss that would further erode public trust.

With members of the media, the public and even the candidates themselves expressing doubts about the accuracy of the polls, the question arises: Should we trust the polls this time? And if the polls are wrong this year, what does that tell us about public opinion polling in general?

Political Polling Faces Unique Challenges

Fortunately for market research and other non-political opinion polling, many of the toughest challenges that political polling faces do not apply to other polling in the same way.

Political pollsters have to contend with significant challenges relating to low social trust and social desirability bias. Low levels of social trust among segments of the electorate lead both to non-response and to inaccurate reporting of voting intentions and candidate preferences. Recently, this phenomenon has been dubbed the "shy" Trump voter, also known as the "silent majority."

Beyond these issues, mail-in voting is a hot topic, particularly in the all-important battleground state of Pennsylvania, where the US Supreme Court may soon decide on the legitimacy of mail-in ballots that were mailed before Election Day but received after the polls close on November 3. It is also unclear what proportion of mail-in ballots will be deemed invalid due to issues such as signature mismatches, failure to use a required privacy sleeve, and other factors.

There are also some wildcard factors to consider heading into Election Day, including spiking COVID-19 cases nationwide, a significant increase in first-time voters, and young voters turning out at higher rates than ever before. There are even structural changes that polling has never faced, like ranked-choice voting in Maine.

Positive Signs for the 2020 Polls

While the polls are contending with significant challenges, there are many positive signs that suggest better accuracy this year than what we saw in 2016 or 2018.

High voter turnout:

Leading into the final weekend before Election Day, Texas has already exceeded its 2016 vote totals, suggesting a major increase in voter turnout for 2020. If the trends seen in Texas and other swing states hold, we will see unprecedented voter turnout, mitigating the impact of inaccurate self-reporting of voting behaviors.

The polls have been unusually stable and in agreement with each other:

Since June, President Trump’s approval rating and polling averages have held amazingly steady, despite many opportunities for the race to tighten after the presidential and vice presidential debates.

The “October Surprise” fizzled this year:

In 2016, Clinton appeared to suffer more negative fallout from late-breaking scandals than Trump did. This year, neither Trump nor Biden seems as affected by stories in the news cycle in the final stretch of the campaign.

Undecided voters:

The proportion of likely voters who are still undecided is reportedly several percentage points lower in 2020 than in 2016. Quite simply, the potential to close the gap is smaller as many Americans have already voted, and most others have decided for whom they will vote.

Pollsters have gotten smarter:

Beyond these encouraging signs for pollsters, many of the best-known and most respected polls have changed their methodologies to correct previous weaknesses. Most commonly, polls now weight by education and have improved coverage of cell phone-only households, two factors that contributed to misses in key 2016 swing states.
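As a rough illustration of the kind of education weighting described above (the sample counts and the 40% college-educated population target are invented for this sketch, not any pollster's actual figures), a minimal post-stratification adjustment might look like:

```python
from collections import Counter

# Hypothetical raw sample: each respondent has an education level and a
# candidate preference. College-educated respondents are overrepresented here.
sample = (
    [("college", "A")] * 30 + [("college", "B")] * 20 +
    [("no_college", "A")] * 20 + [("no_college", "B")] * 30
)

# Invented population targets: suppose 40% of the electorate is college-educated.
targets = {"college": 0.40, "no_college": 0.60}

# Share of each education group in the raw sample.
n = len(sample)
edu_counts = Counter(edu for edu, _ in sample)
sample_share = {edu: count / n for edu, count in edu_counts.items()}

# Post-stratification weight: population share divided by sample share.
weights = {edu: targets[edu] / sample_share[edu] for edu in targets}

def weighted_support(candidate):
    """Weighted share of respondents preferring `candidate`."""
    total = sum(weights[edu] for edu, _ in sample)
    favor = sum(weights[edu] for edu, pref in sample if pref == candidate)
    return favor / total

# The unweighted sample shows 50% support for A; weighting toward the 40%
# college target pulls A's estimated share down to 48%.
```

The point of the sketch is simply that the same raw responses yield a different topline once each education group counts in proportion to its share of the electorate rather than its share of the sample.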

Additionally, innovative techniques for prediction and cross-validation are being put in place in some polls. The USC Dornsife Center for Economic and Social Research, for instance, is not only asking about personal voting intentions this year, but also asking people how they think members of their social circle will vote, and how they think others in their state will vote, as a way to indirectly predict election results in a way that is less subject to false reporting. Time will tell if these techniques will provide consistent and accurate estimates, but new techniques like these illustrate an evolution in polling science.
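A minimal sketch of how a social-circle question could feed an indirect estimate, using invented respondent data and a simple averaging rule rather than USC Dornsife's actual estimator:

```python
# Toy sketch of the social-circle idea. Each tuple is one hypothetical
# respondent: their own intended vote, plus their estimate of the share of
# their social circle supporting candidate A. All numbers are invented.
responses = [
    ("A", 0.70), ("A", 0.60), ("B", 0.40),
    ("B", 0.30), ("A", 0.55), ("B", 0.45),
]

# Direct estimate: the share of respondents who say they themselves back A.
direct = sum(own == "A" for own, _ in responses) / len(responses)

# Indirect estimate: the average of what respondents report about their
# circles, which can surface preferences people hesitate to state directly.
indirect = sum(share for _, share in responses) / len(responses)

# One simple way to combine the two signals (equal weighting is arbitrary here).
blended = 0.5 * direct + 0.5 * indirect
```

When respondents answer honestly the two estimates agree, as in this toy data; a gap between them is exactly the kind of signal that might flag "shy" voters.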

Final Thoughts

Political polling and market opinion research grew from the same roots, each evolving from lessons learned in the other. Today, they continue to address common challenges, including non-response and sampling error. Compounding the challenge, political polling must also contend with voting dynamics that do not affect other types of opinion research. Regardless of how the 2020 election turns out, both industries will continue to refine their techniques and borrow successful strategies from one another to keep pace with an ever-changing world.
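For a concrete sense of the sampling-error component both fields share, the textbook 95% margin of error for a polled proportion can be computed as:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from a
    simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll showing 50% support carries roughly a 3.1-point margin
# of error, before any non-response or coverage error is considered.
moe = margin_of_error(0.50, 1000)
```

Note that this formula assumes simple random sampling; the design effects introduced by weighting adjustments like those discussed above typically widen the true interval.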

While it is not a sure bet that the polls will be correct this year, one thing is for certain: Americans will be glued to their television sets and smartphones on Tuesday, and likely for several days, if not weeks after.

If you’d like to learn more about the differences between political polling and market research, send us a note.


John LaFrance
Vice President, Research Methods & Sampling

John LaFrance is vice president of research methodology at Escalent. He leads a strong team of methodologists and sampling statisticians responsible for the development, implementation and execution of complex sampling and weighting designs. John's team has expertise in survey methodology, research design, sampling theory, data weighting, multi-mode research and data quality best practices. John has helped Escalent stay on the cutting edge of research best practices, smartphone survey design, TCPA compliance, and multi-mode sampling techniques. He has published and presented on topics such as techniques for including cell phone dialing in CATI research, multi-mode research best practices, smartphone survey-taking behaviors and cross-cultural survey response variation. Prior to joining Escalent in 2008, John received an M.S. in survey methodology from the University of Michigan's Institute for Social Research (ISR).