
Despite the 2020 election results, you can still trust polling. Mostly.

Yes, there’s a problem with pre-election polling. But mass opinion polls don’t have that problem.

Analysis by Robert Y. Shapiro
December 3, 2020 at 7:00 a.m. EST
A voting location in Mission, Kan., on Oct. 20. (Charlie Riedel/AP)

Some pundits discussing the 2020 pre-election polls are treating the year as if it has joined the U.S. presidential elections of 1948 and 2016 to form a trifecta of polling fiascos. In 1948, George Gallup confidently predicted that Thomas Dewey would defeat President Harry S. Truman; in 2016, pollsters expected Hillary Clinton to defeat Donald Trump.

In 2020, polls appear to have overconfidently predicted that Joe Biden would handily defeat incumbent President Trump.

While researchers are sorting out the final numbers, some observers are arguing that polling has outlived its usefulness. But while pre-election polls have their problems, mass opinion polling is quite different.

What went wrong with the 2020 pre-election polls

The American Association for Public Opinion Research (AAPOR) advised pollsters to make a number of corrections after 2016, including weighting the data to better represent White voters without college degrees; conducting more and better polls to be “averaged” right before the election; and better accounting for the smaller numbers of undecided voters and those leaning toward third-party candidates. But apparently these corrections did not work or were not adopted widely enough.

Crucial state polls were significantly off once again, especially in Michigan, Wisconsin and Pennsylvania. Yes, Biden won these states, but by thinner margins than pre-election polls had found; those polls steadily forecast him winning by 4 to 5 percentage points or more. Further, Trump defeated Biden in states that were expected to be close, such as Florida and Texas, by wider margins than predicted. In Arizona and Georgia, the polls were within sampling error margins. But polls were well off the mark in several congressional races: Maine’s Republican Sen. Susan Collins won handily, despite pre-election polls showing her opponent leading. And while some expected a “blue wave” election that would expand Democrats’ House majority, Republicans gained House seats.
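To make “within sampling error” concrete: the margin of error on a single candidate’s share follows a standard formula, and the margin on the gap between two candidates is roughly double that. Here is a minimal sketch in Python, using hypothetical numbers (an 800-person sample and a 52 percent share are assumed purely for illustration):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95 percent margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical state poll: 800 likely voters, Biden at 52 percent.
moe = margin_of_error(0.52, 800)
print(f"+/- {moe * 100:.1f} points on Biden's share")  # about +/- 3.5 points
```

Because the error on the lead between two candidates is about twice the error on either candidate’s share, a polled lead of 4 to 5 points is less of a cushion than it may appear.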

What went wrong? For one, polling was conducted during the pandemic. Many states were voting early or by mail in large numbers for the first time, and those votes then had to be counted under crisis conditions for voters, the Postal Service and state election administrators alike. Poll respondents who said they planned to vote by mail were disproportionately Democratic; some may have failed to do so, run into difficulties, or had their ballots lost in the mail, contributing to polling error.


But here’s the challenge for pollsters: knowing who will actually vote, as opposed to who happens to respond to polls. Pollsters have to estimate the composition of “likely voters,” based on past turnout patterns, then accurately weight respondents’ answers by comparable demographic characteristics.
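In rough computational terms, a likely-voter estimate is a weighted average of respondents’ stated choices, with weights reflecting each respondent’s estimated chance of voting. The sketch below illustrates the idea; the respondents and turnout probabilities are hypothetical:

```python
# A minimal likely-voter sketch: weight each respondent's stated choice
# by an estimated probability of voting. All values here are hypothetical,
# for illustration only.

respondents = [
    # (candidate choice, estimated turnout probability)
    ("Biden", 0.9), ("Trump", 0.6), ("Biden", 0.5),
    ("Trump", 0.95), ("Biden", 0.8),
]

total = sum(p for _, p in respondents)
biden_share = sum(p for cand, p in respondents if cand == "Biden") / total
print(f"Biden among likely voters: {biden_share:.0%}")  # 59% in this toy sample
```

If the turnout probabilities assigned to one candidate’s supporters are systematically too low, the headline number will be biased even if every respondent answers honestly.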

AAPOR has convened a new task force, like the one it convened after 2016, to examine 2020 pre-election polling. Its postmortem will need to check survey estimates of likely voters against voter file data, to see how pollsters underestimated turnout for Trump.

It will need to consider the same potential problems discussed after 2016. These include whether some respondents were “shy Trump voters,” reluctant to disclose their intentions, or whether Trump voters were simply less likely to respond to polls at all, what pollsters call “nonresponse bias.” Such bias may have meant underestimating not just the number of Whites without college degrees who were likely to vote, but also the number of likely voters in rural and small-town areas, who voted overwhelmingly for Trump.

Why did polls undercount Trump voters?

Some Republican-oriented polls may have done better. They may have weighted their data differently than other pollsters did, adjusting for Republicans’ nonresponse, or weighting reported Trump support more heavily to offset “social desirability bias,” respondents’ desire to say what they believe an interviewer wants to hear.

The 2020 National Election Pool’s exit poll found that roughly the same percentages of Democrats (37 percent) and Republicans (36 percent) had voted, with comparable shares, or even more Republicans, in key states. Pre-election polls, by contrast, often identified greater proportions of Democrats than Republicans as likely voters.


None of that affects public opinion polling, which is quite different

But while election polling definitely has problems that need to be studied, some pundits are claiming that public opinion polling has failed entirely. That’s not so. Mass opinion polling is a very different animal from election forecasting polls. It occurs regularly between elections, examining all manner of political and social attitudes and behaviors. And it’s reliable and useful for political scientists and others who study U.S. democracy.

So what’s the difference? In pre-election polls, pollsters must estimate who will vote. In mass public opinion polls, pollsters don’t have that problem: survey samples can be weighted effectively to match census data on the entire adult U.S. population and its subgroups. And researchers can draw on an enormous literature on trends and patterns in public opinion, including today’s partisan conflict.
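The weighting itself is conceptually simple: respondents from groups underrepresented in the sample count for more, in proportion to the group’s known share of the adult population. Here is a minimal post-stratification sketch; the demographic cells and shares are hypothetical:

```python
# A minimal post-stratification sketch: each respondent's weight is the
# population share of their demographic cell divided by its sample share.
# The cells and shares below are hypothetical, for illustration only.

population_share = {  # e.g., benchmarks drawn from census data
    ("white", "no_degree"): 0.40,
    ("white", "degree"): 0.25,
    ("nonwhite", "no_degree"): 0.22,
    ("nonwhite", "degree"): 0.13,
}

sample = [  # respondents tagged by cell; non-college Whites underrepresented
    ("white", "no_degree"), ("white", "degree"), ("white", "degree"),
    ("nonwhite", "no_degree"), ("nonwhite", "degree"),
]

sample_share = {cell: sample.count(cell) / len(sample) for cell in population_share}

weights = {
    cell: population_share[cell] / sample_share[cell]
    for cell in population_share
    if sample_share[cell] > 0
}
print(weights)  # cells underrepresented in the sample get weights above 1
```

Because the benchmark is the whole adult population rather than an unknowable electorate, mass opinion surveys sidestep the likely-voter guesswork that bedevils election polls.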

These polls can be benchmarked against high-quality government agency surveys, which have high response rates and draw on large samples. They can also be checked against similarly high-quality academic surveys, especially the NORC General Social Survey and the American National Election Studies.

Some might object that here, too, Republicans or Trump voters might fail to respond, skewing estimates of the public’s opinions. But they would be a smaller proportion of the public in a mass survey conducted outside a heated election campaign. And on matters unrelated to the election, that group has little more reason to avoid pollsters than the U.S. public in general does. That avoidance mattered in election polls because the gap between whom Democrats and Republicans supported for president was fully 90 percentage points. But when we’re looking at the U.S. public at large, including nonvoters, there are fewer strong partisans. What’s more, on issues other than presidential vote choice or presidential approval, partisan differences in opinion are much smaller on average, roughly 36 points in one important study.

In other words, the opinion research community does need to continue examining possible sources of error in both election polling and mass surveys. And of course, it needs to encourage transparency in conducting polls and in archiving them for further scrutiny and research. But mass public opinion polling is alive and well.


Robert Y. Shapiro is the Wallace S. Sayre professor of government and professor of international and public affairs at Columbia University, president of the Academy of Political Science, and chair of the Roper Center for Public Opinion Research.