
The Evolution of Political Polling with Karen Hult

Karen Hult joined Virginia Tech’s “Curious Conversations” to chat about the history and evolution of polling, methods used in modern polling, and how politicians and the average person can interpret poll results.

The conversation highlights the importance of probability sampling and inferential statistics in generating accurate poll results, as well as the need for critical thinking when consuming poll results.

About Hult

Hult is a professor of political science at Virginia Tech and serves on the faculty of the School of Public and International Affairs' Center for Public Administration and Policy, with expertise in the U.S. presidency, federal and state politics, policy and governance, and federal and state courts. Her research focuses on organization and institutional theories, the U.S. presidency, U.S. national executive branch departments and agencies, U.S. state politics, policy, and governance, and social science methodologies.


(Music)

Travis Williams

If you pay attention to any form of news media, you've probably heard of political polling. But in a recent poll I conducted with myself, I realized that zero out of one people felt he actually knew very much about the history of polling, how it's evolved over time, and what all goes into the polls that we see today. Thankfully, Virginia Tech's Karen Hult was willing to help me gain a better understanding of all of these topics and more.

Karen is a professor of political science at Virginia Tech and serves on the faculty of the School of Public and International Affairs' Center for Public Administration and Policy. We chatted about some of the polling milestones in America and how polling has evolved over time, as well as some of the methodology behind the polls that are created today. Karen also shared some insights as to what the average person should consider when they see polls in the media, and her thoughts on what she personally finds to be the most exciting part of the political calendar. And spoiler alert: it is not election season. I should also mention that we recorded this podcast on July 16th, so some of the specific references may be a bit dated. But as fast as things move in politics these days, that was going to happen no matter when we recorded. I'm Travis Williams, and this is Virginia Tech's Curious Conversations.

Travis

I'm really curious what the history of polling is. Have we always had polls and maybe how have they evolved over time?

Karen

Some of it, interestingly enough, has to do with what we mean by polls and polling. It turns out that in Britain, for example, the word polling refers to casting a vote at an election or going to vote at an election location. So that's interesting. In the U.S. context, we think about polls as a kind of survey, if you will. And if we think of a poll as a sort of survey, a counting or enumerating of people, maybe their views, their opinions, their attitudes, and so forth, then it opens up a whole variety of other things. If we think of it that way, then we can think about a poll as a kind of census. Well, censuses trace back at least to biblical times, if not before, in the Western world. But to the extent we're going to think about it as election polling, it's a way of getting a sense of what election results are likely to be. So you can think about one purpose of polling as forecasting elections. You can also think about it as reflecting some interest in what voters think and what their likely behavior is going to be. And then finally, you can think of it from a candidate's point of view: how do I develop campaign strategy? What do the voters think? What kinds of appeals will be more or less effective? I should add, of course, that a lot of surveying is done by elected officials. Presidents have long conducted various kinds of in-house tracking of what the public thinks, what the public wants, and what some of the public's concerns are. We can trace that back all the way to the beginning of the presidency, when presidents started paying attention to folks who came by the White House, who could just walk in and talk to the president at certain kinds of Sunday receptions. Later, Franklin Roosevelt, for example, had somebody counting his mail and tracking what the mail the White House received said. Those kinds of tallies of people's views have gone on for a long time.
Really, though, we can probably start with John F. Kennedy, who began to make more in-house and more partisan political use of polling results. At least the lore at the time of the Kennedy administration was that it had an in-house pollster whose name many listeners may remember, Louis Harris, who was often rumored to cast his questions in ways that reflected Democratic goals and objectives, but who in any case was an in-house pollster who reported and interpreted those polls to President Kennedy. Presidents since then have paid more or less attention to survey and poll results. But that's moving a little too close to the present. For presidential election polling, probably in a lot of people's minds, the key year for what we now think of as election polling that is a bit more scientific, where at least we can think about plus-or-minus errors and prediction errors, is 1936. That is the year of the famous, or infamous, Literary Digest survey, which had been really good at forecasting presidential election victories and vote percentages going back to 1924. The Literary Digest was, as the name implies, a quite popular magazine. What the magazine started doing as far back as 1924 was sending ballots out to a range of selected people. Well, who did they decide to send ballots to? By 1936, they sent something like over 36 million ballots out by U.S. mail to people whose names they got from automobile registries, telephone directories, and other kinds of listings. You can tell where this is going in terms of the 1936 infamy. About 2.3 million of those postcard ballots were returned to the Literary Digest, and they tallied them all up as they'd done in all the previous presidential elections going back to 1924.
And they projected a landslide victory. In 1936, Franklin Roosevelt was running for his first re-election, and they predicted victory for his Republican opponent, Alf Landon. Instead, Landon lost in a landslide, winning well under 40 percent of the popular vote. Well, how could that be? You can look back at what I just described about those Literary Digest polls and see what some of the problems may have been. For example, the lists of people to whom the mail ballots were sent were skewed, or biased. They did not include people who did not own homes, and they did not include people who did not own automobiles or have telephones. We can think about why that mattered: this was the 1930s, in the middle of the Depression, and many people were unemployed. So the Digest was probably oversampling Republicans and undersampling poor people, many of whom may have voted Democratic. That was the first signal that there is such a thing as selection bias in who it is we poll, who we try to survey. At the same time, as I said, they sent out well over 36 million mail-in ballots and only got back about 2.3 million. So we've also got a big non-response bias, and we don't know what that was reflecting either. They were not getting a full sense of even the people who received the mail-in ballots in the first place. A big disaster: the Literary Digest went out of business within two years. Now, at the same time in 1936, some new-style scientific pollsters were entering the scene. Some of their names we recognize immediately: George Gallup, for example, and Elmo Roper (think of the Gallup poll and the Roper poll), who was also a real entrepreneur on top of everything else, and a man named Archibald Crossley. What they all were saying, in their various ways, was: there is a more systematic way, my word, to do this kind of polling.
They used what was then called a quota sampling method. What quota sampling says is: let's think about the kinds of people who vote, and try to make sure we have appropriate proportions of those people among those we ask how they're going to vote. So they tried to quota sample people based on their age and their sex (this was a time in which women had fairly recently gotten the vote, but they were turning out to vote in many states), and they looked at where people lived, and all of those kinds of things. Based on the quota sampling method, they probably got a much better sample from which to forecast.

 

Each of these men, with his rough quota sampling method, came up with a far closer forecast of what would likely happen in that election. So that is in many ways the beginning of what many people would call scientific polling. It was also the case that around the turn of the century, probability statistics methods were beginning to be examined and pioneered. And where did that happen? In the field of agriculture. That was the origin of the first probability sampling statistics, which we now call inferential statistics. That is, they're not just describing, not just counting people up and dividing by the number sampled, with the count as the numerator and the number of people examined as the denominator. Instead, what probability or inferential sampling allows one to ask is: based on a relatively small representative sample, a random sample, what are the chances we're going to come near the actual result, the true value? That was developing in the latter part of the 19th century, and it was available to pollsters and social scientists generally as we entered the 20th century. So it starts feeding into public opinion polling as well.
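The core inferential idea, that a small random sample can estimate a whole population's views, can be sketched in a few lines of Python. This is a toy simulation, not anything from the conversation; the 52 percent figure and the population size are made up for illustration:

```python
import random

random.seed(0)

# Hypothetical electorate of 100,000 voters, 52% of whom support candidate A.
electorate = [1] * 52_000 + [0] * 48_000

# Instead of counting everyone, draw a random sample of 1,000 voters.
sample = random.sample(electorate, 1_000)

# The sample proportion estimates the true 0.52 share,
# typically landing within a few percentage points of it.
estimate = sum(sample) / len(sample)
print(round(estimate, 3))
```

Because the draw is truly random, inferential statistics can also say how far such an estimate is likely to fall from the true value, which is where the familiar plus-or-minus margin of error comes from.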

Travis

I'm curious, how are our modern polls, how do we go about getting those now? Are people called over the phone or do people reach out? Is there a variety of different ways? How do we generate polling now?

Karen

There are a variety of different ways. As you might have anticipated from that discussion of inferential statistics, quota sampling pretty quickly gave way to what we now talk about as probability sampling. And probability sampling has the real advantage that it allows the results to be examined more systematically using inferential statistics.

So, a simple probability sample is drawn randomly. That means every possible member of the population, or the sampling frame, the larger group of the kind of people you want to survey, has an equal chance of being selected into the sample. Done that way, it's called simple random sampling.

We could also divide the population up into various groups, what's called stratified random sampling, if we want to make sure we get sufficient numbers of, let's say, African Americans and Southern Baptists in our sample. We divide the population, the sampling frame, up into groupings that we call strata. Then within each of those strata (religion is one example; race and ethnicity is another), we randomly sample within that stratum and pull the draws together into a larger probability sample. But going back to your larger question, what that allows us to say is: I don't need to sample everybody in the population in order to come up with a defensible result. Probability sampling is what allows us to use inferential statistics, with which we can start judging, let's say, a sample of 1,200 people. Once a sample gets above a thousand or so, its size makes far less difference than the way it was drawn in terms of our ability to judge how accurate the responses are. So what that meant is that as we moved away from the Literary Digest's way of mailing out lots and lots of sample ballots, and even from the quota sampling method that Gallup and others used (and that Gallup still uses to some extent now), into probability sampling, we have a firmer grasp of the quality of the sample design and of the inferences that get made after people give their responses, if that makes sense. One of the other things people ask is: what makes a good survey, and what makes a bad survey? A lot of the experts in this area will say a threshold requirement is that you have a probability sample of some kind. There are lots of non-probability samples out there.
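As a rough illustration of the two designs Hult contrasts, here is a minimal sketch of simple versus stratified random sampling. The two-stratum population (urban/rural) and its 70/30 split are invented for the example; real polls stratify on many variables at once:

```python
import random

random.seed(42)

# Hypothetical sampling frame: 70% urban and 30% rural residents.
population = ["urban"] * 7_000 + ["rural"] * 3_000

# Simple random sampling: every member has an equal chance of selection,
# so the sample's urban/rural split matches 70/30 only on average.
simple_sample = random.sample(population, 100)

# Stratified random sampling: split the frame into strata, draw randomly
# within each stratum in proportion to its size, then pool the draws.
stratified_sample = []
for name in ("urban", "rural"):
    members = [p for p in population if p == name]
    k = round(100 * len(members) / len(population))  # proportional allocation
    stratified_sample.extend(random.sample(members, k))

# The stratified sample hits the population proportions exactly.
print(stratified_sample.count("urban"), stratified_sample.count("rural"))  # 70 30
```

Proportional allocation guarantees each stratum is represented in its population share, which is exactly the assurance simple random sampling cannot give for any single draw.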

Think about the opt-in surveys that many people participate in online or are invited to give their opinions on, which then get pulled together and summarized. Those can be really telling and informative, but they don't produce results as confident or defensible as a probability sample does. So that's one key criterion. And that's just sampling error, which is only one of the difficulties that can arise with surveys.

The design of a poll makes a huge difference too, and that leads to a bunch of non-sampling errors that we could talk about in the future.

Travis

What does the average person need to be thinking about or considering whenever they're watching the news and hear that new poll results say this or that? What types of things should we be thinking about?

Karen

Well, you know, the kinds of things we say in our research methods classes, for example, when we teach about polling and surveying. Among the things I tend to say: first, listen for who the pollster is, if that makes a difference to you, but also pay attention to something we don't often get in reporting about polling, which is the level of confidence we should have in the results. When you use inferential statistics, among the things you can find out is not only the margin of polling error, that plus or minus we hear a lot about. That's important, there's no doubt about it: the result is not quite a point estimate, so what does it look like above and below that estimate? Given what? Given the size of the sample, the nature of the sample, and the confidence level one should have in the results. A lot of the statistics we're talking about have associated confidence levels. Real simple: how sure should I be that these results didn't happen by chance?

Should I be 95 percent confident? Should I be 90 percent confident? All of those are basically asking: how much money would I bet on this being an accurate poll result? We should listen for the confidence levels. We also should want to know a little bit more about the wording of the question that was asked and to whom it was asked. Now, that's a lot of information to expect of people who are far busier doing and thinking about other things. But those would be among the things I'd suggest are important. The other thing I would observe, well, two other things: one is that we've been finding over time that increasing numbers of people are refusing to answer pollsters' questions, whether it's exit pollsters after voting or a telephone call, seemingly always during dinner, or some other time, asking questions.
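The margin of error and confidence level Hult describes can be computed directly. Below is a minimal sketch for a sample proportion using the standard normal approximation; the 1,200-person sample and 50 percent split are illustrative choices, not figures from the conversation:

```python
import math

def margin_of_error(p: float, n: int, z: float) -> float:
    """Half-width of a confidence interval for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# z-scores (standard normal quantiles) for common confidence levels.
Z = {"90%": 1.645, "95%": 1.960, "99%": 2.576}

p, n = 0.50, 1200  # worst-case proportion and a typical national sample size
for level, z in Z.items():
    print(level, round(margin_of_error(p, n, z) * 100, 1), "points")
# 90% -> about 2.4 points, 95% -> about 2.8, 99% -> about 3.7
```

Note that the margin shrinks only with the square root of n, so quadrupling the sample merely halves the error. That is why, beyond roughly a thousand respondents, how the sample was drawn matters more than how big it is.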

Refusal rates have been increasing pretty dramatically, to the point that a range of pollsters have been saying it's not clear they can do telephone polls anymore, because they're not getting an adequate response rate to be sure they're talking to everybody they want to be talking to. So that response-rate issue is something people may want to think about. The other thing they may want to think about is whether there was much care in analyzing those who did not respond.

The non-response bias in a lot of the polling information we get is never reported, and that drives me crazy. I want to know whether there was a difference between the people who did respond and the people who didn't. We don't often get as much of that information as maybe we should. And finally, another factor is that levels of trust in institutions generally in the United States, we know, have been declining. Clearly that's there when it comes to trusting election reporting and trusting survey results. To the extent that's true, there's an important skepticism that's good for all of us to have about what we hear about polls and poll numbers. It needs to be tempered, though, I think, by the fact that some of this polling is done by good, serious, ethical pollsters. We have ways of figuring out who those folks are, based on their past track record, the kinds of methods they use, and so on. Given that, there may be greater credibility attached to some polls than to others. And that, again, is a tough judgment call for a lot of people.

Travis

How do they go about reaching out to people to get like an instant response? You know, a big event might happen and they want to know how people respond to just that. Do they have go -to folks that they immediately reach out to or what's that process like?

Karen

Well, that's a great question. And we've just had an event for which a lot of that polling is just now being reported: the assassination attempt on former President Trump. We were beginning to see some polling that had been done going into that Saturday, so those poll results are out there. But then, not immediately, but probably right away on Sunday, some pollsters with existing survey pools began going back into those pools to reach people they hadn't contacted yet, noting first that those respondents were contacted after this pivotal event, and seeing whether and how the responses changed. In other cases, there are polling operations that use some kind of online opt-in way of developing samples. Based on those opt-ins ("I'm willing to take a survey; I'm going to give you some information about me"), when the survey organization wants to field a survey, it can immediately check for the kinds of people it wants to include and get the survey out pretty quickly. So you're combining the pre-recruitment, which has some other concerns about confidence, with a broad panel of possible people; then you go into that broad panel and develop more of a probability-based sample to send an additional survey out to.

Travis

How do politicians generally use these polls in different ways?

Karen

Well, that's a great question for all kinds of reasons. The kinds of polls I've been discussing are what we label public polls. Most electoral campaigns, elected officials, and others may well have private polls. That is, they work in-house, with either contract pollsters or somebody on their staff who designs the polls and surveys themselves. Those surveys can then be tailored to exactly what the candidate or the elected official wants to know more about.
And so we talked a little earlier about presidential polling operations. If one looks at presidents like former President Obama, among the things his in-house polling operation did had to do with themes of speeches and ways of framing language. We've got a fair amount of political science literature saying that, typically, elected officials once they're in office do not poll so much to follow the polls. They often poll to figure out the best ways to frame and get across the messages they want to deliver to selected audiences. And so that's a little bit different use of polling than otherwise.

 

With campaigns, however, you probably read a lot of "our private polls say this, but the public polls say something different." That's some of what the Biden White House has been saying recently: "That's not what our polls say," they argue. What that refers to, in part, is that they have different kinds of private polling whose results are not shared widely, and the questions they ask, and of whom they ask them, may be very different from a public poll's. So that's another way candidates and the people who work for them can use this kind of information: what's done with private polls, but also looking at the variety of public polls and seeing where there is consistency or not so much consistency, where there may be evidence of an opening among certain target groups with different responses, or even differences between polling operations in the kinds of questions they ask or the order in which they ask them. All of those things can be fodder for a good campaign strategist, I think.

Travis

Well, as a political scientist and someone who studies elections and our government, where does election season rank for you as far as the most exciting times of the year?

Karen

Well, a couple of responses. One is that most political scientists don't focus on elections, and I do not. I know about elections because I have long taught U.S. politics, and I know a lot about and teach state and local government in the United States and other subnational areas around the world. I've also spent a fair amount of time teaching about Congress and so forth. So my interest in elections is really a democratic-theory interest. That is, elections are a key vehicle for thinking about linking the governed and the people who are making decisions on our behalf as representatives.

So that link is pretty important to me. I also teach and work in public administration and policy, which means I'm really focused on governing. I care a lot about the nature of the folks who get elected to office, what they promise to do, and then what they're able to do once they get into office. So for me, going back to your question, the election is extraordinarily important; there is no doubt about that. I think an informed electorate is one of the most challenging parts of having a representative democratic system. Given that, though, too often in elections we don't remember that the person we're voting for also must govern, and govern in ways that are responsive to what the voters indicated they might prefer, but also in ways that aim at effective, appropriate, and accountable governance. So what I get most excited about, quite frankly, following a presidential election is the transition period. I've spent a fair amount of time writing about and doing research on that as well. One hopes that after this coming presidential election there is not a period of uncertainty and conflict the way there was in 2020. The person who becomes president-elect has to make a pretty quick transition between campaigning and governing. And that transition is critical, especially given the precarious state the United States is currently in, both in terms of international affairs and in terms of some of the internal dynamics within the country. So that transition to governing is also important.

Travis

So maybe it's actually after the election that's most exciting for you?

Karen

Well, most exciting, but also, I think, it's the prose, not the poetry, of politics, if you will. It's not what most people get really turned on about.
But I think it's so significant because to the extent we're going to live together as a political community, we have to be able to figure out how we're going to resolve conflicts if there are any and then keep moving forward in directions we want to go.

(Music)

Travis

And thanks to Karen for sharing her insights related to political polling.

If you or someone you know would make for a great curious conversation, email me at traviskw at vt.edu. I'm Travis Williams, and this has been Virginia Tech's Curious Conversations.