Published: July 29, 2024
Table of Contents
Why Should You Pay Attention to Polls at All?
Understanding the Methods: Questions to Ask about Polls
Examples of the Usefulness of Polls in Understanding Health Policy
Polls and surveys are useful tools for understanding health policy issues. However, it takes time and training to understand how to interpret survey results and to decide which polls are useful and which might be misleading. The aim of this chapter is to help you learn how to be a good consumer of polls so they can be a valuable part of your toolkit for understanding the health policy environment. It begins by discussing why polls are an important tool in policy analysis and the caveats to keep in mind when interpreting them. It then discusses polling methodology and the questions you should ask to assess the quality and usefulness of a poll. The chapter ends with some real-world examples in which polling helped inform policy debates.
People sometimes ask if there is a difference between a "poll" and a "survey." The quick answer is that every poll is a survey, but not every survey is a poll (for example, large federal surveys like the Census or surveys of hospitals or other institutions would not be called polls). For purposes of this chapter, we use the terms interchangeably.
Polls have gotten a bad rap over the past few years, particularly around election times when they don't perfectly predict the winner of a given election. Given this, you may wonder why you should pay attention to polls when trying to understand health policy. There are six basic reasons why it's important for health policy scholars to understand public opinion:
Polls do not tell the whole story. Public opinion is just one part of the political and policymaking process. Public support for a given policy may seem clear based on a single survey question, but it can be quite malleable in the course of a public debate, and not all surveys measure this malleability. Small changes in survey question wording can sometimes lead to big changes in public support, so it's important never to rely on a single question from a single poll to draw a conclusion about what the public thinks or knows. When possible, look for multiple questions on the same topics from multiple polls conducted at various times. If the answers are consistent, you can be more confident that the conclusion is correct. Sometimes a poll finding conflicts with your best sense of political reality when all available information is considered. In those instances, there's a good chance your "gut" is a better guide than what a given poll tells you.
There are limits to polling on complex topics like health care. When the public says they support a specific proposal for lowering health care costs, it doesn't mean they have fully thought through the details of that proposal and its implications. Rather, it may signal how important they think it is for policymakers to address the high cost of health care. And while some polls test this by asking follow-up questions that probe the public about trade-offs to any given policy approach, some health policy topics are just too complicated to reasonably ask the average American to weigh in on in a short survey.
Public opinion can't give you the "right" answer. While public opinion can tell you where the public stands on an issue, it cannot tell you what the right policy solution is in any given situation. For example, pollsters often ask people to rank the priority they give to different health issues before Congress. They may ask the public to rank the issues of prescription drug costs, the future of the Affordable Care Act, Medicaid expansion, the financial sustainability of Medicare, and so forth. But it turns out that real people aren't organized like congressional committees and don't put the issues neatly into policy buckets like pollsters do. What they are concerned about is the cost and affordability of health care, a concern that cuts across these issues. These ranking questions provide some information about what resonates most with the public, but that doesn't mean they should be treated as a rank-ordered list for policymakers to address starting from the top down. In addition, beyond telling you what the public thinks, polls can be just as useful for pointing out what the public doesn't understand about a given policy issue, allowing you to direct outreach and education efforts or figure out messaging that will resonate with the public if you are advocating for a policy change.
The science of survey research is complicated, but there are a few simple terms you can learn and questions you can ask when you encounter polls in your schooling and daily life. These include:
Population. Who is the population that the survey is claiming to represent? Polls can be conducted with many different populations, so it is important to know how researchers define the population under study. For example, a survey of voters may be useful for your understanding of a particular health care issue's importance in the election, but it might not be as useful for estimating how many people have had problems paying medical bills, since lower-income people (who may be the most likely to experience bill problems) are less likely to be voters and may be left out of the study entirely.
Sampling. How did researchers reach the participants for their poll, and was it a probability or non-probability sample? In a probability-based sample, all individuals in the population under study have a known chance of being included in the survey. Such samples allow researchers to provide population estimates (within a margin of sampling error) based on a small sample of responses from that population. Examples of probability-based sampling techniques include random digit dialing (RDD), address-based sampling (ABS), registration-based sampling (RBS), and probability-based online panels. Non-probability sampling, sometimes called convenience or opt-in sampling, has become increasingly common in recent years. While non-probability surveys have some advantages for some types of studies (particularly their much lower cost), research has shown that results obtained from non-probability samples generally have greater error than those obtained from probability-based methods, particularly for certain populations.
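The key property of a probability sample described above can be illustrated with a toy simulation. This is a minimal sketch with invented figures (a hypothetical population in which 30% have had problems paying medical bills), not a real polling design: when every member of the population has a known, equal chance of selection, the sample proportion estimates the population proportion within sampling error.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical population of 100,000 people; 30% (invented figure)
# have had problems paying medical bills.
population = [1] * 30_000 + [0] * 70_000

# A simple random sample gives every person the same known chance
# of selection -- the defining feature of a probability sample.
sample = random.sample(population, 1_000)
estimate = sum(sample) / len(sample)
print(estimate)  # close to 0.30, within sampling error
```

A convenience (non-probability) sample, by contrast, offers no such guarantee: if people with bill problems are more or less likely to opt in, the estimate can be biased in ways that drawing a larger sample does not fix.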
Data collection (survey mode). While there are many ways to design a survey sample, there are also many ways to collect the data, known as the survey mode. For many years, telephone surveys were considered the gold standard because they combined a probability-based sampling design with a live interviewer. Survey methodology is more complicated now, but it is still important to know whether the data was collected via telephone, online, on paper, or some other way. If phones were used, were responses collected by human interviewers or by an automatic system, sometimes known as interactive voice response (IVR) or a "robocall"? Or were responses collected via text message? Depending on the population represented, different approaches might make the most sense. For example, about 5% of adults in the U.S. are not online, and many others are less comfortable responding to survey questions on a computer or internet-connected device. While young adults may be comfortable responding to a survey via text message, many older adults still prefer to take surveys over the phone with a live interviewer. Some populations feel a greater sense of privacy when taking surveys on paper, while literacy challenges may make a phone survey more appropriate for other populations. Many researchers now combine multiple data collection modes in a single survey to make sure these different segments of the population can be represented.
Language. Was the survey conducted only in English, or were other languages offered? If the survey is attempting to represent a population with lower levels of English language proficiency, this may affect your confidence in the results.
Survey sponsor. Who conducted the survey and who paid for it? Understanding whether there is a political agenda, special interest, or business behind the poll could help you better determine the poll's purpose as well as its credibility.
Timing. When was the survey conducted? If key events related to the survey topic occurred while the survey was in the field (e.g., an election or a major Supreme Court decision), that might have implications for your interpretation of the results.
Data quality checks. During and after data collection, what data quality checks were implemented to ensure the quality of the results? Most online surveys include special "attention check" questions designed to identify respondents who may have fabricated responses or rushed through the survey without paying attention to the questions being asked. Inclusion of these questions is a good sign that the researchers were following best practices for data collection.
Weighting. Were the results weighted to known population parameters such as age, race and ethnicity, education, and gender? Despite best efforts to draw a representative sample, all surveys are subject to what is known as "non-response bias," which results from the fact that some types of people are more likely to respond to surveys than others. Even the best sampling approaches usually fall short of reaching a representative sample, so researchers apply weighting adjustments to correct for these types of biases in the sample. When reading a survey methodology statement, it should be clear whether the data was weighted, and what source was used for the weighting targets (usually a survey from the Census or another high-quality, representative survey).
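The basic logic of weighting can be sketched with a simple post-stratification example on a single variable. The sample composition and population targets below are invented for illustration (real weighting typically adjusts on several variables at once, often via raking):

```python
from collections import Counter

# Hypothetical sample of 1,000 respondents that over-represents
# college graduates (figures invented for illustration).
respondents = ["college"] * 600 + ["no_college"] * 400

# Known population shares, e.g. from a Census survey (also invented).
population_targets = {"college": 0.38, "no_college": 0.62}

counts = Counter(respondents)
sample_shares = {k: counts[k] / len(respondents) for k in counts}

# Post-stratification weight = population share / sample share:
# over-represented groups are weighted down, under-represented up.
weights = {k: population_targets[k] / sample_shares[k]
           for k in population_targets}
print(weights)  # college weight below 1, no_college weight above 1
```

After weighting, each group contributes to survey estimates in proportion to its share of the population rather than its share of the sample.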
Sample size and margin of sampling error. The sample size of a survey (sometimes referred to as the N) is the number of respondents who were interviewed, and the margin of sampling error (MOSE) is a measure of uncertainty around the survey's results, usually expressed in terms of percentage points. For example, if the survey finds 25% of respondents give a certain answer and the MOSE is plus or minus 3 percentage points, this means that if the survey were repeated 100 times with different samples, the result could be expected to be between 22%-28% in 95 of those samples. In general, a sample size of 1,000 respondents yields a MOSE of about 3 percentage points, while smaller sample sizes result in larger MOSEs and vice versa. Weighting can also affect the MOSE. When reading poll results, it is helpful to look at the N and MOSE not only for the total population surveyed, but for any key subgroups reported. This can help you better understand the level of uncertainty around a given survey estimate. The non-random nature of non-probability surveys makes it inappropriate to calculate a MOSE for these types of polls. Some researchers publish confidence estimates, sometimes called "credibility intervals," to mimic MOSE as a measure of uncertainty, but they are not the same as a margin of sampling error. It's also important to note that sampling error is only one source of error in any poll.
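The relationship between sample size and MOSE can be sketched with the textbook formula for a simple random sample. This is a simplification: real polls also account for weighting and design effects, which typically widen the margin somewhat.

```python
import math

def margin_of_sampling_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a
    simple random sample of size n at proportion p (p=0.5 gives
    the widest, most conservative margin)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A sample of 1,000 respondents yields a MOSE of about 3 points...
print(round(margin_of_sampling_error(1_000), 1))  # ~3.1

# ...while a subgroup of 250 carries roughly double the uncertainty.
print(round(margin_of_sampling_error(250), 1))    # ~6.2
```

The square root in the formula is why subgroup estimates are so much noisier: quadrupling the sample size only halves the margin of error.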
Questionnaire. Responses to survey questions can differ greatly based on how the question was phrased and what answer choices were offered, so paying attention to these details is important when evaluating a survey result. Read the question wording and ask yourself - do the answer options seem balanced? Does the question seem to be leading respondents toward a particular answer choice? If the question is on a topic that is less familiar to people, did the question explicitly offer respondents the chance to say they don't know or are unsure how to answer? If the full questionnaire is available, it can be helpful to look at the questions that came before the question of interest, as information provided in these questions might "prime" respondents to answer in a certain way.
Transparency. There is no "gold seal" of approval for high-quality survey methods. However, in recent years, there has been an increasing focus on how transparent survey organizations are about their methods. The most transparent researchers will release a detailed methodology statement with each poll that answers the questions above, as well as the full questionnaire showing each question in the survey in the order they were asked. If you see a poll released with a one- or two-sentence methodology statement and can't find any additional information, that may indicate that the survey organization is not being transparent with its methods. The American Association for Public Opinion Research has a Transparency Initiative whose members agree to release a standard set of information about all of their surveys. For political polling, 538 recently added transparency as an element of their pollster ratings. Some news organizations also "vet" polls for transparency before reporting results, but many do not. This means that just because a poll or survey is reported in the news doesn't necessarily mean it's reliable. It's always a good idea to hunt down the original survey report and see if you can find answers to at least some of the questions above before making judgments about the credibility of a poll.
Election polling vs. issue polling. Election polls - those designed at least in part to help predict the outcome of an election - are covered frequently in the media, and election outcomes are often used by journalists and pundits to comment on the accuracy of polling. Issue polls - those designed to understand the public's views, experiences, and knowledge on different issues - differ from election polls in several important ways. Perhaps the most important difference is that, in addition to the methodological challenges noted above, election polls face the added challenge of predicting who will turn out to vote on election day. Most election polls include questions designed to help with this prediction, and several questions may be combined to create a "likely voter" model, but events or other factors may affect individual voter turnout in ways pollsters can't anticipate. Election polls conducted months, weeks, or even days before the election also face the risk that voters will change their mind about how to vote between the time they answer the survey and when they fill out their actual ballot. Issue polls do not generally face these challenges, so it's important to keep in mind that criticisms about the accuracy of election polls may not always apply to other types of polls.
The Affordable Care Act (ACA) is the largest health legislation enacted in the 21st century. From the time the legislation was being debated in Congress through its passage, implementation, and efforts to repeal it, the ACA has been the subject of media coverage, political debate, campaign rhetoric, and advertising. In each of those stages, polls and surveys have provided important information for understanding what was happening with the law.
Prior to passage, polls showed the public's desire for change in health care, particularly when it came to decreasing the uninsured rate and making health care and insurance more affordable. Despite this apparent consensus on the need for change, polls also helped shed light on some of the barriers to passing legislation. For example, survey trends demonstrated how the share of the public who expected health reform legislation to leave their families worse off increased over the course of an increasingly public debate in which opponents tapped into fears about how the proposed law might change the status quo.
After the law was passed, public opinion on the ACA was sharply divided along partisan lines, with majorities of Democrats viewing the law favorably and majorities of Republicans having an unfavorable view. However, surveys also painted a more nuanced picture beyond the overall partisanship, showing that majorities of U.S. adults across partisan lines favored many of the things the ACA did, including allowing young adults to stay on their parents' insurance until age 26, preventing health plans from charging sick people more than healthy people, and providing financial subsidies to help lower- and moderate-income adults purchase coverage. At the same time, polls showed that many adults were not aware that these provisions were part of the ACA, and that many others incorrectly believed the law did things it did not, such as creating a government-run insurance plan and allowing undocumented immigrants to receive government financial help to purchase coverage.
This combination of "the parts more popular than the whole" and incomplete public knowledge of the law provided some insight into why efforts to repeal the law were ultimately unsuccessful despite the relative unpopularity and deep partisan divisions on the law overall. When faced with the very real prospect of the popular parts of the law going away - particularly the protections for people with pre-existing health conditions - the public (and particularly Democrats and independents who had previously expressed lukewarm support) rallied to protect it. In fact, following concerted Republican efforts to repeal the law in 2017, the ACA has remained more popular than ever, with more adults expressing a favorable than an unfavorable opinion.
In addition to providing information about the public's evolving opinion and awareness of the law, surveys also helped provide information about people's experiences under the law. For example, a 2014 survey of people who purchase their own insurance found that 6 in 10 people enrolled in insurance through the new marketplaces were previously uninsured, and that most of this group said they decided to purchase insurance because of the ACA. Subsequent surveys showed that most marketplace enrollees were satisfied with their plans, but many reported challenges related to the affordability of coverage and care.
These are just a few examples of the ways surveys helped provide insights into the dynamics of a complex health policy at different points in time.
Another health policy issue where polls have provided useful information is the debate over a national, single-payer health plan. While the idea has been discussed for decades, public discussion was prominent most recently during the 2016 and 2020 Democratic presidential primaries, when Senator Bernie Sanders made "Medicare-for-all" a centerpiece of his campaign. Since 2017, a majority of U.S. adults have supported the idea of a national Medicare-for-all plan, but once again, polls also indicated why such a proposal had never become a political reality. For example, the public's reaction to the idea varies considerably based on the language used to describe it; while majorities view the terms "universal coverage" and "Medicare-for-all" positively, most have a negative reaction to "socialized medicine," and many are unsure how they feel about the term "single-payer health insurance." Surveys also demonstrate that while support starts out high, many people say they would oppose a Medicare-for-all plan if they heard common arguments made by opponents, such as that it would lead to delays in treatments, threaten the current Medicare program, or increase taxes. Polls like these and others that test different messages can help shed light on the public's likely reaction to real-world debates over policies, helping us understand some of the reasons why certain policies that seem to attract majority support in the abstract face an uphill battle once public debate and discussion about them begin.
Polls can also shed light when sudden events produce policy changes that immediately affect individuals' access to health care. A recent example is the Supreme Court's 2022 decision in Dobbs v. Jackson, which overturned Roe v. Wade and eliminated the nationwide right to abortion that had been in place since 1973. The Dobbs decision opened the door for states to pass their own abortion regulations, and many states had previously established "trigger laws" that made abortion illegal as soon as Roe was overturned.
Polls before and after the 2022 midterm election indicated how the overturn of Roe affected voter motivation, turnout, and vote choice. For example, polling in October 2022 showed abortion increasing as a motivating issue for voters, particularly among Democrats and those living in states where abortion was newly illegal. And election polling of voters showed how the Supreme Court decision played a key role in motivating turnout among key voting blocs that likely contributed to the Democratic party's stronger-than-expected performance in the midterms.
Understanding the impact of Dobbs is an area where polling of specific populations (including grouping individuals by the abortion laws in their state) is more useful than looking at the U.S. population as a whole. For example, in addition to shedding light on the dynamics of abortion as an election issue, polling in 2023 indicated widespread confusion about the legality of medication abortion, particularly among people living in states that had banned or severely limited the procedure. Surveys also shed light on the experiences of people living in different states; for example, a 2024 survey found that 1 in 5 women of reproductive age (18-49) living in states with abortion bans said either they or someone they know had difficulty accessing an abortion since the Supreme Court overturned Roe v. Wade due to restrictions in their state.
Well-designed surveys of under-represented groups can provide important information about health policy by amplifying the opinions and experiences of those whose voices are often left out of policy debates. Examples include:
Brodie, M., Hamel, L., & Kirzinger, A., The Role of Public Opinion Polls in Health Policy. In Altman, Drew (Editor), Health Policy 101, (KFF, July 2024) https://www.kff.org/health-policy-101-the-role-of-public-opinion-polls-in-health-policy (date accessed).