George Washington University

New Study's Findings Can Help Communicators Correct Online Misinformation

David Broniatowski of GW's School of Engineering and Applied Science was the paper's lead author.
August 13, 2024

Misinformation online is a perennial concern, particularly in a presidential election year. Part of what can make online misinformation so effective is the clarity and relative simplicity with which it is conveyed. A research team led by George Washington University researchers has shown that similar pithiness is necessary to persuasively correct misinformation and slow its spread.

Whether information is accurate or false, it is more persuasive when its bottom-line message is clear and simple, according to a new study in the Journal of Experimental Psychology: Applied. The study's lead author, David Broniatowski, is an associate professor of engineering management and systems engineering at GW.

Broniatowski and his team found that people are more likely to share misinformation if it is easy to understand and conveys a clear, simple message in a nutshell. However, the team also found that accurate information, conveyed in a similarly clear, simple way, can effectively deter people from sharing misinformation.

The researchers said their findings could help public health agencies and other expert communities respond more effectively to false information online. For communicators, the key is to clearly and simply convey a message in ways that allow audiences to grasp the bottom-line gist of the information presented. The gist of a meaningful message should be neither too simple ("This is false") nor too detailed, as in a decontextualized list of facts.

"These findings matter because they highlight practical ways to combat misinformation online," Broniatowski said. "By focusing on simple, yet insightful, explanations that align with people's values, we can more effectively reduce the spread of false information. This approach can improve public understanding and trust in accurate information across any number of topics."

The research team applied a psychological theory of cognition called fuzzy-trace theory to the sharing of misinformation online. The theory posits that when making decisions, people rely on simple, insightful, bottom-line meanings, or gists, rather than detailed, verbatim information. The researchers conducted two correlational studies and two controlled experiments. The correlational studies examined public data sets on Facebook to understand why certain false messages were shared, and the experiments tested the effectiveness of gist-based interventions in reducing the sharing and/or endorsement of misinformation.

The researchers found that people are more likely to share misinformation if it is easy to understand and conveys a simple yet insightful message: essentially, if the message explains the gist of the information. This reflects people's preference for straightforward, concise information that connects with their values and beliefs.

The same principle, the researchers found, applies to responding to misinformation. Commentary that conveys a simple message explaining why misinformation is false is more likely to deter someone from sharing it. By contrast, messages that are too simple (e.g., "This is false") or too detailed (e.g., a list of facts that leaves people to "draw their own conclusions") are less likely to prevent someone from sharing misinformation.

The study is one of the first to systematically test the effectiveness of gist-based interventions in reducing the sharing and endorsement of misinformation, researchers said.

The authors suggested these findings can help scientific communicators and other experts convey more effectively why misinformation is false.

Other authors on the paper, "The Role of Mental Representation in Sharing Misinformation Online," include Ethan Porter, associate professor of media and public affairs and of political science; Pedram Hosseini, a Ph.D. student in GW's Department of Computer Science; and Thomas Wood, associate professor of political science at the Ohio State University. The research was supported by the Social Science Research Foundation, the John S. and James L. Knight Foundation, the National Institute of Standards and Technology, the National Science Foundation, and the NIST-NSF Institute for Trustworthy AI in Law and Society.
