American University

09/12/2024 | News release

‘Even the deepest of rabbit holes may have an exit’

"They're so far down the rabbit hole of conspiracy theories that they're lost for good" is a common assumption about conspiracy theorists. That widely held notion is now crumbling.

In a pathbreaking study, researchers from American University, the Massachusetts Institute of Technology, and Cornell University show that conspiracy theorists changed their views after short conversations with artificial intelligence. Participants who believed some of the most deeply entrenched conspiracy theories, including those about the COVID-19 pandemic and fraud in the 2020 U.S. presidential election, showed large and lasting reductions in conspiracy belief following the conversations.

Stoked by political polarization and fed by misinformation and social media, conspiracy theories are a major issue of public concern. They often drive a wedge between believers and their friends and family members. YouGov survey results from last December show that large shares of Americans believe various conspiratorial falsehoods.

The findings challenge a widespread view in psychology: that conspiracy theorists cling to their beliefs because those beliefs are central to their identities and resonate with underlying drives and motivations, says Thomas Costello, assistant professor of psychology at American University and lead author of the new study, published in the journal Science. Accordingly, most approaches have focused on preventing people from believing conspiracies in the first place.

"Many conspiracy believers were indeed willing to update their views when presented with compelling counterevidence," Costello said. "I was quite surprised at first, but reading through the conversations made much me less skeptical. The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation -- and was also adept at being amiable and building rapport with the participants."

More than 2,000 self-identified conspiracy believers participated in the study. On average, the AI conversations reduced participants' belief in their chosen conspiracy theory by about 20 percent, and about one in four participants, all of whom believed the conspiracy beforehand, disavowed it after the conversation.

Until now, delivering persuasive, factual messages to a large sample of conspiracy theorists in a lab experiment has proved challenging. For one, conspiracy theorists are often highly knowledgeable about the conspiracy, frequently more so than skeptics. Conspiracies also vary widely, so the evidence backing a particular theory can differ from one believer to another.

AI as an intervention

The new study comes as society debates the promise and peril of AI. The large language models driving generative AI are powerful reservoirs of knowledge. The researchers emphasize that the study demonstrates one way those reservoirs can be used for good: helping people hold more accurate beliefs. Because artificial intelligence can draw connections across diverse topics within seconds, it can tailor counterarguments to a believer's specific conspiracy in ways that a human cannot.

"Previous efforts to debunk dubious beliefs have a major limitation: One needs to guess what people's actual beliefs are in order to debunk them - not a simple task," said Gordon Pennycook, associate professor of psychology at Cornell University and a paper co-author. "In contrast, the AI can respond directly to people's specific arguments using strong counterevidence. This provides a unique opportunity to test just how responsive people are to counterevidence."

Researchers designed the chatbot to be highly persuasive and to engage participants in such tailored dialogues. GPT-4, the AI model powering ChatGPT, provided factual rebuttals to participants' conspiratorial claims. In two separate experiments, participants were asked to describe a conspiracy theory they believed in and to provide supporting evidence. Participants then conversed with the AI, whose goal was to challenge their beliefs by addressing that specific evidence. In a control group, participants discussed an unrelated topic with the AI.

To tailor the conversations, researchers provided the AI with each participant's initial statement of belief and the rationale behind it. This setup allowed for a more natural dialogue, with the AI directly addressing a participant's claims. The conversations averaged 8.4 minutes and involved three rounds of interaction, excluding the initial setup. Both experiments showed a reduction in participants' belief in conspiracy theories, and when the researchers assessed participants two months later, the effect persisted.
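For readers who want a concrete picture of the procedure, the sketch below shows how a tailored, multi-round dialogue of this kind could be set up with OpenAI's Python SDK. The system prompt wording, the function name run_debunking_dialogue, and the console-based reply loop are illustrative assumptions; the article confirms GPT-4 and three rounds of interaction, but the study's actual prompts and pipeline are not reproduced here.

```python
# Minimal sketch of a tailored, multi-round debunking dialogue using
# OpenAI's Python SDK. Prompt wording and loop structure are illustrative
# assumptions, not the study's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def run_debunking_dialogue(belief: str, rationale: str, rounds: int = 3) -> list[str]:
    """Hold a short conversation that responds to one participant's stated belief."""
    messages = [
        {
            "role": "system",
            "content": (
                "You are a factual, friendly assistant. The participant believes: "
                f"{belief}\nTheir stated reasons: {rationale}\n"
                "Respond to their specific evidence with accurate, detailed "
                "counterevidence while staying amiable and building rapport."
            ),
        },
        {"role": "user", "content": rationale},
    ]
    replies: list[str] = []
    for _ in range(rounds):
        response = client.chat.completions.create(model="gpt-4", messages=messages)
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        # In the study, the participant typed a response at this point;
        # a console prompt keeps this sketch self-contained and runnable.
        messages.append({"role": "user", "content": input("Your reply: ")})
    return replies
```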

While the results are promising and suggest a future in which AI, used responsibly, can play a role in diminishing conspiracy belief, further studies will be needed on long-term effects, on different AI models, and on practical applications outside the laboratory.

"Although much ink has been spilled over the potential for generative AI to supercharge disinformation, our study shows that it can also be part of the solution," said David Rand, a paper co-author and MIT Sloan School of Management professor. "Large language models like GPT4 have the potential to counter conspiracies at a massive scale."

Members of the public interested in this ongoing work can visit a website and try out the intervention for themselves.