American University

10/15/2024 | News release

AI DebunkBot Effectively Dissuades People from Believing Conspiracy Theories

Newspaper headline reads 'Fake News.' Image created with generative AI

Conspiracy theories are spreading on an unprecedented scale across the Internet and social media, including outlandish stories about alien abductions, government assassination plots, and politicians who can "geoengineer" the weather. More and more, these stories are weaponized for political purposes, leading many experts to conclude they are a real threat to democracy.

But now there might be hope. American University Psychology Professor Thomas Costello is part of a Massachusetts Institute of Technology (MIT) team of scientists who have created DebunkBot, an artificial intelligence bot that chats pleasantly with users while respectfully and factually debunking their conspiracy beliefs.

DebunkBot has a strong track record of persuasion. That is no easy feat, says Costello, who points out that people who believe in conspiracy theories are "really hard to persuade and don't often change their minds." DebunkBot has captured the imaginations of scientists, politicians, and journalists, as well as lots of ordinary people who have visited the site to test its effectiveness at debunking their favorite conspiratorial beliefs.

So, in this era of rampant misinformation and conspiracy theories, can AI bots like DebunkBot make a difference? In this interview, Costello answers questions about AI, human nature, and the battle over the truth.

Can you tell us a bit about DebunkBot and what it's designed to do?

DebunkBot is based on a research study that was published in Science. We used GPT-4 Turbo, which at the time was OpenAI's most advanced large language model, to engage more than 2,000 conspiracy believers in personalized, evidence-based discussions. Participants were asked to describe a conspiracy theory they believed in, using their own words, along with evidence supporting their belief.

GPT-4 Turbo then used this information to persuade users that their beliefs were untrue, adapting its strategy for each participant's unique arguments and evidence. These conversations, lasting an average of 8.4 minutes, allowed the AI to directly address and refute specific evidence supporting each person's conspiratorial beliefs, an approach that was impossible to test at scale prior to the technology's development.
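For readers curious about the mechanics, here is a minimal sketch of how such a personalized, multi-turn debunking exchange could be wired up with the OpenAI chat API. The model name, system prompt, and helper function are illustrative assumptions, not the study's actual implementation.

```python
# A minimal sketch of the personalized debunking loop described above, using
# the OpenAI Python SDK. Everything here (model name, prompt wording, helper
# names) is an illustrative assumption, not the study's actual code.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a respectful, factual assistant. The user believes a conspiracy "
    "theory. Address their specific claims and evidence point by point, and "
    "persuade them politely using accurate, verifiable information."
)

def next_reply(messages: list[dict]) -> str:
    """Send the running conversation to the model and record its reply."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # stand-in for the GPT-4 Turbo model the study used
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Seed the dialogue with the participant's own statement of the theory and
# their supporting evidence, so every rebuttal is tailored to that person.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": "Belief: <participant's conspiracy theory, in their words>\n"
                   "Evidence: <the evidence they cite for it>",
    },
]
print(next_reply(messages))

# Each participant follow-up is appended and answered the same way, producing
# the short multi-turn exchanges (about 8 minutes on average) described above.
messages.append({"role": "user", "content": "<participant's follow-up>"})
print(next_reply(messages))
```

The key design point is simply that the participant's own words stay in the message history, so each model reply can rebut that person's specific evidence rather than a generic version of the theory.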

How did things turn out?