Northwestern University

11/28/2024 | Press release

Misinformation exploits outrage to spread online, new Northwestern study suggests

Mitigation strategies have assumed that people want to interact with and share accurate information online. But that's not always true, researcher says

Media Information

  • Embargo date: November 24, 2024 7:00 PM CT
  • Release Date: November 28, 2024

Media Contacts

Shanice Harris

Journal: Science

Key Findings

  • Misinformation evokes more outrage than trustworthy news
  • Outrage aids the spread of misinformation as strongly as it aids trustworthy news
  • Users are more likely to share outrage-evoking misinformation without reading it first

EVANSTON, Ill. --- A new study from Northwestern University and Princeton University has found that misinformation - politically motivated or otherwise - evokes moral outrage at a notably higher rate than factual, trustworthy information. Once triggered, moral outrage can blind people to red flags that might otherwise make them think twice before sharing.

"When misinformation evokes outrage, people are significantly more likely to share the article without actually clicking into it and reading it," said William Brady, co-lead study author and assistant professor of management and organizations at Northwestern's Kellogg School of Management. "There's a more impulsive or automatic reaction that we have when something elicits outrage in us. The problem is that when people are put into an outrage state from reading political news, they will share the misinformation at a higher rate."

The study will be published Thursday, Nov. 28 in the journal Science.

The study found that bad actors are playing on Americans' heightened feelings about deeply held beliefs. One such misinformation campaign, created by a Russian organization called the Internet Research Agency (IRA), was designed to sow disinformation and discord among Americans during the 2016 and 2020 election seasons.

"Many of the links were clearly aimed to evoke outrage," said Killian McLoughlin, co-lead study author and Ph.D. student at Princeton University. "One of them was misinformation in the form of an ad targeting Republicans that called immigrants parasites, stating 'about 20 million parasites live in the U.S. illegally. They exploit Americans and give nothing in return. Isn't it time to get rid of parasites that are destroying our country?'"

IRA articles on Twitter and Facebook from January 2017 to July 2017 produced 9,026 links and 3,329 tweets from 1,656 users. From August 2020 to February 2021, IRA domains on those same platforms produced 192,108 links and 10,550 tweets from 5,236 users.

"Our research shows that a lot of the misinformation that spreads the most widely is specifically the kind of information that is eliciting outrage," McLoughlin said. "And when you're outraged, you might be focused on sharing information that confirms the biases of your group rather than whether the information is accurate."

Brady said current misinformation research assumes that people are motivated to engage with accurate information, so proposed solutions focus on getting people to think about accuracy.

"When we are having discussions on how to combat misinformation, we need to talk about solutions that could still be effective against the most outrage-inducing misinformation," he said. "As a part of existing efforts to curb misinformation, social media content moderation teams should also be asking 'what is the potential of this information to evoke outrage?'"

A general negativity bias does not fully explain the researchers' findings - outrage seems to be special, Brady said. "In our data, misinformation reliably evoked more outrage than negative emotion in general," he said. "This makes sense on the supply side: outrage is most directly linked to political conflict and can sow division in an information ecosystem. But it also makes sense on the demand side: when people share outrage against political outgroups, it can increase their reputation in their political ingroup."

Brady also said that because outrage-inducing content often gets more engagement than other content, creators are incentivized to produce and distribute more of it.

"Likes and shares teach people to express more outrage over time," he said. "Misinformation exploits our attraction to outrage - hitchhiking on our natural tendency to share it. Because our tendency to share outrage is so heavily reinforced, we might even share it out of habit, without stopping to read the article first. This process facilitates the spread of misinformation because people end up paying less attention to the accuracy of what they share."

The researchers next hope to isolate the effect that social media algorithms have on the consumption of misinformation, specifically how algorithms amplify the spread of toxic and conflictual content.

"The debate over the role of algorithms in promoting toxic content tends to be black and white," Brady said. "The truth is that both algorithms and human psychology are to blame. When our psychological biases interact with algorithms that amplify our biases, we end up with information ecosystems that overrepresent toxic and outrageous content."

The study is titled "Misinformation exploits outrage to spread online." In addition to Brady, other authors include Killian L. McLoughlin, Ben Kaiser and M.J. Crockett of Princeton University; Aden Goolsbee of Yale University; and Kate Klonick of St. John's University.