Université de Montpellier

12/10/2024 | News release | Archived content

[LUM#22] When AI passes, fake news dies


[LUM Magazine, Podcast] - Published on December 10, 2024 in Science-Society

Covid, vaccination, global warming... Scientific topics are omnipresent in major debates, especially on social networks.
How can we sort the wheat from the chaff among all these assertions? At Lirmm, artificial intelligence provides the answer.

Far from being confined to specialist journals, science now infuses tweets, posts and comments of all kinds. "Everyone is talking about it, to give weight to arguments, or as a reaction to anxiety in society," explains Konstantin Todorov, researcher at the Montpellier Laboratory of Computer Science, Robotics and Microelectronics1.

One observation leads to another: scientific facts are often presented in a simplified, decontextualized and misleading way. "We observed this phenomenon a great deal during the Covid-19 epidemic, when numerous pseudo-scientific claims were circulating on the web, spreading bias and misinformation," recalls Sandra Bringay, a researcher at Lirmm. "The mechanisms inherent in online platforms mean that controversial or false statements generate more interaction and interest," adds the specialist.

Combating misinformation

So how can we combat misinformation and improve understanding of complex scientific issues? For the two researchers, the answer lies in artificial intelligence. Together with Salim Hafid, a PhD student at Lirmm, they are proposing a hybrid AI approach dedicated to the interpretation of online scientific discourse. Their objective: to detect and classify scientific assertions in data from social networks.

As part of the Franco-German AI4Sci project, they have access to a huge database archiving all the tweets posted on Twitter, the predecessor of X, "a huge corpus to which we were able to gain access thanks to a collaboration with the German laboratory Gesis, a partner in the project," explains Konstantin Todorov. The computer scientists used this data for machine learning, drawing on so-called large language models, which make it possible to associate concepts with text.
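To give a concrete idea of the kind of task involved, here is a minimal sketch of asking a general-purpose language model whether a tweet contains a scientific claim. It uses the Hugging Face transformers zero-shot pipeline with a publicly available NLI model; the model choice, labels and example tweets are illustrative assumptions, not the actual AI4Sci setup.

```python
# Minimal sketch: label tweets as scientific claim / opinion / other
# using zero-shot classification. Illustrative only, not the project's model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

candidate_labels = ["scientific claim", "personal opinion", "other"]

tweets = [
    "A new study shows the vaccine reduces transmission by 40% (see the published trial).",
    "I just feel like the weather has been weird lately.",
]

for tweet in tweets:
    result = classifier(tweet, candidate_labels)
    top_label, score = result["labels"][0], result["scores"][0]
    print(f"{top_label:>17} ({score:.2f})  {tweet}")
```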

Setting the standard

"The idea is to teach the machine to recognize a scientific assertion, by checking, for example, whether there are references, publications, certain word combinations, the quality of the source... And to put it back into the accompanying media and scientific context", explains Sandra Bringay.
And to check whether these assertions are true or false? Konstantin Todorov replies: "In public discourse, what counts more than knowing whether the information is true, is better understanding how it is used. The aim is to move towards tools that give users flags, in other words reference points, to facilitate good reading practices."
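As a toy illustration of what such "flags" might look like, the sketch below attaches a few plausible surface cues (links, publication references, quantified wording, hedged verbs) to a post. These heuristics are assumptions made for the example, not the features actually used by the Lirmm team.

```python
# Toy example: attach simple reading cues ("flags") to a social-media post.
# The heuristics below are illustrative assumptions, not the project's method.
import re

def claim_flags(text: str) -> dict:
    """Return simple surface cues a reader could use as reference points."""
    return {
        "cites_publication": bool(re.search(r"(doi\.org|arxiv\.org|pubmed)", text, re.I)),
        "contains_link": bool(re.search(r"https?://\S+", text)),
        "uses_numbers": bool(re.search(r"\d+(\.\d+)?\s*%?", text)),
        "hedged_wording": bool(re.search(r"\b(may|might|suggests|appears)\b", text, re.I)),
    }

print(claim_flags("A preprint on arxiv.org suggests a 12% drop in cases."))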

On this project, the researchers are collaborating with sociologists and journalists with a broader objective: to combat strategies of manipulation and help create a healthy public and democratic discourse.

UM podcasts are now available on your favorite platforms (Spotify, Deezer, Apple podcasts, Amazon Music...).

  1. Lirmm (UM, CNRS, Inria, UPVD, UPVM)

Keywords: AI, Artificial intelligence, LUM#22, LumLu, LUM Magazine, Research, Science, Society