NUS - National University of Singapore

NUS researchers develop innovative approaches to tackle false information on multiple fronts

02 October 2024 | 14:20 Asia/Singapore

As internet usage becomes an integral part of our daily lives, many people rely on various online sources for information. While the internet offers greater convenience and a wider range of news sources, the spread of false information has become one of the biggest challenges of this century, exacerbated by the rise of generative artificial intelligence. False information - whether in the form of mis-, dis- or mal-information (MDM) - can lead individuals and organisations to make harmful decisions, and has been shown to create societal divisions on critical and contentious issues.

A research team comprising members from various faculties, including the Faculty of Arts and Social Sciences, NUS Business School, School of Computing, College of Design and Engineering, Faculty of Law, and Lee Kuan Yew School of Public Policy, is addressing the issue of false information head-on through a programme known as Information Gyroscope (iGyro). This comprehensive five-year research initiative seeks to identify and address vulnerabilities in the digital information pipeline, develop strategies to enhance digital resilience among online users, and promote behaviours that encourage engagement with trustworthy information. Led by Professor Chen Tsuhan, the interdisciplinary team of 40 researchers is committed to understanding and shaping the evolving digital information landscape.

"In aviation, a gyroscope provides stability and orientation guidance to maintain accurate control of an aircraft. Similarly, iGyro showcases our team's efforts to maintain stability in the face of a changing and chaotic information landscape. It also symbolises the interdisciplinary nature of the team, with expertise spanning disciplines such as social science, computer science, engineering, and law," said Prof Chen.

Adopting a holistic three-layered framework, the iGyro team places understanding and shaping human behaviour at the core of its research. The next layer of the framework covers the technology domain, which aims to understand the different stages of the digital information pipeline, from creation to dissemination to consumption. Finally, the outermost layer studies the potential impact of mitigation strategies, as well as the role of regulations and policies in deploying these strategies.

The iGyro team published a journal article in Digital Government: Research and Practice on 23 August 2024, explaining how the three-layered framework has been applied to examine the lifecycle of content created by generative artificial intelligence, from creation to consumption. Placing a strong emphasis on human behaviour, the team highlighted vulnerabilities and advocated for adaptive, evidence-based policies to enhance information integrity and public trust in digital ecosystems.

Since its inception in 2023, the iGyro team has also made encouraging progress in developing tools to combat the spread of false information.

SNIFFER: A multimodal large language model to detect misinformation

Out-of-context misinformation, where authentic images are paired with false text that misrepresents them, is one of the easiest and most effective ways to spread false information and mislead audiences. However, current detection technologies lack convincing explanations for their judgements, and such explanations are essential for debunking misinformation.

To tackle out-of-context misinformation, a team led by iGyro principal investigators Professor Wynne Hsu and Professor Lee Mong Li, who are from the NUS School of Computing, developed SNIFFER, a novel Multimodal Large Language Model (MLLM) designed to detect and explain out-of-context misinformation in images and captions.

SNIFFER uses a specialised artificial intelligence (AI) model to conduct a two-pronged analysis. The first step is an internal check for consistency between the image and the caption. The second step draws on external sources to examine whether the context of the image matches the provided caption. SNIFFER then combines the results of these two steps to reach a final judgement on the authenticity of the image-caption pair, together with an explanation of why the pair is or is not misleading.
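
The two-step analysis can be pictured as a simple pipeline. The sketch below is only a minimal illustration of that idea, not the SNIFFER implementation; the query_mllm and retrieve_web_context helpers, the prompts, and the decision rule are hypothetical placeholders.

```python
# Minimal illustrative sketch of a two-step out-of-context check.
# NOT the SNIFFER implementation: query_mllm and retrieve_web_context are
# hypothetical stand-ins for a multimodal LLM and a reverse-image/web search.

from dataclasses import dataclass


@dataclass
class Verdict:
    misleading: bool
    explanation: str


def query_mllm(prompt: str, image_path: str) -> str:
    # Placeholder: a real system would send the image and prompt to an MLLM.
    return "consistent"


def retrieve_web_context(image_path: str) -> str:
    # Placeholder: a real system would retrieve the image's original context
    # (e.g. where and when it was first published) from external sources.
    return "no external context found"


def check_pair(image_path: str, caption: str) -> Verdict:
    # Step 1: internal check -- does the caption describe what the image shows?
    internal = query_mllm(
        f"Does this caption accurately describe the image? Caption: {caption}",
        image_path,
    )

    # Step 2: external check -- is the caption consistent with retrieved
    # context about the image's original source and setting?
    context = retrieve_web_context(image_path)
    external = query_mllm(
        f"External context: {context}. Is the caption consistent with it? "
        f"Caption: {caption}",
        image_path,
    )

    # Combine both judgements into a final verdict plus an explanation.
    misleading = "inconsistent" in internal or "inconsistent" in external
    return Verdict(
        misleading=misleading,
        explanation=f"Internal check: {internal}. External check: {external}.",
    )
```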

SNIFFER has been found to surpass the performance of previous MLLMs by 40 per cent, and it carries out misinformation detection with higher accuracy than other state-of-the-art detection methods. The researchers hope that, with further improvements, SNIFFER can be made publicly available to help users identify out-of-context information.

QACheck: A tool for question-guided fact-checking

The availability of reliable fact-checking tools is one way to combat the spread of false information. However, fact-checking through online sources involves a complex and multi-step reasoning process. Many existing fact-checking systems also lack transparency in their decision-making process, making it difficult for users to obtain a reasonable explanation for their conclusions.

To address this issue, iGyro principal investigator Associate Professor Kan Min-Yen from the NUS School of Computing and his research team worked with international collaborators to develop the Question-guided Multihop Fact-Checking (QACheck) system, which steers the model's reasoning by posing a series of critical questions necessary for verifying a claim.

QACheck consists of five core modules: a claim verifier, question generator, question answering module, QA validator, and reasoner. Users can input a claim into QACheck, which then evaluates its accuracy and produces a detailed report outlining the reasoning process through a series of questions and answers. The tool also cites the sources of evidence for each question, promoting a transparent, explainable, and user-friendly fact-checking experience.
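
One way to picture how the five modules could interact is as an iterative question-and-answer loop. The sketch below is a hypothetical illustration of that loop, not the published QACheck code; every module stub is a placeholder.

```python
# Minimal sketch of a question-guided fact-checking loop built from the five
# modules described above. All stubs are hypothetical placeholders, not the
# published QACheck implementation.

def claim_verifier(claim, qa_history):
    # Decide whether enough question-answer evidence exists to judge the claim.
    return len(qa_history) >= 3


def question_generator(claim, qa_history):
    # Pose the next question needed to verify the claim.
    return f"Question {len(qa_history) + 1} about: {claim}"


def question_answerer(question):
    # Answer the question, e.g. by retrieving and reading online sources;
    # return the answer together with a cited source.
    return ("an answer", "https://example.org/source")


def qa_validator(question, answer):
    # Keep only answers that actually address the question.
    return answer[0] != ""


def reasoner(claim, qa_history):
    # Aggregate the validated Q&A pairs into a verdict and a rationale
    # that cites the evidence gathered along the way.
    verdict = "supported" if qa_history else "not enough evidence"
    rationale = "\n".join(f"Q: {q} A: {a} ({src})" for q, (a, src) in qa_history)
    return verdict, rationale


def fact_check(claim, max_rounds=5):
    qa_history = []
    for _ in range(max_rounds):
        if claim_verifier(claim, qa_history):
            break
        question = question_generator(claim, qa_history)
        answer = question_answerer(question)
        if qa_validator(question, answer):
            qa_history.append((question, answer))
    return reasoner(claim, qa_history)
```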

The team's next step is to boost QACheck's breadth and depth by integrating additional knowledge bases and incorporating a multi-modal interface that supports different data formats, such as images, tables, and charts, broadening the system's ability to process and analyse such content.

The team led by iGyro principal investigator Prof Simon Chesterman mapped the distribution of laws against misinformation around the world and their relationship with countries' political freedoms, based on information as of 2023.

Mapping out global legislation implemented against fake news

As digital information sources become sophisticated and evolve rapidly, regulations and policies must adapt and keep up with this dynamic landscape.

A team led by iGyro principal investigator Professor Simon Chesterman, who is from the NUS Faculty of Law, created an interactive map of the global landscape of legislative efforts against fake news and misinformation, illustrating how laws aimed at addressing MDM have evolved globally from 1995 to 2023.

Notably, the team found that these laws were initially introduced in countries with fewer civil liberties, particularly in Africa and Asia. More recently, Asian nations have contributed significantly to the rise in such legislation, often granting greater powers to their governments. The team also found that the expansion of these laws has accelerated most rapidly in Western jurisdictions, including the United States, Canada, and the European Union.

Through this interactive map, the iGyro team hopes to conduct a more in-depth analysis of the types of laws that govern digital information and the effectiveness of the different approaches countries have adopted to combat fake news. The insights gained from this research could help shape future policies around the world.

"We hope that by developing innovative tools, such as SNIFFER and QACheck, and analysing the global legislative landscape against fake news and misinformation to shape future policies, we can create a reliable digital information ecosystem and empower users to have a trustworthy internet to access information," said Prof Chen.