Google LLC

09/12/2024 | Press release

DataGemma: Using real-world data to address AI hallucinations

Large language models (LLMs) powering today's AI innovations are becoming increasingly sophisticated. These models can comb through vast amounts of text and generate summaries, suggest new creative directions and even draft code. However, as impressive as these capabilities are, LLMs sometimes confidently present information that is inaccurate. This phenomenon, known as "hallucination," is a key challenge in generative AI.

Today we're sharing promising research advancements that tackle this challenge directly, helping reduce hallucination by anchoring LLMs in real-world statistical information. Alongside these research advancements, we are excited to announce DataGemma, the first open models designed to connect LLMs with extensive real-world data drawn from Google's Data Commons.

Data Commons: A vast repository of publicly available, trustworthy data

Data Commons is a publicly available knowledge graph containing over 240 billion rich data points across hundreds of thousands of statistical variables. It sources this public information from trusted organizations like the United Nations (UN), the World Health Organization (WHO), the Centers for Disease Control and Prevention (CDC) and Census Bureaus. Combining these datasets into one unified set of tools and AI models empowers policymakers, researchers and organizations seeking accurate insights.

Think of Data Commons as a vast, constantly expanding database filled with reliable, public information on a wide range of topics, from health and economics to demographics and the environment, which you can interact with in your own words using our AI-powered natural language interface. For example, you can explore which countries in Africa have had the greatest increase in electricity access, how income correlates with diabetes in US counties or your own data-curious query.
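Alongside the natural language interface, Data Commons can also be queried programmatically. The sketch below uses the `datacommons` Python client to pull a single observation and a time series; the statistical variable and place identifiers are illustrative assumptions about how such a query might look, not the only way to ask it.

```python
# Minimal sketch of querying Data Commons programmatically, assuming the
# `datacommons` Python client (pip install datacommons). The statistical
# variable and place identifiers below are illustrative examples.
import datacommons as dc

# Latest diabetes prevalence for Santa Clara County, CA (dcid: geoId/06085).
prevalence = dc.get_stat_value("geoId/06085", "Percent_Person_WithDiabetes")
print(f"Diabetes prevalence, Santa Clara County: {prevalence}%")

# Full time series of median household income for the same county.
income_series = dc.get_stat_series("geoId/06085", "Median_Income_Household")
for date, value in sorted(income_series.items()):
    print(date, value)
```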

How Data Commons can help tackle hallucination

As generative AI adoption increases, we're aiming to ground those experiences by integrating Data Commons within Gemma, our family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. These DataGemma models are available to researchers and developers starting now.

DataGemma will expand the capabilities of Gemma models by harnessing the knowledge of Data Commons to enhance LLM factuality and reasoning using two distinct approaches:

1. RIG (Retrieval-Interleaved Generation) enhances the capabilities of our language model, Gemma 2, by proactively querying trusted sources and fact-checking against information in Data Commons. When DataGemma is prompted to generate a response, the model is programmed to identify instances of statistical data and retrieve the answer from Data Commons. While the RIG methodology is not new, its specific application within the DataGemma framework is unique.
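One way to picture the RIG flow: the fine-tuned model annotates each statistic it generates with a natural-language Data Commons query, and a lightweight post-processing step resolves those queries and interleaves the retrieved values into the final answer. The sketch below illustrates only that resolution step; the marker syntax and the `query_data_commons` helper are hypothetical stand-ins, not DataGemma's actual interface.

```python
import re

# Hypothetical marker format: assume the tuned model wraps each statistic in a
# tag carrying a natural-language Data Commons query plus its own estimate,
# e.g. '[DC("population of California in 2022") -> 39 million]'.
# DataGemma's real tag syntax may differ; this only illustrates the pattern.
MARKER = re.compile(r'\[DC\("(?P<query>[^"]+)"\)\s*->\s*(?P<llm_value>[^\]]+)\]')


def query_data_commons(nl_query: str):
    """Hypothetical helper: translate the natural-language query into a
    statistical variable and place, then fetch the observation from
    Data Commons. Returns None if no matching statistic is found."""
    ...  # placeholder; a real implementation would call the Data Commons API


def interleave(model_output: str) -> str:
    """Replace each model-generated statistic with the value retrieved from
    Data Commons, falling back to the model's estimate if retrieval fails."""
    def resolve(match: re.Match) -> str:
        retrieved = query_data_commons(match.group("query"))
        return retrieved if retrieved is not None else match.group("llm_value")

    return MARKER.sub(resolve, model_output)


# Example: a raw model response before and after the fact-checking pass.
raw = 'California had roughly [DC("population of California in 2022") -> 39 million] residents.'
print(interleave(raw))
```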