Fair Isaac Corporation

11/21/2024 | Press release | Distributed by Public on 11/21/2024 07:21

2025 Predictions for AI and GenAI: Responsibility, Rekindled

I predict that in 2025, companies will return to responsible operationalization of AI and GenAI to improve their business.

  1. Responsible AI Will Be Revisited

If you follow me on LinkedIn, you know that Responsible AI is a big topic for me. I first predicted the rise of Explainable AI, a key component of Responsible AI, in 2017.

The excitement around ChatGPT over the past couple of years has pushed responsible use of AI down corporate priority lists. I predict that in 2025, consumers, businesses and banks will re-embrace Responsible AI and fall in love with this concept again. We will see more organizations invest in AI governance frameworks, and in tools and processes such as blockchain-based model development management, to build AI that is explainable, ethical and auditable.

Trust is at the root of this rekindled flame. It is at the heart of every company's relationship with its customers, and directly proportional to business success. Organizations that compete on transparency and auditability will go a long way in regaining customers' trust, and market share.

  2. RegTech Will Come to AI

Regulatory technology (RegTech) was a big deal a few years ago, as banks and other companies looked to technology to help automate regulatory monitoring, reporting and compliance. I predict that in 2025, regulatory analytics and other new technology will become widely available to help companies demonstrate compliance with emerging AI regulations.

AI regulation will extend far beyond the financial services industry. I believe that organizations will pre-emptively come together to create Information Sharing and Analysis Centers (ISACs), formulating industry-based standards and methods to actually meet regulations. AI ISACs will focus on helping members interpret AI regulation and achieve balance with specific nuances within each industry, since the "how-to" is not specified within a general regulatory objective. RegTech will help smooth the way.

  3. GenAI Use Cases Will Get Real and Data-Domain Specific

As companies sort through their GenAI proof-of-concept (POC) projects, those that are serious about using large language models (LLMs) and other generative techniques in a responsible, value-based way will focus on the tenets of Responsible AI, which start with mastering your own data.

In 2025, GenAI programs will be based on actively curated data that is relevant to specific business domains. Companies will curate and cleanse the data the LLM should be learning from, and remove the huge amounts of data it shouldn't. This is a first step of responsible use; training data must be representative of the decisions that will be based on it. Companies will differentiate themselves on their data strategies for LLM creation; after all, an LLM is only a rendering of the data on which it was built.
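The curation step described above can be sketched in a few lines. This is a hypothetical illustration, not FICO's method: the domain term list, the `min_hits` threshold, and the dedup strategy are all invented for the sketch.

```python
import re

# Hypothetical domain-scoped curation before language-model training:
# keep only documents that look relevant to the business domain, and
# drop near-duplicates. The term list and threshold are illustrative.
DOMAIN_TERMS = {"credit", "loan", "underwriting", "chargeback"}

def is_in_domain(doc: str, min_hits: int = 2) -> bool:
    """Keep a document only if it mentions enough domain terms."""
    text = doc.lower()
    return sum(1 for term in DOMAIN_TERMS if term in text) >= min_hits

def curate(corpus: list[str]) -> list[str]:
    """Filter a raw corpus down to domain-relevant, deduplicated documents."""
    seen, kept = set(), []
    for doc in corpus:
        key = re.sub(r"\s+", " ", doc.strip().lower())  # normalize for dedup
        if key in seen or not is_in_domain(doc):
            continue
        seen.add(key)
        kept.append(doc)
    return kept

corpus = [
    "Loan underwriting rules depend on the applicant's credit history.",
    "Loan  underwriting rules depend on the applicant's credit history.",  # duplicate
    "Celebrity gossip roundup for the week.",                              # off-domain
]
print(len(curate(corpus)))  # 1 document survives curation
```

In practice the in-domain test would be a trained classifier or embedding similarity rather than keywords, but the pipeline shape, filter then deduplicate, is the same.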

  4. Companies Will Build Their Own Small and Focused Language Models

Furthermore, in 2025 financial institutions and other companies will move on from POCs. We will see more and more companies building their own small language models (SLMs). We will see a rise in focused language models (FLMs) that can address the most undermining aspect of LLMs, hallucination, with a corpus of specific domain data and knowledge anchors to ensure task-based FLM responses are grounded in truth. These same FLMs will help legitimize Agentic AI applications, which are still in their infancy but also require laser-focused, task-specific language models that operate with high degrees of accuracy and control.

Widespread use of FLMs can create another positive result: reducing the environmental impact of GenAI. According to industry estimates, a single ChatGPT query consumes between 10 and 50 times more energy than a Google search query. At a higher level, the United Nations' most recent Digital Economy Report suggests that data centers run by Google, Amazon, Meta, Apple and Microsoft (GAMAM) alone were responsible for consuming more than 90 TWh (terawatt-hours; 1 TWh = 1,000 gigawatt-hours) of energy, more than entire countries such as Finland, Belgium, Chile or Switzerland consume. As companies look for ways to achieve sustainability goals other than buying carbon credits, FLMs can make a meaningful impact while delivering better results for businesses.
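To put the quoted figures in perspective, here is a back-of-the-envelope calculation. Only the 10-50x multiplier and the 90 TWh total come from the article; the ~0.3 Wh baseline for a conventional web search is an outside assumption commonly cited elsewhere, not a figure from this piece.

```python
# Assumed energy for one conventional web search (Wh); NOT from this article.
SEARCH_WH = 0.3

# Article quotes a GenAI query at 10-50x a conventional search.
genai_low_wh = 10 * SEARCH_WH   # 3 Wh
genai_high_wh = 50 * SEARCH_WH  # 15 Wh

# Article quotes >90 TWh for GAMAM data centers; 1 TWh = 1e12 Wh.
gamam_wh = 90 * 1e12

# How many GenAI queries would that cover at the high per-query estimate?
queries_at_high = gamam_wh / genai_high_wh
print(f"GenAI query: {genai_low_wh:.0f}-{genai_high_wh:.0f} Wh")
print(f"90 TWh covers about {queries_at_high:.1e} queries at 15 Wh each")
```

The point of the arithmetic is scale: even trillions of queries fit inside that 90 TWh figure, so model-side efficiency gains (such as replacing general LLM calls with smaller FLMs) compound quickly.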

  5. AI Trust Scores Will Make It Easier to Trust GenAI

AI Trust Scores, such as those associated with FLMs, will make it easier for the public to trust GenAI. This secondary, independent, risk-based AI Trust Score, and strategies based on it, allow GenAI to be operationalized at scale while its accuracy is measured.

AI Trust Scores reflect three things:

  • The probability that key contextual data (such as product documentation) the task-specific FLM was trained on is used to provide the answer.
  • The AI Trust Model's confidence that the FLM's output rests on sufficient statistical support. LLMs work on probability distributions; if there is not enough training data to form a statistically significant distribution, the AI Trust Model will not be confident in the answer.
  • Alignment with knowledge anchors, that is, alignment with established facts rather than merely with data. Truth-versus-data is one of the most tenacious challenges in LLM technology.

AI Trust Scores can be operationalized in a proper risk-based system, so businesses can decide whether to trust an FLM's answer.
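A risk-based gate over the three signals above might look like the following sketch. Everything here is hypothetical: the weights, the 0.8 threshold, and the field names are invented for illustration; the article does not disclose how an actual AI Trust Score is computed.

```python
from dataclasses import dataclass

# Hypothetical blend of the three trust signals described above.
# Weights and threshold are invented; real scoring would be calibrated.
@dataclass
class TrustSignals:
    context_prob: float      # P(answer drew on curated contextual data)
    stat_confidence: float   # confidence that training support is sufficient
    anchor_alignment: float  # agreement with knowledge anchors (truth vs. data)

def trust_score(s: TrustSignals,
                weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Weighted blend of the three signals, each assumed to be in [0, 1]."""
    w1, w2, w3 = weights
    return w1 * s.context_prob + w2 * s.stat_confidence + w3 * s.anchor_alignment

def accept(s: TrustSignals, threshold: float = 0.8) -> bool:
    """Risk-based decision: act on the FLM's answer only above the threshold."""
    return trust_score(s) >= threshold

grounded = TrustSignals(context_prob=0.95, stat_confidence=0.9, anchor_alignment=0.9)
shaky = TrustSignals(context_prob=0.4, stat_confidence=0.5, anchor_alignment=0.6)
print(accept(grounded), accept(shaky))  # True False
```

The design choice worth noting is that the score is independent of the FLM itself: a secondary model judges the answer, so the business can tune the threshold to the risk of each use case (a higher bar for lending decisions than for a marketing draft, say).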

Looking Forward and Looking Back

It will be interesting to see how my AI and GenAI predictions play out in 2025. In looking back, my 2024 AI Predictions held pretty true:

  • Auditable AI Will Make Accountability Cool: Yes, the idea of using blockchain for model development management gained significant visibility and traction in 2024.
  • Small Will Be Beautiful: Yes, new, smaller approaches to LLM functionality, such as FLMs, have sprung up to temper the unwieldiness of LLMs.
  • Humans Will Reassert Themselves: Yes, as chatbot and other GenAI errors became increasingly evident in 2024, drawing criticism, even Gartner provided guidance on when GenAI should not be used.

How FICO Can Help You with Responsible AI