
The evolution of hybrid AI: where deterministic and statistical approaches meet

Jonathan Aston

Dec 10, 2024

They come from different worlds. Now they're meeting for the first time, and sparks are flying.

Artificial Intelligence (AI) has evolved along two distinct pathways in parallel over the decades: deterministic AI and statistical AI. These two approaches have historically shaped how AI systems learn, reason, and make decisions, each bringing its own strengths and limitations. One offers clarity and structure, while the other learns patterns from data and projects them into the future. Now, a new field of AI research aims to fuse these two forms of AI into one. If it works, things are about to get very interesting.

Deterministic AI: the early era

In the early days of AI research, the main limitations were computing power and memory. This led to AI being dominated by deterministic approaches, specifically symbolic AI, which uses formal logic and explicit rules. These rule-based systems allowed decisions to be made automatically: the intelligence was captured from expert input and was simple in nature.

The most famous of these early systems were the expert systems of the 1970s and 1980s. These systems used if-then rules to mimic human decision-making, offering clear, explainable reasoning. For instance, for a given input in a factory setting, a defined set of actions is taken based on that input. Deterministic AI provides clarity and precision, as every rule is explicitly defined, so it is very easy to understand how the system works and why a particular outcome was reached. The challenge with these systems is their lack of flexibility and dependence on pre-defined knowledge. The answer to this problem? Statistical AI.
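To make the idea concrete, here is a minimal sketch of an if-then rule engine in Python. The factory scenario, rules, and thresholds are illustrative assumptions, not taken from any specific expert system.

```python
# A tiny rule-based "expert system" sketch: every decision comes from
# an explicit, human-readable rule, so the reasoning is fully traceable.
rules = [
    # (condition, action) pairs with illustrative thresholds only
    (lambda r: r["temperature_c"] > 90, "shut down the line"),
    (lambda r: r["vibration_mm_s"] > 7.0, "schedule maintenance"),
    (lambda r: r["output_per_hour"] < 100, "raise supervisor alert"),
]

def decide(reading):
    """Return every action whose rule fires for this sensor reading."""
    return [action for condition, action in rules if condition(reading)]

reading = {"temperature_c": 95, "vibration_mm_s": 3.2, "output_per_hour": 120}
print(decide(reading))  # ['shut down the line']
```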

The rise of statistical AI 

Statistical AI had always existed, but it was not easy to use. Statistics were often applied statically, meaning the analyses that businesses relied on were based on a snapshot of the past. With traditional statistics, you understand the past to inform the future (often through policy changes); with machine learning, you predict the future from the past and act accordingly. The emergence of fast computing changed the game for statistical AI. People could now build and experiment with more complex models and much larger datasets, and AI began to move into areas of increasing complexity and uncertainty.

Another noteworthy advancement was the increasing availability of machine learning libraries like ML++. These packages meant that machine learning models no longer had to be built from scratch every time. A developer could now build a model on data and check whether machine learning could solve a business problem faster than ever before. In this way, AI proofs of concept emerged, allowing the feasibility of an idea to be tested quickly.
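As an illustration, a proof of concept along these lines can be only a few lines of code. The sketch below uses scikit-learn and one of its bundled toy datasets purely as an example; the library, dataset, and model choice are assumptions for illustration, not details from the article.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# A ready-made dataset standing in for business data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an off-the-shelf model -- nothing built from scratch.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A quick score tells us whether the idea is worth pursuing further.
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```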

However, the move away from expert knowledge being explicitly programmed into AI came with new challenges, especially around the explainability of the models built. Explainability has since developed into an entire field of data science, and the more complex the model, the harder it is both to interpret it (i.e., to explain a decision through logical reasoning in a way the receiver can understand) and to explain what the model is doing (i.e., to trace how a decision passes through the model, based on how the algorithm works). The term "black box" came into use, highlighting that we cannot see, and therefore cannot inspect, what happens inside the model.

In many cases this might not be problematic, and several techniques have been developed to understand models and trace back the predictions they have made. In some cases (such as computer vision models), visualizing model activation (i.e., which nodes in the model are triggered most, or which areas of the image are most critical to the decision) can provide enough "explanation" for us to judge the quality of that model for the task at hand.
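One simple way to approximate this kind of explanation is occlusion sensitivity: mask parts of the image and see how much the model's score drops. The sketch below is a minimal, assumed illustration using a stand-in scoring function rather than a real trained network.

```python
import numpy as np

def toy_score(image):
    # Stand-in for a real classifier's confidence in one class:
    # here it simply responds to brightness in the top-left corner.
    return image[:14, :14].mean()

def occlusion_map(image, score_fn, patch=7):
    """Slide a grey patch over the image and record how much the class
    score drops; regions with large drops matter most to the decision."""
    base = score_fn(image)
    heat = np.zeros_like(image)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()
            heat[y:y + patch, x:x + patch] = base - score_fn(occluded)
    return heat

image = np.random.rand(28, 28)
heat = occlusion_map(image, toy_score)
y, x = np.unravel_index(heat.argmax(), heat.shape)
print(f"Most influential region is around pixel ({y}, {x})")  # top-left expected
```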

The birth of hybrid AI: combining deterministic and statistical approaches

As AI matured, the need to combine the precision and interpretability of deterministic AI with the flexibility and adaptability of statistical AI became clear. Hybrid AI systems (also called neurosymbolic AI) have emerged that can handle complex tasks with both structure and uncertainty.

Reasoning and logical inference: The essence of deterministic AI

One mechanism for structuring data for use by deterministic AI is the Knowledge Graph (KG), which represents entities (such as people, places, or concepts) and models the relationships between them in a structured, graph-based format. These graphs embody the symbolic AI tradition by providing a deterministic structure that can be reasoned through logically. For example, if John lives in London and Brian lives with John, then the fact that Brian must also live in London can be derived, even though it is not explicitly present in the data. Checking model quality automatically through logical constraints, inferring new and hidden knowledge, and finding inconsistencies and flaws in information models are all within reach. The ability of knowledge graphs to handle complex and vast amounts of knowledge in a structured way can be very valuable. We can also look to include elements of statistical AI in deterministic AI, and in this way create hybrid AI.
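A minimal sketch of that style of reasoning is shown below: facts are stored as subject-predicate-object triples and a single hand-written rule derives the implicit fact about Brian. The triple representation and the rule are illustrative assumptions, not any particular graph engine's API.

```python
# Facts stored as (subject, predicate, object) triples.
facts = {
    ("John", "lives_in", "London"),
    ("Brian", "lives_with", "John"),
}

def infer_residence(triples):
    """Rule: if X lives_with Y and Y lives_in P, then X lives_in P."""
    inferred = set()
    for (x, p1, y) in triples:
        if p1 == "lives_with":
            for (y2, p2, place) in triples:
                if p2 == "lives_in" and y2 == y:
                    inferred.add((x, "lives_in", place))
    return inferred - triples

print(infer_residence(facts))
# {('Brian', 'lives_in', 'London')}
```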

One example of this is a decision tree used in conjunction with a knowledge graph to add new elements to the graph based on the statistical likelihood of a link being there. While a knowledge graph flags individual anomalies, the decision tree evaluates them collectively, enabling a more nuanced analysis. For example, in a credit card transaction, we can combine several factors such as the transaction amount being abnormally high, the location being unusual, and the payment going to a new bank account. Individually these might indicate fraud with probabilities of 20%, 30%, and 10% respectively, but combined the likelihood of fraud is 49.6% (assuming independent signals, the chance that at least one indicates fraud is 1 − 0.8 × 0.7 × 0.9 = 0.496), which is greater than any of the individual factors alone. By leveraging the structured relationships and context in the knowledge graph, the decision tree identifies patterns that might otherwise go unnoticed, such as the compounded risk of anomalies occurring together. This integration enhances the accuracy of fraud detection by applying reasoning to the interconnected data in the knowledge graph and prioritizing cases for investigation with greater precision. The hybrid approach also provides interpretability and scalability, as investigators can trace decisions back to both the individual features in the knowledge graph and the decision tree's logic, improving transparency and trust. These methods can be immensely powerful and showcase hybrid AI in action.
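The arithmetic and the tree-over-graph-features idea can be sketched in a few lines. The feature flags, toy training data, and use of scikit-learn's DecisionTreeClassifier below are illustrative assumptions, not details from a real fraud system.

```python
from sklearn.tree import DecisionTreeClassifier

# Individual anomaly probabilities from the example above.
p_amount, p_location, p_account = 0.20, 0.30, 0.10

# Assuming the signals are independent, the chance that at least one
# indicates fraud is 1 minus the product of the "no fraud" probabilities.
combined = 1 - (1 - p_amount) * (1 - p_location) * (1 - p_account)
print(f"Combined fraud likelihood: {combined:.1%}")  # 49.6%

# Toy decision tree over graph-derived flags (illustrative data only):
# columns = [abnormal_amount, unusual_location, new_account]
X = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1], [1, 1, 0], [0, 0, 1]]
y = [0, 0, 0, 1, 1, 0]  # 1 = confirmed fraud
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict([[1, 1, 1]]))  # flags the compounded-anomaly case
```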

Generative AI: The latest buzz in statistical AI

Generative AI (Gen AI), which includes models like GPT-4, Google's Gemini, Meta's Llama 3, and those from Mistral AI, is the latest advancement in statistical AI and is the outcome of further advances in computing power and its availability. Huge models are built at great cost to handle many different tasks, having learned from a huge corpus of information. They can generate text, images, and other forms of content.

However, being statistical in nature, these models can get things wrong; just as in machine learning, a model cannot predict the correct outcome 100% of the time. Gen AI produces incorrect predictions too, but instead of an incorrect churn prediction, for example, an incorrectly predicted word leads to the next word being wrong, and entirely plausible sentences are built on top of that initial error. As you can imagine, a whole passage of incorrect text can be created in a snowball effect from one incorrect prediction. Incorrect predictions can therefore be more insidious in Gen AI than in other statistical AI methods; in Gen AI we call these effects hallucinations. To explore this subject further, consider reading this blog we previously published from the AI Lab. The chance of error means these models are not as reliable as deterministic AI, but deterministic AI can be coupled with Gen AI to make them more reliable. Here we see the emergence of GraphRAG, for instance, where graph technology is used with Gen AI to improve information retrieval from free-text questions. This is another example of hybrid AI, and one being considered by many today to make Gen AI more reliable.

An LLM-generated knowledge graph built using GPT-4 Turbo: https://microsoft.github.io/graphrag/
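The idea behind a graph-grounded retrieval step can be sketched very simply: pull the facts connected to the entities mentioned in a question out of the graph and hand them to the model as context. The toy graph, the prompt format, and the build_grounded_prompt helper below are hypothetical illustrations, not the GraphRAG library's actual API.

```python
# A toy knowledge graph: facts keyed by the entity they describe.
graph = {
    "John": [("John", "lives_in", "London")],
    "London": [("London", "capital_of", "United Kingdom")],
}

def build_grounded_prompt(question, graph):
    """Collect triples whose entities appear in the question and prepend
    them as context, so the Gen AI model answers from retrieved facts
    rather than from its statistical memory alone."""
    facts = []
    for entity, triples in graph.items():
        if entity.lower() in question.lower():
            facts += [f"{s} {p.replace('_', ' ')} {o}" for s, p, o in triples]
    return ("Answer using only these facts:\n"
            + "\n".join(facts)
            + f"\n\nQuestion: {question}")

print(build_grounded_prompt("Where does John live?", graph))
# The resulting prompt would then be sent to the Gen AI model of choice.
```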

Conclusion: The future of hybrid AI 

The journey of AI can be told as the parallel journeys of deterministic AI and statistical AI. However, the promise of AI throughout this journey has always lain in bridging these two worlds, where AI not only holds and uses knowledge with reasoning but also learns patterns from data and applies those learnings to the real-world knowledge it holds. The ability to abstract, reason, plan, predict, and explain comes within reach thanks to the combination of various types of AI models and methods of knowledge representation. This is where AI is heading: solving the challenges in both statistical and deterministic AI. The future of AI is hybrid AI, enabling more intelligent, robust, and trustworthy systems.

About AI Futures Lab

We are the AI Futures Lab, expert partners who help you confidently visualize and pursue a better, sustainable, and trusted AI-enabled future. We do this by understanding, pre-empting, and harnessing emerging trends and technologies, ultimately making possible trustworthy and reliable AI that triggers your imagination, enhances your productivity, and increases your efficiency. We will support you with the business challenges you know about and the emerging ones you will need to know about to succeed in the future.

We create blogs, like this one, Points of View (POVs), and demos around these focus areas to start a conversation about how AI will impact us in the future. For more information on the AI Lab and more of the work we have done, visit this page: AI Lab. 

Meet the author

Jonathan Aston

Data Scientist, AI Lab, Capgemini's Insights & Data

Jonathan Aston specialized in behavioral ecology before transitioning to a career in data science. He has been actively engaged in the fields of data science and artificial intelligence (AI) since the mid-2010s. Jonathan possesses extensive experience in both the public and private sectors, where he has successfully delivered solutions to address critical business challenges. His expertise encompasses a range of well-known and custom statistical, AI, and machine learning techniques.
