06/01/2023 | Press release
Up until just a few years ago, artificial intelligence (AI) was something you mainly heard about in movies like 2001: A Space Odyssey, Terminator and Chappie. Today, ChatGPT and other new AI-powered chat and search tools are getting a ton of coverage in the media, social sites and around the workplace.
While there are some concerning outputs from generative AI, like when folks get chat AIs to say scary things, what many fail to realize is that AI is pretty darn useful in ways Hollywood would not bother to predict.
Image created by DALL·E 2: Oil painting in the style of Van Gogh of the robot HAL from the movie 2001, reading a book about AI

Giving credit where it's due, John Oliver (not in Hollywood, in New York) recently chased the AI headlines as only a skilled satirist can, making you laugh at the wacky stories about people's dark AI encounters while wondering why they are reported as news at all.
But Oliver hit a crucial point that's worth reiterating: There is a big difference between explainable AI and black box AI, or artificial intelligence that arrives at conclusions, makes decisions or gives an output that is unexplainable. As AI applications are put to work at more complex tasks like managing capacity on power grids and monitoring financial transactions for fraud, all of us at FICO firmly believe that the only type of AI to apply is the responsible, explainable, ethical kind.
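The difference is easy to see in miniature: an explainable model can tell you which inputs drove a score. Here is a minimal, stdlib-only sketch of reason codes from a linear fraud score; the feature names, weights, and bias are invented for illustration and are not from any real FICO model:

```python
import math

# Hypothetical feature weights for a simple, interpretable fraud score.
# These names and values are illustrative assumptions only.
WEIGHTS = {
    "amount_vs_avg": 1.8,    # transaction amount relative to the card's average
    "new_merchant": 0.9,     # first purchase at this merchant
    "foreign_country": 1.2,  # transaction outside the home country
    "night_hours": 0.4,      # transaction between midnight and 5 a.m.
}
BIAS = -3.0

def score_with_reasons(features, top_n=2):
    """Return a fraud probability plus the features that drove it most."""
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    # Reason codes: the largest positive contributions, in descending order.
    reasons = sorted(
        (n for n in contributions if contributions[n] > 0),
        key=lambda n: contributions[n],
        reverse=True,
    )[:top_n]
    return probability, reasons

prob, reasons = score_with_reasons(
    {"amount_vs_avg": 2.5, "new_merchant": 1.0, "foreign_country": 1.0, "night_hours": 0.0}
)
```

Because every point of the score traces back to a named feature, an analyst (or a regulator) can see exactly why a transaction was flagged; a black-box model offers no such breadcrumb trail.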
When we hear stories about generative AI systems that declare love or a desire to die, we're often quick to equate it with a sentient being crying for help, rather than a mathematical model or algorithm regurgitating information from the data used to train it.
Even though AI systems are not, in fact, sentient beings, there are some significant concerns when we can't point to an explanation for their output. The dangers of unexplainable AI don't need to go as far as Skynet, WOPR, or VIKI to be problematic. AI that is biased, unexplainable and/or producing incorrect or inconsistent results can create regulatory and compliance problems for any financial institution unlucky enough to use it.
Training is a vital component of many facets of life. Whether it's spending time at the range to try and cure my slice (still working on that) or hitting the gym to work off holiday excesses, training allows us to incrementally improve. But we've all heard the adage: practice makes permanent, not perfect.
Anytime you are training, you have to ensure you are training on the right things or you'll be learning the wrong thing. In golf practice, that's often separating what you feel vs. what is real by using feedback aids like cones and pool noodles to help you really understand what your swing is doing and not what you think it is doing.
In that same way, any sort of AI has to be built up and trained correctly to avoid creating AI bias or potentially injecting human bias into the AI system. Using the right data to train an algorithm to achieve a specified and defined outcome is crucial. Any data used needs to be understood extremely well, and there needs to be a clear understanding of what types of outcomes or results are expected in order to avoid bias.
Oliver mentions in his rant that there are plenty of stories about AI training gone wrong, like when an AI learned to treat the small rulers used in dermatological photos as a sign of malignancy - because in the training data, a ruler appeared in every photo of a malignant tumor. This kind of unintended shortcut is not unique to AI, but it is a central consideration in any discussion of explainability.
This is why training interpretable AI on the correct types of data, using a representative and sufficiently large dataset, and understanding what the model is actually supposed to learn (we are looking at tumors, not rulers) are necessary baselines for applying AI.
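One cheap safeguard is to audit the training data for features that track a single label too closely before any model sees them. A stdlib-only sketch, assuming each image carries simple metadata tags; the dataset and tags here are invented to mirror the ruler story:

```python
from collections import Counter

# Toy metadata audit. The "ruler" tag stands in for any artifact
# that sneaks into only one class of the training data.
dataset = [
    {"label": "malignant", "tags": {"ruler", "close_up"}},
    {"label": "malignant", "tags": {"ruler"}},
    {"label": "malignant", "tags": {"ruler", "flash"}},
    {"label": "benign", "tags": {"close_up"}},
    {"label": "benign", "tags": {"flash"}},
    {"label": "benign", "tags": set()},
]

def tag_label_rates(data):
    """For each tag, count how often it co-occurs with each label."""
    counts = {}
    for row in data:
        for tag in row["tags"]:
            counts.setdefault(tag, Counter())[row["label"]] += 1
    return counts

def suspicious_tags(data, threshold=1.0):
    """Tags that appear (almost) exclusively with one label - candidate shortcuts."""
    flagged = []
    for tag, counter in tag_label_rates(data).items():
        total = sum(counter.values())
        top_label, top_count = counter.most_common(1)[0]
        if top_count / total >= threshold:
            flagged.append((tag, top_label))
    return flagged
```

Running `suspicious_tags(dataset)` flags the ruler immediately, before a single training cycle is spent learning the wrong lesson.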
But equally important, especially in the context of fraud, is to be concrete about the objectives of the model and to ensure you have the training data to support those objectives. Models need to generalize, and no single behavior should be allowed to dominate.
Another key is to know when predictions and outcomes are drifting out of range. One solution is to apply constant updates, using both structured and unstructured data, which can help ensure that model performance continues to produce the expected results, without drifting towards bias in one direction or another. However, the gold standard is to ensure you have monitoring in place to see if there is drift in the data or scores over time.
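One common way to quantify that drift is the population stability index (PSI), which compares the score distribution at development time against a recent window. To be clear, PSI is my example here, not a method the post names, and the 0.25 "significant shift" cutoff is a conventional rule of thumb rather than any FICO standard:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline score sample and a recent one.

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Floor empty bins so the log term below is always defined.
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                 # scores at model build time
recent = [min(i / 100 + 0.3, 1.0) for i in range(100)]   # recent scores, shifted upward
```

When `psi(baseline, recent)` crosses the alert threshold, that is the signal to investigate the inputs and consider a retrain, rather than waiting for customers or regulators to notice the model misbehaving.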
For decades now, financial institutions (FIs) and fintechs have been deploying AI for a number of specific use cases, but only recently has the concept of generative AI begun to generate significant headlines. Here are some of the more popular use cases for AI & machine learning (ML), and notes on why interpretability is so important in each case.
Interpretable AI, deployed in an enterprise-class platform, can make a whole range of financial, fraud, and customer experience use cases better. Though the general public seems obsessed with what they imagine a rogue AI might do, smart businesses realize the reality of AI is more like Dr. Theopolis from Buck Rogers (a friendly, professorial AI) than the MCP from Tron (an evil mastermind AI).
For more of my latest thoughts on fraud, financial crime and FICO's entire family of software solutions, follow me on Twitter @FraudBird.