11/21/2024 | Press release
Good afternoon and thank you all for joining today's event.
Today, I want to talk about credit scores and how we should think about them in the context of artificial intelligence. I think the United States should set a more explicit goal of shifting away from our outdated approach to credit scores toward one that is more predictive and transparent.
Roughly 35 years ago, credit scores started to take hold in America. Rather than retaining an individual consultant to help a lender sift through credit reports and files to develop models and formulas for underwriting, a new foundational model was developed by a company called FICO. FICO was able to develop this foundational model and a scoring system through a business arrangement with the three national credit reporting conglomerates.
Importantly, this created standardization that would ultimately make consumer loans more like a security. The U.S. was already accustomed to having bond issuances rated by companies like Moody's, S&P, and Fitch, which gave investors a standardized read on creditworthiness. A standardized score created similar comparability for consumer loans and allowed otherwise illiquid loans to become more tradeable.
Several years later, FICO's dominance was further cemented when Fannie Mae and Freddie Mac began requiring mortgage lenders to submit a FICO score with loans sold to the mortgage giants. It's now common for securitizations of all types of consumer loans to incorporate FICO scores.
As with AI today, FICO's standardized score was seen as revolutionary, and it dramatically changed consumer finance. But there are real problems with standardized credit scores like FICO's. Here are a few:
First, these credit scores can only make predictions about people with credit histories. This means that young people and others entering the financial system are penalized, often harshly.
Second, credit scores continue to be opaque, and the arguments for providing the cloak of secrecy to what is essentially a market utility are getting weaker.
Third, lenders report to the CFPB that credit scores are just not predictive enough. Many major lenders build their own proprietary scorecards to evaluate applications.
Fourth, mortgage lenders are angry about FICO's price gouging. We have evaluated this concern carefully, and FICO's price hikes are good for FICO but harmful to the competitiveness of the mortgage market.
We can do better than this. Recently, the CFPB finalized rules to accelerate open banking in the U.S., which will further reduce the salience of FICO scores by allowing consumers to permission personal financial data from across their financial lives as part of a loan application. However, there is still the issue of standardization.
To solve this, regulators and market participants could work together to develop a specific artificial intelligence model that lenders, investors, and others could use to standardize a score. Ideally, this model could serve as a market utility: an open-source model, with a cooperative financing arrangement to pay for ongoing iterations and testing.
Perhaps more importantly, the model could be transparent about its key inputs to ensure fairness, with proper governance over what types of data should be considered. This could serve as an important companion to achieving the overall goals of promoting competition and inclusion in a more open banking system.
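As a minimal, hypothetical sketch of what that kind of transparency could look like in practice, the example below shows a toy scorecard whose inputs and weights are fully published, so anyone can see exactly how a number is produced. The feature names, coefficients, and 300-850 scaling are illustrative assumptions for this sketch, not a description of any actual model used by FICO, lenders, or the Bureau.

```python
# Hypothetical sketch of an "open scorecard": every input and weight is
# published, so lenders, investors, and borrowers can inspect exactly how
# a score is computed. All values below are illustrative placeholders.

import math

# Published model inputs and weights (hypothetical values for illustration).
WEIGHTS = {
    "on_time_payment_rate": 2.5,      # share of payments made on time (0-1)
    "credit_utilization": -1.8,       # share of available credit in use (0-1)
    "months_of_cash_reserves": 0.3,   # from permissioned bank-account data
    "income_stability": 1.2,          # 0-1 measure of income consistency
}
INTERCEPT = -1.0

def score(applicant: dict[str, float]) -> int:
    """Map the published inputs to a 300-850 scale via a logistic transform."""
    z = INTERCEPT + sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)
    probability_of_repayment = 1.0 / (1.0 + math.exp(-z))
    return round(300 + 550 * probability_of_repayment)

if __name__ == "__main__":
    applicant = {
        "on_time_payment_rate": 0.97,
        "credit_utilization": 0.25,
        "months_of_cash_reserves": 4.0,
        "income_stability": 0.8,
    }
    # Every factor driving this number is public and auditable.
    print(score(applicant))
```

Because the weights and input definitions are open, the fairness and governance questions raised above become questions anyone can examine directly, rather than matters hidden inside a proprietary formula.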
In the short term, the CFPB and others need to do more work to address price gouging on credit scores, especially when it comes to the harmful effect on mortgage lenders and borrowers. We also need to make sure existing uses of AI comply with the law. But as we think about the future of AI in financial services over the long term, FICO's monopoly offers cautionary lessons. We'll need to continue to adjust government policies that push the market toward the use of traditional credit scores and instead create the conditions for transparent and competitive uses of AI. Thank you.