SEC - The United States Securities and Exchange Commission

10/10/2024 | Press release | Distributed by Public on 10/10/2024 12:43

Office Hours with Gary Gensler: Fraud and Deception in Artificial Intelligence

This video can be viewed at the below link.[1]

In 1950, the famous mathematician Alan Turing asked, "Can machines think?" What does that mean for securities law, particularly the laws related to fraud and manipulation?

Bad actors, ever since antiquity, have found new ways to deceive the public. With artificial intelligence, fraudsters have yet a new tool to exploit. But make no mistake. Under the securities law, fraud is fraud.

Former SEC Commissioner Kara Stein recently wrote about what happens when you combine AI, finance and the law of fraud. She spoke of three types of harm.

The first: Programmable Harm. It's kind of straightforward. If someone uses an algorithm and is optimizing to manipulate or defraud the public, that's just fraud.

It gets interesting when we get to the second category: Predictable Harm. Again, I think it's reasonably straightforward. In essence, has someone recklessly or knowingly disregarded a foreseeable risk in deploying a particular AI model? In essence, did they act reasonably?

You see, under the securities law, you aren't supposed to trade in front of your customers. That's called front running. You aren't supposed to spoof. In other words, place a fake order. You aren't supposed to lie to the public. Well, it's equally important that the robots, I mean AI models, don't do these things either. Investor protection requires that the humans who deploy the model put in place appropriate guardrails.

Some might ask, what if the AI models themselves are self-learning? They're changing. They're adapting. What if the AI models hallucinate, which we all know they sometimes do? I still believe that those who deploy the AI models should have appropriate guardrails for that as well.

That brings up the third category: in essence, when firms deploy an AI model that creates Unpredictable Harm. How do we hold those firms responsible? While some of that might play out in court, I think right now the harms are mostly either programmable or predictable, because we do know that these models may self-learn and may hallucinate. We do know that there are things you're not supposed to do, like lie to the public.

You know, there was a famous early movie executive, Joseph Kennedy. He went on to be the head of the Securities and Exchange Commission, and his son became president. Well, what did Chair Kennedy say in this seat 90 years ago? He said, "The Commission will make war without quarter on any who sell securities by fraud or misrepresentation." And I think he meant the AI model deployers as well.