10/03/2024 | News release | Distributed by Public on 10/03/2024 11:32
The Federal Trade Commission launched Operation AI Comply, announcing enforcement actions against five companies for alleged deception regarding artificial intelligence. The FTC's actions mark the latest instance of U.S. scrutiny of AI-related misconduct.
On September 25, 2024, as part of a new enforcement "sweep" called Operation AI Comply, the FTC announced enforcement actions against five companies that allegedly used artificial intelligence (AI) to "supercharge deceptive or unfair conduct that harms consumers." According to the FTC, these cases showcase how "hype surrounding AI" is used to "lure consumers into bogus schemes" and to provide AI-based tools that themselves can be used to deceive consumers. In announcing the actions, FTC Chair Lina Khan stated that "[t]he FTC's enforcement actions make clear that there is no AI exemption from the laws on the books."
The sweep is the latest development in the FTC's continued focus on deceptive schemes involving AI. In January, the FTC hosted a Tech Summit on AI, where Commissioner Rebecca Slaughter emphasized that the FTC would leverage "the full panoply of [its] statutory tools" to understand the "incentives and consequences" of AI and pursue enforcement actions, where appropriate. And earlier this month, the Director of the Bureau of Consumer Protection, Samuel Levine, reiterated that the FTC is "taking a proactive approach to addressing AI-related harms," referring to a February settlement with three individuals and related entities over allegations that they defrauded consumers through an e-commerce money-making scheme the company claimed was "powered by artificial intelligence."
As part of Operation AI Comply, the FTC announced enforcement actions against five companies (and related individuals). Four of the actions were unanimously authorized by the Commission and alleged that companies failed to live up to claims they made regarding their use of AI, a tactic known as "AI washing," with Commissioner Andrew Ferguson noting that the FTC's actions sought to hold the companies "to the same standards for honest-business conduct that apply to every industry." The fifth action rested on FTC claims that a company's AI technology could be used to mislead consumers, but it included no allegations that consumers were actually misled, drawing a strong rebuke from two dissenting Commissioners. All of the complaints allege unfairness and/or deception in violation of Section 5 of the FTC Act; for defendants that did not resolve the charges, the FTC also alleged violations of the Consumer Review Fairness Act (15 U.S.C. § 45b) and the Business Opportunity Rule (16 C.F.R. Part 437).
The FTC's Operation AI Comply is further evidence of increasing scrutiny of AI-related misconduct. Other U.S. authorities - most notably, the Securities and Exchange Commission (SEC), the Department of Justice (DOJ), and various State Attorneys General (State AGs) - have warned about the risks of AI misuse and are increasingly pursuing enforcement actions against alleged wrongdoers.
The SEC has repeatedly warned against "AI washing" and inaccurate AI disclosures. In December 2023 and March 2024 speeches, SEC Chair Gary Gensler cautioned against "AI washing" by misleading investors as to a company's true AI capabilities, emphasizing that securities laws require "full, fair and truthful disclosure." Recent SEC enforcement actions underscore this focus.
The DOJ has similarly signaled an increased focus on the impact of AI on its enforcement efforts. In February, Deputy Attorney General Lisa Monaco announced a new initiative, Justice AI, to convene experts from academia, science, and industry "to understand and prepare for how AI will affect the Department's mission and how to ensure we accelerate AI's potential for good while guarding against its risks." In the same speech, she warned that the DOJ will utilize existing legal frameworks to pursue AI-related wrongdoing and that "our enforcement must be robust." To that end, Deputy Attorney General Monaco announced that "where prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI - they will." And most recently, DOJ updated its corporate compliance guidance in September to emphasize the importance of evaluating and managing AI-related risk.
At the state level, State AGs have followed suit, with several - including Texas, Massachusetts, and California - warning that companies employing AI must ensure those uses comply with existing laws.
Recent AI enforcement actions from the FTC and other U.S. authorities offer several key takeaways:
New application, same laws. As FTC Chair Khan stated in the announcement, "there is no AI exemption from the laws on the books." Companies should treat their implementation of AI as they do other areas of business, which require due diligence, testing, oversight, and disclosures. Companies should also consider establishing internal policies governing the use of AI, educating their boards on applicable disclosure requirements, and staying up to date on guidance from agencies.