Fried, Frank, Harris, Shriver & Jacobson LLP

Operation AI Comply: FTC Steps Up Efforts to Scrutinize AI-Based Claims

Client memorandum | October 8, 2024

A series of recent enforcement actions and statements has underscored regulators' commitment to discouraging companies from making misleading claims about artificial intelligence (AI).[1] On September 25, 2024, the Federal Trade Commission (FTC) took another high-profile step in this ongoing crackdown by announcing "Operation AI Comply,"[2] a set of five cases alleging various forms of AI-related deception.[3] Four of the FTC enforcement actions concerned allegedly deceptive claims about AI-driven services, and the fifth involved a company that offered a general-purpose generative AI tool that purportedly allowed individuals to create "fake" consumer reviews.[4] More specifically:

  • DoNotPay - The FTC filed a complaint against DoNotPay, a company that claimed to offer an AI service that was "the world's first robot lawyer."[5] According to the complaint, DoNotPay falsely promised that its service would allow consumers to "sue for assault without a lawyer" and "generate perfectly valid legal documents in no time," and that the company would "replace the $200-billion-dollar legal industry with artificial intelligence."[6] DoNotPay could not deliver on these promises. The FTC alleges that the company did not test whether its AI chatbot's output matched the quality of a human lawyer's work, and that the company itself did not hire or retain any attorneys.[7] DoNotPay has agreed to a proposed FTC order settling the charges, under which it will pay $193,000 and notify consumers who subscribed to the service between 2021 and 2023 of the service's law-related limitations.[8]

  • Ascend Ecom - The FTC alleged that Ascend Ecom, an online business, falsely claimed its "cutting edge" AI-powered tools would help consumers quickly earn thousands of dollars a month in passive income by opening online storefronts.[9] According to the complaint, the scheme has defrauded consumers of at least $25 million.[10] As a result of the complaint, a temporary order has been issued halting the company's operations and placing it under the control of a receiver.[11] The case is ongoing.

  • Ecommerce Empire Builders - The FTC charged Ecommerce Empire Builders with falsely claiming to help consumers build an "AI-powered Ecommerce Empire" by participating in its training programs or buying a "done for you" online storefront for tens of thousands of dollars.[12] Relevant to AI, the complaint alleges that the training programs encouraged consumers to "[s]kip the guesswork and start a million-dollar business today" by supposedly harnessing the "power of artificial intelligence."[13] As a result of the complaint, a temporary order has been issued halting the company's operations and placing it under the control of a receiver.[14] The case is ongoing.

  • FBA Machine - In June 2024, the FTC acted against a business opportunity scheme that allegedly made false promises of guaranteed income to consumers through online storefronts that used AI-powered software.[15] According to the FTC, the scheme, which has allegedly operated under the names Passive Scaling and FBA Machine, cost consumers more than $15.9 million based on deceptive earnings claims about returns that rarely, if ever, materialized.[16] The complaint further alleged that the company's marketing materials claimed that FBA Machine uses "AI-powered" tools to help price products in the stores and maximize profits,[17] but these promises were ultimately unfounded. As a result of the complaint, a temporary order has been issued halting the company's operations and placing it under the control of a receiver.[18] The case is ongoing.

  • Rytr - Since April 2021, Rytr has marketed and sold an AI "writing assistant" service for a number of uses, one of which was specifically "Testimonial & Review" generation.[19] According to the FTC's complaint, Rytr's service generated detailed reviews containing specific, often material details that bore no relation to the user's input, and these reviews would almost certainly be false for the users who copied and published them online.[20] In many cases, subscribers' AI-generated reviews allegedly featured information that would deceive potential consumers relying on those reviews to make purchasing decisions;[21] the FTC's complaint and the surrounding circumstances dovetail with the FTC's recent rule banning the buying and selling of fake reviews and testimonials.[22] The proposed order settling the FTC's complaint would bar the company from advertising, promoting, marketing or selling any service dedicated to, or promoted as, generating consumer reviews or testimonials.[23]

Notably, the FTC vote authorizing the staff to issue the Rytr complaint and proposed administrative order was 3-2, with Commissioners Melissa Holyoak and Andrew Ferguson voting no and issuing separate dissenting statements.[24] In her statement, Commissioner Holyoak voiced concern that applying the Commission's unfairness authority under Section 5 in this case would stifle innovation and competition.[25] As a threshold matter, Commissioner Holyoak stated that she was skeptical of the likelihood of substantial injury, noting that there was no concrete allegation that any of the draft content at issue was itself false or inaccurate.[26] Commissioner Holyoak further emphasized that "by banning Rytr's user review service the complaint fails to weigh the countervailing benefits Rytr's service offers to consumers or competition."[27] Commissioner Ferguson expressed similar hesitancy about the FTC's action, citing potential chilling effects and warning that the Commission's approach "risks strangling a potentially revolutionary technology in its cradle."[28] In his dissent, Commissioner Ferguson noted that treating a generative AI tool as categorically illegal merely because it could be misused is inconsistent with precedent and the public benefit.[29] The countervailing benefits highlighted by both dissents are likely to factor into any future FTC inquiries into companies that use AI to generate reviews or other testimonial content.

The newly announced Operation AI Comply cases build on a number of recent enforcement actions involving claims about AI and coincide with the U.S. Department of Justice Criminal Division's recent announcement that the latest revision of its Evaluation of Corporate Compliance Programs includes a heightened focus on how companies manage AI-related risk.[30] As we have previously noted,[31] transparency and honesty are two crucial precepts to follow when making claims about AI. Despite the inherent promise and allure of incorporating AI into business practices, before making any external statements about AI, companies and investment advisers must: (i) carefully scrutinize their AI systems' capabilities; (ii) establish AI-governance policies and procedures; and (iii) ensure that compliance and legal departments work with communications and marketing teams to review public statements and marketing materials before any AI-related disclosures are made. As enforcement agencies' focus on AI-related claims continues to expand, an old maxim remains relevant: Better safe than sorry.

[1] A. Ghavi, I. Graff, & D. Liberman, Navigating New Enforcement Scrutiny of 'AI Washing,' Law360 (Sept. 6, 2024) https://www.law360.com/articles/1877051; SEC and DOJ Charge Founder of AI Hiring Startup with Fraud, Fried Frank (Jun. 17, 2024) https://www.friedfrank.com/news-and-insights/sec-and-doj-charge-founder-of-ai-hiring-startup-with-fraud-11881; SEC Targets Investment Firms Over "AI-Washing Claims," Fried Frank (Mar. 21, 2024) https://www.friedfrank.com/news-and-insights/sec-targets-investment-firms-over-ai-washing-claims-11668.

[2] FTC Announces Crackdown on Deceptive AI Claims and Schemes, Fed. Trade Comm'n (Sept. 25, 2024) https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes.

[3] See id.

[4] See id.

[5] Complaint ¶ 4, DoNotPay, Inc., Docket No. 232-3042 (Sept. 25, 2024).

[6] Id. at ¶ 12.

[7] Id. at ¶ 20.

[8] DoNotPay, Fed. Trade Comm'n (Sept. 25, 2024) https://www.ftc.gov/legal-library/browse/cases-proceedings/donotpay.

[10] See id.

[11] See supra note 2.

[12] Complaint ¶ 13, Empire Holdings Group and Peter Prusnowski, Case No. 24-CV-4949 (E.D. Pa. Sept. 18, 2024).

[13] Id. at ¶ 14.

[14] See supra note 2.

[15] Id.

[16] Id.

[17] Id.

[18] Id.

[19] Complaint ¶ 4, Rytr LLC, Docket No. 232-3052 (Sept. 25, 2024).

[20] Id. at ¶ 8.

[21] Id.

[22] See Federal Trade Commission Announces Final Rule Banning Fake Reviews and Testimonials, Fed. Trade Comm'n (Aug. 14, 2024) https://www.ftc.gov/news-events/news/press-releases/2024/08/federal-trade-commission-announces-final-rule-banning-fake-reviews-testimonials ("The rule will allow the agency to strengthen enforcement, seek civil penalties against violators, and deter AI-generated fake reviews.").

[23] See supra note 2.

[24] See supra note 2.

[25] Dissenting Statement of Commissioner Melissa Holyoak p. 1, Rytr LLC, Docket No. 232-3052 (Sept. 25, 2024).

[26] Id. at p. 3.

[27] Id.

[28] Dissenting Statement of Commissioner Andrew Ferguson p. 1, Rytr LLC, Docket No. 232-3052 (Sept. 25, 2024).

[29] Id.

[30] The ECCP sets forth criteria for prosecutors to consider when determining the adequacy and effectiveness of a corporation's compliance program when a corporation comes within the remit of the DOJ's oversight authority. "Nicole Argentieri, principal deputy assistant attorney general and head of the DOJ's Criminal Division, said in prepared remarks at a conference held by the Society of Corporate Compliance and Ethics that prosecutors will evaluate how firms assess and manage the risks of new technology, including AI, both in their business and in their compliance programs." Sarah Jarvis, DOJ Adds AI Risk to Corporate Compliance Program, Law360 (Sept. 23, 2024) https://www.law360.com/articles/1881546. Under this policy, prosecutors will consider the technology the company uses, whether the company has conducted a risk assessment of the use of that technology, and the steps taken, if any, to mitigate risk associated with the use of that technology.

[31] See supra note 1.

This communication is for general information only. It is not intended, nor should it be relied upon, as legal advice. In some jurisdictions, this may be considered attorney advertising. Please refer to the firm's data policy page for further information.