Policymakers in the United States and abroad are grappling with how to advance innovation in inherently multi-use generative AI technologies while building guardrails to mitigate the risk of misuse and malicious activity. Despite differences of opinion among policymakers, there is near consensus that the right way to proceed with regulation is to focus on high-risk applications of AI, and to advance measures around transparency, testing, and evaluation to mitigate risks associated with low-risk applications. Reflecting this, legislative approaches to liability have focused squarely on those who use AI in ways that create harm, or those who develop AI tools intended to do so. No one in any jurisdiction has gone as far as to effectively ban generative AI because it could be used to generate speech that could be used to deceive a third party.
Until now, the FTC's attention to AI as an emerging area has led to enforcement based on fact patterns typical of Section 5 - the FTC Act authority that governs unfair or deceptive acts or practices and is used to protect consumers from fraud, schemes, and lax business practices. This is consistent with the April 2023 joint statement of FTC Chair Lina Khan and the heads of three other federal agencies, which affirmed that "[e]xisting legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices."
That may be changing.
In September, the FTC announced a series of actions as part of the Operation AI Comply law enforcement sweep. Four of these involve traditional deceptive schemes that fall well within the scope of the FTC's Section 5 authority, such as a "robot lawyer" that failed to deliver on lofty claims, and business opportunity schemes that made false promises about how AI could help consumers get rich. The Commission approved each of these on a unanimous 5-0 vote.
The fifth action in this package, however, breaks new ground. That action involved Rytr, a company that offers an AI tool to generate written content in dozens of "use cases" - things like "Email" and "Blogs" and "Testimonial & Review." The FTC claims that by offering the "Testimonial & Review" use case (now discontinued, in response to the FTC action), Rytr "provided the means and instrumentalities to its users and subscribers to generate written content for consumer reviews that was false and deceptive" and "engaged in an unfair business practice by offering a service that was intended to quickly generate unlimited content for consumer reviews and created false and deceptive written content for consumer reviews." (Analysis of Proposed Consent Order to Aid Public Comment, at 80566; see also In the Matter of Rytr LLC, Complaint, ¶¶ 15-17.)
The Commission approved the Rytr action on a 3-2 vote with strong dissents from Commissioners Melissa Holyoak and Andrew Ferguson. The concerns raised in these dissents go to the very heart of the FTC's case against Rytr.
For starters, as Commissioner Holyoak explains, "the complaint does not allege that users actually posted any draft reviews. Since the Commission has no evidence that a single draft review was posted, the complaint centers on alleging speculative harms that may have come from subscribers with access to unlimited output from across Rytr's use cases, which included draft reviews." (Holyoak at 2.) Speculative harms of this sort do not satisfy the Section 5 requirement that "the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition." (Id.)
This is only the tip of the iceberg. More troubling, the Rytr action represents a departure from the FTC's prior AI-related enforcement. Those earlier actions involved either the sorts of schemes and deceptions that have been the FTC's bread and butter or misrepresentations made to consumers (such as about the protection and use of personal data); the Rytr case involves neither. Instead, it represents an effort to extend what is known as "means and instrumentalities" liability to actors who provide tools that could be used to the detriment of consumers.
The FTC has traditionally applied means and instrumentalities liability narrowly in two situations. As explained by Commissioner Ferguson, the first is when a product or service "is inherently deceptive" or "has no purpose apart from facilitating" a violation of Section 5. (Ferguson at 3.) This theory has been used "to pursue makers of push cards and punch boards custom-made for retailers to use in illegal marketing schemes" as well as "suppliers of mislabeled art." (Id. at 3-4.) The second situation involves "suppliers of misleading marketing materials that someone down the supply chain uses to deceive consumers," such as in a pyramid scheme. (Id.)
Means and instrumentalities is a form of direct liability, which requires active participation and knowledge of wrongfulness, as distinct from secondary theories of liability, which are not available to the FTC for Section 5 claims. (See, e.g., FTC, Trade Regulation Rule on Impersonation of Government or Businesses, Supplemental Notice of Proposed Rulemaking, at 15077, 15082 n.94 (Mar. 1, 2024).) The FTC has recently sought to expand the means and instrumentalities doctrine to reach acts or practices that are not inherently deceptive or misleading but that have the potential to enable Section 5 violations by others.
This arose in connection with the FTC's Impersonation Rule, published in March. The final Rule rejected a proposal to extend liability to the means and instrumentalities used to impersonate government, businesses, and their officials or agents, though the FTC initiated a supplemental rulemaking to consider expanding the Impersonation Rule in this way. In endorsing means and instrumentalities liability for impersonation, FTC Chair Khan stated that it would enable liability, for example, for "a developer who knew or should have known that their AI software tool designed to generate deepfakes of IRS officials would be used by scammers to deceive people about whether they paid their taxes."
What's notable about Chair Khan's hypothetical is that it involves an AI tool designed to enable deception. That is consistent with how the FTC has traditionally invoked means and instrumentalities, as Commissioner Ferguson explains in his dissent. But it is at odds with the thrust of the Rytr action, which appears to be the first time the Commission has invoked means and instrumentalities to pursue a product or service that is not "necessarily deceptive like mislabeled art, or useful only in facilitating someone else's section 5 violation like lottery punch boards." (Ferguson at 5.) Indeed, the Rytr tool "has both lawful and unlawful potential uses. A consumer could use it to draft an honest and accurate review. Or a business could use it to write a false review." (Id.)
While the Commission chose to pursue only a sliver of the capabilities of Rytr's generative AI tool, it's not clear what ultimately separates the one problematic use case - "Testimonial & Review" - from the others. One could just as easily use a function for generating "Email" to prepare a fictitious review.
And what distinguishes Rytr from the many generative AI tools available to the public that offer users unstructured prompts? Take ChatGPT, for example. In preparing this post, I asked ChatGPT to generate five fictional customer reviews for a seller of blue jeans. In virtually no time, ChatGPT delivered. "I've tried a lot of different brands, but these blue jeans are hands down the best," the first review started. "The fit is perfect, especially around the waist and thighs, which is usually a problem area for me. The material feels durable yet soft, and they haven't faded even after multiple washes. Shipping was fast too! I'm definitely getting another pair." (The other fictional reviews addressed additional facets of the jean-buying experience, including color, customer service, durability, affordability, shipping, and so on, none of which I had prompted for.) It is hard to see how this differs from the conduct underlying the FTC's concerns in the Rytr action.
But perhaps this action, involving an under-the-radar company, is meant as a test case to explore how far the FTC can extend means and instrumentalities liability without new congressional authority. Commissioner Ferguson expressed concern with this possibility, calling the Rytr action "a dramatic extension" of the doctrine that treats the "sale of a product with lawful and unlawful potential uses as a categorical Section 5 violation because someone could use it to write a statement that could violate Section 5." (Ferguson at 5.) The same could be said, he continues, "of an almost unlimited number of products and services: pencils, paper, printers, computers, smartphones, word processors, typewriters, posterboard, televisions, billboards, online advertising space, professional printing services, etc. On the Commission's theory, the makers and suppliers of these products and services are furnishing the means or instrumentalities to deceive consumers merely because someone might put them to unlawful use." (Id.)
Commissioner Holyoak, too, recognizes the harmful precedent that this action could set. As she writes: "Today's complaint suggests to all cutting-edge technology developers that an otherwise neutral product used inappropriately can lead to liability - even where, like here, the developer neither deceived nor caused injury to a consumer." (Holyoak at 5.)
Commissioner Ferguson is correct that "Congress has not given [the FTC] the power to regulate AI" distinct from its authority to "enforc[e] the prohibition against unfair or deceptive acts or practices." It is beyond dispute that this authority permits the FTC to investigate and pursue wrongdoing in connection with AI products and services, as the April 2023 joint statement makes clear. The question is what counts as wrongdoing within the scope of the FTC's authority. The FTC should be going after schemes, deception, and misrepresentations of the sort represented in the other Operation AI Comply cases. The two dissenting commissioners wholeheartedly endorse actions of this type. "But," again in Ferguson's words, the FTC "should not bend the law to get at AI. And we certainly should not chill innovation by threatening to hold AI companies liable for whatever illegal use some clever fraudster might find for their technology." (Ferguson at 10.)