Baker & Hostetler LLP

10/01/2024 | Press release | Distributed by Public on 10/01/2024 12:28

FTC’s Operation AI Comply – Let’s Discuss Dissents


On September 25, 2024, the Federal Trade Commission (FTC) announced five administrative complaints and proposed orders involving companies that - according to the agency's press release - have allegedly relied on artificial intelligence (AI) to "supercharge deceptive or unfair conduct that harms consumers." The FTC has dubbed this new initiative "Operation AI Comply."

For over a year, the FTC has shared insights into its AI focus areas through enforcement, blogs, speeches and other public comments. This new series of cases underscores specific concerns, while detailed dissenting statements clearly indicate that numerous questions remain about the extent of the FTC's authority in this domain under the FTC Act.

Before discussing the most intriguing of these Operation AI Comply cases, we note that all five are administrative settlements. These are put on the public record for 30 days, during which anyone can comment - in support or opposition. The FTC must then respond to these comments before finalizing the orders. If you want to file a comment, the deadline is expected to be at the end of October 2024, and comments for each case can be filed online via the Federal Register.

Rytr's AI Writing Assistant

Of the five newly announced cases, the one that immediately caught our attention was the FTC's action against Rytr. Rytr is marketed and sold as an AI "writing assistant" designed for generating written content. The FTC homed in on Rytr's ability to generate detailed consumer reviews, under the tool's "Testimonial & Review" feature, when given keyword prompts along with the desired language, tone and creativity level for the content. The complaint alleges that this service "generates detailed reviews that contain specific, often material details that have no relation to the user's input." The complaint further alleges that as a result, the reviews generated by this tool "would almost certainly be false" and ultimately deceptive to potential consumers. The complaint provides one example in which the term "this product" was used as an input in the Name Field with "dog shampoo" in the Review Title Field. In response to these limited inputs, the AI tool generated a very positive example review, quoted in the complaint: "As a dog owner, I am thrilled with this product. My pup has been smelling better than ever, the shedding has been reduced and his coat is shinier than ever . . . " Additionally, Rytr's AI tool appears not to have limited the number of reviews a particular user can create. According to the complaint, since Rytr's tool became available, "24 subscribers have generated over 10,000 reviews each, 114 subscribers have generated over 1,000 reviews each, and 630 subscribers have generated over 100 reviews each," resulting in the potential for a flood of fake reviews.

The complaint claims Rytr violated Section 5 of the FTC Act by giving subscribers "the means to generate false and deceptive" consumer reviews bearing no relation to the inputs, which could deceive potential customers relying on these reviews to make purchasing decisions. The complaint asserts two causes of action: (1) the tool provides "the means and instrumentalities" to commit deceptive acts and practices and (2) Rytr's service itself constitutes an unfair act or practice. The FTC voted 3-2 to authorize the complaint and proposed administrative order. Notably, Commissioners Melissa Holyoak and Andrew Ferguson, both Republicans, issued dissenting statements (and joined each other's) critiquing the legal theories set out in the Rytr case - both the unfairness and the means-and-instrumentalities allegations - making these dissents worth a closer read.

Holyoak's Dissent

Holyoak's dissent addresses the FTC Act's criteria for determining unfairness, and, as a threshold matter, questions the likelihood of substantial injury based on the facts pled. Holyoak classifies the complaint as "a misapplication of [the FTC's] unfairness authority." She is skeptical of the FTC's allegations that (1) "[Rytr] offered a service intended to quickly generate unlimited content for consumer reviews"; (2) the service offered "created false and deceptive written content for consumer reviews"; and (3) any injury from Rytr's practices was "not outweighed by countervailing benefits to consumers or competition." "Unfairness requires proof - not speculation - of harm," which Holyoak points out is markedly absent. Holyoak contends that the FTC failed to demonstrate that (1) any specific false or inaccurate content was produced by Rytr's AI tool or (2) any such content was ever posted online. She also comments that the FTC has not alleged that Rytr made any misrepresentations about the AI tool, failing to demonstrate that primary liability should attach to Rytr. Holyoak states, for instance, that any false or deceptive reviews would reflect on the product's user and not be misrepresentations attributable to Rytr.

Further, Holyoak challenges the FTC's claim that Rytr's AI tool offers "no legitimate benefits," asserting that it is typically advantageous when a tool like Rytr's "Testimonial & Review" feature helps users save time and accomplish tasks more efficiently. She argues that generative AI should be no exception, because "much of the promise of AI stems from its remarkable ability to provide such benefits to consumers using AI tools." Holyoak emphasizes the distinct stages in the writing process and praises generative AI's ability to create a preliminary draft that can inspire users to develop a more detailed and informed review. So, the review gets done faster and with less effort (which is great for the writer) while also giving better insights and detail (which is awesome for anyone interested in the reviewed product or service) - it's arguably a win-win!

Holyoak rebukes the majority for going "too far in its ban," stating that the complaint unjustly attributes full responsibility to "the neutral service itself" as a "source of harm" despite the developer neither misleading nor harming consumers. She criticizes the FTC's order prohibiting Rytr from selling any review or testimonial generation services, writing, "Banning products that have useful features but have the potential to be misused is not consistent with the Commission's unfairness authority. Nor is it consistent with a legal environment that promotes innovation. AI is a developing industry. It has vast potential. [The FTC] should take care not to squelch it by suggesting that merely providing draft content that could be used unlawfully is wrong."

Ferguson's Dissent

Ferguson's dissent examines the FTC's claim that Rytr violated Section 5 of the FTC Act by giving its users the "means and instrumentalities to deceive consumers." In his dissent, Ferguson writes that "[t]reating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents and common sense." His dissent explores the history of means-and-instrumentalities liability, a topic we won't dive into here, but Ferguson ultimately argues that Rytr does not fit into either traditional category of means-and-instrumentalities liability. Means-and-instrumentalities liability typically occurs when someone either (1) supplies a third party (not the consumer) with an unlawful, inherently deceptive product or service (or one that is solely intended to violate Section 5) or (2) makes false or misleading statements (or supplies marketing materials) to someone further down the supply chain, who then uses or repeats that statement to mislead consumers.

Ferguson calls the complaint a "dramatic extension" of the means-and-instrumentalities theory of liability, explaining that Section 5 does not "categorically prohibit a product or service merely because someone might use it to deceive someone else." Ferguson contends that the FTC's theory that Rytr's AI tool inherently violates Section 5 because "someone could use it to write a statement that could violate Section 5" is at odds with Section 5 precedents and inconsistent with adjacent areas of law, such as aiding-and-abetting liability. He also disputes the FTC's notion that Rytr's tool "has no or de minimis legitimate use," stating that a consumer could in fact use it to "draft an honest and accurate review." Ferguson notes that if the Rytr tool were used exclusively to produce false consumer reviews, there would be proof of this - yet, he notes, the complaint does not present a single example of a Rytr-generated review that has misled or deceived consumers. Ferguson additionally argues that Rytr's product holds value and utility for consumers. He further asserts that in similar means-and-instrumentalities cases, the FTC and courts have required knowledge - "[i]t is well settled law . . . [that] the originator [of a false or misleading representation] . . . is liable if it passes on a false or misleading representation with knowledge or reason to expect that consumers may possibly be deceived as a result."

Ferguson ultimately sides with certain public concerns in characterizing the FTC's "aggressive move into AI regulation [as] premature." In his dissent, he writes, "Congress has not given [the FTC] the power to regulate AI," and he cautions the FTC against stretching the law or broadening its review powers to "get at AI" or "chill innovation." Ferguson argues that deeming an AI tool illegal simply because it could be misused for fraud "threatens to turn honest innovators into lawbreakers and risks strangling a potentially revolutionary technology in its cradle." Finally, Ferguson briefly raises the free speech implications under the First Amendment, which limits the government's authority to regulate speech, noting that Rytr's tool "quite literally" helps people speak.

About Those Other Four Cases

In case you're curious about the other four companies … the FTC alleged that three of them ran deceptive AI business opportunity schemes - one promised guaranteed income through AI-powered online storefronts; one claimed its AI-powered tools could help you make big money with online storefronts; and one said you could build an "AI-powered Ecommerce Empire" by joining expensive training programs. The last case involved the "world's first robot lawyer," which, according to the FTC, didn't deliver on its promises (phew!).