This quarterly update highlights key legislative, regulatory, and litigation developments in the third quarter of 2024 related to artificial intelligence ("AI") and connected and automated vehicles ("CAVs"). As noted below, some of these developments provide industry with the opportunity for participation and comment.
I. Artificial Intelligence
Federal Legislative Developments
There continued to be strong bipartisan interest in passing federal legislation related to AI. While it has been challenging to pass legislation through this Congress, there remains the possibility that one or more of the more targeted bills that have bipartisan support and Committee approval could advance during the lame duck period.
- Senate Commerce, Science, and Transportation Committee: Lawmakers in the Senate Commerce, Science, and Transportation Committee moved forward with nearly a dozen AI-related bills, including legislation focused on developing voluntary technical guidelines for AI systems and establishing AI testing and risk assessment frameworks.
- In July, the Committee voted to advance the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (S.4769), which was introduced by Senators John Hickenlooper (D-CO) and Shelley Moore Capito (R-WV). The Act would require the National Institute of Standards and Technology ("NIST") to develop voluntary guidelines and specifications for internal and external assurances of AI systems, in collaboration with public and private sector organizations.
- In August, the Promoting United States Leadership in Standards Act of 2024 (S.3849) was placed on the Senate legislative calendar after advancing out of the Committee in July. Introduced in February 2024 by Senators Mark Warner (D-VA) and Marsha Blackburn (R-TN), the Act would require NIST to support U.S. involvement in the development of AI technical standards through briefings, pilot programs, and other activities.
- In July, the Future of Artificial Intelligence Innovation Act of 2024 (S.4178), introduced in April by Senators Maria Cantwell (D-WA), Todd Young (R-IN), John Hickenlooper (D-CO), and Marsha Blackburn (R-TN), was ordered to be reported out of the Committee and gained three additional co-sponsors: Senators Roger F. Wicker (R-MS), Ben Ray Lujan (D-NM), and Kyrsten Sinema (I-AZ). The Act would codify the AI Safety Institute, which would be required to develop voluntary guidelines and standards for promoting AI innovation through public-private partnerships and international alliances.
- In July, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (S.3312) passed out of the Committee, as amended. Introduced in November 2023 by Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Ben Ray Lujan (D-NM), and Shelley Moore Capito (R-WV), the Act would establish a comprehensive regulatory framework for "high-impact" AI systems, including testing and evaluation standards, risk assessment requirements, and transparency report requirements. The Act would also require NIST to develop sector-specific recommendations for agency oversight of high-impact AI, and to research and develop means for distinguishing between content created by humans and AI systems.
- Senate Homeland Security and Governmental Affairs Committee: In July, the Senate Homeland Security Committee voted to advance the PREPARED for AI Act (S.4495). Introduced in June by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), the Act would establish a risk-based framework for the procurement and use of AI by federal agencies and create a Chief AI Officers Council and agency AI Governance Board to ensure that federal agencies benefit from advancements in AI.
- National Defense Authorization Act for Fiscal Year 2025: In August, Senators Gary Peters (D-MI) and Mike Braun (R-IN) proposed an amendment (S.Amdt.3232) to the National Defense Authorization Act for Fiscal Year 2025 (S.4638) ("NDAA"). The amendment would add the Transparent Automated Governance Act and the AI Leadership Training Act to the NDAA. The Transparent Automated Governance Act would require the Office of Management and Budget ("OMB") to issue guidance to agencies to implement transparency practices relating to the use of AI and other automated systems. The AI Leadership Training Act would require OMB to establish a training program for federal procurement officials on the operational benefits and privacy risks of AI. The Act would also require the Office of Personnel Management ("OPM") to establish a training program on AI for federal management officials and supervisors.
Federal Executive and Regulatory Developments
The White House and federal regulators continued to pursue their AI objectives, relying on existing legal authority to support their activities. With the upcoming change in administration, new executive branch leadership will have the opportunity to revisit and, if they choose, alter the trajectory of the federal government's regulation of AI.
- The White House: The White House announced, among other AI-related developments, the launch of a new Task Force on AI Datacenter Infrastructure to coordinate policy across the government. The interagency Task Force will be led by the National Economic Council, National Security Council, and White House Deputy Chief of Staff to provide streamlined coordination on policies to advance datacenter development operations in line with economic, national security, and environmental goals.
- Federal Communications Commission ("FCC"): FCC Chairwoman Jessica Rosenworcel announced that she had sent letters to nine telecommunications companies seeking answers about the steps they are taking to prevent future fraudulent robocalls that use AI for political purposes. In addition, the FCC published a Notice of Proposed Rulemaking ("NPRM") that would amend its rules under the Telephone Consumer Protection Act ("TCPA") to incorporate new consent and disclosure requirements for the transmission of AI-generated calls and texts. The public comment period ended on October 25, 2024.
- Federal Trade Commission ("FTC"): The FTC announced that it has issued orders to eight companies that offer surveillance pricing products and services that incorporate data about consumers' characteristics and behavior. The orders are aimed at helping the FTC better understand the opaque market for products by third-party intermediaries that claim to use advanced algorithms, AI, and other technologies, along with personal information about consumers. In addition, the FTC announced "Operation AI Comply," an enforcement sweep involving actions against five companies that rely on AI "as a way to supercharge deceptive or unfair conduct that harms consumers."
- U.S. Patent and Trademark Office ("USPTO"): The USPTO issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including AI. The guidance provides background on the USPTO's efforts related to AI and subject matter eligibility, an overview of the USPTO's patent subject matter eligibility guidance, and additional discussion on certain areas of the guidance that are particularly relevant to AI inventions, including discussions of Federal Circuit decisions on subject matter eligibility. The guidance took effect on July 17, 2024.
- U.S. Copyright Office: The U.S. Copyright Office released Part 1 of its report, Copyright and Artificial Intelligence, on legal and policy issues related to copyright and AI. Part 1 focuses on the topic of digital replicas, which it defines as "video[s], image[s], or audio recording[s] that ha[ve] been digitally created or manipulated to realistically but falsely depict an individual." The report recommends that Congress enact a federal digital replica law to protect individuals from the knowing distribution of unauthorized digital replicas.
- Department of Homeland Security ("DHS"): DHS Secretary Alejandro N. Mayorkas and Chief AI Officer Eric Hysen announced the first ten members of the "AI Corps," DHS's first-ever sprint to recruit 50 AI technology experts. The new hires are intended to play pivotal roles in DHS efforts to responsibly leverage AI across strategic mission areas. The ten inaugural AI Corps hires are technology experts with backgrounds in AI and machine learning ("ML"), data science, data engineering, program and product management, software engineering, cybersecurity, and the safe and responsible use of these technologies.
State Legislative Developments
States continued to pursue and enact new laws affecting the development, distribution, and/or use of AI, expanding the legal patchwork of AI laws across the United States.
- Algorithmic Discrimination & Consumer Protection: Illinois enacted HB 3773, which amends the Illinois Human Rights Act to require employers to notify employees if they are using AI for employment-related decisions. HB 3773 also prohibits the use of AI systems for employment decisions if the use results in discriminatory effects on the basis of protected classes or if the AI system uses zip codes as a proxy for protected classes.
Following the enactment of the Colorado AI Act (SB 205) in May, Colorado Attorney General Phil Weiser issued a request for public input on a list of pre-rulemaking considerations to inform future rulemaking and the ongoing effort, announced by state officials in June, to revise the law. The Attorney General is specifically seeking comment on SB 205's developer, deployer, and "high-risk AI" definitions, documentation and impact assessment requirements, and consistency with laws in other jurisdictions, among other topics. The informal input on rulemaking and revisions must be submitted through an online comment portal by December 30, 2024, and will be posted on the Attorney General's comment website after receipt.
- Election-Related Synthetic Content Laws: Hawaii enacted SB 2687, prohibiting the distribution of materially deceptive AI-generated political advertisements during election years. California enacted AB 2839, prohibiting the distribution of AI-generated election communications that depict election candidates, officials, or voting equipment within six months of an election, and New Hampshire enacted HB 1596, prohibiting the distribution of deepfakes of election candidates, officials, or parties within three months of an election. California also enacted AB 2355, which requires AI disclaimers on political advertisements with content generated or substantially altered by AI. Finally, California enacted the Defending Democracy from Deepfake Deception Act (AB 2655), which requires online platforms to block deceptive AI-generated election content within six months of an election, label deceptive AI-generated election content within one year of an election, and provide users with mechanisms to report deceptive AI-generated election content.
- AI-Generated CSAM & Intimate Imagery Laws: North Carolina enacted HB 591, prohibiting the disclosure or threatened disclosure of AI-generated intimate imagery with intent to harm the person depicted and the creation or distribution of AI-generated CSAM. New Hampshire enacted HB 1432, which prohibits the creation or distribution of deepfakes with intent to cause financial or reputational harm. California enacted three laws regulating AI-generated CSAM or intimate imagery: SB 926, which prohibits the creation and distribution of digital or computer-generated intimate imagery that causes severe emotional distress, AB 1831, which prohibits the possession, distribution, or creation of AI-generated CSAM, and SB 981, which requires online platforms to remove, and provide mechanisms for users to report, AI-generated sexually explicit deepfakes on the platform.
- Laws Regulating AI-Generated Impersonations & Digital Replicas: Illinois and California each enacted laws regulating the creation or use of AI-generated digital replicas. Illinois HB 4875 amends the Illinois Right of Publicity Act to prohibit the distribution of unauthorized digital replicas, and California AB 1836 prohibits the production or distribution of digital replicas of deceased persons for commercial purposes without consent. Illinois and California also enacted laws regulating personal or professional services contracts that allow for the creation or use of digital replicas. The Illinois Digital Voice & Likeness Protection Act (HB 4762) and California AB 2602 both require such contracts to include reasonably specific descriptions of the intended uses of digital replicas and require adequate representation for performers.
- Generative AI Transparency & Disclosure Laws: California enacted two laws that impose transparency and disclosure requirements for generative AI systems or services. The California AI Transparency Act (SB 942) requires providers of generative AI systems with over 1 million monthly users to provide AI content detection tools and optional visible watermarks on AI-generated content. Providers must also automatically add metadata disclosures to any content created using the provider's generative AI system. California AB 2013 requires developers of publicly available generative AI systems or services to post "high-level summaries" of datasets used to develop generative AI on their public websites, including information about the sources or owners of datasets and whether the datasets include personal information or data protected by copyright, trademark, or patent.
AI Litigation Developments
- New Complaints with New Theories:
- Right of Publicity Complaint: On August 29, two professional voice actors, along with the authors and publishers who own copyrights in the audiobooks they voiced, sued AI-powered text-to-speech company Eleven Labs for alleged misappropriation of the voice actors' voices and likenesses. The complaint brought claims for (1) invasion of privacy via misappropriation of likeness and right of publicity under Texas common law, (2) unjust enrichment under Texas law, (3) misappropriation of likeness and publicity under New York Civil Rights Law Section 51, and (4) violation of the DMCA anticircumvention provisions, 17 U.S.C. §§ 1201 and 1203. Vacker v. Eleven Labs Inc., 1:24-cv-00987 (D. Del.).
- Patent and Antitrust Complaint: On September 5, Xockets filed suit against Nvidia, Microsoft, and RPX for allegedly appropriating its patented data processing unit (DPU) technology and committing antitrust violations, including forming a buyers' cartel and seeking to monopolize the AI industry. Xockets seeks to enjoin the release of Nvidia's new Blackwell GPU-enabled AI servers as well as Microsoft's use of DPU technology in its generative AI platforms. Xockets, Inc. v. Nvidia Corp., 6:24-cv-453 (W.D. Tex.).
- Criminal Indictment: On September 4, the U.S. Department of Justice announced the unsealing of a three-count criminal indictment against Michael Smith in connection with a purported scheme to use GenAI to create hundreds of thousands of songs and use bots to stream them billions of times, allegedly generating more than $10 million in fraudulent royalty payments. United States v. Smith, 24-cr-504 (S.D.N.Y.).
- Notable Case Developments:
- On August 12, the court in Andersen v. Stability AI Ltd., 3:23-cv-00201 (N.D. Cal.), granted in part and denied in part defendants' motion to dismiss the first amended complaint. This case involves claims against Stability AI, Runway AI, Midjourney, and DeviantArt regarding alleged infringement of copyrighted images in connection with development and deployment of Stable Diffusion. For Stability AI, the court found sufficient allegations of "induced" infringement, but dismissed the Digital Millennium Copyright Act ("DMCA") claims with prejudice. For Runway AI, the court found that direct infringement and "induced" infringement had been sufficiently pled, based on allegations of Runway's role in developing and inducing downloads of Stable Diffusion and allegations that "training images remain in and are used by Stable Diffusion." For Midjourney, the court found that copyright, false endorsement, and trade dress claims had been sufficiently pled, but dismissed the DMCA claims. For DeviantArt, the court found that copyright claims had been sufficiently pled, but dismissed the breach of contract and breach of implied covenant claims with prejudice. For all defendants, the court dismissed the unjust enrichment claims with leave to amend.
- On August 8, in the consolidated case of In re OpenAI ChatGPT Litigation, 3:23-cv-3223 (N.D. Cal.), the court partially overturned a discovery order requiring plaintiffs to share all methods and data used to test ChatGPT in preparation for litigation. Instead, the court ordered plaintiffs to disclose only the prompts, outputs, and account settings that produced the results on which the complaint was based, but not the prompts, outputs, or settings that produced results not relied on by the complaint. On September 24, OpenAI agreed to a "Training Data Inspection Protocol" for disclosure of "data used to train relevant OpenAI LLMs."
- On September 13, the court in The New York Times Company v. OpenAI Inc., 1:23-cv-11195 (S.D.N.Y.), denied the defendants' motion to compel production of "plaintiff's regurgitation efforts," as well as its motion to compel discovery of originality and registration of the works at issue, which reached more than ten million works after an amendment to the complaint in August. This case involves claims against Microsoft and OpenAI regarding alleged infringement of copyrighted articles in connection with the training and deployment of LLMs.
II. Connected & Automated Vehicles
- Federal Interest in Accelerating V2X Deployment: As we reported, on August 16, 2024, the U.S. Department of Transportation ("USDOT") announced Saving Lives with Connectivity: A Plan to Accelerate V2X Deployment (the "plan"). The plan is intended to "accelerate the deployment" of vehicle-to-everything technology ("V2X") and support USDOT's goal of establishing a comprehensive approach to roadway fatality reduction. The plan describes V2X as technology that "enables vehicles to communicate with each other, with road users such as pedestrians, cyclists, individuals with disabilities, and other vulnerable road users, and with roadside infrastructure, through wirelessly exchanged messages," and lays out short-, medium-, and long-term V2X goals for the next twelve years. These include increasing the deployment of V2X technology across the National Highway System and top metro areas' signalized intersections, developing interoperability standards, and working with the FCC on spectrum use. USDOT also intends to coordinate resources across federal agencies to support government deployment of V2X technologies and develop V2X technical assistance and supporting documentation for deployers, including original equipment manufacturers and infrastructure owner-operators.
- Continued Attention on Connected Vehicle Supply Chain: As we reported, on September 26, 2024, the Department of Commerce published a notice of proposed rulemaking ("NPRM") in the Federal Register on Securing the Information and Communications Technology and Services Supply Chain. This NPRM follows an advance notice of proposed rulemaking ("ANPRM") from March 1, 2024. The proposed rule focuses on hardware and software integrated into the Vehicle Connectivity System ("VCS") and software integrated into the Automated Driving System ("ADS"). The proposed rule would ban transactions involving such hardware and software designed, developed, manufactured, or supplied by persons owned by, controlled by, or subject to the jurisdiction of the People's Republic of China and Russia. The NPRM cites concerns about malicious access to these systems, which adversaries could use to collect sensitive data or remotely manipulate cars. The proposed rule would apply to all wheeled on-road vehicles, but would exclude vehicles not used on public roads, like agricultural or mining vehicles.
We will continue to update you on meaningful developments in these quarterly updates and across our blogs.