Covington & Burling LLP

12/16/2024 | News release

November 2024 Developments Under President Biden’s AI Executive Order

This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (the "AI EO"), issued by President Biden on October 30, 2023. The first blog summarized the AI EO's key provisions and related OMB guidance, and subsequent blogs described the actions taken by various government agencies to implement the AI EO from November 2023 through October 2024. This blog describes key actions taken to implement the AI EO during November 2024, as well as potential implications of the 2024 U.S. election. We will discuss November 2024 developments implementing President Biden's 2021 Executive Order on Improving the Nation's Cybersecurity in a separate post.

NIST Issues Final Report on Synthetic Content Risks

On November 20, the National Institute of Standards and Technology ("NIST") published the final version of NIST AI 100-4, Reducing Risks Posed by Synthetic Content, following a request for information in December 2023 and a draft released for public comment in April 2024. The final report fulfills § 4.5(a) of the AI EO, which requires the Secretary of Commerce to submit a report identifying existing and potential "standards, tools, methods, and practices" for authenticating, labeling, detecting, and auditing synthetic content, and for preventing the production of AI-generated child sexual abuse material ("CSAM") and non-consensual intimate imagery ("NCII").

While noting that there is "no silver bullet to solve the issue of public trust in and safety concerns posed by digital content," the report identifies "provenance data tracking" (e.g., watermarks and digital signatures) and "synthetic content detection" as two state-of-the-art approaches for ensuring "digital content transparency" and reducing synthetic content risks. The report describes technical methods for ensuring robust and secure synthetic content watermarking and digital signatures, and outlines types of algorithms that may be used to distinguish synthetic images, video, audio, and text. Additionally, the report discusses hashing, filtering, testing, and other safeguards to prevent the creation of CSAM and NCII using generative AI tools.
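To make the provenance-tracking approach concrete, the minimal Python sketch below shows how a digital signature can bind a content hash to provenance metadata, allowing downstream recipients to verify both the origin and the integrity of a piece of content. It assumes the third-party `cryptography` package; the metadata schema, the `make_provenance_record` helper, and the `example-model-v1` generator name are hypothetical illustrations, not the format described in the NIST report or in any provenance standard such as C2PA.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_provenance_record(content: bytes, generator: str) -> bytes:
    """Bind a SHA-256 hash of the content to provenance metadata.

    The schema is hypothetical and for illustration only.
    """
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g., the AI tool that produced the content
        "content_type": "image",
    }
    return json.dumps(record, sort_keys=True).encode()


# The content producer signs the provenance record with its private key.
private_key = Ed25519PrivateKey.generate()
content = b"...synthetic image bytes..."
record = make_provenance_record(content, generator="example-model-v1")
signature = private_key.sign(record)

# Anyone holding the matching public key can check that the record is
# authentic (the signature verifies) and that the content is intact
# (the recomputed hash matches the signed one).
public_key = private_key.public_key()
try:
    public_key.verify(signature, record)
    intact = hashlib.sha256(content).hexdigest() == json.loads(record)["sha256"]
    print("record authentic; content intact:", intact)
except InvalidSignature:
    print("provenance record was forged or altered")
```

A detached signature of this kind travels alongside the file as metadata; watermarking, the other provenance technique the report discusses, instead embeds the signal in the content itself.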

Department of Education Issues Guidance on Discriminatory Uses of AI

On November 19, the Department of Education's Office for Civil Rights ("OCR") released guidance on "Avoiding the Discriminatory Use of Artificial Intelligence." The new guidance implements § 8(d) of the AI EO, which requires the Secretary of Education to develop "resources [that] address safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities." Noting that federal civil rights laws "apply to discrimination resulting from the use of AI," the guidance provides 21 examples of uses of AI in educational settings that could trigger an OCR investigation under Title VI of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, or § 504 of the Rehabilitation Act. Examples of potential violations include the use of racially biased facial recognition technology, failure to respond to AI-generated deepfakes of students, and the use of AI systems for admissions or disciplinary decisions that fail to account for students' disabilities.

Departments of Commerce and State Host Inaugural Meeting of the International Network of AI Safety Institutes

On November 20, the Departments of Commerce and State convened the inaugural meeting of the International Network of AI Safety Institutes (the "Network") in San Francisco, California. The two-day meeting brought together AI developers, academics, scientists, and business and civil society leaders with the goal of "address[ing] some of the most pressing challenges in AI safety and avoid[ing] a patchwork of global governance that could hamper innovation." At the meeting, the Network members (Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, the UK, and the U.S.) focused specifically on managing synthetic content risks, testing foundation models, and conducting risk assessments for advanced AI systems.

Ahead of the meeting, the Network members issued a joint mission statement identifying four initial priority areas for collaboration: AI safety research, best practices for AI testing, common approaches for interpreting AI tests, and global information sharing. Network members also committed more than $11 million in funding for a "joint research agenda" on mitigating synthetic content risks through content labeling techniques and model safeguards, and issued a joint statement on risk assessments for advanced AI systems.

U.S. AI Safety Institute Establishes Inter-Agency Taskforce on AI National Security and Public Safety Risks

On November 20, the U.S. AI Safety Institute ("U.S. AISI") announced the formation of the Testing Risks of AI for National Security ("TRAINS") Taskforce, which will be chaired by U.S. AISI and include representatives from the National Institutes of Health and the Departments of Defense, Energy, and Homeland Security, with more federal agencies expected to join in the future. The TRAINS Taskforce will coordinate research and testing of advanced AI models across national security and public safety domains and work to prevent adversaries from misusing AI to undermine U.S. national security. According to U.S. AISI, the taskforce implements the "whole-of-government approach to AI safety" directed by the White House's AI National Security Memorandum, issued in October 2024 and previously covered here.

Potential Shifts in U.S. AI Policy Under the Incoming Trump Administration

Following the election of President-elect Trump and Republican majorities in both houses of Congress, AI industry stakeholders anticipate significant changes to U.S. AI policy in 2025, including the revocation of the AI EO. It is unclear, however, whether the incoming administration will maintain or discontinue the more than 100 other federal agency actions already completed pursuant to the AI EO. While the incoming administration is likely to halt the Commerce Department's ongoing rulemaking to implement the AI EO's dual-use foundation model reporting and red-team testing requirements (previously covered here), efforts to promote private-sector innovation, AI R&D, and competition with China are expected to continue. On November 26, the Council on Foreign Relations issued three recommendations for the incoming Trump Administration: creating an AI commission to ensure AI safety, investing in AI research at universities and federal labs, and adopting energy policies that meet the growing energy demands of AI data centers while reducing costs. In future blogs and alerts, we will address in greater detail the second Trump Administration's likely approach to AI, including recent statements by the president-elect and the new administration's possible consistency with the first Trump Administration's AI Executive Order No. 13859 and Executive Order No. 13960.