Covington & Burling LLP


August 2024 Developments Under President Biden’s AI Executive Order

This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" (the "AI EO"), issued by President Biden on October 30, 2023. The first blog summarized the AI EO's key provisions and related OMB guidance, and subsequent blogs described the actions taken by various government agencies to implement the AI EO from November 2023 through July 2024. This blog describes key actions taken to implement the AI EO during August 2024. It also describes key actions taken by NIST and the California legislature related to the goals and concepts set out by the AI EO. We will discuss developments during August 2024 to implement President Biden's 2021 Executive Order on Cybersecurity in a separate post.

OMB Releases Finalized Guidance for Federal Agency AI Use Case Inventories

On August 14, the White House Office of Management and Budget ("OMB") released the final version of its Guidance for 2024 Agency Artificial Intelligence Reporting Per EO 14110, following the release of a draft version in March 2024. The Guidance implements Section 10.1(e) of the AI EO and various sections of the OMB's March 28 Memorandum M-24-10, "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence." The Guidance also supersedes the agency AI use case inventory requirements set out in Section 5 of 2020's EO 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government."

The Guidance requires federal agencies (excluding the Department of Defense and Intelligence Community) to submit AI use case inventories for 2024 by December 16, 2024, and to post "publicly releasable" AI use cases on their agency websites. Appendix A of the Guidance lists information agencies must provide for each AI use case, including information on the AI's intended purpose, expected benefits, outputs, development stage, data and code, and enablement and infrastructure. Agencies must also address a subset of questions for AI use cases that are determined to be rights- or safety-impacting, as defined in OMB Memo M-24-10, such as whether the agency has complied with OMB Memo M-24-10's minimum risk management practices for such systems. For AI use cases that are not subject to individual reporting (including DoD AI use cases and AI use cases whose sharing would be inconsistent with law and governmentwide policy), agencies must report certain "aggregate metrics."

In addition to AI use case inventories, the Guidance provides mechanisms for agencies to report the following:

  • Agency Chief AI Officer ("CAIO") determinations of whether agencies' current and planned AI use cases are safety- or rights-impacting, as defined in Section 5(b) and Appendix I of OMB Memo M-24-10, by December 1, 2024.
  • Agency CAIO waivers of one or more of OMB Memo M-24-10's minimum risk management practices for particular AI use cases, including justifications of how the practice(s) would increase risks to rights or safety or unacceptably impede critical agency operations, by December 1, 2024.
  • Agency requests and justifications for one-year extensions to comply with the minimum risk management practices for particular AI use cases, by October 15, 2024.

NIST Releases New Public Draft of Digital Identity Guidelines

As described in our parallel blog on cybersecurity developments, on August 21, the National Institute of Standards and Technology ("NIST") released the second public draft of its updated Digital Identity Guidelines (Special Publication 800-63) for public comment, following an initial draft released in December 2022. The draft guidelines, which focus on Enrollment and Identity Proofing, Authentication and Lifecycle Management, and Federation and Assertions, also address "distinct risks and potential issues" arising from the use of AI and machine learning ("ML") in identity systems, including disparate outcomes and biased outputs. Section 3.8, "AI and ML in Identity Systems," would impose the following requirements on government contractors that provide identity proofing services ("Credential Service Providers" or "CSPs") to the federal government:

  • CSPs must document all uses of AI and ML and communicate those uses to organizations that rely on these systems.
  • CSPs that use AI/ML must provide, to any entities that use their technology, information regarding (1) their AI/ML model training methods and techniques, (2) their training datasets, (3) the frequency of model updates, and (4) results of all testing of their algorithms.
  • CSPs that use AI/ML systems or rely on services that use AI/ML must implement the NIST AI Risk Management Framework to evaluate risks that may arise from the use of AI/ML, and must consult NIST Special Publication 1270, "Towards a Standard for Managing Bias in Artificial Intelligence."

Public comments on the second public draft are due by October 7, 2024.

U.S. AI Safety Institute Signs Collaboration Agreements with Developers for Pre-Release Access to AI Models

On August 29, the U.S. AI Safety Institute ("AISI") announced "first-of-their-kind" Memoranda of Understanding with two U.S. AI companies regarding formal collaboration on AI safety research, testing, and evaluation. According to the announcement, the agreements will allow AISI to "receive access to major new models from each company prior to and following their public release," with the goal of enabling "collaborative research on how to evaluate capabilities and safety risks" and "methods to mitigate those risks." The U.S. AISI also intends to collaborate with the U.K. AI Safety Institute to provide feedback on model safety improvements.

These agreements build on the Voluntary AI Commitments that the White House has received from 16 U.S. AI companies since 2023.

California Legislature Passes First-in-Nation AI Safety Legislation Modeled on AI EO

On August 29, the California legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). If signed into law, SB 1047 would impose an expansive set of requirements on developers of "covered [AI] models," including cybersecurity protections prior to training and deployment, annual third-party audits, reporting of AI "safety incidents" to the California Attorney General, and internal safety and security protocols and testing procedures to prevent unauthorized access or misuse resulting in "critical harms." Echoing the AI EO's definition of "dual-use foundation models," SB 1047 defines "critical harms" as (1) the creation or use of CBRN weapons by covered models, (2) mass casualties or damages resulting from cyberattacks on critical infrastructure or other unsupervised conduct by an AI model, or (3) other grave and comparable harms to public safety and security caused by covered models.

Similar to the AI EO's computational threshold for AI models subject to Section 4.2(a)'s reporting and AI red-team testing requirements, SB 1047 defines "covered models" in two phases. First, prior to January 1, 2027, "covered models" are defined as AI models trained using more than 10^26 floating-point operations ("FLOPs") of computing power, where the cost of that computing power exceeds $100 million, or AI models created by fine-tuning covered models using at least 3 x 10^25 FLOPs, where the cost exceeds $10 million. Second, after January 1, 2027, SB 1047 authorizes California's Government Operations Agency to determine the threshold computing power for covered models. For reference, Section 4.2 of the AI EO requires reporting and red-team testing for dual-use foundation models trained using more than 10^26 FLOPs and authorizes the Secretary of Commerce to define and regularly update the technical conditions for models subject to those requirements.
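To make the pre-2027 two-prong threshold concrete, the short Python sketch below encodes the compute and cost tests as described above. This is purely an illustration of the statutory criteria as summarized in this post; the function and constant names are our own and do not appear in SB 1047, and the sketch omits the bill's other definitional nuances.

    # Illustrative sketch of SB 1047's pre-2027 "covered model" thresholds,
    # as summarized above. Names are hypothetical, not drawn from the statute.

    COVERED_FLOPS = 1e26            # training compute threshold (total floating-point operations)
    COVERED_COST_USD = 100_000_000  # training compute cost threshold
    FINE_TUNE_FLOPS = 3e25          # fine-tuning compute threshold
    FINE_TUNE_COST_USD = 10_000_000 # fine-tuning cost threshold

    def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
        """Prong 1: trained using more than 10^26 FLOPs, costing over $100 million."""
        return training_flops > COVERED_FLOPS and training_cost_usd > COVERED_COST_USD

    def is_covered_fine_tune(fine_tune_flops: float, fine_tune_cost_usd: float) -> bool:
        """Prong 2: fine-tune of a covered model using at least 3 x 10^25 FLOPs, costing over $10 million."""
        return fine_tune_flops >= FINE_TUNE_FLOPS and fine_tune_cost_usd > FINE_TUNE_COST_USD

    if __name__ == "__main__":
        print(is_covered_model(2e26, 150_000_000))    # True: exceeds both thresholds
        print(is_covered_fine_tune(1e25, 5_000_000))  # False: below both thresholds

After January 1, 2027, the constants above would no longer be fixed by statute; as noted, the Government Operations Agency would set the operative compute threshold.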