Shelley Moore Capito

07/24/2024 | Press release | Archived content

Capito, Hickenlooper Introduce Bipartisan Bill to Create Guidelines for Third-Party Audits of AI

WASHINGTON, D.C. - U.S. Senators Shelley Moore Capito (R-W.Va.) and John Hickenlooper (D-Colo.), both members of the Senate Commerce, Science, and Transportation Committee, introduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act.

This bipartisan bill directs the National Institute of Standards and Technology (NIST) to work with federal agencies and stakeholders across industry, academia, and civil society to develop detailed specifications, guidelines, and recommendations for third-party evaluators who would work with AI companies to provide robust, independent external assurance and verification of how their AI systems are developed and tested.
"This commonsense bill will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them. I look forward to getting this bill and our AI Research Innovation and Accountability Act passed out of the Commerce Committee soon," Senator Capito said.

BACKGROUND:

Currently, AI companies make claims about how they train their AI models, conduct safety red-team exercises, and carry out risk management, all without any external verification.

The VET AI Act would create a pathway for independent evaluators, serving a function similar to those in the financial industry and other sectors, to work with companies as neutral third parties to verify that their development, testing, and use of AI comply with established guardrails. As Congress moves to establish AI regulations, evidence-based benchmarks to independently validate AI companies' claims about safety testing will only become more essential.

Specifically, the bill would:

  • Direct NIST, in coordination with the U.S. Department of Energy and National Science Foundation (NSF), to develop voluntary specifications and guidelines for developers and deployers of AI systems to conduct internal assurance and work with third parties on external assurance regarding the verification and red-teaming of AI systems.
    • Such specifications would require consideration of data privacy protections, mitigations against potential harms to individuals from an AI system, dataset quality, and the governance and communications processes of a developer or deployer throughout an AI system's development lifecycle.
  • Establish a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking to obtain certification of their ability to conduct internal or external assurance for AI systems.
  • Require NIST to conduct a study examining various aspects of the AI assurance ecosystem, including the current capabilities and methodologies used, the facilities or resources needed, and overall market demand.

Full text of the bill can be found here.

# # #