Angus S. King, Jr.

07/25/2024 | Press release

King, Colleagues Demand Answers From OpenAI Following Reports of Safety and Secrecy Concerns

WASHINGTON, D.C. - Following reports from whistleblowers and former employees at OpenAI voicing safety and security concerns, U.S. Senator Angus King (I-ME) joined four of his colleagues in calling on artificial intelligence (AI) research company OpenAI to honor its "public promises and mission" regarding essential safety standards. Since its founding in 2015, OpenAI has branded itself as a safety-conscious and responsible research organization.

In the letter to OpenAI CEO Sam Altman, the senators highlight recent reporting that OpenAI whistleblowers and former employees have sounded alarms about the company prioritizing 'shiny products' over safety and societal impacts, deploying AI systems without adequate safety review, maintaining insufficient cybersecurity, and possibly retaliating against former employees who publicly air concerns. The senators also ask whether OpenAI's commitments on AI safety remain in effect and request that the company reform non-disparagement agreement practices that could deter whistleblowers from coming forward.

"We write to you regarding recent reports about OpenAI's safety and employment practices. OpenAI has announced a guiding commitment to the safe, secure, and responsible development of artificial intelligence (AI) in the public interest. These reports raise questions about how OpenAI is addressing emerging safety concerns," the Senators wrote.

The Senators continued, "Given OpenAI's position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company's governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies."

According to reports, the company has failed to honor its public commitment to allocate 20 percent of computing resources to AI safety, has reassigned members of its long-term AI safety team, and has required departing employees to sign lifelong non-disparagement agreements under threat of clawing back previously earned compensation.

On the letter, Senator King was joined by U.S. Senators Brian Schatz (D-HI), Ben Ray Lujan (D-NM), Peter Welch (D-VT), and Mark Warner (D-VA).

Senator King has been a leading voice in fighting threats from emerging technology, having served as the Co-Chair of the Cyberspace Solarium Commission - which has had dozens of recommendations become law since its launch in 2019. As a member of the Senate Intelligence and Armed Services committees, Senator King has been a strong supporter of increased watermarking regulations. In a September 2023 open Intelligence hearing, King asked Dr. Yann LeCun - a Professor of Computer Science and Data Science at New York University - about what is technologically feasible in terms of implementing watermarks (a small icon or caption) for users to discern between real and artificially created content.

The FY2024 National Defense Authorization Act includes a Senator King-led provision to evaluate technology, including applications, tools, and models, for detecting and watermarking generative artificial intelligence content. He also joined the bipartisan Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act), which would allow victims to sue for up to $150,000 perpetrators who create and share fake visual depictions made to falsely appear authentic. During a hearing of the Senate Energy and Natural Resources Committee, he raised the question of what Congress and the private sector can do to combat fake content and misinformation online. Most recently, he introduced legislation to combat non-consensual deepfake explicit images online.

The full text of the letter can be found below.

+++

Dear Mr. Altman,

We write to you regarding recent reports about OpenAI's safety and employment practices. OpenAI has announced a guiding commitment to the safe, secure, and responsible development of artificial intelligence (AI) in the public interest. These reports raise questions about how OpenAI is addressing emerging safety concerns. We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company's identification and mitigation of cybersecurity threats.

Safe and secure AI is widely viewed as vital to the nation's economic competitiveness and geopolitical standing in the twenty-first century. Moreover, OpenAI is now partnering with the U.S. government and national security and defense agencies to develop cybersecurity tools to protect our nation's critical infrastructure. National and economic security are among the most important responsibilities of the United States Government, and insecure or otherwise vulnerable AI systems are not acceptable.

Given OpenAI's position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company's governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies. The voluntary commitments that you and other leading AI companies made with the White House last year were an important step towards building this trust.

We therefore request the following information by August 13, 2024:

  1. Does OpenAI plan to honor its previous public commitment to dedicate 20 percent of its computing resources to research on AI safety?
    1. If so, describe the steps that OpenAI has taken, is taking, or will take to dedicate 20 percent of its computing resources to research on AI safety.
    2. If not, what percentage of computing resources is OpenAI dedicating to AI safety research?
  2. Can you confirm that your company will not enforce permanent non-disparagement agreements for current and former employees?
  3. Can you further commit to removing any other provisions from employment agreements that could be used to penalize employees who publicly raise concerns about company practices, such as the ability to prevent employees from selling their equity in private "tender offer" events?
    1. If not, please explain why, and describe any internal protections in place to ensure that these provisions are not used to financially disincentivize whistleblowers.
  4. Does OpenAI have procedures in place for employees to raise concerns about cybersecurity and safety? How are those concerns addressed once they are raised?
    1. Have OpenAI employees raised concerns about the company's cybersecurity practices?
  5. What security and cybersecurity protocols does OpenAI have in place, or plan to put in place, to prevent malicious actors or foreign adversaries from stealing an AI model, research, or intellectual property from OpenAI?
  6. The OpenAI Supplier Code of Conduct requires your suppliers to implement strict non-retaliation policies and provide whistleblower channels for reporting concerns without fear of reprisal. Does OpenAI itself follow these practices?
    1. If yes, describe OpenAI's non-retaliation policies and whistleblower reporting channels, and to whom those channels report.
  7. Does OpenAI allow independent experts to test and assess the safety and security of OpenAI's systems pre-release?
  8. Does the company currently plan to involve independent experts on safe and responsible AI development in its safety and security testing and evaluation processes, procedures, and techniques, and in its governance structure, such as in its safety and security committee?
  9. Will OpenAI commit to making its next foundation model available to U.S. Government agencies for pre-deployment testing, review, analysis, and assessment?
  10. What are OpenAI's post-release monitoring practices? What patterns of misuse and safety risks have your teams observed after the deployment of your most recently released large language models? What scale must such risks reach for your monitoring practices to be highly likely to catch them? Please share your learnings from post-deployment measurements and the steps taken to incorporate them into improving your policies, systems, and model updates.
  11. Do you plan to make retrospective impact assessments of your already-deployed models available to the public?
  12. Please provide documentation on how OpenAI plans to meet its voluntary safety and security commitments to the Biden-Harris Administration.

Thank you very much for your attention to these matters.

Sincerely,

###