Remarks of Alan Davidson
Assistant Secretary of Communications and Information
National Telecommunications and Information Administration
AI Accountability Policies: A Discussion with NTIA's Alan Davidson
University of Pittsburgh Institute for Cyber Law, Policy, and Security
Pittsburgh, PA
April 11, 2023
As prepared for delivery
Thank you, Beth. And thank you to the University of Pittsburgh for hosting this event, and bringing us all together for this timely discussion on artificial intelligence. I also want to thank Ellen Goodman, who you will hear from today, as well as the other leaders of NTIA's team for all their work on our AI initiative.
We at NTIA are here today because we see the benefits that responsible AI innovation will bring, and we want that innovation to happen safely. But we are concerned that it is not happening safely today.
President Biden spoke to this tension just last week, at a meeting of the President's Council of Advisors on Science and Technology. He said, "AI can help deal with some very difficult challenges like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security."
The President is right. We need to capture AI's benefits without falling prey to the risks that are emerging. That's why we want to make sure that we are creating an environment to enable trustworthy AI.
As a start, today it is clear that artificial intelligence systems, and particularly the massive innovations taking place in machine learning, will create new opportunities to improve people's lives. From the AI that accelerated the development of a COVID vaccine, to technology that assists the visually impaired, to AI-fueled advances in medical diagnosis, these tools will change lives.
Deployed properly, these systems will create new economic opportunities, defend our national interests, and help tackle big societal challenges like climate change. And we are still in the early days of development of these systems.
So it is clear that artificial intelligence systems are going to be game-changing across many sectors. But it is also becoming clear that there is cause for concern about the consequences and potential harms of AI system use. AI poses risks to privacy, security, and safety; raises the potential for bias and discrimination; threatens trust and democracy; and has implications for jobs and our economy.
We are already seeing examples of these negative consequences from the growing use of AI. Hidden biases in mortgage-approval algorithms have led to higher denial rates for communities of color. Algorithmic hiring tools screening for "personality traits" have been found non-compliant with the Americans with Disabilities Act. AI systems have been used to create "deepfake" images, audio, and video that deceive the public and hurt individuals.
These examples are just the tip of the iceberg.
Despite these risks, I am optimistic about AI. That is in part because policymakers and the public are paying a lot of attention relatively early in the deployment of this new technology.
In 2021, 130 AI-related bills were passed or proposed in the U.S. That is a big difference from the early days of social media, cloud computing, or even the Internet.
The experience we have had with other technologies needs to inform how we look ahead. We need to be pro-innovation and also protect people's rights and safety. We have to move fast, because AI technologies are moving very fast.
The Biden administration supports the advancement of trustworthy AI. We are devoting a lot of energy to that goal across government.
Last week, President Biden highlighted our Administration's Blueprint for an AI Bill of Rights, which provides guidance on the design, use, and deployment of automated systems. As the President said, it is a guide to ensure that important protections are built into AI systems from the start. He has called on Congress to act to promote responsible innovation and appropriate guardrails to protect Americans' rights and safety.
Our colleagues at NIST have published an AI Risk Management Framework that serves as a voluntary tool that organizations can use to manage risks posed by AI systems. A range of federal agencies, from the SEC to HUD to the FDA, are looking at how AI affects specific sectors of our economy and society, and how existing laws apply.
There's still a lot more to do, and that's where today's announcement from NTIA comes in.
At NTIA we serve as the President's principal advisor for telecommunications and information policy. That means we think not just about what the law says, but also about what the law ought to say.
While we continue to work on policy in the present moment, we are also looking over the horizon and formulating forward-looking recommendations. Europe and the states are moving now to address AI's risks and promise. We're here to advance federal policy and make sure we get it right.
To achieve American objectives while advancing American values, we need to develop policies to foster responsible innovation. AI systems should operate safely and protect rights. We want - and can have - an ecosystem that meets those needs while also fostering AI innovation.
That's why I am so pleased to announce today we are launching a request for comment on AI Accountability. We're seeking feedback on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems.
Much as financial audits create trust in the accuracy of financial statements, accountability mechanisms for AI can help assure that an AI system is trustworthy. Policy was necessary to make that happen in the financial sector, and it may be necessary for AI as well. Real accountability means that entities bear responsibility for what they put out into the world. The measures we're looking at are part of that accountability ecosystem.
AI systems are often opaque, and it can be difficult to know whether they perform as claimed.
Accountability policies will help shine a light on these systems and verify whether they are safe, effective, responsible, and lawful.
NTIA is seeking input on what policies should shape the AI accountability ecosystem, and our inquiry addresses a wide range of topics.
Our initiative will help build an ecosystem of AI audits, assessments, and other mechanisms to help assure businesses and the public that AI systems can be trusted. This, in turn, will feed into the broader Commerce Department and Biden Administration work on AI.
This is vital work. One of our important goals is to create a detailed framework for how we think about these issues. We need more than an analysis of consequences. We need an understanding of the right answers.
That's hard, and we will need help. We want your input to help both technologists and policymakers understand the implications of their choices.
Policymakers in particular need to understand what guardrails to put in place to support responsible innovation. Good guardrails, implemented carefully, can actually promote innovation.
Guardrails and accountability let people know what good innovation looks like. They provide safe spaces to innovate while addressing the very real concerns we have about harmful consequences.
AI can unleash tremendous benefits. AI can also have tremendous impacts on our society and our rights. People will benefit immensely from a world in which we reap the benefits of responsible AI, while minimizing or eliminating the harms.
We look forward to your help in creating that better world.