John W. Hickenlooper

November 19, 2024 | Press release

VIDEO: Hickenlooper Chairs Senate Hearing on Protecting Consumers from AI Deepfakes

Hickenlooper: "Working together, we can do a better job protecting Americans from these potential risks… but at the same time unleash the innovation that this country is known for."

WASHINGTON - Today, U.S. Senator John Hickenlooper chaired a hearing of the Senate Commerce Committee's Subcommittee on Consumer Protection, Product Safety, and Data Security on how we can better protect consumers from artificial intelligence (AI)-enabled fraud and scams, including deepfakes and non-consensual intimate imagery.

"These AI-enabled scams don't just cost consumers financially, but they damage reputations and relationships, and equally important they cast doubt about what we see and what we hear online," said Hickenlooper in the hearing. "As AI-generated content gets more elaborate, more realistic, literally almost anyone can fall for one of these fakes. We have a responsibility to raise the alarm for these scams and these frauds, and begin to be a little more aggressive in what can be done to avoid them."

Hickenlooper was joined by Ranking Member Marsha Blackburn; Hany Farid, Professor at the University of California, Berkeley, School of Information; Justin Brookman, Director of Technology Policy at Consumer Reports; Mounir Ibrahim, Chief Communications Officer & Head of Public Affairs at Truepic; and Dorota Mani, an advocate and mother of a victim of non-consensual intimate imagery.

During the hearing, Hickenlooper and the other senators asked witnesses about how consumers can better recognize and avoid these new kinds of harms through labeling technology like watermarks, and how we can better support victims of deepfakes and AI-enabled scams.

"I think our constituents in our home states, but across the country and really around the world, are waiting for us to take the reins and strengthen America's leadership, but at the same time not compromising our commitment to innovation and transparency in AI," said Hickenlooper in the hearing. "This has been, can, and should remain a nonpartisan or bipartisan effort that focuses on certain core issues."

Last year, Hickenlooper also chaired a Senate hearing on how to increase transparency in AI technologies for consumers, identify uses of AI that are beneficial or "high risk," and evaluate the potential impact of policies designed to increase trustworthiness in the transformational technology.

Hickenlooper is also a cosponsor of the bipartisan TAKE IT DOWN Act, which criminalizes the publication of non-consensual intimate imagery (NCII), including AI-generated "deepfake pornography," on social media and other online sites, and requires social media companies to have procedures to remove content upon notification from a victim. The TAKE IT DOWN Act passed out of the Commerce Committee in July, along with Hickenlooper's bipartisan Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act, which creates guidelines for third-party audits of AI systems, and three other bipartisan AI-related bills; all now head to the Senate floor for a full vote.

For a full video of Hickenlooper's opening remarks, click HERE.

Full text of Hickenlooper's opening remarks below:

"We stand today, I think everyone agrees, at a crossroads where American leadership in AI is going to depend on which of many courses Congress takes going forward.

"This has been, can, and should remain a nonpartisan, or let's say a, bipartisan effort that focuses on certain core issues.

"I think promoting transparency in how developers build new models, adopting evidence-based standards to deliver solutions to the problems that we know exist in AI.

"And third, building Americans' trust in what could be a, and has been on occasion, a very disruptive technology.

"I think our constituents in our home states, but across the country and really around the world, are waiting for us to take the reins and strengthen America's leadership, but at the same time not compromising our commitment to innovation and transparency in AI.

"I believe in my heart and soul, I think most of us on this committee believe that American leadership is essential for AI, for a whole variety of reasons. I think we'll go through a lot today.

"We already know that the capabilities in the field of Artificial Intelligence, those capabilities are evolving and changing rapidly.

"Generative AI tools allow almost anyone to create realistic synthetic video. Synthetic text, image, audio, video - you name it. Whatever content you can imagine, we can now create.

"While AI will have enormous benefits, and I truly believe that, benefits in our daily lives in sectors like clean energy, medicine, workplace productivity, workplace safety.

"For all those benefits, we have to mitigate and anticipate the concurrent risks that this technology brings along with it.

"Just one example, this image behind me was created using AI tools depicting a young girl holding a dog in the aftermath of Hurricane Helene.

"We added a label to clearly show that the image is AI-generated.

"While not a real image, it appears to be extremely realistic at first glance, although I grilled my staff on exactly what were the details. How could a trained observer recognize this as synthetic?

"But I think that without a clear label, it really does take experience and training. A much closer look to see flaws in the image. The young girl's left hand is somewhat misshapen from a natural photograph…

"I think we recognize and should all own up to the fact that scammers are already using this new technology to prey on innocent consumers. There are a number of bad actors out there that see this as a vehicle for sudden and dramatic enrichment.

"In one example, scammers have cloned the voices of loved ones saying they've been kidnapped, they've been abducted, and this familiar voice is begging for ransom payments.

"Other deepfake videos show celebrities endorsing products or candidates who they had never endorsed, and really had no intention of endorsing.

"Many other troubling examples include children, teens, and women depicted in non-consensual intimate and violent images or videos that on many occasions cause deep emotional harm.

"These AI-enabled scams don't just cost consumers financially, but they damage reputations and relationships, and equally important they cast doubt about what we see and what we hear online.

"As AI-generated content gets more elaborate, more realistic, literally almost anyone can fall for one of these fakes. I think we have a responsibility to raise the alarm for these scams and these frauds, and begin to be a little more aggressive in what can be done to avoid them.

"During our hearing today, we'll begin to understand and look at what tools and techniques companies and consumers can use to recognize malicious deepfakes, to be able to discuss which public and private efforts are needed to educate the public with the experiences and the skills necessary to avoid AI-enabled scams, and then thirdly: to highlight enforcement authorities we can establish to deter bad actors and prevent further harm coming to consumers.

"This Committee is already at work on this, and has already drafted, amended, and passed several bipartisan bills focused on AI issues. I'll just run through these:

"The Future of AI Innovation Act fosters partnerships between government, the private sector, and academia to promote AI innovation.

"The Validation and Evaluation for Trustworthy AI Act creates a voluntary framework that will enable third-party audits of AI systems.

"The AI Research, Innovation, and Accountability Act increases R&D into content authenticity, requires consumer AI transparency, and creates a framework to hold AI developers accountable.

"The COPIED Act has not yet been considered by the Committee, but it increases federal R&D in synthetic content detection and creates enforceable rules to prevent bad actors from manipulating labels on content.

"Lastly, the TAKE IT DOWN Act makes it a criminal offense to create or distribute non-consensual intimate images (NCII) of individuals.

"These are each bipartisan bills. They lay the groundwork for responsible innovation and address real problems with thoughtful solutions. They're not perfect, and I trust we'll come away with improvements drawn from your sage wisdom.

"We look forward to working together to get these across the finish line and passed into law in the coming weeks.

"But we know that the bad actors, unfortunately, still continue to try and use this technology for fraudulent purposes.

"To combat fraud, the Federal Trade Commission, the FTC, recently adopted rules to prohibit the impersonation of government agencies and businesses, including through the use of AI.

"The FTC is also considering extending this protection to individuals, including through visual or audio deepfakes.

"This is one very good example of a federal agency taking decisive action to address a specific harm.

"We all need to encourage further targeted and specific efforts like this basic common-sense rule.

"States across the country have begun to enact legislation - states being the laboratories of democracy - to try and address the creation and distribution of deepfake media.

"Again, the map behind me here shows states taking action.

"The yellow states have enacted legislation related to AI use in election contexts.

"States in purple have enacted legislation related to non-consensual intimate imagery.

"And states in red have enacted legislation related to both of these, or instead to address other AI-generated media concerns.

"As we can see, this is, right now, a patchwork of protections that defies the predictability pretty much any industry needs to prosper. A consistent federal approach would be tremendously beneficial to fill in these gaps and make sure we're protecting all Americans.

"Promoting responsible AI will also rely on our partners in the private sector.

"A number of companies, including Anthropic, Google, Microsoft, OpenAI, and others, have made voluntary commitments to responsible AI practices.

"These include commitments to help Americans understand whether the content they see is AI-generated.

"It can also be done through watermarks or similar technology that identifies the origin, ownership, or permitted uses of a piece of AI-generated content.

"Today, we're going to hear from leading experts, all of you, in artificial intelligence and AI-generated media about what, from your perspective, is already happening, but more importantly what else needs to be done and where we should be focusing.

"Hopefully, working together, we can do a better job protecting Americans from these potential risks, and the scams and frauds, but at the same time make sure we unleash the innovation that this country is known for, especially in this emerging technology.

"I don't have to go into the importance of the competition around AI. This is a competition some of our global rivals take very seriously, and if we are any less focused on that, that will be to our own detriment."

###