NCSL - National Conference of State Legislatures


The Right Tool for the Job Might Be AI

It just depends on the job you need to do.

By Lisa Ryckman | October 7, 2024

Sen. Whitney Westerfield of Kentucky, left, leads a discussion of how AI can be used to safely and fairly enhance the delivery of government services. To his left are Daniela Combe of IBM, Sen. James Maroney of Connecticut and Jamia McDonald of Deloitte.

Consider government the ultimate customer service business, Connecticut Sen. James Maroney suggests.

Then consider artificial intelligence the ultimate tool to improve it.

"There are lots of ways we can use AI," Maroney told a session on AI and government at NCSL's 2024 Legislative Summit. "But I think there's few ways we need to look at this: What can we do now, and what can we do in the future? And what is low risk, and what is high risk?"

"Just because the technology allows you to do these amazing things doesn't mean that you should be doing that."

-Jamia McDonald, principal at Deloitte

Maroney says low-risk uses would include efficiency tools, scheduling, summarizing, and helping with writing and communication. Higher-risk uses would include assisting with decision-making. But in either case, people need to be involved to prevent failures in AI implementation, Maroney says. He points to a New York City chatbot, typically considered "low risk," that advised people to cheat on their taxes. In Spain, an algorithm that assessed risk in domestic violence cases labeled one abuser as low risk, resulting in the death of his victim, he says.

"These are important use cases where we're implementing these, so we have to think and make sure that we're keeping the human in the loop, giving them the ability to override that judgment," Maroney says.

Alexi Madon, an IBM government relations executive, says AI could be thought of as a high-tech screwdriver. "Back in the day, a screwdriver was a big invention, and if you use a screwdriver to put together a writing desk and the writing desk sort of collapses on your knee one day, it might hurt your knee. But is it a life-changing experience? Probably not," she says. "(But) if I use the screwdriver to put together a piece of my car engine and that backfires and I get into a car accident because of it, that is a life-changing experience."

It all depends on how you're using the technology, Madon says. "Using a chatbot to find out what aisle the cough syrup is in at a retail store is wildly different than if I'm trying to access my bank account and it is using AI to recognize my voice."

Madon recommends that states devise a risk-management framework and install chief AI officers in every agency to understand what uses might work for them and how those uses should be implemented and regulated.

"Just because you can doesn't mean you should," says Jamia McDonald, principal at Deloitte. "Just because the technology allows you to do these amazing things doesn't mean that you should be doing that."

The key, she says, is first defining the problem you're trying to solve with AI.

"There's a wide spectrum of technologies available, but none of them are useful if you aren't clear on the problem set you're trying to solve," she says. "My best advice is, start there. Understand it, and the tool set follows, and the right application of the tools follow."

McDonald says with a security framework in place, "It is much easier to get to the right outcome and the right solution set. And you already have frameworks in place. Technology is not new in government. There's a new thing moving quickly, but the fundamentals all still apply. The uniqueness with generative AI in the last two years has been the acceleration of it, the limitless application of it. But the known frameworks and risks that you guys have already identified are just as applicable."

Lisa Ryckman is NCSL's associate director of communications.