10/07/2024 | News release | Distributed by Public on 10/07/2024 08:48
The buzz around artificial intelligence (AI) is everywhere. Re-invigorated by ChatGPT's release in late 2022, conversations surrounding AI paint dramatic pictures of how the tech will revolutionize life as we know it, for better and for worse. Such speculation makes it difficult for users and policymakers to parse the actual benefits and challenges of AI. The truth is that AI is a broad, complex term that encompasses a variety of technologies and applications. What we know as AI has actually been around for decades and powers activity across all sectors of life.
While ChatGPT has brought generative AI to the forefront of the AI governance conversation, not enough attention has been paid to predictive AI, especially when it is used in consequential decision-making. Understanding the potential harms and benefits of different AI applications can better inform safeguards and regulations around the creation, deployment, and governance of AI systems.
There is no single definition of AI shared across academia, industry, and government. Generally, AI is used as an umbrella term to refer to both a field of study and the machine-based systems that use mathematical models to analyze inputs in order to complete specific tasks, such as making predictions, recommendations, or decisions, or generating content. AI goes beyond traditional data processing, with systems using data and algorithms (sets of rules or instructions) to learn, reason, problem-solve, process language, and perceive their environment, hence why we call these systems "intelligent."
"Artificial intelligence (AI) means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to: perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action." - National Artificial Intelligence Initiative [15 USC 9401(3)]
AI systems can use machine learning (based on algorithms and statistical models), deep learning (based on complex layers of interconnected computing systems), or a combination of both to accomplish a variety of tasks, including processing data, making predictions, and creating content. Scientists use different training models, chosen according to a system's intended purpose, to "teach" AI systems. Common training models for AI systems include supervised learning, unsupervised learning, and reinforcement learning.
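To make the idea of a system "learning" from data concrete, here is a minimal sketch of a predictive model of the kind described above: it derives a rule (here, a straight line) from example data and then applies that rule to a new input. The data points and the study-hours scenario are invented purely for illustration; real predictive AI systems use far larger datasets and more complex models.

```python
# A toy predictive model: fit a line (y = w*x + b) to example data
# by ordinary least squares, then predict an outcome for unseen input.
# All numbers below are made up for illustration.

def fit_line(xs, ys):
    """Learn slope and intercept from training examples via least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var            # slope learned from the data
    b = mean_y - w * mean_x  # intercept learned from the data
    return w, b

# Hypothetical "training data": hours of study vs. exam score.
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 71, 78]

w, b = fit_line(hours, scores)

def predict(x):
    """Apply the learned rule to a new input."""
    return w * x + b

print(round(predict(6), 1))  # prediction for an input the model never saw
```

Everything a model like this "knows" comes from its training examples, which is why biased or insufficient data, discussed later in this piece, flows directly into biased or inaccurate predictions.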
AI is used for a variety of simple and complex tasks. To better understand AI applications and the potential implications for users-both positive and negative-AI can generally be thought of in two categories: predictive AI, which analyzes existing data to forecast outcomes and inform decisions, and generative AI, which creates new content such as text, images, and audio.
The potential for both predictive and generative AI is vast. AI allows for greater learning from existing data and can reduce certain types of administrative and repetitive work, increase productivity, and inform critical decision-making for future scenarios. When implemented thoughtfully and with the appropriate safeguards, AI can advance a variety of sectors, from labor, education, and healthcare to public administration, finance, and environmental management. In many of these fields, AI is already in use, though closer analysis of the risks and benefits is needed to determine if AI is an appropriate tool for all use cases.
Health insurance, a complicated field known for its administrative burden, has become a prime space for AI automation, with one McKinsey report estimating AI could result in billions of dollars in savings. The use of predictive AI to help process claims and calculate care coverage, however, has resulted in denied care to patients in need. An investigative series by STAT found that Humana, United Healthcare, and various Blue Cross Blue Shield plans used these predictive tools to deny coverage and restrict available care. Similarly, a ProPublica investigation found that health insurance giant Cigna used a predictive algorithm in claims processing that led to bulk denials of claims without proper medical review. Health insurance companies are now facing class-action lawsuits over their use of AI. Doctors are also fighting back against increased denials of treatment driven by predictive AI, using generative AI to write letters to insurers and appeal claim denials.
For all the potential benefits AI carries, there are also associated risks and harms for users. Programming and data sources, as well as the human and systemic context that shape AI models, can be insufficient and biased, leading to unfair, inaccurate, and discriminatory outcomes. In addition, AI systems and their outcomes are not always clear or explainable, which hampers efforts to verify that the systems are accurate and fair and makes it difficult for those affected by the systems to contest their decisions.
To mitigate potential harms while capitalizing on AI's benefits, more coordinated action is needed to address the challenges of the AI systems already in use and those yet to come. Together, users, policymakers, and industry must grapple with pressing questions about the use of AI, especially predictive AI, including the following: