Veradigm Inc.

06/29/2024 | News release

Artificial Intelligence (AI) in Healthcare: Finding an Ethical Balance

Written by: Sarah Winslow and Cheryl Reifsnyder, PhD

Artificial intelligence (AI) is currently transforming healthcare. It's providing new ways to diagnose, treat, and monitor today's patients and helping providers deliver more personalized, targeted treatments.

However, even though AI is demonstrating increasing benefits and applications in healthcare, it's important to be mindful of the risks that arise when AI is not used responsibly. These risks include the potential for unconscious amplification of biases in patient diagnosis and treatment, as well as risks to patient safety and privacy.

These and other potential risks make it crucial for healthcare professionals to develop a thorough understanding of how AI is used in healthcare and of the possibility of bias and error in those uses. That's why we bring you this article, the first in a 3-part series covering AI in healthcare: to clarify some of the key ethical questions that must be addressed for AI to be used responsibly in healthcare.

Potential risk #1: Patient privacy

Successfully integrating AI into healthcare requires great care with patient data usage and adherence to strict regulations designed to protect patient privacy. The Health Insurance Portability and Accountability Act (HIPAA) in the U.S. and the General Data Protection Regulation (GDPR) in Europe establish standards for safeguarding the privacy and security of patient data. Compliance with these regulations is crucial for all healthcare AI applications.

A recent U.S. survey showed that privacy is one of patients' primary concerns about AI's increasing use in healthcare, perhaps because AI systems frequently collect large amounts of healthcare-related data. AI systems also often rely on analysis of patient data to provide accurate diagnoses and treatment recommendations. If not properly handled, these data have great potential for misuse.

Despite existing regulations, numerous concerns remain about the privacy of patient data. The healthcare industry overall has proven to be particularly vulnerable to attack. According to IBM Security's Cost of a Data Breach Report for 2023, data breaches in the healthcare industry have an average cost of $10.93 million, making them the most expensive category of data breaches. Many of the organizations that initially developed and produced AI tools have had poor privacy protections, resulting in instances where patients were not fully informed of privacy issues or given control over their personal health information.

Another area of concern relates to the large volumes of private health data used to train AI tools. In some cases, these tools have created exposure risks because they memorize and retain patient information. Studies have shown that AI tools can often re-identify individuals from the data stored in large health data collections, even if those data were anonymized prior to use. In some cases, the AI could make detailed guesses about an individual's non-health data as well as re-identify the patient.
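To make this risk concrete, the sketch below shows one well-known re-identification technique, a linkage attack, in which an "anonymized" dataset is joined to a public dataset on shared quasi-identifiers such as ZIP code, birth year, and sex. All names, values, and column names are invented for illustration and are not drawn from any specific study or product.

```python
# Hypothetical illustration of a linkage attack: joining an "anonymized"
# health dataset to a public dataset on quasi-identifiers. All data invented.
import pandas as pd

# "Anonymized" records: direct identifiers removed, quasi-identifiers kept
anonymized_records = pd.DataFrame({
    "zip": ["60614", "60614", "73301"],
    "birth_year": [1962, 1985, 1990],
    "sex": ["F", "M", "F"],
    "diagnosis": ["type 2 diabetes", "asthma", "hypertension"],
})

# Public records (e.g., a voter roll) that still carry names
public_records = pd.DataFrame({
    "name": ["A. Example", "B. Example"],
    "zip": ["60614", "73301"],
    "birth_year": [1962, 1990],
    "sex": ["F", "F"],
})

# If a quasi-identifier combination is unique, the join re-attaches
# a name to a supposedly anonymous diagnosis.
reidentified = anonymized_records.merge(
    public_records, on=["zip", "birth_year", "sex"], how="inner"
)
print(reidentified[["name", "diagnosis"]])
```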

Potential risk #2: Patient safety

Despite AI's promise of improving patient safety and health outcomes in many scenarios, patient safety remains a primary ethical concern for AI's use in healthcare. One reason is that AI-powered tools are only as accurate as the medical devices used to collect the data used in the AI's analysis. Because medical devices are typically developed in wealthy countries and tested on predominantly white, physically fit subjects, groups that rarely include racial and ethnic minorities, the data produced by these devices carry great potential for bias.

The pulse oximeter provides a powerful example of this potential risk. Pulse oximeters are sensors that perform a noninvasive test to measure a patient's blood oxygen saturation level. They provide a faster and simpler alternative to the more accurate method of measuring blood oxygen saturation, which is to test an arterial blood sample. Due to its ease of use, the pulse oximeter has become a widely used tool for monitoring patients with many conditions. Increasingly, data generated by pulse oximeters are fed into AI algorithms, which are then used to guide healthcare decisions.

However, multiple studies have shown that pulse oximeter readings carry an increased risk of inaccuracy for patients with darker skin tones: they frequently overestimate the actual blood oxygen saturation in darker-skinned patients. Feeding data that overestimate blood oxygen saturation into AI algorithms reduces the accuracy of the analyses and recommendations those algorithms produce.
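A minimal, entirely synthetic sketch of this mechanism follows: two patient groups with identical true oxygen saturation, but one group's readings carry a small positive bias, as some studies have reported for darker skin tones. A fixed alert threshold then misses true hypoxemia more often in the biased group. The error sizes and thresholds below are invented to illustrate the mechanism, not real device performance figures.

```python
# Synthetic sketch: a uniform SpO2 alert threshold applied to readings with a
# group-specific positive measurement bias. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True arterial oxygen saturation for two patient groups (same distribution)
true_sao2_a = rng.normal(93, 4, n)
true_sao2_b = rng.normal(93, 4, n)

# Simulated pulse-ox readings: group B readings overestimate by ~2 points
reading_a = true_sao2_a + rng.normal(0, 1.5, n)
reading_b = true_sao2_b + 2.0 + rng.normal(0, 1.5, n)

THRESHOLD = 90  # alert if the reading falls below 90%

def missed_hypoxemia_rate(true_sao2, reading):
    """Share of truly hypoxemic patients (SaO2 < 88%) who trigger no alert."""
    truly_hypoxemic = true_sao2 < 88
    no_alert = reading >= THRESHOLD
    return (truly_hypoxemic & no_alert).sum() / truly_hypoxemic.sum()

print("missed hypoxemia, group A:", round(missed_hypoxemia_rate(true_sao2_a, reading_a), 3))
print("missed hypoxemia, group B:", round(missed_hypoxemia_rate(true_sao2_b, reading_b), 3))
```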

When pulse oximeters were initially tested, the FDA's premarket guidance recommended that developers have a "range of skin pigmentation" represented in clinical studies for the devices. A "range" was suggested to include at least 2 subjects with dark skin pigmentation, or at least 15% of the study group, whichever was larger. An FDA panel is currently considering proposals to require clinical trials to include more diverse groups of patients.

Potential risk #3: Healthcare disparity

AI is a mirror that reflects the information it is given, whether high-quality or low-quality, positive or negative. Unfortunately, that means AI systems can perpetuate and amplify their creators' underlying biases. Those could be biases against particular gender, racial, or ethnic identities; they could also be broader, such as cultural biases that reflect the society in which the creators were educated. This type of cultural bias means that an AI system that seems effective in one location may be difficult to use reliably in another geographic area.

Bias can also arise if AI systems are trained using data that are not representative of the population for which they are being used. This can lead to potential inaccuracies when their algorithms are used to evaluate patients in minority groups not represented in the training data. For instance, some studies have found racial discrepancies and limitations in AI algorithms due to the lack of available healthcare data for women and minority populations. Any population-level and system-level data mined from domains with bias will likely carry that same bias into the implementation of the final algorithm.

AI-enabled tools used for healthcare applications require appropriate human oversight, testing, and risk mitigation to prevent them from making potentially dangerous errors. In cases like these, experts hold that bias could be reduced by training the AI on more diverse datasets. However, bias will probably never be eliminated entirely from AI algorithms.
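One concrete form such testing can take is a stratified performance audit: checking a model's error rate separately for each patient subgroup rather than relying on a single aggregate figure. The sketch below assumes a generic fitted classifier and a labeled validation table; the column names, the `model` object, and its `predict` method are hypothetical stand-ins, not any particular vendor's API.

```python
# Hypothetical sketch of a stratified performance audit: compare a model's
# error rate per subgroup instead of one aggregate number. The DataFrame,
# column names, and `model` are assumptions made for illustration.
import pandas as pd

def subgroup_error_rates(validation_df: pd.DataFrame, model, group_col: str) -> pd.Series:
    """Return the misclassification rate for each subgroup in `group_col`."""
    features = validation_df.drop(columns=["outcome", group_col])
    predictions = model.predict(features)
    errors = (predictions != validation_df["outcome"]).astype(int)
    return errors.groupby(validation_df[group_col]).mean()

# Example usage (assuming a fitted scikit-learn-style classifier):
# rates = subgroup_error_rates(validation_df, model, group_col="self_reported_race")
# print(rates)  # a large gap between groups is a signal to retrain on more diverse data
```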

Need for ethical oversight

Although AI has the potential to improve patient health in many scenarios, it can also perpetuate injustices and inequities without proper oversight. That's partially because AI inherently lacks the ethics and morals that are natural to humans. A famous thought experiment called the Paperclip Maximizer illustrates this idea.

Originally published by Swedish philosopher Nick Bostrom in his 2003 paper, "Ethical Issues in Advanced Artificial Intelligence," the Paperclip Maximizer problem imagines an AI tasked with the goal of making as many paperclips as possible. Although this goal might initially seem harmless, Bostrom shows how such a nonspecific goal could end in disaster once the AI realizes that humans pose a potential challenge to its goal. It would then conclude that all humans should be killed, freeing their atoms to be used to make more paperclips while simultaneously reducing the demand for paperclips. The AI isn't hostile to humans, just indifferent, lacking the common-sense knowledge and morality humans take for granted.

AI reaches conclusions based only on the data it's provided; human oversight is required to add common sense to the equation. For instance, imagine AI-powered scheduling software at a cancer treatment clinic that identifies patterns among patients who miss appointments. Say the algorithm observes that a certain minority demographic misses a higher percentage of appointments. The algorithm might use these data to conclude that patients in this minority group are less motivated to adhere to their treatment recommendations, a conclusion that might seem reasonable to the AI in the absence of additional information provided by human oversight.

However, this example illustrates the importance of not accepting the conclusions of AI-powered algorithms without question; additional information exists that would alter these conclusions. Research reveals that transportation insecurity is a common issue in the U.S., particularly among patients requiring cancer treatment. Transportation insecurity can lead patients to skip, miss, delay, alter, or prematurely terminate required clinical care, increasing rates of cancer recurrence and mortality. In addition, transportation insecurity disproportionately impacts racial and ethnic minority, low-income, elderly, and rural patient populations.

A human healthcare professional could recognize the need to further explore the causes behind patients' poor treatment adherence, leading to the discovery that the missed appointments were likely caused by transportation insecurity. The AI could not offer this as a possibility unless explicitly programmed to do so.
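The scheduling example can be made concrete with a few lines of analysis. In the synthetic data below, one group appears to miss far more appointments, but stratifying by transportation access shows the two groups behave identically once that factor is held constant. All group labels and numbers are invented for illustration.

```python
# Synthetic sketch: an apparent gap in missed-appointment rates between two
# patient groups disappears once transportation access is accounted for.
import pandas as pd

def rows(group, has_transport, missed, n):
    """Build n identical appointment records for compact test data."""
    return [{"group": group, "has_transport": has_transport, "missed": missed}] * n

appointments = pd.DataFrame(
    rows("A", True, 0, 6) + rows("A", True, 1, 2) + rows("A", False, 1, 2)
    + rows("B", True, 0, 3) + rows("B", True, 1, 1) + rows("B", False, 1, 6)
)

# Naive view: group B appears far less "adherent" than group A (0.4 vs. 0.7)
print(appointments.groupby("group")["missed"].mean())

# Stratified view: within each transportation stratum the groups match exactly;
# the gap reflects who lacks reliable transportation, not motivation.
print(appointments.groupby(["has_transport", "group"])["missed"].mean())
```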

Solving ethical issues related to AI in healthcare

AI has many potential benefits in healthcare, but only if the industry makes the effort to ensure that AI is used responsibly and ethically. In the past, healthcare decisions have been made almost exclusively by humans; adding AI to help with or make these decisions raises numerous ethical issues. Without proper governance and oversight, AI systems pose risks such as:

  • Biased decision-making
  • Patient privacy violations
  • Misuse of patient data

The key is to remember that AI is a tool: a valuable tool, but nonetheless one that cannot replace the judgment, decision-making, critical thinking, and assessment skills of physicians, nurses, and other healthcare professionals. Responsible, ethical, and effective use of AI in healthcare requires awareness of its potential risks and vigilance to help prevent them.

As the role of AI in healthcare continues to grow and evolve, it will become increasingly important for healthcare providers to understand AI's weaknesses as well as its strengths. This will enable healthcare professionals to be vigilant in harnessing AI's potential benefits while applying human judgment to help prevent its misuse.