
Baylor College of Medicine

08/23/2024 | News release

The potential of AI in healthcare: A double-edged sword

Artificial intelligence (AI) is transforming healthcare, but is it a miracle cure or a Pandora's box? AI could rapidly revolutionize diagnosis, treatment and patient care; however, it also presents ethical and practical challenges.

Personal introduction to AI

My introduction to AI came through my position at DeepScribe, one of the many new AI "ambient scribe" companies gaining popularity. According to STAT's Generative AI tracker, nearly 90 health systems are currently experimenting with this technology.

I reviewed AI-transcribed recordings of physician-patient interactions and helped complete the necessary documentation fields. The system allowed me to highlight specific sections of the transcription, which were then automatically inserted into the appropriate medical notes. Any manual adjustments made to this automated process, such as changing the wording or organization, helped train the system to improve its accuracy and efficiency. This experience taught me how AI systems can streamline medical documentation by learning to understand and process recorded medical conversations.

Benefits of AI in healthcare

AI can analyze substantial amounts of data quickly and accurately. DeepScribe offers a prime example of how AI can alleviate the administrative burdens on healthcare providers. By automatically generating clinical notes after each patient encounter, DeepScribe claims it reduces documentation time by up to 75%, allowing physicians to focus more on patient care and create stronger patient-provider relationships. Additionally, AI can improve operational efficiency by streamlining resource allocation, scheduling, and inventory management.

Many healthcare systems are already leveraging AI. Epic Systems, the leader in the hospital electronic health records (EHR) market, has integrated AI throughout its platform: its algorithms can predict patient outcomes, identify potential health risks and suggest personalized treatment plans. AI's prowess is particularly evident in diagnostic imaging, where machine learning algorithms can detect anomalies in medical images with precision and speed, in some cases "outperforming" human radiologists. AI is also helping accelerate drug discovery and development, extending its benefits to the pharmaceutical industry.

Risks and concerns

Despite the benefits of AI in healthcare, its development brings significant risks, concerns and gray areas that must be acknowledged. One major concern is that increased reliance on AI systems might diminish personal interactions between patients and healthcare providers. Human connection, empathy and understanding are inherently human traits and elements of quality care that machines cannot be trained to emulate or replace in medical practice. For example, an AI diagnostic tool may recommend a treatment course for an elderly patient based on statistical probabilities alone, without considering the psychological and emotional impact on the patient and their loved ones.

While AI offers the potential to improve patient care, including patient education and engagement, the associated privacy and security concerns must be addressed. AI systems require large amounts of data to function effectively, necessitating safeguards to protect sensitive information. By implementing stringent data protection measures, such as encryption and anonymization, healthcare organizations can keep patient data confidential while still realizing the benefits of AI.

Equally important is obtaining informed consent from patients and ensuring individuals understand how their data will be used in AI systems. A recent publication highlights the importance of clear and transparent communication around data collection and use of AI in healthcare settings. To further improve trust and transparency, clear standards for data collection, storage and use need to be established.

Bias in AI algorithms threatens equitable care. If training data is skewed, AI systems may produce biased outcomes that disproportionately affect marginalized and underserved populations. Addressing this issue requires careful attention to data quality and algorithmic fairness, including training on diverse and representative datasets.

The future of AI in healthcare

AI applications are continually improving, becoming more sophisticated and capable of handling complex medical tasks. Integrating AI with other emerging technologies, such as wearable devices and telemedicine, could enhance healthcare delivery. While the future of AI in healthcare is promising, we need to approach it with caution and address concerns as they arise.

Rather than replacing human interaction, AI should complement healthcare providers. Training programs on effective AI integration can help healthcare professionals and the technology work together. Regular audits and compliance checks can ensure adherence to data privacy measures. Continuous monitoring and updating of algorithms can help identify and correct biases, and collaboration with ethicists and social scientists can improve the fairness of AI systems. Patients should be informed about how their data is used, and secure data storage practices should be consistently maintained.

To ensure the responsible and equitable use of AI, we must establish and adhere to regulatory frameworks and ethical guidelines. AI applications in healthcare must comply with HIPAA, which sets standards for protecting sensitive patient information, to ensure data privacy and security. Requiring explicit consent for data collection and giving patients control over their data address several privacy concerns. The U.S. Food and Drug Administration (FDA) has developed guidelines for AI and machine learning-based medical devices that emphasize transparency, real-world performance monitoring and the mitigation of algorithmic bias. In February, the Biden-Harris Administration announced the first-ever consortium dedicated to AI safety, bringing together more than 200 leading AI stakeholders, including Baylor College of Medicine, to advance the development and deployment of safe and trustworthy AI.

Continuous updates, training and a focus on improving the patient experience are needed to keep AI technologies transparent, understandable and accountable to both healthcare providers and patients. Ongoing research and collaboration among technologists, healthcare providers, patients and policymakers are needed to understand public perceptions and realize AI's full potential in healthcare while addressing the associated risks.

By Shehzrin Shah, an administrative intern at the Center for Medical Ethics and Health Policy at Baylor College of Medicine