Jackson Lewis LLP

09/26/2024 | News release

Investigation of AI Training by Australian Radiology Provider Provides Important Reminder for U.S. Healthcare Providers

If there is one thing artificial intelligence (AI) systems need, it is data, and lots of it: training on large datasets is essential to an AI model's success for a given use case. A recent investigation by Australia's privacy regulator into the country's largest medical imaging provider, I-MED Radiology Network, illustrates concerns about the use of medical data to train AI systems. The investigation may offer important insights for healthcare providers in the U.S. that are also trying to leverage the benefits of AI while grappling with where those applications intersect with privacy and data security laws, including the Health Insurance Portability and Accountability Act (HIPAA).

The Australian Case: I-MED Radiology's Alleged AI Data Misuse

The Office of the Australian Information Commissioner (OAIC) has initiated an inquiry into allegations that I-MED Radiology Network shared patient chest x-rays with Harrison.ai, a health technology company, to train AI models without first obtaining patient consent. According to reports, a leaked email indicates that Harrison.ai distanced itself from responsibility for patient consent, asserting that compliance with privacy regulations was I-MED's obligation. Harrison.ai has since stated that the data used was de-identified and that it complied with all legal obligations.

Under Australian privacy law, particularly the Australian Privacy Principles (APPs), personal information may generally be used or disclosed only for the primary purpose for which it was collected, or for a secondary purpose the patient would reasonably expect. It remains unclear whether training AI models on medical data qualifies as a secondary use patients would reasonably expect.

The OAIC's preliminary inquiries into I-MED Radiology may ultimately clarify how medical data can be used in AI contexts under Australian law, and may offer insights for healthcare providers across borders, including those in the United States.

HIPAA Considerations for U.S. Providers Using AI

The investigation of I-MED raises significant issues that U.S. healthcare providers, subject to HIPAA, should consider, especially given the growing adoption of AI tools in medical diagnostics and treatment. To date, the U.S. Department of Health and Human Services (HHS) has not provided any specific guidance for HIPAA covered entities or business associates concerning AI. In April 2024, HHS publicly shared its plan for promoting responsible use of AI in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits. In October 2023, HHS and the Health Sector Cybersecurity Coordination Center (HC3) published a white paper entitled AI-Augmented Phishing and the Threat to the Health Sector. More is expected.

HIPAA regulates the privacy and security of protected health information (PHI), generally requiring covered entities to obtain patient consent or authorization before using or disclosing PHI for purposes outside of certain exceptions, such as treatment, payment, or healthcare operations (TPO).

In the context of AI, the use of de-identified data for research or development purposes, such as training AI systems, can generally proceed without specific patient authorization, provided the data meets HIPAA's strict de-identification standards. HIPAA generally defines de-identified information as data from which identifying information has been removed such that it cannot reasonably be linked back to the individual, and it recognizes two methods for achieving that status: the Safe Harbor method, which requires removal of 18 specified categories of identifiers, and the Expert Determination method, under which a qualified expert concludes that the risk of re-identifying individuals is very small.
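
To make the Safe Harbor concept concrete, the following is a minimal, illustrative sketch in Python of stripping direct identifiers and generalizing quasi-identifiers from a patient record before it is shared for model training. The record structure, field names, and helper logic are hypothetical; a real de-identification program must address all 18 identifier categories and should be validated by qualified experts or counsel.

```python
# Illustrative sketch only: a simplified Safe Harbor-style identifier strip.
# Field names are hypothetical; real de-identification must cover all 18
# HIPAA identifier categories (names, geographic subdivisions smaller than
# a state, dates, contact details, biometric identifiers, etc.).

# Hypothetical subset of direct identifiers to remove outright
DIRECT_IDENTIFIERS = {
    "patient_name", "street_address", "phone", "email",
    "ssn", "mrn", "account_number", "device_id", "photo_url",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed
    and quasi-identifiers generalized."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # Safe Harbor: date elements other than year must be removed,
    # and ages over 89 aggregated into a single "90+" category.
    if "birth_date" in clean:
        clean["birth_year"] = clean.pop("birth_date")[:4]
    if clean.get("age", 0) > 89:
        clean["age"] = "90+"

    # Geographic detail below the state level generally must go; ZIP codes
    # may keep their first three digits only where that area holds more
    # than 20,000 people (otherwise the ZIP must be zeroed out entirely).
    if "zip" in clean:
        clean["zip"] = clean["zip"][:3] + "XX"
    return clean

record = {
    "patient_name": "Jane Doe", "mrn": "12345", "birth_date": "1951-07-04",
    "age": 92, "zip": "02139", "finding": "left lower lobe opacity",
}
print(deidentify(record))
```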

However, U.S. healthcare providers must ensure that de-identification is properly executed, particularly when AI is involved: re-identification risks can be heightened by the vast amounts of data AI models process and the sophisticated methods used to analyze them. Therefore, even when de-identified data is used, entities should carefully evaluate the robustness of their de-identification methods and consider whether additional safeguards are needed to mitigate any risks of re-identification.
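
One common, if partial, quantitative check on re-identification risk is k-anonymity: a record is considered risky if its combination of quasi-identifiers (for example, birth year, ZIP prefix, and sex) is shared by fewer than k individuals in the dataset. The sketch below uses hypothetical field names and is not a substitute for an Expert Determination analysis, which would also weigh attacks exploiting the high-dimensional correlations AI models can learn.

```python
# Illustrative k-anonymity check (hypothetical fields, not a HIPAA
# certification): a record is flagged if its combination of
# quasi-identifiers appears fewer than k times in the dataset.
from collections import Counter

QUASI_IDENTIFIERS = ("birth_year", "zip", "sex")

def k_anonymity_violations(records: list[dict], k: int = 5) -> list[dict]:
    """Return records whose quasi-identifier combination occurs < k times."""
    counts = Counter(
        tuple(r.get(q) for q in QUASI_IDENTIFIERS) for r in records
    )
    return [
        r for r in records
        if counts[tuple(r.get(q) for q in QUASI_IDENTIFIERS)] < k
    ]

dataset = [
    {"birth_year": "1951", "zip": "021XX", "sex": "F"},
    {"birth_year": "1951", "zip": "021XX", "sex": "F"},
    {"birth_year": "1980", "zip": "606XX", "sex": "M"},  # unique combination
]
print(len(k_anonymity_violations(dataset, k=2)))  # -> 1 risky record
```

Checks like this are a starting point only; linkage and membership-inference attacks against trained models can succeed even on data that passes a k-anonymity test.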

Risk of Regulatory Scrutiny

While HIPAA does not currently impose specific obligations on AI use beyond general privacy and security requirements, the I-MED case highlights how AI-driven data practices can attract regulatory attention. U.S. healthcare providers should be prepared for similar scrutiny from federal and state regulators as AI becomes more integrated into healthcare systems.

In addition, there is increasing pressure on policymakers to update healthcare privacy laws, including HIPAA, to address the unique challenges posed by AI and machine learning. Providers should stay informed about potential regulatory changes and proactively implement AI governance frameworks that ensure compliance with both current and emerging legal standards.

Conclusion: Lessons for U.S. Providers

The ongoing investigation into I-MED Radiology's alleged misuse of medical data for AI training underscores the importance of ensuring legal compliance, patient transparency, and robust data governance in AI applications. For U.S. healthcare providers subject to HIPAA, the case offers several key takeaways:

  1. Develop/expand governance to address AI: AI technologies, including generative AI, are affecting all parts of an organization, from the delivery of core services to IT, HR, and marketing. Different use cases will drive varied considerations, making a clear yet adaptable governance structure important for ensuring compliance and minimizing organizational risk.
  2. Ensure proper de-identification: When using de-identified data for AI training, healthcare entities should verify that their de-identification methods meet HIPAA's stringent standards and account for AI's re-identification risks.
  3. Monitor evolving AI regulations: With increased regulatory attention on AI, healthcare providers should prepare for potential legal developments and enhance their AI governance frameworks accordingly.

By staying proactive, U.S. healthcare providers can harness the power of AI while maintaining compliance with privacy laws and safeguarding patient trust.