10/01/2024 | Press release | Distributed by Public on 10/01/2024 14:37
As the summer winds down, regulatory updates related to digital health services certainly do not appear to be showing any signs of cooling off. It has been a busy summer, and below we have summarized several key updates for you to be aware of.
Key Takeaways:
Federal Regulation
Although the public initially seemed cautious about the use of AI in healthcare, with a 2023 Pew Research survey revealing that nearly 60 percent of Americans would be uncomfortable with providers relying on AI for their care, the use of AI in all facets of healthcare has exploded in the past year, and AI appears to be on the precipice of revolutionizing the industry. Despite this, federal regulation of AI in the healthcare industry remains fragmented, even though the Biden administration identified the development of appropriate safeguards as a priority in a 2023 Executive Order, summarized here. We note that some of the more recent efforts by the Department of Health and Human Services (HHS) to regulate the use of AI could be easily overlooked, as they are not housed in the usual regulatory sources, as set forth below.
Section 1557 Regulations
HHS finalized the Nondiscrimination in Health Programs and Activities regulations (Section 1557 regulations) once again on May 6, implementing several nondiscrimination provisions required by Section 1557 of the Affordable Care Act. The Section 1557 regulations have a long and storied history: they have been promulgated several times, have faced court challenges each time, and have been withdrawn and reissued by successive administrations. The most recent Section 1557 regulations are no different, as they have been stayed in whole or in part by various courts. Nevertheless, the Section 1557 regulations are instructive for the healthcare industry on the direction that HHS may take regarding AI, as they seek to address the potential for discrimination in AI by prohibiting discrimination in the use of "patient care decision support tools," defined as "any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities." Covered entities are required to identify and mitigate any risks of discrimination in the use of such tools.
Practically speaking, the Section 1557 regulations place responsibility on covered entities to identify their use of AI tools covered by the regulations and take proactive steps to mitigate the known risks of discrimination or bias in these tools. Covered entities would be prudent to develop policies and procedures to track how their AI tools are used in decision-making, to formulate systems for monitoring the results and impacts of AI, and to address any uses that are prohibited by Section 1557 to ensure compliance.
Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule
In addition, the HTI-1 final rule from the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health IT (ASTP) addresses the use of AI by developers of certified health information technology (IT). It requires developers to publicly disclose information about their AI risk management practices so that covered entities can comply with the anti-discrimination requirements set forth in the Section 1557 regulations discussed above. Essentially, the rule requires certified health IT developers that supply certain AI tools to (a) implement risk management practices, including risk analysis, risk mitigation, and governance, and (b) analyze risks and potential adverse impacts associated with AI use. These practices must be made publicly available to support transparency so that users, patients, researchers, and other interested parties can understand the steps taken to identify and mitigate these AI-related risks.
State AI Laws
In the absence of comprehensive federal regulation, states are increasingly stepping in to fill the gap and passing far-reaching laws regulating the use of AI in healthcare. We have summarized below several state laws that seek to address how AI is used within the industry.
Colorado
Colorado's SB24-205, "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems" (Colorado AI Act), was enacted on May 17, goes into effect on February 1, 2026, and applies to entities that do business in Colorado and to Colorado residents. The Colorado AI Act is largely focused on high-risk AI systems, defined as AI systems that are a substantial factor in making a consequential decision that has a material impact on the provision or denial of healthcare to consumers or on the cost of such care. The law requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of "algorithmic discrimination," and it imposes a number of administrative and disclosure obligations on such developers and deployers, including risk management policies and programs governing the high-risk AI systems and the obligation to notify the Colorado attorney general (AG) of algorithmic discrimination resulting from a high-risk AI system within 90 days of discovery.
Notably, the Colorado AI Act does not apply to covered entities subject to HIPAA as long as they are providing healthcare recommendations that (1) are generated by an AI system, (2) require a healthcare provider to take action to implement the recommendations, and (3) are not considered high risk. Thus, the Colorado AI Act draws a distinction between instances in which healthcare providers take additional action to review and use their own professional judgment before implementing an AI recommendation and situations in which an AI system automatically implements its own recommendation without provider oversight. Covered entities subject to HIPAA would be prudent to assess whether their use of AI systems would be considered high risk under the law and whether their processes require a provider to take action to implement AI-generated recommendations, in order to determine whether they may be subject to the Colorado AI Act. The Colorado AI Act also contains several other notable exceptions, including:
Since the law will not take effect until February 2026, entities still have a fair amount of time to assess its potential application and to develop appropriate processes and safeguards to ensure compliance.
Utah
The Utah Artificial Intelligence Policy Act (UAIPA) went into effect in May and is aimed at ensuring public transparency in the use of generative AI (gen AI). While the UAIPA requires all companies that use gen AI to interact with consumers to disclose, when asked, that the consumer is interacting with gen AI, the law imposes a higher standard on "regulated occupations," requiring regulated professionals (including healthcare professionals) to "prominently" disclose the use of gen AI. The disclosure may be verbal in the event of an oral conversation or delivered through an electronic message prior to a written exchange. The UAIPA provides that failure to make proper disclosure prior to the use of gen AI could violate Utah consumer protection laws and result in civil penalties of up to $5,000 per violation. Given the rapid proliferation of gen AI in the healthcare industry, it is advisable for healthcare professionals and entities subject to the UAIPA to consider their compliance posture with the law and whether additional disclosures and transparency are warranted.
California
While California Gov. Gavin Newsom vetoed the landmark California AI Safety Bill on September 30, citing concerns that the legislation, which was poised to have national ramifications for the use of AI, might stifle innovation, he recently signed seventeen other laws targeting the use of AI both inside and outside the healthcare industry. We have briefly summarized several of these California laws below, which could affect the daily operations of healthcare industry providers and stakeholders, and we will continue to monitor any developments.
Medicare Physician Fee Schedule Updates
The healthcare industry has been waiting anxiously for several months for congressional action to extend the pandemic-era flexibilities, set forth in the Consolidated Appropriations Act, that ensure the availability of telehealth services to Medicare beneficiaries regardless of geographic location or site of service. In its proposed regulation for the 2025 Medicare Physician Fee Schedule (MPFS), the Centers for Medicare & Medicaid Services (CMS) acknowledged that these flexibilities are slated to expire on December 31 unless Congress takes action. CMS notes in the MPFS that, because it does not have the authority to extend these flexibilities itself, the Medicare restrictions on geographic location and site of service eligibility for telehealth services may once again take effect for services furnished on or after January 1. CMS acknowledged the healthcare industry's grave concerns about maintaining access to care, which could affect millions of Medicare beneficiaries if these statutory flexibilities expire, and sought comments in the MPFS about the impact of returning to pre-pandemic restrictions on the use of telemedicine.
While the industry will need to continue to wait with bated breath in the hopes that Congress will take appropriate action by year-end to ensure ongoing meaningful telehealth access for Medicare beneficiaries, the MPFS also proposed several notable telehealth measures, including:
Accreditation for Virtual Providers
Given the wide proliferation of telehealth since the pandemic, the Joint Commission and the National Committee for Quality Assurance (NCQA) each recently launched accreditation programs for virtual telehealth providers. The Joint Commission began accepting applications for accreditation on July 1, and the NCQA will begin accepting applications in November. The telehealth standards issued by these accreditation organizations could serve as a helpful checklist for optimizing an organization's telehealth program, helping to ensure compliance and the delivery of quality care.
Section 1557 Anti-Discrimination Provisions for Telehealth
The Section 1557 regulations, discussed above, also include considerations for telehealth providers. The regulations provide that communications before, during and after telehealth appointments must be accessible to individuals with disabilities and to individuals with limited English proficiency (LEP), as well as their companions. Notably, the HHS Office for Civil Rights also referenced its joint guidance with the Department of Justice regarding nondiscrimination in telehealth, which requires covered entities to ensure effective communication and the provision of auxiliary aids for individuals with disabilities and language assistance services for individuals with LEP; see our previous analysis here. The regulations provide flexibility for providers to determine how best to serve these patient populations, and HHS sought public comment on this approach and on whether it would be more beneficial to promulgate specific accessibility standards for telehealth platforms.
Healthcare Organizations Write Letters in Response to Anticipated DEA Telehealth Regulations
Proposed and unfinalized regulations from the Drug Enforcement Administration (DEA) seeking to restrict providers' ability to issue prescriptions via telehealth were recently leaked while undergoing review by the White House, creating an uproar in the provider community, which responded by sending letters signed by more than 330 provider organizations urging the White House and Congress to extend telehealth prescribing flexibilities. The DEA previously proposed removing the flexibilities granted during the COVID-19 pandemic, which waived the requirement for an in-person visit to establish a provider-patient relationship before prescriptions could be written for the patient, but the DEA retracted the proposal after receiving more than 38,000 comments on that rule and extended the flexibilities through December 2024. As reported, the leaked rule in its current (not final) state would similarly require providers to conduct an in-person visit before prescribing Schedule II drugs via telehealth. The proposed rule would also not allow more than half of a provider's prescriptions to be issued during telehealth appointments, and it would mandate that prescribers review all 50 states' prescription drug monitoring programs before prescribing for a patient with whom they have not had an in-person visit. If finalized as written, the regulations could significantly impact the delivery of care via telehealth.
OIG Recommends More Oversight of Remote Patient Monitoring in Medicare
On September 24, the HHS Office of Inspector General (OIG) issued a report on the use of remote patient monitoring (RPM) in the Medicare program. RPM involves the collection of patient health data via a connected medical device, such as a blood pressure monitor, pulse oximeter, or blood glucose meter, that automatically transmits the data to a provider, who then uses the data to treat the patient. Medicare began reimbursing RPM services in 2018, and the OIG report notes that the use of RPM skyrocketed between 2019 and 2022. The OIG previously flagged fraud and abuse concerns regarding RPM in a Consumer Alert and noted in the report that at least 43 percent of patients who received RPM services did not receive all three required components for reimbursement (education and setup, device supply, and treatment management), raising concerns about proper use. OIG also noted that the Medicare program lacks information needed for oversight of RPM use, including who ordered the monitoring and the type of health data collected and monitored. OIG recommended that CMS take the following steps:
Healthcare providers that utilize RPM should assess their compliance with existing billing and coding reimbursement and documentation requirements for RPM.
Providers' efforts to refine and optimize their compliance with the Information Blocking Rule were spurred on this summer by the rather draconian enforcement mechanism for Information Blocking Rule violations that became effective on July 31 and could result in harsh monetary penalties (as previously analyzed here). Against that backdrop, the ASTP published the HTI: Patient Engagement, Information Sharing, and Public Health Interoperability rule (HTI-2 proposed rule) on August 5, proposing updates to the criteria for its Health IT Certification Program and modifications to the existing information blocking regulations (the latter of which are the focus of this summary). Notably, ASTP Director Micky Tripathi issued a statement when the rule was first published, noting that the HTI-2 proposed rule "is a tour de force. [ONC has] harnessed all the tools at ONC's disposal to advance HHS-wide interoperability priorities."
Examples of Interference That Could Be Deemed Information Blocking
In the HTI-2 proposed rule, the ASTP proposed codifying a non-exhaustive list of practices that would constitute interference with the access, exchange, or use of electronic health information (EHI) for purposes of the information blocking prohibition. While the ASTP has already signaled in its published guidance that many of the practices listed below could be deemed information blocking, and the proposed regulatory list does not stray far from that prior guidance, the list and related commentary nonetheless provide insight into how the ASTP views such practices and how they could now be codified as regulations rather than mere regulatory guidance. Notably, the commentary on the HTI-2 proposed rule provides that (a) for a practice to constitute information blocking, all elements of the definition must be met and the entity engaging in the practice must meet the requisite knowledge standard, and (b) information blocking does not include practices that are required by law or that meet an exception. The following are listed as examples of interference with EHI in the HTI-2 proposed rule:
The ASTP also proposed that the following omissions could be viewed as interference:
It is advisable for entities subject to the Information Blocking Rule to compare their existing practices against this list of proposed practices likely to be deemed interference, and to refine those practices accordingly where an exception does not apply.
Proposed Modifications to Information Blocking Exceptions
The Information Blocking Rule contains a number of exceptions, as previously summarized here, and the HTI-2 proposed rule contains notable modifications to several of these exceptions.
New Information Blocking Exceptions Proposed
The ASTP proposed two new exceptions to the Information Blocking Rule:
* * *
As seasons change, so does the regulatory landscape of the healthcare industry. Stakeholders can expect increasing movement in the digital health space, especially with regard to the regulation of AI, potentially resulting in a patchwork of laws that could impact multistate operations - not unlike the healthcare privacy and telehealth sectors that vary at the state level. As the air cools, we anticipate that digital health regulations will continue to heat up, and we will continue to monitor major legislative and regulatory changes impacting the healthcare industry.