Baker & Hostetler LLP

10/01/2024 | Press release

What’s Hot in Healthcare: Digital Health Regulatory Update

10/01/2024 | 16-minute read

As the summer winds down, regulatory updates related to digital health services show no signs of cooling off. It has been a busy summer, and below we summarize several key updates to be aware of.

Key Takeaways:

  • Federal agencies are using various regulatory levers, such as Section 1557, to require transparency and prohibit discriminatory practices in the use of artificial intelligence (AI) in the healthcare industry.
  • In the absence of comprehensive federal legislation, states are proposing and enacting myriad laws targeting AI tools, some of which could directly impact healthcare stakeholders.
  • Telehealth is impacted by Section 1557 and ongoing federal legislative inaction, while other regulatory proposals are aimed at refining information blocking regulations.

AI

Federal Regulation

Healthcare providers initially seemed somewhat cautious about the use of AI in healthcare in 2023, and a 2023 Pew Research survey revealed that nearly 60 percent of Americans would be uncomfortable with providers relying on AI for their healthcare. Over the past year, however, the use of AI in all facets of healthcare has exploded, and AI appears to be on the precipice of revolutionizing the industry. Despite this, regulation of AI in the healthcare industry at the federal level remains somewhat fragmented, even though the Biden administration identified the development of appropriate safeguards as a priority in a 2023 Executive Order, summarized here. We note that some of the more recent efforts by the Department of Health and Human Services (HHS) to regulate the use of AI could be easily overlooked, as they are not necessarily housed in the usual regulatory suspects, as set forth below.

Section 1557 Regulations

On May 6, HHS once again finalized the Nondiscrimination in Health Programs and Activities regulations (Section 1557 regulations), which implement several nondiscrimination provisions required by Section 1557 of the Affordable Care Act. The Section 1557 regulations have a long and storied history: they have been promulgated several times, and each version has faced court challenges and been withdrawn and reissued by the administration then in office. The most recent Section 1557 regulations are no different, as they have been stayed in whole or in part by various courts. Nevertheless, the Section 1557 regulations are instructive for the healthcare industry on the direction that HHS may take regarding AI, as they seek to address the potential for discrimination in AI by prohibiting discrimination in the use of "patient care decision support tools," which are defined as "any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities." Covered entities are required to identify and mitigate any risks of discrimination in the use of such tools, as this section:

  • Prohibits recipients of federal financial assistance, which include providers participating in Medicare, Medicaid, the Children's Health Insurance Program and HHS-administered health programs and activities (referred to in the regulations as covered entities and not to be confused with the Health Insurance Portability and Accountability Act (HIPAA) definition of the same term), from discriminating against patients through the use of patient care decision support tools
  • Requires covered entities to make reasonable efforts to identify uses of patient care decision support tools in their health programs that use input variables or factors that measure race, color, national origin, sex, age or disability
  • Requires covered entities to make reasonable efforts to mitigate the risk of discrimination resulting from the use of identified patient care decision support tools

Practically speaking, the Section 1557 regulations place responsibility on covered entities to identify their use of AI tools covered by the regulations and take proactive steps to mitigate the known risks of discrimination or bias in these tools. Covered entities would be prudent to develop policies and procedures to track how their AI tools are used in decision-making, to formulate systems for monitoring the results and impacts of AI, and to address any uses that are prohibited by Section 1557 to ensure compliance.

Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule

In addition, the HTI-1 final rule from the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health IT (ASTP) addresses the use of AI by developers of certified health information technology (IT) and requires those developers to publicly disclose information about their AI risk management practices so that covered entities can comply with the anti-discrimination requirements set forth in the Section 1557 regulations discussed above. Essentially, the rule requires certified health IT developers that supply certain AI tools to (a) implement risk management practices, which include risk analysis, risk mitigation and governance, and (b) analyze risks and potential adverse impacts associated with AI use. These practices must be made publicly available to support transparency so that users, patients, researchers and other interested parties can understand the steps taken to identify and mitigate AI-related risks.

State AI Laws

In the absence of comprehensive federal regulation, states are increasingly stepping in to fill the gap and passing far-reaching laws regulating the use of AI in healthcare. We have summarized below several state laws that seek to address how AI is used within the industry.

Colorado

Colorado's SB24-205, "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems" (Colorado AI Act), was enacted on May 17, goes into effect on February 1, 2026, and applies to entities that do business in Colorado and to Colorado residents. The Colorado AI Act is largely focused on high-risk AI systems, which are defined as AI systems that make, or are a substantial factor in making, a consequential decision that has a material impact on the provision or denial of healthcare to consumers or on the cost of such care. The law requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of "algorithmic discrimination," and it imposes a number of administrative and disclosure obligations on such developers and deployers, including risk management policies and programs governing the high-risk AI systems and the obligation to notify the Colorado attorney general (AG) of algorithmic discrimination resulting from a high-risk AI system within 90 days of discovery.

Notably, the Colorado AI Act does not apply to covered entities subject to HIPAA as long as they are providing healthcare recommendations that (1) are generated by an AI system, (2) require a healthcare provider to take action to implement the recommendations, and (3) are not considered high risk. Thus, the Colorado AI Act draws a distinction between instances in which healthcare providers take additional action to review and apply their own professional judgment before implementing an AI recommendation and situations in which an AI system automatically implements its own recommendation without provider oversight. Covered entities subject to HIPAA would be prudent to assess whether their use of AI systems would be considered high risk under the law and whether their processes require a provider to take action to implement an AI system's recommendation, in order to determine whether they may be subject to the law. The Colorado AI Act also contains several other notable exemptions, including:

  • Developers of AI systems that are either approved, certified or cleared by the Food and Drug Administration
  • Developers of AI systems that are in compliance with standards imposed by the ASTP on certified health IT developers

Since the law will not take effect until February 2026, entities still have a fair amount of time to assess its potential application and to develop appropriate processes and safeguards to ensure compliance.

Utah

The Utah Artificial Intelligence Policy Act (UAIPA) went into effect in May and is aimed at ensuring public transparency in the use of generative AI (gen AI). While the UAIPA requires all companies that use gen AI to interact with consumers to disclose, when asked, that the consumer is interacting with gen AI, the law imposes a higher standard on "regulated occupations," as it requires regulated professionals (including healthcare professionals) to "prominently" disclose the use of gen AI. The disclosure may be verbal in the case of an oral conversation or made through an electronic message prior to a written exchange. The UAIPA provides that failure to provide proper disclosure prior to the use of gen AI could violate Utah consumer protection laws and result in civil penalties of up to $5,000 per violation. Given the rapid proliferation of gen AI in the healthcare industry, it is advisable for healthcare professionals and entities subject to the UAIPA to consider their compliance posture and whether additional disclosures and transparency are warranted.

California

While California Gov. Gavin Newsom vetoed the landmark California AI Safety Bill, which was poised to have national ramifications on the use of AI, on September 30 over concerns that the legislation might stifle innovation, he has recently signed 17 other laws targeting the use of AI both inside and outside the healthcare industry. We have briefly summarized several of these California laws below, which could affect the daily operations of healthcare industry providers and stakeholders, and we will continue to monitor any developments.

  • AB 3030 requires healthcare providers to disclose their use of gen AI tools when those tools are used to generate communications to patients regarding their clinical information, including during telehealth encounters, with a clear disclaimer informing patients that the communication was generated by AI. Entities subject to the law must include clear instructions for patients to directly communicate with a human healthcare provider. Importantly, these requirements do not apply to communications generated by a gen AI tool that are then read and reviewed by a human licensed or certified healthcare provider. For written communications involving physical and digital media (letters, emails and other messages), the disclaimer must be prominently displayed at the beginning of each communication. For written communications involving continuous online interactions - including chat-based telehealth - the disclaimer must be displayed prominently throughout the entire interaction. Similarly, for video communications, the disclaimer must be prominently displayed for the duration of the interaction. Audio communications must include a verbal disclaimer at the beginning and end of the interaction.
  • SB 942, the California AI Transparency Act, requires companies that create, code or otherwise produce a gen AI system that has more than 1 million monthly visitors and is publicly available in California to make an AI detection tool that meets certain criteria freely available, including through their websites or mobile apps. The law also contains disclosure requirements identifying content as generated by AI, which must be permanent or difficult to remove. Violations of the law would result in a daily $5,000 fine, enforceable by the California AG.
  • SB 1120, Health care coverage: utilization review, establishes requirements for health plans and insurers that use AI for utilization review and utilization management decisions, and places limits on the automated use of AI, an algorithm or other software to ensure that a licensed professional retains ultimate responsibility for making individualized medical necessity determinations.

Telehealth

Medicare Physician Fee Schedule Updates

The healthcare industry has been anxiously waiting for several months for congressional action to extend the pandemic flexibilities set forth in the Consolidated Appropriations Act, which ensure the availability of telehealth services to Medicare beneficiaries regardless of geographic location or site of service. In its proposed regulation for the 2025 Medicare Physician Fee Schedule (MPFS), the Centers for Medicare & Medicaid Services (CMS) acknowledged that these flexibilities are slated to expire on December 31 unless Congress takes action. CMS notes in the MPFS that, without congressional action, the Medicare restrictions on geographic location and site-of-service eligibility for telehealth services may once again take effect for services furnished after January 1, because CMS does not have the authority to extend these flexibilities. CMS acknowledged the healthcare industry's grave concerns about maintaining access to care, which could affect millions of Medicare beneficiaries if these statutory flexibilities expire, and sought comments in the MPFS about the impact of returning to pre-pandemic restrictions on the use of telemedicine.

While the industry will need to continue to wait with bated breath in the hopes that Congress will take appropriate action by year-end to ensure ongoing meaningful telehealth access for Medicare beneficiaries, the MPFS also proposed several notable telehealth measures, including:

  • The continued ability to provide direct supervision remotely. The MPFS extends the ability to provide "direct supervision," which requires the supervising practitioner to be "immediately available," by allowing providers to be immediately available remotely via real-time audio and visual interactive telecommunications through December 31, 2025.
  • Permitting audio-only telehealth in certain circumstances. CMS acknowledged that some patients may not be capable of, or may not consent to, the use of video technology due to broadband limitations or other constraints. As long as the telehealth practitioner is technically capable of using an interactive telecommunications system that includes audio and video equipment permitting two-way, real-time interactive communication, CMS will permit telehealth services furnished via two-way, real-time, audio-only communication technology to a Medicare beneficiary in their home, and noted that such claims will need to be submitted with CPT medical coding modifier 93.

Accreditation for Virtual Providers

Given the wide proliferation of telehealth since the pandemic, the Joint Commission and the National Committee for Quality Assurance (NCQA) each recently launched accreditation programs for virtual telehealth providers. The Joint Commission began accepting applications for accreditation on July 1, and the NCQA will begin accepting applications in November. The telehealth standards issued by these accrediting organizations could serve as a helpful checklist for optimizing an organization's telehealth program to ensure compliance and the delivery of quality care.

Section 1557 Anti-Discrimination Provisions for Telehealth

The Section 1557 regulations, discussed above, also include considerations for telehealth providers. The regulations provide that communications before, during and after telehealth appointments must be accessible to individuals with disabilities and to individuals with limited English proficiency (LEP), as well as their companions. Notably, the HHS Office for Civil Rights also referenced its joint guidance with the Department of Justice regarding nondiscrimination in telehealth, which requires covered entities to ensure effective communication and the provision of auxiliary aids for individuals with disabilities and language assistance services for individuals with LEP; see our previous analysis here. The regulations provide flexibility for providers to determine how best to serve these patient populations, and HHS sought public comment on this approach and on whether it would be more beneficial to promulgate specific accessibility standards for telehealth platforms.

Healthcare Organizations Write Letters in Response to Anticipated DEA Telehealth Regulations

Proposed but unfinalized regulations from the Drug Enforcement Administration (DEA) seeking to restrict providers' ability to issue prescriptions via telehealth were recently leaked while undergoing review by the White House, creating an uproar in the provider community, which responded by sending letters signed by more than 330 provider organizations urging the White House and Congress to extend telehealth prescribing flexibilities. The DEA previously proposed removing the flexibilities granted during the COVID-19 pandemic, which waived the requirement for an in-person visit establishing a provider-patient relationship before prescriptions could be written for the patient, but the DEA retracted the proposal after receiving more than 38,000 comments on that rule and extended the flexibilities through December 2024. As reported, the leaked rule in its current (not final) state would similarly require providers to conduct an in-person visit before prescribing Schedule II drugs via telehealth. The proposed rule would also not allow more than half of a provider's prescriptions to be issued via telehealth, and it would mandate that prescribers review all 50 states' prescription drug monitoring programs before prescribing for a patient with whom they have not had an in-person visit. If finalized as written, the regulations could significantly impact the delivery of care via telehealth.

OIG Recommends More Oversight of Remote Patient Monitoring in Medicare

On September 24, the HHS Office of Inspector General (OIG) issued a report on the use of remote patient monitoring (RPM) in the Medicare program. RPM involves the collection of patient health data via a connected medical device, such as a blood pressure monitor, pulse oximeter or blood glucose meter, that automatically transmits the data to a provider, who then uses the data to treat the patient. Medicare began reimbursing RPM services in 2018, and the OIG report notes that the use of RPM skyrocketed between 2019 and 2022. The OIG previously flagged fraud and abuse concerns regarding RPM in a Consumer Alert and noted in the report that at least 43 percent of patients who received RPM services did not receive all three required components for reimbursement (education and setup, device supply, and treatment management), raising concerns about proper use. The OIG also noted that the Medicare program lacks information needed for oversight of RPM use, including who ordered the monitoring and the type of health data being collected and monitored. The OIG recommended that CMS take the following steps:

  • Implement additional safeguards to ensure that RPM is used and billed appropriately in Medicare
  • Require that RPM be ordered by a provider and that the ordering provider be listed on the claim
  • Develop methods to identify what health data are being monitored
  • Conduct provider education about billing of RPM
  • Identify and monitor companies that bill for RPM

Healthcare providers that utilize RPM should assess their compliance with existing billing, coding and documentation requirements for RPM reimbursement.

Information Blocking

Providers' efforts to refine and optimize their compliance with the Information Blocking Rule were spurred on this summer by the rather draconian enforcement mechanism for Information Blocking Rule violations that became effective on July 31 and could result in harsh monetary penalties (as previously analyzed here). Against this backdrop, the ASTP published the HTI: Patient Engagement, Information Sharing, and Public Health Interoperability rule (HTI-2 proposed rule) on August 5, proposing updates to the criteria for its Health IT Certification Program and modifications to the existing information blocking regulations (the latter of which are the focus of this summary). Notably, ASTP Director Micky Tripathi issued a statement when the rule was first published, noting that the HTI-2 proposed rule "is a tour de force. [ONC has] harnessed all the tools at ONC's disposal to advance HHS-wide interoperability priorities."

Examples of Interference That Could Be Deemed Information Blocking

In the HTI-2 proposed rule, the ASTP proposed codifying a non-exhaustive list of practices that would constitute interference with the access to, exchange or use of electronic health information (EHI) for purposes of the information blocking prohibition. While the ASTP has already signaled in its published guidance that many of the practices listed below could be deemed information blocking, and the proposed regulatory list does not stray greatly from that prior guidance, the list and related commentary nonetheless provide insight into how the ASTP views such practices and how they could now be codified as regulations rather than mere regulatory guidance. Notably, the commentary on the HTI-2 proposed rule provides that (a) for a practice to constitute information blocking, all elements of the definition must be met and the entity engaging in the practice must meet the requisite knowledge standard, and (b) information blocking does not include practices that are required by law or that meet an exception. The following are listed as examples of interference with EHI in the HTI-2 proposed rule:

  • Delay on new access - Delaying patient access to new EHI, such as diagnostic testing results, so clinicians can review the EHI
  • Portal access - Delaying patient access to EHI in a portal when the system has the technical capability to support automated access, exchange or use of the EHI via the portal
  • Application programming interface (API) access - Delaying the access, exchange or use of EHI to or by a third-party app designated and authorized by the patient when there is a deployed API able to support the access to, exchange or use of EHI
  • Nonstandard implementation - Implementing health IT in ways that are likely to restrict access to, exchange or use of EHI with respect to exporting EHI, including but not limited to exports for transitioning between health IT systems
  • Contract provisions - Negotiating or enforcing a contract provision that restricts or limits otherwise lawful access, exchange or use of EHI
  • Noncompete provisions in agreements - Negotiating or enforcing a clause in any agreement that prevents or restricts an employee (other than the entity's own employees), a contractor or a contractor's employee who accesses, exchanges or uses the EHI in the entity's health IT from accessing, exchanging, or using EHI in other health IT in order to design, develop, or upgrade such other health IT
  • Manner or content requested - Improperly encouraging or inducing requestors to limit the scope, manner or timing of EHI requested for access, exchange or use
  • Medical images - Requiring that the access, exchange or use of any medical images (including photographs, X-rays and imaging scans) occur by exchanging physical copies or copies on physical media (such as a thumb drive or DVD) when the provider and the requestor possess the technical capability to access, exchange or use the images through fully electronic means

The ASTP also proposed that the following omissions could be viewed as interference:

  • Not exchanging EHI under circumstances in which such exchange is lawful
  • Not making EHI available for lawful use
  • Not complying with another valid law enforceable against the actor that requires access, exchange or use of EHI
  • Failure of a Certified API Developer to publish API discovery details as required by the maintenance of certification requirements
  • Failure of an API Information Source to disclose to the Certified API Developer the information necessary to publish the API discovery details required by the certification program regulations

It is advisable for entities subject to the Information Blocking Rule to compare their existing practices with this list of proposed practices likely to be deemed interference that could result in information blocking, and to refine those practices accordingly if an exception does not apply.

Proposed Modifications to Information Blocking Exceptions

The Information Blocking Rule contains a number of exceptions, as previously summarized here, and the HTI-2 proposed rule contains notable modifications to several of these exceptions.

  • Privacy Exception. The Privacy Exception to the Information Blocking Rule protects instances when an actor does not fulfill an EHI request in order to protect an individual's privacy. The ASTP is proposing to revise the Unreviewable Grounds sub-exception of the Privacy Exception to broaden its applicability to actors who are not covered entities or business associates under HIPAA. It also proposes modifying the Individual's Request Not to Share EHI sub-exception to allow an individual's request for a restriction on the use and disclosure of his or her EHI to be honored regardless of whether applicable law requires disclosure of the EHI against the individual's wishes.
  • Infeasibility Exception. The current Infeasibility Exception to the Information Blocking Rule permits a request for EHI to be denied when the request is infeasible. The ASTP has proposed modifying this exception in the following ways. First, requestors currently must be notified within 10 business days of receipt of the request if their request is infeasible. The ASTP has proposed revising this timeframe so that a requestor must be notified within 10 business days after a determination of infeasibility is made, with that determination made without unnecessary delay and based on a reasonable assessment of the facts. In addition, the ASTP proposes modifying the following:
    • Segmentation Condition. ASTP proposed revising the Segmentation Condition to clarify when EHI cannot be unambiguously segmented due to applicable law and further proposed to modify the Segmentation Condition to apply when it is permissible to withhold EHI under the Privacy Exception or the proposed Protecting Care Access Exception (analyzed below).
    • Third Party Seeking Modification Use Condition. The ASTP proposed revising this condition to clarify that it applies both to business associates of HIPAA-covered healthcare providers and to contractors of those healthcare providers that are not subject to HIPAA.

New Information Blocking Exceptions Proposed

The ASTP proposed two new exceptions to the Information Blocking Rule:

  • Protecting Care Access Exception. The ASTP proposed the new Protecting Care Access Exception to protect actors when they choose not to disclose EHI because they believe in good faith that disclosure would risk exposing a patient, provider or facilitator of lawful reproductive healthcare to legal action. The Protecting Care Access Exception is aimed at addressing providers adjusting to the changing legal landscape involving access to reproductive healthcare, and we note that there is significant overlap between the HIPAA Privacy Rule to Support Reproductive Health Care Privacy regulations, published earlier this year and analyzed here, and the exceptions to the Information Blocking Rule. The Protecting Care Access Exception appears to be an effort by the ASTP to align with HIPAA privacy protections for patients seeking, and providers facilitating, reproductive healthcare in instances where other Information Blocking Rule exceptions may not apply. For this exception to apply, the following three conditions would need to be satisfied:
    • Threshold Condition. The practice must be based on a good-faith belief that the person seeking, obtaining, providing or facilitating reproductive healthcare could be exposed to legal action; be tailored and no broader than necessary to reduce such risk; and be implemented consistent with an organizational policy that meets specific requirements or based on a case-by-case determination.
    • Patient Protection Condition. The practice must affect only the access, exchange or use of specific EHI that the actor believes in good faith could expose the patient to legal action because the EHI shows, or would carry a substantial risk of supporting a reasonable inference, that the patient obtained, inquired about or expressed interest in reproductive healthcare or has a health condition for which reproductive healthcare is often sought. The practice must also be subject to nullification by the patient's explicit request or direction that the access, exchange or use of the EHI occur despite the risks.
    • Care Access Condition. When the practice is implemented for reducing the risk of exposure to legal action against healthcare providers or persons involved in providing or facilitating reproductive healthcare that is lawful under the circumstances, the practice must affect only the access, exchange or use of the EHI the actor believes could expose the providers and facilitators to legal action.
  • Requestor Preferences Exception. The second proposed exception to the Information Blocking Rule would permit an actor to tailor the access, exchange or use of EHI to a requestor's preferences. The ASTP recognized that a requestor may prefer to have less EHI made available or may not want particular EHI to be available immediately. The ASTP provided example scenarios, such as a patient who may not want test results made available in their patient portal or who may wish to have test results delayed in the portal until after their healthcare provider can explain the results. The Requestor Preferences Exception requires that the requestor make the request to limit or delay the provision of the EHI for a specified period of time in writing and without improper encouragement. The response would then need to be tailored to the request and implemented consistently in a nondiscriminatory manner. The exception also requires that the steps that will be taken to comply with the request be explained to the requestor in plain language and that the requestor be notified of changes in the actor's ability to maintain the requestor's tailoring, in addition to other transparency requirements. Lastly, actors would be required to act on any later requests from the requestor to reduce or remove any such restrictions.

* * *

As seasons change, so does the regulatory landscape of the healthcare industry. Stakeholders can expect increasing movement in the digital health space, especially with regard to the regulation of AI, potentially resulting in a patchwork of laws that could impact multistate operations - not unlike the state-level variation already seen in healthcare privacy and telehealth. As the air cools, we anticipate that digital health regulations will continue to heat up, and we will continue to monitor major legislative and regulatory changes impacting the healthcare industry.