
State Regulation of AI in Health Care Could Be Banned by House Budget Bill

AI and the Scales of Justice (Shutterstock.com)

Tucked into the reconciliation bill released by the House Energy & Commerce Committee last week is a clause that could stymie state efforts to regulate the use of AI in health care.  The clause, found within a section of the bill titled the “Artificial Intelligence and Information Technology Modernization Initiative,” would impose a ten-year moratorium on any state or local laws or regulations that “regulat[e] artificial intelligence models, artificial intelligence systems, or automated decision systems . . . .”


The clause contains several exceptions.  State and local laws and regulations whose “primary purpose . . . is to remove legal impediments to, or facilitate the deployment or operation of,” AI would be permitted.  Laws and regulations of general applicability that nonetheless affect AI would also be exempt from the moratorium.  Finally, the clause appears to contain de minimis language exempting laws and regulations that do “not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee or other requirement[s]” on AI systems.


As currently drafted, the AI clause’s broad language would almost certainly lead to litigation were it to become law.  For example, could California restrict the use of mental health chatbots under its licensure laws and regulations, or would such a restriction be considered an impermissible regulation of an AI system?  California prohibits “person[s] who practice[ ] or attempt[ ] to practice” medicine without a license (Business and Professions Code § 2052).  How should a “person” be defined in the context of AI-based medical tools?


While the reconciliation bill is still in flux, if passed, it would likely significantly hinder California’s early efforts to regulate the use of AI in health care.  On January 1, 2025, two California laws aimed at regulating the use of AI in health care took effect: AB 3030 – which requires providers to disclose, in certain circumstances, when generative AI is used to create written or verbal communications – and SB 1120 – which prohibits health insurers from making coverage decisions based entirely on AI algorithms.  The AI clause’s broad moratorium on the regulation of AI systems could impact both laws:


Assembly Bill (AB) 3030


Under AB 3030, providers that use generative AI to create written or verbal communications to patients pertaining to clinical information must include in those communications (1) a disclaimer indicating that generative AI drafted the message, and (2) clear instructions describing how a patient may contact a provider.  Messages that are read and reviewed by a certified or licensed provider prior to dissemination are exempted.


By dictating the contents of AI-generated messages, the law undoubtedly regulates AI systems.  Whether the inclusion of a disclaimer and contact instructions amounts to a “substantive” requirement, however, is debatable.  One could also argue that the law facilitates the adoption of AI: it effectively sanctions the use of generative AI to draft patient messages so long as that use is transparent or reviewed by a human.


Senate Bill (SB) 1120


SB 1120—also known as the “Physicians Make Decisions Act”—prohibits AI systems from “deny[ing], delay[ing], or modify[ing] health care services based, in whole or in part, on medical necessity.”  It also limits determinations of medical necessity to licensed physicians or licensed health care professionals, and it includes detailed requirements for the use of AI systems in utilization review or utilization management functions.


SB 1120 appears vulnerable to federal preemption if the AI clause’s moratorium were to become law.  The detailed utilization review requirements imposed by the statute would most likely be struck because they are substantive design requirements tailored to AI systems.  The medical necessity provisions, however, present a closer question.  Could they fall within the exemption for rules of general applicability if their purpose is to prohibit the unlicensed practice of medicine?


Conclusion


While the fate of the AI clause remains uncertain, it raises important questions that will need to be addressed as AI plays an increasingly large role in the highly regulated field of health care.


For more information, contact Avi Rutschman at avi@athenelaw.com or (818) 688-2063.

 
 
 