The Healthcare Leadership Council (HLC) has released an AI policy roadmap to help guide the administration and Congress toward a long-term solution to ensure AI is safely overseen in healthcare. 

The roadmap promotes a risk-based approach, under which AI would be regulated according to the risk it poses to the user, and proposes intermediate steps toward a national AI policy rather than separate state-by-state approaches.

The policy framework was created through a series of interviews with healthcare technologists at HLC’s member organizations, which include hospitals, health systems, payers, pharma and group purchasing organizations. The report was written in collaboration with ZS, a management consulting and technology firm. 

Maria Ghazal, HLC's president and CEO, said regulators need to find the "Goldilocks" way to regulate AI: regulation can't be so strong that it crushes innovation, but it can't be toothless either. HLC's recommendations, and its ongoing engagement with the administration, should help regulators find a comfortable middle, she said.

“Despite growing momentum and interest in its potential, adoption of AI across the healthcare ecosystem is inconsistent and inhibited,” the report says. “The current landscape is marked by evolving, disparate federal and state policy frameworks, regulatory uncertainty, infrastructure development needs, challenges in data interoperability, and growing concerns around fairness and trust.” 

HLC is in its 37th year and convenes healthcare organizations to discuss sector-wide issues. The group has 50 members, and 27 of them contributed to the report. All members were able to review the report before publication.

Some of the "easier" steps the administration could take on the way to a nationwide AI policy framework include creating standard definitions of the technology, evaluating existing federal and state laws on AI, incentivizing workforce training and establishing guidelines for data cleansing and standardized intake.

HLC points to three major barriers to AI adoption in healthcare: governance and regulatory complexity, data and infrastructure, and skill gaps and end user trust. It lists nine policy recommendations and 25 examples of tactics that policymakers could implement, though the tactics are meant to be illustrative and not prescriptive, HLC said.

To reduce governance and regulatory complexity, HLC recommends centralizing legislation by reviewing the state legislative landscape, establishing a moratorium on state AI laws and creating a comprehensive federal policy framework.

Also under this category are suggestions for modernizing regulations like HIPAA—to cover more entities and stay current with technology and security updates—and clarifying liability. 

HLC is in favor of a temporary moratorium on state AI laws, though it recommends one that is narrowly scoped. The group wants to ensure that some areas of the law are carved out of the moratorium, like mental health protections for youth.

“In an absence of that federal framework, some kind of pause is aligned with our stakeholders' views, but it's like a narrowly scoped pause in state AI legislation ... there are certain things that [states] will likely need flexibility to do,” Clara Keane, senior policy director at HLC, said in an interview. “An example is mental health for kids. We don't want to do a freeze that's going to prevent states from acting there in the absence of the federal framework.”

To modernize data and infrastructure, HLC recommends expanding transparency standards and mitigating bias. It points to the transparency and disclosure requirements set out in HTI-1 and suggests creating a configurable AI nutrition label, which was part of the regulation. 

The Office of the Assistant Secretary for Technology Policy (ASTP/ONC) recently slashed many of HTI-1’s requirements in its new HTI-5 rule.  

The group also recommends establishing continuous monitoring guidelines, defining AI development guidelines and developing AI validation pathways. As examples of validation efforts, it cites the nationwide network of AI assurance labs once touted by the Coalition for Health AI, as well as programs established at health systems such as Vanderbilt University Medical Center and University of California, San Francisco Health.

To close skill and trust gaps, HLC recommends enhancing workforce training and education. Among the tactics it suggests in this area are incentivizing employers to implement workforce training and adding AI education to K-12 schooling.