The healthcare industry has become saturated with artificial intelligence—nearly every point solution vendor now sells AI as part of its offering, as evidenced by the HLTH 2025 conference in Las Vegas just last week.
Companies all around the show floor touted AI-backed solutions, even outside the conference’s “AI Pavilion.” The show’s annual Digital Health Awards program is nixing AI-specific categories next year because companies in all its categories, like women’s health and behavioral health, are using AI in notable ways.
Companies that spoke with Fierce Healthcare onsite at the conference said the biggest change from last year is the level of maturity of, and trust in, artificial intelligence technology.
While the technology has matured beyond AI scribes to incorporate agents that can act on behalf of an individual, the healthcare industry has not coalesced around governance and evaluation strategies to oversee the performance of these tools post-deployment.
Health system buyers are more comfortable with AI solutions, as are clinicians who are increasingly using AI on the front lines of care.
Navina provides an AI co-pilot for value-based care organizations. Ronen Lavi, Navina's co-founder and CEO, cited trust and maturity of AI as the biggest change in the last year. Citing healthcare's narrow margins and high levels of administrative work, Lavi said, "Right now, everybody's open to [trying] it."
Perhaps the diffusion of AI in everyday life, outside of healthcare settings, is contributing to providers’ increased willingness to use AI, posited Katherine Eisenberg, M.D., DynaMed's senior medical director.
Eisenberg oversees Dyna AI, a generative AI tool that surfaces clinically relevant information at the point of care. She also works extensively on governance and evaluation of the product.
“In the last year, the general experience, not even healthcare-specific, with AI tools has exploded so much, as well as uses for healthcare, that adoption is just skyrocketing, and actually we're starting to almost see it be the expectation that AI is part of the experience,” Eisenberg said. “That is just this very dramatic shift in what our user base is starting to expect from us.”
And yet, all three experts agreed that governance and evaluation of AI are lagging behind the technology. They also agreed that AI vendors are responsible for the oversight and performance of the tools once they are implemented in a health system.
Health systems still have varied levels of internal governance and AI knowledge, multiple sources said. Eisenberg said it’s easier to partner with systems that have some AI expertise because they are more ready to engage in a partnership with the vendor.
These health systems also recognize the hard work DynaMed is doing to evaluate and govern its technology. "We're moving at the pace that's comfortable for us in terms of implementing governance and evaluation alongside the technology piece, but you can see that different organizations are taking different approaches there," Eisenberg said.
She added that her organization has to be prepared to partner with health systems that have zero governance processes and those that have ample internal expertise.
CVS Health is using AI to automate pharmacy processes, freeing up pharmacists’ time to do more work at the front of the house and directly with patients. Chief Technology Officer Tilak Mandadi said, “We have AI governance where no use case using AI would be developed unless it goes through the governance and passes the governance filters that we put in place.”
Demetri Giannikopoulos, chief AI officer at Rad.AI, an AI radiology startup, promotes what he calls the "Swiss cheese effect" of AI governance. Essentially, it means having multiple overlapping layers of governance among vendors, IT teams and clinicians, so that a gap in any one layer is caught by another.
While a slew of companies have cropped up that provide third-party monitoring of AI models deployed in healthcare, Giannikopoulos said that third parties often can’t fully understand the intended use and limits of the models. That work is best left to the model developer, he stated.
Many organizations have begun releasing guidance on what information AI vendors should disclose and how health systems can vet vendors. The American Heart Association and the American Medical Association are two of the latest entrants, announcing at HLTH the creation of an AI Assessment Lab and a Center for Digital Health and AI, respectively.
Speaking of the number of private groups that have raised awareness about AI governance, Eisenberg said: “Maybe that's a signal, right? Our market, whatever the government is doing, state governments are doing, like our market is pretty much demanding it.”
Giannikopoulos tells industry colleagues that they should just pick one framework and start working on it, rather than agonize over the best framework or group to align with.
“Just pick something and do it,” Giannikopoulos said. “If you start somewhere, you give something for people to tear down to some degree. Versus, if you're both just like, ‘well, where should I start?’ Then you can't really have a constructive conversation.”
The specter of federal regulation also hangs over AI vendors, as they question at what point the Food and Drug Administration or Congress will add hoops for them to jump through.
“I think, right now, it's ‘run fast,’” Lavi said. “I think we'll keep on running fast, but they will start to raise questions and regulations and FDA’s role in AI … does it need to be regulated? Do we have to do some research around it to approve it? Today, not so much.”