In the back half of 2025, a slew of organizations announced forthcoming artificial intelligence certification programs that would offer providers and developers the opportunity to align their AI practices with technical and ethical standards.
Some groups have merely announced the intent to create certification programs, while others have already operationalized them. Each is attempting to capture an unaddressed portion of the AI governance and certification market.
The result, thus far, is a burgeoning marketplace of third parties offering benchmarks for the use of AI in healthcare.
It’s a far cry from the city-on-a-hill idea of a nationwide network of AI assurance labs, once touted by the Coalition for Health AI (CHAI). While it once may have seemed like a catch-all solution for providing oversight to an industry of thousands of players, the proposal is no longer in the works.
The landscape has fractured.
Heading into 2026, the industry faces yet another new year without clear rules for health AI. The Trump administration has, however, offered hints and begun to set priorities for the direction it will take on health AI.
President Donald Trump has issued an executive order to preempt state laws on AI in favor of a national framework, which has yet to be released. Department of Health and Human Services (HHS) Deputy Secretary Jim O’Neill has said that Silicon Valley needs more clarity on AI regulation.
HHS' health IT arm released a request for information Friday seeking input on how the department can promote the use of AI in clinical care. If 2025 serves as precedent, a request for information can quickly turn into a new initiative.
This administration has straddled the fence in digital health, using its powers to push the industry to innovate while forgoing regulation. It has leveraged voluntary initiatives and CMS Innovation Center models to do so.
Two organizations have released concrete programs that provide a standardized set of checks for AI developers and users: the Utilization Review Accreditation Commission’s (URAC) AI Accreditation Program and the Consumer Technology Association’s (CTA's) Predictive AI standard.
Accreditation and standard-setting could fall into HHS' non-regulatory paradigm: "States and people at the federal level are all saying 'We think this accreditation is a nice sort of middle ground to stifling regulation,'" Shawn Griffin, M.D., president and CEO of the URAC, said in an interview.
“In a more deregulatory context, we're definitely seeing eyes on industry standards, because they are representative of solutions that folks can benchmark against,” Kerri Haresign, senior director of tech and standards at the CTA, said in an interview. “I think that that's a really exciting opportunity for the standards.”
Both organizations have seen quick uptake from the industry, which seems to be looking for third-party assurance of its practices to assuage customers’—or patients’—concerns about the unbridled use of AI.
“The bench of opportunities to address here, I think, is very deep,” Haresign said. “I think we are still in our early phases of defining a lot of these technical requirements.”
Here is a snapshot of those deployments thus far and where they may gain traction in 2026.
URAC: “Accreditation is a nice middle ground”
Since the URAC launched its AI accreditation in mid-September, the program has drawn unprecedented interest and uptake, Griffin told Fierce Healthcare.
The AI accreditation program has two tracks: one for developers of AI and one for users of AI, such as health systems. The URAC convened nearly 30 organizations that gave input on the program.
“What we're looking at is the quality of the program, not the commas or the coding in the tool,” Griffin said. “Our program includes things like contracting. It includes data, it includes transparency. So a lot of the places where you see states that have sort of gone into certain areas within AI, those are covered in our program, to some extent, we just happen to have a broader view as to all the different things that are necessary for quality overview, and especially in healthcare.”
About 10 organizations have so far signed contracts to undergo the accreditation process, including startups and academic medical centers, Griffin said. Some are pursuing both tracks in parallel, as both users and developers of health AI.
Since the program's launch, other businesses have also reached out to express interest and learn what the accreditation would mean for their organizations.
The URAC has been accrediting healthcare organizations and programs for 35 years. It offers accreditations in pharmacy benefit management, health equity and telehealth, among many others.
“We've had very positive responses, both from organizations beginning the process to apply for accreditation, as well as regulators and legislators wanting to know more about it and understand what it can provide as a complement to regulation, both at the state and at the federal level,” Griffin said.
Griffin said more than 20 states have approached the organization to learn about the accreditation and determine whether they could create specific programs for their states, write legislation to promote use of an accreditation program in their state, or use the accreditation for their own internal departments.
Federal policymakers have also taken an interest in promoting accreditation as a way to set guardrails for AI that can be flexible with the rapid progression of the technology.
They’ve done so in the past with the No Surprises Act and the Affordable Care Act (ACA). Under the ACA, health plans sold on the exchanges must be accredited by a recognized entity.
Accreditation is more flexible than regulation and can be updated quickly if an issue arises. Moreover, the process engages organizations proactively to audit their practices, which the federal government can’t do on a large scale.
“It's not unusual for laws to recognize both the complementary role that accreditation can do [sic]. Also, we're a force multiplier for regulators,” Griffin said. “We always say our job is to set a higher bar for quality.”
But accreditation programs can face a chicken-and-egg problem, Griffin said: If regulators don’t require accreditation, organizations aren’t likely to pursue it, and regulators aren’t likely to require it when few organizations are already doing it.
Griffin expects uptake of the URAC’s AI accreditation to further gain traction in 2026. As the URAC gains experience accrediting users and developers of AI, it will likely update its program, he said. However, the group does not currently have plans to offer new AI accreditations.
The URAC touts its AI accreditation program as the first in the nation. While many organizations have released best practices for AI, Griffin argued that an independent review sets a different quality bar altogether.
“Best practices are one thing, but actually an independent review is a higher standard, and that's what we expect in healthcare often,” he said.
Consumer Technology Association: “It’s a great way to show trust”
The CTA launched a predictive health AI standard in September. The standard sets benchmarks for data verification, explainability, quality control, recalibration, basic deployment testing and full deployment testing.
The predictive health AI standard is written for any premarket solution, not just those that meet the FDA definition of a medical device. The CTA designed the standard to be flexible, so developers can use it for solutions with a variety of risk levels.
The standard could be used for a single algorithm or an entire solution with multiple components. Haresign said the CTA wanted the standard to be generalizable. It does not, however, cover generative AI, ambient scribes, natural language processing or how AI models are trained.
“The standard identifies a series of ‘shoulds’ and ‘shalls,’ and so there's certain kind of baseline requirements that we're setting, and then it has a series of other things that can be considered and looked at, because we want to be nimble to how fast this area is moving, and also to that the range of healthcare solutions for AI is also on a huge variety of risk scales,” Haresign said.
The CTA has developed more than 150 standards across industries, 30 of which lay out technical requirements for digital health technologies. Since 2019, the CTA has developed five health AI standards.
Compliance with CTA standards is self-declared. The CTA does not offer a “badge” or “stamp” for compliance, and organizations don’t undergo review by the CTA itself, so the group does not know the extent to which the standard has been used.
“We've been hearing really strong and positive feedback from our members because it provides a lot of flexibility in how folks can do it,” Haresign said. “One of the other goals that we've really focused on is, I don't want to say speed in a negative way, but speed in a sense of meeting the market's needs for getting these standards out.”
Like Griffin at the URAC, Haresign also predicted that the predictive AI standard will be updated relatively quickly, given the pace of innovation in the industry. The American National Standards Institute requires that standards be updated at least every five years.
She said the CTA will update the standard in part based on industry feedback, pointing to the organization’s step-counting standard for wearable devices, which was quickly tightened after the industry said it was too easy to meet.
“I think we've definitely seen an uptick in interest across the ecosystem," Haresign said of the AI standards and certification market. "I think it's a great way to show trust. And I think it's also being able to be industry driven.”
The CTA also has other health AI standards in the works. It is working on a standard for post-implementation monitoring of AI, an area of major concern for health systems. Likewise, the CTA is developing standards for synthetic data for health AI solutions and technical considerations for procurement of AI solutions.
“We're absolutely looking at when the right moment to begin to think about some of this stuff for generative AI comes, I think that that's in our conversations, into 2026,” she noted.
Haresign said organizations considering an AI accreditation program, or looking to align with technical standards, should weigh the diversity of the stakeholders that helped develop the program and the rigor of its development.
“From my perspective, I think in some ways it can be harder to understand the certification, accreditation, insert the various word here, without understanding what are the technical standards that are underpinning that, and then how were those developed," she said. "What level of industry input and rigor went into it? So we've really been focused on the latter piece of identifying those technical standards and making sure that they're having that broad input across all aspects of the ecosystem.”