Editor's Note: Reporting for this story was supported by a National Institute for Health Care Management journalism grant award.
Artificial intelligence has been promised as one of the biggest revolutions the healthcare industry has ever seen. It could save lives by catching diagnoses earlier. It could automate the tasks that have burned out the healthcare workforce and fueled a physician shortage crisis.
But the technology also carries severe risks. Trust it too much and lives could be at stake. Underrepresented racial and ethnic groups could suffer unduly at the hands of an improperly trained and managed algorithm.
At its outset, the Coalition for Health AI (CHAI), a nonprofit group with thousands of members working to devise best practices for the responsible use of artificial intelligence in healthcare, offered a beacon of light.
Its headlining initiative—a national network of AI assurance labs—promised a path forward to vet technology before it went into use at hospitals and clinics. The Biden administration, citing the Food and Drug Administration’s (FDA’s) lack of staff and resources, also leaned into a promise of an industry-led AI assurance effort.
Instead, CHAI has created confusion for the industry, CHAI affiliates and healthcare executives told Fierce Healthcare.
"It convened one of the largest and most motivated coalitions of organizations committed to working together to establish how to use one of the most powerful technologies of our professional careers to advance healthcare, and has thus far failed to deliver," one executive who worked closely with CHAI, and also requested anonymity to speak candidly, told Fierce Healthcare.
At the tail end of 2023, CHAI leadership proposed, in a JAMA article, the idea of establishing a nationwide network of AI assurance labs to "provide assurance that the use of AI is fair, appropriate, valid, effective, and safe." CHAI’s original assurance lab concept drew on third-party assurance frameworks common in regulated industries, where independent assessors test products against defined safety standards.
CHAI is led by Brian Anderson, M.D., a former family physician who once treated low-income patients in Massachusetts before moving into the health-tech ecosystem. He helped found the organization in 2021 while serving as MITRE’s chief digital health physician.
Since then, CHAI has struggled to deliver on its most ambitious promises. The organization’s signature idea—a national network of AI assurance labs—has quietly fallen away. Anderson now describes the concept as a mistake.
At a moment when much of the healthcare industry is tightening its belt, CHAI has managed to do the opposite. Founding health systems, including Duke University Health System, Mayo Clinic, Stanford Health and Johns Hopkins, pledged $1.25 million to the organization, according to an agreement obtained by Fierce Healthcare.
Startups are also contributing financially. Instead of a membership fee, some have entered into revenue-sharing arrangements with CHAI, according to one startup. The terms of those agreements were not disclosed.
CHAI’s focus on convening the industry to define responsible AI has led it down myriad paths. It frequently adds new initiatives and partnerships, shifts project timelines and even changes entire approaches—like that of the assurance labs—without clarifying the status of past projects.
Some of its initiatives have included the assurance labs, an ecosystem of AI governance providers, AI model cards and an AI outcomes registry. It has announced working groups on generative AI, prior authorization, Medicaid work requirements and a faith-based approach to AI alongside the Vatican, among others.
CHAI has announced a possible chapter and conference in Spain and in Singapore, with promises to expand abroad. Anderson and CHAI employees have also been traveling to Puerto Rico to work with federally qualified health centers on the island.
The group has maintained that the rapid iteration of its big projects and plans is in keeping with its goal of being responsive to its community and staying at the forefront of responsible AI.
Mara Lederman, co-founder and COO of partner startup Signal1, said CHAI should be thought of as a startup itself. It would be normal for a technology startup to change its language every three months, she posited.
For the self-proclaimed largest responsible AI group in the world, one has to wonder: is this scattershot approach risky, or just self-defeating?
An organization mired in controversy
March - July 2024
Since the organization relaunched as a nonprofit in March 2024, it has drawn the attention of the business and policy worlds, from San Francisco to Washington, D.C. Its aim: to solve the most pressing issues raised by the burgeoning use of AI in the U.S. healthcare system.
CHAI’s members include major health systems and community health centers, incumbent technology companies and startups, medical professional societies and advocacy organizations.
The group emerged at a time when most of the industry's understanding of AI was in its infancy, and CHAI appeared to offer a method for navigating the high-stakes technology.
These are problems federal regulators still have not solved—how can the FDA review a technology that changes every day? How can rural America benefit from AI innovation? What safety standards does AI need before it can be used in a hospital?
The questions are seemingly endless and wildly complicated.
When CHAI launched as a nonprofit, it drew early backing from Washington. Two senior Department of Health and Human Services (HHS) officials joined its board as nonvoting members: Micky Tripathi, Ph.D., then the national coordinator for health IT, and Troy Tazbaz, who had just been appointed to lead the FDA’s Center for Devices and Radiological Health. Both Tripathi and Tazbaz have since left HHS with the change in administration.
Republican lawmakers criticized the FDA’s involvement with CHAI in a letter sent in June 2024. They asked the agency to respond to questions about the FDA's role in CHAI's quality assurance labs and how the assurance labs depart from FDA's medical device approval authorities. The Republicans mostly took issue with the involvement of legacy technology companies like Google and Microsoft in CHAI.
"We support your objectives, but caution against outsourcing FDA equities to organizations, like CHAI, that currently do not ensure the small business community has a place in their governance,” the lawmakers wrote in the letter.
In July 2024, Tazbaz and Tripathi resigned from CHAI’s board. Tripathi told Fierce Healthcare at the time that he resigned after being appointed chief AI officer at HHS and co-chair of the HHS AI Task Force. As for Tazbaz, the FDA said, “there is no longer a need to engage in the organization at that level.”
CHAI also drew negative attention for its inclusion of legacy technology companies like Amazon and Microsoft. Its detractors, which included healthcare venture fund a16z, argued that CHAI's leadership was driven by representatives from incumbents like Google, Microsoft and Mayo Clinic, which would give big tech companies more control over market entry for AI and squash innovative startups.
The assurance lab idea came with a mountain of potential issues, particularly around intellectual property—especially if the labs were to be housed at academic medical centers. More recently, the concept has drawn criticism from some Trump officials, who have pointed to CHAI’s ties to the Biden administration and accused the group of attempting to shape—or capture—AI regulation.
AI assurance lab initiative gains momentum
August - December 2024
Despite the backlash the organization faced, the concept of a national network of AI assurance labs stuck.
It showed up in conference speeches by HHS officials in the Biden administration, including former FDA Commissioner Robert Califf, M.D., and former HHS Deputy Secretary Andrea Palm. At the Health Datapalooza conference in Washington, D.C., in September 2024, Palm went so far as to announce that the Biden administration would launch a national network of assurance labs.
Anderson told Fierce Healthcare at the time that he had no knowledge of what that announcement meant.
Up until the final days of Biden's presidency, the administration touted the assurance lab idea. In HHS’ Strategic AI plan, published days before Donald Trump took office, HHS wrote that public-private collaboration to assure AI could be a direction for the agency to explore.
"Given the large volume and diversity of anticipated AI applications needing some evaluation and the need to take local considerations into account, HHS anticipates the need for a public/private approach to quality assurance of AI used in healthcare. To help anchor a nationwide quality assurance approach, HHS may consider whether there are areas where rulemaking may be appropriate to enable successful governance practices and oversight of the use of AI in healthcare delivery and financing, for example, by motivating and supporting nationwide public-private approaches to validate AI,” HHS officials wrote in the document.
The Strategic Plan cites the foundational publication in JAMA that CHAI leadership wrote to establish the idea of a nationwide network of AI assurance labs.
At CHAI's October 2024 Global Summit in Las Vegas, members gathered to tease out some of the most basic questions about their proposed national network of AI assurance labs.
Approximately 80 people from member organizations hashed out the questions they had about assurance labs, which included how the labs would disclose conflicts of interest.
The discussion was conducted under the Chatham House Rule, a guideline that allows participants in a meeting to share information from a discussion without revealing the identity of the speaker. The rule was instituted at the roundtable discussions so media outlets such as Fierce Healthcare could attend.
Some of the questions addressed at the meeting included: Who are the primary customers of the assurance labs? Who would pay for model validation? How would under-resourced healthcare organizations be able to access the insights of the labs?
While CHAI members were still hashing out basic questions about the labs, Anderson was telling reporters that CHAI would have two assurance labs up and running by the end of the year.
CHAI’s pivot from AI testing to AI governance
January - June 2025
The assurance lab concept collapsed near the beginning of President Trump’s second term, Anderson told Fierce Healthcare. It wasn’t one conversation or one moment that killed the assurance labs, he said, but a series of moments across CHAI’s lifetime.
CHAI's pivot to establish what it calls assurance resource providers was not announced until six months later.
One conversation Anderson recalled having near the beginning of 2025 was with “one of the best resourced hospitals in the country” about the “existential” issues they were facing with AI governance.
AI governance is a framework for managing AI at an organization, and it is one that health systems have struggled with. Once a tool is switched on, governance is the process by which a health system keeps tabs on the model—whether it is working correctly, whether its performance is drifting, whether it works for some groups but not others.
For this large and well-resourced health system, which Anderson declined to name, its AI governance was already costing millions of dollars per year, he said, just to oversee a handful of models. Scaling the number of models, then, would be impossible. There wasn’t enough money.
Somewhere amid similar conversations with other providers, CHAI decided it wanted to tackle the issue of AI governance rather than AI procurement. That decision marked the collapse of the assurance labs touted by CHAI and Biden officials for more than a year.
An executive who spoke to Fierce Healthcare and had direct knowledge of CHAI's work said the organization rushed to define tactical deliverables "instead of rallying a community around a shared north star and then identifying, together, the most important work that needed to be done in pursuit of that north star."
This massive change in direction was not widely advertised. In fact, CHAI made no announcement about the shift until June 2025. Even then, Anderson told Fierce Healthcare the shift was a mere difference of "naming conventions."
CHAI's big shift in direction came in late May 2025, when it announced a partnership with an AI company called BeeKeeper AI that, in conjunction with the Icahn School of Medicine at Mount Sinai and the Morehouse School of Medicine, would validate AI models for heart failure.
Icahn and Morehouse supplied BeeKeeper AI with datasets from their health systems, which the company uses to test how well heart failure algorithms from different vendors perform. BeeKeeper AI's secure platform, called EscrowAI, creates a protected test environment in which a given model runs on the de-identified datasets.
The BeeKeeper AI arrangement is likely very similar to what an individual assurance lab would have looked like. Fierce Healthcare reported at the time that CHAI's assurance lab effort had finally begun.
At the same time, Anderson explicitly said at a CHAI member convening at Stanford University on June 3 that "the assurance labs are not dead," referencing a STAT newsletter that had asserted—correctly, as it turned out—that they were.
In retrospect, Anderson said the assurance lab concept was a misstep for the organization.
“Our initial hypothesis [was] that the pre-procurement use case for assurance was the one that would be most interesting to our doctors and nurses,” Anderson said. “It wasn't as things turned out, right? It was the post-deployment monitoring, and we're going to continue to have hypotheses that need to be tested, that probably will be proven incorrect, and we need to adjust, right? I mean, I think we need to be open to making changes and adjusting as we continue to stay committed to our members.”
Also in early June, CHAI announced that four other companies—Signal1, LensAI (now Complira), ALIGNMT AI and Ferrum—would become partner entities the organization calls assurance resource providers (ARPs).
CHAI also called BeeKeeper AI an ARP entity, though the services those four companies provide differ markedly from the BeeKeeper AI partnership. This cohort of companies provides various solutions for managing AI within healthcare organizations.
Lederman, the COO of Signal1, told Fierce Healthcare that the company provides an “AI management system,” drawing on the idea of a customer management system or a tool for revenue cycle management.
“In the last, I would say, 12 to 18 months, [health] systems have really been accelerating the number of AI tools they try,” Lederman said in an interview in September 2025. “Now, they're all of a sudden in this place where they're having a moment of reckoning, both where they feel a lot of anxiety about how quickly they've [ramped] up without having visibility into how these tools are performing—whether they're safe and whether they're delivering value—and also realizing that the processes and the systems they may have put in place when they were in pilot mode aren't going to work when they go from 50 tools to 100 to 300, and so that's the problem our company solves.”
The ARPs are companies vetted by CHAI that other CHAI members should look to as potential partners for AI governance, according to the organization.
The companies agree to CHAI’s responsible AI standards and incorporate CHAI products into their platforms. For example, Signal1 can create reports for its customers on the use of their AI tools. One of its report types is the CHAI model card.
When CHAI was still pursuing its AI assurance labs, Anderson said, at least 32 companies had expressed interest in using their technology to validate AI tools. At the time, he was promoting the idea of a national network of roughly 30 labs that would span the geography and demographic makeup of the country.
When asked if he ever notified the companies that the assurance labs were nixed, Anderson said that the same companies are now the ARPs.
“So a lot of the same companies that you see on our ecosystem list were some of those initial companies. Companies like Signal1, ALIGNMT, Pacific AI, Ferrum, Gesund, Beekeeper, those were all the groups that we were having those initial conversations with back in October ... as we matured what the ecosystem space looks like. You've seen them. They're still there. They're just now focused on the governance space," he said.
When asked if they were planning to become part of the assurance labs, Signal1 and Complira said they have been involved in a variety of CHAI efforts—neither confirming nor denying participation.
Signal1 told Fierce Healthcare it didn’t begin its formal relationship with CHAI until February 2025, though the representative noted that it likely engaged with CHAI before this date. It signed the agreement to be an ARP in June 2025, when CHAI announced several ARPs at its Stanford convening. The official certification agreement was signed in October 2025.
Lederman knew little about the assurance labs because her organization had not gotten involved with CHAI until after the effort ended, she said in September 2025. Two other ARPs, Complira (formerly LensAI) and ALIGNMT, also said they became members of CHAI in spring 2025.
It is unclear if the “32 companies” that expressed interest in participating in the labs were made aware of the pivot.
Nonetheless, providing internal AI governance at a health system is markedly different from providing third-party AI assurance. Regardless of whether the companies were slighted by the pivot, the rest of the healthcare industry lost out on the grand vision of assurance labs.
In an email sent to Fierce Healthcare on Feb. 11, a representative of Signal1 said CHAI was undergoing a rebrand of its ARP program, and it asked ARP companies to immediately stop referring to themselves as certified by CHAI.
When asked to clarify the rebrand, Anderson wrote in an email: “CHAI was started by clinicians and grounded in science. Some of our founders had a hypothesis that ‘assurance labs’ would be a strong model, so we started pursuing that. But as we grew, brought on more clinicians and health systems, and evolved with the rapidly changing industry, we learned that demand was for a whole ecosystem of tools and frameworks. We then pivoted based on what our community was actually telling us they needed, resulting in a broader Partner Program versus our original ARP work.”
Fallout with the Trump administration
July - December 2025
In the latter half of 2025, as CHAI moved forward on its new plan to focus on building partnerships with assurance resource providers rather than setting up testing labs, its relationship with the White House soured. The organization faced escalating attacks by Trump officials due to its ties to big tech.
At first, the Trump administration seemed open to hearing CHAI's ideas. The organization met with the Trump transition team days after Donald Trump’s win in the Nov. 5, 2024, presidential election, Anderson said. Throughout the early months of the administration, CHAI met with several different agencies within HHS to discuss the issues with AI in healthcare and how the administration could engage the private sector.
Anderson said the meetings were scheduled at the request of the Trump administration, and he even attended a July 2025 event at the White House to unveil “Winning the Race: America’s AI Action Plan.”
In October, a bombshell hit CHAI. HHS Deputy Secretary Jim O’Neill and FDA Commissioner Marty Makary published an op-ed in the Washington Examiner that denounced CHAI as trying to act as a “quasi-regulatory” body and stand between industry and government.
“CHAI’s ties to the Biden administration render it incapable of impartial regulatory guidance. Even more problematic, CHAI had Biden appointees on its board while HHS Secretary Xavier Becerra was developing plans for third-party audits of healthcare AI developers. Not surprisingly, CHAI was designated as a top candidate,” they wrote.
“It’s like a self-licking ice cream cone, a virtual and unethical syndicate,” O’Neill and Makary wrote.
“We’re hitting reset. Under Secretary Robert F. Kennedy Jr., HHS will not allow CHAI—nor any other nonprofit group, think tank, or company—to operate as an implicitly government-backed regulator or policymaker. While we welcome the emergence of a robust consortium of voices offering suggestions for best practices, we will not force taxpayers to fund assurance labs nor any other regulatory model that we cannot hold accountable on their behalf," O'Neill and Makary wrote.
Kennedy reposted the op-ed on X, calling CHAI a “cartel.”
In a letter to its members following the publication of the scathing editorial by Makary and O’Neill, CHAI's CEO said, "We are eager to learn more about the concerns from HHS and how CHAI can better represent our membership and their priorities to this group of stakeholders,” as reported by STAT.
The group had planned another member convening in San Diego in early November. Members of the media were disinvited from the event. A representative of CHAI told a member of the press that the organization wanted to hold a smaller event than the one originally planned.
There were also hints that companies' support for CHAI was weakening. After the news of the Trump administration’s disapproval of CHAI broke, the nonprofit organization lost founding members Microsoft and Amazon. Eric Horvitz, Microsoft’s chief scientific officer, stepped down from the board, Politico reported.
In an interview this month, Anderson said the O’Neill op-ed came out of left field. He thought the group was on good terms with the administration, given the multiple meetings the two had held.
“We had a great set of conversations through November and December, and then in January, honestly, really hit the ground running … I was very excited and hopeful,” Anderson said. “The vision of CHAI has always been to bring the private sector close to the public sector to help mutually inform each other. And I was excited and hopeful that that would, you know, bear fruit over the subsequent years with the current administration. So, I was really surprised when Deputy Secretary O'Neill came out saying what he did about us.”
Anderson continued: “When we've engaged with government, it's been at their request. We're a good resource because of the community we have and the expertise that we bring. We have real doctors, real nurses, real startup innovators disrupting the space. Those are the kinds of people that our elected officials need to hear about, need to hear from.”
When asked what he thought of the Trump administration’s approach to AI, Anderson said he supported many of the proposals, such as building more data centers, creating an AI evaluation ecosystem and ensuring access to heterogeneous sets of data.
The Trump administration has touted a fast-moving, pro-innovation stance on AI. It does not want to hold the industry back with regulation, officials have said.
Anderson praised leadership at HHS, such as Mehmet Oz, M.D., Chris Klomp, Amy Gleason, Tom Keane, M.D., and Jay Bhattacharya, M.D., Ph.D. He also lauded the Centers for Medicare and Medicaid Services’ effort to try to bring health data to Americans through the Health Tech Ecosystem initiative.
On Friday, it was reported that O’Neill would be leaving his position at HHS as the agency restructures ahead of the midterm elections.
CHAI's direction wasn't clearly communicated to members last year, according to the executive who requested anonymity. "I don't believe there were efforts to communicate internally to members before the fallout with the administration, at which point CHAI leadership realized that their members were their most important asset and improved internal communications," the executive said.
One bright side to the political fallout, according to the executive, is that CHAI's focus shifted from strategic market positioning of the CHAI brand to more action in service of its membership.
"I believe that there is a new chapter of clarity and continuity," the executive noted.
CHAI's strategic repositioning
CHAI is now coalescing around the idea of being a voice for healthcare providers. Anderson himself is a physician, and, in the last year, CHAI has inked partnerships with the National Association of Community Health Centers (NACHC) and the Joint Commission, a hospital accreditor.
Even as CHAI lost Microsoft and Amazon in the wake of the O’Neill op-ed, Anderson said it did not lose any provider organizations. In fact, he said CHAI had added the University of Texas, University of Virginia Health, Emory and the Iowa Primary Care Association as members since that time.
Anderson touted that CHAI has 1,000 unique individuals contributing to its workgroups on prior authorization, generative AI and mental health chatbots, for example.
“This space today is moving so quickly, and it's so easy to misunderstand one another, and it's easy to disagree in a way that confuses people,” Anderson said. “And I think that has certainly happened with CHAI. You know, people confuse our interest in building trust through transparency as being, you know, a quasi-regulatory entity or a gatekeeper. We're not that.”
Anderson continued: “There's a lot of nuance in things like, how do I innovate and move fast, but do that safely with guardrails. It's not like you have to choose one or the other. You can do both at the same time and bearing with one another as we work through that together in this large community is the work we have in front of us.”
The organization has also expanded its partner ecosystem to now include health tech companies ModelOp, Parachute and Citadel AI, as well as advisory firm EisnerAmper.
As AI adoption grows at a rapid pace, the healthcare industry still lacks clear rules for health AI. The Trump administration, though, has offered hints and begun to set priorities for the direction it will take with health AI.
Trump issued an executive order to preempt state laws on AI in favor of a national framework, which has yet to be released. HHS officials have said that Silicon Valley needs more clarity on AI regulation.
While CHAI has struggled to bring its deliverables to life, many other organizations have stepped in to fill some of the gaps in AI assurance.
In the back half of 2025, a slew of organizations announced artificial intelligence certification programs that would offer providers and developers the opportunity to align their AI practices with technical and ethical standards. URAC and the Consumer Technology Association are two such organizations.
The American Heart Association is working on an AI Assessment Lab in partnership with Dandelion Health to validate predictive AI for cardiovascular conditions and related diseases. The AI Assessment Lab mirrors the AI assurance lab concept that CHAI was once pursuing, but the effort is smaller and more defined than CHAI’s grand concept.
The result, thus far, is a burgeoning marketplace of third parties offering benchmarks for the use of AI in healthcare. It’s a far cry from the city-on-a-hill idea of a nationwide network of AI assurance labs, once touted by CHAI.
CHAI’s role "leading the charge in setting responsible AI standards," as its website touts, is unclear, according to many of the sources who spoke to Fierce Healthcare, as it has charted a radically new course from its initial proposal in 2024. To date, it has released a framework for the eventual playbooks it plans to release.
The resistance from the Trump administration last year marked a turning point for CHAI, according to the executive who requested anonymity, as it spurred the organization to commit to action and focus on the needs of its members.
"However, I doubt that they will regain the trust of the members they lost. I also think that they've left a permanent question about the value of industry consortia to driving forward progress in health AI without offering more risk than reward for coming to the table," the executive said.
Editor’s note: After reporting for this story was underway, the reporter accepted a position at a healthcare AI company. The company was not a source for this article.