Health AI leaders split on utility of AI regulatory sandboxes as Ted Cruz says state AI moratorium still on the table

Health AI leaders appear split on the usefulness of regulatory sandboxes in addressing the problems AI developers and provider organizations face with the novel technology.

The two most significant federal AI regulatory proposals in recent months, the SANDBOX Act introduced by Sen. Ted Cruz, R-Texas, and Trump’s AI Action Plan, aren’t targeting the most burdensome areas for developers, experts said.

A moratorium on state AI laws died over the summer in Congress, and in recent months, Cruz and the White House have advocated for AI sandboxes at the federal level to exempt startups from "burdensome regulation." As health AI leaders shrug at the sandbox proposal, Cruz says a state-level moratorium is still on the table.

In July, the Trump administration put forward an AI Action Plan to promote American competitiveness in AI, framing the effort as a race against China. The plan proposed the creation of regulatory sandboxes where developers could experiment with AI without the burden of federal regulation.

Cruz introduced the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act in September, which would allow AI deployers and developers to apply to modify or waive AI-specific regulations that could impede their work in testing, experimenting with, or providing AI tools to consumers. Companies could apply for a suspension of specific federal rules for a period of two years, with renewals available for up to a decade.

Applicants must state how the AI product would benefit consumers and how its benefits outweigh its risks. The program would require applicants to publicly disclose that they have obtained a waiver and that the product is experimental.

A waiver would not shield the developer or deployer from criminal liability, nor would it eliminate a consumer's right to sue. The legislation would also require an annual report to Congress on which agency rules were waived most often and how consumers engaged with the products, giving Congress a basis for deciding how it wants to permanently regulate AI.

When introducing the legislation on Sept. 10, Cruz said: “[The SANDBOX Act] embraces our nation’s entrepreneurial spirit and gives AI developers the room to create while still mitigating any health or consumer risks. The AI framework and SANDBOX Act ensure AI is defined by American values of defending human dignity, protecting free speech and encouraging innovation.”

The SANDBOX Act focuses on the temporary suspension of federal AI rules as opposed to a multi-year ban on new state AI laws.

During the Republican budget reconciliation process that culminated in the passage of HR 1, Trump’s “One Big Beautiful Bill,” Cruz and Sen. Marsha Blackburn, R-Tenn., introduced a 10-year moratorium on state AI laws. The provision was ultimately withdrawn because it lacked the support needed for passage, Cruz said onstage at the POLITICO AI Forum earlier this month.

Randi Siegel, partner at Manatt Health, explained the state AI moratorium fight during a recent webinar hosted by the professional services firm.

“This moratorium faced bipartisan backlash over concerns that if we were essentially going to have a moratorium, the federal government needs to fill that vacuum and have some sort of comprehensive AI regulation to protect children and against deep fakes and some other risks related to artificial intelligence,” Siegel said. 

Cruz’s office told Fierce Healthcare that the SANDBOX Act is one part of an AI framework the Senate Commerce Committee chair is working on. A staffer pointed to Cruz’s statement at the forum, where he said he is still championing a state moratorium on AI laws.

Coalition for Health AI CEO Brian Anderson publicly affirmed his support for the SANDBOX Act in an opinion article published by The Hill last week.

In the piece, titled "The Sandbox: Finally Washington takes action to fix healthcare," Anderson wrote that AI sandboxes could help bolster public trust in health AI tools.

“This bill marks a major step forward in addressing the deep dysfunctions within our healthcare system,” Anderson wrote. “It will create 'regulatory sandboxes' for AI developers to test and launch new technologies under clear conditions, with accountability for safety and risk. This will be especially valuable for startups, giving them a clearer path to bring innovations to the people who need them most.”

AI startup leaders and legal experts at Manatt Health offered a more skeptical view of the policy framework. They argued that the sandboxes don’t solve the major issues facing health AI developers and implementers.

Siegel noted that healthcare innovators are struggling with the growing patchwork of state AI laws cropping up in the absence of a comprehensive federal framework, a proliferation that is difficult for startups with small compliance teams to keep up with.

“I’m not sure it's super beneficial to a lot of innovators, because it's not preempting state law … It's only an alternative to complying with federal regulations,” Siegel said during the webinar. “If you are playing in the sort of FDA-regulated space, or the gray area where you're not really sure if you're FDA-regulated or not, this provides a pathway for you to potentially innovate in a less costly way, without having to go through the approval process. But it's definitely not solving the patchwork issue … I'm not sure how impactful it's going to be from a macro innovation perspective.”

Troy Bannister, founder and CEO of compliance startup OnboardAI, said many AI startups, including his company, are not subject to FDA rules, which limits the utility of AI sandboxes at the federal level.

“A lot of startups—I'd say 90% plus of startups—are falling in this kind of clinical decision support category, where they're saying, ‘We are not making clinical decisions. We are just recommending things.’ And so we're kind of exempt from this kind of 510(k) FDA zone, and that's where everybody's starting right now. And so they're kind of outside of the FDA zone, which then puts the onus back on the hospital or implementer to do all the diligence,” Bannister said.

Some clinical decision support tools do require FDA clearance to be sold commercially. Mark Sendak, co-founder and CEO of Vega Health, drew on his experience at Duke Health deploying COVID-19 tests in schools. The speed with which the tests were developed, though necessary given the ongoing pandemic, made educators skeptical about using them.

He said the same dynamic applies to carving out exceptions to the rules for AI.

“I think it's really hard to embrace the premise that carve-outs of regulations somehow give you an opportunity to build trust with innovation,” Sendak said. “I think it can actually put people in a very defensive posture when considering how to use the tools.”

Demetri Giannakopoulis, chief AI officer at radiology startup RadAI, cautioned that developers still need to ensure their products are safe for use in healthcare even if federal regulations are waived.

“That does put a lot of the requirements on local sites, frankly, or ideally, they'll probably partner with Vega, maybe Onboard, one of these solutions to make sure that they're not putting themselves in a position where they've taken on undue risk or undue liability by leveraging these tools,” Giannakopoulis said.