As the United States moves beyond the first months of the second Trump Administration, artificial intelligence has emerged as a central focus across multiple areas of the narrowly Republican-controlled government, especially within the health sector. Developers and deployers of AI-enabled technologies continue to promote AI as an innovative tool for improving care delivery, while clinicians and patients express both enthusiasm and concern about its integration.
With public awareness expanding far beyond its science fiction origins, elected officials, regulators and executive agencies can no longer ignore the lasting influence of AI on healthcare and life sciences.
Congressional movement and administrative activity
The 118th Congress, under the Biden Administration, engaged AI policy with cautious optimism. Democrats emphasized risk mitigation, while Republicans positioned AI as a valuable resource for clinicians. In contrast, the Trump Administration has moved quickly, launching federal efforts despite divergent viewpoints from U.S. Department of Health and Human Services (HHS) agency leadership about balancing innovation with risk. Cybersecurity breaches, health data exposures and AI-related harms to children have triggered bipartisan concern.
While the Administration continues to express confidence in AI's promise, enacting effective policy remains difficult. Meanwhile, industry trends reflect accelerating adoption and acquisition throughout healthcare and life sciences.
OMB guidance
On April 3, 2025, the Office of Management and Budget (OMB) released two AI-focused memorandums, M-25-21 and M-25-22, with implications for HHS, including the U.S. Food and Drug Administration (FDA), Centers for Medicare and Medicaid Services (CMS), National Institutes of Health (NIH) and other subagencies. These memos also matter for those developing or deploying AI-enabled software or AI/machine learning (ML) tools with healthcare applications, as they provide substantial insight into the Trump Administration's positioning on AI applications broadly.
The memorandums repeal and replace guidance issued under the Biden Administration, though some key continuities remain, such as risk management for high-impact AI and an emphasis on fairness, ethics and governance in the procurement of AI tools and products. However, M-25-21 and M-25-22 place greater emphasis on transparency, encouraging innovation and America First policies.
Legislative and appropriations activity
Congressional proposals related to health AI are increasing but remain uncoordinated. Most bills target specific applications and fail to gain momentum due to competing budget priorities.
One notable proposal, ultimately rejected, called for a 10-year pause on federal AI regulation and a $500 million investment to modernize federal IT infrastructure. Though it promised flexibility for innovators, critics cited unacceptable risks to patients and clinicians. The proposal’s failure highlights Congress's continued difficulty in advancing meaningful AI legislation.
CMS and FDA activity
In May, CMS issued a Request for Information titled Improving Technology to Empower Medicare Beneficiaries. The agency sought input on how AI could help individuals understand coverage options, explore care pathways and evaluate costs. This effort was followed by plans to launch a national provider directory and introduce digital insurance cards. These initiatives indicate CMS's interest in leveraging AI for both administrative and patient-facing purposes, though implementation challenges remain.
Federal funding for AI in healthcare is shrinking. A proposed Medicare allocation for AI fraud detection was removed from budget considerations. This setback affects providers in states facing financial constraints and regulatory complexity. Many may delay AI investments due to uncertainty and limited resources.
In June, the FDA expanded its use of AI to help review medical product applications. Commissioner Makary introduced a general-purpose language model designed to reduce review times and identify discrepancies in sponsor submissions. While early results appear promising, questions remain about how the FDA will validate its own tools and ensure safe use aligned with its regulatory standards.
Transparency and public expectations
Transparency continues to shape AI discourse. Clinicians, administrators, patients and developers seek clarity about how AI systems are built, trained and evaluated. Growing demands for disclosures, similar to nutrition labels, signal a shift in public expectations. Many want access to performance metrics and assurances that AI tools are fair, safe and accountable.
State-level regulation and divergence
States are stepping into the policy vacuum. By March 31, 2025, 42 states had proposed legislation related to AI in healthcare, with six states enacting laws focused on transparency, public sector use and task force development. Absent federal preemption, states retain broad authority to regulate AI in healthcare, including establishing rules around patient safety, liability and disclosure. This decentralized approach may foster innovation but also leads to inconsistent standards and access across regions.
Looking ahead
The conversation about AI in health is expanding in scope. As tools evolve in complexity and capability, from diagnostic support to facial recognition and image analysis, legal and technical definitions must keep pace. AI is already reshaping patient experiences, clinical practices and market dynamics. With federal legislative efforts stalled and administrative actions subject to legal review, innovation continues at full speed. The essential question is not whether policymakers can keep up, but whether they will do so responsibly and equitably.
To ensure effective and ethical integration of AI in healthcare, those on the front lines—developers, deployers, clinicians and patients—must use clear, accurate language when discussing these technologies. They have a responsibility to engage policymakers by sharing firsthand insights, real-world use cases and patient-centered experiences.
If federal leadership defers to state-led regulation, it becomes even more critical for those closest to AI to shape the standards that will govern its use. By learning from past mistakes in the development of confusing (and occasionally contradictory) privacy policy both in the U.S. and abroad, we have the opportunity to create a more responsible and inclusive framework for the future of AI in healthcare.
Sarah Starling Crossan is a public affairs advisor in Holland & Knight's Washington, D.C. office who focuses on regulatory and legislative policies at the intersection of healthcare transformation and emerging technology.
John Vaughan is a public policy and regulation attorney in Holland & Knight's Century City office with a background in regulatory affairs, corporate governance and compliance across healthcare, life sciences, biotech and emerging technology.