AMA releases AI governance toolkit for health systems

The American Medical Association (AMA) has released new guidance for health systems looking to develop and implement artificial intelligence within their organizations.

The new Governance for Augmented Intelligence toolkit, built in collaboration with Manatt Health, is an eight-step module. It guides health systems from the initial steps of establishing executive accountability and governance structure through policy development, vendor evaluation, oversight and organizational readiness for the launch of new AI tools.

The toolkit also includes worksheets, sample forms, example AI policy documents and other resources for organizations to reference and can be completed by physicians for Continuing Medical Education credit.

“There is excitement about the transformative potential of AI to enhance diagnostic accuracy, personalize treatments, reduce administrative and documentation burden, and speed up advances in biomedical science,” the toolkit reads. “At the same time, there is concern about AI's potential to worsen bias, increase privacy risks, introduce new liability issues and offer seemingly convincing yet ultimately incorrect conclusions or recommendations that could affect patient care.

“Establishing AI governance is important to ensure AI technologies are implemented into care settings in a safe, ethical and responsible manner.”

Recent survey data suggest that health system leaders are eyeing AI and that the technology is in use across most organizations, either as a pilot program or for less-structured internal use cases such as scribing. That said, relatively few organizations report fleshed-out governance or strategy, including around issues such as data use or vendor vetting.

Of note, the AMA’s toolkit outlines areas that a health system’s AI policy “should, at minimum, articulate.”

Among these are descriptions of the permitted uses of approved and publicly available AI tools “such as using AI for developing drafts of marketing materials, patient-facing communications, project outlines and research summaries” as well as prohibited AI uses “such as entering patients’ [personal health information] into publicly available AI tools.”

Other must-haves for a health system’s AI policy include transparency guidelines, policies on how long certain information will be retained and AI trainings “for everyone in the organization who uses AI.”

From there, health systems should review their existing policies and procedures to determine whether any changes or cross-references to the AI policy are necessary, which could be relevant for policies on informed consent, data security, contracting and antidiscrimination. Any working group focused on these changes should also be aware of any relevant federal and state laws, the AMA and Manatt Health advised in the toolkit.

“Technology is moving very, very quickly,” said AMA Chief Medical Information Officer and Vice President of Digital Health Innovations Margaret Lozovatsky, M.D. “It’s moving much faster than we’re able to actually implement these tools, so setting up an appropriate governance structure now is more important than it’s ever been because we’ve never seen such quick rates of adoption.”

Similar to its other advocacy and policy work, the physician association’s toolkit primarily refers to AI as “augmented intelligence,” reflecting its view of AI as a supplement to healthcare professionals rather than a replacement. The group has also broadly lobbied in favor of additional regulatory guidance for AI, including generative AI, in healthcare, against new physician liability for the use of AI and for guardrails on payers’ use of AI for prior authorization.

Industry, investors and government have made clear their interest in expanding AI adoption and development.

AI-enabled startups captured 62% of digital health venture capital dollars, or nearly $4 billion, during the first six months of 2025 alone. A recently published AI Action Plan from the White House outlines steps to promote the technology, chiefly deregulation, standards adoption and a series of regulatory sandboxes to test commercial AI innovations.