CHAI and Joint Commission release guidance document for health systems using AI

The Coalition for Health AI (CHAI) and the Joint Commission released a guidance document that outlines the key tenets of organizational management for the use of artificial intelligence as health systems move to rapidly adopt the technology.

The guidance document does not give many specifics on how hospitals and health systems can actually implement the principles, which include ongoing monitoring and patient privacy protections. Those details are promised in a series of playbooks the two organizations will co-develop with industry stakeholders.

The playbooks will tee up a voluntary AI certification program offered by the Joint Commission next year. 

The Joint Commission, founded in 1951, is one of the largest accreditors of hospitals and health systems in the country. Its trust and credibility were the key reason CHAI entered into the partnership, CHAI CEO Brian Anderson, M.D., said in an interview with Fierce Healthcare.

CHAI and the Joint Commission announced the start of their partnership in June, and the Responsible Use of AI in Healthcare (RUAIH) guidance document is the first product of the collaboration. Anderson said the document, though high-level, is designed to provide transparency into the work between the two organizations.

It will also act as a “forcing function” for community feedback, which is an important piece of the development process for the upcoming Joint Commission playbooks on AI. 

“This is our attempt to get some of the initial feedback out and some of the initial findings of the workshops and the conversations that we're having, and then just starting a much more robust, more intentional set of feedback and conversations with our health system members,” he said.

CHAI is leading the work on the playbooks, including the direction of the content, a Joint Commission spokesperson said in an email. 

Anderson said the playbooks will be tailored to different sizes of healthcare delivery organizations, which have differing amounts of resources with which to deploy and monitor AI. The working groups that CHAI will convene to develop the playbooks will be segregated by type and size of organization, he said. 

Anderson said there will be an effort to include federally qualified health centers in the discussions.

“The real challenge I put to our team is to build these playbooks in such a way that they meet individual health systems where they're at,” Anderson said. “Because we do not want to create playbooks that are only going to be supporting major big health systems. The intentional effort here, for example, is to include specific working groups with federally qualified health centers. And as we think about, for example, the use of AI that is very unique to them—Medicaid use cases.”

Organizations will meet online through a series of workshops and webinars CHAI will host between now and the end of the year. They can also provide feedback through a submission form on CHAI’s website.

The Joint Commission said the voluntary certification program will not be separated by size of institution, though the size and resources of the applicant will be taken into account. 

The guidance recommends health systems have a formal AI governance structure led by a designated individual with appropriate experience in technology or healthcare. Health systems should develop policies and procedures for the review, procurement and implementation of AI, and they should put protocols in place for safety, risk and privacy.

To build consumer trust, health systems should have a mechanism to disclose the use of AI and should educate staff and patients about how the tools are being used and what their benefits are. Patients should be notified when AI directly impacts their care, and they should know how their data might be used by AI. Health systems should obtain consent for the use of AI when relevant, according to the guidance.

Health systems need to comply with HIPAA breach notification requirements if AI exposes patients’ data and establish business associate agreements with vendors. The framework also recommends specific data protection measures, such as encryption, strict access controls, regular security assessments and an incident response plan.

Ongoing quality monitoring is an important aspect of AI governance, the guidance document says, and it should be risk-based and scaled to the setting. Health systems should ask vendors how validation was performed, whether local validation is possible and how relevant biases were evaluated.

CHAI and the Joint Commission say health systems have a variety of options for ongoing quality monitoring, such as developing monitoring systems internally or establishing contractual agreements with third-party vendors, including their electronic medical record company.

Health systems should identify the parties responsible for ongoing local monitoring and evaluation of AI and engage in regular validation and quality testing. Importantly, healthcare organizations should establish lines of communication with vendors for any relevant performance updates.

The RUAIH guidance advocates for voluntary, blinded reporting of AI safety events to independent organizations such as patient safety organizations. Voluntary, confidential disclosure of adverse events can help the industry recognize harmful patterns and limit the need for regulation. CHAI touts its upcoming Health AI Registry, which enables voluntary reporting for quality improvement and accountability.

Provider organizations should request information from vendors on known risks and biases of AI solutions and ask how bias was evaluated and for which populations. Risk and bias should be assessed during local validation and as part of ongoing monitoring. CHAI promotes its Applied Model Card for vendors to identify known risks and biases.

The last category of the framework is education and training. Organizations should educate staff about the AI use cases deployed in their systems and should define and document how users will be given information about AI.

Staff and providers should know where they can access information about the tool and the organization’s AI procedures and policies. The health system should evaluate whether specific tools require additional training. The guidance recommends education initiatives on AI literacy and change management.