Health Level Seven International (HL7), a global standards development body, is diving into artificial intelligence to create frameworks and standards for safe and interoperable AI in healthcare.
The nearly 40-year-old nonprofit organization maintains hundreds of standards for sharing electronic health information, including Fast Healthcare Interoperability Resources (FHIR). Historically, its standards have become part of federal regulation for health IT.
Because of the organization’s highly technical understanding of data, technology and clinical workflows, it is well positioned to work on standardizing the technical components of AI.
Daniel Vreeman, HL7’s chief standards development officer and inaugural chief AI officer, said the organization would likely build upon its existing platform standards like FHIR to create specifications for AI within those frameworks.
“We will definitely be creating, and have already created, some new specifications or guidance documents around the use of AI,” Vreeman said. “So for example, one that we have today describes sort of an AI and machine learning data life cycle. So best practices for thinking about data representation, the life cycle from model development to deployment. That's an example of a new specification. We have a project now that's kicking off … focused on AI transparency with FHIR. What are the best practices and techniques you can use when you're doing stuff with FHIR data in order to enable and enhance the transparency of AI solutions that are using it.”
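One way FHIR data can support the kind of transparency Vreeman describes is through provenance tracking. The sketch below, which is illustrative and not drawn from any published HL7 specification, builds a FHIR R4 Provenance resource recording that a clinical resource was generated by an AI model; the model name and target reference are hypothetical.

```python
import json
from datetime import datetime, timezone

# Hedged sketch: a FHIR R4 Provenance resource linking a model-generated
# resource back to the system that produced it, so downstream consumers
# can tell the output came from an AI model. The model name and the
# Observation reference are hypothetical placeholders.
provenance = {
    "resourceType": "Provenance",
    # The resource whose origin is being documented
    "target": [{"reference": "Observation/example-risk-score"}],
    # When the activity was recorded
    "recorded": datetime.now(timezone.utc).isoformat(),
    # The agent responsible for generating the target resource
    "agent": [{
        "who": {"display": "sepsis-risk-model v2.1 (hypothetical)"},
    }],
}

print(json.dumps(provenance, indent=2))
```

Because Provenance is a standard FHIR resource, any FHIR-capable system could store and query this traceability record without bespoke extensions.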
Some AI could lean on existing standards like SMART on FHIR to connect applications that hold FHIR data, CDS Hooks to plug into third-party decision support applications, or Clinical Quality Language (CQL) to share clinical logic, Vreeman said.
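To illustrate the CDS Hooks pattern Vreeman mentions, the sketch below assembles the JSON body a clinical system would POST to a third-party decision-support service when a patient chart is opened. The field names follow the CDS Hooks specification, but the FHIR server URL, patient ID, and user ID are hypothetical.

```python
import json
import uuid

def build_cds_hooks_request(patient_id: str, user_id: str) -> dict:
    """Minimal sketch of a CDS Hooks 'patient-view' request.

    The endpoint and identifiers here are illustrative; a real EHR would
    supply its own FHIR base URL and context values.
    """
    return {
        "hook": "patient-view",             # fired when a chart is opened
        "hookInstance": str(uuid.uuid4()),  # unique ID for this invocation
        "fhirServer": "https://ehr.example.org/fhir",  # hypothetical FHIR base URL
        "context": {
            "userId": user_id,
            "patientId": patient_id,
        },
    }

request_body = build_cds_hooks_request("example-patient-123", "Practitioner/abc")
print(json.dumps(request_body, indent=2))
```

The decision-support service would respond with "cards" — suggestions or warnings the EHR can surface to the clinician, which is what makes the hook model attractive for plugging AI advice into existing workflows.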
“The modern API-based representation of data, and the data model that FHIR has sort of established is for sure the future, and is more AI ready, AI friendly,” Vreeman said.
One of HL7’s key approaches to developing AI standards is ensuring models use high-quality data. Creating good models requires good data, Vreeman explained.
“Part of it has to do with tagging and identifying and allowing characterization of the data that goes into the model training, as well as making sure that you know outputs from the system are appropriately tagged … ensuring that the data representation at each of those steps is tracking and facilitating in a common way is going to help get us towards this idea of a transparent and a safe AI innovation wave,” Vreeman said.
He continued: “What HL7 can bring in is the specific techniques of the data representation for describing the cohort, giving you very fine-grained ability to provide that summary information about the key characteristics of that population, the exact demographics, the exact conditions that they had, concomitant medications in a standardized framework.”
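The cohort description Vreeman outlines could be expressed with FHIR's Group resource, which captures a population by its defining characteristics. The sketch below is illustrative, not an HL7-published profile: the cohort size is invented, and while the SNOMED CT and RxNorm codes shown are intended to denote type 2 diabetes and metformin, the specific structure is an assumption about how such a summary might look.

```python
import json

# Hedged sketch: a FHIR R4 Group resource summarizing a model-training
# cohort by condition and concomitant medication in a standardized way.
# Cohort size and code choices are illustrative.
cohort = {
    "resourceType": "Group",
    "type": "person",
    "actual": True,      # an enumerated real cohort, not a definitional group
    "quantity": 12450,   # hypothetical cohort size
    "characteristic": [
        {
            "code": {"text": "Condition"},
            "valueCodeableConcept": {
                "coding": [{
                    "system": "http://snomed.info/sct",
                    "code": "44054006",
                    "display": "Type 2 diabetes mellitus",
                }]
            },
            "exclude": False,   # members HAVE this characteristic
        },
        {
            "code": {"text": "Concomitant medication"},
            "valueCodeableConcept": {
                "coding": [{
                    "system": "http://www.nlm.nih.gov/research/umls/rxnorm",
                    "code": "6809",
                    "display": "metformin",
                }]
            },
            "exclude": False,
        },
    ],
}

print(json.dumps(cohort, indent=2))
```

Because each characteristic is a coded concept rather than free text, a consumer of the model card could machine-compare this training population against its own patient population.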
Two other fundamental principles for HL7 are open standards and collaboration. HL7 will convene stakeholders such as model developers, health systems, payers and policymakers to work on standards for AI.
The group plans to leverage some of its standing work groups and new parties to understand the key business and data exchange problems facing the public and private sectors on AI. Involving stakeholders in the standards creation process facilitates uptake of the standards once they are created, Vreeman said. The groups can also help pilot a standard to test it before it goes live.
“We actually need other parties in the ecosystem to come together to work on this,” Vreeman explained. “HL7 provides sort of a governance framework to help turn that collection of ideas into a standard that is fully transparent across its entire life cycle, so that anybody who's interested or would be potentially affected if this were … to be regulated in some way, anyone who's interested can participate in the process as it goes.”