OpenAI has launched a suite of tools for healthcare enterprises.
The idea behind OpenAI for Healthcare is to help organizations deliver more consistent, high-quality care while maintaining HIPAA compliance, according to a company press release. The suite includes ChatGPT for Healthcare. The announcement follows OpenAI’s ChatGPT Health, also launched this week, which helps consumers understand their health information and get personalized answers to medical questions.
ChatGPT for Healthcare is a workspace for researchers, clinicians and administrators, powered by GPT-5 models that went through doctor-led testing. The workspace allows for integrations with enterprise tools like Microsoft SharePoint. It was not immediately clear if the tool is interoperable with EHR systems.
The tool’s responses draw on millions of peer-reviewed studies, public health guidance and clinical guidelines, and will offer clear citations, the announcement pledged. The press release did not say whether guardrails are in place to prevent hallucinated sources.
Early adopters already rolling out ChatGPT for Healthcare include Boston Children’s Hospital, Cedars-Sinai Medical Center, Stanford Medicine Children’s Health, AdventHealth, HCA Healthcare, Baylor Scott & White Health, Memorial Sloan Kettering Cancer Center and University of California, San Francisco (UCSF).
“At Cedars-Sinai, we are applying AI to transform care by reducing administrative burdens, augmenting clinical reasoning and providing our care teams with more time for meaningful patient connection,” Craig Kwiatkowski, PharmD, senior vice president and chief information officer at Cedars-Sinai, said in the announcement. “Working with OpenAI on tools like ChatGPT for Healthcare will allow us to accelerate and scale this work, bringing trusted capabilities into daily workflows while building confidence across our workforce.”
Early work with a custom OpenAI-powered solution allowed Boston Children’s to “prove value in a secure environment” and “establish strong governance foundations,” John Brownstein, senior vice president and chief innovation officer at the hospital, said in the press release. “ChatGPT for Healthcare offers a path toward operational scale, providing an enterprise-grade platform that can support broad, responsible adoption across clinical, research, and administrative teams.”
The workspace also offers templates for common tasks like drafting discharge summaries, patient instructions, clinical letters and prior authorization support. It can be used to summarize recommended care pathways based on medical evidence and institutional guidance, or to consider differential diagnoses and their likelihood.
Crucially, patient data and protected health information (PHI) will remain under an organization’s control, with options for data residency, audit logs, customer-managed encryption keys and a Business Associate Agreement (BAA) with OpenAI to support HIPAA-compliant use. Content shared with ChatGPT for Healthcare is not currently used to train models, per the announcement.
Also launched is OpenAI API for Healthcare, a platform that lets developers embed the latest models, including GPT-5.2, directly into healthcare systems and workflows. Eligible customers can apply for a BAA to support HIPAA compliance. The APIs can be used to build healthcare apps for tasks like patient chart summarization, care team coordination and discharge workflows. Examples of companies using OpenAI APIs for healthcare cited in the announcement were Abridge, Ambience and EliseAI.
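For readers curious what embedding these models in a workflow might look like in practice, below is a minimal, hypothetical Python sketch of a discharge-summary drafting helper built on the standard OpenAI chat-completions API. The model identifier, prompt wording and function name are illustrative assumptions, not details from the announcement; a real deployment would require a BAA and must not send PHI without the safeguards described above.

```python
# Hypothetical sketch of a discharge-summary drafting helper.
# The model name "gpt-5.2" and the prompt text are assumptions for
# illustration only; they are not taken from OpenAI's announcement.

def build_discharge_summary_request(chart_notes: str) -> dict:
    """Assemble a chat-completions payload asking the model to draft
    a discharge summary from de-identified chart notes."""
    return {
        "model": "gpt-5.2",  # assumed model identifier
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a clinical documentation assistant. "
                    "Draft a concise discharge summary from the notes "
                    "provided and flag anything requiring clinician review."
                ),
            },
            {"role": "user", "content": chart_notes},
        ],
    }

# To send the request (requires `pip install openai` and an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       **build_discharge_summary_request("De-identified notes here..."))
#   print(resp.choices[0].message.content)
```

The payload-building step is separated from the network call so that prompts can be reviewed and logged before any data leaves the organization, in the spirit of the audit-log and governance controls the announcement describes.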
The announcement did not surprise Peter Bonis, M.D., chief medical officer of Wolters Kluwer Health, a global provider of evidence-based medical information. While tech holds promise to relieve challenges, "the stakes are exceptionally high in healthcare," Bonis said. Possibly misleading or incorrect answers are a serious concern for both providers using LLMs and patients, Bonis told Fierce Healthcare over email in reaction to the news. "Whether OpenAI's applications prove safe and effective over time is uncertain," he wrote.
Wolters Kluwer, a 30-year-old brand with millions of users, took its time cautiously developing a genAI clinical decision support tool. Its answers draw only from the company's expert-authored, peer-reviewed content, and the tool is closed off from the broader web, which the company says eliminates the chance of hallucination. The outputs cite sources and the step-by-step rationale used to generate them.
OpenAI has partnered with over 260 doctors across 60 countries in the last two years to evaluate GPT-5.2 model performance using real clinical scenarios. The group has reviewed over 600,000 model outputs in 30 areas of focus. Two different OpenAI benchmarking tools find that OpenAI models outperform competitors and human baselines.
Despite this, cautioned Bonis, validation of models is a major undertaking and 260 clinicians and 600,000 outputs "may not be nearly sufficient to anticipate and correct the spectrum of gross and subtle errors that can occur" with AI. Bonis also pointed out that consensus has not been reached on optimal benchmarks to assess model performance, and that OpenAI's benchmarking tool HealthBench offers a limited perspective on clinical validity over time.