Four in 10 healthcare professionals have encountered unauthorized artificial intelligence tools in their organizations, and 17% have used one, a new survey finds.
The survey (PDF)—released this week by Wolters Kluwer Health, which offers AI tools designed for use in healthcare—polled 518 healthcare professionals, including providers and administrators, in December 2025.
Specifically, 15% of docs in the survey admitted to using an unauthorized tool, and 19% of admins did. One in 10 respondents said they’ve used an unauthorized AI tool for direct patient care. The unauthorized use of AI tools is referred to as “shadow AI” in a report on the findings.
“Clinical and administrative teams want to adhere to rules surrounding AI usage, but if the organization hasn’t provided guidance or approved solutions, they’ll experiment with generic tools to improve their workflows,” the report said.
Inconsistent tool use can create security oversight gaps, the report noted, exposing orgs to security breaches or data privacy violations. For instance, the report cited a 2025 IBM study that found 97% of orgs that experienced an AI-related security incident lacked proper AI access controls. These issues can erode customer trust and prove costly, the report said.
Half of respondents in the survey cited the need for faster workflows as the reason they use unauthorized AI tools in the workplace. The two groups applied them differently: admins most often used the tools to speed up daily work, including data analysis, predictive analytics and admin tasks, while providers used them for data analysis, patient scheduling and patient engagement. Providers were more likely than admins to say they used unauthorized tools out of curiosity.
Administrators were found to be more than three times as likely as providers to be actively involved in healthcare AI policy development (30% compared to 9%). This suggests policy ownership is concentrated in hospital admin roles, per the report. Admins were also somewhat more likely to say they were very familiar with their organizations’ AI policies.
The survey also found nearly 90% of respondents believe AI will significantly improve healthcare in the next five years, with admins slightly more optimistic than providers. However, half of respondents named patient safety as a top AI risk, and nearly half cited privacy risks.
“Ultimately, addressing shadow AI is not about restricting access to productivity tools. Leaders must understand why teams are using unsanctioned tools and which challenges they’re trying to solve, and then identify enterprise-level tools that can accomplish these goals safely and securely,” the report said.