After weeks of back-and-forth around a high-profile copyright case, a federal judge approved Anthropic’s offer to pay $1.5 billion to settle claims that it used pirated materials to train its generative AI models. The outcome shapes how AI models are trained, and how much we can trust them.
The case has brought three main questions about fair use into focus:
- Can platforms take and transform internet content? Courts say yes today. But how do platforms check whether their sources are accurate?
- What if the original source is pirated? This is why we’re seeing billion-dollar settlements.
- What if AI outputs compete with original sources? Is that still fair use?
These questions matter everywhere. But in healthcare, they are a matter of life and death.
Meet your doctor’s new assistant
Your doctor regularly looks things up to help manage your care. This practice now often involves gen AI or an AI chatbot. The Anthropic copyright case highlights a crucial issue: How accurate are these models if they are trained on the entire internet, a chaotic mix of credible and false information, including pirated (and sometimes out-of-date) copies of meticulously researched and peer-reviewed content?
It should be no surprise that they are regularly wrong. Here are real mistakes made by AI tools doctors are using today:
- Recommending surgery on the carotid artery for a patient who did not need it
- Noting that a specific antidepressant could be safely stopped cold turkey when abrupt withdrawal was known to be hazardous
- Advising against influenza vaccination in a patient with an egg allergy even though it was safe to vaccinate the patient
- Proposing the wrong starting treatment for a patient with rheumatoid arthritis
- Recommending drugs known to cause fetal harm without asking whether the patient was pregnant
- Advising that an appropriate drug for high blood pressure be avoided in a patient who required it
The "move fast and break things" ethos of Silicon Valley is profoundly dangerous in medicine. In healthcare, an AI error isn’t just a bug; it's a potential tragedy. Many popular gen AI models do not reliably synthesize credible sources of complex (and at times conflicting) medical information, nor can they replicate the deep expertise of clinical experts to produce reliable and accurate responses.
A growing body of research highlights the challenges of using “off-the-shelf” gen AI for clinical reasoning:
- Models present fabricated "facts" (hallucinations) with confidence
- They give wrong or outdated information that even experts might miss
- They struggle with messy, real patient data that arrive over time
- They often perpetuate and amplify cognitive biases hidden in their training data
Everyone loves short, quick answers, but overly efficient chatbot answers can strip away crucial context, encouraging a dangerous autopilot mindset among clinicians known as "automaticity": following advice without proper consideration. Emerging data also suggest that overdependence on gen AI could lead to deskilling of the medical profession.
Winning the AI race depends on effective copyright policy
The evolution of the AI economy depends on AI applications that are "fit for purpose." They must deliver real impact to achieve sustainable value. Initial data suggest there is a gap.
A recent MIT study found that 95% of corporate AI projects fail to produce a return on investment because generic models are not good enough. Similar data have been published in healthcare. Successful implementation of AI applications requires deep collaboration between developers and domain experts, with immense consideration given to how the tools integrate into user workflow. In high-risk domains like healthcare, extensive and ongoing testing is essential to ensure the models are safe and effective over time.
Despite these considerations, U.S. policy and the Anthropic case have drifted toward the view that training AI on existing content is "fair use." One argument is that requiring AI developers to license content is impractical and would put the U.S. behind countries without such requirements.
Here’s the critical point: Protecting intellectual property won’t make us lose the AI race; it is a compelling strategic advantage. The next leap in AI capability, especially in specialized fields like healthcare, requires access to proprietary, curated data and deep collaboration across stakeholders. Creators of high-value data will not give it away to train the very models that threaten their existence. They will, however, become enthusiastic partners in an ecosystem where their work is valued. This creates a sustainable cycle: AI companies get the fuel they need, creators are funded to continue their vital research, and society benefits. This is the true path to winning the future and, in the meantime, possibly saving our lives.
Peter Bonis, M.D., is chief medical officer at Wolters Kluwer Health.