16 Jan 2026

OpenAI and Anthropic Spotlight Governance Risks as Generative AI Enters Healthcare

OpenAI and Anthropic have announced new healthcare-oriented generative AI offerings, prompting renewed debate among healthcare leaders about governance, accountability, and patient safety as these tools move closer to clinical workflows. While the technologies promise efficiency gains and improved access to information, experts warn that their use introduces legal, ethical, and operational risks that health systems must address before widespread deployment.

OpenAI recently unveiled its Healthcare product suite, which includes ChatGPT for Healthcare, aimed at supporting patient care and reducing administrative burden. According to the company, health systems such as AdventHealth, Boston Children’s, Cedars-Sinai, HCA, Memorial Sloan Kettering, and Stanford Medicine are using the tool to integrate medical evidence into clinical work. Separately, electronic health record vendor Elation Health announced that it has integrated Anthropic’s Claude AI into its clinical insights program, reporting that physicians are receiving answers to clinical questions 61% faster.

As these tools are embedded into electronic health records and patient-facing applications, experts caution that risks extend beyond data privacy. The potential for large language models to generate inaccurate information, often delivered with a tone of diagnostic certainty, remains a key concern. “While data privacy is table stakes, the real governance challenge is the decoupling of certainty from accountability,” said Adam de la Zerda, CEO and founder of Visby Medical.

The absence of clear clinical liability for AI-generated recommendations raises questions about who bears responsibility when harm occurs. “This creates a dangerous gap where a patient might anchor on an authoritative-sounding AI summary that lacks professional nuance,” de la Zerda said. “Governance must ensure that we don’t replace human judgment with an algorithm that is confident, but ultimately unaccountable.”

Industry leaders emphasize the need for disciplined governance frameworks that define accountability before tools are deployed. “Success won’t come from model quality alone,” said Dr. Chase Feiger, CEO and cofounder of Ostro. “It’ll come from whether organizations bring real discipline around governance, accountability and how medicine actually works.”

Experts advise healthcare boards and executives to clarify who is accountable for AI-influenced decisions, how those decisions would be defended in malpractice or peer-review proceedings, and whether certain clinical domains should be restricted until accuracy and limitations are better understood. While generative AI may offer value before and after the point of care, stakeholders stress that its role within clinical decision-making must remain clearly bounded to avoid overreliance and unintended patient harm.