09 Dec 2025

World Health Organization Urges Governments To Clarify Liability For AI In Healthcare

AI adoption in healthcare is accelerating faster than the legal and ethical frameworks needed to govern it, the World Health Organization (WHO) has warned. A new WHO/Europe analysis urges governments to create national AI strategies aligned with public health goals, invest in digital and AI workforce skills, and establish clear accountability rules—including determining who is responsible when an AI system makes an error or causes harm. Dr. Natasha Azzopardi-Muscat, director of health systems at WHO/Europe, said the world stands at a “fork in the road,” with AI poised either to relieve pressure on health workers and improve care or to heighten safety risks and deepen inequities.


Survey data from 50 WHO European region countries shows that 64% already use AI-assisted diagnostics—mainly imaging—and half have adopted AI chatbots for patient engagement. Yet only 8% have legal liability standards for AI in healthcare, and 86% cite legal uncertainty as a major barrier to adoption. Financial limitations were the second largest barrier. Dr. David Novillo Ortiz, regional advisor for AI and digital health, said clinicians may hesitate to rely on AI without clear rules, and patients may lack recourse if harmed. WHO recommends clarifying accountability, creating redress mechanisms, and ensuring AI systems are assessed for safety, fairness, and real-world performance.


Despite rapid adoption, very few countries have national AI health strategies: only four currently have one, and seven more are developing plans. Countries are motivated by improving patient care (98%), reducing workforce pressures (92%), and increasing efficiency (90%). WHO highlighted examples of emerging best practices, such as Estonia’s unified data platform enabling AI tools, Finland’s national AI training for health professionals, and Spain’s pilots for AI-based early disease detection.


The concerns raised by WHO echo findings from a recent Global Government Forum study on NHS England’s digital transformation. Many NHS trusts are experimenting with AI but lack consistent governance frameworks. Some described their environment as a “wild west,” working to impose guardrails before AI systems become too deeply embedded. NHS England chief executive Jim Mackey cautioned against seeing AI as a singular fix for systemic problems and emphasized the need to integrate AI thoughtfully across clinical and operational settings.


Public trust will rely heavily on transparency, said Dr. Birju Bartoli, chief executive at Northumbria Healthcare NHS Foundation Trust. Many patients already assume AI is being used, but confidence depends on clear communication about purpose, safety, and oversight. As long as AI demonstrably improves outcomes and patient experience, she noted, it has a meaningful role to play in the future of healthcare.
