World Health Organization urges governments to clarify liability for AI in healthcare

AI adoption in healthcare is outpacing legal guardrails, the World Health Organization (WHO) has warned. The United Nations agency is calling on countries to develop AI strategies that align with public health goals; invest in AI workforce skills; and strengthen legal and ethical safeguards. This includes clarifying who is responsible if an AI system makes a mistake or causes harm.
“We stand at a fork in the road,” said Dr Natasha Azzopardi-Muscat, director of health systems, WHO/Europe. “Either AI will be used to improve people’s health and wellbeing, reduce the burden on our exhausted health workers and bring down healthcare costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care. The choice is ours.”
Based on survey responses from 50 of the 53 member states in the WHO European region, which includes the UK, the agency's study found that 32 countries (64%) are already using AI-assisted diagnostics, particularly in imaging and detection. Half of the countries in the region have introduced AI chatbots for patient engagement and support, while 26 (52%) have identified priority areas for AI in health, though only a quarter have allocated funding to implement them.
Read more: We must drive the value out of digital investments, NHS chief Jim Mackey tells Newcastle conference
Responsibility for harm
Despite this progress, fewer than one in 10 countries (8%) have liability standards for AI in health, which determine who is responsible if an AI system makes an error or causes harm. Further, almost nine out of 10 countries (86%) say legal uncertainty is the primary barrier to AI adoption, followed by financial constraints (78%).
“Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong,” said Dr David Novillo Ortiz, regional advisor on data, artificial intelligence and digital health. “That’s why WHO/Europe urges countries to clarify accountability, establish redress mechanisms for harm, and ensure that AI systems are tested for safety, fairness and real-world effectiveness before they reach patients.”
Asked about their top motivations for adopting AI in health, countries most frequently cited improving patient care (98%), reducing workforce pressures (92%), and increasing efficiency and productivity (90%).
The study found that only four countries (8%) have a dedicated national AI strategy for health, and a further seven (14%) are developing one.
“AI is on the verge of revolutionising healthcare, but its promise will only be realised if people and patients remain at the centre of every decision,” said Dr Hans Henri P. Kluge, WHO regional director for Europe. “The choices we make now will determine whether AI empowers patients and health workers or leaves them behind.”
WHO cited several examples of countries integrating AI into healthcare, such as Estonia linking electronic health records, insurance data and population databases into a unified platform that now supports AI tools; Finland investing in AI training for health workers; and Spain piloting AI for early disease detection in primary healthcare.
Event: As part of the Innovation 2026 event programme, Global Government Forum is bringing together leaders from across the NHS and wider public sector to discuss how to achieve the 10 Year Health Plan. Find out more
The AI ‘wild west’
Global Government Forum recently published a research study on digital transformation in the NHS in England, based on interviews with chief digital and information officers in trusts.
It found that many trusts are using or exploring AI across both back-office and clinical domains. Some, but not all, trusts said they have formal AI policies in place, and some have put measures such as AI and data ethics committees and AI working groups in place.
Several acknowledged a gap between experimentation and governance: “It feels very wild west”, one said, and another summarised their approach as: “We’re now trying to get our arms around AI before it gets its arms around us.”
Download the research study: A Fresh Mandate for Digital Leadership in the NHS
AI was also discussed during a launch event for the report.
Jim Mackey, chief executive of NHS England, said that often there can be “an attraction to single things that are going to fix all our problems, and AI feels like one of them” but “it’s more complicated than that”.
He said the NHS must “socialise” AI more broadly across clinical and operational settings: “We’ve got to find some common ground. It’s not going to change everyone’s life tomorrow, nor is it so dangerous that we shouldn’t touch it.”
“We just have to find ways of bringing it to life” to show the benefits, he added.
Dr Birju Bartoli, chief executive at Northumbria Healthcare NHS Foundation Trust, said that many patients already expect that NHS organisations are using technology such as AI but stressed that public confidence will hinge on openness and communication.
“Being upfront about why we’re using [AI] and what the checks and balances are – that’s what will build confidence,” she said. “Ultimately, patients want to be seen quickly, have a good outcome and an honest conversation.”
She said that as long as AI helps improve the patient experience, it can play a role.
Sign up: The Global Government Forum newsletter provides the latest news, interviews and features on AI, data, workforce, and sustainability in government
