WHO releases AI ethics and governance guidance for large multi-modal models

Geneva – The World Health Organization (WHO) is releasing new guidance on the ethics and governance of large multi-modal models (LMMs) – a type of fast-growing generative artificial intelligence (AI) technology with applications across health care.

The guidance outlines over 40 recommendations for consideration by governments, technology companies, and health care providers to ensure the appropriate use of LMMs to promote and protect the health of populations.

LMMs can accept one or more types of data input, such as text, videos, and images, and generate diverse outputs not limited to the type of data inputted.

LMMs are unique in their mimicry of human communication and ability to carry out tasks they were not explicitly programmed to perform. LMMs have been adopted faster than any consumer application in history, with several platforms – such as ChatGPT, Bard and Bert – entering the public consciousness in 2023.

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr Jeremy Farrar, WHO Chief Scientist. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”

The World Health Organization (WHO) has issued new guidance emphasizing the diverse applications of large multi-modal models (LMMs) for health-related purposes. These applications span five broad categories, including diagnosis and clinical care, where LMMs play a role in responding to patients’ written queries.

Patient-guided use is another domain, with LMMs employed to investigate symptoms and treatment options. Additionally, these models contribute to clerical and administrative tasks by documenting and summarizing patient visits within electronic health records.

The guidance recognizes their significance in medical and nursing education, offering trainees simulated patient encounters. Lastly, LMMs contribute to scientific research and drug development by aiding in the identification of new compounds. 

The WHO underscores the potential of large multi-modal models across these domains, showcasing their versatility and impact on various aspects of the healthcare ecosystem.

While LMMs are starting to be used for specific health-related purposes, there are also documented risks of producing false, inaccurate, biased, or incomplete statements, which could harm people using such information in making health decisions. 

Furthermore, LMMs may be trained on data that are of poor quality or biased, whether by race, ethnicity, ancestry, sex, gender identity, or age.

The guidance also details broader risks to health systems, such as accessibility and affordability of the best-performing LMMs. 

LMMs can also encourage ‘automation bias’ among health care professionals and patients, whereby errors that would otherwise have been identified are overlooked, or difficult choices are improperly delegated to an LMM. LMMs, like other forms of AI, are also vulnerable to cybersecurity risks that could endanger patient information or the trustworthiness of these algorithms and the provision of health care more broadly.

To create safe and effective LMMs, WHO underlines the need to engage various stakeholders – governments, technology companies, healthcare providers, patients, and civil society – in all stages of the development and deployment of such technologies, including their oversight and regulation.

“Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs,” said Dr Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.

The World Health Organization (WHO) has issued comprehensive recommendations for governments and developers regarding the development and deployment of large multi-modal models (LMMs) in medicine.

Governments are urged to invest in not-for-profit or public infrastructure, offering accessible computing power and public datasets to developers across sectors while adhering to ethical principles. 

They are further advised to implement laws, policies, and regulations ensuring ethical obligations and human rights standards in LMMs used for healthcare. 

Governments should assign regulatory agencies to assess and approve LMMs, introducing mandatory post-release auditing and impact assessments. Developers are encouraged to engage diverse stakeholders, including potential users, medical providers, and patients, in transparent design processes. 

The guidance emphasizes that LMMs must be designed for well-defined tasks with accuracy and reliability, aligning with the improvement of health systems and patient interests. 

These recommendations aim to foster ethical, inclusive, and responsible development and use of LMMs in the healthcare domain.