
The World Health Organization (WHO) has issued a statement urging caution in the use of artificial intelligence (AI)-generated large language model (LLM) tools, in order to protect human well-being, safety, autonomy, and public health. LLMs such as ChatGPT, Bard, BERT, and others have gained significant popularity and are being increasingly explored for their potential to support various health-related purposes.
While there is excitement surrounding the use of LLMs to improve access to health information, enhance decision-support systems, and even augment diagnostic capabilities, WHO emphasizes the need to carefully examine the risks associated with their implementation.
The organization highlights that the caution exercised with any new technology must be applied consistently to LLMs. This includes adhering to critical values such as transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.
One of WHO's primary concerns is potential bias in the data used to train AI models. If the training data is biased, a model can generate misleading or inaccurate health information, posing risks to health, equity, and inclusiveness.
Furthermore, LLMs can generate responses that appear authoritative and plausible to the end user yet are completely incorrect or contain serious errors, particularly in response to health-related inquiries.
WHO also highlights ethical concerns around LLM use, noting that these models may be trained on data for which consent was not explicitly obtained for such purposes. There may also be challenges in protecting the sensitive data, including health data, that users provide to an application to generate a response.
Another significant concern is the potential for LLMs to be misused to create and disseminate highly convincing disinformation in text, audio, or video form. Such disinformation can be difficult for the public to distinguish from reliable health content, leading to harmful consequences.
While WHO recognizes the potential of new technologies, including AI and digital health, to improve human health, the organization recommends that policy-makers ensure patient safety and protection while technology firms work to commercialize LLMs.
WHO stresses the need for rigorous oversight and emphasizes that clear evidence of benefit should be demonstrated before LLMs are widely used in routine healthcare and medicine, whether by individuals, care providers, health system administrators, or policy-makers.
To ensure the responsible development and deployment of AI for health, WHO underscores the importance of applying ethical principles and appropriate governance.
In its guidance on the ethics and governance of AI for health, WHO has identified six core principles: protecting autonomy; promoting human well-being, safety, and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable.
In summary, WHO’s call for caution and ethical consideration in the use of AI-generated LLM tools in healthcare highlights the need for careful examination of risks, adherence to ethical principles, transparency, and the prioritization of individuals’ well-being and safety.
The organization’s recommendations aim to guide policy-makers, healthcare professionals, and technology firms in leveraging AI responsibly and effectively in pursuit of improved healthcare outcomes.