
WHO Releases Its First-Ever Report Offering Guiding Principles for AI Use in Healthcare 

Fueled by ethics and privacy concerns about the use of AI in healthcare, the World Health Organization (WHO) has issued a report setting out six “guiding principles” for designing and using AI ethically and safely, to protect patients while improving their care.

The 165-page report, “Ethics and Governance of Artificial Intelligence for Health,” was unveiled this week as a guide compiled from the advice, research and recommendations of 20 experts in public health, medicine, law, human rights, technology and ethics.

Produced jointly by WHO’s Health Ethics and Governance unit in the department of Research for Health and by the department of Digital Health and Innovation, the document analyzes the benefits and challenges of AI. It also suggests policies, principles and practices that can guide AI’s use to benefit patients while avoiding misuse that would undermine human rights and the legal obligations of healthcare facilities.

According to WHO, the growing use of AI for health presents governments, healthcare providers and communities with choices about how to improve patient care while respecting and protecting ethical and privacy boundaries.

The report is the result of two years of work by WHO’s panel of appointed international experts and their support staff.

AI is already being used in some wealthy countries to improve the speed and accuracy of disease diagnosis and screening, assist in clinical care, strengthen health research and drug development and to help with disease surveillance, outbreak response and health systems management, according to the report. It could also enable resource-poor countries and rural communities, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services, said WHO.

But despite those potentially beneficial uses of AI, the report also cautions against overestimating the benefits of AI for health, especially when its use would be funded at the expense of other core healthcare investments and strategies needed by patients around the world.

WHO issued its first-ever report on AI in healthcare to help member nations deal with new and changing technologies, including AI and genome editing, while also examining their ethical and human rights implications, Rohit Malpani, a WHO consultant who helped write the report, told EnterpriseAI. “This is in line with WHO’s Thirteenth General Programme of Work, which emphasizes that WHO should be at the forefront of new technologies.”

The document was also produced because the uses of AI are proliferating rapidly, both as a standard of care for some diagnoses, and more broadly for other uses due in part to the COVID-19 pandemic, said Malpani. The pandemic has accelerated the use of AI for specific public health and medical functions including surveillance, outbreak response, and contact tracing, he added.

In addition, the significant public- and private-sector investment in healthcare AI requires oversight and engagement to ensure that the design and use of technologies such as AI are appropriate, he continued.


“Artificial intelligence presents a tremendous opportunity for all countries to improve their health systems to better meet the many challenges to health and well-being around the world,” he said. “As an organization, WHO is focused on engaging with these technologies to best enable and assist health care systems, providers and patients in achieving better outcomes. Yet we also recognize that all technologies, and especially AI, come with significant ethical challenges and risks. And so, we have put significant effort into producing this report because we recognize our responsibility to member states to make sure governments can achieve the right balance between encouraging and using such technologies.”

Six Principles for Ethical and Safe AI Use in Healthcare

Here are the six principles listed in the new WHO report:

  1. Protecting human autonomy. In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.
  2. Promoting human well-being and safety and the public interest. The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.
  3. Ensuring transparency, explainability and intelligibility. Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.
  4. Fostering responsibility and accountability. Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.
  5. Ensuring inclusiveness and equity. Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.
  6. Promoting AI that is responsive and sustainable. Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to use of automated systems.

These six principles will be used to guide future WHO work involving the use of AI in healthcare so that it can be used to benefit patients, while protecting their privacy and conforming to sound and fair ethics policies, according to the agency.

“We hope this report is just a first step in a process of building a collaborative approach with many other stakeholders to both keep up with this rapidly evolving field and ensure that ethics-related guidance remains relevant,” said Malpani. “The report includes forty-seven recommendations directed at designers, technology companies, civil society and government to implement the guidance. WHO is ready to work with all these stakeholders to put these ideas into practice.”

One of the biggest surprises throughout the process of compiling the report was that due in part to the COVID-19 pandemic, the uses of AI for health evolved dramatically, said Malpani. “This is one of the reasons the report is a living document to keep up with a field that is expected to rapidly evolve in the coming decade,” he said.

Fighting for AI Ethics Is a Growing Concern

As the uses of AI technologies expand daily across healthcare, enterprises and governments, concerns about the intrusion of AI into people’s everyday lives continue to grow, prompting deeper reflection on how AI ethics should be addressed and how a related issue, AI bias, should be identified and prevented.

To help address these growing concerns for enterprise and other AI users, more and more organizations have been examining these issues, including the software industry trade group BSA The Software Alliance. In June, BSA released its own report, “Confronting Bias: BSA’s Framework to Build Trust in AI,” which aims to offer a flexible, knowledge-based strategy that can be used across industries to deal with AI bias and related issues.

The BSA framework arrives just as the European Union has proposed strict regulations to govern AI, with similar discussions arising in the U.S. and other nations. BSA says it is also calling for the U.S. government to draft legislation and regulations that will mitigate the risk of bias in high-risk uses of AI, while encouraging both the government and the private sector to use its new framework as a guide.

The issue of bias in AI arises because AI systems analyzing personal data can yield unjustifiably less favorable or even harmful outcomes based on a person’s demographic characteristics, according to BSA. The framework is designed as an assurance-based accountability mechanism that AI developers and deployers can use to organize and establish roles, responsibilities and expectations for internal processes, as well as a training, awareness and education platform.
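
To make the bias concern concrete, here is a minimal, hypothetical sketch of one widely used audit statistic, the disparate impact ratio (a protected group’s rate of favorable outcomes divided by a reference group’s). It is not drawn from BSA’s framework; the function names and sample data are illustrative assumptions only.

    # Hypothetical illustration of a disparate impact check; not part of
    # BSA's framework. Records are (group, outcome) pairs, with outcome = 1
    # for a favorable decision (e.g., approved for a screening program).
    from collections import defaultdict

    def selection_rates(records):
        """Favorable-outcome rate for each demographic group."""
        totals, favorable = defaultdict(int), defaultdict(int)
        for group, outcome in records:
            totals[group] += 1
            favorable[group] += outcome
        return {g: favorable[g] / totals[g] for g in totals}

    def disparate_impact(records, protected, reference):
        """Protected group's selection rate over the reference group's;
        values well below 1.0 flag potentially biased outcomes."""
        rates = selection_rates(records)
        return rates[protected] / rates[reference]

    # Example audit: a model's decisions, labeled by group.
    decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    print(disparate_impact(decisions, protected="B", reference="A"))  # 0.5

A common rule of thumb, the “four-fifths rule,” treats ratios below 0.8 as a signal to investigate further.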

U.S. companies, however, need to start thinking now about the consequences of such regulations if they intend to do business with EU member nations in the future, so they can meet the requirements set there, according to BSA.
