WHO outlines how to regulate artificial intelligence for health

The use of artificial intelligence (AI) in healthcare holds promise but creates challenges, and the World Health Organization has published a report setting out guidelines that governments can follow to avoid problems.

Artificial intelligence (AI) is an unstoppable phenomenon that can bring many advantages, but also numerous drawbacks if not used appropriately. Perhaps this is why the World Health Organization (WHO) has published a new report proposing a series of regulatory considerations for AI in health that it regards as fundamental.

In the report, WHO experts highlight the importance of establishing the safety and effectiveness of artificial intelligence systems, of making appropriate systems available quickly to those who need them, and of fostering dialogue among stakeholders, including developers, regulators, manufacturers, healthcare professionals and patients.

Pros and cons of using AI in the healthcare field

Artificial intelligence tools could transform the healthcare sector thanks to the increasing availability of healthcare data and the steady progress of analytical techniques, whether based on machine learning, logic or statistics. WHO recognizes the potential of AI to improve health outcomes through its contribution to clinical trials; to better medical diagnosis, treatment, self-care and person-centred care; and as a complement to the knowledge, skills and competencies of health professionals.


For example, AI could be beneficial in settings with a shortage of medical specialists, assisting, among many other applications, in the interpretation of retinal scans and radiological images. However, there is a ‘dark side’: AI technologies are being deployed too quickly, sometimes without a full understanding of how they work, which could benefit or harm end users, including healthcare professionals and patients.

When using health data, AI systems could have access to sensitive personal information, which makes it necessary to establish legal and regulatory frameworks that guarantee privacy, security and integrity, something the new WHO publication addresses.

“Artificial intelligence holds great promise for health, but also poses serious challenges, including unethical data collection, threats to cybersecurity, and amplification of bias or misinformation,” said Dr. Tedros Adhanom Ghebreyesus, Director-General of the WHO. “This new guidance will help countries regulate AI effectively to realize its potential, whether in cancer treatment or tuberculosis detection, while minimizing risks.”

AI systems are complex and depend not only on the code with which they are built, but also on the data with which they are trained, which comes from clinical settings and user interactions, for example. Better regulation can help manage the risk that AI amplifies biases present in its training data.

For example, it can be difficult for AI models to accurately represent the diversity of populations, leading to biases, inaccuracies or even failures. To help mitigate these risks, regulation can be used to ensure that the characteristics (such as gender, race and ethnicity) of the people represented in the training data are reported and that data sets are intentionally made representative.
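As an illustration of the kind of check such reporting enables, the sketch below compares the demographic make-up of a hypothetical training dataset against reference population figures. The column names, reference shares and threshold are assumptions for the example, not part of the WHO guidance.

```python
# Minimal sketch: flag demographic groups that are under-represented in a
# training dataset relative to a reference population. Column names,
# reference shares and the threshold are illustrative assumptions, not
# taken from the WHO publication.
import pandas as pd

# Hypothetical training data with a self-reported "sex" attribute.
train = pd.DataFrame({
    "patient_id": range(8),
    "sex": ["F", "M", "M", "M", "M", "F", "M", "M"],
})

# Assumed reference distribution (e.g. census figures for the target population).
reference = {"F": 0.51, "M": 0.49}

observed = train["sex"].value_counts(normalize=True)

for group, expected_share in reference.items():
    share = observed.get(group, 0.0)
    if share < 0.8 * expected_share:  # arbitrary 20% shortfall threshold
        print(f"Group {group!r} under-represented: {share:.0%} vs expected {expected_share:.0%}")
```

In this toy example the check would flag the "F" group, which makes up 25% of the dataset against an expected 51%; a real audit would of course cover more attributes and use domain-appropriate reference data.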

WHO proposals to regulate AI for health

The new WHO publication aims to set out key principles that governments and regulatory authorities can build on to develop new guidance, or adapt existing guidance, on AI at national or regional level, and it outlines six areas for the regulation of AI for health.

  • To build trust, the publication emphasizes the importance of transparency and documentation, for example, documenting the entire product lifecycle and tracking development processes.

  • For risk management, issues such as “intended use”, “continuous learning”, human interventions, training models and cybersecurity threats must be addressed comprehensively, with models that are as simple as possible.

  • Validating data externally and being clear about the intended use of AI helps ensure security and facilitate regulation.

  • A commitment to data quality, for example through rigorous evaluation of systems before launch, is vital to ensure that systems do not amplify biases and errors (a brief sketch of such a pre-launch check follows this list).

  • The challenges posed by important and complex regulations, such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the US, are addressed with an emphasis on understanding the scope of jurisdiction and consent requirements, in the service of privacy and data protection.

  • Encouraging collaboration between regulatory bodies, patients, healthcare professionals, industry representatives and government partners can help ensure that products and services comply with regulations throughout their lifecycle.
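To make the data-quality point above more concrete, here is a minimal sketch of the kind of pre-launch evaluation it suggests: measuring a model's accuracy separately for each demographic subgroup so that performance gaps are visible before deployment. The predictions, labels and group names are illustrative assumptions, not taken from the WHO publication.

```python
# Minimal sketch: report a model's accuracy per demographic subgroup before
# launch, so performance gaps are visible. Predictions, labels and groups
# are illustrative assumptions, not taken from the WHO publication.
from collections import defaultdict

# Hypothetical held-out evaluation set: (predicted label, true label, subgroup).
predictions = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 0, "group_a"),
    (1, 1, "group_b"), (0, 1, "group_b"), (0, 1, "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)
for predicted, actual, group in predictions:
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy {accuracy:.0%} over {total[group]} cases")
```

A gap such as the one this toy data produces (67% for group_a versus 33% for group_b) is exactly the kind of signal a regulator would expect to see investigated and addressed before a system reaches patients.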

Source: World Health Organization (WHO)
