The life sciences sector plays a crucial role in society, encompassing basic sciences, applied sciences, and translational research. It includes medical devices, pharmaceuticals, and healthcare services, which are increasingly integrated with artificial intelligence (AI) to enhance diagnostic accuracy, personalise treatments, and improve patient outcomes.

The life sciences sector in the European Union (EU) is regulated by a comprehensive set of laws and regulations designed to ensure the safety, efficacy, and quality of medical products, devices, and services, including:

  1. Regulation (EU) 2017/745 on Medical Devices (MDR) and Regulation (EU) 2017/746 on In Vitro Diagnostic Medical Devices (IVDR), which govern the placing on the market and putting into service of medical devices and in vitro diagnostic medical devices, ensuring their safety and performance.
  2. Directive 2001/83/EC on the Community Code Relating to Medicinal Products for Human Use, which outlines the requirements for the marketing authorisation of medicinal products.
  3. Regulation (EU) No 536/2014 on Clinical Trials on Medicinal Products for Human Use, which harmonises the rules for conducting clinical trials in the EU.

The AI Act

The recently approved AI Act will also apply in this sector, aiming to create a legal framework for AI that ensures safety, transparency, and fundamental rights protection while fostering innovation. Key articles relevant to the life sciences sector include:

Article 5 of the AI Act prohibits AI practices that manipulate human behaviour to cause harm or exploit vulnerabilities of specific groups. In the life sciences context, this intersects with ethical considerations in medical research and treatment. For instance, AI systems used in clinical trials or personalised medicine must not manipulate patients' decisions or exploit their vulnerabilities, aligning with ethical principles outlined in the MDR, IVDR, and the Clinical Trials Regulation.

Article 6 classifies AI systems that are, or are safety components of, medical devices subject to third-party conformity assessment as high-risk. This directly intersects with the MDR and IVDR, which classify medical devices based on their risk to patients. AI systems incorporated into medical devices must comply with both the AI Act and the medical device regulations, ensuring they undergo rigorous conformity assessments pursuant to the sector-specific regulations and continuous monitoring.

Article 9 requires providers of high-risk AI systems to establish a risk management system. This aligns with the MDR and IVDR requirements for risk management and a quality management system (QMS). The QMS must address the risks associated with using AI in medical devices, including data integrity, algorithm bias, and system reliability.

Article 10 emphasises the need for high-quality datasets, data governance, and data management practices for high-risk AI systems. This requirement intersects with the GDPR, which mandates robust data protection measures, particularly when processing health data. AI systems in the life sciences must ensure data accuracy, completeness, and compliance with data protection regulations, safeguarding patient privacy and data security.

Article 15 of the AI Act requires high-risk AI systems to achieve an appropriate level of accuracy, robustness, and cybersecurity, including resilience against attacks and unauthorised access. In the life sciences context, this requirement aligns with the MDR and IVDR, which require manufacturers to implement a risk management system throughout the lifecycle of medical devices, including for cybersecurity risks. Cybersecurity is paramount in this sector, given the sensitive nature of health data and the critical function of medical devices, and the GDPR likewise mandates appropriate technical and organisational measures to ensure data security and protect against cyber threats.

Article 11 requires comprehensive technical documentation for high-risk AI systems, including a description of the cybersecurity measures put in place. This requirement is mirrored in the MDR and IVDR, both of which require technical documentation covering cybersecurity aspects, ensuring that medical devices are secure by design and throughout their intended use.

It should also be noted that the obligations applicable to general-purpose AI models apply where such models are deployed in this sector, and that providers may make use of the AI regulatory sandboxes for testing.

The life sciences sector is at the forefront of integrating AI and technology to improve healthcare outcomes. Adhering to comprehensive regulatory frameworks, ensuring data protection, and maintaining cybersecurity are crucial to fostering innovation while safeguarding public health and safety. Balancing regulatory compliance with innovation poses challenges for AI developers and medical device manufacturers. The intersection between the AI Act and sector-specific regulations highlights the need for a harmonised approach to AI governance in healthcare.

As AI continues to evolve, ongoing collaboration between market surveillance authorities, including the market surveillance authority designated under the AI Act, other regulatory authorities, industry stakeholders, and the public will be essential to navigate the complexities of this dynamic sector.

This article by Dr Ian Gauci was first published in The Times of Malta on the 9th June 2024.

 

Disclaimer: This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.