EU AI Act: Implications for Healthcare AI

In a previous blog, we discussed the recently approved EU AI Act and its framework for classifying and regulating Artificial Intelligence (AI) systems based on risk levels. In this blog, we will go deeper into how the Act applies to AI used within the healthcare sector, and how explainable AI may help achieve the safety and transparency goals set by the law.

AI promises to revolutionize healthcare. AI systems can analyze vast amounts of medical data to identify patterns invisible to the human eye, predict disease risk with greater accuracy, and even assist doctors in making diagnoses. This could lead to earlier interventions, more personalized treatment plans, and ultimately, improved patient outcomes.

However, with these benefits come significant risks that should not be ignored. Some of the key concerns are:

  • Patient Harm Due to Errors: AI systems are not infallible. Errors in data, algorithms, or implementation can lead to misdiagnoses, inappropriate treatment recommendations, and even harm to patients.
  • Perpetuating Bias in Healthcare: AI is only as good as the data it is trained on. Biases present in healthcare data (e.g. the absence of data about certain patient groups) can be amplified by algorithms, leading to unfair and discriminatory outcomes for those groups.
  • Lack of Transparency: Many AI systems, particularly complex deep learning models, are often referred to as "black boxes." Without understanding how an AI system arrives at a decision, it is difficult to assess its reliability and identify potential biases.
  • Data Privacy and Security: The use of AI in healthcare often relies on vast amounts of patient data. Ensuring the privacy and security of this sensitive data is crucial.
  • Accountability for AI-Driven Decisions: Who is responsible if an AI system makes a mistake that harms a patient? This question remains a complex issue, particularly in the healthcare field.

Regulations are needed to mitigate these risks, while taking advantage of the benefits AI can bring us.

How will Health AI be regulated?

The EU AI Act establishes regulations for AI development and use across various sectors, including healthcare. As with all other systems, AI systems used in healthcare will be classified and regulated according to their risk level. Many applications of AI in the healthcare sector are expected to fall into the high-risk category and will have to comply with the requirements for this level of risk.

According to the AI Act, high-risk systems are those “intended to be used as a safety component of a product, or [that are themselves] a product, covered by the Union harmonisation legislation, listed in Annex II” and are “required to undergo a third-party conformity assessment”.

Annex II lists Medical Devices and In-vitro Diagnostic Medical Devices (IVD), meaning that AI that satisfies the definition of a medical device and is subject to third-party conformity assessment under the Medical Device Regulation (MDR) will be considered high-risk.

The MDR defines Medical Devices as “any (…) software (…) intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes:

  • Diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease;
  • Diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability;
  • Investigation, replacement or modification of the anatomy or of a physiological or pathological process or state;
  • Providing information by means of in vitro examination of specimens derived from the human body, including organ, blood and tissue donations. (…)”

The MDR also uses a risk-based approach to classify medical devices into categories and determine conformity assessment requirements. The categories are (simplified):

  • Class I Devices: Low-risk, non-invasive devices or appliances, used for basic functions and posing minimal danger to patients. These include sterile devices and measuring devices.
  • Class IIa Devices: Low- to medium-risk devices, implanted/applied within the body for 60 minutes to 30 days.
  • Class IIb Devices: Medium- to high-risk devices, implanted/applied within the body for periods of 30 days or longer.
  • Class III Devices: High-risk, invasive devices, usually inserted into the nervous system or central circulatory system and used to support vital functions or in critical procedures.

Class I devices do not require third-party conformity assessment and would, therefore, not be considered high-risk under the EU AI Act. All the others require it, making them high-risk AI systems. Additionally, most IVD medical devices also require conformity assessment and would be considered high-risk as well.
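To make this mapping concrete, here is a minimal sketch of the classification logic described above, in Python. The function name and rule set are illustrative simplifications of this post's summary, not a legal decision tool:

```python
# Simplified sketch of the MDR-class-to-AI-Act mapping described above.
# Illustrative only: real classification depends on the full MDR rules.

# Per the simplified rule in this post: classes IIa, IIb and III require
# third-party conformity assessment, which makes them high-risk AI systems.
CLASSES_REQUIRING_THIRD_PARTY_ASSESSMENT = {"IIa", "IIb", "III"}

def is_high_risk_under_ai_act(mdr_class: str) -> bool:
    """True if an AI-based medical device of this MDR class would be
    high-risk under the EU AI Act, per the simplified rule above."""
    return mdr_class in CLASSES_REQUIRING_THIRD_PARTY_ASSESSMENT

for cls in ("I", "IIa", "IIb", "III"):
    label = "high-risk" if is_high_risk_under_ai_act(cls) else "not high-risk"
    print(f"Class {cls}: {label}")
```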

In addition to Medical Devices, specific health-related use cases in Annex III of the EU AI Act are also considered high-risk. These are biometric categorization, AI used to determine eligibility for healthcare, and AI used for emergency healthcare patient triage or dispatch systems.

In summary, AI used for biometric categorization, diagnosis, decisions on eligibility for healthcare, triage, emergency dispatch, therapy, monitoring of physiological processes, or contraception is classified as high-risk under the EU AI Act. Other healthcare-related AI systems are generally considered limited risk.

Depending on risk level, healthcare AI will be expected to comply with certain requirements under the AI Act.

Limited-risk systems will have to comply with transparency obligations and copyright law. Providers and deployers will have to ensure that AI-generated content is labelled as such, and that people are aware when they are interacting with or exposed to AI.

High-risk systems will have to comply with multiple requirements around risk management, data governance, monitoring and record-keeping, transparency, and human oversight. Providers will have to:

  • Establish, implement, and maintain a risk management system
  • Establish and implement a quality management system in their organisation
  • Draw up and keep up to date the technical documentation
  • Keep the logs automatically generated by the AI system, where these are under their control (see the logging sketch after this list)
  • Undergo conformity assessment and potentially re-assessment of the system (in case of significant modifications) prior to its placing on the market or putting into service
  • Conduct post-market monitoring
  • Take corrective action if the system does not comply with requirements and inform the relevant national competent authorities/notified bodies of the non-compliance and corrective action taken
  • Upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements
  • Register the AI system in the EU database
  • Affix the CE marking and sign a declaration of conformity
  • Collaborate with market surveillance authorities
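Two of these obligations, keeping automatically generated logs and supporting post-market monitoring, lend themselves to a simple illustration. Below is a minimal, hypothetical sketch of structured decision logging in Python; the record fields are illustrative assumptions, not fields mandated by the Act:

```python
# Minimal sketch of automatic decision logging for record-keeping and
# traceability. Field names are illustrative, not mandated by the Act.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_decisions.log", level=logging.INFO,
                    format="%(message)s")

def log_decision(model_version: str, input_summary: dict,
                 output: dict, operator_id: str) -> None:
    """Append one structured record per automated decision, so decisions
    can later be traced during audits or post-market monitoring."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,   # avoid raw patient data in logs
        "output": output,
        "operator_id": operator_id,       # supports human-oversight review
    }
    logging.info(json.dumps(record))

# Example usage (hypothetical triage model):
log_decision("triage-model-1.4.2",
             {"age_band": "60-70", "symptoms_hash": "a1b2c3"},
             {"prediction": "urgent", "score": 0.91},
             operator_id="nurse-042")
```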

Deployers (users) will be required to:

  • Use high-risk AI systems in accordance with the instructions of use
  • Monitor the functioning of the AI systems
  • Report any malfunctions to providers
  • Carry out a data protection impact assessment where applicable.

Explainable AI (XAI): Not Mandatory but Beneficial

One critical aspect emphasized by the EU AI Act is the importance of transparency and human oversight, particularly in high-risk AI systems. While the Act does not mandate Explainable AI (XAI), it can help achieve trust and transparency and, therefore, comply with the obligations defined by the Act.

The field of XAI emerged in response to one of AI’s most critical challenges: understanding how AI models arrive at their decisions. Often, machine learning algorithms operate as black boxes, deriving their power from complex decision-making processes built upon extensive iterations over training data. This makes it difficult for humans to comprehend their inner workings and raises concerns about the fairness, ethics, and accountability of the decisions.

Explainability is therefore central to ensuring that AI decisions are justified and ethical as AI systems become increasingly integrated into various aspects of society. Indeed, in domains such as healthcare, where AI decisions directly impact human lives, ensuring that AI systems can provide clear, understandable explanations for their decisions is particularly important for patient safety.

XAI employs techniques and methods that allow each decision made by a machine learning algorithm to be traced and explained, based on three main principles (a minimal code example follows the list):

  • Prediction accuracy: the accuracy of the predictions is a key measure of AI's effectiveness in everyday operations. It is determined by running simulations and comparing the XAI output to the results obtained on the training data.
  • Traceability: tracing the decisions of the AI system, the data it used, and the connections made during the decision-making process is important for transparency.
  • Understandability: it is imperative that people understand the methodology and behavior of the AI system, and how and why it is being used.

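As a small illustration of traceability and understandability in practice, the sketch below applies permutation feature importance, one simple, model-agnostic explanation technique, using scikit-learn on synthetic data. The feature names are invented for illustration; this is not a clinical model:

```python
# A minimal XAI sketch: permutation feature importance reveals which
# inputs a trained model actually relies on. Synthetic data only;
# the "clinical" feature names are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical risk dataset.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=42)
feature_names = ["age", "blood_pressure", "bmi", "glucose", "smoker"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Shuffle each feature and measure how much accuracy drops: a large
# drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=42)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Global importance scores like these are only one piece of an explanation; local, per-decision methods serve the same traceability goal for individual predictions.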
XAI provides a framework for more transparent and trustworthy AI, allowing for more accountability and fairness in the use of these systems. While not mandatory under the EU AI Act, using XAI techniques when developing AI systems intended to be used in healthcare could be a way to ensure safety and transparency, as required by the Act.

The EU AI Act presents both opportunities and hurdles for the future of AI in healthcare. While further clarifications may be necessary to align it with existing regulations and to ensure regulatory entities are capable of carrying out the responsibilities assigned to them by this new legislation, the Act underscores the importance of prioritizing patient safety, fairness, and ethical standards in AI-driven healthcare solutions. Developers, as well as the healthcare professionals and institutions who use these solutions, should always be mindful of their potential risks and seek to employ methodologies and practices that mitigate them. The goal is to make the most of AI's potential for improving patient outcomes while maintaining transparency, integrity, and trust in its deployment.

Resources:

Interpreting the EU Artificial Intelligence Act for the Health Sector – Paper detailing how the EU AI Act applies to the Health Sector, including examples of how different AI applications will be classified.

Artificial intelligence in healthcare: Applications, risks, and ethical and societal impacts – EU Study on the use of AI in healthcare and potential risks and social and ethical impacts

The role of explainable AI in the context of the AI Act – An analysis of the use for XAI in the context of the EU AI Act, its requirements, challenges, and opportunities