Regulation (EU) 2024/1689, the so-called "AI Act" (AIA), and the MDR/IVDR

Software (including AI) for medical purposes is regulated in Europe and the United Kingdom as a medical device. It requires comprehensive assessment before being placed on the EU and UK markets under the EU Medical Devices Regulation (MDR) and the EU In Vitro Diagnostic Medical Devices Regulation (IVDR).

The European Union (EU) has introduced new legislation on Artificial Intelligence (AI): the “AI Act” (Regulation (EU) 2024/1689), because the current framework does not fully address the ethical and transparency risks associated with AI.

Like the General Data Protection Regulation (Regulation (EU) 2016/679, GDPR), the AI Act applies to providers wherever they are in the world if they place or put into service an AI system in the European Union (EU).

The AI Act is one piece of new AI-related legislation and must be read in the context of changes proposed on product liability and AI liability.

The AI Act has profound implications for the Medical Devices and In-vitro Diagnostic Medical Devices Regulations (MDR/IVDR). The AI Act categorizes AI used in medical devices and in-vitro diagnostic devices as high-risk, which includes AI used for

  • diagnosing,
  • physiological monitoring, and
  • guiding treatment choices.

 

The Impact on Deployers* of High-Risk AI Systems

Unlike the MDR/IVDR, which places responsibilities on economic operators in the supply chain, the AI Act also puts responsibilities onto the deployers of AI systems. These deployers will have new obligations, including:

  • Taking appropriate technical and organizational measures to ensure that AI systems are used in accordance with their instructions for use (IFU)
  • Assigning human oversight to competent, trained people
  • Monitoring and surveillance
  • Maintaining system logs when these are under their control
  • Undertaking, where applicable, data protection impact assessments.

 

The Impact of the AI Act on Manufacturers of Medical Devices:

  • Medical devices are "high risk" if placed in the Class IIa category (or higher) under the Medical Devices Regulation (MDR), or if placed as Class B (or higher) under the IVDR.
  • Additional requirements for manufacturers of AI systems/products within the MDR/IVDR scope (high-risk devices/products) include:
    • Governance and data management requirements for training and testing data sets,
    • New record-keeping requirements, including the automatic recording of events (logs) over the system’s lifetime,
    • Transparent design requirements so deployers can interpret the output and use it appropriately,
    • Human oversight design requirements, and
    • Accuracy and cybersecurity requirements.
  • More specifically, the AI Act specifies that manufacturers of high-risk devices/products are required:
    • To establish a risk management system throughout the product's lifecycle,
    • To implement data governance covering
      • data collection, including
        • data origin,
        • data quality,
        • data suitability,
        • data availability,
        • bias and control measures,
        • identification of data gaps, and
        • assessment of identified data gaps in patient populations,
      • preprocessing details such as
        • data labeling,
        • data cleaning, and
        • aggregation of data,
      • the design processes: the data set must be relevant and appropriate, with statistical properties that justify
        • the intended purpose,
        • the patient population, and
        • the characteristics of the AI device/system,
      • and attestation that the data is free from errors,
    • To provide technical documentation attesting that their products comply with the AI Act,
    • To establish a quality management system (QMS) to ensure compliance, including
      • Transparency of design/technical specifications, including applicable Regulations, harmonized standards, and common specifications (CS),
      • Transparency of the AI operation to enable deployers to interpret an AI system’s output,
      • PMS and vigilance,
      • Instructions for use (IFU) that provide "concise, complete, correct and clear" information to deployers,
    • To establish data management, including
      • the general logic and algorithms of the AI system,
      • the training methodologies,
      • implemented technologies,
      • training data sets and a general statistical description of them,
      • the human oversight measures,
      • the validation and testing procedures, including
        • the results, disclosed as proof that the AI system meets its technical specifications, and
        • a description of the validation and testing data and its main characteristics, and
      • controller logs.

 

 PMS and Vigilance

  • PMS and vigilance are elements of both the MDR/IVDR and the AI Act. The AI Act builds on the foundations of the MDR/IVDR but adds aspects specific to AI systems.
  • A template for the post-market monitoring plan ("PMS plan") and its elements will be published by the European Commission.
  • For devices within the scope of the MDR/IVDR, the AI Act provides an exemption for the vigilance process: serious incidents continue to be reported through the existing MDR/IVDR vigilance route.
  • An additional reporting requirement for AI providers is communicating with competent authorities about risks to fundamental rights.

 

 Does the AI Act Apply to my Medical Device?

  • The first consideration is whether the AI Act, based on a risk-based approach, applies to a medical device product.
  • To find out, manufacturers need to:
    • Read the AI Act's recital 12 to understand how AI systems differ from traditional software systems.
    • Understand that medical devices/IVDs are considered high-risk systems, whether AI is integrated into a hardware medical device or stand-alone.
    • More specifically, Article 6 of the AI Act introduces a two-step process to determine whether a medical device is a high-risk AI system:
      • Under Article 6.1, if the AI system is a safety component of a product, or is itself a product, covered by one of the pieces of legislation listed in Annex I (including the MDR and IVDR), and that product undergoes a conformity assessment (i.e., all products except class I devices under the MDR or IVDR),

then it is considered a high-risk AI system under the AIA.

      • Also, in the health area, it is necessary to read Article 6.2 and consider whether the product falls into Annex III of the AIA – a long list of systems, including private and public healthcare systems – and so would be considered a high-risk AI system.
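The two-step logic above can be sketched as a simple decision function. This is a hypothetical illustration only, not legal advice: the function name, parameters, and the simplified conditions are assumptions introduced here, and an actual assessment depends on the full text of the AI Act and the MDR/IVDR.

```python
# Hypothetical sketch of the Article 6 two-step classification logic.
# All names and simplifications here are illustrative, not normative.

def is_high_risk_ai_system(
    is_ai_system: bool,
    covered_by_annex_i_legislation: bool,   # e.g., MDR or IVDR
    requires_conformity_assessment: bool,   # e.g., Class IIa or higher (MDR)
    listed_in_annex_iii: bool,              # e.g., certain healthcare systems
) -> bool:
    """Return True if the system would be treated as high-risk."""
    if not is_ai_system:
        return False
    # Step 1 (Article 6.1): the AI system is a safety component of, or is
    # itself, a product under Annex I legislation undergoing a conformity
    # assessment.
    if covered_by_annex_i_legislation and requires_conformity_assessment:
        return True
    # Step 2 (Article 6.2): the use case appears in Annex III.
    return listed_in_annex_iii

# Example: an AI-based diagnostic device, Class IIa under the MDR.
print(is_high_risk_ai_system(True, True, True, False))  # True
```

In practice the real determination involves many more conditions and exceptions, but the sketch captures the basic branching a manufacturer works through.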

 

 Combined and Single CE Marking

  • Given the overlap, manufacturers could operate an integrated system for both regulations, with a single declaration of conformity and CE marking to demonstrate compliance with
    • AI Act, and
    • MDR or IVDR
  • When the AI Act talks about providers, there is considerable overlap with what manufacturers mean under the MDR/IVDR.
  • If the AI product falls within the scope of the MDR/IVDR:
    • A combined CE marking and declaration of conformity is sufficient to demonstrate conformity to MDR/IVDR and the AIA.
    • Integration of corresponding processes is possible, including a consistent and integrated QMS structure on top of the existing measures of the MDR/IVDR.

 

 Record Keeping

  • The AI Act calls for automatically recording events, which would operate like a "black box". In other words, the device would keep an internal record of how the AI works. This should enable appropriate performance evaluation and be linked to PMS and vigilance data.
  • Human oversight, a risk-minimization measure requiring a risk assessment, must be a design feature, with human-machine interface tools and a means for humans to interrupt the process, such as a "big red stop button".
  • Evidence of accuracy, robustness, and cybersecurity must be maintained throughout the lifecycle and disclosed in the technical documentation. Warnings in the IFU are not sufficient. Robustness against mistakes, errors, or inconsistencies must be integrated into the design.
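To make the automatic-recording idea concrete, here is a minimal "black box" style event log sketch. It is an assumption-laden illustration: the class name, field names, and JSON format are invented for this example; the AI Act prescribes that events be recorded over the system's lifetime, not any particular log schema.

```python
# Minimal, hypothetical sketch of automatic event recording for an AI device.
# Field names and the JSON format are illustrative, not prescribed by the AI Act.
import json
from datetime import datetime, timezone


class EventLog:
    """Append-only in-memory log of system events with UTC timestamps."""

    def __init__(self):
        self._events = []

    def record(self, event_type: str, details: dict) -> None:
        # Each entry is timestamped automatically so the log can support
        # later performance evaluation, PMS, and vigilance processes.
        self._events.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,
            "details": details,
        })

    def export(self) -> str:
        # Serialize the full log, e.g., for the technical documentation.
        return json.dumps(self._events, indent=2)


log = EventLog()
log.record("inference", {"model_version": "1.2.0", "confidence": 0.91})
log.record("human_override", {"operator": "clinician_42"})
print(log.export())
```

A production implementation would additionally need tamper-evident, persistent storage and retention controls, since the logs must remain available over the system's lifetime and, where applicable, under the deployer's control.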

 

Compliance with other Regulations 

Contact our office for more information.

 

Definition Artificial Intelligence (AI) System

Regulation (EU) 2024/1689 (“AI Act”) defines an Artificial Intelligence system as

"A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". (Article 3(1))

 

Definition Provider

"A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge" (Article 3(3))

 

*Definition Deployer

"A natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity" (Article 3(4))

Whitepaper