The AI Act and Its Impact on Clinical Trials in Europe
Artificial Intelligence is rapidly transforming clinical trials, offering opportunities to improve patient recruitment, trial efficiency, and data analysis. The European Union has taken a significant step to regulate this technology by introducing the AI Act, the world’s first comprehensive legal framework for AI. We are now one year into its phased implementation, and sponsors, CROs, and other stakeholders must carefully navigate its implications for ongoing and future clinical trials.
Purpose of the AI Act
The AI Act aims to ensure that AI systems placed on the EU market are safe, transparent, traceable, and non-discriminatory. It establishes a risk-based framework, classifying AI systems into unacceptable, high-risk, limited-risk, and minimal-risk categories. Clinical research applications will typically fall under the high-risk category, requiring robust oversight, validation, and documentation.
Timeline
The AI Act is being implemented in phases, with obligations entering into force gradually. Two important milestones still lie ahead, in 2026 and 2027, when the Act becomes fully applicable and the high-risk AI requirements come into effect:
- August 2, 2026: The AI Act will become fully applicable, marking two years since its official entry into force.
- August 2, 2027: The requirements for high-risk AI systems integrated into regulated products (as outlined in Annex I of the Act) will come into force.
It is worth noting that MedTech Europe has requested extensions to the implementation timelines, citing a lack of harmonized standards, insufficient capacity, and the risk of bottlenecks in compliance.
Looking for more information? Check the European Commission's Q&A: https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683
AI Act and EMA Guidance
Compliance with EMA guidance remains mandatory for sponsors, even under the EU AI Act. Sponsors and CROs must therefore align AI Act compliance with EMA expectations, ensuring that AI systems meet both the Act's legal requirements and EMA's scientific standards.
To date, EMA has published a reflection paper on the use of artificial intelligence throughout the medicinal product lifecycle. It outlines principles for the safe and ethical application of AI in drug development and clinical trials, including recommendations on algorithm validation, data quality, risk management, and monitoring of patient safety.
https://www.ema.europa.eu/en/news/reflection-paper-use-artificial-intelligence-lifecycle-medicines
Other key EMA documents that complement the AI Act include:
- Guideline on Computerised Systems and Electronic Data in Clinical Trials (2023) – provides requirements on system validation, data integrity, and security.
https://www.ema.europa.eu/en/documents/regulatory-procedural-guideline/guideline-computerised-systems-and-electronic-data-clinical-trials_en.pdf
- Guiding Principles for the Use of Large Language Models (LLMs) in Medicines Regulation (2024) – ensures responsible application of advanced AI models in regulatory processes.
https://www.ema.europa.eu/en/news/harnessing-ai-medicines-regulation-use-large-language-models-llms
- Network Data Steering Group Workplan 2025-2028 – sets out EMA’s priorities in digital transformation, data interoperability, and AI in regulatory science.
Impact on Clinical Trials
For clinical research, the AI Act introduces significant compliance responsibilities. A key aspect is the risk classification of AI systems. Systems deemed “high risk” are subject to the strictest set of requirements under the AI Act. In the context of clinical trials, AI tools are likely to fall into this high-risk category if they are involved in patient recruitment, trial design optimization, data management, or decision-making, or if they are embedded in medical devices (Recital 50 of the EU AI Act). This classification means that sponsors and CROs must implement robust validation, documentation, monitoring, and human oversight measures to comply with the high-risk requirements.
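To make this triage step more tangible, the high-risk triggers listed above can be sketched as a simple check. This is an illustrative simplification only, not a legal determination; the AITool structure, the use-case labels, and the is_high_risk helper are our own hypothetical choices:

```python
# Hypothetical sketch: flag a clinical-trial AI tool as high-risk if any of
# its declared uses matches one of the triggers described in the article.
from dataclasses import dataclass

HIGH_RISK_USES = {
    "patient_recruitment",
    "trial_design_optimization",
    "data_management",
    "clinical_decision_making",
    "embedded_medical_device",
}

@dataclass
class AITool:
    name: str
    uses: set[str]  # use cases declared for the tool

def is_high_risk(tool: AITool) -> bool:
    """Any overlap with a high-risk trigger places the tool in the high-risk tier."""
    return bool(tool.uses & HIGH_RISK_USES)

# Example: an eligibility pre-screening assistant used in recruitment
screener = AITool("eligibility-screener", {"patient_recruitment"})
print(is_high_risk(screener))  # True -> full high-risk obligations apply
```

In practice, of course, classification requires a documented legal and regulatory assessment of each system's intended purpose, not a keyword match.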
Sponsors and CROs now face new responsibilities, including the following (a short illustrative sketch of two of these obligations follows the list):
- Risk management – identifying and mitigating potential hazards associated with AI use.
- Data governance – ensuring training datasets are high-quality, representative, and free from bias.
- Transparency and explainability – making AI decision-making processes understandable to healthcare professionals and patients.
- Human oversight – ensuring AI-supported decisions, particularly those affecting patient care, are reviewed by qualified professionals.
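As a minimal, hypothetical sketch of what two of these obligations could look like in practice, the example below pairs a data-governance check that flags under-represented demographic strata in a training dataset with a human-oversight gate that blocks an AI recommendation from taking effect until a qualified reviewer approves it. The strata, threshold, and function names are assumptions for illustration, not values prescribed by the Act:

```python
# Hypothetical sketch of two AI Act obligations:
# (1) data governance: verify the training data covers required strata;
# (2) human oversight: an AI output is only a proposal until sign-off.
from collections import Counter

REQUIRED_STRATA = {"18-40", "41-65", "65+"}  # assumed age bands of interest
MIN_SHARE = 0.10                             # assumed minimum share per stratum

def check_representativeness(age_bands: list[str]) -> list[str]:
    """Return strata that are missing or under-represented in the data."""
    counts = Counter(age_bands)
    total = len(age_bands) or 1  # avoid division by zero on empty input
    return sorted(s for s in REQUIRED_STRATA if counts[s] / total < MIN_SHARE)

def apply_recommendation(ai_output: dict, reviewer_approved: bool) -> dict:
    """Human-oversight gate: the recommendation takes effect only after review."""
    if not reviewer_approved:
        raise PermissionError("AI recommendation requires qualified human review")
    return {**ai_output, "status": "approved"}

# Usage: flag demographic gaps before model training
print(check_representativeness(["18-40"] * 50 + ["41-65"] * 45 + ["65+"] * 5))
# -> ['65+']: this stratum is under-represented and must be addressed
```

In a real trial setting, such checks would sit inside a validated quality system and feed the technical documentation the Act requires for high-risk systems.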
Failure to meet these requirements could result in penalties of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. For a sponsor with €1 billion in annual turnover, for example, the ceiling would be €70 million.
Challenges and Opportunities
Implementing the AI Act presents challenges:
- Regulatory complexity: aligning AI Act requirements with existing regulations like MDR, IVDR, GDPR, and EMA guidance.
- Cost implications: updating AI systems and processes may require additional investment.
- Staff training: ensuring personnel understand new regulations and procedures.
Yet, the AI Act also opens opportunities: building trust with patients, improving trial efficiency, and enabling safer, more effective use of AI in clinical research.
Preparing for the Future
The AI Act is a milestone in the responsible use of AI in clinical trials. Sponsors, CROs, and research institutions must strengthen data governance, transparency, and oversight to ensure compliance and patient safety. Early adaptation, guided by EMA recommendations, will not only ensure compliance but also maximize the benefits of AI for faster, smarter, and safer clinical research.
At Clinmark, we bring together deep clinical trial expertise with a keen awareness of emerging AI regulations. Guided by our core values of Fast & Flexible, Diligent Experts, and Smart Global, we support sponsors in navigating trial design, data management, and regulatory alignment. Let’s explore how we can support your clinical trials. Contact us to learn more.