Good AI Practice: EMA and FDA Establish 10 Global Principles

Global regulators are setting a unified standard for the digital transformation of the pharmaceutical industry. The European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) have released a joint document defining 10 key guiding principles for Good AI Practice in drug development.

Context: From Molecule to Patient

Artificial Intelligence is no longer a futuristic concept: its use throughout the drug product life cycle has increased significantly in recent years. In the new document, dated January 2026, regulators describe AI as system-level technologies used to generate or analyze evidence across all phases, from nonclinical and clinical studies to manufacturing and post-marketing surveillance.

The implementation of these technologies aims to:

  • Promote innovation and reduce time-to-market.
  • Decrease reliance on animal testing by improving the prediction of toxicity and efficacy.
  • Strengthen regulatory excellence and pharmacovigilance.

10 Principles of Good AI Practice

The document outlines a common set of principles intended to lay the foundation for developing good practice that addresses the unique nature of these technologies. Regulators have identified 10 fundamental principles that developers must adhere to:

1. Human-centric by design

The development and use of AI technologies must align with ethical and human-centric values.

2. Risk-based approach

Validation, risk mitigation, and oversight should be proportionate to the context of use and the determined model risk.

3. Adherence to standards (GxP)

AI technologies must adhere to relevant legal, ethical, technical, and scientific standards, including Good Practices (GxP) and cybersecurity requirements.

4. Clear context of use

Every technology must have a well-defined context of use, clearly specifying its role, its scope, and the purpose for which it is being used.

5. Multidisciplinary expertise

Expertise covering both the AI technology and its context of use must be integrated throughout the technology’s life cycle.

6. Data governance and documentation

Data provenance, processing steps, and analytical decisions must be documented in a traceable and verifiable manner. Privacy and protection for sensitive data must be maintained.

7. Model design and development

Development should follow best practices in software engineering. Data must be “fit-for-use,” and models should prioritize interpretability, explainability, and transparency.

8. Risk-based performance assessment

Assessments must evaluate the complete system, including human-AI interactions, using metrics appropriate for the intended context of use.

9. Life cycle management

Scheduled monitoring and periodic re-evaluation are required to ensure adequate performance and address issues such as data drift; a brief illustrative drift check follows the list of principles.

10. Clear, essential information

Information regarding performance, limitations, and underlying data must be presented in plain language that is accessible to the intended audience, including patients.
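
As an illustration of the kind of drift monitoring mentioned in principle 9, the sketch below compares the distribution of a single model input at deployment against its training-time reference using a two-sample Kolmogorov-Smirnov test. This is a minimal, hypothetical example: the data, the choice of test, and the significance threshold are assumptions made for illustration and are not prescribed by the EMA/FDA document.

    # Minimal, illustrative data-drift check (not taken from the EMA/FDA document).
    # The reference/current data, the KS-test choice, and the alpha threshold are
    # hypothetical assumptions made purely for this example.
    import numpy as np
    from scipy.stats import ks_2samp

    def check_feature_drift(reference: np.ndarray, current: np.ndarray,
                            alpha: float = 0.01) -> dict:
        """Compare a deployed feature distribution against its training reference."""
        # A small p-value suggests the deployed data no longer match the
        # distribution the model was originally trained and validated on.
        statistic, p_value = ks_2samp(reference, current)
        return {
            "ks_statistic": float(statistic),
            "p_value": float(p_value),
            "drift_flagged": p_value < alpha,  # flag for human review, not automatic action
        }

    if __name__ == "__main__":
        rng = np.random.default_rng(seed=0)
        reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time values
        current = rng.normal(loc=0.4, scale=1.0, size=5_000)    # shifted deployment values
        print(check_feature_drift(reference, current))

In practice, a flagged result would trigger the kind of periodic re-evaluation the principle describes, with thresholds and review procedures set in proportion to the model's risk and context of use.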

Industry Impact

Regulators emphasize that as the use of AI in drug development evolves, so too must good practice and consensus standards. This initial collaborative work identifies areas for future international harmonization, research, and the creation of educational tools. The ultimate goal is to ensure that new technologies reinforce the requirements for demonstrated quality, efficacy, and safety.


Source: EMA/FDA: Guiding principles of good AI practice in drug development (January 2026)
