AI in Pharmacovigilance: Insights from the latest CIOMS Report
Artificial intelligence has reached a turning point in pharmacovigilance. What was once explored mainly through pilots and proofs of concept is now being used in real safety operations. The latest report from the Council for International Organizations of Medical Sciences (CIOMS), the 2025 CIOMS Working Group XIV report, reflects this shift, offering a shared, internationally aligned perspective on how AI should be applied responsibly in drug-safety activities.
Rather than focusing on specific technologies, the report sets out principles designed to remain relevant as tools and regulations evolve. Its aim is to help regulators, life sciences organizations, technology providers, and pharmacovigilance professionals adopt AI in ways that improve efficiency without compromising patient safety, data protection, or trust.
Below, we explore what’s driving AI adoption and how the CIOMS report translates into everyday practice.
What’s driving the shift toward AI in pharmacovigilance
Pharmacovigilance has become more demanding with each passing year. Case volumes continue to rise, while safety teams are expected to work across an expanding range of data sources, including spontaneous reports, clinical trials, scientific literature, real-world evidence, and digital channels.
At the same time, many core activities still rely heavily on manual review. As workloads increase, maintaining speed, consistency, and timely oversight becomes more difficult.
AI is being adopted to help teams manage this growing complexity.
Today, AI-supported tools are already used for:
- Case intake and processing, including data extraction, medical coding, and translation
- Identifying duplicate cases across large safety databases
- Supporting signal detection through automated screening and prioritization (illustrated in the sketch below)
- Triage, helping urgent cases reach specialist reviewers faster
- Search and summarization, including early uses of large language models
While some capabilities are still evolving, many are now embedded in everyday pharmacovigilance workflows.
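To make the signal-detection item above concrete: one long-established statistical screen is the proportional reporting ratio (PRR), which compares how often an event is reported with a drug of interest versus with all other drugs. The report does not prescribe any particular method, so the following is only an illustrative sketch; the drug, counts, and thresholds are invented.

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio for one drug-event pair.

    a: reports with the drug AND the event
    b: reports with the drug, without the event
    c: reports with other drugs, with the event
    d: reports with other drugs, without the event
    """
    if a == 0 or c == 0:
        return 0.0  # too little information to score
    return (a / (a + b)) / (c / (c + d))

# Invented counts: 12 of 400 reports for DrugX mention rash;
# 90 of 30,000 reports for all other drugs mention rash.
a, b = 12, 400 - 12
c, d = 90, 30_000 - 90

score = prr(a, b, c, d)
# One widely cited screening rule flags pairs with PRR >= 2 and at
# least 3 co-reports; flagged pairs are queued for medical review.
flagged = score >= 2 and a >= 3
print(f"PRR = {score:.1f}, flagged for review: {flagged}")  # PRR = 10.0
```

Pairs flagged this way are prioritized for human assessment, not treated as confirmed signals.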
A principles-based framework for responsible AI
CIOMS deliberately avoids prescribing detailed technical requirements. Instead, it outlines a set of guiding principles intended to remain applicable as AI technologies, regulations, and expectations continue to change.
These principles provide a practical foundation for scaling AI use while maintaining appropriate oversight and accountability.
Seven principles shaping responsible AI use in drug safety
- Apply a risk-based approach: Not all AI systems pose the same level of risk. Oversight should reflect how much influence a system has on decisions and the potential impact of errors.
- Preserve human oversight: Human accountability remains central. The report distinguishes between systems where humans make final decisions and those where automation plays a larger role but is continuously monitored. (A minimal routing sketch follows this list.)
- Ensure validity and robustness: AI must perform reliably in real-world settings, supported by appropriate testing, representative data, and ongoing monitoring.
- Support transparency: Stakeholders should understand what a system is designed to do, how it uses data, and where its limitations lie.
- Protect data privacy: Responsible use depends on privacy-by-design practices, strong data governance, and regular impact assessments.
- Address fairness and bias: Datasets should reflect real patient populations, with performance evaluated across relevant subgroups.
- Establish clear governance and accountability: Effective governance includes documentation, version control, traceability, and continuous monitoring across the AI lifecycle.
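Several of these principles tend to surface together in implementation. One common pattern, consistent with the risk-based and human-oversight principles, is confidence-based routing: automation completes only low-risk, high-confidence work, and everything else is escalated to a human reviewer. The sketch below is a minimal illustration with invented thresholds and case fields, not a design taken from the report.

```python
from dataclasses import dataclass

@dataclass
class CaseAssessment:
    case_id: str
    is_serious: bool      # seriousness flag from intake data
    ai_confidence: float  # hypothetical model confidence in [0, 1]

def route(case: CaseAssessment, auto_threshold: float = 0.95) -> str:
    """Risk-based routing: oversight scales with potential impact.

    Serious cases always go to a human reviewer, regardless of how
    confident the model is; non-serious cases are auto-processed only
    at high confidence, and even then remain subject to audit.
    """
    if case.is_serious:
        return "human_review"             # human makes the final decision
    if case.ai_confidence >= auto_threshold:
        return "auto_process_with_audit"  # automated, but monitored
    return "human_review"

print(route(CaseAssessment("PV-001", is_serious=True, ai_confidence=0.99)))   # human_review
print(route(CaseAssessment("PV-002", is_serious=False, ai_confidence=0.97)))  # auto_process_with_audit
```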
How this report influences pharmacovigilance work
The CIOMS report highlights changes that are already shaping how pharmacovigilance operates:
- More continuous and near real-time safety monitoring
- Earlier use of AI during drug development, shifting some focus from post-approval detection to earlier risk identification
- Emerging clinical applications, including AI-supported identification of drug-related conditions
- Hybrid decision-making models where responsibility is shared between humans and AI
- Increasing expectations around transparency, auditability, and data protection
AI is not replacing pharmacovigilance professionals. It is changing how expertise is applied, allowing teams to focus more on interpretation, clinical judgment, and patient impact.
What changes for pharmacovigilance teams in practice
Many teams are already seeing practical shifts in how work is organized.
AI is increasingly used to support high-volume, time-sensitive activities. Routine tasks such as case intake, data extraction, coding, and duplicate checks are more often supported by automation, freeing reviewers to focus on clinical assessment and decision-making.
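Duplicate checking illustrates why such routine tasks suit automation: candidate pairs can be scored on a handful of key fields, with a human confirming any match before records are merged. Below is a minimal sketch using Python's standard-library string matcher; the fields, weighting, and threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def likely_duplicate(case1: dict, case2: dict, threshold: float = 0.85) -> bool:
    """Score two case reports on a few key fields.

    An exact match on the structured field (onset date) is combined
    with fuzzy matches on free-text fields; pairs above the threshold
    are queued for human confirmation, never auto-merged.
    """
    score = (
        (case1["onset_date"] == case2["onset_date"])
        + similarity(case1["drug"], case2["drug"])
        + similarity(case1["event"], case2["event"])
    ) / 3
    return score >= threshold

c1 = {"drug": "Amoxicillin", "event": "skin rash", "onset_date": "2024-03-01"}
c2 = {"drug": "Amoxicilline", "event": "skin rashes", "onset_date": "2024-03-01"}
print(likely_duplicate(c1, c2))  # True
```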
Expectations around oversight are also becoming clearer. Teams are expected to understand how AI tools are used within workflows, where human review is required, and how performance is monitored over time. This places greater emphasis on documentation, validation, and traceability as part of standard operations.
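Monitoring performance over time can start simply: periodically compare the tool's outputs against a human-adjudicated sample and alert when agreement drifts below an agreed level. A minimal sketch, assuming a hypothetical monthly review sample; the metric and threshold are illustrative, not requirements from the report.

```python
def agreement_alert(ai_labels: list[str], human_labels: list[str],
                    alert_below: float = 0.90) -> bool:
    """Compare AI output with human adjudication on a review sample.

    Returns True when agreement falls below the alert threshold,
    signalling possible drift in the model or its input data that
    should be investigated before further automated use.
    """
    assert len(ai_labels) == len(human_labels) and ai_labels
    matches = sum(a == h for a, h in zip(ai_labels, human_labels))
    agreement = matches / len(ai_labels)
    print(f"agreement on sample: {agreement:.0%}")
    return agreement < alert_below

# Invented monthly sample of seriousness classifications.
ai    = ["serious", "non-serious", "serious",     "serious", "non-serious"]
human = ["serious", "non-serious", "non-serious", "serious", "non-serious"]
if agreement_alert(ai, human):  # 80% agreement -> below 90% threshold
    print("ALERT: agreement below threshold; escalate for investigation")
```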
Other practical changes include:
- Closer collaboration between pharmacovigilance, IT, quality, and data governance teams
- More clearly defined roles for AI monitoring, escalation, and exception handling
- Increased focus on data quality, representativeness, and bias awareness (see the sketch after this list)
- Greater scrutiny of how tools handle sensitive patient information
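Bias awareness, flagged in the list above, often starts with breaking a single headline accuracy figure down by patient subgroup, so that a model underperforming for one population is caught early. A minimal sketch, assuming labelled evaluation data tagged with a subgroup attribute; the age bands and records are invented.

```python
from collections import defaultdict

def accuracy_by_subgroup(records: list[dict]) -> dict[str, float]:
    """Break overall accuracy down by a subgroup attribute.

    Each record holds the model's prediction, the human-confirmed
    label, and a subgroup key (here, an age band). Large gaps between
    subgroups are a prompt for investigation, not a verdict.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["age_band"]] += 1
        correct[r["age_band"]] += r["prediction"] == r["label"]
    return {group: correct[group] / total[group] for group in total}

# Invented evaluation records.
records = [
    {"age_band": "18-64", "prediction": "serious", "label": "serious"},
    {"age_band": "18-64", "prediction": "non-serious", "label": "non-serious"},
    {"age_band": "65+", "prediction": "non-serious", "label": "serious"},
    {"age_band": "65+", "prediction": "serious", "label": "serious"},
]
for group, acc in accuracy_by_subgroup(records).items():
    print(f"{group}: {acc:.0%}")  # 18-64: 100%, 65+: 50%
```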
Rather than transforming pharmacovigilance overnight, AI is reshaping how work is organized and governed. Teams that adapt successfully treat AI as part of the operating model, embedding oversight and accountability into everyday processes.