Purpose and Background
This policy defines the governance, validation, and ethical operation of Artificial Intelligence (AI) systems within GxPVigilance.
It ensures all AI use supports patient safety, compliance, and transparency in accordance with ISO 9001, ISO/IEC 42001, the Privacy Act 1988 (Cth), and applicable international frameworks.
GxPVigilance uses AI to enhance efficiency and regulatory readiness—not to replace human oversight.
All AI activities are conducted in accordance with the principles of Clarity, Control, Confidence, and Integrity.
Authority and Applicability
Authority
This policy is issued under the GxPVigilance Quality and AI Management Systems and references:
- Privacy Act 1988 (Cth) and Privacy and Other Legislation Amendment Act 2024
- Australian Privacy Principles (APPs)
- ISO/IEC 42001: Artificial Intelligence Management Systems
- EU AI Act (2025–2026 rollout)
- 21 CFR Part 11 / EU Annex 11 (as applicable)
Applicability
Applies to all GxPVigilance employees, contractors, consultants, and AI systems, including:
- Closed agents (internal, secure environment)
- Open agents (public-data only)
- Client-specific AI instances deployed under consent and oversight
Definitions and Terms
- Intended Use (IU): Document describing the AI system’s purpose, scope, and decision role.
- Cat-A/B/C System: Classification by use that determines validation depth: Cat-A (internal support), Cat-B (client-facing support), or Cat-C (regulatory submission).
- Validation: Documented evidence proving AI performance and reliability for its IU.
- Explainability: Ability to interpret model decisions through measurable logic (e.g., SHAP, LIME).
- Data Drift: Statistical deviation between live data and the original training distribution, which can degrade model performance.
- High-Risk AI: System supporting decisions with regulatory or patient-safety impact.
Policy Statement
GxPVigilance commits to using AI safely, ethically, and transparently.
All AI applications:
- Operate under documented Intended Use, validation, and oversight.
- Maintain traceable audit trails and version control.
- Comply with data protection, consent, and bias mitigation requirements.
- Remain explainable, monitored, and human-supervised at all times.
Roles and Responsibilities
| Role | Responsibility |
|---|---|
| Director, Pharmacovigilance & AI Governance | Policy owner; oversees validation, risk, and regulatory alignment. |
| AI Ethics Committee | Approves validation packages, reviews bias/explainability reports, monitors ethical use. |
| IT & Security Lead | Ensures encryption, access controls, and secure vendor configurations. |
| Data Privacy Officer | Manages consent lifecycle and data withdrawal records. |
| Auditors / Project Leads | Validate AI outputs, review logs, ensure operational compliance. |
| All Staff and Contractors | Complete AI competency training and report incidents. |
Procedures and Implementation
AI Selection, Intended Use, and Validation
- Each AI system must include a documented Intended Use Statement describing context, scope, and decision authority.
- Categorise AI systems as Cat-A (internal), Cat-B (client-facing), or Cat-C (regulatory/submission) to determine validation depth.
- Validation follows GAMP 5 lifecycle stages: Concept → Project → Operation → Retirement.
- Define pre-set acceptance criteria and metrics (accuracy, precision, recall, F1-score, bias index, explainability).
- Validation requires independent test data separate from development sets.
- Link validation to IU, risk assessment, and CAPA through traceable documentation.
- No AI system may be released for use without formal sign-off by the Director and AI Ethics Committee (Cat-B/C).
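As an illustration, the pre-set acceptance metrics listed above can be computed from the independent test set and compared against release criteria. This is a minimal sketch in plain Python for a binary classification task; the acceptance thresholds shown are placeholders, not values mandated by this policy:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative acceptance criteria (placeholders; set per Intended Use).
ACCEPTANCE = {"accuracy": 0.90, "recall": 0.85}
metrics = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
passed = all(metrics[k] >= v for k, v in ACCEPTANCE.items())
```

In practice these metrics would be computed by the validated toolchain and recorded in the validation package; the point of the sketch is that acceptance criteria are defined before testing and checked mechanically, not judged after the fact.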
Vendor Qualification (AI Platforms/Services)
Vendors (e.g., Microsoft, model providers) must be assessed for:
- Security, encryption, access control, and breach management.
- Assurance that data inputs are not used for model training unless explicitly approved.
- Compliance with APPs, GDPR, ISO 42001, or equivalent.
- Contractual audit rights, change notifications, and SLAs defining uptime and support.
Consent and Data Governance
AI use involving client data requires documented informed consent prior to deployment.
Consent details must specify:
- Purpose, scope, and retention.
- Withdrawal rights and contact method (Data Privacy Officer).
Consent Withdrawal
- Clients may withdraw consent at any time by written notice.
- AI processing must cease within five business days of acknowledgment.
- Data post-withdrawal must be deleted or pseudonymised unless legally required to retain.
- Each withdrawal is logged and auditable.
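The five-business-day cessation deadline above is mechanically computable, which makes the withdrawal log auditable. A minimal sketch (standard-library Python only; public holidays are deliberately not modelled and would need a jurisdiction-specific calendar):

```python
from datetime import date, timedelta

def processing_cutoff(acknowledged: date, business_days: int = 5) -> date:
    """Date by which AI processing must cease: `business_days` business
    days (Mon-Fri) after acknowledgment of a consent withdrawal.
    Public holidays are not modelled in this sketch."""
    current = acknowledged
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

# Acknowledged Wednesday 2025-01-01 -> cutoff the following Wednesday.
cutoff = processing_cutoff(date(2025, 1, 1))  # 2025-01-08
```

Logging both the acknowledgment date and the computed cutoff in each withdrawal record gives auditors an objective check that processing ceased on time.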
Data Minimisation and Protection
Collect only data essential to the intended use.
Apply encryption at rest and in transit.
Store AI-generated files within SharePoint under QMS document control.
AI Data Governance
Define data quality standards (accuracy, timeliness, completeness).
Maintain training dataset dossiers recording sources, labeling methods, and known limitations.
Ensure test datasets are independent.
Implement data drift monitoring; deviations trigger revalidation.
Conduct bias testing across relevant population or use parameters.
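Drift monitoring needs a quantitative trigger. One common choice (an assumption here, not a requirement of this policy) is the Population Stability Index per feature, with PSI > 0.2 as a rule-of-thumb revalidation trigger to be tuned per system:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between training-era (expected) and live (actual) values of one
    numeric feature. Rule of thumb (tune per system): PSI > 0.2 suggests
    material drift and should trigger revalidation review."""
    lo, hi = min(expected), max(expected)

    def bin_fractions(data):
        counts = [0] * bins
        for x in data:
            idx = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[min(max(idx, 0), bins - 1)] += 1  # clamp out-of-range values
        # Smooth zero counts so the log ratio is always defined.
        return [(c + 1e-6) / (len(data) + bins * 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [x / 10 for x in range(1000)]  # training distribution (illustrative)
stable = population_stability_index(baseline, baseline)              # ~0.0
shifted = population_stability_index(baseline, [x + 50 for x in baseline])
# `shifted` is well above 0.2 -> deviation would trigger revalidation
```

The specific statistic and threshold belong in each system's monitoring plan; what matters for this policy is that the trigger is pre-defined and evaluated automatically.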
Explainability and Interpretability
Implement SHAP or LIME analyses for Cat-B/C systems.
Record confidence scores; low-confidence outputs route to human review.
Maintain reasoning audit trails with input, model version, and key features logged.
AI-generated regulatory documentation must include methodology and model detail appendices.
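The confidence-routing and audit-trail requirements above can be sketched as follows. The threshold, field names, and JSON format are illustrative assumptions; a production system would write to a controlled, immutable log under the QMS:

```python
import json
import datetime

CONFIDENCE_THRESHOLD = 0.80  # assumption: set per Intended Use during validation

def route_output(record_id, model_version, prediction, confidence, key_features):
    """Route an AI output: auto-accept or flag for human review, and emit
    an audit-trail entry capturing input, model version, and key features."""
    decision = "auto" if confidence >= CONFIDENCE_THRESHOLD else "human_review"
    audit_entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "record_id": record_id,
        "model_version": model_version,
        "prediction": prediction,
        "confidence": confidence,
        "key_features": key_features,  # e.g. top SHAP features, if available
        "routing": decision,
    }
    # In production this would append to a controlled, immutable QMS log.
    return decision, json.dumps(audit_entry)

decision, entry = route_output("CASE-001", "v1.2.0", "serious", 0.64,
                               {"term": "anaphylaxis"})
# confidence 0.64 < 0.80 -> routed to human review
```

Recording the routing decision alongside the model version and features is what lets an auditor later reconstruct why a given output was, or was not, human-reviewed.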
Continuous Monitoring, Revalidation, and Change Control
Track KPIs: model accuracy, recall, F1, override rate, and confidence spread.
Monthly internal performance reviews; quarterly AI Ethics reviews for high-risk systems.
Trigger revalidation upon:
- Model updates or parameter changes.
- Significant KPI deviation.
- Input data drift or schema modification.
- Regulatory guidance updates.
All model updates pass through QMS Change Control procedures.
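The revalidation triggers above can be expressed as a single check run at each review. A minimal sketch; the 0.05 absolute KPI tolerance is an illustrative default, and each system's validated baselines and tolerances would come from its own validation package:

```python
def revalidation_required(kpis, baselines, tolerance=0.05,
                          model_changed=False, schema_changed=False):
    """Return (required, reasons). A KPI deviating more than `tolerance`
    (absolute; illustrative default) from its validated baseline, or any
    model/schema change, triggers revalidation via QMS Change Control."""
    reasons = []
    for name, value in kpis.items():
        baseline = baselines.get(name)
        if baseline is not None and abs(value - baseline) > tolerance:
            reasons.append(
                f"KPI deviation: {name} {value:.3f} vs baseline {baseline:.3f}")
    if model_changed:
        reasons.append("model update or parameter change")
    if schema_changed:
        reasons.append("input data drift or schema modification")
    return bool(reasons), reasons
```

Emitting the reasons list, not just a yes/no, gives the Change Control record its justification text for free.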
Electronic Records and Signatures (Conditional)
For projects within FDA/EU GMP scope:
- Systems must meet 21 CFR Part 11 / Annex 11 expectations.
- Apply secure user IDs, two-factor e-signatures, validated audit trails, and retrievable archives.
Risk Management
All AI systems are listed in the AI Risk Register with defined mitigations and performance thresholds.
Revalidation triggers and residual risks are reviewed quarterly.
High-risk (Cat-C) systems require dual approval for release or decommissioning.
Training, Role Competence, and Refreshers
- Baseline training: All personnel must complete AI literacy, ethics, and privacy training before system access and annually thereafter.
- Role-specific modules:
  - AI Ethics Committee: bias, fairness, and risk frameworks.
  - Developers/ML engineers: validation, GAMP 5, drift monitoring.
  - Auditors: AI audit evidence and inspection techniques.
  - Project Leads: consent, risk communication, governance integration.
- Competency assessment: Documented through test scores or performance demonstration.
- Refresher frequency: Quarterly for high-risk roles; annual for all others.
Incident Management and Escalation
AI Incident Classification
| Tier | Definition | Response |
|---|---|---|
| Minor | Unexpected AI output, no external impact | Correct within 5 business days |
| Significant | Incorrect output reaches client/regulator; potential compliance risk | Notify Policy Owner within 24 hrs; CAPA within 5 days |
| Serious | Patient-safety or data-breach incident | Immediate escalation <24 hrs; notify regulator as required; full RCA & CAPA within 10 days |
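The tiering in the table above can be encoded so that every logged incident is classified the same way. A minimal sketch; attribute names are illustrative, and the Minor timedelta approximates "5 business days" as calendar days:

```python
from datetime import timedelta

def classify_incident(external_impact: bool, compliance_risk: bool,
                      patient_safety_or_breach: bool):
    """Map incident attributes to the tiers in the classification table.
    Returns (tier, response window). Note: the Minor window is
    '5 business days' in the policy; timedelta(days=5) is a simplification."""
    if patient_safety_or_breach:
        return "Serious", timedelta(hours=24)      # immediate escalation
    if external_impact or compliance_risk:
        return "Significant", timedelta(hours=24)  # notify Policy Owner
    return "Minor", timedelta(days=5)              # correct within 5 business days
```

Classifying by explicit attributes rather than free-text judgement keeps tier assignment consistent across reporters and auditable after the fact.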
All incidents are logged and linked to CAPA verification and management review.
Monitoring, Review, and Communication
- Annual: regulatory scan and AI KPI summary.
- Annual: technical and compliance review (bias, validation, data drift), where applicable.
- Annual: comprehensive policy review and training evaluation.
- Emergency updates: issued within 14 days of any regulatory or risk change.
Note: Review frequency is scaled to the system's risk level.
Integration with Other Frameworks
This policy must be read with:
- Privacy Policy HR-POL-001
- Information Security Policy HR-POL-004
- AI Governance Framework AI-SOP-001
EU AI Act Readiness (Conditional)
Activated only for projects involving EU operations or data.
- Classification: Determine system category (unacceptable/high/limited/minimal).
- Compliance: Implement risk management, human oversight, technical documentation, and post-market monitoring.
- Incident Reporting: Establish reporting to EU authorities for serious events.
- Conformity Assessment: Conduct self-declaration or third-party audit as required.
Quality Assurance and Audit
- Quarterly internal audits assess AI validation, data governance, and monitoring records.
- Independent reviews by external auditors may occur biennially.
- Findings inform CAPA tracking and management reviews.
Continuous Improvement
GxPVigilance commits to continual enhancement of AI governance by:
- Reviewing drift and KPI data monthly.
- Updating validation templates with each regulatory change.
- Conducting annual “ethical stress-tests” to review bias and explainability quality.
- Partnering with AusAiLab for research and cross-audit benchmarking.
Associated Documents
Governance & QMS
- QMS-MAN-001 – Quality Management System Manual
- HR-POL-001 – Privacy Policy
- HR-POL-004 – Information Security Policy
- AI-SOP-001 – AI Governance Framework
- AI-TMP-004 – AI Risk Register
- QMS-FRM-004 – Change Control Form
- AI-WIP-001 – AI Usage & Performance Monitoring (Work Instruction)
- AI-SOP-006 – Vendor Qualification for AI Tools
- AI-SOP-003 – AI Incident Response & Escalation
- AI-SOP-004 – AI Training & Competency Management
- AI-SOP-005 – AI Output Documentation & Audit Trails
- AI-TMP-002 – Internal AI Governance Checklist
- AI-TMP-001 – CAPA Tracking Log
- AI-TMP-003 – Local AI Use KPI Dashboard
- TRN-AI-001 – AI Literacy Fundamentals
- TRN-AI-002 – Role-Specific AI Modules
- TRN-AI-003 – Quarterly Micro-Refreshers
