
AI and Medical Device Software Regulation: The Australian Perspective 2026


What Australian Sponsors Must Know in 2026

For Australian sponsors of software-based medical devices, 2026 marks a clear regulatory inflexion point. The TGA’s February 2026 guidance, anchored by its July 2025 report “Clarifying and Strengthening the Regulation of Medical Device Software, including Artificial Intelligence,” has materially reset compliance expectations for everyone developing, supplying, or maintaining AI-enabled SaMD (Software as a Medical Device) in Australia. AI medical device software regulation in Australia is now a strategic imperative, not a background activity. This article examines the three pillars driving that reset, identifies the risks most sponsors underestimate, and closes with a practical compliance checklist.

The Australian Regulatory Framework: Technology-Agnostic but Demanding

Australia’s medical device framework regulates products based on their intended purpose, not the technology they use. Regulation is triggered when software meets the definition of a medical device under section 41BD of the Therapeutic Goods Act 1989 — regardless of whether the underlying system runs on a rule-based algorithm or a large language model.

Technology-agnostic does not mean AI gets a lighter compliance pathway. AI-enabled products introduce novel risks (data drift, algorithmic bias, opaque decision pathways) that conventional device regulation was not designed to address directly. They must satisfy the same Essential Principles as any other medical device, but typically need more extensive evidence to do so. The framework is demanding precisely because it makes no concessions for AI novelty: AI medical device software regulation in Australia is grounded in risk, and AI expands the categories of risk a sponsor must control.

[Infographic: the TGA’s technology-agnostic approach to AI medical device software and the core pillars of AI compliance in Australia.]

Pillar One: What Does Your Intended Purpose Actually Protect?

The Regulatory Trigger Is What You Say, Not What You Build

Short answer: Intended purpose is established by everything a manufacturer communicates publicly — labelling, instructions for use, advertising, UI claims, and technical documentation. Review all product materials regularly, as poorly drafted product pages can reclassify a product faster than any engineering update.

Software or AI products are regulated as medical devices when intended for diagnosis, prevention, monitoring, prediction, prognosis, or treatment of a disease, injury, or disability. A tool positioned as offering general wellness insights sits outside the regulatory frame; the same tool, repositioned as a cardiovascular risk prediction tool for clinicians, becomes a regulated medical device requiring ARTG inclusion. Sponsors must treat the intended purpose as a living document, reviewed at every product update cycle.

The Prediction Gap and Exemption Risks

A critical gap confirmed in the TGA’s 2025 review: AI tools used for prediction or prognosis currently default to Class I because existing rules 4.5(1) and 4.5(2) do not address predictive or prognostic functions. Monitor proposed amendments to anticipate reclassification to Class IIa or higher. If your product carries any predictive functionality, proactively plan for possible reclassification in the near term.

CDSS (Clinical Decision Support Software) exemptions also carry ongoing obligations. An exemption is not an exclusion. Exempt software remains subject to TGA oversight, and the exemption must be actively reassessed whenever the intended purpose changes or a meaningful update is deployed. For AI-enabled CDSS, this is a continuous obligation, not a one-time determination.

Pillar Two: What “No Black Boxes” Demands in Practice

AI-Specific Evidence — Five Mandatory Domains

Short answer: TGA’s February 2026 guidance sets clearer expectations for classification, evidence generation, and lifecycle management for AI-enabled software. Ensure your evidence addresses five domains: AI model alignment with intended purpose; algorithm and model design; data representativeness for Australian populations; risk management covering overfitting, bias, and drift; and ongoing clinical evidence throughout the product lifecycle.

The IMDRF Good Machine Learning Practice for Medical Device Development (IMDRF/AIML WG/N88 FINAL:2025), adopted by the TGA, provides the operational structure. GMLP is now a practical prerequisite for ARTG inclusion of AI-enabled SaMD.

Making AI Legible: Three Things Reviewers Need to See

In practice, algorithmic transparency means three demonstrable elements:
  • How the model reaches its output — documented reasoning, no opaque decision pathways
  • Data independence — training and testing datasets demonstrably separate and representative of Australian clinical sub-populations
  • Risk controls — overfitting, bias, and data drift actively managed throughout the product lifecycle
Use Essential Principle 12.1 for programmable medical devices as your guide for meeting these obligations.
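
These expectations lend themselves to automated checks in the validation pipeline. Below is a minimal illustrative sketch, not a TGA-prescribed method, assuming pandas and scikit-learn are available; the column names (patient_id, subgroup, label) are hypothetical:

```python
# Illustrative sketch only: dataset-independence and subgroup-performance
# checks of the kind Essential Principle 12.1 evidence might draw on.
# Column names (patient_id, subgroup, label) are hypothetical assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

def check_dataset_independence(train: pd.DataFrame, test: pd.DataFrame) -> None:
    """Fail loudly if any patient appears in both training and testing data."""
    overlap = set(train["patient_id"]) & set(test["patient_id"])
    if overlap:
        raise ValueError(f"{len(overlap)} patients appear in both datasets")

def subgroup_performance(test: pd.DataFrame, scores, min_auc: float = 0.80) -> dict:
    """Report AUC per clinical sub-population, flagging any below threshold."""
    results = {}
    for name, group in test.assign(score=scores).groupby("subgroup"):
        auc = roc_auc_score(group["label"], group["score"])
        results[name] = {"auc": round(auc, 3), "below_threshold": auc < min_auc}
    return results
```

The same subgroup breakdown can feed the Australian-population generalisability evidence discussed later in the checklist.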

Synthetic Data — A Supplementary Role Only

TGA guidance confirms that synthetic data may support model training and validation, provided a clear rationale is documented. However, it will not replace clinical data for core compliance claims. The Australian population generalisability requirement imposes a constraint that neither internationally sourced nor purely synthetic datasets alone can satisfy.

Pillar Three: The TGA Action Plan — What Is Coming in 2026

The TGA’s July 2025 review produced 14 findings from 53 stakeholder submissions. The findings most directly relevant to sponsor planning are:
  • Predictive tool reclassification is explicitly on the reform agenda.
  • Adaptive and generative AI require new change control approaches; current static-model assumptions do not apply to systems that change post-deployment.
  • Consumer health and digital mental health exclusions are under urgent review and may narrow significantly.
  • Definitions reform — “manufacturer” and “sponsor” may not map cleanly onto AI developer-deployer-distributor ecosystems; legislative clarification is under consideration.
The key message amid this regulatory flux: if your software uses AI to influence clinical decisions or patient care, prepare to be in scope for full compliance. Build compliance infrastructure into the design that can be continuously updated, rather than relying on documentation fixed at launch.

Scope Creep: The Regulatory Time-Bomb in Your Backlog

Scope creep — the gradual shift in a product’s intended purpose through iterative updates — is the most underestimated risk in the current AI SaMD environment. Two structural drivers are at work: exclusion erosion, where a product gradually takes on medical device functions without anyone recognising the shift, and conditions drift, where the intended purpose moves as new features arrive.

For AI products, the risk is structural. A clinical documentation tool adds a differential diagnosis panel. A triage prioritisation model’s outputs begin driving treatment decisions. An adaptive algorithm, retrained on expanded data, quietly serves a broader patient population than originally approved.

The TGA’s guidance is clear: changes that alter the intended use or performance require regulatory approval before release. Build an internal scope creep radar: an intended-purpose register mapped to roadmap items, change control templates with a mandatory “regulatory impact” field (a minimal sketch follows), and periodic cross-functional review across regulatory affairs, clinical, product, and data science teams.
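
One way to make the “regulatory impact” field operational is to encode it in the change request record itself, so no update ships without an explicit assessment. A minimal sketch, assuming a Python tooling stack; every field name here is hypothetical, not a TGA template:

```python
# Illustrative sketch of a change-control record carrying a mandatory
# regulatory-impact field. All names are hypothetical assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RegulatoryImpact(Enum):
    NONE = "no change to intended purpose or performance"
    MINOR = "performance affected; intended purpose unchanged"
    SIGNIFICANT = "intended purpose or classification may change"

@dataclass
class ChangeRequest:
    change_id: str
    description: str
    affects_intended_purpose: bool
    affects_model_behaviour: bool
    regulatory_impact: RegulatoryImpact
    reviewer_sign_off: Optional[str] = None

    def requires_regulatory_review(self) -> bool:
        """Significant changes, or any touching intended purpose,
        must be approved before release."""
        return (self.affects_intended_purpose
                or self.regulatory_impact is RegulatoryImpact.SIGNIFICANT)
```

Making `regulatory_impact` a required field forces the assessment to happen at change-request time rather than retrospectively, which is the whole point of the radar.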


Maintaining a Defensible State of Control Post-Market

Conventional post-market surveillance was designed for static products. Adaptive AI systems can change behaviour after deployment. The TGA is developing guidance on thresholds for significant change in adaptive AI. Until that guidance is published, apply a predetermined change approach: document anticipated modifications, define validation methodologies, and establish performance thresholds that trigger formal regulatory review.
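
In the absence of published TGA thresholds, one pragmatic reading of the predetermined change approach is to pre-register performance limits and treat any breach as a trigger for formal review. A minimal sketch; the metric names and values below are hypothetical placeholders, not TGA-mandated figures:

```python
# Illustrative sketch: pre-registered performance limits that trigger formal
# review when breached. Metric names and values are hypothetical placeholders.
PREDETERMINED_LIMITS = {
    "auc": 0.85,                # floor: minimum discriminative performance
    "sensitivity": 0.90,        # floor: minimum detection rate
    "calibration_error": 0.05,  # ceiling: maximum acceptable miscalibration
}
CEILINGS = {"calibration_error"}  # all other metrics are floors

def review_triggered(observed: dict) -> list:
    """Return the metrics breaching their pre-registered limit."""
    breaches = []
    for metric, limit in PREDETERMINED_LIMITS.items():
        value = observed.get(metric)
        if value is None:
            continue  # metric not measured in this monitoring cycle
        breached = value > limit if metric in CEILINGS else value < limit
        if breached:
            breaches.append(metric)
    return breaches

# Example: review_triggered({"auc": 0.82, "sensitivity": 0.93}) -> ["auc"]
```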

[Infographic: how AI software updates can trigger new TGA obligations, including ARTG inclusion and higher-risk reclassification.]

International Signals: What EU and US Frameworks Mean for AI Medical Device Software Regulation

Two developments should inform Australian planning. The EU AI Act’s high-risk AI obligations apply from August 2026; AI-enabled SaMD that requires third-party conformity assessment under the MDR is classified as high-risk AI, triggering mandatory requirements for explainability, bias detection, and human oversight in addition to existing MDR obligations. The FDA’s PCCP (Predetermined Change Control Plan) framework, finalised in December 2024, allows manufacturers to pre-define planned device modifications within a single marketing submission. The TGA is examining this approach with a view to international alignment.

No formal Australian PCCP pathway exists yet. Sponsors who structure change control documentation in PCCP-equivalent terms now will be positioned ahead of that transition. Design for the strictest common denominator: build to standards that satisfy TGA, EU, and FDA expectations simultaneously.

Practical Checklist for Australian Sponsors in 2026

AI medical device software regulation in Australia rewards sponsors who treat compliance as a design-phase discipline. Evidence packages, scope creep controls, and post-market monitoring cannot be meaningfully retrofitted once a product is in clinical use.

Before or at launch

Audit intended purpose for all AI use cases — existing and pipeline.

Classify all products under current and proposed SaMD rules, including predictive tools.

Build a GMLP-aligned evidence package from the design phase.

Establish Algorithmic Transparency & Data Generalisability: Document exactly how the model reaches its output (the “no black boxes” imperative), prove the independence of training and testing datasets, and provide statistically informed justifications that the model is generalisable to the local Australian clinical population (purely synthetic or international datasets are generally insufficient).

Implement a compliant Quality Management System (QMS) & Software Standards: Ensure conformity assessment procedures are met, including QMS implementation (e.g., ISO 13485) and adherence to software life cycle processes (IEC 62304).

Implement change control with a “regulatory impact” field for all AI model updates and feature additions.

Draft pre-market PCCP-equivalent documentation: Proactively document anticipated modifications, validation methodologies, and impact assessments for adaptive models before launch, anticipating the TGA’s move toward international PCCP alignment.

Establish post-market monitoring — performance dashboards, drift detection, adverse event linkage — before go-live.
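
For the drift-detection piece, a simple baseline-versus-live comparison of input distributions is often enough to start. Below is a minimal sketch using the Population Stability Index, a common monitoring heuristic rather than a TGA requirement, assuming NumPy:

```python
# Illustrative drift check using the Population Stability Index (PSI):
# PSI = sum((p - q) * ln(p / q)) over shared histogram bins. The 0.2
# threshold is a conventional heuristic, not a TGA-mandated value.
import numpy as np

def population_stability_index(baseline, current, bins: int = 10) -> float:
    """Compare the live input distribution against the training baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    p, _ = np.histogram(baseline, bins=edges)
    q, _ = np.histogram(current, bins=edges)
    eps = 1e-6  # avoid division by zero in sparse bins
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum((p - q) * np.log(p / q)))

# Example: flag a feature for investigation when PSI exceeds 0.2
# if population_stability_index(train_ages, live_ages) > 0.2: escalate()
```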

Ongoing

Engage with TGA consultations through 2026, particularly on classification amendments and adaptive AI guidance.

Track EU AI Act high-risk obligations (August 2026) and FDA PCCP implementation for dual-market products.

Reassess intended-purpose status at every major product update cycle.

Conduct Post-Market Clinical Follow-up (PMCF) and Annual Reporting: Continuously collect direct clinical evidence (including real-world evidence) to demonstrate ongoing safety and performance, and submit mandatory annual reports to the TGA if dictated by your device’s classification category.

Conclusion

The TGA’s 2026 posture is clear: AI-enabled SaMD must be governed with the same evidentiary discipline as any other medical device. Given AI’s unique risk profile, this often means even more scrutiny. Sponsors who treat AI medical device software regulation in Australia as a design constraint rather than just a post-development compliance hurdle will experience less regulatory friction and gain a stronger market position. Predictive tool reclassification is approaching. Guidance on adaptive AI is in development. International obligations are tightening. Sponsors who embrace this regulatory moment as an invitation to build properly will move the fastest and with the least drag.

Readers should verify regulatory status directly with the TGA before making compliance decisions.

Common Questions and Answers

Q1. Does my AI tool automatically become a regulated medical device because it uses machine learning?

No. Regulation depends on the product’s intended purpose, not whether it uses AI or machine learning; if it is not intended for diagnosis, prevention, monitoring, prediction, prognosis, or treatment, it is unlikely to be regulated as a medical device.

Q2. Our AI product is currently Class I. Should we be concerned about reclassification?

Yes, particularly if it has a predictive or prognostic function. The TGA’s 2025 review identified these tools as under-classified and proposed moving many to Class IIa or higher.

Q3. We have a CDSS exemption. Does that mean we are outside TGA regulation?

No. A CDSS exemption removes the need for ARTG inclusion in some cases, but the software remains subject to TGA oversight and must be reassessed whenever intended purpose or functionality changes.

Q4. What does the TGA actually mean by “algorithmic transparency”?

It means sponsors must show how the model reaches its outputs, that datasets are independent and representative of Australian populations, and that risks such as bias, overfitting, and data drift are controlled. These expectations are primarily enforced through Essential Principle 12.1.

Q5. Can synthetic data satisfy TGA clinical evidence requirements?

Synthetic data can support training and validation if its methodology and rationale are documented. However, it will generally not replace the clinical data needed to demonstrate compliance, especially for Australian population generalisability.

Q6. How should sponsors manage “scope creep” in AI products?

Sponsors should implement continuous intended-purpose monitoring and change control processes to assess model updates and new features before release. These controls need to be embedded in the product development lifecycle, not treated as retrospective compliance checks.

Q7. Does the TGA currently recognise Predetermined Change Control Plans (PCCPs) for AI medical devices?

Not formally yet, but as of March 2026 the TGA has said it is examining the FDA’s PCCP concept with a view to international alignment. Sponsors should still prepare for PCCP-like expectations by documenting anticipated changes, impact assessments, and validation methods in advance.

Q8. Why are AI-based predictive and prognostic tools currently considered a regulatory “gap,” and how are their classifications expected to change?

They are considered a gap because existing rules do not explicitly account for predictive functions, so many currently default to Class I despite the potential for significant clinical harm. The TGA has proposed reclassifying many of these tools to Class IIa or higher depending on risk and intended user.

Q9. Can developers rely entirely on synthetic or international datasets to satisfy TGA clinical evidence requirements for AI models?

No. Synthetic and international datasets may support development, but they generally cannot replace the clinical evidence needed to show the model is applicable to Australian populations.

Q10. How does the TGA address the “black box” nature of AI models, and what specific evidence is required to ensure algorithmic transparency?

The TGA effectively requires sponsors to justify how the model produces its outputs and to demonstrate safety, performance, and fitness for purpose through documented design, data, clinical evidence, and risk controls. This includes evidence addressing AI-specific risks such as overfitting, bias, and data drift.

Disclaimer

This article is provided for educational and informational purposes only. It is intended to support general understanding of regulatory concepts and good practice and does not constitute legal, regulatory, or professional advice.

Regulatory requirements, inspection expectations, and system obligations may vary based on jurisdiction, study design, technology, and organisational context. As such, the information presented here should not be relied upon as a substitute for project-specific assessment, validation, or regulatory decision-making.

We have no commercial relationship with any of the entities, vendors, or software referenced. Any examples are illustrative only, and usage may vary by organisation and its needs.

For guidance tailored to your organisation, systems, or clinical programme, we recommend speaking directly with us or engaging another suitably qualified subject matter expert (SME) to assess your specific needs and risk profile.