The regulator has updated its expectations.
Note for readers. This piece is a reading of the broad direction of regulator commentary on AI in claims management as of April 2026. Practitioners should always read the source guidance directly and treat any commentary, including this one, as an aid to interpretation rather than a substitute for the document itself.
Why this matters now
For most of 2025, Comcare's posture on AI in claims management was watchful. The regulator was meeting with scheme operators, sampling early use cases, and signalling expectations through speeches and informal channels. April 2026 marks the point at which those expectations move from informal to documented.
Practitioners who have been waiting for clearer regulator commentary now have it. The five themes below set out the practical shape of what scheme operators are reading.
The five themes
The guidance, in plain practitioner terms, lands on five themes.
Theme one. AI is a tool, not a decision maker. The regulator's first line is the same line the SRC Act draws. Section 14, section 16, section 19 and adjacent provisions vest decision making power in delegated humans. AI can prepare drafts. AI cannot lawfully make a determination on its own. Any workflow that drifts toward AI making the decision is outside the regulator's expectations.
Theme two. The reasoning trail must hold. Where AI has been used in drafting, the case manager must be able to articulate the reasoning that led to the determination. This means walking a reviewer through the legal test, the evidence, the analysis, and the conclusion. AI can support each step but cannot replace any of them. A determination that cannot be defended at this level of detail is a determination at risk on review.
Theme three. Privacy is non-optional. The guidance speaks plainly to de-identification. Claimant identifiers, claim numbers, treating practitioner names, and other personal information must not flow into AI tools that process input on infrastructure outside the scheme operator's privacy boundary, except where a clear Privacy Impact Assessment supports the use. Most case management uses of AI will require de-identification as a working control.
Theme four. Tools must be approved, not adopted. The expectation is that scheme operators maintain a register of approved AI tools. Individual case managers do not adopt tools personally. Tools are procured, assessed, registered, and only then made available to case managers. Shadow AI use is incompatible with the regulator's posture.
Theme five. Training is mandatory, not optional. The guidance is explicit that case managers using AI tools must understand both the capabilities and limitations of those tools. This is not satisfied by a one-off email. It is satisfied by formal training, refresher cycles, and documented competency.
What changed and what did not
Two things changed. First, the regulator's expectations are now documented. Where previously a scheme operator could point to good faith experimentation, the bar is now stated. Second, the reasoning trail standard is articulated more crisply than before.
Three things did not change. The SRC Act remains the SRC Act. The decision making power still sits with delegated humans. The privacy principles that have applied to claim files for decades still apply.
De-identification callout. Any practical work undertaken to align with the regulator's expectations should be done with de-identified data. Use [CLAIMANTNAME], [CLAIMNUMBER], [CONDITION], [TREATINGPRACTITIONER] in any sample workflows or training material. The regulator's emphasis on privacy is a requirement, not a suggestion.
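As a working illustration of the placeholder approach, the sketch below replaces identifiers with the bracketed tokens before text reaches any AI tool. The regex patterns and claim number format are hypothetical; real de-identification tooling would need patterns validated against the scheme operator's own data formats and should fail closed rather than open.

```python
import re

# Hypothetical patterns; a production de-identification step would be
# broader and validated against the operator's actual identifier formats.
PATTERNS = [
    (re.compile(r"\b[A-Z]{2}\d{6}\b"), "[CLAIMNUMBER]"),            # e.g. "AB123456"
    (re.compile(r"\bDr\.?\s+[A-Z][a-z]+\b"), "[TREATINGPRACTITIONER]"),
]

def deidentify(text: str, claimant_names: list[str]) -> str:
    """Replace known identifiers with placeholder tokens before any AI use."""
    for name in claimant_names:
        text = text.replace(name, "[CLAIMANTNAME]")
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

note = "Dr Smith reviewed claim AB123456 for Jane Citizen."
print(deidentify(note, ["Jane Citizen"]))
```

The design point is that de-identification happens as a deterministic pre-processing step at the desk, not as something the AI tool is trusted to do for itself.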
What to do this quarter
For most scheme operators, the practical response is a five-step plan that can fit inside a quarter.
Step one. Inventory AI use. Find out what tools are in use, by whom, for what tasks. Most operators discover that the inventory is broader than they thought.
Step two. Establish the approved list. Decide which tools, with which Privacy Impact Assessments, are formally approved. Anything outside the list is paused.
Step three. Update file note templates. Make AI use a routine line item in determinations. The line is short. It says AI was used, what it was used for, that the inputs were de-identified, and that the case manager reviewed the output.
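One way to make that line routine is to generate it from a template so every determination records the same fields. The function and wording below are illustrative only, not a prescribed Comcare format; the bracketed tokens follow the de-identification convention used in this article.

```python
from datetime import date

def ai_usage_note(tool: str, purpose: str, reviewer: str) -> str:
    """Generate the routine AI usage line for a determination file note.

    Field names and wording are illustrative, not a prescribed format.
    """
    return (
        f"AI use ({date.today().isoformat()}): {tool} was used for {purpose}. "
        f"Inputs were de-identified before submission. "
        f"Output reviewed and adopted by {reviewer}."
    )

print(ai_usage_note("[APPROVEDTOOL]", "drafting the background summary", "[CASEMANAGER]"))
```

Because the line is generated rather than free-typed, a later sample audit can search file notes for the block reliably.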
Step four. Brief delegates on the reasoning trail. Run a short workshop on how to walk a reviewer through the four-step trail. This is training, not policy. It is the practical defence of every determination.
Step five. Sample-audit. Pull a sample of recent AI assisted determinations. Walk each through the reasoning trail standard. Treat any gap as a training opportunity, not a personal failing.
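The sample audit in step five can be as simple as checking each file note for the four reasoning trail elements. The sketch below assumes file notes have been extracted into dictionaries; the field names and sample content are hypothetical and use the de-identified placeholder convention.

```python
# The four checks mirror the reasoning trail standard: legal test,
# evidence, analysis, conclusion. Field names are illustrative.
REASONING_TRAIL = ["legal_test", "evidence", "analysis", "conclusion"]

def audit_determination(file_note: dict) -> list[str]:
    """Return the reasoning trail elements missing from a sampled file note."""
    return [step for step in REASONING_TRAIL if not file_note.get(step)]

sample = {
    "claim": "[CLAIMNUMBER]",
    "legal_test": "s 14 SRC Act, injury arising out of employment",
    "evidence": "[TREATINGPRACTITIONER] report",
    "analysis": "",          # gap: analysis step not recorded
    "conclusion": "Liability accepted",
}
print(audit_determination(sample))  # each gap is a training opportunity
```

A tally of which elements go missing most often tells the training lead exactly where to focus the next refresher.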
The practitioner-facing takeaways
Three things matter most for case managers reading the guidance.
First, the regulator is not anti-AI. Nothing in the guidance prevents AI being used to draft, to summarise, or to support analysis. The regulator is anti-bad-AI. Where the workflow keeps the human at the centre, the use is supported.
Second, the documentation burden is small. The file note template change is one or two lines. The reasoning trail standard reflects work case managers should already be doing.
Third, the privacy expectation is not negotiable. De-identification is the single highest-leverage control. Scheme operators that have not yet embedded it as a desk habit should make that the priority before any other change.
Risks and guardrails
Three risks emerge from how scheme operators implement this guidance.
Implementation theatre. The risk is that the operator updates a policy document and considers the work done. The control is to test implementation through sample audits and training observation, not policy review.
Tool list drift. The risk is that the approved list lags actual practice and case managers use unapproved tools because they are easier. The control is to make the approved list useful, current, and visible.
Training as compliance. The risk is that training becomes a tick-box exercise. The control is to make training role-specific, scenario-based, and connected to live workflows.
For practitioners
- Confirm that any tool you use is on your scheme operator's approved list
- Document AI use in your file note as a routine line item
- Flag any AI output you cannot fully explain before relying on it
- Treat the de-identification step as a hard control on every claim
- Escalate edge cases rather than letting AI default to a position
For governance leads
- Update your AI tool register against the Comcare expectations
- Review your Privacy Impact Assessment scope for external models
- Confirm your delegate training covers AI assisted decision making
- Map your audit trail process to the reasoning trail standard
- Brief your executive on where AI sits in your risk register
SRC Act sections referenced
- Section 14, compensation for injuries (general liability)
- Section 16, compensation in respect of medical treatment
- Section 19, compensation for injuries resulting in incapacity for work
These are the sections most likely to come up in any AI assisted workflow.
Reading the guidance against existing controls
Most scheme operators have existing controls that already address parts of what the regulator now articulates. The work is to map the existing controls against the five themes and identify the genuine gaps.
For most operators, the gap analysis comes back showing strong controls on theme one (delegation) and theme three (privacy), and weaker controls on theme two (reasoning trail), theme four (approved tool list), and theme five (training). The reason is structural. Delegation and privacy have been part of the operating model for decades. AI tooling is new enough that the controls around it have not yet matured.
The gap analysis is not a punishment exercise. It is a planning exercise. Identifying weak controls now lets the operator close the gaps before the next regulator engagement.
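The mapping exercise can be kept as a simple table of themes against controls. The sketch below is a minimal illustration, assuming a hypothetical starting position where only delegation and privacy controls exist; the theme labels paraphrase the five themes above.

```python
# Illustrative gap analysis: map existing controls to the five themes.
THEMES = {
    1: "human decision maker",
    2: "reasoning trail",
    3: "privacy / de-identification",
    4: "approved tool register",
    5: "training and competency",
}

# Hypothetical starting position: strong on delegation and privacy only.
existing_controls = {1: "delegation instrument", 3: "privacy policy and PIA process"}

def gap_analysis(controls: dict[int, str]) -> list[str]:
    """List themes with no mapped control; these become planning priorities."""
    return [name for num, name in THEMES.items() if num not in controls]

print(gap_analysis(existing_controls))
```

Run quarterly, the same mapping doubles as evidence for internal audit that the gaps are known and being closed.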
The role of internal audit
Internal audit functions in larger scheme operators are taking an increasing interest in AI assisted determinations. The questions they are asking are practical:
- Can you produce a list of every AI tool currently in use across claims management?
- Can you produce a sample of file notes showing AI usage being recorded?
- Can you walk us through the reasoning trail for a sample of AI assisted determinations?
- Can you show us your Privacy Impact Assessment for the tools in use?
- Can you produce evidence of training delivery and competency for AI users?
If the answer to any of these is "we are working on it", the audit finding is going to land harder than it needs to. The simpler answer is to do the work now, while the regulator's expectations are documented but not yet a formal compliance regime.
The medium-term view
Looking 12 to 18 months ahead, the regulatory direction is clear. AI use in claims management will be expected to be governed, documented, and auditable. Scheme operators that have moved early on the five themes will be in a stronger position. Scheme operators that have not will face a more difficult catch-up cycle.
The good news is that the work is not large. It is detailed and operational, but not large. A focused quarter on inventory, approved list, training, file note templates, and reasoning trail discipline gets most operators most of the way there.
The harder, longer-tailed work is cultural: building a team where every case manager defaults to de-identification, every determination has an articulable reasoning trail, and every AI use is documented as a matter of routine. That is a multi-quarter effort, not a single sprint.
A practical 90-day plan
For scheme operators looking at the guidance and wondering where to start, a 90-day plan that lands the practical changes looks like this.
Days 1 to 14. Inventory and assessment. Audit the AI tools in actual use across the claims function. Be thorough. Most operators discover tools they did not know about. Map each tool against the regulator's expectations and identify the gaps.
Days 15 to 30. Approved list and Privacy Impact Assessments. Decide which tools are formally approved. For each approved tool, confirm a Privacy Impact Assessment exists and is current. For tools not on the approved list, communicate the change to the team and pause use.
Days 31 to 60. File note templates and training. Update file note templates to include the AI usage block. Run training sessions on AI usage, the de-identification toolkit, and the reasoning trail standard. Make the training scenario-based rather than lecture-based.
Days 61 to 90. Audit and feedback. Pull a sample of recent AI assisted determinations and walk them through the reasoning trail standard. Use the audit findings as a feedback loop into training. Refine the file note template if the audit surfaces issues.
The 90-day plan is not exhaustive. It is the minimum viable response to the regulator's expectations. Operators that do this well will be in a strong position for the next round of regulator engagement.
The bottom line
The April 2026 guidance does not change the SRC Act and does not stop AI being used. It writes down what was already true and gives scheme operators a clearer line on the workflows they need to support. The work for the next quarter is operational, not philosophical.
Read the guidance. Inventory the tools. Train the delegates. Audit the trail.
---
Content disclaimer: This article is for general educational purposes only and does not constitute legal advice, liability determination guidance, or a substitute for professional judgement. Workers compensation decisions must be made by appropriately qualified and authorised persons under the Safety, Rehabilitation and Compensation Act 1988. All AI outputs described in this article require human review before use in any claims management context.
TheAICommand. Intelligence, At Your Command.
