The reasoning has to travel.
Note for readers. The case profile in this piece is illustrative. It is built from concerns being raised at conferences and in tribunal commentary about AI assisted determinations. It is not a real decided matter. The framing below uses placeholder details rather than any real claimant or claim. The point is the principle, which is real: a determination that cannot be unwound to its evidence is a determination at risk.
Case at a glance
| Field | Value |
|---|---|
| Profile name | Illustrative AI assisted determination case |
| Citation | Illustrative profile only. Not a decided matter. |
| Tribunal | Administrative Review Tribunal (ART) framing |
| Determination type | Section 14 liability, then section 19 incapacity recalculation |
| Outcome framing | Set aside and remitted for reconsideration |
What the profile is about
[CLAIMANTNAME] lodged a claim under section 14 of the SRC Act for a [CONDITION] said to have arisen out of employment. The case manager prepared a determination using AI drafting tools. The determination accepted causation under section 14 and, six months later, recalculated incapacity benefits under section 19 with reference to a vocational assessment.
[CLAIMANTNAME] sought review. The point at issue was not whether AI had been used. It was whether the reasoning could be reproduced when challenged.
At review, the determination's reasoning trail was incomplete. Some paragraphs in the section 14 determination contained confident statements that did not map back to evidence on file. The section 19 recalculation referenced calculations that the case manager could not, on the day, walk a tribunal member through.
The determination was set aside and remitted.
What the framing tells us
This profile, illustrative as it is, lines up with the early signals from the Tribunal and from regulator commentary across 2026. The Tribunal is not concerned with the use of AI tools in drafting. The Tribunal is concerned with three things:
- Whether the determination is supported by the evidence on file.
- Whether the case manager, as the delegated decision maker, can articulate the reasoning that led to the determination.
- Whether procedural fairness was observed.
Where AI drafting has been used and the reasoning trail is intact, the use of AI is unproblematic. Where AI drafting has been used and the reasoning trail is broken, the determination cannot be defended. The use of AI is not the problem. The lost reasoning trail is.
The AI implication
There are three lessons that fall out of this framing, each of which translates into specific practitioner action.
Lesson one. The case manager must be able to defend every paragraph. If a paragraph in a determination came from an AI draft and the case manager cannot, on the day, explain why the paragraph says what it says, the paragraph should not be in the determination.
Lesson two. AI confidence language is a trap. AI tools tend to write with assertive prose. Prose that reads well at the desk can read as overstatement at review. Every confident assertion needs to be earned by evidence on file.
Lesson three. The file note matters. A short paragraph in the file note recording that AI was used, that inputs were de-identified, and that the case manager reviewed and edited the draft is a defensible record. Silence is not.
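The file note record described in lesson three can be sketched as a short template. The headings and wording below are illustrative only, not a mandated form; adapt the fields to your own file note conventions and policy requirements.

```text
AI usage note (illustrative template)
- AI tool used for this determination: [yes / no]
- Used for: [drafting / summarisation / calculation cross-check]
- Inputs de-identified before use: [yes / no]
- Draft reviewed and edited by case manager: [yes / no; date]
- AI assertions not supported by the file rewritten: [yes / no]
```

A block like this takes under a minute to complete and gives a reviewer exactly the record that lesson three says silence fails to provide.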
The reasoning trail standard
A defensible reasoning trail covers four things:
- The legal test that applied (which SRC Act section, which threshold).
- The evidence that was considered (what was on file, what was weighed).
- The reasoning that connected the evidence to the test (why this evidence supports this conclusion).
- The conclusion that issued (the determination wording).
When a determination is challenged, the case manager should be able to walk a reviewer through these four things in order. AI drafting can support every one of these steps. AI drafting cannot replace any of them.
De-identification callout. All claim level details in this article are placeholders. No real claimant data, no real claim numbers, no real treating practitioner identifiers appear here. When discussing AI assisted determinations in your own materials, the same standard applies as a matter of course.
Practical workflow after this kind of framing
If your team has been using AI drafting and you have not yet adopted a reasoning trail standard, the following sequence helps.
- Map your current AI workflow against the four-step reasoning trail. Find where the trail can break.
- Update your file note template to require a brief AI usage disclosure for any determination where AI was used in drafting.
- Review a sample of recent AI assisted determinations to see whether you can, today, reproduce the reasoning trail. If you cannot, treat the gap as a training and process issue.
- Brief your delegates on what tribunal members are looking for at review.
Risks and guardrails
Three concrete risks emerge from this kind of profile.
Reasoning trail gaps. The risk is that AI drafted prose ends up in a determination without the underlying reasoning being on file. The control is a paragraph-by-paragraph review where every assertion is checked against the file.
Calculation opacity. The risk is that AI assisted calculations under section 19 are accepted without the case manager being able to explain the maths. The control is to have the case manager redo the calculation independently, even if the AI got it right, so that the calculation is genuinely the case manager's.
Documentation drift. The risk is that the file note does not capture the use of AI, the de-identification step, or the review process. The control is a standardised file note template that prompts for these things.
For practitioners
- Capture the AI prompt and the AI output as part of your file note
- Edit AI drafts to remove confident language unless the file supports it
- Identify which paragraphs of a determination came from AI suggestion
- Be ready to explain how each finding maps to the evidence on file
- Treat any AI assertion you cannot trace as a paragraph to rewrite
For governance leads
- Mandate a reasoning trail standard for every AI assisted determination
- Sample-audit determinations to test whether the trail is reproducible
- Update internal policy to require AI usage to be disclosed in file notes
- Train delegates on how to defend AI assisted reasoning at review
- Treat reasoning trail gaps as a thematic risk, not a one-off issue
SRC Act sections referenced
- Section 14, compensation for injuries (general liability)
- Section 5B, definition of disease (including the significant degree test at s5B(3))
- Section 19, compensation for injuries resulting in incapacity
Each is referenced as it applies to the framing above. Practitioners should always check the current Act text before relying on any specific provision.
How this lands at review
A reviewer who reads an AI assisted determination is asking a small number of questions. Was the right legal test identified? Was the evidence on file fairly considered? Was the reasoning that linked evidence to test articulable? Was the determination wording supported by the analysis?
The illustrative profile above fails the third question. The case manager could not, on the day, reproduce the reasoning that linked evidence to test. The cause was a workflow that put AI in the drafting seat without keeping the case manager in the reasoning seat.
Two adjustments would have changed the outcome. The first is the file note discipline. A short note recording that AI was used, that inputs were de-identified, that the case manager reviewed the draft, and that any AI assertions not supported by the file were rewritten. The second is the calculation discipline. The case manager redoes the section 19 maths personally, even where the AI got it right, so that the calculation is genuinely the case manager's.
Both adjustments take minutes per claim. They are entirely within the existing operational rhythm.
What practitioners are doing differently now
Across scheme operators, three practical changes have followed the kind of framing this profile illustrates.
Change one. File note templates updated. A short AI usage block is now standard in many file note templates. It captures whether AI was used, what for, that inputs were de-identified, and that the case manager reviewed the draft.
Change two. Calculation redo. Where AI assists with section 19 calculations, the case manager runs the calculation independently and uses the AI result as a cross-check. The case manager's figure is the authoritative one.
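The calculation redo discipline in change two can be sketched in a few lines. Everything below is a placeholder: the figures and the simplified subtraction are illustrative inputs, not the actual section 19 method, and the function name is invented for this sketch. The pattern is the point: the case manager computes the figure independently, and the AI output is used only as a cross-check, never as the authoritative number.

```python
def figures_agree(case_manager_figure: float, ai_figure: float,
                  tolerance: float = 0.01) -> bool:
    """Return True when the independently computed figure and the AI
    figure agree within tolerance.

    The case manager's figure is authoritative either way; a mismatch
    means the calculation is reworked and the discrepancy noted on file.
    """
    return abs(case_manager_figure - ai_figure) <= tolerance


# Placeholder inputs from the file (illustrative only, not s19 method).
normal_weekly_earnings = 1500.00
actual_weekly_earnings = 400.00

# The case manager's own calculation, done independently of the AI.
case_manager_figure = normal_weekly_earnings - actual_weekly_earnings

# The AI-produced figure, used only as a cross-check.
ai_figure = 1100.00

if not figures_agree(case_manager_figure, ai_figure):
    print("Discrepancy: rework the calculation and note it on file.")
```

If the two figures diverge, the workflow stops and the case manager reworks the number; the AI result never overwrites the independent one.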
Change three. Confidence language audit. Case managers are reading their drafts looking specifically for confident assertions that are not earned by evidence on file. Confident language is the most common point of failure at review.
These changes are small, repeatable, and visible at audit. They are also what closes the gap between AI drafted and AI defended.
A note on tribunal posture
It is worth being clear about how tribunal members are likely to approach AI assisted determinations. Three observations, drawn from public commentary across 2026.
Tribunal members are not technology assessors. They are not interested in the model card or the architecture of the AI tool. They are interested in whether the determination is supported by the evidence and whether the reasoning is sound. The use of AI is a context, not a question, in their reasoning.
Tribunal members notice over-confident prose. Determination wording that asserts more than the evidence supports is a long-standing review risk. AI drafting tends to produce confident prose. The combination is more visible at review than it used to be.
Tribunal members value clear file notes. A short paragraph in the file note that records how AI was used, that inputs were de-identified, and that the case manager reviewed the output, is read positively at review. It signals discipline. The absence of such a note is read negatively.
The posture is not adversarial. It is professional. Practitioners who keep the discipline visible in their file notes will find that AI drafting becomes a non-issue at review.
The bottom line
The Tribunal is not anti-AI. The Tribunal is pro-reasoning. If your reasoning trail is intact, AI drafting is invisible to the review process. If your reasoning trail is broken, the use of AI becomes visible for all the wrong reasons.
Defend the trail, not the tool.
---
Content disclaimer: This article is for general educational purposes only and does not constitute legal advice, liability determination guidance, or a substitute for professional judgement. Workers compensation decisions must be made by appropriately qualified and authorised persons under the Safety, Rehabilitation and Compensation Act 1988. All AI outputs described in this article require human review before use in any claims management context.
TheAICommand. Intelligence, At Your Command.
