AI drafts in minutes. The SRC Act still demands human judgement.
The state of play
Across Commonwealth schemes, AI tools are being trialled for liability drafting, statement summarisation, and TOOCS coding support. The technology is faster than human drafting on every dimension that does not require legal reasoning. The SRC Act 1988, however, was written for delegated human decision makers. None of the recent regulator commentary has changed that.
This guide sets out a practitioner framework for using AI inside SRC Act determinations without breaching the standards that apply to your delegation.
Where the legal line sits
The SRC Act creates a series of decisions that can only be made by an appropriately delegated person. The most common are:
- Section 14 compensation for injuries (general liability)
- Section 16 compensation for medical expenses
- Section 19 compensation for injuries resulting in incapacity (weekly amounts)
- Section 24 compensation for permanent impairment
Each of these involves a qualifying threshold that cannot be sidestepped. Section 14 requires the case manager to be satisfied that an injury arose out of or in the course of employment. Section 16 requires a finding of reasonable medical treatment. Section 19 requires careful arithmetic and an active assessment of incapacity.
AI can prepare a draft of any of these. AI cannot lawfully make any of these.
The AI workflow that fits inside the SRC Act
The most defensible workflow for AI assisted determinations follows five steps. Each is reversible, auditable, and easy to evidence in a file note.
- Frame the question. The case manager identifies the SRC Act test that applies and the evidence on file.
- De-identify the input. Every claimant identifier, claim number, treating practitioner name, and employer reference is replaced with placeholders before any prompt is constructed.
- Generate the structured draft. The AI produces a structure with the facts as supplied, the test as named, and the reasoning aligned to the test.
- Map the draft to the evidence. The case manager walks the draft against the actual claim file, line by line, confirming each factual assertion exists in the source documents.
- Issue the decision. The case manager applies the legal test, accepts or rejects the draft reasoning, and signs the determination as the delegated decision maker.
The AI is a tool inside step three. The decision sits with the case manager across the entire workflow.
De-identification callout. No prompt sent to any external AI tool may contain a claimant name, claim number, date of birth, exact address, treating practitioner name, or employer reference. Use placeholders such as [CLAIMANTNAME], [CLAIMNUMBER], [CONDITION], [TREATINGPRACTITIONER]. This is not a guideline. It is a control.
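A control is easier to enforce when it is enforced in code rather than left to habit. A minimal sketch of the placeholder substitution, assuming identifiers can be caught with simple patterns — the claim-number and date formats below are illustrative only, and a real implementation should draw names and formats from the claims system rather than regular expressions:

```python
import re

# Illustrative patterns only -- real claim-number and date formats are
# scheme-specific and should come from the claims system, not from regex.
PATTERNS = {
    "[CLAIMNUMBER]": re.compile(r"\b\d{2}/\d{6}\b"),        # e.g. 24/123456
    "[DATEOFBIRTH]": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),  # e.g. 03/07/1981
}

def deidentify(text: str, known_names: dict[str, str]) -> str:
    """Replace known identifiers with placeholders before any prompt is built."""
    for name, placeholder in known_names.items():
        text = text.replace(name, placeholder)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

sample = "Report by Dr A. Smith for Jane Citizen, claim 24/123456, DOB 03/07/1981."
cleaned = deidentify(sample, {
    "Jane Citizen": "[CLAIMANTNAME]",
    "Dr A. Smith": "[TREATINGPRACTITIONER]",
})
print(cleaned)
```

The point of the sketch is the ordering: named identifiers are stripped before any prompt text exists, not cleaned up afterwards.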
What changes between determinations
The framework holds across the four most common decision types, but the prompt and the human checks shift.
Section 14 compensation for injuries. The AI is most useful in structuring the causation analysis and in drafting the wording around the section 5B "significant degree" contribution test where a disease component is present. The human must apply the legal test for "arising out of or in the course of employment" and the relevant exclusions in section 5A (including reasonable administrative action).
Section 24 permanent impairment. The AI helps with structuring the tables of permanent impairment and the binding-on-Comcare assessment text. The human must satisfy themselves that the impairment guide has been correctly applied and that the binding nature of the assessment is correctly characterised.
Section 16 medical treatment. The AI is helpful in drafting the reasonableness analysis. The human must weigh the treating practitioner evidence against any IME and apply the legal test for reasonable treatment.
Section 19 incapacity. The AI is useful for the arithmetic and the structured calculation narrative. The human must verify the inputs, confirm the normal weekly earnings calculation, and apply the section 19 reduction logic correctly.
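The section 19 arithmetic can be cross-checked independently of any AI draft. The sketch below is a toy sanity bound only — it deliberately omits the statutory step-down percentages, the 45-week rule, and other adjustments, so it can flag an implausible figure but cannot produce the correct one:

```python
def weekly_compensation_bound(normal_weekly_earnings: float,
                              amount_able_to_earn: float) -> float:
    """Toy cross-check: compensation cannot be negative or exceed NWE.

    This is NOT the statutory section 19 formula -- it omits the step-down
    percentages and other adjustments. Use it only as a sanity bound on an
    AI-drafted calculation narrative, never as the calculation itself.
    """
    comp = normal_weekly_earnings - amount_able_to_earn
    return max(0.0, min(comp, normal_weekly_earnings))

print(weekly_compensation_bound(1500.00, 400.00))  # 1100.0
```

If an AI draft quotes a weekly amount outside this bound, something in the inputs or the narrative is wrong and the human review should slow down.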
Risks and guardrails
Three risks come up most often when scheme operators trial AI inside determinations.
Hallucinated authority. AI tools sometimes invent SRC Act sections or paraphrase them inaccurately. The control is a hard rule that any section reference in an AI draft must be checked against the actual Act before issuing.
Sliding into outcome advice. AI tools can drift from drafting analysis to recommending an outcome. The control is to keep the prompt narrow, to remove outcome language from prompts, and to treat any AI suggestion of a determination outcome as a flag to slow down, not speed up.
Privacy creep. Pasting claim documents into a tool to "let it summarise" is the highest privacy risk in the workflow. The control is to do summarisation only inside tools that have a Privacy Impact Assessment in place and to keep all summarisation work in de-identified form.
For practitioners
- Use AI for first draft structure, never for the legal conclusion
- Strip identifiers before any tool input every single time
- Map every AI claim to the underlying SRC Act test before issuing
- Document your review steps in the file note as evidence of human-in-the-loop (HITL) review
- Treat AI confidence language as a prompt for further human checks
For governance leads
- Establish an AI determinations register that records the tool, date, and reviewer for each use
- Audit a random five percent sample of AI assisted determinations monthly
- Confirm your Privacy Impact Assessment covers external model processing
- Mandate de-identified inputs as a control, not a guideline
- Brief your delegates on what AI can and cannot lawfully decide
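The five percent monthly audit is stronger when the sample is reproducible rather than ad hoc. A sketch, assuming the register reduces to a list of determination IDs; seeding the generator with the month string means the same sample can be re-drawn identically if the audit is re-run:

```python
import math
import random

def monthly_audit_sample(register_ids: list[str], month: str,
                         rate: float = 0.05) -> list[str]:
    """Draw a reproducible audit sample from the AI determinations register.

    Seeding with the month string means re-running the audit for the same
    month re-draws exactly the same files.
    """
    rng = random.Random(month)                        # per-month seed
    k = max(1, math.ceil(len(register_ids) * rate))   # at least one file
    return sorted(rng.sample(register_ids, k))

ids = [f"DET-{n:04d}" for n in range(1, 201)]  # a register of 200 entries
sample = monthly_audit_sample(ids, "2025-06")
print(len(sample))  # 10 -- five percent of 200
```

A deterministic seed also removes any suggestion that the sample was hand-picked after the fact.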
A worked example
A case manager is preparing a section 14 determination for [CLAIMANTNAME], claim [CLAIMNUMBER], in respect of a [CONDITION]. The treating practitioner [TREATINGPRACTITIONER] has provided a report. The case manager:
- De-identifies the report into a working copy with placeholders.
- Prompts the AI to produce a structured draft analysing causation under section 14, flagging any section 5A or section 5B considerations.
- Reviews the draft against the actual file, confirming each factual assertion is supported.
- Applies the legal test, accepts the structure, edits the reasoning where the AI has overstated, and signs the determination as the delegated decision maker.
- Adds a file note recording the use of AI, the de-identification step, and the human review.
The AI saved drafting time. The case manager owns the decision.
The prompt structure that holds up
A defensible prompt for AI assisted SRC Act drafting follows a consistent pattern. It names the section that applies, supplies the de-identified facts, names the legal test, and asks for a structured draft rather than a recommendation. The prompt does not ask the AI for an outcome. It does not invite the AI to weigh evidence. It does not ask the AI whether liability should be accepted.
The five-part prompt template in widest use across scheme operators looks like this:
- Section heading naming the SRC Act provision.
- Statement of the de-identified facts in chronological order.
- Statement of the legal test the case manager intends to apply.
- Request for a structured draft setting out the test and applying it to the facts.
- Closing instruction reminding the AI not to recommend an outcome and to flag any factual gaps.
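Pinning the template down as a function means every prompt has the same shape and the closing guardrail instruction is never omitted. The wording below is illustrative, not a scheme-endorsed template:

```python
def build_prompt(section: str, facts: list[str], legal_test: str) -> str:
    """Assemble a five-part drafting prompt: provision, de-identified facts,
    legal test, structured-draft request, and a closing no-outcome guardrail."""
    parts = [
        f"SRC Act provision: {section}",
        "De-identified facts, in chronological order:",
        *[f"- {fact}" for fact in facts],
        f"Legal test the case manager intends to apply: {legal_test}",
        "Task: produce a structured draft that sets out the test and "
        "applies it to the facts above.",
        "Do not recommend an outcome. Do not weigh evidence. "
        "Flag any factual gaps you identify.",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    section="Section 14",
    facts=["[CLAIMANTNAME] reported a [CONDITION] on [DATE].",
           "[TREATINGPRACTITIONER] provided a report on [DATE]."],
    legal_test="Injury arising out of or in the course of employment",
)
print(prompt)
```

Because the guardrail sentence is baked into the function rather than retyped per claim, drift toward outcome language becomes a code change, not a habit.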
Outputs from this kind of prompt are noticeably easier to review and noticeably less prone to drift. The case manager edits the draft, applies the legal test personally, and proceeds to issue the determination.
File note conventions that hold up
A short, well-structured file note is the primary defence of any AI assisted determination. Most scheme operators are converging on a four-line standard.
Line one. AI tool used. Names the specific tool and version, where versioning is available. This matters for audit traceability and for any post hoc review of how the tool performed at the time.
Line two. Purpose. What the AI was used for. Drafting structure. Statement summarisation. Calculation cross-check. Be specific about the narrow task; vague descriptions of "AI assistance" do not survive scrutiny.
Line three. De-identification confirmed. A simple statement that inputs were de-identified before use. The case manager is signing off that the privacy step was taken.
Line four. Human review. A statement that the case manager reviewed the AI output, edited it as required, and that the determination text issued reflects the case manager's reasoning. This is the line that closes the reasoning trail.
Four lines. One paragraph. Defensible at audit. The discipline of writing it routinely, on every AI assisted determination, is a small operational cost with a large governance return.
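The four-line standard is mechanical enough to template, which keeps the discipline cheap. A sketch — the field names and the tool name are illustrative, not a mandated format:

```python
from dataclasses import dataclass

@dataclass
class AIFileNote:
    """Four-line file note for an AI assisted determination."""
    tool: str           # specific tool and version, where available
    purpose: str        # the narrow task the AI was used for
    deidentified: bool  # privacy step confirmed by the case manager
    reviewer: str       # case manager who reviewed and edited the output

    def render(self) -> str:
        # The note cannot be produced at all unless the privacy step happened.
        assert self.deidentified, "inputs must be de-identified before use"
        return "\n".join([
            f"AI tool used: {self.tool}",
            f"Purpose: {self.purpose}",
            "De-identification confirmed: inputs de-identified before use.",
            f"Human review: {self.reviewer} reviewed and edited the AI output; "
            "the issued determination reflects the case manager's reasoning.",
        ])

note = AIFileNote(tool="DraftTool v2.1",        # hypothetical tool name
                  purpose="Drafting structure",
                  deidentified=True,
                  reviewer="J. Citizen")
print(note.render())
```

Making the de-identification flag a hard precondition mirrors the article's point: it is a control, not a guideline.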
When AI should not be used
Three categories of determination, in our reading, sit outside the comfortable zone for AI drafting. The categories are not absolute. They are the categories where the additional human cost of review tends to outweigh the drafting time savings.
Significant degree disease claims under section 5B. The legal characterisation of employment contributing "to a significant degree" is sufficiently fact-specific and sufficiently nuanced that AI drafting often produces text that needs to be substantially rewritten. The time saved is small. The risk of a subtle drift in the legal test is real.
Credibility-driven determinations. Where the case manager's reasoning depends meaningfully on a credibility assessment of conflicting accounts, AI drafting tends to flatten the nuance. The case manager is better off drafting the credibility analysis from scratch.
Determinations involving the interaction of multiple Act provisions. Where a single determination has to navigate, for example, sections 14, 16, 19, and 24 together, the AI drafts often miss the interactions. The drafting saving is overtaken by the time cost of correcting interaction errors.
In each of these categories, AI is still useful for narrower sub-tasks (statement summarisation, evidence indexing, calculation cross-checks). It is just not useful for the substantive drafting.
What changes after the first month
Teams that adopt the framework usually see four things shift in the first month.
First, drafting time on routine determinations falls. Section 14 acceptance determinations on uncomplicated facts can move from forty-five minutes to twenty. The savings are real, even with the additional review steps the framework requires.
Second, file notes become more uniform. The four-line standard, applied routinely, produces file notes that are easier to audit and easier to defend. The variability that used to come from individual case manager habits drops.
Third, edge cases become more visible. The framework forces the case manager to engage with the legal test explicitly. Cases that fall in awkward parts of the test surface earlier, which improves both quality and timeliness.
Fourth, the team's confidence with AI use goes up. Once the framework is embedded, case managers stop worrying about whether the use is appropriate and start using the tool with the discipline that the framework prescribes. The anxiety drops, the productivity rises.
These four shifts are observable in scheme operators that have adopted similar frameworks. They are not hypothetical. They are what the framework delivers when it is applied consistently.
The bottom line
AI is faster than human drafting. The SRC Act is unmoved by speed. The framework is simple. Use AI inside step three. Keep the human at the centre of every other step. Document the workflow so that any reviewer can see where the human judgement lived.
The technology is ready. The legal architecture has not changed.
---
Content disclaimer: This article is for general educational purposes only and does not constitute legal advice, liability determination guidance, or a substitute for professional judgement. Workers compensation decisions must be made by appropriately qualified and authorised persons under the Safety, Rehabilitation and Compensation Act 1988. All AI outputs described in this article require human review before use in any claims management context.
TheAICommand. Intelligence, At Your Command.
