The independent review of the Safety, Rehabilitation and Compensation Act 1988 reported on 25 September 2025 and was released publicly in December 2025. The panel was chaired by Ms Justine Ross, with Professor Robin Creyke AO and Mr Greg Isolani as members. The report runs to several hundred pages and makes 124 recommendations across the scheme. It is the most comprehensive examination of the SRC Act since the scheme began in 1988. Sources: Comcare, Report of the SRC Act Review; DEWR, Getting the best outcomes for injured and ill workers; Ministers' media centre, Independent review of Comcare legislation released.
If you are an employer in the Comcare scheme, the review does two things to you at once. It produces a long list of potential statutory changes that will be put to government over the next twelve to twenty-four months. And it opens a consultation period in which your scheme manager, your legal team, and your government relations team need to form a coherent organisational position before submissions close.
Several hundred pages of legislative review, distilled into a defensible organisational position, in a short consultation window. This is exactly the kind of work AI can help with, if you are deliberate about how you do it.
This article is the practitioner workflow. Set up a project space. Run four prompt patterns. Pass two human review gates. End up with a defensible internal brief and the start of a submission you can hand to legal.
What the review actually does
The review is not legislation. It is a recommendation set. The Australian Government has indicated it will "carefully consider" the recommendations, with reform progressed through a normal legislative cycle. That means three things matter for employers in the meantime.
First. The review's recommendations land before the legislation does. An employer who waits for the bill is six to twelve months too late. Submissions, consultation forums, and stakeholder input happen on the recommendation set, not on the bill.
Second. The review reframes how the scheme is debated. The panel found the Comcare scheme is "increasingly out of step with the realities of contemporary work" and that holistic reform has not happened in nearly forty years. That framing colours the consultation. Employers arguing for incremental change are arguing against the panel's framing. Knowing the framing is the first step to engaging with it.
Third. The 124 recommendations are not equally weighted for your organisation. Some change premium calculation, others change rehabilitation timing, others change reconsideration mechanics under sections 60-62 of the SRC Act, and a few touch the basic compensability thresholds in section 14 of the SRC Act and the reasonable-administrative-action carve-out in section 5A of the SRC Act. Your job is to identify the dozen that materially affect you and form a position on each. AI helps you find them.
A note on what is publicly known. Specific recommendations covered in early reporting include imposing a legal duty on employers to intervene as soon as possible after a workplace incident, requiring Comcare to provide early payments and supports while a determination is pending, and giving workers a legal right to choose their own treatment team with penalties for interference. Source: Canberra Times, Mandatory changes proposed for Comcare scheme. The full set requires reading the report.
Set up the project space first
Before any prompt runs, set up the project space. The setup follows the same pattern as a project space for injury determinations, with three differences.
The system prompt names the task. "You are an AI analytical assistant supporting a Comcare-scheme employer assessing the December 2025 SRC Act Review. You produce drafts and analyses for review by senior managers and external legal counsel. You do not state organisational positions as settled."
The reference pack is bigger. The SRC Act 1988, current text. The SRC Act Review report, December 2025 release. Comcare guidance current at the time of the review. Your organisation's existing internal positions on the Act, redacted of commercially sensitive elements where needed. Anchor every reference to a public source where one exists. The review document is publicly available. The Act is publicly available. Use the Federal Register of Legislation for the Act.
The guardrail file expands. The model must not state organisational positions as settled. The model must not produce a submission ready to be sent. The model must not invent recommendations the report does not contain. The model must flag any analysis that depends on a recommendation it cannot find a citation for. Every output is conditional on senior management and legal review.
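The three files can be kept as plain text and assembled into a single system message at run time. A minimal sketch, assuming a chat interface that accepts one system message; the file contents below simply restate the rules from this section, and every name is illustrative:

```python
# Sketch only: the three project-space files as Python constants,
# assembled into one system message. Names and rule text are illustrative.

SYSTEM_PROMPT = (
    "You are an AI analytical assistant supporting a Comcare-scheme employer "
    "assessing the December 2025 SRC Act Review. You produce drafts and "
    "analyses for review by senior managers and external legal counsel."
)

# Reference pack: public sources anchored first, internal documents flagged.
REFERENCE_PACK = [
    {"title": "SRC Act 1988, current compilation", "public": True},
    {"title": "SRC Act Review report, December 2025 release", "public": True},
    {"title": "Internal positions on the Act, redacted", "public": False},
]

# Guardrail file: rules the model must follow in every output.
GUARDRAILS = [
    "Do not state organisational positions as settled.",
    "Do not produce a submission ready to be sent.",
    "Do not invent recommendations the report does not contain.",
    "Flag any analysis that depends on a recommendation without a citation.",
]

def build_system_message() -> str:
    """Combine system prompt, reference pack, and guardrails into one message."""
    refs = "\n".join(
        f"- {d['title']}" + ("" if d["public"] else " [internal, redacted]")
        for d in REFERENCE_PACK
    )
    rules = "\n".join(f"- {r}" for r in GUARDRAILS)
    return f"{SYSTEM_PROMPT}\n\nReference pack:\n{refs}\n\nGuardrails:\n{rules}"
```

Keeping the guardrails in a separate structure means they can be versioned and reviewed independently of the prompts that use them.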
Once those three are in place, the four prompt patterns become useful.
Prompt pattern 1. The recommendation triage
The first pattern produces a triaged map of the 124 recommendations against your organisation. It is the highest-leverage prompt of the four. It runs once, early, and shapes everything that follows.
The output of this prompt is a working document, not a final triage. The high-impact bucket should have ten to fifteen recommendations. If it has more than twenty-five, the prompt has not weighted hard enough. If it has fewer than five, the operational profile in the prompt was too generic.
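Those thresholds are easy to check mechanically before the bucket goes to gate one. A minimal sketch, assuming the triage output has been reduced to a list of recommendation numbers; the function name and bucket contents are illustrative:

```python
# Sketch only: a sanity check on the high-impact bucket, using the
# thresholds described above. The example bucket is illustrative.

def check_triage(high_impact: list[int], total: int = 124) -> list[str]:
    """Return warnings if the high-impact bucket looks mis-weighted."""
    warnings = []
    if len(high_impact) > 25:
        warnings.append("Bucket too large: the prompt has not weighted hard enough.")
    if len(high_impact) < 5:
        warnings.append("Bucket too small: the operational profile was too generic.")
    out_of_range = [n for n in high_impact if not 1 <= n <= total]
    if out_of_range:
        warnings.append(f"Numbers outside 1-{total}: {out_of_range}")
    return warnings

# A bucket of twelve in-range recommendations passes cleanly.
print(check_triage([3, 9, 14, 21, 27, 40, 55, 61, 62, 88, 101, 119]))  # → []
```

The out-of-range check doubles as a cheap invented-recommendation detector: a number above 124 cannot exist in the report.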
Prompt pattern 2. The section-by-section comparison
The second pattern produces a comparison of recommendation language against current SRC Act text, for the recommendations triaged as high-impact. This is the legal heavy-lift, made tractable.
The output of this prompt becomes the spine of an internal briefing pack for senior management and legal. It is what you would have asked a junior policy lawyer to draft a decade ago. The model now drafts it in hours instead of weeks. The legal review still happens.
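A fixed record shape for each comparison row makes the briefing pack consistent and makes gate two auditable. A minimal sketch; the field names are assumptions, not a format the review or Comcare prescribes:

```python
# Sketch only: one row of the section-by-section comparison, with a flag
# that is flipped only at gate two. All field names are illustrative.

from dataclasses import dataclass

@dataclass
class ComparisonRow:
    recommendation_no: int      # number in the report, 1-124
    recommendation_text: str    # quoted verbatim from the report
    current_section: str        # e.g. "s 14" of the SRC Act
    current_text: str           # quoted verbatim from the Act
    change_described: str       # AI-drafted plain-language description
    legal_reviewed: bool = False  # set True only after gate two

def briefing_ready(rows: list[ComparisonRow]) -> bool:
    """The pack goes to senior management only once every row is legally reviewed."""
    return bool(rows) and all(r.legal_reviewed for r in rows)
```

Keeping the verbatim quotes in the row, rather than paraphrases, is what lets the lawyer at gate two confirm the recommendation and the section read as stated.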
Prompt pattern 3. The transitional and operational impact map
The third pattern asks what changes for current claims if a recommendation becomes law. This is the operational reading.
The output of this prompt is a planning document. It tells you what your operations team needs to start preparing now, before the bill lands, so you are not running implementation under deadline pressure.
Prompt pattern 4. The submission scaffold
The fourth pattern produces the scaffold of a submission for your organisation to make to government during consultation. This prompt runs late in the workflow, after the first three have been reviewed and your senior management has agreed organisational positions on the high-impact recommendations.
The output is a scaffold, not a submission. Your government relations team and your external legal counsel finish the submission. The scaffold gives them a head start of weeks.
The two human review gates
The workflow has two non-negotiable human review gates.
Gate one. After the recommendation triage in prompt pattern 1, before any further analysis runs. Senior management reviews the high-impact bucket. Add or remove. Disagreements at this gate are usually about strategic intent rather than statutory interpretation. The triage shifts. The downstream work follows.
Gate two. After the section-by-section comparison in prompt pattern 2, before the transitional and submission patterns run. External legal counsel or senior in-house counsel reviews the statutory comparison. The legal review is the gate that turns AI-drafted analysis into legally defensible analysis. The model can produce a workmanlike comparison. The lawyer is the one who confirms the recommendation reads as stated, the section reads as stated, and the change is described accurately.
Skipping either gate is the most common failure mode. The cost of running them is the legal and senior management time. The cost of skipping them is a submission that argues against a recommendation that does not exist, or for a position that conflicts with the actual statutory text. Both happen, every consultation cycle, with un-gated AI drafting.
What never to do, on a review of this scale
Three patterns to avoid.
Do not paste internal position papers, settlement strategy memoranda, or Cabinet-in-confidence material into a project space hosted on a shared endpoint without confirming the data classification rules for that endpoint and your scheme's controls for that classification.
Do not let the AI draft the submission. The AI drafts the scaffold. Lawyers and policy advisers draft the submission. The difference is not pedantic. A regulator reading a submission can usually tell when the position came from the model rather than the organisation.
Do not assume the model has read the report. Confirm. The fastest way to confirm is the citation rule. Every analysis output names the recommendation by number and quotes the relevant text. If the citation is missing, the analysis cannot be relied on. If the citation is invented, the analysis is wrong.
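The citation rule can be enforced mechanically before any output is read. A minimal sketch, assuming analysis outputs cite recommendations as "Recommendation 17" and quote with double quotation marks; both conventions are assumptions, and the regex would need adjusting to whatever citation format your prompts actually require:

```python
# Sketch only: a mechanical check of the citation rule on one analysis
# output. The citation and quoting conventions are illustrative.

import re

def check_citations(analysis: str, total: int = 124) -> list[str]:
    """Flag missing or invented recommendation citations."""
    problems = []
    numbers = [int(n) for n in re.findall(r"Recommendation\s+(\d+)", analysis)]
    if not numbers:
        problems.append("No recommendation cited: analysis cannot be relied on.")
    for n in numbers:
        if not 1 <= n <= total:
            problems.append(f"Recommendation {n} does not exist: analysis is wrong.")
    if '"' not in analysis:
        problems.append("No quoted text: verify against the report before use.")
    return problems
```

A check like this does not replace the human gates. It only guarantees the output is checkable; the lawyer still confirms the quote matches the report.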
A fourth, less obvious. Do not run the workflow once and stop. The consultation cycle is iterative. New positions emerge. New evidence comes in. The triage from week one is not the triage from week six. Re-run the prompts at each major decision point.
The internal capability you build
A workflow like this builds internal capability over time. Three concrete artefacts come out of running it well.
First. A project space your team can re-use for the next review. The next major scheme review is not far away. The system prompt, the reference pack pattern, and the four prompt patterns transfer.
Second. A briefing pack template your senior management understands. The triage, comparison, and transitional pattern produce documents your executive team will recognise. Future legislative reviews can reuse the same pack structure, which means executive review is faster.
Third. An evidence base for your submissions library. Your government relations team can refer back to past submissions, the internal positions that supported them, and the reasoning trails the AI produced. The reasoning trails are useful when a similar question lands again.
The condensed workflow
Set up the project space with three files. Tight system prompt. Reference pack including the SRC Act 1988, the SRC Act Review, and your operational profile. Guardrail file naming what the model must not do.
Run prompt pattern 1 to triage the 124 recommendations. Senior management reviews. Adjust the bucket.
Run prompt pattern 2 to compare the high-impact recommendations against the current Act, section by section. External or in-house legal counsel reviews. Adjust the comparison.
Run prompt pattern 3 to map operational and transitional impact. Operations team reviews. Build the implementation backlog.
Run prompt pattern 4 to scaffold the submission. Government relations and legal finish the submission.
Run the workflow again a month later, against the same project space, when new evidence has come in.
The 124 recommendations of a major statutory review used to take a senior policy team three months to triage and another three to draft a submission against. AI does not change the legal review. It changes the time to insight. Six months becomes six weeks. The cost of doing it badly is high. The cost of not doing it at all, while the consultation runs, is higher.
General information and education only. Not legal, compliance, financial, or professional advice. Submissions to government on the SRC Act Review should be reviewed by qualified legal counsel before lodgement.
TheAICommand. Intelligence, At Your Command.
