TheAICommand Brief

Issue 10: AI in HR practice

Audience: HR
Published 6 May 2026

1. The month in AI

OAIC issues guidance on AI in employee data handling

The Office of the Australian Information Commissioner published guidance reminding employers that the Privacy Act applies to AI processing of employee personal information, with a specific call out for hiring tools.

Source: oaic.gov.au

Fair Work Commission flags AI-related unfair dismissal claims trend

The Commission noted a small but rising number of unfair dismissal matters involving AI-assisted performance management decisions, signalling closer scrutiny on procedural fairness.

Source: fwc.gov.au

DEWR releases workforce AI literacy framework

The Department of Employment and Workplace Relations published a workforce AI literacy framework to support adult learning and reskilling, focused on practical job task application.

Source: dewr.gov.au

2. Four actions HR teams can take this week

This issue's focus is HR. The four actions assume you operate inside an Australian employer of more than 100 people, with a mix of HR generalists, specialists, and people leaders. Each action is achievable in a working week and produces a record you can show your people committee.

One. Audit one hiring step where AI is or could be in play: resume screening, interview scheduling, or interview scoring. Document the data flow and the human review point.

Two. Draft one AI use clause for your standard employment contract or policy suite. Cover acceptable use, prohibited use, and data classification rules.

Three. Build a one-page AI literacy primer for your people leader cohort. What AI can do well, what it cannot, and what to escalate to HR.

Four. Talk to one employee who quietly uses AI today. Five questions. The fastest way to learn what your policy needs to address is to ask the people already doing it.

Why these four: each action targets a specific HR exposure (hiring fairness, contract clarity, leader capability, and ground truth on actual use). Done together, they produce a credible early-stage HR AI position inside one month.

3. Why HR sets the speed limit on enterprise AI adoption

Most enterprise AI conversations start with security and procurement. They land with HR. The reasons are practical. Acceptable-use policies sit in the people handbook. Performance frameworks decide whether AI use shows up as productivity or risk. Hiring is where AI tools generate the most direct privacy and discrimination exposure.

Three patterns are emerging in mid-sized Australian employers. First, the AI policy is downstream of the IT acceptable use policy. That means the document arrives late and reads like a security memo. The fix is to put HR in the drafting room from day one.

Second, performance frameworks are silent on AI use. Employees who use AI to draft a customer email faster get the same review as those who do not. That is the right answer if AI is a tool. It is the wrong answer if your framework rewards effort over output.

Third, hiring tools that use AI for resume screening or interview scoring need the same scrutiny as any other selection tool. The OAIC guidance and the Fair Work Commission's signal both point in one direction: disclosure, audit trail, and human-in-the-loop review for any decision that affects employment status.

The practitioner-grade question is no longer whether HR allows AI. It is whether your handbook, performance framework, and hiring stack make AI use visible enough that you can defend a decision later. Employers that build that visibility this quarter will move faster on adoption. Those that do not will hit a wall the first time an unfair dismissal claim or privacy complaint puts their AI use under examination.

4. Prompt of the week

This prompt produces a draft AI use policy clause set, plus a manager talking-points sheet. Paste your existing acceptable use policy text and the top three roles in your business. The model returns three policy clauses, three role-specific guidance notes, and a list of escalation triggers.

You are an HR policy advisor supporting an Australian employer of more than 100 people. You support a Head of People preparing an AI use policy update.

Inputs I will provide:
- Existing acceptable use policy text.
- Top three job families in the business and a one-line description.
- Known AI tools approved for use, if any.
- Industry sector and any specific privacy obligations beyond the Privacy Act 1988.

Produce:
1. Three policy clauses suitable for inclusion in the people handbook, covering acceptable use, prohibited use, and data classification.
2. Three role-specific guidance notes, each no longer than 100 words, written in plain English for the relevant manager.
3. A list of five escalation triggers that require HR review before AI is used, with a one-sentence reason for each.

Do not invent obligations the inputs do not mention. Where evidence is insufficient, flag the gap and suggest a primary source for the user to consult. Treat any reference to specific employee personal information as a sign that the input was not properly redacted and stop.

How to use it. Paste your existing acceptable use policy. Replace any specific employee references with role descriptors before pasting. Run the prompt. Send the policy clauses to your legal team for sign-off and the role-specific guidance to the relevant managers.
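
The redaction step above can be given a crude automated first pass. A sketch, assuming regex patterns of my own devising; a pattern sweep like this catches obvious identifiers but is no substitute for reading the text before you paste it:

```python
import re

# Illustrative patterns only. The Title Case rule will also catch phrases
# like "Privacy Act", so always review the output by hand.
REDACTIONS = [
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[ROLE DESCRIPTOR]"),   # likely personal names
    (re.compile(r"\b\S+@\S+\.\S+\b"), "[EMAIL REDACTED]"),               # email addresses
    (re.compile(r"\b(?:employee|staff)\s*(?:id|no\.?)\s*\d+\b", re.I), "[ID REDACTED]"),
]

def redact(text: str) -> str:
    """Replace likely employee identifiers before pasting policy text into a model."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(redact("see employee id 4821"))  # prints: see [ID REDACTED]
```

Treat the sweep as a backstop for the manual redaction, not a replacement for it.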

What to watch for. Models will draft clauses that sound legally tight but may not reflect your specific industrial instrument or enterprise agreement. Treat every clause as a starting draft. The legal review step is where the industrial weight sits. Privacy obligations beyond the Privacy Act may also apply in your sector.


General information and education only. Not legal, compliance, financial, or professional advice.

TheAICommand. Intelligence, At Your Command.