TheAICommand Learning Library
Module LM-G02
Privacy Act 1988 (Cth) and the Australian Privacy Principles
A practitioner module for Australian financial services. Privacy Act amendments (2024), the new statutory tort of serious invasion of privacy (2025), Automated Decision Making transparency (2026), and operating the framework with AI.
Module Metadata
Learning outcomes
By the end of this module a learner will be able to:
- Explain (Bloom: Understand) the structure of the Privacy Act 1988 (Cth), the 13 Australian Privacy Principles, and the role of the OAIC.
- Apply (Bloom: Apply) the 2024 amendment package, including the statutory tort and the Automated Decision Making transparency obligations, to common Australian FS use cases.
- Analyse (Bloom: Analyse) a Notifiable Data Breach scenario and produce a defensible eligible data breach assessment within the 30-day statutory window.
- Evaluate (Bloom: Evaluate) AI tooling against APP 1, APP 6, APP 11 and the new ADM transparency obligations and select an appropriate deployment pattern.
- Create (Bloom: Create) a Privacy Impact Assessment outline and an ADM disclosure statement using a governed AI workflow with mandatory human-in-the-loop checkpoints.
- Justify (Bloom: Evaluate) the interaction between the Privacy Act, APRA CPS 234, and CPS 230 in a regulated financial services environment.
1. Executive Summary
The Privacy Act 1988 (Cth) is the principal Commonwealth statute governing personal information handling in Australia. It binds APP entities, which in practice includes all Australian financial services providers, because financial services activities are carved out of the small business exemption. The 13 Australian Privacy Principles (APPs) set lifecycle obligations from collection through to disposal. The Office of the Australian Information Commissioner (OAIC) is the regulator and now operates with materially expanded enforcement powers under the Privacy and Other Legislation Amendment Act 2024.
This matters for Australian FS for three reasons. First, customer trust depends on demonstrably lawful and transparent personal information handling, particularly across credit, claims, and identity. Second, the regulatory perimeter has tightened: a statutory tort of serious invasion of privacy is live from 10 June 2025, the Automated Decision Making transparency obligations commence 11 December 2026, and OAIC now has a tiered civil penalty regime plus an infringement notice power. Third, AI adoption multiplies the surface area of every APP obligation: models, cloud inference, and decision systems all sit inside the privacy perimeter.
After this module you will be able to:
- Stand up a defensible privacy posture across the customer information lifecycle without invoking unnecessary legal escalation.
- Run an APP-mapped impact assessment for a new product, vendor, or AI capability and identify the privacy controls that must be in place before go-live.
- Operate a Notifiable Data Breach assessment within the statutory window and draft regulator and individual notifications.
- Build a Claude or ChatGPT project space that supports privacy-by-design drafting, mapping, and review with mandatory de-identification, governance logging, and human review.
- Translate the new ADM obligations into board-ready language and into the disclosure mechanisms that customers will see.
2. Regulatory and Strategic Context
Issuer and statutory authority
The Privacy Act 1988 (Cth) is administered by the OAIC, with the Privacy Commissioner leading privacy-specific functions including assessments, determinations, enforceable undertakings, infringement notices, and Federal Court proceedings. The Act sits inside a broader information policy stack including the Consumer Data Right under the Competition and Consumer Act 2010 (Cth), CPS 234, and the Identity Verification Services Act 2023 (Cth).
Scope of application in financial services
Most FS entities are captured regardless of turnover because section 6D of the Act excludes entities that provide a health service or hold credit reporting information from the small business exemption. Banks, insurers, superannuation trustees, AFSL and ACL holders, and brokers are all caught. Part IIIA and the Credit Reporting Code add specific obligations for credit providers and credit reporting bodies. APRA-regulated entities sit in an overlap zone where APP 11 is reinforced by CPS 234 and service provider arrangements by CPS 230.
Key dates and transitional periods
The Privacy and Other Legislation Amendment Act 2024 received Royal Assent on 10 December 2024. Immediate commencement provisions included the OAIC infringement notice power, the new tiered civil penalty regime, doxxing offences, and the APP 8 white-list mechanism for prescribed countries. The statutory tort of serious invasion of privacy commenced 10 June 2025 (Schedule 2). The Automated Decision Making transparency obligations commence 11 December 2026 (Schedule 1, amending APP 1) and require entities using computer programs to make or substantially make decisions that significantly affect individuals to update their privacy policies with prescribed disclosures. Treat 2025 to 2027 as an active reform window.
Tiered civil penalty regime (post-2024)
Three tiers now apply. Tier 1 (serious or repeated interference with privacy under s 13G): the maximum civil penalty for a body corporate is the greatest of $50m, 30 percent of adjusted turnover for the relevant period, or 3 times the benefit obtained. Tier 2 (interference with privacy that is not serious or repeated): a substantially lower maximum, expressed in civil penalty units. Tier 3 (specific administrative breaches): the lowest tier. OAIC may also issue infringement notices for specific contraventions without court proceedings.
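The "greatest of" test for the Tier 1 ceiling can be sketched as a simple calculation. This is illustrative only: it shows how the three statutory limbs compare for a hypothetical entity, and the figures are statutory maxima, not expected outcomes.

```python
def tier1_ceiling(adjusted_turnover: float, benefit_obtained: float) -> float:
    """Illustrative Tier 1 ceiling for a body corporate under s 13G:
    the greatest of $50m, 30 percent of adjusted turnover for the
    relevant period, or 3 times the benefit obtained."""
    return max(50_000_000, 0.30 * adjusted_turnover, 3 * benefit_obtained)

# Hypothetical entity: $400m adjusted turnover, $10m benefit obtained.
# The turnover limb (30% = $120m) exceeds both $50m and 3x benefit ($30m).
print(tier1_ceiling(400_000_000, 10_000_000))  # 120000000.0
```

For smaller entities the flat $50m limb usually dominates; the turnover and benefit limbs exist so the ceiling scales with the size of the contravener and the gain from the conduct.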
Interplay with adjacent frameworks
- APRA CPS 234 Information Security (LM-G07): operational implementation of APP 11. CPS 234 sets information security capability and incident notification timing (72 hours) that runs faster than the NDB clock.
- APRA CPS 230 Operational Risk Management (LM-G05): personal information processing in critical operations, material service provider obligations, business continuity, tolerance levels.
- AML/CTF Act 2006 (Cth) (LM-G03): KYC collection sits inside APP 3, tipping off provisions (s 123) constrain SMR discussion. AML data must still meet APP 11.
- Corporations Act 2001 (Cth) (LM-G01): directors' duties oversight of privacy risk, Part 9.4AAA whistleblower discloser protections.
- Consumer Data Right under the Competition and Consumer Act 2010 (Cth): separate Privacy Safeguards (PS1 to PS13) that mirror but are distinct from the APPs.
Visual 1: Regulatory authority map (Australian privacy stack)
Layered diagram with statutory authority flowing down and complaints flowing up.
3. Core Concepts and Defined Terms
Defined terms
- APP entity: an agency or organisation bound by the Australian Privacy Principles.
- Personal information: information or an opinion about an identified individual, or an individual who is reasonably identifiable (s 6).
- Sensitive information: a subset of personal information (including health and biometric information) attracting stricter collection and handling rules.
- Eligible data breach: unauthorised access to, disclosure of, or loss of personal information that a reasonable person would conclude is likely to result in serious harm (Part IIIC).
- Notifiable Data Breaches (NDB) scheme: the Part IIIC regime requiring assessment of suspected breaches and notification of eligible breaches to the OAIC and affected individuals.
- Automated Decision Making (ADM): use of a computer program to make, or substantially make, a decision that significantly affects an individual.
- Privacy Impact Assessment (PIA): a structured assessment of a project's personal information flows, privacy risks, and controls.
- OAIC: the Office of the Australian Information Commissioner, the Commonwealth privacy regulator.
The 13 Australian Privacy Principles in plain English
The APPs are organised around the personal information lifecycle. Practitioners should treat them as an ordered control framework rather than a checklist:
APPs 1 and 2 govern the entry point. APP 1 requires open and transparent management through a current privacy policy and named accountable officers. APP 2 grants the option to deal anonymously or pseudonymously where lawful and practicable; anonymity is rarely achievable in FS because of identity and AML obligations, but pseudonymity is more often available than entities assume.
APPs 3 to 5 govern collection. APP 3 limits collection of personal information to that reasonably necessary for, or directly related to, the entity's functions or activities, and restricts sensitive information collection to consent-based or specifically authorised circumstances. APP 4 deals with unsolicited personal information. APP 5 sets the notification obligations at the point of collection.
APP 6 governs use and disclosure. The default rule is primary purpose only. Secondary use is permitted where the individual would reasonably expect it and the secondary purpose is related (or directly related, for sensitive information). Exceptions cover consent, authorisation by law, enforcement, and serious threat. APP 7 limits direct marketing. APP 8 governs cross-border disclosure and requires reasonable steps to ensure overseas recipients meet APP-equivalent standards. The 2024 Act introduced a white-list mechanism for prescribed countries.
APP 9 restricts the adoption, use, and disclosure of government related identifiers (such as tax file numbers) as an entity's own identifier. APPs 10 and 11 govern data quality and security. APP 11 is the operational backbone for FS: take reasonable steps to protect information from misuse, interference, loss, unauthorised access, modification, and disclosure, and destroy or de-identify when no longer needed. CPS 234 operationalises APP 11 for APRA-regulated entities and is more prescriptive.
APPs 12 and 13 govern individual rights. APP 12 grants access (with limited exceptions). APP 13 grants correction rights and requires reasonable steps to notify other recipients where correction has occurred.
From 11 December 2026, new APP 1 sub-paragraphs (Schedule 1, 2024 Act) require entities using computer programs to make decisions that significantly affect individuals to disclose that use, the kinds of personal information used, and the kinds of decisions made. This is transparency, not consent. Plan an APP 1 uplift through calendar 2026.
4. Practical Application in Australian FS
Four worked examples spanning ADI, insurance, superannuation, and AFSL settings. All identifiers are placeholders.
(a) ADI: Customer service AI assistant suggesting case actions
Trigger. [ADI Placeholder] deploys an AI assistant in retail banking that recommends next-best actions on hardship cases, drawing on transaction history and prior case notes.
Obligations. APP 1 (new APP 1 ADM disclosure from 11 December 2026), APP 3 (collection necessity), APP 6 (use limitation), APP 11 (security), CPS 234.
Artefact. PIA using the OAIC PIA structure, with ADM disclosure language ready for the privacy policy uplift. CPS 230 vendor risk assessment with residency and retention controls.
Audit trail. PIA signed off by Chief Privacy Officer and CISO, recorded in the privacy register. Model card maintained. ADM disclosure ledger. CPS 234 residual risk acceptance.
(b) General insurer: Claims triage with biometric voice analytics
Trigger. [Insurer Placeholder] pilots voice-print analytics to flag identity fraud on motor claims. Vendor is offshore and proposes training on aggregated samples.
Obligations. APP 3 (sensitive information consent for biometrics), APP 8 (cross-border), APP 11 (biometric template security), new APP 1 ADM disclosure from 11 December 2026, and statutory tort exposure if used beyond disclosed purpose.
Artefact. Express, specific, current, voluntary and informed consent design. APP 8 reasonable steps assessment. ADM disclosure language. PIA rejecting the broader 'training on aggregated samples' use as outside primary purpose.
Audit trail. Consent log, vendor due diligence file, APP-equivalent contract clauses, biometric template destruction schedule, complaint pathway prepared.
(c) Superannuation trustee: Member-facing chatbot summarising statements
Trigger. [Trustee Placeholder] deploys a member chatbot that summarises last-statement balance and insurance entitlements on natural language queries.
Obligations. APP 1, APP 3 (collection minimisation), APP 6 (aligned to APP 5 notification), APP 11 (authentication), trustee best interests duty under the Superannuation Industry (Supervision) Act 1993 (Cth), CPS 234.
Artefact. Authentication tied to portal identity. Output filter blocking disclosure of sensitive information outside the authenticated member. Transcript retention policy. Out-of-scope rejection pathway.
Audit trail. Transcript retention log, authentication failure register, hallucination rate evaluation, quarterly Trustee Risk Committee reporting.
(d) AFSL holder: Adviser productivity copilot drafting Statements of Advice
Trigger. [Licensee Placeholder] rolls out a copilot that drafts SOAs from adviser fact-find data and notes.
Obligations. APP 6 (use limitation), APP 8 (if offshore), APP 11 (tenancy and security), best interests duty under the Corporations Act 2001 (Cth), new APP 1 ADM disclosure from 11 December 2026.
Artefact. System prompt prohibiting client direct identifiers. Fact-find redaction routine producing placeholder inputs. SOA reviewer checklist. Privacy policy update describing the copilot's role.
Audit trail. Redaction logs, version-controlled prompts, SOA reviewer sign-off log, model risk register entry against the Risk Appetite Statement.
Visual 2: NDB notification timeline (process diagram)
Visual 3: Comparative obligation table (Privacy Act vs CPS 234 vs CDR Privacy Safeguards)
Visual 4: Privacy Act operating model RACI
R = Responsible, A = Accountable, C = Consulted, I = Informed.
Visual 5: Illustrative penalty exposure for serious or repeated interference with privacy
Stacked bar chart by turnover band showing the three Tier 1 statutory ceilings. Illustrative.
Note. Statutory maxima, not expected outcomes.
Visual 6: The 5 things to remember
5. Operating the Privacy Framework With AI
AI multiplies the surface area of every APP obligation, but when governed properly it also produces strong leverage for privacy operations.
Use cases at scale
Eight high-leverage use cases:
- Drafting and refreshing APP 1 privacy policies, including the new ADM disclosures.
- Mapping personal information flows across systems, vendors, and regions for PIAs and Records of Processing.
- Drafting Privacy Impact Assessments using OAIC PIA guidance as a knowledge source.
- First-pass NDB eligibility analysis from a de-identified incident summary, labelled DRAFT FOR PRIVACY OFFICER REVIEW.
- Drafting NDB customer notification language with placeholder fields for the regulator-required elements.
- ADM disclosure statements for the privacy policy and customer-facing FAQs.
- Board and Risk Committee updates on privacy KRIs, breach trends, and regulator developments.
- Regulator correspondence triage with classification, statutory mapping, and draft acknowledgement.
Project space setup
Set up two parallel workspaces, one in Claude Projects and one in ChatGPT Projects or a Custom GPT, sharing one knowledge base. The duplication is a continuity control.
Knowledge sources (de-identified, no live customer data):
- Privacy Act 1988 (Cth) consolidated text (latest)
- OAIC APP Guidelines (current edition)
- OAIC NDB Resource Hub materials
- OAIC PIA Guide
- OAIC De-identification Decision-Making Framework
- APRA CPS 234 (and the related Practice Guide CPG 234)
- APRA CPS 230 (and the related Practice Guide CPG 230)
- Your entity's privacy policy, privacy framework, breach response procedure, and privacy-by-design playbook (de-identified or marked CONFIDENTIAL with access controlled)
File structure:
- /01-statutes-and-rules/
- /02-regulator-guidance/
- /03-internal-templates/
- /04-prompt-library/
- /05-output-archive/ (with retention metadata)
System prompt scaffold (paste at project level, not per chat):
You are a privacy practitioner assistant for an Australian financial services entity. Apply the Privacy Act 1988 (Cth), the APPs, and current OAIC guidance. Never request or accept personal information; treat all inputs as de-identified and flag any apparent identifiers. Label every output DRAFT FOR PRIVACY OFFICER REVIEW. Distinguish statements of fact from inferences, flag uncertainty, and cite the relevant APP or statutory provision for each obligation you state.
Naming. [PIA-YYYYMM-Project-vN], [NDB-YYYYMMDD-Incident-vN], [ADM-YYYYMM-System-vN]. Append Privacy Officer initials and status flag (DRAFT, REVIEW, FINAL).
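The naming convention above can be enforced mechanically at save time. The sketch below is an assumption about how strictly the pattern is applied (the exact regex, the slug characters, and the two-to-three-letter initials rule are illustrative choices, not prescribed by the text):

```python
import re

# Illustrative validator for the artefact naming pattern:
# prefix + date stamp, slug, version, Privacy Officer initials, status flag.
NAME_RE = re.compile(
    r"^(PIA-\d{6}|ADM-\d{6}|NDB-\d{8})"   # PIA/ADM use YYYYMM, NDB uses YYYYMMDD
    r"-[A-Za-z0-9]+"                       # project / incident / system slug
    r"-v\d+"                               # version number
    r"-[A-Z]{2,3}"                         # Privacy Officer initials
    r"-(DRAFT|REVIEW|FINAL)$"              # status flag
)

def valid_artefact_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(valid_artefact_name("PIA-202603-ChatbotUplift-v2-JS-DRAFT"))  # True
print(valid_artefact_name("NDB-20260105-Phishing-v1-AB-FINAL"))     # True
print(valid_artefact_name("PIA-2026-Chatbot-v1-JS-DRAFT"))          # False
```

Rejecting malformed names at the point of filing keeps the output archive searchable and makes the status flag reliable for governance reporting.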
Prompt library (minimum 6 prompts)
Each prompt follows the Role / Context / Task / Constraints / Output Format / Quality Bar pattern.
Governance, audit, privacy, and risk appetite controls
De-identification. Inputs to any model not within an enterprise tenancy bound to APP-equivalent handling must be de-identified to OAIC standard. Remove direct identifiers, reasonably identifying combinations, and unnecessary sensitive information. When in doubt, redact and use placeholders.
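A minimal redaction sketch follows. It is not a substitute for the OAIC De-identification Decision-Making Framework: the patterns for tax file numbers, emails, and phone numbers are illustrative assumptions, and a production pipeline would add named-entity recognition, human review, and re-testing against the motivated intruder standard.

```python
import re

# Illustrative placeholder redaction for model inputs. Patterns are
# deliberately simple sketches, not exhaustive identifier detection.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[TFN]":   re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "[PHONE]": re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
}

def redact(text: str) -> str:
    """Replace apparent direct identifiers with labelled placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Customer 123 456 789 emailed jo.citizen@example.com re hardship."
print(redact(note))  # Customer [TFN] emailed [EMAIL] re hardship.
```

When in doubt, the rule in the text governs: redact and use placeholders rather than rely on pattern matching alone.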
Prohibited inputs. Customer PII, claimant data, KYC and identity data, biometrics, market sensitive data, sanctions data, unreviewed regulator drafts, raw incident data prior to privilege assessment, and Part 9.4AAA whistleblower information.
Human-in-the-loop. Every NDB output, regulator response, privacy policy update, PIA, and ADM disclosure must be reviewed by a named accountable officer before use. Model output is a draft; the accountable human is the decision-maker.
Retention and logging. Maintain prompt and output logs for at least two years, access controlled to Privacy Officer, internal audit, and model risk lead. Treat the log as APP 11 protected if it contains personal information.
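One way to implement the logging control is an append-only entry per prompt/output pair. The field names below are assumptions; the two-year retention period and the three access-control roles come from the control above. Hashing rather than storing raw text is one design choice for keeping the log itself out of APP 11 scope where inputs might contain residual personal information.

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=730)  # at least two years

def log_entry(workspace: str, artefact: str, prompt: str, output: str) -> dict:
    """Build one governance log record for a prompt/output pair."""
    now = datetime.now(timezone.utc)
    return {
        "workspace": workspace,
        "artefact": artefact,  # e.g. a PIA-YYYYMM-... artefact name
        "timestamp": now.isoformat(),
        "retain_until": (now + RETENTION).isoformat(),
        # Hashes allow tamper-evidence without holding the raw text.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "access": ["privacy-officer", "internal-audit", "model-risk-lead"],
    }

entry = log_entry("privacy-claude", "PIA-202601-Chatbot-v1", "draft PIA...", "...")
print(json.dumps(entry, indent=2))
```

If the log must retain raw prompts for review purposes, treat it as a personal information store and apply the same APP 11 controls as any other system.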
Model selection. Prefer enterprise tenancies (Claude Enterprise, ChatGPT Enterprise) over consumer products. For regulated information, prefer sovereign cloud or contractually equivalent protections. For sensitive information at scale or critical operations, on-premise or private cloud.
CPS 230 critical operations. If the AI tool is part of a critical operation (for example, AI-assisted credit decisioning), set tolerance levels, identify alternative processes, and ensure workflow survival under a model outage.
APP alignment. APP 1 (governance), APP 5 (notification), APP 6 (use limitation), APP 8 (cross-border), APP 11 (security), APP 12 (access). Treat the workspace itself as a system that processes personal information.
Quality assurance loop
Run every output through this five-step QA rubric before it leaves the workspace:
- Accuracy: Is every cited section, APP, or guideline correctly attributed?
- Currency: Does the output reflect the regulatory position as at the date of generation, including the 2024 amendments and the ADM commencement date?
- Privacy hygiene: Is the input fully de-identified? Are any placeholders properly labelled?
- Decision integrity: Are statements of fact distinguished from inferences? Is uncertainty flagged?
- Sign-off readiness: Is the output ready for review by the named accountable officer, with all assumptions explicit?
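The five-step rubric can operate as a hard release gate: an output leaves the workspace only when every check passes. The check names below mirror the rubric; treating them as a boolean gate rather than a score is an assumption about how strictly the loop is run.

```python
QA_CHECKS = ("accuracy", "currency", "privacy_hygiene",
             "decision_integrity", "sign_off_readiness")

def qa_gate(results: dict) -> tuple:
    """Return (release_ok, failed_checks) for one draft output.
    A missing check counts as a failure, not a pass."""
    failed = [check for check in QA_CHECKS if not results.get(check, False)]
    return (len(failed) == 0, failed)

ok, failed = qa_gate({"accuracy": True, "currency": True,
                      "privacy_hygiene": False,
                      "decision_integrity": True,
                      "sign_off_readiness": True})
print(ok, failed)  # False ['privacy_hygiene']
```

Recording the failed-check list per artefact also gives the Privacy Officer a simple KRI: which rubric step most often blocks release.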
Red team prompt to stress-test your own draft:
Act as a sceptical OAIC reviewer. Identify every claim in this draft that lacks a statutory or guideline anchor, every point where de-identification may have failed, and every assumption presented as fact. List the three weakest points and explain how a regulator would challenge them.
Scaling pattern
Maintain the prompt library in version control with change logs. Quarterly model evaluation cadence with documented test cases. KRIs for prompt completion volumes, NDB time-to-decision, ADM disclosure currency, and PIA backlog. Treat material prompt library changes as CPS 230 changes.
6. Common Pitfalls and Watch-outs
Treating de-identification as binary. Assess de-identification against the OAIC framework, including the motivated intruder test. Re-test if the data, audience, or technology changes.
Using consumer AI for regulated tasks. Move regulated workflows to enterprise tenancies with contractual APP-equivalent terms. Block consumer endpoints at the network layer for staff handling regulated data.
Confusing the NDB 30-day clock with CPS 234 72-hour notification. Map both clocks at the start of every incident response. The faster obligation governs. Document each separately.
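Mapping both clocks at the start of incident response can be as simple as computing the two deadlines from the moment the entity became aware. A minimal sketch (calendar arithmetic only; the statutory trigger points and any extensions still need legal assessment):

```python
from datetime import datetime, timedelta

def incident_clocks(became_aware: datetime) -> dict:
    """Both deadlines run in parallel from awareness of the incident;
    the faster obligation governs early response."""
    return {
        "cps234_apra_notify_by": became_aware + timedelta(hours=72),
        "ndb_assessment_due_by": became_aware + timedelta(days=30),
    }

clocks = incident_clocks(datetime(2026, 3, 2, 9, 0))
print(clocks["cps234_apra_notify_by"])  # 2026-03-05 09:00:00
print(clocks["ndb_assessment_due_by"])  # 2026-04-01 09:00:00
```

Documenting each deadline separately, as the watch-out requires, also means a late CPS 234 notification cannot be excused by a still-running NDB assessment.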
Assuming legitimate interest is an APP 6 exception. There is no general legitimate interest carve-out under the APPs. Use APP 6.2(a) reasonable expectation, consent, or specific legal authorisation. Do not import GDPR concepts uncritically.
Treating ADM transparency as a 2026 problem. Privacy policies need uplift through 2026. Inventory ADM use now, classify by significance, and plan disclosure language. Late preparation invites enforcement attention.
Overlooking APP 8 'reasonable steps' for offshore vendors. Reasonable steps must be proportionate to sensitivity. For sensitive information offshore, expect contractual APP-equivalent terms, audit rights, and data residency commitments.
Confusing the statutory tort with regulator action. The tort is private civil action. Regulator enforcement is separate. Both can run in parallel for the same conduct.
Failing to log AI-generated outputs. Maintain prompt and output logs for two years. Treat the log as APP 11 protected. Apply access controls.
7. Decision Frameworks and Tools
Decision tree: Is this an eligible data breach?
- Has personal information been the subject of unauthorised access, disclosure, or loss? If no, NDB does not apply. If yes, continue.
- Is the entity an APP entity? If no, the NDB scheme does not apply. If yes, continue.
- Is a reasonable person likely to conclude that the access, disclosure, or loss is likely to result in serious harm to one or more individuals? If yes, eligible data breach.
- Have remediation steps already removed the risk of serious harm? If yes, the breach is not eligible. Document the assessment.
- If still unclear, run the s 26WH assessment within 30 days. Document each step.
- If APRA-regulated, run the parallel CPS 234 materiality assessment with the 72-hour notification clock.
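The decision tree above can be encoded as a first-pass triage helper. The question sequence tracks the tree; the outcome strings are labels for a Privacy Officer's worklist, not a legal determination, and `None` is used here as an assumed encoding for "serious harm still unclear".

```python
from typing import Optional

def ndb_triage(pi_compromised: bool, app_entity: bool,
               serious_harm_likely: Optional[bool],
               harm_remediated: bool) -> str:
    # Step 1: was personal information accessed, disclosed, or lost?
    if not pi_compromised:
        return "NDB does not apply"
    # Step 2: is the entity an APP entity?
    if not app_entity:
        return "NDB scheme does not apply"
    # Step 3: would a reasonable person conclude serious harm is likely?
    if serious_harm_likely is True:
        # Step 4: has remediation already removed the risk of serious harm?
        if harm_remediated:
            return "Not eligible - remediation removed the risk; document it"
        return "Eligible data breach - notify OAIC and affected individuals"
    if serious_harm_likely is None:
        return "Unclear - run the s 26WH assessment within 30 days"
    return "Not eligible - document the assessment"

print(ndb_triage(True, True, None, False))
# Unclear - run the s 26WH assessment within 30 days
```

For APRA-regulated entities, the parallel CPS 234 materiality assessment and its 72-hour clock would sit alongside this triage, not inside it.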
Maturity ladder: Privacy operating model
- Level 1 - Reactive: Incident-driven privacy work. Privacy policy outdated. No PIAs. No model inventory.
- Level 2 - Documented: Privacy policy current. PIAs run on major projects. NDB procedure tested annually. AI governance in place but not enforced.
- Level 3 - Embedded: Privacy-by-design tollgates in change management. PIAs on every product, vendor, and AI deployment. Privacy KRIs reported quarterly. Mandatory privacy training. AI workspace governed.
- Level 4 - Optimised: APP 1 disclosures live for ADM. NDB drills quarterly. Privacy posture reviewed against peers and global standards. AI workspace producing measurable productivity uplift with documented controls.
- Level 5 - Anticipatory: Contributes to industry standard-setting. Anticipates regulatory reform. Designs for the next privacy reform tranche before it commences.
Self-check questionnaire (rate 1 to 5)
- Our privacy policy reflects current APPs and our actual data handling, including ADM use.
- We can produce a current personal information register on demand within five working days.
- We have run an NDB drill in the last 12 months.
- We have an AI workspace governance standard with mandatory de-identification and human review.
- We have inventoried ADM use cases that significantly affect individuals.
- We have rehearsed the parallel NDB and CPS 234 notification timing.
- Our APP 8 cross-border arrangements are documented, current, and proportionate to data sensitivity.
Score 30 to 35: Embedded or above. Score 20 to 29: Documented. Score below 20: Reactive. Use the gap to set a 12-month uplift plan.
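The scoring bands above can be computed directly from the seven ratings (each 1 to 5, maximum 35). The thresholds come from the text; the band names map to the maturity ladder.

```python
def maturity_band(ratings: list) -> str:
    """Map seven 1-to-5 self-check ratings to a maturity band."""
    assert len(ratings) == 7 and all(1 <= r <= 5 for r in ratings)
    score = sum(ratings)
    if score >= 30:
        return "Embedded or above"
    if score >= 20:
        return "Documented"
    return "Reactive"

print(maturity_band([5, 4, 5, 4, 4, 4, 5]))  # Embedded or above
print(maturity_band([3, 3, 3, 3, 3, 3, 3]))  # Documented
```

Re-running the scoring at each quarter-end turns the questionnaire into a trend line for the 12-month uplift plan.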
8. Further Reading and Authoritative Sources
Primary statutes and rules:
- Privacy Act 1988 (Cth)
- Privacy and Other Legislation Amendment Act 2024 (Cth)
- Privacy Regulation 2013 (Cth)
- Privacy (Credit Reporting) Code 2014 (registered code)
OAIC guidance:
- OAIC Australian Privacy Principles Guidelines (current edition)
- OAIC Notifiable Data Breaches Resource Hub
- OAIC Guide to Undertaking Privacy Impact Assessments
- OAIC De-identification Decision-Making Framework
- OAIC Privacy Self-Assessment Tool for APP Entities
APRA guidance:
- Prudential Standard CPS 234 Information Security
- Prudential Practice Guide CPG 234 Information Security
- Prudential Standard CPS 230 Operational Risk Management
- Prudential Practice Guide CPG 230 Operational Risk Management
Adjacent and international:
- Competition and Consumer Act 2010 (Cth) Pt IVD (Consumer Data Right)
- National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF)
- ISO/IEC 27701 Privacy Information Management System
Professional bodies and resources:
- International Association of Privacy Professionals (IAPP) Australia and New Zealand chapter
- Governance Institute of Australia, Privacy and Ethics resources
- Risk Management Institution of Australasia, Privacy and AI risk publications
9. Closing Sign-off
This module provides general information and education for Australian financial services practitioners. It is not legal, compliance, or professional advice. Apply the framework to your entity's specific circumstances, take advice where the position is unclear, and document your decisions.