ASIC's AI Supervisory Posture, Decoded

ASIC's posture on AI in financial services is now visible across REP 798, the 2026 Key Issues Outlook, and recent statements from the Chair. Five themes shape supervisory expectation, and three create immediate work for compliance teams.


GRC content. Written for compliance, risk, and audit professionals in Australian financial services. General information. Not legal or compliance advice.

Existing law applies to AI. Apply it the same way.

Context for general readers: ASIC is the Australian regulator for financial services conduct, market integrity, and consumer protection. When a regulator wants to influence how the industry behaves but does not need to make new law, it issues reports, regulatory guides, information sheets, and public statements. ASIC's posture on AI in financial services has been articulated through several documents: Report 798 'Beware the gap: Governance arrangements in the face of AI innovation' (29 October 2024), the Key Issues Outlook 2026, and public commentary from ASIC Chair Joe Longo. Together, these tell licensees how the regulator expects them to apply the obligations they already have to AI systems. For compliance teams, this is the most actionable form of supervisory communication, because it is the regulator describing its supervisory lens.

ASIC's supervisory posture on AI has crystallised around a single proposition: AI is a use case to which existing law applies. The most direct articulation came in REP 798, where the regulator reviewed AI adoption across 23 AFS and credit licensees and warned of an emerging governance gap. ASIC Chair Joe Longo summarised the position plainly in the accompanying media release: "Existing consumer protection provisions, director duties and licensee obligations put the onus on institutions to ensure they have appropriate governance frameworks." This article extracts the five themes that matter most for compliance teams in regulated financial services, and identifies the three immediate actions that flow from them.

The framing

ASIC's framing across REP 798, the 2026 Key Issues Outlook, and recent public commentary is consistent: AI does not change the legal obligations of an Australian Financial Services Licence holder. The conduct obligations, the disclosure obligations, the design and distribution obligations, the dispute resolution obligations, and the duty to act efficiently, honestly and fairly all continue to apply. The presence of AI in the operational stack does not move the obligation; it changes where the licensee needs to look to evidence compliance.

This framing has practical consequences. It means licensees should not be looking for an AI-specific regulatory regime to comply with. They should be looking at how their existing compliance program reaches the AI components of their operations.

It also means the supervisory communication is, in a sense, a recalibration tool. Many licensees have been treating AI as a special domain governed by special rules. ASIC is asserting that this framing is not how the regulator sees it: AI is a use case to which existing law applies, and the supervisory question is whether the licensee has applied that law adequately.

Theme 1: Personalisation and the personal advice line

ASIC's first theme reinforces a position the regulator has taken consistently: AI-driven personalisation can cross the line from general information into personal advice, regardless of how the licensee characterises the interaction.

The supervisory expectation is that licensees evaluate AI-driven customer interactions against the personal advice test in section 766B of the Corporations Act 2001, with attention to what a reasonable person in the customer's position would expect. Where AI tools take into account a customer's specific circumstances and produce recommendations that the customer would expect to apply to their situation, the personal advice obligations apply.

The practical implication: licensees deploying AI in customer-facing channels need a clear policy on what these tools can and cannot say, with the policy applying to the substance of the interaction rather than the formal characterisation.

Theme 2: Disclosure and explainability

ASIC's second theme focuses on disclosure. REP 798 found that nearly half of surveyed licensees lacked policies addressing consumer fairness or bias, and even fewer had disclosure policies regarding AI use to consumers. Where AI is used to make or materially influence decisions affecting customers, the regulator's stated expectation is that licensees have made appropriate disclosure of that fact. The supervisory question is not whether the customer received boilerplate text mentioning AI; it is whether the customer received information sufficient to understand how the AI use affects their experience.

The standard ASIC supervisory approach to disclosure (substance over form) extends here. Disclosure that is technically present but ineffective at communicating the relevant information is not adequate.

The practical implication: licensees should review existing disclosure documents and customer communications for AI-relevant content, with attention to whether the disclosure is meaningful from the customer's perspective.

Theme 3: Consumer remediation and dispute resolution

ASIC's third theme connects AI deployment to internal dispute resolution and remediation. The Corporations Act and RG 271 require licensees to maintain effective dispute resolution processes. Where AI tools influence outcomes that customers later dispute, the licensee must be able to investigate and remediate those disputes effectively.

The supervisory concern, as flagged in REP 798: AI systems that produce decisions without explanation create dispute resolution friction. Where the licensee cannot explain why a particular customer received a particular outcome, the dispute resolution process is impaired.

The practical implication: licensees need an internal explanation capability for AI-influenced decisions that meets the timeframes and quality expectations of RG 271. This is operationally significant work.

Theme 4: Market integrity

ASIC's fourth theme covers market integrity. Where licensees use AI in trading, market making, or related activities, market integrity rules continue to apply. The regulator's expectation is that licensees have specific governance over AI uses that touch market activity, with attention to the potential for AI-driven actions to amplify market events or create disorderly outcomes.

The practical implication: AI uses in market-facing activities sit inside the market integrity supervisory lens, not just the operational risk lens. Governance and reporting cadence should reflect this.

Theme 5: Operational risk and resilience

ASIC's fifth theme overlaps with APRA's CPS 230 territory but reaches non-APRA-regulated licensees as well. Where AI tools are embedded in operations that affect customer outcomes, the licensee's operational resilience capability needs to extend to those tools. The 2026 Key Issues Outlook flags AI-driven cybercrime and the resilience implications of AI-enabled conduct as enduring concerns.

For licensees not subject to CPS 230, this theme effectively imports many of the same expectations through ASIC's general efficiency and operational adequacy obligations. ASIC's broader cyber resilience program provides a structural parallel; the regulator's expectation is that AI-related operational resilience receives comparable attention.

Three immediate actions

The five themes translate into three actions that compliance teams should be progressing this quarter.

1. Personalisation review

Walk through every AI-driven customer interaction in scope of the licence. Document, for each, whether the design intent is general information, general advice, or personal advice. Test whether the operational behaviour matches the design intent. Where the line is blurred, restructure the interaction or accept the personal advice consequences.

2. Disclosure refresh

Review existing customer disclosure documents and AI-related customer communications for adequacy against the substance test. Update where the disclosure is generic or insufficient. REP 798's finding that fewer than half of surveyed licensees had AI disclosure policies suggests this is a common gap.

3. Dispute resolution explanation capability

Map the AI-influenced decisions that customers might dispute. Build an internal capability to explain those decisions within the RG 271 timeframes. Where the AI tool does not provide adequate explanation natively, restructure the workflow so a human reviewer captures the explanation at the time of decision, not retrospectively when a complaint arrives.

How the posture intersects with APRA, AUSTRAC, and OAIC

ASIC does not operate alone. Its public commentary on AI explicitly recognises the role of other regulators with overlapping interests. APRA on prudential and operational risk. AUSTRAC on AML/CTF. OAIC on privacy. AHRC on discrimination and human rights. Treasury on legislative direction.

For practitioners, this means the supervisory questions on a single AI use case can come from multiple regulators. A personalisation engine used in retail banking, for example, sits inside ASIC's conduct lens, APRA's operational risk lens, and OAIC's privacy lens simultaneously. The compliance program needs to satisfy all three, not just one.

The pattern that has emerged in the most mature institutions is to design AI governance frameworks that map to all relevant regulatory regimes from inception, rather than retrofitting compliance after the supervisory questions arrive. ASIC's posture reinforces that this multi-regulator perspective is now the standard expectation, not the leading-edge practice.

What ASIC has not said

It is also worth being explicit about what ASIC has not said.

ASIC has not imposed a new licensing condition for AI use. The existing licensing framework continues to apply.

ASIC has not prohibited specific AI use cases. The regulator's view, restated by the Chair in October 2024, is that the existing law applies; how it applies depends on the use case.

ASIC has not provided a safe harbour for AI uses that satisfy specific design criteria. Compliance is judged against the existing obligations, applied to the specific facts.

ASIC has not signalled forthcoming legislative change in the immediate term. Legislative direction sits with Treasury and Parliament; ASIC's communication is supervisory communication within the existing framework.

These absences are themselves important. Practitioners who hoped for AI-specific safe harbours, prohibitions, or new licensing categories should adjust expectations. The supervisory tools available to ASIC are the same tools that have always been available; the question is how the regulator will deploy them in the AI context.

Anticipating the supervisory engagement

The pattern that has emerged from ASIC's recent supervisory engagement on adjacent issues (DDO, dispute resolution, conduct in retail credit) is a structured one. The regulator typically begins with thematic reviews or industry surveys, follows with public reports identifying common issues, and then takes selective enforcement action where issues persist.

For AI, the early stages of this pattern are already in motion. REP 798 was the first thematic review and identified the governance gap. The 2026 Key Issues Outlook elevates AI to an enduring supervisory priority. Industry surveys and thematic engagement are likely through 2026 and into 2027. Enforcement action, where it comes, is likely to focus on egregious cases (systematic distribution drift, misleading personalisation, dispute resolution failure) rather than minor compliance gaps.

The institutions best placed for this engagement will be those that can show, in writing, that they have applied the existing law to their AI uses deliberately and consistently. ASIC's posture is, in essence, the regulator telling the industry what kind of evidence will be looked for. Compliance teams should treat it as a planning input.

The next stage of the supervisory cycle is likely to involve specific information requests to selected licensees, as ASIC builds an evidence base. The posture signals what the regulator will be asking about; the requests, when they arrive, will test whether the licensee can answer. The work to do now is preparing the answers.

Direction of travel

ASIC's posture does not by itself change the law. It clarifies how the regulator will apply the existing law to AI. Combined with APRA's expected model risk thematic review, AUSTRAC's emerging engagement on AML/CTF AI uses, and the baseline set by the Voluntary AI Safety Standard, the supervisory landscape for AI in financial services is now broadly mapped.

For compliance teams, the implication is that AI governance is no longer a forward-looking topic. It is an active supervisory area, and the existing obligations apply now. Practitioners who treat REP 798 and the 2026 Key Issues Outlook as planning documents rather than future-state aspiration will be in the strongest position for the supervisory engagement that is already beginning.

Content disclaimer: This article is for general educational and informational purposes only. It does not constitute legal advice, regulatory guidance, or a substitute for professional compliance judgement. Regulatory obligations vary by entity type, licence, and circumstance. Always refer to primary source guidance from APRA, ASIC, or the relevant regulatory authority.


Context

ASIC is Australia's corporate, markets, and financial services regulator. Its mandate covers conduct, disclosure, and consumer protection in financial services. ASIC has been active on AI since at least 2023, including through Report 762 on DDO and investment products and, more directly, through Report 798 'Beware the gap: Governance arrangements in the face of AI innovation' published 29 October 2024. ASIC's articulated posture sits alongside complementary work by APRA, AUSTRAC, OAIC, and the AHRC.

AI angle

ASIC's framing positions AI as a use case to which existing law applies, rather than a new regulatory regime requiring fresh law. The supervisory expectation is that licensees apply their existing conduct, disclosure, and risk obligations to AI systems with the same rigour as to other operational technology. The implications cut across personalisation, market integrity, and consumer remediation.

