FAR pushes accountability to a named person. AI does not change that.
Context for general readers: The Financial Accountability Regime Act 2023 makes individual senior executives in banks, insurers, and super funds personally answerable for what happens inside the parts of the business they run. It was designed in response to the Hayne Royal Commission to address a perceived gap in personal accountability at the senior executive level. Each Accountable Person has a documented set of responsibilities, and APRA and ASIC can take action against them individually if they fail to take reasonable steps to discharge those responsibilities. AI tooling has entered every part of regulated financial services, and so it has entered the accountability picture as well.
This article examines how AI tooling decisions intersect with FAR. It is written for boards, accountable persons, and the GRC professionals who support them. The framing is practical: how does an accountable person evidence that they have taken reasonable steps to address AI risk inside their portfolio?
Why FAR matters for AI governance
FAR's defining feature is the personal accountability dimension. A breach of an accountable person's accountability obligations can result in disqualification, loss of deferred variable remuneration, and reputational consequences for that individual; civil penalties are available where the person is knowingly involved in their entity's contravention. The regime is designed to make decisions about systems, controls, and customer outcomes traceable to a person, not a committee.
For AI specifically, this matters because the operational reality of AI deployment is often committee-driven. AI governance councils, ethics committees, technology steering committees, and risk fora all touch AI tooling decisions. FAR cuts across this committee landscape and asks who, individually, is accountable for the outcome. The committee may make the decision; the accountable person bears the consequence.
This exposure concentrates at two decision points.
First, the procurement and deployment decision. When a regulated entity acquires an enterprise AI capability and deploys it inside a regulated function, the accountable person responsible for that function inherits accountability for the consequences. The procurement may have been led by IT or by a separate AI capability function, but the accountability sits with the line owner.
Second, the operational use decision. Even where AI tooling has been procured centrally, individual business unit decisions about how to deploy the tooling create accountability exposure. An accountable person whose team uses an AI tool in a regulated process is accountable for whether that use is fit for purpose, governed appropriately, and consistent with the entity's broader risk appetite.
The prescribed responsibility map
FAR requires regulated entities to allocate a defined set of prescribed responsibilities to accountable persons. The list includes (among others) management of the entity's policies and procedures, management of operational risk, management of compliance with regulatory obligations, management of customer outcomes, and management of the integrity of internal management information systems.
AI tooling decisions can sit inside any of these. The mapping work practitioners need to do is as follows (a minimal register sketch appears after the list):
- Operational risk management. AI tooling that supports operational processes (claims, underwriting, customer service) sits inside this responsibility. The accountable person is responsible for ensuring the AI tool is included in the operational risk framework.
- Regulatory compliance management. AI tooling used in regulatory monitoring (AML/CTF, conduct, complaints handling) sits inside this responsibility. The accountable person is responsible for ensuring the AI tool produces compliance outcomes consistent with the regulatory regime.
- Customer outcomes management. AI tooling that influences customer experience (recommendation engines, personalisation, automated decisioning) sits inside this responsibility. The accountable person is responsible for ensuring those outcomes are consistent with target market and conduct expectations.
- Information system integrity. AI tooling that processes management information sits inside this responsibility. The accountable person is responsible for ensuring the information produced is reliable.
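One way to make this mapping auditable is a structured register that ties each AI tool to the prescribed responsibility it sits inside and to a named accountable person. A minimal sketch in Python follows; the field names and responsibility categories (ToolEntry, PRESCRIBED_RESPONSIBILITIES) are illustrative assumptions, not terms from the Act or any regulatory template.

```python
from dataclasses import dataclass

# Illustrative categories only; the actual prescribed responsibilities are
# defined in the FAR legislation and rules, not in this sketch.
PRESCRIBED_RESPONSIBILITIES = {
    "operational_risk",
    "regulatory_compliance",
    "customer_outcomes",
    "information_system_integrity",
}

@dataclass
class ToolEntry:
    tool: str                 # e.g. "claims-triage-llm"
    business_process: str     # the regulated process the tool supports
    responsibility: str       # which prescribed responsibility it sits inside
    accountable_person: str   # a named individual, never a committee
    materiality: str          # e.g. "high" / "medium" / "low"

def validate(register: list[ToolEntry]) -> list[str]:
    """Flag entries that would leave a documentation gap under FAR."""
    gaps = []
    for entry in register:
        if entry.responsibility not in PRESCRIBED_RESPONSIBILITIES:
            gaps.append(f"{entry.tool}: unmapped responsibility '{entry.responsibility}'")
        if not entry.accountable_person:
            gaps.append(f"{entry.tool}: no accountable person designated")
    return gaps

register = [
    ToolEntry("claims-triage-llm", "claims management",
              "operational_risk", "Head of Claims", "high"),
    ToolEntry("complaints-classifier", "complaints handling",
              "regulatory_compliance", "", "medium"),
]
print(validate(register))  # -> ['complaints-classifier: no accountable person designated']
```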
The supervisory expectation under FAR is that the responsibility map is documented, current, and reflective of how the entity actually operates. Where AI tooling has been deployed without the responsibility map being updated, the entity has a documentation gap that supervisors may pursue.
What "reasonable steps" looks like for AI
The FAR Act 2023 does not require accountable persons to prevent every adverse outcome inside their portfolio. Section 28 of the Act requires them to take reasonable steps to discharge their accountability. The reasonableness standard is fact-specific, but the supervisory expectation has shape.
For AI tooling, four categories of reasonable step are likely to be tested.
1. Awareness
An accountable person whose portfolio uses AI tooling needs to know about it. Awareness presupposes an inventory: a current register of the AI tools in the portfolio and the regulated processes they touch. An accountable person who cannot describe, in broad terms, the AI tools operating in their portfolio is not in a strong position to evidence reasonable steps.
The practical action: each accountable person should receive a quarterly summary of AI tooling in scope of their portfolio, with material changes flagged.
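As a sketch of how a GRC team might produce that summary, the snippet below groups a hypothetical register by accountable person and surfaces material changes first; the row structure and status labels are assumptions.

```python
from collections import defaultdict

# Hypothetical register rows: (tool, accountable_person, status_this_quarter)
register = [
    ("claims-triage-llm", "Head of Claims", "unchanged"),
    ("underwriting-copilot", "Chief Underwriting Officer", "new deployment"),
    ("complaints-classifier", "Head of Customer Resolution", "model version upgraded"),
]

def quarterly_summary(rows):
    """Group the AI portfolio by accountable person, material changes flagged."""
    by_person = defaultdict(list)
    for tool, person, status in rows:
        by_person[person].append((tool, status))
    for person, tools in by_person.items():
        flagged = [t for t in tools if t[1] != "unchanged"]
        print(f"{person}: {len(tools)} AI tool(s), {len(flagged)} material change(s)")
        for tool, status in flagged:
            print(f"  FLAG: {tool} - {status}")

quarterly_summary(register)
```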
2. Governance design
Reasonable steps include ensuring that the governance framework around AI tools is adequate. This is not the accountable person personally writing the policy; it is ensuring that a policy exists, is current, and is being followed.
The practical action: the entity's AI governance framework should be visible to accountable persons, with attestations on adoption and adherence.
3. Monitoring and escalation
Reasonable steps include having a way to know when something is going wrong. For AI tooling, this means monitoring metrics that reveal degradation, including output quality, model drift, and adverse customer outcome indicators.
The practical action: AI-specific risk reporting should reach accountable persons through the existing risk reporting channels, not as a separate AI committee report. AI risk is not separate from operational risk or conduct risk; it is a manifestation of them.
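By way of illustration, the sketch below computes a population stability index, one common drift metric for scored outputs, and escalates when a commonly cited threshold is crossed. The threshold, the baseline data, and the escalation routing are all assumptions; a production monitor would also track output quality and customer-outcome indicators.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # score distribution at validation
live = rng.normal(0.5, 1.3, 5_000)      # shifted live distribution

score = psi(baseline, live)
if score > 0.2:  # a commonly cited threshold for significant shift
    # Route through the existing operational-risk channel, not a side report.
    print(f"ESCALATE via operational risk reporting: PSI={score:.2f}")
```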
4. Response capability
Reasonable steps include being able to respond when an issue emerges. For AI tooling, this means having an incident response capability that includes AI-specific failure modes (for example, output quality degradation).
The practical action: the existing incident response framework should be tested against AI-specific scenarios, not just availability and security incidents.
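A hedged sketch of what such testing might start from: AI-specific tabletop scenarios expressed as data, exercised alongside the entity's existing availability and security scenarios. The scenario wording and expected responses are illustrative, not drawn from any regulatory guidance.

```python
# Hypothetical tabletop scenarios for exercising the existing incident
# response framework against AI-specific failure modes.
AI_INCIDENT_SCENARIOS = [
    {
        "scenario": "Output quality degradation after silent model update",
        "detect_via": "quality metrics in routine risk reporting",
        "expected_response": "suspend tool, revert to manual process, notify accountable person",
    },
    {
        "scenario": "Drift in customer-facing automated decisioning",
        "detect_via": "adverse customer outcome indicators",
        "expected_response": "escalate via conduct risk channel, remediation review",
    },
    {
        "scenario": "AI tool output inconsistent with a regulatory obligation",
        "detect_via": "compliance monitoring sample checks",
        "expected_response": "breach assessment, record the decision for the FAR file",
    },
]

for s in AI_INCIDENT_SCENARIOS:
    print(f"Test: {s['scenario']} | detect via: {s['detect_via']}")
```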
The shared accountability problem
A specific FAR challenge worth examining is the shared accountability problem. AI tools are often selected centrally (by IT, by a separate AI capability team, or by procurement) but deployed across multiple business units, each with its own accountable person. When something goes wrong with the tool, the accountability picture can be unclear.
The joint information paper from APRA and ASIC on FAR implementation anticipated exactly this issue, noting that prescribed responsibilities should be allocated to a single accountable person rather than split across several. The implication for AI tooling: each AI tool in scope should have a clearly designated accountable person, even if multiple business units use the tool.
The practical pattern emerging in major institutions is to designate the accountable person responsible for the most material use of the tool as the lead accountable person for the tool itself, with the accountable persons of secondary user business units having supporting accountability for their specific use cases. This pattern works only if it is documented; an undocumented allocation is unlikely to satisfy FAR's documentation expectations.
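A minimal sketch of the documentation check this pattern implies: every tool carries exactly one lead accountable person, with supporting accountable persons recorded per use case. The allocation structure is a hypothetical illustration.

```python
# Hypothetical allocation records: one lead accountable person per tool,
# supporting accountable persons documented per business-unit use case.
allocations = {
    "underwriting-copilot": {
        "lead": "Chief Underwriting Officer",  # owner of the most material use
        "supporting": {
            "claims": "Head of Claims",
            "distribution": "Head of Distribution",
        },
    },
    "document-summariser": {
        "lead": None,  # undocumented allocation
        "supporting": {},
    },
}

def allocation_gaps(allocs):
    """An undocumented lead allocation is a FAR documentation gap."""
    return [tool for tool, a in allocs.items() if not a["lead"]]

print(allocation_gaps(allocations))  # -> ['document-summariser']
```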
Records and attestation
FAR creates specific record-keeping obligations. Entities must maintain records of accountability allocations, of changes to those allocations, and of decisions made by accountable persons in discharge of their accountabilities. Where AI tooling decisions sit inside FAR responsibilities, the records of those decisions are part of the FAR record.
The supervisory practice that has emerged for FAR more generally is the periodic attestation cycle: accountable persons attest, on a defined cadence, that they have taken reasonable steps to discharge their accountabilities. Where AI tooling is in scope of an accountable person's portfolio, the attestation should explicitly cover AI risk.
The practical task for compliance teams supporting accountable persons: ensure that the attestation framework prompts the accountable person to consider AI tooling risk explicitly, with sufficient information to make the attestation meaningful. An attestation made without supporting evidence creates personal exposure for the accountable person while giving them nothing protective to point to.
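One way to operationalise that is an attestation gate that will not present the AI-risk attestation for signature until every in-scope tool carries supporting evidence. A minimal sketch follows, assuming three illustrative evidence categories; REQUIRED_EVIDENCE is not a regulatory checklist.

```python
# Hypothetical evidence expected per AI tool before an accountable person
# attests; the categories are illustrative assumptions.
REQUIRED_EVIDENCE = {"inventory_entry", "governance_attestation", "monitoring_report"}

portfolio_evidence = {
    "claims-triage-llm": {"inventory_entry", "governance_attestation", "monitoring_report"},
    "complaints-classifier": {"inventory_entry"},  # monitoring evidence missing
}

def ready_to_attest(evidence_by_tool):
    """Block the attestation while any in-scope tool lacks supporting evidence."""
    missing = {
        tool: sorted(REQUIRED_EVIDENCE - held)
        for tool, held in evidence_by_tool.items()
        if REQUIRED_EVIDENCE - held
    }
    return (not missing), missing

ok, gaps = ready_to_attest(portfolio_evidence)
print(ok, gaps)
# -> False {'complaints-classifier': ['governance_attestation', 'monitoring_report']}
```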
Insurer and trustee considerations
FAR commenced for insurers and superannuation trustees in March 2025, a year after its commencement for ADIs. The architecture is the same, but the specific prescribed responsibilities are differently weighted. For insurers, AI tooling in claims management, underwriting, and policy distribution intersects directly with prescribed responsibilities. For trustees, AI tooling in member communication, investment recommendation (subject to the relevant licensing framework), and complaints handling sits inside the responsibility map.
The implementation lessons from ADIs are directly transferable. Insurers and trustees bedding down their FAR frameworks should not treat AI tooling governance as a separate workstream. It is part of the FAR work, and it should sit inside the responsibility map from day one.
Practical implications this quarter
For boards, accountable persons, and the GRC teams supporting them:
- Refresh the responsibility map for AI tooling visibility. Where AI has been deployed since the last map review, ensure the responsibility allocation is documented.
- Build AI tooling into the existing accountable person reporting cadence. Quarterly AI portfolio updates to accountable persons, with clear escalation triggers, are a sensible operational standard.
- Test reasonable-steps evidence on a sample basis. Pick one AI use case in each accountable person's portfolio and walk through what reasonable steps look like for that case. This is internal audit work in many entities; it can also be done as part of a FAR readiness review.
- Coordinate the AI governance committee with the FAR governance framework. Where AI governance has been built as a parallel structure (for example, an AI Council reporting to the executive committee), the relationship to the FAR-defined accountability flow should be explicit.
Direction of travel
FAR is a relatively new regime for ADIs and newer still for insurers and trustees. Supervisory engagement on AI under FAR has so far been quiet, but the joint information paper from APRA and ASIC made clear that the regulators expect accountable persons to take a forward-looking view of emerging risks. AI is the most prominent emerging risk in the financial services operating environment.
The institutions that will be best placed are those that treat AI tooling decisions as a routine part of the FAR responsibility framework, not as a special category requiring its own governance overlay. The accountability runs through the line, and the documentation should follow. As AI tooling continues to embed inside regulated workflows, the FAR responsibility map will need ongoing maintenance to reflect operational reality. Practitioners who treat the map as a living document, refreshed whenever material AI deployments occur, will keep their accountable persons in a defensible position. Practitioners who treat the map as an annual artefact will find it lagging behind operational reality, with the gap visible to supervisors.
Content disclaimer: This article is for general educational and informational purposes only. It does not constitute legal advice, regulatory guidance, or a substitute for professional compliance judgement. Regulatory obligations vary by entity type, licence, and circumstance. Always refer to primary source guidance from APRA, ASIC, or the relevant regulatory authority.
TheAICommand. Intelligence, At Your Command.
