CPS 234 and AI Vendors: A Due Diligence Framework
← GRC

CPS 234 and AI Vendors: A Due Diligence Framework

CPS 234 has been in force since 2019. AI vendors stretch the framework in specific ways: training data exposure, model update opacity, and inference infrastructure that crosses the standard's information asset boundaries. A practical due diligence framework.

·10 min read·monthly

GRC content. Written for compliance, risk, and audit professionals in Australian financial services. General information. Not legal or compliance advice.

Standard third-party security questionnaires do not cover AI. They need to.

Context for general readers: CPS 234 is the rule APRA uses to make sure regulated entities (banks, insurers, super funds) maintain strong information security. It covers the entity's own systems and the systems of its third-party suppliers. The standard is principles-based: it says what good information security looks like and requires the entity to maintain capability commensurate with its risks, rather than prescribing specific controls. AI vendors create a particular challenge because the information assets they touch (training data, prompts, model weights, inference logs) do not always fit cleanly into the categories that traditional information security frameworks were built around.

This article provides a practical due diligence framework for assessing AI vendors against CPS 234 expectations. It is structured as a working framework that risk and information security teams can apply directly. It assumes familiarity with CPS 234 and CPG 234.

What changes with AI vendors

A traditional SaaS vendor relationship has reasonably well-understood information security boundaries. The vendor processes data the entity provides, applies the documented controls, and returns outputs or maintains a hosted service. Information asset categories are clear: customer data, transaction data, employee data, configuration data.

AI vendors stretch this picture in five ways.

First, prompt data. When a regulated entity uses an AI tool, the prompts (often containing customer information, internal policy detail, or strategic context) become a new information asset category. Standard data protection language often does not address whether prompts are retained, where they are stored, and who can access them.

Second, fine-tuning data. Where the entity is using a fine-tuned model, the training data it provides creates a long-lived information asset embedded in the model itself. The data leaves the entity's direct control and becomes a feature of the model.

Third, model weights and embeddings. Some AI vendor architectures involve creating embeddings or model variants that are specific to the entity. These are derived information assets that may carry sensitivity from the source data.

Fourth, inference logs. The vendor may retain logs of inferences (input, output, metadata) for service improvement, debugging, or abuse monitoring. These logs can contain customer data and may be subject to access by vendor personnel that the entity has not approved.

Fifth, the upstream provider chain. Many AI vendors are themselves built on top of foundation model providers. The contracting layer (the AI vendor) may not be the layer where the most sensitive information processing actually happens.

CPS 234 treats all of this as in-scope. The information assets relevant to AI vendor relationships need to be identified, classified, and protected commensurate with their sensitivity.

Due diligence framework

A practical CPS 234-aligned framework for AI vendor due diligence covers eight areas.

1. Information asset identification and classification

Document, for the proposed AI vendor relationship, every information asset category that will be exposed: prompts, fine-tuning data, embeddings, inference logs, configuration. For each, identify the originating data source, the sensitivity classification under the entity's information classification scheme, and the regulatory regimes that apply (Privacy Act, banking secrecy, AML/CTF Rules).

The deliverable is an information asset inventory specific to the vendor relationship. This becomes a control point: any change to what data flows to the vendor requires a review against this inventory.

2. Vendor architecture and chain mapping

Document, for the proposed AI vendor, the architecture of the service. Identify the foundation model provider, the inference infrastructure provider, the data residency for each component, and any sub-processors. CPS 234 paragraph 16 requires the entity to take steps to ensure information security is maintained when information assets are managed by a third party. The entity needs to know what the chain looks like to assess this.

The deliverable is an architecture diagram and chain map that the entity holds independently of the vendor's marketing material.

3. Control commitments

Identify the information security controls the vendor commits to operate, and the evidence available for those commitments. Standard evidence sources include SOC 2 Type II reports, ISO 27001 certifications, and security questionnaire responses. AI-specific evidence sources are emerging but not yet standardised.

The CPS 234 expectation is that the entity has assessed the vendor's controls as commensurate with the sensitivity of the information assets exposed. Where the vendor's control evidence is thin in AI-specific areas (training data isolation, inference log access controls, model update governance), the entity needs a documented compensating control or an accepted residual risk.

4. Data handling commitments

Document, contractually and operationally, what the vendor will do with prompts and inference data. Specifically:

  • Will prompts be used for model training? If yes, are there opt-outs and how are they evidenced?
  • How long are prompts and inference data retained?
  • Who at the vendor (or its sub-processors) can access prompts and inference data?
  • Are prompts and inference data ever co-mingled with other customers' data, including for service-improvement purposes?

These questions should be answered in writing before vendor selection. The vendor's standard terms often do not address them clearly.

5. Update and change management

AI vendors update their underlying models. The behaviour of the service the entity is using may change without a new contract being signed. CPS 234 paragraph 21 requires controls to be tested for design and operating effectiveness on a regular basis. Where the underlying model can change without notice, the testing cadence needs to match.

Document the vendor's change management approach for model updates: notice given, testing the entity can perform, ability to roll back to a previous model version, and any substitute behaviour during a model deprecation.

6. Incident notification and response

CPS 234 requires the entity to notify APRA of material information security incidents. For AI vendor relationships, this requires the entity to know when an incident has occurred at the vendor. The standard contractual approach (vendor notifies entity within X hours of becoming aware) may not be sufficient where the incident category is novel (for example, prompt injection leading to inadvertent disclosure of one customer's data to another).

Document the vendor's incident notification commitments, including coverage of AI-specific incident categories. Where the categorisation is unclear, the entity should request specific clarification.

7. Testing and assurance

CPS 234 requires control testing on a regular basis. For AI vendor relationships, this should include both standard control testing (penetration testing, configuration review) and AI-specific testing (prompt injection, data leakage, output quality drift).

Where the entity cannot directly test the vendor's controls, the testing capability of the vendor itself becomes part of the assurance picture. SOC 2 Type II reports and equivalent are useful but not sufficient for the AI-specific layer.

8. Exit planning

CPS 234 expects the entity to maintain information security capability through transitions. For AI vendors, exit planning has additional dimensions: what happens to fine-tuned models, embeddings, and historical inference data when the relationship ends? Are there irreducible information leakage paths (for example, model weights derived from the entity's data that cannot be reversed)?

Document the exit pathway, including data deletion commitments, attestation processes, and any residual information assets that will continue to exist post-exit.

Specific AI failure modes the framework should address

Beyond the eight framework areas, due diligence work should explicitly consider three AI-specific failure modes that traditional information security questionnaires often do not cover.

The first is prompt injection. A malicious or careless input causes the AI tool to behave in unintended ways, potentially exposing data or executing actions outside the design intent. The vendor's protections against prompt injection (input filtering, output review, system prompt protection) should be evaluated.

The second is data leakage between customers. Where the vendor operates a multi-tenant inference infrastructure, mistakes in tenant isolation can expose one customer's data to another. The vendor's tenant isolation architecture, including cache and embedding-store isolation, should be examined explicitly.

The third is training data contamination. Where the vendor uses customer data for any training purpose (even with consent), the data becomes part of the model's behaviour and may surface in responses to other customers. The vendor's commitments and operational controls on training data use should be specific and verifiable.

These failure modes are AI-specific. They do not feature in standard information security questionnaires because the underlying technology was not in scope when the questionnaires were designed. Adding them is straightforward but requires deliberate attention.

Working with the major AI providers

Vendor practice varies considerably across the major enterprise AI providers, particularly on data residency, audit rights, and the form in which compliance documentation is made available. Each vendor's published trust centre or compliance portal is the appropriate primary reference; due diligence work should review those sources directly at the time of assessment rather than rely on generic comparisons that age quickly.

The practical implication for due diligence: the assessment framework should be applied to each vendor on its own terms, rather than expecting uniformity across providers. A vendor that cannot provide a particular control evidence type may still be acceptable if the entity has documented compensating controls and accepted residual risk explicitly.

The supervisory expectation is not that every AI vendor relationship is perfect. It is that the entity has assessed each relationship deliberately, documented the assessment, and managed the residual risk consciously.

The continuous monitoring requirement

CPS 234 paragraph 21 expects controls to be tested for design and operating effectiveness on a regular basis. For AI vendors, this requirement intersects with the change cadence problem covered in our CPS 230 piece: the vendor's underlying model can change without notice, and the testing cadence needs to match.

The practical pattern for material AI vendor relationships: a continuous or near-continuous monitoring approach focused on output quality, with periodic deeper assurance on the underlying control framework. Quarterly assurance with continuous output monitoring is a reasonable starting point for material vendors. Annual-only assurance is unlikely to be sufficient.

Practical implications this quarter

For information security and third-party risk teams:

  1. Build an AI-specific addendum to the standard third-party security questionnaire. The eight areas above translate directly into questionnaire sections.
  2. Reassess the highest-impact AI vendor relationships against the framework. Where the original due diligence pre-dated AI-specific governance attention, gaps are likely.
  3. Build the architecture and chain map for each material AI vendor. This is the artefact most often missing in current third-party files.
  4. Coordinate with procurement on standard AI-specific contractual terms. Standard SaaS contract language often does not adequately address the AI-specific information asset categories.

Direction of travel

CPS 234 has been operationally embedded for several years. Its application to AI vendors is emerging supervisory territory, with the cyber resilience thematic review of 2024 already signalling APRA's interest in third-party assurance. Practitioners who treat AI vendor due diligence as a structured extension of the existing CPS 234 framework, rather than a new and separate discipline, will find the implementation cleaner and the supervisory conversation easier.

The pace of underlying technology change means the framework itself will need refreshing periodically. The eight areas above are unlikely to change quickly, but the specific control evidence that vendors can provide, the standard contractual terms available, and the supervisory expectations on each will continue to evolve. Building in an annual framework refresh, with quarterly review of material vendor relationships, is a sensible operational rhythm for the next two to three years.

Content disclaimer: This article is for general educational and informational purposes only. It does not constitute legal advice, regulatory guidance, or a substitute for professional compliance judgement. Regulatory obligations vary by entity type, licence, and circumstance. Always refer to primary source guidance from APRA, ASIC, or the relevant regulatory authority.

TheAICommand. Intelligence, At Your Command.

Context

CPS 234 is APRA's prudential standard on information security. It requires regulated entities to maintain information security capability commensurate with the size and extent of their information security threats and vulnerabilities. The standard covers governance, information asset identification and classification, implementation of controls, incident management, testing, internal audit, and APRA notification of material incidents. It applies to information assets managed by third parties, not just by the entity itself.

AI angle

Enterprise AI tools introduce new information asset categories (training data, prompts, model weights, inference logs, embeddings) and new exposure surfaces (third-party model providers, downstream sub-processors, fine-tuning data). The CPS 234 framework reaches all of this, but the standard third-party security questionnaire often does not.

Primary sources

CPS 234APRAinformation securitythird-party riskAI vendors
← Back to GRC

Content disclaimer: This article is for general educational and informational purposes only. It does not constitute legal advice, regulatory guidance, or a substitute for professional compliance judgement. Regulatory obligations vary by entity type, licence, and circumstance. Always refer to primary source guidance from APRA, ASIC, or the relevant regulatory authority.