Express Law No. 327

4 April 2025

Artificial Intelligence (AI) model clauses

On 17 March 2025, the Commonwealth published model clauses to aid government buyers procuring:

  • services where the seller may be using AI systems in the provision of the services
  • bespoke AI systems.

The AI model clauses can also be used for other AI use cases such as procuring software or cloud services with integrated AI.

The AI model clauses were developed by the Digital Transformation Agency, with support from AGS. They can be found on the BuyICT website.

The AI model clauses can be used with Digital Marketplace 2 contracts or in other agreements (with tailoring required to suit the specific contract).

Background

The Digital Transformation Agency (DTA) has recently established the Digital Marketplace Panel 2 (DMP2). The DMP2 strengthens the way the Commonwealth procures digital and Information and Communications Technology (ICT) services. This includes stronger protections for Commonwealth entities procuring through DMP2 (Buyers) in relation to cyber security, confidentiality, contract performance, misconduct and change of control.

As part of the Commonwealth’s uplift in ICT procurement, the DTA has created standalone model clauses specifically designed to manage emerging risks, issues and ethical challenges for the procurement of AI systems.

The new AI model clauses are a set of detailed clauses to assist Buyers to procure:

  • services where the seller may be using AI systems in the provision of the services (for example, if a consultant uses AI in the preparation of presentations and reports)
  • bespoke AI systems (for example, chat-bots for government websites or AI tools to assist government decision making).

Buyers will also find the model clauses a useful resource for other use cases, such as:

  • the development of automated decision-making tools (via algorithm rather than generative AI)
  • procuring or using off-the-shelf AI systems
  • procuring a product with embedded or integrated AI capabilities.

The AI model clauses do not form a complete contractual arrangement. Buyers need to select clauses based on the specific way in which AI will be used under the contract and to address the specific risks for that use.

The AI model clauses can be used with DMP2 contracts, or in other agreements. When drafting contracts for procuring AI, Buyers can refer to these AI model clauses, DTA’s cyber security model clauses and the BuyICT ClauseBank. Buyers will need to tailor the clauses as needed to suit the specific contract.

When developing the AI model clauses, the team considered the approaches of other jurisdictions, including Australian states and territories and the European Union (EU), such as the EU Artificial Intelligence Act 2024, the EU AI model clauses and standards from the International Organisation for Standardisation (ISO).

AGS also consulted with the DTA’s AI taskforce, including their technical experts, and aligned the model clauses with other Australian Government guidance and policies, such as the:

  • AI Assurance Framework
  • Australia’s AI Ethics Principles.

These documents focus on principles of AI use, which are addressed in the model clauses, including:

  • fairness
  • privacy
  • accountability
  • safety
  • explainability.

Before preparing an approach to market or drafting a contract for procuring AI systems or procuring services involving AI, Buyers should consider:

  • the listed Commonwealth policies and associated guidance
  • any internal AI related policies.

Buyers should also take into consideration PSPF Directions 001 and 002 of 2025. In these Directions, the use of DeepSeek and Kaspersky products and applications was determined to pose an unacceptable level of security risk to the Commonwealth. Consequently, Commonwealth entities must prevent the access, use or installation of DeepSeek and Kaspersky, and must remove all existing instances of DeepSeek and Kaspersky from all Australian Government systems and devices.

Types of procurement

Services where the seller uses AI in the provision of services (Section 1)

Sellers who provide professional and consulting services are increasingly using AI in the course of their work, for example to assist in the preparation of reports or to analyse and summarise information.

The AI model clauses for this use case include requirements for the seller to:

  • obtain the approval of the Buyer for use of AI in the provision of the services
  • conduct quality assurance checks to confirm the accuracy and reliability of AI outputs
  • keep detailed records of AI use.

The model clauses also allow the Buyer to specify banned AI systems that the seller must not use in the provision of the services (for example, DeepSeek and Kaspersky).

Bespoke AI systems (Section 2)

The AI model clauses for this use case anticipate the seller will be responsible for most aspects of the development and implementation of the AI system.

Principle-based provisions (clauses 2–5)

Below are some of the key principle-based clauses.

Intended use and statement of requirement (cl 2) [1]

The AI model clauses adopt the concept of ‘intended use’. The seller, when supplying an AI system, must develop, deliver, install and integrate the AI system to meet the Buyer’s intended use and comply with the Buyer’s statement of requirement/specification (cll 2.1 and 2.2).

Under any procurement, it is essential that Buyers prepare adequate statements of requirement. This is particularly important in an AI context given the complex and technical nature of procuring AI systems.

Buyers should carefully consider their internal processes and refer to the Commonwealth policies and associated guidance when preparing their statement of requirement.

The AI model clauses provide some additional guidance explaining what should be included in the statement of requirement for an AI system procurement. This includes that the Buyer should specify:

  • the environment in which the AI system is to be deployed (e.g. on premises, public cloud, private cloud or a combination of these)
  • the training and testing methodology (including types of training), duration and approval process to be used
  • how issues are reported and resolved, and the level of support offered
  • how transparency and explainability standards will be met (e.g. with regular reports).

Fairness (cl 2.6) [2], compliance with laws and policies (cl 3) [3] including privacy (cl 4) [4]

The model clauses include requirements for the seller to ensure the AI system:

  • does not discriminate against any person or group on the grounds of any protected characteristic set out in the applicable anti-discrimination legislation (cl 2.6.1(a))
  • is developed and will operate on an ethically sound basis (cl 2.6.1(b))
  • does not pose a reputational risk or undermine public confidence in the government (cl 2.6.1(f)).

Sellers are also obligated to comply with:

  • laws and policies relating to the development, delivery, installation and integration of the AI system (cl 3.1.1)
  • new laws and policies that are introduced during the term of the contract (cl 3.2.1) – this is important given the constantly evolving nature of AI technology
  • the Privacy Act 1988 (Cth), including by acting on eligible data breaches (cll 4.1 and 4.2).

Human oversight (cl 5.1) [5]

Another critical element of AI is human oversight. Human oversight helps keep AI systems transparent and accountable. The AI model clauses set out the broad obligations of the seller to ensure human oversight of the AI system (cl 5.1). Crucially, the seller must design and develop the AI system so that it can be effectively overseen and monitored by humans (cl 5.1.1(a)(i)). The AI model clauses also detail:

  • requirements for seller personnel responsible for human oversight (cl 5.1.1(b))
  • what relevant information and guidance the seller should give to Buyer personnel who are responsible for human oversight (cl 5.1.1(c)).

Detailed provisions (clauses 6–13)

Below are some of the key clauses setting out detailed obligations.

Training, testing and monitoring of the AI system (cl 6.2)

The AI model clauses set out obligations for the training, testing and monitoring of the AI system. Under cl 6.2.4 the seller must ensure that the training, testing and monitoring of the AI system:

  • identify any output or model performance which may result in:
    • an individual being treated differently on the basis of a protected characteristic set out in anti-discrimination legislation
    • bias
  • identify any hallucinations or model drift in the AI system (that is, where flawed training data or other factors lead to inaccurate or nonsensical outputs).

Data mining and ingesting (cl 11)

Under the AI model clauses, sellers can only use Buyer data in accordance with the contract. Sellers are also prohibited from conducting data mining activities or ingesting Buyer data into a large language model or AI system (unless otherwise specified in the contract).

Risk management (cl 13)

After considering its risk assessment, the Buyer may wish to require the seller to provide a risk management system. The AI model clauses include several options for this.

If the Buyer has an AI policy or risk management system, the Buyer should require the seller to comply with it in providing the AI system (cl 13.1). This option is appropriate where the Buyer has already established a robust AI management system within its organisation.

The Buyer may further require the seller to establish and implement an AI risk management system. The Buyer may include either a:

  • short clause which requires compliance with ISO/IEC 42001:2023 Information Technology – Artificial intelligence – Management System (cl 13.2) or
  • detailed clause which allows the Buyer to specify what the seller’s AI risk management system must cover (cl 13.3).

The AI model clauses and the ClauseBank can be found on the BuyICT website.

 

[1] This addresses paragraph 2 of the AI Assurance Framework – Purpose and expected benefits. Australia’s AI Ethics Principles also make clear that AI system objectives should be clearly identified and justified.

[2] This addresses the Australian AI Ethics Principle of fairness which requires AI systems to be inclusive and accessible. It also addresses paragraph 4 of the AI Assurance Framework – Fairness.

[3] This addresses the Australian AI Ethics Principles of human, societal and environmental wellbeing; human-centred values; and fairness. This also addresses paragraph 4 of the AI Assurance Framework – Fairness.

[4] This addresses the AI Ethics Principle of privacy protection – AI systems should respect and uphold privacy rights. This also addresses paragraph 6 of the AI Assurance Framework – privacy protection.

[5] This addresses the AI Ethics Principle of accountability – human oversight of AI systems should be enabled. This also addresses paragraph 9 of the AI Assurance Framework – accountability and paragraph 5 – reliability and safety.

Contacts

SYD
Jane Supit

Senior Executive Lawyer and Director, Sydney office

Important: This material is not professional legal advice to any person on any matter. It should not be relied upon without checking. The material is provided to clients for information only. AGS is not responsible for the currency or accuracy of the content of external website links referred to within this material. Please contact AGS before any action or decision is taken on the basis of any of the material in this message.