The Australian Government’s interim response to the “Safe and Responsible AI in Australia” discussion paper flags a risk-based approach to AI governance in Australia, including a mix of voluntary AI safety standards, voluntary labelling and watermarking for generative AI, and the development of mandatory guardrails, with a particular focus on high-risk and frontier AI applications.

In brief

On 17 January 2024, the Australian Government released its interim response to submissions received on its 2023 “Safe and Responsible AI in Australia” discussion paper, which sought views on whether Australia has the right regulatory and governance arrangements in place to support the safe and responsible use and development of artificial intelligence (AI) technologies.

In its interim response, the Government agrees with several key points raised in submissions, including that high-risk and frontier AI applications are not currently subject to sufficient regulation and that a risk-based regulatory approach, targeting regulatory requirements at AI applications with a higher risk of harm, is appropriate. The interim response proposes that additional guardrails (including mandatory guardrails for high-risk AI) should be put in place to address potential harms associated with high-risk AI applications, subject to further consultation.

The interim response also indicates that the Australian Government’s next steps for AI governance will focus on:

  • Preventing harm through testing, transparency and accountability
  • Clarifying and strengthening laws
  • International work to support safe development and deployment of AI
  • Maximising the benefits of AI

Key takeaways

  • The Government’s immediate focus is to consider what mandatory AI safeguards are appropriate and how best to implement them as part of a risk-based regulatory approach to AI, informed by developments in other countries. This includes further consultation with industry to formulate an appropriate definition of “high-risk” AI in the Australian context.
  • Once the concept of “high-risk” AI is appropriately defined, a key safety measure the Government will focus on is developing regulatory guardrails around testing, transparency and accountability for such “high-risk” AI applications.
  • The use of low-risk AI tools and applications is likely to be permitted to continue under existing legal frameworks, without the need for bespoke AI regulation, in order to preserve the benefit and utility of those technologies.
  • While considering mandatory guardrails for AI development and use, the Government indicates it is already working with industry to develop a voluntary AI Safety Standard and options for a voluntary labelling and watermarking scheme for AI-generated materials.
  • The Government is also establishing a temporary expert advisory group to support the development of further guardrails for AI.

In more detail

Proposed actions

The Government will consider and consult on the introduction of new mandatory guardrails for the development and deployment of AI in high-risk settings. These guardrails are expected to focus on:

  • Testing – the Government will consider imposing requirements relating to the testing of AI systems before and after release, best-practice safety measures, auditing and performance monitoring, and security thresholds.
  • Transparency – this would involve clearer communication to users when AI is used in systems or to generate content, and public reporting of an AI model’s limitations and of the data used to train and test it.
  • Accountability – the Government suggests that those developing or deploying AI products in certain settings should be trained, and that specific roles should be made responsible for AI safety.

The Government also intends to define what qualifies as “high-risk” AI in an Australian context, which will be important to enable businesses to understand their regulatory obligations once mandatory guardrails are in place.

In parallel with its consideration of mandatory guardrails for “high-risk” AI development and use, the Government indicates it is already working with industry to develop a voluntary AI Safety Standard implementing (non-binding) risk-based guardrails, which may help businesses to manage risks associated with AI deployments, whether or not in a high-risk setting.

In the immediate future, the Government also states that it will:

  • Set up a temporary expert advisory group which will coordinate the development of AI guardrails.
  • Task the National AI Centre with collaborating with industry to “produce a best-practice and up-to-date voluntary AI risk-based safety framework” for Australian businesses.
  • Assess, in consultation with industry, the potential for a voluntary watermarking and labelling scheme for businesses to disclose the use of AI-generated content in high-risk settings.

Other planned actions include:

  • Considering options for strengthening existing laws to address risks and harms from AI, building on recent and already-proposed reforms in areas relevant to AI (e.g., privacy, online safety, misinformation). Potential reforms to Australia’s privacy laws include requiring non-government entities to conduct a privacy impact assessment for activities with high privacy risks, to identify and manage, minimise or eliminate those risks (already a requirement for government entities), as well as other amendments focused on increasing the transparency and integrity of automated decision-making that uses personal information.
  • Taking forward commitments made in the Bletchley Declaration, including supporting the development of a State of the Science report.
  • Continuing to engage internationally, aiming to shape global AI governance opportunities, support Australian involvement in developing technical AI standards, and understand other jurisdictions’ responses to AI.
  • Considering opportunities to support the development and adoption of automation technologies such as AI and robots, building on existing investments in this area.

A principles-based approach leveraging existing requirements

The interim response indicates that the Government will not follow the prescriptive approach set to be adopted by the EU, which seeks to manage the lifecycle of AI through a comprehensive legislative regime under the AI Act. Instead, the Government will adopt a principles-based approach similar to that implemented in the UK and, when defining the mandatory guardrails for high-risk AI applications, will look to leverage existing regulatory requirements wherever appropriate.

Implications

While the interim response does not include firm commitments to a legislative path forward for AI regulation, it openly acknowledges that Australia’s current laws and regulations are not adequate to address the fast-moving nature of AI and its associated risks. The interim response also does not provide firm timelines for implementing any proposed changes, but indicates that a mix of short- and long-term developments is on the horizon. The Government states that it will consult closely with industry, academia and the community on its proposed actions for AI. For this reason, all businesses with an interest in AI (whether developers, providers or users of AI solutions) should carefully consider the interim response and look for opportunities to have their say.

* * * * *

With thanks to Nicola McGran (Summer Clerk) and Liz Grimwood-Taylor (Senior Knowledge Lawyer) for their assistance with this alert.

Author

Adrian Lawrence is the head of the Firm's Asia Pacific Technology, Media & Telecommunications Group. He is a partner in the Sydney office of Baker McKenzie, where he advises on media, intellectual property and information technology, including major issues relating to clients' online and offline media interests. He is recognised as a leading Australian media and telecommunications lawyer.

Author

Toby Patten is a partner in Baker McKenzie's Technology and Healthcare teams in Melbourne. He joined the Firm in March 2005.

Author

Anne has been with Baker McKenzie since 2001. Prior to that, she spent four years with the Australian Attorney-General's Department/Australian Government Solicitor, mostly working on large IT projects.
In her time at Baker McKenzie, Anne has spent 18 months working in London (2007-2008) and, more recently, three years working in Singapore (2017-2020).

Author

Caitlin Whale is a partner in the Technology, Communications and Commercial team. She advises on technology, outsourcing and commercial law issues. Caitlin advises on technology and rights-specific issues in large corporate and commercial transactions, and has experience in managing multi-territory licensing and divestments for multi-national clients. She has extensive experience in advising on a range of commercial arrangements, including licence and software agreements, research and development and collaboration agreements, supply agreements and distribution agreements. Caitlin has experience in rights management and enforcement, advising on the ownership, registration, exploitation and protection of copyright, trade marks and designs. She has represented rights-owners and users and has particular experience in relation to online infringement issues.

Author

Jarrod Bayliss-McCulloch is a special counsel in the Information Technology & Commercial department at the Melbourne office of Baker McKenzie and advises on major technology-driven transactions and regulatory issues spanning telecommunications, intellectual property, data privacy and consumer law with a particular focus on digital media and new product development. Jarrod joined the Firm in 2009 and his prior experience includes working in strategy consulting and development economics.

Author

Alex is a senior associate at Baker McKenzie in the Technology, Healthcare & Life Sciences team, having started as a graduate with the Firm in 2018.

Alex also holds a Bachelor of Science with a double major in Genetics and Molecular Biology.