
Australia should ‘be the adult in the room’ on AI regulation

While looming Californian legislation sparks debate over safety and innovation, Australia would do well to maintain a “cautious and careful” approach to the regulation of artificial intelligence, legal experts say – an approach that must be replicated when dealing with clients.

Jerome Doraisamy | 13 September 2024 | Politics

Californian bill

As recently reported by Lawyers Weekly’s sister brand, Cyber Daily, the US state of California is set to pass SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would impose several requirements on AI developers to ensure models are developed ethically and safely.

These requirements include, among others, that developers implement a full shutdown capability before a model is developed and that developers “retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use plus five years, including records and dates of any updates or revisions”; the bill “would require a developer to grant to the Attorney-General access to the unredacted safety and security protocol”.

It would also allow the state’s Attorney-General to take action against developers who do not follow the regulations.

Another important obligation in the bill, Clyde & Co partner Alec Christie outlined, is the requirement of a “kill switch” for the covered AI frontier models, “which is to be kept and available for relevant use by the authorities (in addition to the developer organisation) to turn off the systems as required”.

Another critical provision, which Christie mused was “perhaps lost in the hype” around the other headline obligations in the bill, is the proposed whistleblower protections, which apply to all AI developers (not just those behind the covered AI frontier models).

These will, he submitted, “give significant rights and protections to whistleblowers within AI organisations to report practices contrary to the obligations and, in addition, any AI developments and/or training that they believe is likely to cause harm”.

SB 1047, Mills Oakley partner Dalvin Chien surmised, is an example of the direction that most legislators are taking in providing safeguards and prohibiting, in effect, dangerous AI.

Like the European Union’s AI Act, he pointed out, the bill intends to regulate AI to prevent harm, albeit via a different scope and approach.

This all said, Chien opined: “[While] well-intentioned, SB 1047 could impose a cost burden on AI companies and could potentially slow innovation in AI models, which typically rely on higher levels of computing power.”

Christie agreed that the bill is a “well-intentioned first step”, but he also voiced concern with it.

“This is state (rather than US federal) legislation, which means that if it takes off (as often happens where California is a ‘first mover’), Australian businesses could end up facing 50 different sets of state AI regulations in this space, which will make doing business in the US in this AI space very complicated,” Christie said.

A suggested role for Australia

For Sebastian Sequoiah-Grayson, a senior lecturer in epistemics at UNSW’s School of Computer Science and Engineering, the emergence of California’s bill is unsurprising.

“Any new technology with as much power and ubiquity as artificial intelligence will elicit astonishment, wonder, hope, fear, hubris, and more,” he said.

What is even less surprising, he continued, are the details of the bill.

“The focus on existential risk emerges from a fascination with eschatological annihilation that has been a part of our species since the dawn of language. ‘How will it all end?’ we ask. Not with AI, it won’t,” he said.

“The cult of existential risk exists for the same reason as any other cult: for the wealth and power of its leaders, at the expense of the rest of us.”

This is not to say, Sequoiah-Grayson went on, that AI does not bring real risks: “Of course it does,” he said.

“There are the environmental costs from rare-earth mineral mining, and from the truly huge power and water consumption. There are real risks to academic integrity across our schools and universities. There are threats to social cohesion from the automation of spurious conspiratorial discourses. Nowhere in the bill’s eight sections is any serious mention made of any of this (sans an oblique reference to deception in section 3),” he said.

A more serious approach is necessary, Sequoiah-Grayson proclaimed.

“Australia would do well to be the adult in the room at this point,” he said.

“We should recognise the very real ability of technology to affect our attitudes at speed.”

“Witness those towards privacy over the last 20 years alone. By anticipating the attitudinal changes precipitated by AI, along with all the behavioural and legal movements that are sure to be in attendance, we will absorb AI into our psychosocial milieu successfully, just like we have with its technological forebears,” he said.

“But, let’s be real. AI is not threatening our existence. Neither is SB 1047 threatening ‘California’s unique status as the global leader in AI’. No one is considering, seriously, that any Silicon Valley natives are opposing California’s SB 1047 in favour of a federal-level response to AI because they have abandoned their crypto-libertarian proclivities after undergoing a sudden conversion to federalist oversight in some kind of life-changing road-to-Washington moment.”

“It is not an angel that we hear singing, just an agenda.”

Steps being taken in Australia

The approach that Australia’s federal government favours for the regulation of AI does indeed appear to be more “cautious and careful”, Chien said.

At the start of the year, the government announced a risk-based framework for AI regulation; in March, it formed a Senate select committee to explore AI opportunities and impacts; and in June, it released the National Framework for the Assurance of Artificial Intelligence in Government, continuing its commitment to putting the rights, wellbeing, and interests of people first.

Then, last week, it published its Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings, which is open for consultation over the coming weeks, and a new Voluntary AI Safety Standard, which has immediate effect.

In a statement, Minister for Industry and Science Ed Husic said: “Australians know AI can do great things, but people want to know there are protections in place if things go off the rails. We need more people to use AI, and to do that, we need to build trust.”

The latest announcements follow the release of the Responsible AI Index 2024, commissioned by the National AI Centre, which found that Australian businesses “consistently overestimate” their capability to employ responsible AI practices.

Almost four in five Australian businesses, the report noted, believe they are implementing AI safely and responsibly, but less than three in 10 are actually doing so.

“Artificial intelligence is expected to create up to 200,000 AI-related jobs in Australia by 2030 and contribute $170 billion to $600 billion to GDP, so it’s crucial that Australian businesses are equipped to properly develop and use the technology,” said Minister Husic.

Considering the “cautious and careful” approach being taken Down Under, and in light of what he called Australia’s “maturing technology ecosystem”, Chien reflected that there is a “good argument for the more balanced and well-considered federal framework involving industry collaboration which we are taking”.

“The downside, of course, is that there could be a counterargument that we are a little slow to act,” he said.

When asked if Californian-style legislation could or should be enacted Down Under, Christie responded that some of the requirements and considerations around cyber security, safety standards, and the “kill switch” idea “are worthy of consideration” in Australian legislation.

However, he went on, given that the Californian thresholds are “potentially way too high to cover any Australian developments”, and given the open question of whether these obligations should apply more widely than simply the largest AI frontier models (i.e. to all key AI), he did not feel the legislation should be replicated wholesale here.

Role of lawyers

Keeping across relevant developments, Christie mused, is both important and difficult.

One must have, he said, a “holistic view, including not only the laws that have passed in Australia but [also] international laws that are being considered, the approach that other regulators outside of Australia are taking, a view ‘over the horizon’ (what is reasonably expected to be coming their way) and to understand the key AI issues, uses and concerns across key jurisdictions as these will influence Australian lawmakers and regulators”.

Moreover, he added, it is “fundamental to understand the extraterritorial application [of such laws] as they apply and when they will apply to Australian corporates doing business in that state”.

Chien reflected that a key objective of his, as a senior tech and AI lawyer, is to help clients navigate regulations in a pragmatic way.

“The key isn’t the technology or the regulation of the technology but rather how the technology will help an organisation or person achieve their objective (be that a public interest objective, a commercial objective, or an objective grounded on social justice) in a safe and responsible way,” he said.

Given the fluid nature of the space, he noted, a two-pronged approach is required: “One, revisit contracts and processes frequently, and two, we need to deal with change in a very even-handed and calm way. Be kind to your lawyers and compliance team!”

Jerome Doraisamy

Jerome Doraisamy is the editor of Lawyers Weekly. A former lawyer, he has worked at Momentum Media as a journalist on Lawyers Weekly since February 2018, and has served as editor since March 2022. He is also the host of all five shows under The Lawyers Weekly Podcast Network, and has overseen the brand's audio medium growth from 4,000 downloads per month to over 60,000 downloads per month, making The Lawyers Weekly Show the most popular industry-specific podcast in Australia. Jerome is also the author of The Wellness Doctrines book series, an admitted solicitor in NSW, and a board director of Minds Count.

