The legal reforms needed to protect users of artificial intelligence
Class action solicitors with one of the biggest compensation firms have proposed a suite of reforms designed to tighten protections and strengthen legal recourse for harms caused by artificial intelligence.
Shine Lawyers’ senior associate Atanaan Ilango and solicitor Dr Benjamin Koh have identified several gaps within the present legal framework in their submission to the Department of Industry, Science and Resources’ inquiry into “supporting responsible AI”.
Dealing specifically with free software and liability clauses in user agreements, Mr Ilango and Dr Koh write that statutory guarantees may not apply, or may be ineffective, if the original developer is not subject to Australian legislation, cannot be found, or has insufficient assets or insurance.
The class action solicitors recommended that any proposed AI-related laws “specifically state that any terms that exclude liability within user agreements … are deemed unfair and voided”.
This is in line with changes to Australia’s unfair contract terms laws in the Competition and Consumer Act 2010 and the Australian Securities and Investments Commission Act 2001. Under the proposal, AI-related laws would state that any terms within user agreements that exclude liability or prevent a right to participate in a class action are “voided”.
Turning to the inconsistency of the laws between consumer and non-consumer use, particularly given the opaque “black box” workings of AI systems, Mr Ilango and Dr Koh wrote that there are “enough uncertainties in the current framework that warrant legislative reform”.
“It is therefore recommended that all AI developers have a presumed duty to [their] end users and non-contracting third parties.
“This then provides to the general public better legal protection in the event a software or AI product causes them harm,” they wrote.
As Mr Ilango and Dr Koh conceded it would be “unrealistic and impractical” to make software error-free, they recommended that harm actionable under negligence laws be restricted to significant harm, with explicit consideration given to including mental harm.
They added there should also be a “rebuttable presumption” against AI developers “of a causal link between a failed duty of care and harm caused by the AI system once a breach of care is established”.
“This overcomes the difficulties posed by the uncertainties regarding the workings of black boxes and the lack of transparency. It also brings AI products more in line with consumer protections.”
Mr Ilango and Dr Koh said placing the burden on AI developers to rebut the presumption would encourage manufacturers to “incorporate some kind of data log or safety systems”.
Naomi Neilson
Naomi Neilson is a senior journalist with a focus on court reporting for Lawyers Weekly.