AI Under the Gavel: The Imperative for Legal Regulation

Judges in the Australian legal system typically hold lengthy appointments and are insulated from electoral pressure, and doctrines such as stare decisis mean that, once established, the law tends to resist change. Yet the rapid rise of AI has prompted a legal system known for its deliberate pace, one that evolves slowly by design, to reconsider its rules on the use of AI tools in litigation and investigation. This shift reflects a growing recognition of the need to adapt to the challenges and opportunities AI technologies present.

Promoted by Siera Data | 08 November 2024 | Corporate Counsel

We explore recent developments in AI regulation in Australia, set against global trends.

Australia: A Frameless Framework?

Australia currently lacks a federal, AI-specific regulatory framework; however, existing laws already address certain aspects of AI technology.[1] In June 2023, the Australian Government released a discussion paper, “Safe and Responsible AI in Australia”, seeking input on managing AI risks.[2]

Earlier this year, the government issued an interim response, outlining proposed actions in four key areas:

1. Preventing harm through testing, transparency, and accountability

2. Strengthening laws to protect citizens

3. Collaborating internationally for safe AI development

4. Maximising AI’s benefits

To address the first point, the government committed to developing an AI Safety Standard and implementing risk-based guardrails for the industry. The Standard includes both voluntary guidelines and proposals for mandatory regulations in high-risk AI settings. While the definition of “high-risk” AI applications remains unsettled, the government’s discussion paper suggested defining them as systems with “systematic, irreversible, or perpetual” impacts, such as AI used in medical surgery or self-driving cars, aligning with the EU’s approach.[3]

In September this year, the government released the voluntary guardrails, which form part of the AI Safety Standard.[4] These currently apply, on a voluntary basis, to all Australian organisations. The mandatory guardrails, outlined in a separate discussion paper, remain under consultation before being finalised.

Global Developments in AI Regulation

AI holds immense potential, and calls to govern its safe development and responsible use are growing worldwide. While Australia, the US, China, and the EU adopt different regulatory approaches, all share the same goal: a cohesive framework for managing AI risks and harnessing its benefits responsibly.

United States

Like Australia, the US lacks comprehensive AI regulation, with AI currently governed by a mix of federal and state laws. On 30 October 2023, President Biden issued an Executive Order to guide federal AI adoption, aiming to ensure the "safe, secure, and trustworthy" use of AI. Key provisions include requiring developers of foundational AI models to share safety test results with the government, establishing AI safety standards, and creating guidelines for content authentication and watermarking.

This move follows the cautionary case of Mata v. Avianca,[5] in which two New York attorneys submitted a brief citing fictitious case law generated by ChatGPT, authorities that neither opposing counsel nor the judge could locate. While the judge noted that using AI tools is not inherently improper, he emphasised that lawyers must ensure the accuracy of their filings. The attorneys were fined, underscoring the need for diligence when AI is used in legal practice.
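The practical takeaway from Mata v. Avianca is mechanical: every authority cited in a filing should be checked against a primary source before submission. As a rough, hypothetical illustration (not a statement of any court's requirements), the sketch below pulls US reporter-style citations out of a draft and flags any that do not appear in a locally maintained list of human-verified authorities; the citation pattern and the draft_brief.txt and verified_authorities.txt files are assumed placeholders.

```python
import re
from pathlib import Path

# Simplified, hypothetical pattern for US reporter citations such as
# "598 U.S. 651" or "925 F. Supp. 2d 1". Real Bluebook formats vary far more.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.[23]d|F\. Supp\. \d?d)\s+\d{1,4}\b")

def unverified_citations(draft_text: str, verified_file: Path) -> list[str]:
    """Return citations found in the draft that are missing from the verified list.

    `verified_file` is assumed to hold one known-good citation per line,
    added by a human only after reading the primary source.
    """
    verified = {line.strip() for line in verified_file.read_text().splitlines()}
    return [c for c in CITATION_RE.findall(draft_text) if c not in verified]

if __name__ == "__main__":
    draft = Path("draft_brief.txt").read_text()  # hypothetical draft filing
    for citation in unverified_citations(draft, Path("verified_authorities.txt")):
        print(f"VERIFY BEFORE FILING: {citation}")
```

A tool like this only narrows the search; the point, per the court in Mata, is that a human remains responsible for confirming that each cited authority actually exists and says what the brief claims.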

At the state level, various laws have been enacted to improve transparency, address sector-specific issues, and mandate impact assessments and data transparency.

China

China has been proactive in AI regulation, implementing specific laws since 2021,[6] though a comprehensive AI framework is still pending. Key regulations focus on the use of algorithmic recommendation and deep synthesis technologies (a form of generative AI) for internet services, the development of generative AI, and ethical reviews of AI research. China also regularly releases new standards for public consultation, including recent updates on data security and content regulation for generative AI.

EU

The EU's Artificial Intelligence Act (EU AI Act), which took effect on 1 August 2024, is the world’s first comprehensive AI regulation. It adopts a risk-based approach to regulate the entire AI lifecycle, with obligations imposed on various stakeholders across the AI value chain. AI systems are categorised into four risk levels: unacceptable, high, limited, and minimal, with most obligations targeting developers (“providers”) of high-risk systems, such as medical devices and critical infrastructure.

General-purpose AI models, like large language models, are addressed separately, with additional requirements for those identified as posing "systemic risk."

Similar to the EU's GDPR, the AI Act has extraterritorial reach, applying to organisations outside the EU, including those in Australia. It also imposes significant penalties for non-compliance, including substantial fines based on the nature of the violation.
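To make the Act's tiered structure concrete, here is a minimal sketch of how an organisation might record an internal triage of its systems against the four categories. The tier names follow the Act, but the example systems, the paraphrased descriptions, and the lookup approach are illustrative assumptions; actual classification turns on the Act's definitions and annexes, not on a table like this.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk categories (descriptions paraphrased)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "heaviest obligations, falling mostly on providers"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative internal register only -- real classification depends on a
# system's intended purpose under the Act, not on a name lookup.
SYSTEM_REGISTER = {
    "social-scoring system": RiskTier.UNACCEPTABLE,
    "medical-device triage model": RiskTier.HIGH,
    "critical-infrastructure controller": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for name, tier in SYSTEM_REGISTER.items():
    print(f"{name}: {tier.name} ({tier.value})")
```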

Where to from here?

As the judicial system continues to assess the various impacts of AI, it’s encouraging to see a focus on regulation that prioritises safety without stifling innovation.

Here are a few actionable steps to help safeguard AI use:

  • Ensure your training data is diverse and representative to mitigate unintentional bias. Regularly test and audit AI models for fairness, require vendors to disclose bias audits, and establish feedback loops that allow users to report biases and drive necessary adjustments (a minimal testing sketch follows this list).

  • Establish accountability by designating responsible parties to oversee AI systems, manage data security, and address performance issues. Conduct regular audits for compliance with governance standards, and maintain thorough documentation of model versions, policy updates, and key decision rationales, including human oversight.

  • Stay informed as AI governance evolves by joining AI-focused networks or working with specialists. Partnering with experts ensures compliance, aligns AI practices with business goals, and helps avoid costly mistakes.

  • Establish clear ethical guidelines for AI use in legal practice, emphasising accuracy and accountability in all submissions.

  • Training is essential. Legal professionals should increasingly be expected to learn AI fundamentals and ethics, as well as the capabilities and limitations of specific AI tools, so they can apply them effectively in litigation.

  • Judges are becoming more vigilant in scrutinising evidence and submissions that rely on AI-generated content, so it's crucial to ensure all information is verified and reliable.
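As a starting point for the bias-testing step in the first bullet above, the sketch below computes a simple demographic parity gap, that is, the largest difference in favourable-outcome rates between groups in a model's decisions. The record format and the 0.2 tolerance are assumptions for illustration; a real fairness audit would use several metrics and group definitions appropriate to the domain.

```python
from collections import defaultdict

def demographic_parity_gap(records: list[dict]) -> float:
    """Largest difference in favourable-outcome rate between any two groups.

    Each record is assumed to look like {"group": "A", "outcome": 1},
    where 1 marks a favourable decision and 0 an unfavourable one.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        favourable[r["group"]] += r["outcome"]
    rates = [favourable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Fabricated-for-demo decisions; in practice these would come from audit logs.
decisions = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
TOLERANCE = 0.2  # assumed threshold; set one per your governance policy
gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f} "
      f"({'investigate' if gap > TOLERANCE else 'within tolerance'})")
```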

Whilst AI regulation is taking shape globally in various forms, there is widespread agreement on the importance of integrating ethics into its use. Ultimately, we are all on the same side on that point, which is a reassuringly positive position to be in.

For more information, contact Siera Data


[1] Australia’s 8 Artificial Intelligence (AI) Ethics Principles, designed to ensure AI is safe, secure and reliable; Online Safety Act 2021; Privacy Act 1988; Corporations Act 2001, among others.
[2] https://consult.industry.gov.au/supporting-responsible-ai/submission/list
[3] https://consult.industry.gov.au/ai-mandatory-guardrails, p. 39.
[4] https://www.industry.gov.au/publications/voluntary-ai-safety-standard
[5] Mata v. Avianca, Inc., 22-cv-1461 (PKC) (S.D.N.Y. Jun. 22, 2023).
[6] Guiding Opinions on Strengthening Overall Governance of Internet Information Service Algorithms; Ethical Norms for New Generation AI; Provisions on the Management of Algorithmic Recommendations in Internet Information Services (September 2021).
