
Navigating AI regulation in the profession

As AI tech continues to revolutionise legal workflows, the legal profession must address how and to what extent AI should be regulated, with many emphasising the importance of education and transparency in leveraging AI’s benefits while maintaining ethical standards.

Lauren Croft | 03 July 2024 | Big Law

Over the course of last year, AI tech and platforms like ChatGPT made global headlines and have since begun to reshape legal workflows.

Clients are demanding efficiency more than ever – and a recent report from Dye & Durham revealed that in 2024, firms will increasingly turn to AI in a bid to retain talent and drive efficiency.

However, as the use of AI and generative AI (GenAI) becomes more common in legal workplaces, concerns have been raised about the risks of the technology continuing to go largely unregulated.


The use of AI platforms was also recently flagged in the Victorian Legal Services Board and Commissioner’s 2024 Risk Outlook.

“Notwithstanding the benefits of AI, it pays to remember that, unlike a great lawyer, AI cannot exercise superior judgement. It remains your responsibility to understand a matter in the context of legal principles, the human psychology of its parties, external complicating factors and future possibilities. It’s ultimately your expertise that clients rely upon when they engage your services,” the LSBC said.

A recent LexisNexis white paper, Navigating the Challenges of AI Regulation, also emphasised that regulations need to be adapted quickly to address emerging risks and ensure responsible AI development and deployment moving forward.

“Australia’s approach to AI regulation is currently in a state of flux. Unlike the European Union’s recently enacted Artificial Intelligence Act (AIA), Australia currently lacks a single, comprehensive legal framework dedicated to AI,” the report said.

“Existing laws related to data privacy, consumer protection, and discrimination offer some level of oversight, but concerns linger regarding their effectiveness in addressing the unique challenges posed by AI technologies.”

Can AI be self-regulated?

Many jurisdictions lack comprehensive legal frameworks, according to the LexisNexis white paper, so tech companies are “self-regulating”, establishing their own AI ethics principles.

“These principles often focus on areas like fairness, transparency, and accountability in AI development and deployment,” the white paper said.

“However, the effectiveness of self-regulation remains a point of debate. Critics argue that these principles lack concrete enforcement mechanisms and may be primarily used for public relations purposes.”

Thomson Reuters’ State of the UK Legal Market survey recently found that among the UK legal profession, 48 per cent of lawyers in private practice and 50 per cent of in-house lawyers think the profession should self-regulate its use of AI.

Following the release of this report, Lawyers Weekly asked its audience, via a LinkedIn poll, how the use of artificial intelligence should be regulated, if at all.

At the time of closing, 53 per cent of respondents said lawyers’ use of AI should be government-regulated, while 47 per cent said it should be self-regulated within the legal profession.

The LinkedIn poll is, of course, not a scientific study and should not be taken as such. However, it does offer insight into how Australian lawyers are thinking about AI moving forward.

This near-even split within both the UK and Australian legal professions comes as the regulation of AI remains one of the most hotly contested topics across the globe, especially as the technology continues to advance swiftly.

In conversation with Lawyers Weekly, King & Wood Mallesons partner and technology sector lead Cheng Lim said that because lawyers in Australia are already subject to ethical duties, regulating AI specifically within daily legal practice would be somewhat redundant.

“The government is currently considering how AI is best regulated across the economy to address a wide range of issues. Many existing laws – Australian Consumer Law, contract law, defamation to name a few – already regulate harms that AI can cause,” he said.

“We believe that new horizontal AI legislation should only be introduced if identified harms presented by AI cannot be adequately addressed in existing regulatory regimes. As Australian lawyers are already subject to ethical duties to act competently, in the best interests of clients and to not mislead the courts, we don’t see that there is any compelling reason to specifically regulate the use of AI in the legal industry.

“These ethical duties apply equally to the use of generative AI as to any (more traditional) ways of delivering legal services used by lawyers. Any use by Australian lawyers of generative AI which creates hallucinations or inaccurate output would, in all likelihood, breach those ethical duties.”

When asked whether the federal government believed a lawyer’s use of AI (in the course of daily legal practice) should be regulated, the Attorney-General’s Department responded that “the regulation of the legal profession is primarily a matter for states and territories”.

“Lawyers are bound by professional and ethical obligations, including fiduciary duties, statutory professional obligations and duties to the court. The government is committed to ensuring that the design, development and deployment of AI systems in Australia is safe and responsible,” the department said.

However, Hicksons partner David Fischl noted that in many Australian states and territories, the discussion of AI regulation is in very early stages.

“The Queensland Law Society released an AI Guidance Statement and has a GenAI Working Group. However, other state industry bodies have only just begun discussing the prospect of AI regulation and guidance. The Law Society of NSW announced that an AI taskforce would be set up as a 2024 priority, but there is no movement towards AI regulations or codes.

“The Australian Solicitors Conduct Rules provide guidance on what should be professional obligations on solicitors in their use of AI. The existing rule to act ‘ethically’, including ‘deliver legal services competently, [and] diligently’ requires solicitors to understand the risks and benefits of AI before it is used to deliver advice to clients.

“There is, however, no doubt that the rules require updating to address use of AI in provision of legal services. For example, solicitors’ obligations around ‘supervision of legal services’ currently only refer to ‘supervision over solicitors and all other employees’. There should be specific obligations around properly understanding and supervising use of technology before it is used,” Fischl said.

Thomson Reuters legal tech chief product officer Kriti Sharma said that whichever path is taken, trust will be essential.

“Whether through self-regulation or government standards, trust is needed to give the legal profession the confidence it needs to drive forward and harness AI’s potential benefits. AI adoption in the legal sector is happening now, and as companies build and apply AI, many proactively are creating their own guidelines,” she said.

“Guardrails, in the form of regulation around the safe use and application of AI, also play a critical role, helping to establish accountability in the AI ecosystem and giving businesses the clarity they urgently need to drive AI adoption. We cannot afford to run AI that humans cannot trust, and this rests on thoughtful implementation of safeguards, policies that uphold the standards of the profession and training in the safe usage of this technology.”

Challenges in regulating AI

The challenges in regulating AI, according to the LexisNexis white paper, include defining AI, keeping up with innovation, data privacy and overlapping legal considerations – as “the pervasiveness of AI makes it a technology that intersects with numerous existing regulatory areas”.

The white paper noted that the balance of regulation is “a critical challenge”; strict regulation could stifle innovation, while no regulation at all could mean that the unethical or harmful use of AI increases.

Enforcing regulations, however, is especially challenging when dealing with complex AI systems and deep learning algorithms.

“Unlike simpler models of AI, deep learning finds patterns that are non-linear and statistical, further obscuring the reasoning process. The massive datasets used in training also make it challenging to pinpoint how individual pieces of data influence the outcome. This lack of transparency can hinder explaining decisions, detecting bias, and debugging errors,” the white paper said.

Potential bias in AI has also been flagged in the recruitment and redundancy process, as reported by Lawyers Weekly last year, with legal recruiters emphasising that “the human element in the recruitment process will always be crucial”.

In light of these challenges, the NSW government initiated a new inquiry in June 2023 to examine and report on the role of AI in NSW, with a particular focus on its “current and future extent, nature and impact”, “the social, economic and technical opportunities, risks and challenges presented by AI”, and “the effectiveness and enforcement of Commonwealth and NSW laws and regulations regarding AI”.

Following this, Brett McGrath, the president of the Law Society of NSW, advocated for a collaborative approach that leverages existing frameworks and international best practices in regard to AI.

“AI brings with it many challenges and opportunities, and appropriately flexible, scalable and principles-based regulation will help ensure that the community can both reap the benefit of AI and be protected from its potential threat,” he said.

“The NSW Parliament should build upon and adapt existing approaches that are emerging both in Australian and international jurisdictions and seek consistency with respect to privacy, cyber security, consumer rights and human rights law.”

Legal frameworks and mitigating risks moving forward

Many Australian legal organisations made submissions to the Senate select committee inquiry on adopting artificial intelligence, including the Human Rights Law Centre, which said there is a growing awareness of the potential threats the technology poses if left unregulated.

“Advances in technology should serve our communities, not put people at risk of harm. We need laws to ensure that artificial intelligence technology and the corporations driving it forward are transparent and accountable to people,” Human Rights Law Centre senior lawyer David Mejia-Canales said.

“The Albanese government should follow the world-leading examples set by Europe, Brazil and Canada to ensure that Australia’s regulation of artificial intelligence is grounded in human rights laws and principles.”

Similarly, the Australian Human Rights Commission emphasised that “unchecked proliferation of powerful AI technologies may radically reshape economies and societies over the coming decade – for both better and for worse” and said that Australia should strengthen existing legislation and then introduce specific legislation, if necessary, to address risks that are not currently within the scope of the existing regulatory framework.

The path towards effective AI regulation is complex and ongoing – and clear and adaptable frameworks are needed, according to the LexisNexis white paper, which further noted that “industry leaders, alongside developers and researchers, hold the responsibility of embedding these principles into the design, development, and deployment of AI systems”.

“AI users serve a vital role in empowering themselves and holding these actors accountable for the responsible use of this powerful technology. While establishing clear and adaptable legal frameworks remains crucial, the work does not stop there. Businesses must proactively stay informed about evolving regulations, foster ethical AI cultures, and continuously audit practices for compliance,” the report said.

As such, the Law Council submitted to the Senate inquiry that any regulatory framework should be flexible and kept under accelerated review, particularly as this tech continues to evolve.

“The Law Council considers that any regulatory framework adopted in Australia should be flexible and scalable to future technological change, and so will allow for sufficient innovation while maintaining appropriate guardrails. Any approach should focus on the particular characteristics and functional capabilities of the technologies that demand a regulatory response (for example, their adaptivity and autonomy) rather than strict rules for particular new iterations,” the submission said.

“In light of the rapid developments in AI, any framework, regulatory or otherwise, should be subject to shorter, accelerated review cycles than would normally apply. We support a risk-management approach that focuses on having adequate systems in place to monitor and detect adverse consequences when they do arise.”

In its own submission, the Attorney-General’s Department said that given the continued evolution and transformation of AI tech, it’s “critical that Australia harnesses the opportunities arising from the uptake of AI technologies, as well as ensuring our legal frameworks remain fit for purpose to ensure transparency, accountability and integrity to safeguard Australians from potential harm from the use of AI technologies, including bias and discrimination, and maintaining Australians’ trust and confidence in our laws and institutions.”

Fischl echoed this emphasis on transparency and concluded that, moving forward, regulation needs to be coupled with education in order to truly succeed.

“I believe that ‘transparency is key’ in the creation and delivery of legal advice. To be transparent about use of AI in delivery of legal advice, solicitors need to properly understand how the technology works, including those risks and benefits. Without this understanding, our profession is in danger of falling out of step with the rest of the world.

“Regulation, supported with education and guidance to our profession, is critical to ensuring our profession can adequately serve the community, delivering the benefits of AI. AI will ensure legal advice becomes more accurate, prompt, and available at a lower cost,” he said.

“Regulation by itself will not be enough to ensure that the profession can achieve this, nor will it sufficiently protect our profession and our clients. Parallel education programs are essential to ensure lawyers are capable of properly engaging with the world through this new paradigm.”

Lauren Croft

Lauren is a journalist at Lawyers Weekly and graduated with a Bachelor of Journalism from Macleay College. Prior to joining Lawyers Weekly, she worked as a trade journalist for media and travel industry publications and Travel Weekly. Originally born in England, Lauren enjoys trying new bars and restaurants, attending music festivals and travelling. She is also a keen snowboarder and pre-pandemic, spent a season living in a French ski resort.