Bridging gaps between lawyers and AI
While artificial intelligence (AI) has been readily implemented in the majority of legal practices, myriad challenges still stand in the way of lawyers getting the best use out of it.
Speaking on a recent episode of LawTech Talks, Sparke Helmore Lawyers partner Dalvin Chien and UNSW senior lecturer and Unisearch expert Dr Sebastian Sequoiah-Grayson discussed how some of these challenges start with a basic lack of understanding of what AI is actually there to support lawyers with. Mr Chien broke the technology down into three components.
“The first component is that it relies on data. It relies on a lot of data, historical data at this point.
“The second part of artificial intelligence is self-learning. It’s capable of undertaking analysis, and it’s then also capable of learning from that analysis. The whole idea is that the machine is self-taught.
“The third aspect of artificial intelligence is the output. Generative AI, in particular, will provide you [with] a contextual answer.”
Mr Chien continued: “So issue number one is it’s complicated, right? It’s like explaining the blockchain, but it’s actually not that complicated at its core.
“The other issue that lawyers tend to struggle with is, ‘How do I relate to artificial intelligence?’ … Often the question is asked, ‘Look, if I don’t use an AI tool, have I discharged my duty of competence? Have I discharged my duty to my client?’ And the less popular answer is, ‘Imagine if I were to use a typewriter to draft pleadings and charge my client for it. If there’s a readily available generative AI tool that can help me in managing a matter, should I not use it?’ And I appreciate that that’s not the most popular response, but the non-use of a readily available tool can be a problem.”
Dr Sequoiah-Grayson agreed that understanding the best use of AI is a common challenge legal professionals face, particularly because there’s no one-size-fits-all approach.
“When you were talking about responses within the legal world perhaps being too black and white or overly standardised, that’s a conversation we’re already having in the academic space. Since the rise of generative AI, there’s been a shift from concerns about teaching, research and all sorts of practices across the academic space not being standardised enough for reasons of fairness, et cetera,” he explained.
“Now, the concern is about authentic teaching and authentic research … And one of the reasons that generative AI is so popular right now is not just what it’s able to do, but how easy the interface is to use – whoever has the best interface will win the race.
“And for anyone wondering how generative AI works or what it means to use it, just go and use it. I mean, one day’s playing around with it will make it abundantly clear, and it’s that ease of use. We had smartphones long before the iPhone, but, gosh, they were horrible to use, and nobody bothered. Then the iPhone came out, and it changed everything.”
A lack of regulation surrounding AI is also leaving some lawyers confused about how to make the best use of it, according to Mr Chien.
“You regulate technology, broadly speaking, in three ways. The first is you allow market participants to sort it out amongst themselves. So that’s the realm of contract law. I entered into a contract with you. There are a few rules associated with that, but ultimately, I’ve got the freedom to choose the terms that I sign up to.
“The second is you hit it with a sledgehammer. It’s mandatory laws: ‘Thou shalt not do blah’.
“The third is through voluntary industry codes: for you to play in this space, you need to comply, and you need to sign up to a charter.”
Mr Chien continued: “I appreciate that the conversation about the threat of AI can be overstated, but a few months ago, I looked at basically what the world was doing. I looked at China, Europe, America, Australia, Malaysia, et cetera. And what fascinated me was the similarities.
“What I’ve seen is there’s almost a universal acceptance that AI left unchecked could be damaging for society for three reasons: it’s not diverse and inclusive; it’s not explainable (if I’m talking to a robot, do I know I’m talking to a robot?); and there’s the question of accountability (is it a product liability issue? Is it the person [who] purchased the AI?). But universally, amongst the countries I just mentioned, you saw almost an enshrinement, or the beginnings of an enshrinement, of AI ethical principles, and I’ve never seen it at that level.”
Despite the challenges associated with AI take-up, Mr Chien said there are ways to eliminate, or at least mitigate, the roadblocks that stand between legal professionals and making the best use of it.
“The first point I want to make is that it’s not going anywhere. I think we have to embrace it. There are two practical steps to take. The first is to create a policy. Each company, firm, university, et cetera, does need to actively consider this. That policy basically needs to address whether the organisation will allow the use of ChatGPT or generative AI,” he said.
“The second is [addressing], what are the issues if we are to use it? What are the proprietary considerations? What are the information barriers? What are the ethical considerations?
“Speaking of ethics, in the not-so-distant future, at the very least, you’ll see a concept, I think, called artificial intelligence ethics by design. We’re starting to see it in the context of cyber security, where everyone’s saying, ‘Look, any time you look at a privacy project where you are collecting personal information, you need to have design thinking before you embark on the journey: what are the implications? What are the implications for the community? What are the legal obligations, et cetera?’
“In my world, it’s a privacy threshold assessment and a privacy impact assessment. So [with] artificial intelligence ethics or considerations by design, do it at the start.”
NB: This transcript has been edited slightly for publishing purposes. You can listen to the full episode here: