
‘Safe experimentation’ with ‘technology that lies’

When implementing AI into their practices, lawyers should exercise caution where possible, as this tech is “going to develop in unexpected ways”, with new risks emerging as it evolves.

Lauren Croft | 01 October 2024 | Corporate Counsel

Last month, technology solutions company Consilio launched Guided AI Review, a locally hosted, tailor-made AI solution that combines custom algorithms with seasoned, AI-trained document review experts to deliver faster, higher-quality results in document review for the legal profession.

Following this launch, Consilio senior prototyping engineer Mitchell Scott reflected on AI and the future of the legal profession on an episode of The Corporate Counsel Show, produced in partnership with LOD, which was recently acquired by Consilio.

“The last two or three years have been an absolute whirlwind, ever since ChatGPT hit the mainstream and became front and centre in people’s minds,” Scott said.

“AI went from being this concept that people had heard of but didn’t really have a tangible idea about, to something people are using every single day – to automate their tasks, to see where it fails, and to get firsthand experience of it.”

In terms of the state of affairs for AI in Australia, Scott emphasised that while more legal workplaces are using AI than ever, adoption of the emerging technology is still in its early days.

“Despite being a couple of years into this AI hype cycle – having seen ChatGPT, then the rise of other providers, then the rise of open-source models – it seems we are still very much at the beginning. I feel like there isn’t a strong idea yet of where the risks in AI are really going to come from,” he said.

“We’ve seen some stumbling blocks from the lawyer side – I’m not a practising lawyer currently – with people relying on GenAI to do legal work and it hallucinating. This is a new technology that people aren’t really familiar with; we’ve never really had a technology that lies to us.”

While the use of AI is yet to be fully regulated in Australia, Scott explained that Australia’s approach “seems to be” more cautious than those of overseas markets.

“We’ve seen the EU take quite a heavy approach to regulating AI, and some of the regulations coming out of, for example, California seem to have tried to avoid the lead time that we usually see with legislation regulating new technologies,” he said.

“I think Australia’s more cautious, wait-and-see approach seems to be a little more cognisant of the fact that we don’t know where this technology is going. It’s going to develop in unexpected ways, and I don’t think we’ve yet seen a lot of the harm this technology could cause.”

One of the headline “harms” lawyers are currently experiencing with AI – and one of the key reasons the technology needs regulation – is hallucination, where a model confidently presents invented or inaccurate information to the user as fact.

“These technologies don’t always have a sense of logic, and they aren’t grounded in facts the way that our human reasoning is. The way I like to explain it is that they seem to be the logical descendant of autocorrect: they just predict the next word over and over and over, picking whatever they think you want to hear. So that’s not always correct; that’s not always logical or factual,” Scott said.
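For readers who want to see that “next word” loop concretely, the short Python sketch below greedily extends a prompt one token at a time using the open GPT-2 model from the Hugging Face transformers library – an illustrative stand-in only, not a tool mentioned by Scott or Consilio. Nothing in the loop checks facts: the model scores every candidate next token and simply takes the likeliest one.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Start from a legal-sounding prompt and extend it twelve tokens.
ids = tokenizer.encode("The High Court of Australia held that", return_tensors="pt")
for _ in range(12):
    with torch.no_grad():
        logits = model(ids).logits        # a score for every possible next token
    next_id = logits[0, -1].argmax()      # greedily pick the single most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))           # fluent and plausible, but entirely unverified

Run as written, the continuation reads like a real holding, yet nothing in the process consulted any case law – which is precisely the failure mode Scott describes.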

“So, that’s where hallucinations come from, and we, as a legal community, are starting to become aware that that’s a possibility. We’ve also seen stumbling blocks when it comes to using services like OpenAI’s ChatGPT. I think what a lot of people didn’t appreciate when ChatGPT was first released was that it was a public service; it was open to the public and free.

“If you’ve ever heard the old saying that if you’re not the customer, then you’re the product – I think that was very much the case with OpenAI. They needed more training data for their models, and I think a lot of people didn’t quite realise that the data they were putting in, be it confidential, privileged or corporate secrets, could actually form part of the training set and be reincorporated back into the model, where other users could, in effect, see it.”
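As one illustration of the kind of precaution this suggests – a minimal, hypothetical sketch, not anything Consilio or OpenAI actually ships – obvious identifiers can be scrubbed from text before it is ever pasted into a public AI service. The Python example below relies on simple regular expressions; real matter documents would need far more robust treatment.

import re

# Patterns for obvious identifiers; a deliberately incomplete, illustrative list.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"), "[EMAIL]"),               # email addresses
    (re.compile(r"\b(?:\+?61|0)4\d{2}[ -]?\d{3}[ -]?\d{3}\b"), "[PHONE]"),  # AU mobile numbers
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before any external use."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Please call Jane on 0412 345 678 or email jane.doe@example.com."))
# Output: Please call Jane on [PHONE] or email jane.doe@example.com. -> [EMAIL]

Even a rough filter like this reflects the underlying principle: treat anything sent to a free public service as potentially public.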

In light of these ongoing risks, Scott recommended a more cautious approach to experimenting with and implementing new AI technologies.

“I think it does come back to that safe experimentation sort of angle. There are going to be mistakes, there are going to be missteps. I think the key is to make them in a place where lawyers can feel safe and, especially, not feel judged,” he said.

“I think one of the really great uses of GenAI is to ask those silly questions or to really just have a safe space where you can try something and see if it works.”

The transcript of this podcast episode was slightly edited for publishing purposes. To listen to the full conversation with Mitchell Scott, click below:

Lauren Croft

Lauren is a journalist at Lawyers Weekly and graduated with a Bachelor of Journalism from Macleay College. Prior to joining Lawyers Weekly, she worked as a trade journalist for media and travel industry publications, including Travel Weekly. Originally born in England, Lauren enjoys trying new bars and restaurants, attending music festivals and travelling. She is also a keen snowboarder and, pre-pandemic, spent a season living in a French ski resort.