
Hyper-smart AI ‘not that far away’ for lawyers

In terms of the usability of ChatGPT for legal professionals, this professor revealed that copyright and legal liability are two of the bot’s main drawbacks — but that artificial intelligence (AI) technology like this will only continue to evolve moving forward.

Lauren Croft 17 April 2023 Big Law

Patrick Fair is the principal at Patrick Fair Associates and an adjunct professor in the School of Information Technology at Deakin University.

Speaking recently on The Lawyers Weekly Show, produced in partnership with Legalwise Seminars, he spoke about ChatGPT and what it means for legal practice, as well as some of the issues and challenges around the program and the broader legal implications of such technology.

After making global headlines over the last few months, artificial intelligence platforms like ChatGPT are changing, and will continue to change, the day-to-day operations of legal practice. You can read Lawyers Weekly’s full coverage of ChatGPT and what lawyers need to know, here.

Throughout the last 25 years, Mr Fair has been involved in the policy and public implications of the internet, both lobbying regulators and liaising with industry.

“This new technology here is going to have implications right across the economy. So, I was very interested to get across it,” he said.

“I wouldn’t hold myself out as any sort of special expert in these issues, but I hope that I’ve been looking at it enough so I can get people thinking and that I’m able to share some of the things that I’ve come across in a way which would help people to get a start with something that’s going to really change the way we practise law.”

Despite concerns raised around the accuracy of the program, ChatGPT and similar AI tech are only set to improve, an evolution Mr Fair said would come in part from giving AI more data and information to work with.

“One of the ways to make it a lot better is just to give it more data and for it to learn, to become a bigger model with more information. But there’s a debate in the AI community about that, with one group saying, ‘Look, unless it can actually comprehend the way a human can comprehend and have a model of the universe forming in its head, it’s not really conscious, and it can’t really get a lot better’,” he explained.  

“So, when we get to the model, which is not only trained on everything that’s available and up to date, plus it has a kind of ability to know things and know what’s right and what’s wrong, then we’ll really get something; and that’s not that far away.”

In terms of the lawfulness of this type of AI platform and its utilisation, there are a number of considerations for legal professionals, according to Mr Fair, who said that legal liability is a key issue in the practical use of ChatGPT.

“At a fundamental level, the first thing to say is that it hallucinates because it’s only predicting what the right answer might be based on analysis of language. It’s often wrong. I’ve been really impressed by how accurate some of its responses to very esoteric technical legal questions [have] been, but I’ve also been amazed about this phenomenon where it will make up an answer to something, you will tell it that’s the wrong answer, it’ll apologise to you, and then it’ll tell you another wrong answer, and then it will cycle through the wrong answers,” he added.  

“So, from a legal liability point of view, read the disclaimers on the front page. You should not go to print relying on the output from this thing. You need to check it. It is very helpful and useful, particularly with soft things like lists and summaries and first drafts of documents, but you cannot rely on it from a legal point of view.”

Copyright is another “area of hot interest”, and one that will potentially pave the way for law reform to settle who owns AI-generated output.

“There are a number of fronts where this is challenging. One is the right to scan and absorb information for the learning process, and there’s some litigation going on in the US where people are claiming that it shouldn’t have been trained on their material. Then there’s the question of who owns the output. If you read the terms of ChatGPT, it assigns the copyright in the output to the person who asked the question, while noting that it could give the same answer to different people and that each one would be original, but it’s not guaranteeing that it won’t,” Mr Fair added.

“That’s a very contentious question from a copyright point of view because these are generative tools. If what you give them to generate something is not sufficient to show that you are the creator of the work (and that seems to be a possibility if you set it up so that it automatically produces, say, an article written up from actual sports results), then, if that’s an automatic process, there is no human creator, and there is no copyright.

“So, there’s nothing for you to own and nothing for you to protect. So that’s an area where I think there’s going to be a need for some law reform.”

Given how fast this technology is developing, regulations will need to be quickly brought in to keep up, warned Mr Fair.

“ChatGPT is owned by OpenAI, and it’s got a lot of money in there from Microsoft, where it’s being deployed as part of Bing. There are careful disclaimers, and they’re also constantly building new guardrails to prevent it from being used to create hate speech and to produce outcomes which are damaging or unlawful,” he said.  

“But there are versions of this kind of software [that] are freely available to be downloaded on people’s PCs and can be tested. And when we see how that is used to, for example, create spam emails or fraudulent websites or build things [that] are potentially dangerous to community safety, we will need regulations that control their use and give law enforcement the ability to act quickly to stop the adverse consequences.”

The transcript of this podcast episode was slightly edited for publishing purposes. To listen to the full conversation with Patrick Fair, click below:

Lauren Croft

Lauren is a journalist at Lawyers Weekly and graduated with a Bachelor of Journalism from Macleay College. Prior to joining Lawyers Weekly, she worked as a trade journalist for media and travel industry publications and Travel Weekly. Originally born in England, Lauren enjoys trying new bars and restaurants, attending music festivals and travelling. She is also a keen snowboarder and pre-pandemic, spent a season living in a French ski resort.
