
ChatGPT not likely to wholly replace lawyers (yet)

After a rumoured $10 billion injection from Microsoft, OpenAI and ChatGPT are currently making global headlines. Here’s what it could mean for lawyers, law students and the regulatory landscape.

Lauren Croft · 02 February 2023 · NewLaw

Last November, artificial intelligence (AI) research and deployment company OpenAI launched ChatGPT, a chatbot interface for an AI tool called GPT-3 that “interacts in a conversational way” and generates text in response to different prompts. The bot can produce comprehensive and coherent responses — and could potentially be used in a number of industries for admin tasks or document drafting.

Just last week, tech giant Microsoft, which originally invested $1 billion into the start-up in 2019, announced a further multiyear, multibillion-dollar investment in OpenAI, with Microsoft Azure continuing to be the exclusive cloud provider for the company moving forward. The most recent investment, as reported by Semafor, is rumoured to be $10 billion, but Microsoft is yet to confirm.

In a statement confirming the partnership, Microsoft chairman and chief executive Satya Nadella said the aim was to continue to “responsibly advance cutting-edge AI research and democratise AI as a new technology platform”.

“In this next phase of our partnership, developers and organisations across industries will have access to the best AI infrastructure, models, and toolchain with Azure to build and run their applications,” he said.

What (or who) will ChatGPT replace?

A number of jobs — including those within the legal sphere — could be on the chopping block in favour of AI programs like ChatGPT, with the bot already being used to generate legal documents and outline school assignments.

Last week, University of Minnesota Law School Professor Jonathan Choi gave ChatGPT the same test law students take, with 95 multiple-choice questions and 12 essay questions, as reported by CBS News. ChatGPT scored a low-passing mark of C+.

“Overall, ChatGPT wasn’t a great law student acting alone,” Professor Choi later tweeted.

“But we expect that collaborating with humans, language models like ChatGPT would be very useful to law students taking exams and to practicing lawyers.”

Since the program became free and easily available in November last year, UNSW Professor Cath Ellis has seen numerous uses for ChatGPT — but also a few mistakes.

“I know of teachers who have used it to write lesson plans, social media marketing folks using it to write social media marketing plans, and I’ve heard of someone using it to write answers to job interview questions in real time during a telephone interview. I’ve seen it write a sonnet in the style of Shakespeare about returning to work after the holidays. You name it, it can do it. The quality is variable.

“I’ve seen it produce explanations of complex, contested and nuanced topics that are clear, reasoned and error-free. I’ve also seen it make some pretty basic mistakes (the sonnet in the style of Shakespeare didn’t have a Shakespearean sonnet rhyme scheme, for instance). And it doesn’t know the answers to some questions that are comparatively easy to figure out via Google,” she explained.

“An amusing example is that it didn’t know the answer to the question ‘how old do you have to be to use ChatGPT?’ Two people I know asked it this question, and it gave two different answers, both of which were wrong.”

ChatGPT can also write code for computer applications and software, draft basic emails, and handle mid-level writing tasks. Additionally, Columbia Business School Professor Oded Netzer told CBS MoneyWatch that AI programs like this one can also be of use for basic legal forms, documents and agreements.

“There are parts of a legal document that humans need to adapt to a particular situation, but 90 per cent of the document is copy pasted,” he told CBS MoneyWatch.

“There is no reason why we would not have the machine write these kinds of legal documents. You may need to explain first in English the parameters [sic], then the machine should be able to write it very well. The less creative you need to be, the more it should be replaced.”

However, Law Council of Australia president Luke Murphy told Lawyers Weekly that while AI may be able to complete menial legal tasks, it’s unlikely to replace lawyers just yet.

“Sophisticated AI tools like ChatGPT offer potential opportunities to support the legal profession in undertaking various administrative tasks. However, lawyers are especially relied upon for their sound judgement — the extent to which AI can replicate this acumen remains to be seen,” he said.

“How AI is incorporated into legal practice will be a vital area of focus in the coming years, and may even play a role in creating opportunities for increased access to justice. However, the use of this technology in the legal profession is still in its relative infancy and requires significant human consideration to ensure that the output of these tools is correct and useable.”

Regulatory (and other) concerns

Professor Ellis said that one potential legal implication of ChatGPT concerns intellectual property and copyright — and that, as such, regulating AI programs won’t be an easy feat.

“These concerns have been most vociferous in the use of another OpenAI tool, which is called Dall-E. This tool turns text into images, but it does so by drawing on the work of millions, if not billions, of artists, both alive and dead. The same goes for the text it generates. Because it has been ‘trained’ on a huge volume of already written material (including millions of published books), it is using the intellectual property of others for commercial gain. Regulating it in terms of cheating is going to be tricky.

“The NSW and Qld departments of education have recently announced that they are going to ‘ban’ its use in public schools. What this means, in effect, is that they blocked students accessing it via their internet on campus. Blocking it is one thing, [but] banning it is well-nigh impossible (students will still have access from home and via their phone data). The reasoning behind the ban is because the Terms of Use for all OpenAI tools (ChatGPT, Dall-E etc) were changed last December to require all users to be over the age of 18,” she explained.  

“This makes sense not just because it is in the terms of use, but also because one of the known problems with GPT-3 is that it has had a bit of an unfortunate habit of blurting out some pretty off-colour stuff. Reports suggest that this is because some of the material on which it has been trained [was] taken from some of the more unsavoury corners of the internet, including sites like 4Chan and 8Chan.”

Moreover, Mr Murphy warned that utilising AI tools like ChatGPT must be done with caution — by lawyers, law students and academics alike.

“Where these tools are utilised by lawyers, this must be done with extreme care. Lawyers must always keep front of mind their professional and ethical obligations to the court and to their clients. How these tools interact with our overarching duties as lawyers remains an important issue with which the profession is currently grappling,” he said.

“It is also critical to ensure that law students are properly and rigorously assessed, ahead of taking up future roles in the legal profession. The Law Council understands that Australian universities are proactively considering measures [that] are directed towards ensuring the integrity of assessments, given concerns raised by the availability of ChatGPT and similar AI platforms.”

Australian universities tightening anti-cheating rules around AI

The scope of the bot’s capabilities also extends to students using ChatGPT to cheat on assignments and exams, with a number of universities across Australia voicing concerns and putting stricter anti-cheating and academic integrity rules in place.

To this end, Professor Ellis told Lawyers Weekly, ChatGPT could pose dangers for law students — and for their teachers.

“The chat function [of ChatGPT] allows any adult to ask the tool to generate text or answer questions as if they were chatting to another person. The risk it presents is that many of the things assessment tasks in schools and tertiary institutions ask students to do can now be done by this tool, usually to a passable standard,” she said.

“This means that students may be tempted to cheat using this or other AI tools, and it will be hard for teachers to detect if the student has demonstrated genuine learning.”

Deputy chief executive of the Group of Eight Australian universities, Dr Matthew Brown, recently told The Guardian that universities across the country would be “proactively tackling” programs like ChatGPT through increased staff training and more targeted tech detection strategies.

“Our universities have revised how they will run assessments in 2023, including supervised exams … greater use of pen and paper exams and tests … and tests only for units with low integrity risks,” he told The Guardian.

“Assessment redesign is critical, and this work is ongoing for our universities as we seek to get ahead of AI developments.”

Despite this, in terms of what law school teachers and university professors should be doing in the face of the rising AI phenomenon, Professor Ellis’ advice was to “be alert but not alarmed”.

“We have to be realistic about what we can change this close to the start of the academic year, but we also have to accept that we can’t just put our heads down and plough through it as if nothing has changed. On the one hand, we do need to be prepared to take extra steps to secure our assessment tasks to assure that genuine learning has occurred. But on the other hand, we should also be turning to face the challenges and opportunities that tools like ChatGPT present,” she added.

“These tools are already being used in the workforce — including in the legal profession — to achieve things that had been previously unimaginable. There will be no going back. We need to ensure we graduate professionals who are able to work with AI tools in ways that are ethical, moral and, most importantly, legal.”

Lauren Croft

Lauren is a journalist at Lawyers Weekly and graduated with a Bachelor of Journalism from Macleay College. Prior to joining Lawyers Weekly, she worked as a trade journalist for media and travel industry publications and Travel Weekly. Originally born in England, Lauren enjoys trying new bars and restaurants, attending music festivals and travelling. She is also a keen snowboarder and pre-pandemic, spent a season living in a French ski resort.
