
‘The creative and positive use’ of GenAI in legal education

As the profession becomes increasingly digitised, legal educators must now assume GenAI and other emerging technologies will be a normal part of legal practice moving forward – and accommodate AI tech in what graduates are being taught.

Lauren Croft | 29 July 2024 | Big Law

Since the launch of ChatGPT in late 2022, AI tech has prompted waves of change in the legal profession, forcing legal workplaces and educators to evolve to keep up with new tech developments.

This is particularly prevalent as traditional roles within legal practice evolve with the adoption of AI, according to College of Law Centre for Legal Innovation executive director Terri Mottershead.

“The fundamental change to traditional roles is from knowledge worker or gatekeeper to experienced trusted advisor. Lawyers have been in the business of finding, acquiring, collating, analysing, interpreting and advising on knowledge that often only they could access, control or know how to interpret. And, our industry still charges for that by the hour,” she told Lawyers Weekly.


“Additionally, we still tend to work too often on the premise that clients must come to us – our office or the court – that’s all changing! AI may have a way to go before it makes legal knowledge completely transparent, accessible, affordable and levels the playing field for all who have it, but it is heading in that direction rapidly.

“The impact of this on our roles as lawyers is already apparent. Our work will become less about what we know and more about what we do with it, how we focus on outputs and not hours, how we customise solutions for clients, and how we make their interactions with us seamless, convenient, and immediate. We won’t be able to do that without AI or deep, personal human connections – knowing our clients well and demonstrating our humanity – these are the capabilities we need to enhance and recruit for right now.”

In light of all of this, Mottershead said the profession needs to be “asking and answering” what AI “won’t be able to do”. The answer, she said, will become a “competitive advantage” and a market differentiator.

“What measures can be taken now to support the transition? Learn, educate, engage, experiment, iterate and then wash and repeat …forever! Resources need to be directed to supporting the people part first and foremost … then transformation, continuous improvement, agility … then tech,” she added.

“Our challenge is to avoid the trap of thinking that the priorities should be reversed.”

Following the launch of ChatGPT, universities across Australia expressed concerns about the bot being used to cheat within schools, with a panel discussion at UNSW emphasising the importance of teaching students to use AI tools “ethically, morally and legally”.

This also led to “widespread panic”, according to College of Law chief academic officer Lewis Patrick, who said “the threat to academic integrity became obvious” following the emergence of the platform.

“This thinking has changed, and there is now emerging a consensus that focuses on the creative and positive use of generative AI in education. AI has enormous potential as a co-teacher and a co-designer. It can assist teachers and course designers to be more creative and innovative in the activities and assessments they provide to students.

“And it is now obvious to educators that an important part of teaching in any discipline, especially law, is going to be assisting students to use AI in ethical and professional ways. All students now need to be equipped to work in an AI world,” he said.

“Academic integrity is still a vital concern, of course, but education providers are developing strategies to protect this. At the College, we rely strongly on oral assessment as a central part of our strategy to ensure academic integrity. This is coupled with the redesign of coursework activities to ensure that when students use AI, they do so responsibly and critically.”

As a result, the College of Law has developed an academic AI policy, aimed at making sure students receive an ethical and “authentic learning” experience amid new tech.

“The College’s pedagogical approach has always been based on learning applied in a real-world context, both in the practical legal training (PLT) program and in our applied law master’s program. The real world of legal practice now includes a range of technological applications – most recently and most strikingly, the rapidly evolving use of generative AI,” Patrick said.

“Our Academic AI Policy is aimed at reducing the amount of time that lecturers need to spend in low-level, repetitive tasks so that they have more time to spend in high-value interactions with their students where they can share their practice expertise.

“These interactions will include live web conferencing sessions and one-on-one personal calls to check in on progress. AI tools will be provided to students to allow them to improve their work prior to submission, and lecturers will be freed up to be less a marker of students’ work and more a supportive coach and supervisor.”

The College’s Academic AI Policy takes the approach of embedding the use of AI, and significantly the critique of AI, into the formative tasks that make up the curriculum for students, according to Patrick.

“As an example, under our policy, some tasks in a program are being redesigned to allow students to use generative AI – but with the requirement that they must also submit a reflection on the adequacy of the AI draft and how the student improved it. Both their work in the task and their reflection will be assessed.

“In other tasks, such as drafting a statement of claim, we may provide the student with an AI-generated first draft that contains errors and require the student to critique and correct it. Both these approaches require the student to exercise critical thinking, demonstrate knowledge and apply legal principles and skills – exactly as they will need to do when they engage with generative AI in practice,” he said.

“An essential ethical dimension under our policy is transparency. Students will always be required to disclose when they have used AI in a task and failure to do so will be an academic conduct matter. The broader ethical considerations around use of AI in legal practice are addressed in a series of compulsory online modules embedded in every course.”

While these compulsory modules address the ethical considerations of AI, Patrick said that ethics around the use of AI still have some evolving to do.

“The ethics of the use of AI in the broader legal landscape are still undefined and a work in progress. There is still no settled position on the ownership of what generative AI produces, on whether there is an obligation to disclose the use of AI, on how intellectual property and privacy should best be protected and to what extent the use of AI restricts diversity,” he said.

“It will probably take years before these issues are settled through a combination of legislation and new professional conduct regimes.”

Lauren Croft

Lauren is a journalist at Lawyers Weekly and graduated with a Bachelor of Journalism from Macleay College. Prior to joining Lawyers Weekly, she worked as a trade journalist for media and travel industry publications and Travel Weekly. Originally born in England, Lauren enjoys trying new bars and restaurants, attending music festivals and travelling. She is also a keen snowboarder and pre-pandemic, spent a season living in a French ski resort.