Policies for use of ChatGPT needed in legal workplaces
AI technologies are set to have a massive impact on lawyers, law students and the regulatory landscape, meaning that legal workplaces will need to implement policies governing the use of ChatGPT and similar platforms moving forward.
OpenAI’s chatbot, ChatGPT, has made global headlines this year.
Alongside its benefits, however, there are a number of risks associated with AI technologies, particularly when they are used within workplaces.
In a legal context, these risks could include data privacy and confidentiality breaches, as well as waiver of privilege, Harmers Workplace Lawyers executive counsel and team leader Amy Zhang told Lawyers Weekly.
“Employees may be inputting sensitive and highly confidential information into ChatGPT, creating the risk that such data may be inadvertently or deliberately revealed to third parties or the general public, including through ChatGPT using such data when responding to other requests or through hacking. It is unclear how the data inputted into ChatGPT is stored, how that data is protected, and who that data is available to.
“The disclosure or misuse of such data can result in significant consequences, including not just the revelation of personally and/or commercially sensitive and confidential data, but potential exposure to claims for damages and penalties for breaches of the law,” she said.
“The exposure of commercially sensitive data to ChatGPT may also be a relevant consideration when it comes to the enforcement of restraints or the making of privilege claims, as courts may consider that the data is not truly confidential and/or there has been the relevant inconsistency such as to waive confidentiality and/or privilege, and that such information should not be protected in those circumstances.”
While the bot can be a fantastic tool for those in a number of different roles, ChatGPT does not appear likely to wholly replace lawyers, at least not yet, given concerns raised about the accuracy of the program.
“Another significant danger of ChatGPT is that the information it provides may not be accurate. This is acknowledged by ChatGPT itself. Because the model is trained on a large amount of data, it may not always be able to distinguish between credible and inaccurate information, and may be easily influenced into believing inaccurate information is accurate,” Ms Zhang explained.
“Indeed, when I was testing ChatGPT for the first time, the platform was (confidently) providing inaccurate legal advice and even citing cases that do not exist, and was easily led to believe certain information was true. The information in its system is also not up to date, with ChatGPT stating that it is only aware of world developments up to 2021. Accordingly, there may be significant consequences in relying on the data provided by ChatGPT, particularly if such data is inaccurate or out of date.”
To mitigate these risks, ChatGPT should be used in a “limited and careful manner”, according to Ms Zhang, who said that any content it generates should be closely scrutinised for both reliability and accuracy.
“This would include not inputting confidential, sensitive or privileged information into the platform, and not taking any advice or information it generates at face value and relying on or accepting such information as accurate without independently and critically assessing the same. In its current form and with its current deficiencies, it would be dangerous to rely on and use ChatGPT for legal research or the formulation of legal advice,” she said.
“Given the risks associated with ChatGPT, legal workplaces should have policies in place governing the use of ChatGPT for legal work. This may extend all the way up to banning the use of ChatGPT for legal work. At the very least, there should be policies prohibiting the inputting of confidential, sensitive or privileged information into the platform and the reliance on ChatGPT for legal research purposes or the preparation of legal advice.”
ChatGPT can therefore still be used within legal workplaces, but it should not be relied on wholly.
“ChatGPT may be able to assist with basic drafting, such as drafting of non-legal correspondence and documents, and may be able to assist with BD and marketing tasks, including drafting articles. It may also assist with simplifying and de-jargoning complex or long-form content, although care will need to be taken around inputting confidential, sensitive or privileged information into the platform and the accuracy of the output,” Ms Zhang said.
“In its current form and with its current deficiencies, it would be dangerous to rely on and use ChatGPT for anything other than basic non-legal tasks; however, the potential is definitely there for such a platform to revolutionise and make more efficient how we do certain legal tasks.”
Lauren Croft