
AI literacy (not hallucinations) the big worry, experts warn

As more lawyers face disciplinary actions for improperly using AI in the legal profession, the root cause has emerged as insufficient training and a lack of experience with AI rather than the often-discussed problem of AI hallucinations.

Grace Robbie | 24 February 2025 | Big Law

Despite established guidelines and regulations governing the use of artificial intelligence (AI) in the legal profession, lawyers continue to face disciplinary measures for improper use of the technology.

Earlier this month, the Office of the NSW Legal Services Commissioner was requested to investigate a lawyer’s use of ChatGPT in an outline of submissions and an amended application that included “non-existent” citations and quotations.

To raise awareness of such issues, and as reported by Reuters, US personal injury law firm Morgan & Morgan sent an email earlier this month to its legal team of over 1,000 members, alerting them about the risks AI poses in legal filings.

The email stressed that while technology can be a valuable tool, “it can never be relied on solely for legal work”, as it has the potential to “generate plausible responses that may be entirely fabricated information”.

This message comes in the wake of an incident in which two of its lawyers were threatened with sanctions by a federal judge in Wyoming for including “fictitious case citations” in a lawsuit against Walmart.

One of the lawyers involved in the matter admitted that the citations were generated by an AI program that had “hallucinated” the cases.

While much attention has been paid to AI’s tendency to “hallucinate” or generate false information, experts argue that the root cause of the issue lies not just in the technology itself but in lawyers’ insufficient understanding of the limitations and risks associated with AI tools.

In a statement provided to Lawyers Weekly, Dr Ilia Kolochenko, partner and cyber security practice lead at New York-headquartered firm Platt Law LLP, pointed out that while many lawyers are aware of the risks associated with AI, the core issue within the legal profession lies in “insufficiently trained or inexperienced lawyers, ranging from associates to senior partners, who may all have a knowledge gap in a specific area of legal practice”.

“The AI hallucinations phenomenon has been so widely mediatised and enthusiastically discussed by virtually every single user of all social networks, that it would be unwise to hypothesise that lawyers are unaware of the hallucinations or other AI-related risks.

“The true problem is that even after a thorough verification of AI-generated or AI-tuned legal content, say, a complex multijurisdictional M&A contract or a response to antitrust probe, some lawyers simply cannot spot legal mistakes related to both procedure and substance of the case,” Kolochenko said.

An additional concern, according to Kolochenko, is that some lawyers prefer to quickly scan AI-generated content for apparent errors rather than consulting more experienced colleagues when confronted with uncertainties or complex legal matters.

“Instead of talking to their more experienced colleagues about uncertainties or complex legal issues, many lawyers rather follow their gut feeling while reviewing AI-generated content, merely looking for obvious mistakes or gross inaccuracies.

“As a result, lawyers may not only provide defective and harmful legal advice to clients, but also gradually lose their intellectual capacities, as several recent researches consistently demonstrate,” Kolochenko said.

Harry Surden, a law professor at the University of Colorado’s law school who specialises in AI and the law, told Reuters that while AI tools like ChatGPT can be increasingly useful when applied correctly, lawyers must invest time in understanding “the strengths and weaknesses of the tools”.

He also pointed out that the issue is not so much that AI is generating hallucinations but rather the growing number of instances demonstrating that many lawyers “lack AI literacy” to understand fully or identify errors in legal content generated by AI.

As GenAI continues to evolve and assist in various aspects of the legal process, an increasing number of lawyers are adopting this technology. According to a report by Thomson Reuters, 63 per cent of lawyers surveyed stated that they are using AI in their practice, with 12 per cent indicating that they use it on a regular basis.

Andrew Perlman, the dean of Suffolk University’s law school, put the matter bluntly: lawyers who fail to double-check AI-generated citations are guilty of “incompetence, just pure and simple”.
