
AI in practice: Experiencing the benefits and dangers

At this time, lawyers can no longer look towards the horizon stroking their chin, considering the coming of the AI apocalypse. The upheaval is upon us, disrupting the way we work. And that’s OK, writes Yule Guttenbeil.

Yule Guttenbeil | 18 May 2023 | SME Law

Much has been said about how artificial intelligence (AI) will make many professions obsolete, including (and perhaps especially) lawyers. The media seems to be filled with “end-of-days” headlines and out-of-context hot takes, and I haven’t found much of the discussion useful in figuring out how AI will affect the way I work. So what follows is a summary of my experience experimenting with AI and beginning to integrate it into the way I work.

The journey begins

I have used Notion for about four years. When I was given the opportunity in February to try its AI feature, I opted in without expecting it to improve the way I work very much. I started feeding it simple, single-paragraph prompts like “Write a story about a ghost ship where all the inhabitants are actually alive, but simply can’t see each other” or “Write a blog about brand protection in Australia”.

The responses I received were somewhat impressive but not terribly useful. They might provide a general structure to use as a starting point for further development, but not a finished article by any means.

As I pushed the envelope by giving the AI longer, more specific prompts and follow-up instructions, I began to see the power of the technology to improve the quality of my work.

I am currently a sole practitioner, and I miss having access to one of the biggest benefits of larger firms — other people to bounce ideas off and ensure I haven’t missed anything in my work. I have found that the current raft of AI assistants are pretty good at doing that.

I can feed Notion AI a complete draft email and ask it to “improve the writing”. It will generally do a solid job of editing the email: correcting spelling and grammar; removing redundant text and repetition; expanding on some ideas; and producing a well-written, concise email overall.

It will also usually puff the text out with unnecessary connecting phrases, add supplicating framing, and be gratuitously formal. So, while I appreciate the second set of eyes, text run through an AI still needs a second, manual edit or additional instructions in the prompt to avoid some of this.

Given how useful I found Notion AI, I began using ChatGPT (then powered by GPT-3.5) to compare its performance. I discovered that both were good and had different strengths and weaknesses. Just as giving the same task to two different people will yield two different but equally good results, the two AIs performed the same task with vastly different but equally valid results.

I found that Notion AI is better at “light touch” tasks like revising an email, letter, or article while maintaining my written voice. ChatGPT erases any distinctiveness from my writing style; instead, it writes in a way that is conceptually sound, strong, and confident. I tend to feed it a list of ideas I want developed into a complete first draft.

At this stage, I was pretty happy with the results and could see that using AI as part of my everyday workflow would improve the quality of my work and speed up the proofreading process. I was then contacted by Connor James of Law Quarter, a friend of mine who is both a computer scientist and a lawyer. He has developed a software solution called Titan that integrates with GPT-4 (the multi-modal large language model behind ChatGPT-4) and unleashes its power on legal and regulatory documents. Titan allows documents of unlimited length to be reviewed by GPT-4 and provides different types of feedback on them. (GPT-4 usually has a limit of approximately 5,000 words, which is not very useful for most legal document reviews.) I was blown away by what it could do. It was very accurate at identifying missing clauses and making recommendations on areas that could be improved.
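Conceptually, the way a tool can push past a model’s word limit is straightforward: split the document into chunks that each fit under the limit, review each chunk, and combine the feedback. The sketch below is my own illustration of that idea, with made-up numbers and function names; it is not Titan’s actual implementation.

```python
WORD_LIMIT = 3000  # illustrative: stay well under the ~5,000-word ceiling
OVERLAP = 200      # repeat some words so clauses aren't cut mid-sentence

def split_into_chunks(text: str, limit: int = WORD_LIMIT, overlap: int = OVERLAP):
    """Split a long document into overlapping word-count-limited chunks."""
    words = text.split()
    chunks = []
    start = 0
    while start < len(words):
        end = min(start + limit, len(words))
        chunks.append(" ".join(words[start:end]))
        if end == len(words):
            break
        start = end - overlap  # step back so context carries across chunks
    return chunks

def review_document(text, review_chunk):
    # review_chunk stands in for a call to the language model on one chunk;
    # here it can be any function that returns feedback for a piece of text.
    return [review_chunk(chunk) for chunk in split_into_chunks(text)]
```

The overlap matters: without it, a clause straddling a chunk boundary could be flagged as “missing” from both halves.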

Unfortunately, Titan was not yet available for use, but by now I’d caught the bug. I could see what was truly possible with the technology if implemented well. I tried downloading an earlier Titan tool, also developed by Connor, that runs inside MS Word; but since I am a Mac user, I was unable to use it. So, I searched for something similar and found Spellbook, a plugin for MS Word that uses a mix of AI models (including GPT-4). More on that soon.

Down the rabbit hole

Before I go on, I must mention one of the strangest and most disturbing experiences I have had in my experiments with generative AI. I gave ChatGPT-4 the task of finding a case more recent than 2015 on a particular legal principle and drafting a summary of it. The purpose was to replace an example in some training material that was considered out of date.

ChatGPT provided an excellent case summary, including its full citation. Wow! I then searched for the case using the citation but couldn’t find it. There was a case with the same name, but the reference number for the journal related to a completely different case. I then checked the case with the full name but a different law journal reference and discovered that the subject matter did not relate to the issue I had asked ChatGPT to find a case on.

I pointed this out to ChatGPT. It apologised, then generated a new summary, with full citation, of a different case that it assured me was real. It was not. ChatGPT even went so far as to provide me with a hyperlink to the case it told me it had summarised. The link worked, but it led to a completely different and unrelated case.

Once again, the case with the cited name bore no relation to the subject matter described in the summary ChatGPT generated. ChatGPT straight up lied to me. I called Connor and asked him what was going on. He told me that when an AI generates and refers to information that does not exist, it is called “hallucinating”, and it is actually quite common. He assured me that Titan has reduced the hallucination problem by incorporating real source material.
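The “real source material” approach Connor described is often called grounding (or retrieval-augmented generation): rather than letting the model recall citations from memory, the tool retrieves the actual source passages first and instructs the model to answer only from them. A minimal sketch of how such a prompt might be assembled, as my own illustration rather than Titan’s code:

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that confines the model to supplied source passages."""
    # Number each retrieved passage so the model can point to the source
    # supporting its answer.
    sources = "\n\n".join(
        f"[Source {i}]\n{passage}" for i, passage in enumerate(passages, start=1)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say you cannot find one.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )
```

A model can still misread a real passage, so this reduces rather than eliminates the problem; but it cannot invent a citation that was never retrieved if the checker only accepts answers tied to a numbered source.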

This experience was fundamental in forming my views on how to use AI assistants. It is why I use the term “AI assistant”: they are not fit to be relied on as experts. They are tools that must be handled with great care by people who can judge whether the generated text is useful and how it must be edited. The dangers of AI are very real, largely because it will inevitably be used by unsophisticated users who confidently treat junk outputs as reliable and high quality.

It’s like magic!?

Now, back to Spellbook. I have signed up for early access and have found that Spellbook does some of the things that Titan does, but, from what I can see, not as well. However, it works quite differently from the web version of Titan that uses GPT-4. It will automatically generate a summary of the document for quick navigation, create simple explanations of the document or specific clauses, find conflicting terms, and cast a range of other useful “spells”. The main thing I use it for is quickly drafting clauses or definitions to a brief.

My results have been extremely mixed. Sometimes the clause it drafts is incredibly good. In some cases, shockingly so. But more often, I have found it unfit for purpose, which is a weird place to be with a software product. We expect software to get it right all the time. With the current raft of AI assistants, you are doing well if it satisfactorily meets the prompt 50 per cent of the time. With Spellbook, I’ve found it to be useful about 40 per cent of the time. In most cases, it will “think” for quite some time before saying, “Sorry, something went wrong. Please wait a few seconds and try again”. I’ll repeat the same prompt a number of times to no avail. Other issues I have experienced include it generating a completely unrelated response to a prompt, or telling me that clauses are missing that are clearly in the document.

Yet, I’m still prepared to pay for it. I view Spellbook (and the current generation of AI assistants generally) as a work in progress. The technology is still new, and developers are working quickly to build useful products on top of impressive but far-from-perfect large language models. I have experienced the same issues in ChatGPT as well (though, in both cases, less frequently than when I started).

So, the issues are not unique to Spellbook. I also acknowledge that I have been given early access to a very new and unfinished product, for which I am grateful. For instance, being able to work on longer Word documents is a clear drawcard for Spellbook, but it can currently only assess four pages on either side of the cursor’s location in the document. This limitation will soon be removed, meaning it will become more accurate at identifying missing clauses and performing tasks across the entire document.

If software like Spellbook can help me complete 40 per cent of drafting or review tasks in half the time they would usually take, that is still a significant net benefit. Now, Spellbook isn’t the only legal AI assistant plugin for Word, but I opted for it because it is developed by Rally Legal, a company with a longstanding background in legal tech. Even though I can’t say it is great right now, I am confident it will get there. I also anticipate other software coming to market, and the products can only improve over time.

Where to now?

AI isn’t going anywhere. The legal industry has long been touted as ripe for upset by AI technology, the implication usually being that lawyers won’t be needed. I don’t believe that will ever be the case. AI is a tool that requires someone with expertise to discern whether its outputs are valuable. At this time, experienced lawyers are the only ones who can do that.

At its heart, law is a profession of avoiding, managing, and resolving conflicts between human beings. All an AI can do is make suggestions based on what it has seen done in the past. While it might be generative, it is not original. It cannot truly understand a person’s motivations or what will be acceptable to them, which is always the issue at the heart of the law. It can, however, help lawyers in their work of creating the documents used to manage their clients’ conflicting desires.

None of that is to suggest that AI is not upsetting the way lawyers work. It absolutely is. In particular, it will act as a barrier to entry into the profession for graduate lawyers, because the menial legal tasks usually given to junior lawyers and clerks can generally be done easily by AI. This will make the learning curve from graduate to experienced, knowledgeable lawyer incredibly steep. It will absolutely affect the legal labour market and likely push the development of legal AI to take on more of the technical and sensitive work as well.

At this time, lawyers can no longer look toward the horizon stroking their chin, considering the coming of the AI apocalypse. The upheaval is upon us, disrupting the way we work. And that’s OK.

While often slow to adopt new technologies, the legal profession usually gets on top of them pretty well. Word processing, email, and even video calls have all had major impacts on how law is practised, and the sky never fell down. Lawyers will figure out how best to use AI, and how to respond to its use more broadly in society. All we can do is our best to understand its limitations, how best to use it, and the impacts of how others use it. We can then adopt it into our practice in a way that enhances our work and provides better value to our clients.

Yule Guttenbeil is the principal of Attune Legal.
