
Legal departments should ‘be open’ to making GenAI work for them

For legal teams looking to experiment with GenAI, this engineer recommends “trial and error in a safe space”, as well as using AI platforms directly and often to get comfortable with the technology.

Lauren Croft | 15 October 2024 | Corporate Counsel

Last month, technology solutions company Consilio launched Guided AI Review, a locally hosted, tailor-made AI solution that combines custom algorithms with seasoned, AI-trained document review experts to deliver faster, higher-quality results in document review for the legal profession.

Following this launch, Consilio senior prototyping engineer Mitchell Scott reflected on self-hosted AI platforms and the future of the legal profession on an episode of The Corporate Counsel Show, produced in partnership with LOD, recently acquired by Consilio.


The last few years have been a “whirlwind”, according to Scott, who said that self-hosted AI platforms are becoming more and more common.

“On one side, you have these service providers, such as OpenAI and Microsoft Azure, [which] are providing these services where you can effectively ask a question of their models, they hold them behind this walled garden, and then you get back a response. You have no visibility over what’s happening; you have no control over how the model works; it’s provided as a black box. Whereas over the last year especially, we’ve found there’s been this incredible confluence of open-source models,” he said.

“These are models that you can download from well-known repositories, well-known sources on the internet. They are made by reputable people, like Meta and other AI research organisations; like OpenAI, but actually open. You can take those and, with the right hardware, if you have the supercomputer capacity that you need to run them, you can actually run these all inside your own environment. So, you’re not sending your data off to some third party; you’re actually keeping it all within your environment.”
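As a rough, hedged illustration of what running a model “all inside your own environment” can look like in practice, the Python sketch below loads an open-weights model with the Hugging Face transformers library. The model name and hardware assumptions are illustrative, not a description of Consilio’s setup.

```python
# Minimal sketch of self-hosting an open-weights model, assuming the
# Hugging Face `transformers` library and a machine with a capable GPU.
# The model name is an example; weights are downloaded once, and after
# that, prompts and documents never leave the local network.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # example open-weights model
    device_map="auto",  # place the model on available local hardware
)

prompt = "List the key obligations in the following clause:\n<clause text>"
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```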

These self-hosted AI platforms come with a range of benefits: they keep data in-house and, when tuned to a specific task, can be both faster and more accurate than larger general-purpose models.

“You can have all this data within your own environment. You’re not sending, in our case, privileged legal data, which can often be quite sensitive, related to litigation and things like that. So, that’s not being sent off to a third party; that’s not leaving our own internal network. As soon as any kind of data leaves the network, there is the risk that someone can intercept it. But on top of that, having these models locally allows us to do some really cool things,” Scott said.

“So, for instance, we can take client data and, with their permission, working in partnership with them, we can modify a model so that it knows more about their company. We can take all those internal documents, that very valuable private data, and use it to create models that will be better able to review documents from their company. We’ve also found that when you’re using these local models, they’re a lot faster and, when they’re homed in on a certain use case, a lot more accurate at the same time.

“So, if you’re using a very powerful model, I’ve always thought it’s a bit like taking a Lamborghini to the shops: you’re paying an enormous amount of money to do a very specialised task, and paying the cost of it being able to go to a racetrack. Whereas if you right-size these models, if you use a small model and tune it for a very specific use case, you get better performance at a fraction of the cost.”
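To make that right-sizing idea concrete, here is a minimal, hypothetical sketch (not Guided AI Review’s actual method) of attaching LoRA adapters to a deliberately small open model so it can be tuned locally for one narrow review task, assuming the transformers and peft libraries:

```python
# Sketch of right-sizing: tune a small open model for one narrow task
# (e.g. flagging responsive documents) instead of renting a giant one.
# Base model, labels and module names are illustrative assumptions.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = "distilbert-base-uncased"  # deliberately small base model
model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=2,  # e.g. responsive vs non-responsive
)

# LoRA trains a few small adapter matrices rather than all the weights,
# so the tuning run fits comfortably on modest, locally hosted hardware.
lora = LoraConfig(
    task_type="SEQ_CLS",
    r=8,
    lora_alpha=16,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a tiny fraction is trainable
# (The training loop over the client's labelled documents is omitted.)
```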

For legal businesses and teams across the profession, this approach is the way forward, according to Scott.

“All the data that we interact with as lawyers is, by definition, mostly privileged. It’s very confidential. It’s the stuff that, as corporate lawyers, is sort of life and death for our clients. So I think the price of compute will go down; we won’t see these crazy GPU prices from Nvidia forever. There are competitors already emerging,” he said.

“So as the price of compute goes down, and the expertise starts to be developed around the world to actually host these supercomputer clusters in your own data centre, I think we’re actually going to move towards our own on-premises local models, fine-tuned for specific use cases.”

However, the profession is still in the “early days” in terms of what data management needs to look like in order to use generative AI (GenAI) effectively, added Scott.

“GenAI is not this silver bullet that can take your messy data in 20,000 different places and instantly form a cohesive answer from it. Data management is still one of the missing pieces before we can truly leverage GenAI properly. It’s about being able to lay out everything that a system would need to know, rather than making assumptions about the business or the area that you work in. If you can be really explicit when you give data to GenAI, it does a lot better.”
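As a small sketch of what being explicit can look like in practice, assuming a simple retrieval-style workflow rather than any particular product, the helper below packs the relevant documents into the prompt instead of letting the model guess at the business context:

```python
# Sketch of the "be explicit" point: hand the model the facts it needs
# rather than assuming it knows your business. All names are illustrative.
def build_prompt(question: str, context_docs: list[str]) -> str:
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(context_docs)
    )
    return (
        "You are assisting an in-house legal team. Answer only from the "
        "documents provided, and say so if the answer is not in them.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt(
    "Which agreements contain a change-of-control clause?",
    ["ACME supply agreement, clause 14.2: ...", "Beta services MSA: ..."],
))
```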

“When it comes to getting familiar with GenAI and upskilling, there’s really no silver bullet apart from just using the technology and using it directly. I use ChatGPT literally every single day. It writes a lot of my code (not the really important stuff), but it automates a lot of those low-value tasks, as well as all sorts of other things,” he said.

“I think there’s really no reason that we shouldn’t be interacting more directly with it. We’ve seen really low uptake in the market from tools that sort of obfuscate or wrap the base technology. And I think we’ve seen this huge uptake of ChatGPT over those tools, simply because people can trust and understand the technology more when they’re using it directly.”

Furthermore, Scott advised legal teams keen to get started with the technology to experiment and stay open-minded moving forward.

“I really think it is trial and error, but trial and error in a safe space. We’ve seen [that] there were some early missteps when people were putting confidential data into ChatGPT, so having a place where you can try and experiment, and potentially fail, is really, really important. There’s really no way to understand where the boundaries are, what it can and can’t do, unless you’re able to really give it a go,” he said.

“And be open to finding ways to make it work. I’ve always found that you can approach this technology from a number of different angles. You can ask a question in so many different ways, or get it to do a task in so many different ways. You have to keep this open mindset and want it to succeed. If you’re trying to find ways to make it fail, you definitely can, but that’s not really the point, in my opinion.”

The transcript of this podcast episode was slightly edited for publishing purposes. The full conversation with Mitchell Scott is available on The Corporate Counsel Show.

Lauren Croft

Lauren is a journalist at Lawyers Weekly and graduated with a Bachelor of Journalism from Macleay College. Prior to joining Lawyers Weekly, she worked as a trade journalist for media and travel industry publications, including Travel Weekly. Born in England, Lauren enjoys trying new bars and restaurants, attending music festivals and travelling. She is also a keen snowboarder and, pre-pandemic, spent a season living in a French ski resort.