
Using a risk-based approach for generative AI

Amid myriad data, privacy, and cyber security concerns, in-house lawyers have been told to adopt a risk-based approach to using generative AI tools.

Malavika Santhebennur | 29 February 2024 | Corporate Counsel

Ahead of her panel session at the Corporate Counsel Summit 2024, financial services legal counsel Chloe Giamadakis said time- and resource-stretched in-house counsel could access a range of artificial intelligence (AI) tools to introduce efficiencies and streamline processes.

These tools could assist with tasks such as due diligence by uncovering background information and reviewing certain areas of focus, as well as document automation that draws on different data inputs to draft standard form documents such as non-disclosure agreements.

Other tools could help lawyers analyse large intellectual property portfolios while drawing insights from the content, Ms Giamadakis said.

“I think corporations, and several industries in particular, are willing to invest in AI and see how it could be better used in their business,” she told Lawyers Weekly.

At the summit, Ms Giamadakis will speak about how in-house counsel could harness generative AI tools to increase efficiencies and outline what ethical adoption of these tools looks like.

How to implement AI tools

Ms Giamadakis recommended adopting a “risk-based approach” for using AI tools in light of the range of data, privacy, and cyber security concerns that are now prevalent.

Indeed, 2022 saw what was described at the time as “potentially the most serious privacy breach in Australian history”, when millions of Optus customers had their names, dates of birth, phone numbers, and email addresses stolen by cyber criminals, as well as licence and passport numbers in some cases.

The Medibank breach followed in October 2022, when the insurer confirmed to the market that it had detected “unusual activity” on its network before disclosing that customer data had been accessed and stolen, affecting as many as 9.7 million current and former Medibank, ahm, and international student customers.

Both data breaches triggered numerous class actions and raised serious regulatory concerns.

In view of this, organisations are taking these concerns and considerations seriously when implementing strategies around the use of AI tools, Ms Giamadakis said.

“They are thinking about where data is going and housed, and who has access to it,” she said.

“As with anything new, you always have groups who look to exploit these vulnerabilities, so I think that data and privacy overlay is really important. On top of this, you have all your ethical duties that you need to consider as a lawyer, including the concept of privilege.”

Navigating the limitations of AI

While acknowledging the benefits of AI tools for the workplace, Ms Giamadakis also cautioned lawyers to be aware of their limitations, especially as society has become accustomed to having immediate access to information.

She cautioned that any output from AI tools would still require a human hand to ensure accuracy and suitability to the task at hand.

“You need to make sure that you give it prompts in the context of what you’re working on,” she pointed out.

“You need to fact-check the information produced by generative AI tools because these technologies learn from human use. As a result, they could inadvertently reflect human biases. You need to ask how a decision was reached because you don’t get that level of transparency with AI.”

She continued: “AI quickly generates a response for you, which is great if you need something quick. But you need to check it afterwards to make sure that the output is suited to your specific circumstances.”

Ms Giamadakis said she hoped robust frameworks and guidelines would continue to be introduced in the near future to reduce some of the risks associated with AI and make it more fit for purpose.

Designing frameworks comes with its own set of challenges because AI has a global reach, which makes monitoring information increasingly complex, she noted.

“But I think, once we get to a point where it is a bit more regulated and operating within a framework, AI will be more fit for purpose, and its use will grow exponentially,” Ms Giamadakis said.

Time is of the essence, however, particularly with new technological tools such as AI that lack a human overlay.

People need to build trust in AI before it is embedded in workplaces, Ms Giamadakis said.

“It’s about developing trust in how AI sources, handles, and interprets information and ultimately generates an outcome for you,” she concluded.

To hear more from Chloe Giamadakis about the practical uses of generative AI for your legal teams and organisation, come along to the Corporate Counsel Summit 2024.

It will be held on Thursday, 2 May, at The Star Sydney.

