
4 issues to prepare for with AI regulation

In the face of proposed frameworks and guidance for artificial intelligence, law department leaders need to address four critical areas, according to Gartner.

Jerome Doraisamy | 04 July 2023 | Corporate Counsel

US-headquartered technology research and consulting firm Gartner has identified four key issues that general counsel, heads of legal and chief legal officers must be across when establishing oversight of artificial intelligence (AI).

Awareness of, and ultimately action on, this quartet of considerations will, Gartner argued, enable businesses and organisations to move forward while awaiting regulatory guidance, both domestic and global.

Law department leaders, Gartner senior research principal Laura Cohn explained, “can examine where the various proposals overlap to help senior leaders and the board prepare for regulatory shifts as they develop their corporate AI strategy”.

“While laws in many jurisdictions may not come into effect until 2025, legal leaders can get started while they wait for finalised regulation to take shape,” she said.

Embedding transparency

According to Ms Cohn, “transparency about AI use is emerging as a critical tenet of proposed legislation worldwide”.

“Legal leaders need to think about how their organisations will make it clear to any humans when they are interacting with AI,” she submitted.

For example, with AI use in marketing content and the hiring process, legal leaders can help by updating the privacy notices and terms and conditions on their company’s websites to reflect AI use, Gartner outlined.

Ensuring risk management is continuous

“GC and legal leaders should participate in a cross-functional effort to put in place risk management controls that span the lifecycle of any high-risk AI tool,” Ms Cohn continued.

“One approach to this may be an algorithmic impact assessment (AIA) that documents decision making, demonstrates due diligence, and will reduce present and future regulatory risk and other liability.”

Besides legal, Gartner added, GCs should involve information security, data management, data science, privacy, compliance, and the relevant business units to get a fuller picture of risk.

“Since legal leaders typically don’t own the business process they embed controls for, consulting the relevant business units is vital,” it stressed.

Building governance with human oversight

Elsewhere, accountability from individuals within the organisation will be fundamental.

“One risk that is very clear in using large language model (LLM) tools is that they can get it very wrong while sounding superficially plausible,” said Ms Cohn.

“That’s why regulators are demanding human oversight, which should provide internal checks on the output of AI tools.”

Companies may want to designate an AI point person to help technical teams design and implement human controls, Gartner suggested.

“Depending on which department hosts the AI initiative, this person could be a team member with deep functional knowledge, a staffer from the security or privacy team, or, if there are integrations with enterprise search, the digital workplace lead,” it posited.

“The GC could also establish a digital ethics advisory board of legal, operations, IT, marketing and outside experts to help project teams manage ethical issues, and then make sure the board of directors is aware of any findings.”

Guarding against risks

Finally, Ms Cohn said, it is clear that regulators want to protect the data privacy of individuals when it comes to AI use.

As a result, it will be “key for legal leaders to stay on top of any newly prohibited practices”, such as biometric monitoring in public spaces, she opined.

Legal and compliance leaders should, Gartner advised, manage privacy risk by applying privacy-by-design principles to AI initiatives. For example, the firm said, they would do well to require privacy impact assessments early in a project, or to assign privacy team members at the start to assess privacy risks.

“With public versions of LLM tools, organisations should alert the workforce that any information they enter may become a part of the training dataset. That means sensitive or proprietary information used in prompts could find its way into responses for users outside the business,” Gartner noted.

“Therefore, it’s critical to establish guidelines, inform staff of the risks involved, and provide direction on how to safely deploy such tools.”

Jerome Doraisamy

Jerome Doraisamy is the editor of Lawyers Weekly. A former lawyer, he has worked at Momentum Media as a journalist on Lawyers Weekly since February 2018, and has served as editor since March 2022. He is also the host of all five shows under The Lawyers Weekly Podcast Network, and has overseen the growth of the brand's audio medium from 4,000 downloads per month to over 60,000 downloads per month, making The Lawyers Weekly Show the most popular industry-specific podcast in Australia. Jerome is also the author of The Wellness Doctrines book series, an admitted solicitor in NSW, and a board director of Minds Count.
