Why your firm needs an AI use policy
The future may already be here. If not, it is rapidly approaching. As such, having policies in place is paramount, writes Schellie-Jayne Price.
Around two years ago, a talented new junior joined me at work. Despite not having a law degree, or indeed any degree at all, this new colleague was incredibly versatile, astonishingly well read and able to prepare drafts in phenomenally short time frames. My junior, GPT-3 (Generative Pre-trained Transformer 3), has now gone viral as the large language model (a form of AI) underpinning the chatbot known as ChatGPT.
Law firms must prepare for this AI journey now, or they risk being left behind. They can and should anticipate a future of exponential change requiring adaptive resilience at a level not previously seen in this most traditional of professions.
Based on our experience with other organisations, here are some proposed actions for law firms seeking to engage responsibly with AI:
Educate and consult
Providing education on machine learning (the specific branch of AI underpinning these recent advances) and related ethics is foundational to creating and embedding a culture of responsible and effective AI use. Lawyers, law firm staff and clients are key stakeholders in the AI journey and should be consulted in the preparation and iterative revision of the firm’s AI philosophy, strategy and policy.
Articulate AI philosophy and strategy
A law firm’s AI philosophy is the touchstone for expeditious policy revisions in response to rapidly evolving AI capabilities and the firm’s experience using AI. In addition, a law firm’s AI philosophy and strategy are important assets for future-proofing the practice in business development, client relationship management, and talent attraction and retention.
Develop policy
A law firm’s AI policy should set out the expectations and limits on AI use. If the firm intends to whitelist certain types of AI (e.g. purchase a licence for organisational use of ChatGPT), the terms and conditions of the commercial licence will inform the policy.
General policy guidance may include the following:
(a) Where AI-generated content is included in your work, acknowledge the use of AI and clearly identify the AI-generated content. A referencing standard may be provided to systematise AI referencing;
(b) AI may generate erroneous content which appears plausible, even authoritative. Apply healthy scepticism to AI output, using your legal knowledge, experience and judgement. Independently check all AI output with reliable sources and reference those sources. AI is a tool that may augment your productivity; however, you remain responsible for the accuracy and completeness of your work;
(c) Do not include personal, sensitive, proprietary, client or confidential information in your prompts to the AI (NB: this guidance should be supplemented and adjusted to reflect the relevant AI terms and conditions of use, including where the firm has whitelisted particular AI tools or determined that specified AI should not be used);
(d) AI is not appropriate for all circumstances. Use good judgement, common sense and consideration of the values and ethics of your law firm in determining when to use it; and
(e) This policy will be reviewed regularly and will be updated to reflect your law firm’s experiences using AI, developing practice and new technology.
Prepare plans
Informed by the AI philosophy, strategy and policy, a law firm’s AI plans should cover matters such as AI prompt engineering, training in machine learning techniques and ethics, the process to follow if AI outputs closely resemble prior works, change management, upskilling and reskilling, and actions to enhance the adaptive resilience of the firm’s personnel.
The impact of AI on knowledge work is unparalleled in modern history. Early studies are reporting 35 to 50 per cent increases in productivity for writers using ChatGPT.
Law firms harnessing this AI productivity boost are not only securing a competitive advantage, but they are also preparing for a future where a lawyer’s duties of competence and acting in the best interests of clients require effective use of AI. That future may already be here. If not, it is rapidly approaching.
Schellie-Jayne Price is a partner at Stirling & Rose, a lecturer in law and technology at Murdoch University Law School, and an advisory board member for the Centre for Legal Innovation.