AI: Why judges and lawyers are not immune
The legal profession will be among the first to be substantially disrupted by the advent of artificial intelligence tools such as ChatGPT, writes Dr Matt Collins AM KC.
Judges and lawyers reason using the crude tool of analogy. In 1874, Justice William Brett compared the then newfangled technology of the telegram to a postcard. In 1932, Sir Leo Cussen, acting chief justice of Victoria, compared radio listeners to persons sitting on one side of a partition, listening to words uttered on the other side. Analogies, however, can only take one so far. In 2001, Baron Bingham of Cornhill endorsed my hesitant description of the internet as a communications “revolution” as “wholly apt”. In 2005, Sir David Eady agreed, observing that some changes, like the advent of the internet, are a quantum leap that simply overtakes what came before.
In 1962, Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic”. AI is a particular form of magic: from the user’s perspective, it is often indistinguishable from conscious human exertion.
In its current iterations, AI has strengths and weaknesses. Like human beings, it can make odd and hard-to-explain errors. It “hallucinates”. But AI is developing at an almost unimaginable and quickening pace. It is already being integrated into other online services, such as search engines, and in short order, it will be part of our everyday lives.
Early AI technology had the potential to assist in some legal problem solving by applying predefined rules to an agreed set of objective facts. Its utility was limited because most disputes cannot be resolved by the simple application of an “if this, then that” form of analysis. More commonly, there will be facts in dispute, which have to be determined having regard to differing recollections, the analysis of documentary evidence, assessments of witness credibility and consideration of competing expert opinions. Then, the law has to be applied to the facts as determined. The law may be far from certain, calling for the interpretation of laws passed by parliaments or the analysis and application of judge-made common law precedents and equitable principles.
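The limits of that earlier, rule-based approach are easy to see in miniature. The following toy sketch of the “if this, then that” style of analysis uses hypothetical rules and facts; nothing in it is a real legal test:

```python
# A toy rule-based "expert system": predefined rules applied to a set of
# agreed, objective facts. The rule and the facts are illustrative only.

def assess_claim(facts):
    """Apply a predefined rule to a dict of agreed facts."""
    # Hypothetical rule: a simple debt claim succeeds if a valid contract
    # exists, payment is overdue, and no defence has been established.
    if (facts["valid_contract"]
            and facts["payment_overdue"]
            and not facts["defence_established"]):
        return "claim succeeds"
    return "claim fails"

agreed_facts = {
    "valid_contract": True,
    "payment_overdue": True,
    "defence_established": False,
}
print(assess_claim(agreed_facts))  # -> claim succeeds
```

The moment the facts themselves are disputed, or the rule calls for judgment rather than a yes/no answer, this style of analysis runs out of road.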
For these and other reasons, judges and lawyers have tended to assume that there is limited potential for AI to disrupt the judiciary and legal profession. But modern AI developments reveal that to be a complacent conceit.
The techniques deployed by AI involve, at their core, probabilistic analysis — predicting the sequence of words most likely to provide a coherent and contextually appropriate response to a query. Pattern recognition and extrapolation are also essential features of neural processing: like us, these systems learn from experience, including from failure.
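That probabilistic core can be shown in miniature. Here is a toy next-word predictor that simply counts which word follows which in a tiny, made-up corpus; modern systems are vastly more sophisticated, but the principle of predicting the most likely continuation is the same:

```python
# A toy "bigram" model: count word pairs in a corpus, then predict the
# most probable next word. The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "the court held that the court may order costs".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    """Return the most likely word after `prev`, with its probability."""
    counts = following[prev]
    total = sum(counts.values())
    word, n = counts.most_common(1)[0]
    return word, n / total

print(predict("the"))  # -> ('court', 1.0): "court" always follows "the" here
```

Scale the corpus up to a large slice of the written record, and the predictions begin to look like understanding.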
The rub for judges and practitioners is that our legal system, too, is probabilistic at core, both in relation to the determination of facts and the application of the law to facts.
Fundamental to the determination of the facts in issue in a dispute is the identification of probabilities from all of the circumstances — in the civil context, where the balance of probabilities lies; in the criminal context, whether there is reasonable doubt as to guilt. We assess those probabilities by reference to a range of matters, including some matters that AI may already do better than we can, such as determining the inherent likelihood of the existence of a state of affairs, and identifying consistencies and inconsistencies in witness testimony and as between testimony and documents.
And fundamental to the application of the law to facts are doctrines that are hierarchical or probabilistic, and thus equally ripe for disruption. One example is parliamentary sovereignty — the principle that judges are bound to apply laws that have been validly passed by parliaments, even where they conflict with the common law. Another is the doctrine of stare decisis — the principle that courts should follow and apply judicial precedents, so that like cases are decided in a like manner. Through these and other doctrines, our legal system values certainty and predictability, which are also the hallmarks of AI analysis.
I do not suggest that judges and lawyers are on the verge of extinction. For the time being, at least, AI is a tool that, if used judiciously, has the potential to increase productivity, reduce costs and improve outcomes for litigants. It also, however, has profound implications for education, training and the legal workforce. My warning is that courts and the profession must come to grips with the implications of AI urgently: unless we find ways to harness it to our advantage, we will become hostage to or overtaken by it. While predictions vary widely, on one estimate, automation could eliminate as many as 73 million jobs in the US by 2030, equating to 46 per cent of the current American workforce. Many of those jobs could well be in the justice sector.
The futurist Raymond Kurzweil’s Law of Accelerating Returns posits that technological progress is exponential, accelerating as it builds upon previous innovations. Kurzweil predicted in 2001 that we would experience, this century, the equivalent of about 20,000 years of progress, measured at 2001 rates. For those who have found it difficult enough to survive the introduction of the fax machine, email and internet, that may be a sobering thought, but the future is coming whether we like it or not. Hold on to your hats; we ain’t seen anything yet.
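The arithmetic behind a figure of that order is straightforward compounding. A back-of-the-envelope check, assuming the rate of progress doubles every decade and each decade runs at its doubled rate:

```python
# Back-of-the-envelope check of the "20,000 years" estimate: assume the
# rate of progress doubles each decade, each decade running at that rate.
total = 0
rate = 1  # rate of progress at the start of the century, in "2001 years" per year
for decade in range(10):
    rate *= 2            # the rate doubles each decade
    total += 10 * rate   # ten years of progress at that decade's rate
print(total)  # -> 20460: roughly 20,000 years of progress at 2001 rates
```

Under those assumptions, ten doublings compound to about 20,000 years of 2001-rate progress in a single century.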
Dr Matt Collins AM KC is a Senior Fellow at Melbourne Law School, a former president of the Australian Bar Association and the Victorian Bar, and the author of The Law of Defamation and the Internet (OUP, Oxford, 2001, 2005, 2010) and Collins on Defamation (OUP, Oxford, 2014). He has provided advice to the developers of ChatGPT.