
ChatGPT blunder sees lawyer referred to regulator

A lawyer has been referred to a disciplinary regulator for filing two documents that relied on “non-existent” citations and quotes.

Naomi Neilson | 04 February 2025 | Big Law

The Office of the NSW Legal Services Commissioner (OLSC) was asked to investigate a lawyer’s use of ChatGPT in an outline of submissions and amended application he filed with the Federal Circuit and Family Court of Australia (FCFCOA) last October.

After the other party called these errors out, the lawyer – whose name was redacted by the court – tried to file updated submissions, but Judge Rania Skaros said the damage had already been done.

“By the time this occurred, the court and the associates had already spent a considerable amount of time attempting to locate the cases and enquiring as to whether the correct [Administrative Appeals Tribunal] decision had been filed,” Judge Skaros said.

“As a consequence of the [lawyer’s] conduct, the matter could not conveniently proceed, and further timetabling was required to facilitate the proper filing of an amended application and submissions on behalf of the [lawyer’s client].”

The lawyer, who has practised for close to three decades, was asked to file an affidavit that would provide a full explanation of what went wrong and make submissions as to why the court should not refer the judgment to the Legal Services Commissioner.

He explained that, due to time constraints and health issues, he decided to use an artificial intelligence program: he accessed a site and “inserted some words and the site prepared a summary of cases for him”.

He said the summary read well, “so he incorporated the authorities and references into his submissions without checking the details”.

After the false citations and quotes were pointed out, the lawyer sent correspondence to an associate with the court, without either copying in the other party or seeking consent to send the email.

When asked about his reasons for this, the lawyer said it was “not his intention to circumvent the established protocols”, adding that “he deeply regrets his actions” and that he had misunderstood his obligations.

Judge Skaros accepted the email was not a deliberate attempt to exclude the other party from communications with the court.

While the court accepted the lawyer was “deeply embarrassed” by his conduct and would not repeat it, Judge Skaros said there was “strong public interest” in referring the matter to the regulator.

“The use of generative AI in legal proceedings is a live and evolving issue,” Judge Skaros said.

“The court agrees with [the other party] that the misuse of generative AI is likely to be of increasing concern and that there is a public interest in the OLSC being made aware of such conduct as it arises.”

The case is Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95 (31 January 2025).


Naomi Neilson is a senior journalist with a focus on court reporting for Lawyers Weekly. 

