
Lawyers Weekly - legal news for Australian lawyers

Powered by MOMENTUM MEDIA

Is your AI putting client confidentiality at risk?

While AI continues to transform and streamline legal workplace technology, one Queensland-based firm has warned that this technology can introduce “serious legal risks” – many of which may go unnoticed.

Grace Robbie | 10 April 2025 | Big Law

From streamlining research processes to enhancing document management, artificial intelligence (AI) has revolutionised how the legal profession operates, and its influence is only expected to persist and intensify.

However, RedeMont, a Gold Coast law firm, has cautioned that such AI tools are doing far more than simply “process[ing] documents”, as many assume – they’re also capable of “analys[ing] emails, messages, and video calls”, and some advanced features even “capturing facial expressions and metadata”.

These capabilities, often presented as improvements to the “user experience”, have the potential to introduce “serious legal risks” for law firms.

RedeMont warned that the processing of confidential information through AI software, without obtaining “explicit consent”, may lead to firms “unknowingly breach[ing] [their] confidentiality obligations” to clients.

Ryan Solomons, a dispute resolution partner at the firm, pointed out that many firms tend to “assume” that their software providers “handle data responsibly”. However, he stressed that AI tools can “store and process information in ways that conflict with privacy law and confidentiality obligations”.

Whether liability lies with the software provider or the firm has become a recurring question. Solomons clarified that “in most cases, businesses bear the greater risks”, as technology companies typically include “indemnity clauses” to protect themselves.

Another significant concern that arises with AI systems storing or transmitting data without consent, the Queensland-based firm explained, is the potential for breaching non-disclosure agreements (NDAs) – even if “no human intentionally disclosed the data”.

To help navigate this evolving risk landscape, RedeMont has listed a series of recommendations to mitigate confidentiality risks related to AI, including:

  • Monitoring software updates – With AI features often rolled out through “automatic updates”, RedeMont advises firms to “review and control these before deployment”.

  • Appointing a compliance officer – Having a designated “IT risk manager”, RedeMont explained, can ensure that systems and processes “maintain legal compliance”.

  • Reviewing terms and conditions – RedeMont advocates for firms to better understand “how AI tools collect, process, and store data”.

  • Avoiding public AI tools – RedeMont stressed that firms should refrain from feeding sensitive data into publicly available AI tools and should instead use “self-hosted or private AI solutions”.

  • Training staff – Firms must ensure that all their employees understand the implications of AI use and follow “best practices” for data protection.

While Australia has not yet seen high-profile legal cases involving AI-related breaches of confidentiality, Solomons warned that “it’s only a matter of time [before] legal precedents are set”, urging businesses to “act now rather than wait for [enforcement] actions taken against them”.

“AI is here to stay, but so are the risks. Companies must proactively assess their use of AI-powered tools and implement safeguards to protect client confidentiality,” RedeMont said.
