Greater principles needed for AI regulation
Principles must be the cornerstone of artificial intelligence regulation, according to a big four accounting firm technology lawyer.
Speaking to Lawyers Weekly, KPMG Law partner Kate Marshall offered some insight into the opportunities currently being made available by artificial intelligence, and how legal professionals can ensure it is appropriately utilised.
She explained that a fundamental and important change in our approach to technology, and the way we often think about rules and regulations, is the swing towards designing AI use “to accommodate everybody”.
“In the past, we’ve often accepted compromises,” Ms Marshall said, giving the example that if something was okay for 80 per cent of people, then it was deemed “good enough”.
“In the past we might have accepted that or we might have accepted a certain level of risk,” she explained, even if that meant somebody with a disability was unable to utilise something.
But now, Ms Marshall said, there is recognition that “one of the core principles is that it should be designed for all so we should be thinking about everybody in that ecosystem, whether you’re a controller, or a user, or a person at the margins that has some particular need”.
Accountability is another principle that Ms Marshall expects to be a factor in AI usage from a legal perspective.
“The accountability piece is not so simple as ‘you’ve done something and it impacted on me, therefore I’ve got a right to sue you and recover damages’,” she noted, adding that the debate needs to consider who will be accountable for AI.
“Is it the coder? Is it the producer of that application or that system? Is it the one that controls the organisation behind it that’s actually putting it out there in the market? Who is it?” she queried.
Ms Marshall said she doesn’t think there is an answer on accountability at the moment, but “for there to be trust in AI, there needs to be a degree of accountability”.
Another real challenge Ms Marshall identified for AI is transparency.
“True AI… is really difficult to understand, and I’m not sure anybody could really understand every element of it,” Ms Marshall offered.
“I think I want to know that there is something making a decision about me, and I want to know what the principles are around how that decision is going to be made.”
To further her point, she gave the example of receiving a government benefit, where decisions are made based on an individual’s circumstances as to whether they are entitled to that benefit or not.
“A similar sort of thing applies to AI – what are the rules that are embedded in there that are impacting on a decision being made about me?” Ms Marshall said.
“I want to know that there’s an automatic decision or automated decision being made about me and what the fundamental rules are that have been applied,” she said, adding that this knowledge should extend to an “ability to challenge it if I think it’s wrong”.
On the broader debate around AI regulation, Ms Marshall said she thinks it is “a really important conversation that we have in Australia and it feeds into conversations that are going on at a global level”.
“What does introducing AI at scale really mean for us as businesses, for us as communities, what does that mean for the future of our children, and should it be regulated?” she queried.
“Should there be standards that apply to it? Or are we happy to just let it go and see what happens?”