
Do we need to control the use of AI?

As we continue to embrace the potential of AI to revolutionise various sectors, we must also consider the need for comprehensive controls and regulations to ensure the technology is employed responsibly, writes Julian Ryan.

Julian Ryan | 05 June 2023 | SME Law

Artificial intelligence (AI) has emerged as a powerful technology with the potential to transform numerous aspects of our lives, from healthcare and transportation to finance and entertainment. That potential is exciting, particularly when we consider how other technologies have already changed the way we live. Australia embraces technological advancement with enthusiasm: it ranks third in the world for 5G-connected devices per capita, and Australian researchers and companies played key roles in the early development of Google Maps, Wi-Fi, the black box flight recorder, and the cochlear implant.

As with any technology, though, it is important to consider the need for effective controls and regulations to govern its use. Widespread use of AI is still relatively recent, and we are only beginning to understand the potential risks. So, how can we strike a balance between innovation and responsible use that safeguards the interests of individuals and society as a whole? Do we need new regulations to control and monitor the use of AI?

Safeguarding privacy and data protection

One of the foremost concerns surrounding AI implementation is the potential mishandling of the personal data used to train AI systems, particularly the lack of consent and transparency around how that data is collected and used.

Existing regulation does provide some data protection for individuals: the Privacy Act 1988 (Cth) covers the handling of personal information regardless of whether AI or any other technology is involved.

Furthermore, in 2019, the Australian government released the AI Ethics Framework, which provides guidance on responsible AI use and highlights the importance of privacy and data protection. However, its principles are voluntary rather than legally binding.

As AI technologies continue to advance and present new privacy challenges, it is worth considering whether the existing regulations adequately address these concerns.

Many countries are currently reviewing and updating their privacy regulations to address the specific implications of AI. Italy went so far as to temporarily ban ChatGPT until OpenAI changed how it collects data and verifies users' ages.

Mitigating bias and discrimination

A risk often raised with regard to AI is bias in the underlying data. If historical biases and prejudices exist within the datasets used to train AI algorithms, those biases can be perpetuated and even amplified, leading to unfair and discriminatory outcomes: the GIGO ("garbage in, garbage out") principle at play. As the use of AI grows, it may be important to consider controls, such as regular audits, to mitigate bias and discrimination in the outputs of AI systems.
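To make the idea of a "regular audit" concrete, here is a minimal sketch in Python. It is purely illustrative and not a tool or method referenced in this article: it computes the rate at which each group is selected by an AI system's decisions and flags any group whose rate falls below four-fifths of the best-performing group's, a threshold borrowed from the commonly cited "four-fifths rule". The group labels and data are hypothetical.

```python
# Illustrative only: a minimal bias "audit" comparing selection rates
# across groups in a model's decisions. Data and labels are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; selected is a bool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the commonly cited "four-fifths rule")."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit sample: (group, model_selected)
audit_sample = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
rates = selection_rates(audit_sample)
print(rates)                          # A ~ 0.67, B ~ 0.33
print(disparate_impact_flags(rates))  # {'A': False, 'B': True}
```

A real audit would be far broader, covering data provenance, proxy variables, and outcomes over time, but even this simple check shows how a control can be made routine and measurable.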

Ensuring transparency and explainability

As AI becomes increasingly embedded in our daily lives, it is essential that its decision-making processes are transparent and explainable. Controls such as standards, audits, and regulatory frameworks requiring organisations to disclose how their AI systems function, and what their limitations are, could help ensure that AI systems provide clear explanations of their decisions.
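As a hypothetical illustration of what "explainable" output could look like (no specific standard or system is being described), the sketch below scores an application with a toy linear model and returns each factor's contribution alongside the decision, so a reviewer can see why the score landed where it did. All weights and feature names are invented.

```python
# Illustrative only: a toy linear scorer that returns a per-factor
# breakdown alongside its decision, i.e. a machine-readable "explanation".
# Weights, threshold, and feature names are hypothetical.
WEIGHTS = {"years_experience": 0.5, "missed_payments": -1.2, "income_band": 0.8}
THRESHOLD = 1.0

def score_with_explanation(applicant):
    # Signed contribution of each factor to the final score.
    contributions = {
        feature: round(WEIGHTS[feature] * value, 2)
        for feature, value in applicant.items()
        if feature in WEIGHTS
    }
    total = sum(contributions.values())
    return {
        "decision": "approve" if total >= THRESHOLD else "decline",
        "score": round(total, 2),
        # Factors sorted by influence, so a reviewer can see *why*
        # the score landed where it did, not just the outcome.
        "explanation": dict(sorted(contributions.items(),
                                   key=lambda kv: abs(kv[1]), reverse=True)),
    }

print(score_with_explanation(
    {"years_experience": 4, "missed_payments": 1, "income_band": 2}))
# {'decision': 'approve', 'score': 2.4,
#  'explanation': {'years_experience': 2.0, 'income_band': 1.6,
#                  'missed_payments': -1.2}}
```

Most production AI systems are far less interpretable than a linear model, which is precisely why disclosure standards and audits are being discussed.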

Addressing ethical concerns

AI raises a multitude of ethical questions that demand careful consideration. Ethical guidelines are needed to govern the use of AI in morally challenging scenarios, such as diagnosing patients or assessing job applications, where data bias could perpetuate discrimination on the basis of gender, race, or socioeconomic status. Public discourse and involvement from a range of stakeholders, including government, industry experts, and the community, will be crucial in developing these controls so that the technology aligns with the values and principles of Australian society.

Conclusion

As we embrace AI's potential to revolutionise sector after sector, we must also consider the need for comprehensive controls and regulations to ensure the technology is used responsibly. Protecting privacy, mitigating bias, ensuring transparency, and addressing ethical concerns are all critical components of effective control measures. By striking a balance between innovation and responsibility, we can harness the benefits of AI while safeguarding the wellbeing and rights of individuals.

However, you don’t have to wait for official regulations; consider how you can control the use of AI in your personal life and in the workplace.

Julian Ryan is a senior associate at FAL Lawyers.
