Australia looking at new laws for high-risk AI use

The Sydney Morning Herald reported that Australia is starting work on laws on the use of artificial intelligence (AI) in high-risk areas. These include areas like law enforcement, job recruitment, and healthcare.

Industry and Science Minister Ed Husic announced the creation of a new advisory body with experts from various fields. The aim is to introduce mandatory safety requirements for AI applications that could significantly affect people's lives, with a focus on technologies that can affect personal safety or future opportunities in employment or law. The intention is to ensure that AI operates as intended and earns public trust. This plan is part of the government's response to a consultation process that sought opinions on the safe and responsible use of AI.

The rapid pace of change in AI technologies, such as generative chatbots and machine learning systems, creates both opportunities and challenges globally. While AI has the potential to transform industries and significantly benefit the Australian economy, its rapid growth has sparked ethical and moral concerns. As Husic said: “These technologies are going to shape the way we do our jobs, the performance of the economy and the way we live our lives. So what we need to ensure is that AI works in the way it is intended, and that people have trust in the technology.”

As we've written here before, there are already regulations and laws for AI use in Europe, the United States, China, the United Kingdom and other countries. And in Australia, there are already regulations and guidance around AI use in specific industries, like the public sector and education.

Future Direction

The AI advisory body will explore legislative options, including the possibility of a dedicated AI Act or amendments to existing laws. A key concern will be addressing algorithmic bias, which can lead to discrimination based on race, gender, or other characteristics. This issue has been highlighted in instances where AI has been used in law enforcement and job recruitment, leading to biased outcomes.

Simon Bush, CEO of the Australian Information Industry Association, supports the government's decision to focus on high-risk settings. He notes that the definition of high risk and the required safeguards will be subject to debate.

Update - 17th January

Adding an update to this - the government's interim response to their consultation is now online. Their proposals include a voluntary watermarking system for AI-generated content, an AI Safety Standard, and an expert advisory group to support the development of mandatory guardrails for specific scenarios and use cases. The guardrails include testing, transparency (of model design and data) and accountability (training and certification for developers, plus organisational accountability expectations).

Here's the interim response link: https://consult.industry.gov.au/supporting-responsible-ai
