Last week, the Australian Federal Government introduced the 'Voluntary Artificial Intelligence (AI) Safety Standard,' a guideline that sets out ten recommended AI safeguards and guidance on how to implement them.
The government says these guardrails are intended to help organizations realize the benefits of AI while managing the risks it poses to people and communities. Experts from RMIT University, however, are skeptical about the standard's likely impact: they see it as a step in the right direction but criticize it as too vague.
Balancing Safeguards with Practicality
RMIT’s Kok-Leong Ong, Professor of Business Analytics and Director of the Enterprise AI and Data Analytics Hub, acknowledges the importance of building in safeguards as AI usage expands. He cautions, however, that because the measures are voluntary and loosely defined, businesses may apply them inconsistently and will largely be left to self-assess their own risks.
Ong also raises concerns about workforce readiness: many employees lack the training needed to implement the new safeguards effectively. Balancing safety against efficiency is a further challenge, since some measures, such as mandatory disclosure of AI use, could disrupt existing processes. An ABC survey found that a significant share of businesses using AI do not inform their employees or customers, and that many have not conducted human rights or risk assessments of their AI practices.
Insufficient Current Standards
The Federal Government is also working on more comprehensive AI regulation and is weighing three options: adjusting existing digital and software laws, creating a standalone act, or imposing restrictions on AI tools deemed too risky. One proposal would require developers and deployers of AI to disclose their use of these tools to Australians, particularly when AI makes decisions about individuals or when interactions involve AI-generated content.
Lisa Given, Professor of Information Sciences and Director of the RMIT Centre for Human-AI Information Environments and the Social Change Enabling Impact Platform, notes that while the voluntary standard is a positive interim measure, mandatory regulation is crucial for consumer protection and would align Australia with international approaches such as the European Union's. She stresses that mandatory safeguards are essential to ensure transparency in how AI is designed, deployed, and used.
Addressing Public Concerns
A University of Queensland survey conducted in March found public opinion on AI divided: 40% of Australians support AI development while 30% oppose it, and views on whether AI will ultimately benefit society are split evenly, with 40% affirming its positive impact and 40% dissenting. There is, however, strong consensus on the need for greater regulatory oversight, with 90% of respondents backing the establishment of a new regulatory body dedicated to AI.
Federal Industry Minister Ed Husic has acknowledged these concerns and is urging businesses to address them proactively. He emphasizes that while AI holds significant potential, adequate protections are essential to maintaining public trust and safety.