
Businesses Wary of Overregulation in Australia's AI Plan

Industry Groups Fear Too Many Rules Could Hurt Innovation and AI Investments
Downtown Sydney, Australia (Image: Shutterstock)

The Australian government has proposed new regulation to place mandatory guardrails on the use of AI in high-risk settings, but private sector industry groups fear that too many rules could deter businesses from investing in or deploying the technology.


The Australian government released a discussion paper last week that details a proposal to implement 10 guardrails to ensure the safe and responsible development and deployment of AI technologies in high-risk settings. The paper serves as a prelude to future regulatory action that will govern how AI-based technologies are developed and deployed.

The discussion paper accompanied the release of another government publication, the Voluntary AI Safety Standard, which includes 10 voluntary guardrails that organizations can adopt to prepare for future legislation, set up risk management processes, develop AI strategies and implement appropriate data governance practices.

Industry and Science Minister Ed Husic said the government's intent is to give businesses practical guidance on how to use and innovate with AI safely and responsibly, before framing stringent regulations to ensure that AI technologies adhere to ethical standards and do not violate individual rights and freedoms.

The Industry and Science Ministry relied on a National AI Center-commissioned survey that found very few businesses deployed AI systems safely, even though a vast majority of them believed their deployments were safe.

The guidance follows an interim government response to a public consultation in January, which proposed safety guardrails for high-risk AI implementations, while allowing low-risk deployments to continue unimpeded to boost the economy and augment human productivity (see: Australia Proposes Mandatory Guardrails for High-Risk AI Use).

In the consultation paper, the government sought feedback from the public on whether it should reform existing regulatory frameworks to implement the guardrails on a sector-by-sector basis, introduce new framework legislation to adapt existing regulatory frameworks across the economy or introduce a new cross-economy AI-specific act.

The government said existing regulations in Australia place obligations on organizations that deploy AI models but not on the developers of such technologies. Additionally, existing regulations are sector-specific, which impedes the uniform application of AI safety laws on organizations operating across multiple sectors.

Industry Fears AI Overregulation

Australian business groups have largely welcomed the government's proposed guardrails on high-risk AI use, but have cautioned that too much regulation could create uncertainty and impede innovation in the field.

Bran Black, chief executive of the Business Council of Australia, said the group supports the government's risk-based approach to AI, which ensures the government does not strictly regulate low-risk AI deployments. He said that the government must ensure new regulations do not duplicate existing laws.

The Australian Chamber of Commerce and Industry also warned that too many regulations governing the safe and responsible development and deployment of AI could severely affect small businesses, 82% of which already report a major or moderate impact from regulatory red tape.

The chamber said the Privacy Act, which is presently undergoing reform, is likely to significantly increase obligations for processors of personal data and introduce extensive new obligations for small businesses. The government is also weighing new legislation to support its eight-year cybersecurity strategy and is likely to bring in further legislation to place mandatory guardrails on AI deployments.


About the Author

Jayant Chakravarti

Senior Editor, APAC

Chakravarti covers cybersecurity developments in the Asia-Pacific region. He has been writing about technology since 2014, including for Ziff Davis.
