
AI Expert Criticizes Australia’s Regulation Gap


One of Australia’s leading artificial intelligence scholars has issued a stark warning, arguing that the country is falling behind on AI rules as systems spread across daily life. The comments, made by Professor Toby Walsh, highlight rising concern over how Australia manages fast-moving technology in government, business, and public services.

Walsh, a professor of AI and a long-time adviser on tech ethics, said weak or slow policy leaves the public exposed to risks such as bias, surveillance, and misinformation. He spoke as governments worldwide move to set clearer guardrails. His remarks land amid growing pressure for Australia to decide when and how to set mandatory standards for high-risk AI.

A Call for Stronger Rules

“Toby Walsh says he despairs at the Australian government’s lack of regulation of artificial intelligence.”

Walsh has warned for years that voluntary guidelines are not enough. He argues that models used in healthcare, policing, education, and hiring should meet strict safety, transparency, and accountability tests. His stance reflects a fear that Australia will import powerful tools without the checks many peers are adopting.

Background and Global Context

Australia began a national consultation on “safe and responsible AI” in 2023. The government signaled a risk-based path, with possible mandatory rules for high-risk uses. Since then, it has backed industry codes, testing programs, and public sector guidance. But critics say these steps are too slow and too soft for high-stakes applications.

Other major economies are moving faster. The European Union adopted the AI Act in 2024, which sets strict duties for high-risk systems and bans some uses. The United States issued a 2023 executive order on safety testing, transparency, and rights protections. The United Kingdom chose a lighter approach led by regulators, but is increasing funding for testing and oversight.

  • EU: binding rules for high-risk AI and enforcement by national authorities.
  • US: safety reporting and red-teaming for frontier models under federal direction.
  • UK: regulator-led guidance with investment in assurance and standards.

Government Response and Industry Concerns

Canberra has argued that flexibility is needed to support innovation while protecting the public. Officials say any laws should target specific risks, not the technology as a whole. They point to existing consumer and privacy rules, and ongoing work with standards bodies.

Industry groups warn that heavy-handed laws could chill investment. Startups fear high compliance costs and legal uncertainty. Larger firms say clarity is welcome but seek alignment with global standards to avoid duplication. Many companies prefer co-regulation, where codes are backed by law for the most sensitive sectors.

Risks Driving the Debate

Walsh and other experts cite near-term harms. These include faulty decision tools in hiring or credit, opaque algorithms in government services, and the spread of deepfakes during elections. There are also workplace issues, from surveillance to deskilling, and mounting copyright disputes.

They argue that high-risk systems should meet minimum requirements before deployment. That could include impact assessments, independent testing, public registers, and clear avenues for redress when things go wrong. Without this, they warn, trust will erode and benefits will stall.

What To Watch Next

Australia is weighing next steps after its consultations. Options include mandatory guardrails for high-risk uses, stronger privacy updates, and a national AI assurance framework. Coordination with global partners on safety testing and model reporting is also on the table.

The coming months will test whether the government sets binding duties or continues to rely on guidance. Walsh’s warning adds pressure to act as other jurisdictions lock in detailed rules. The choice now is how to provide clear, enforceable protections without stifling useful tools.


For now, Australia sits at a crossroads. The public wants safe and fair systems. Companies want clear and predictable rules. Experts like Walsh want stronger oversight before harms spread. How policymakers balance these goals will shape how AI is used across the country.

Steve Gickling
CTO

A seasoned technology executive with a proven record of developing and executing innovative strategies to scale high-growth SaaS platforms and enterprise solutions. As a hands-on CTO and systems architect, he combines technical excellence with visionary leadership to drive organizational success.

About Our Editorial Process

At DevX, we’re dedicated to tech entrepreneurship. Our team closely follows industry shifts, new products, AI breakthroughs, technology trends, and funding announcements. Articles undergo thorough editing to ensure accuracy and clarity, reflecting DevX’s style and supporting entrepreneurs in the tech sphere.
