Australia has introduced AI safety standards and policy proposals, but legal accountability remains a grey area. Here’s where things stand in 2025.

Category: 🗞 News
Transcript
00:00 Can artificial intelligence be held legally accountable if it causes harm,
00:05 spreads misinformation, or makes a flawed decision? It's a question lawmakers around
00:10 the world are scrambling to answer. Under Australian law, AI is not a legal entity,
00:20 which means it cannot be sued, fined, or imprisoned like a person or a corporation can.
00:25 In 2024, the European Union introduced the world's first comprehensive AI Act,
00:32 which includes strict rules around accountability and transparency.
00:35 It's now pretty much smarter than humans. And that's the dangerous part. There are no rules,
00:45 no laws, no regulations that govern AI. It is a wild west. And who is leading it?
00:53 It's the tech giants. And have the tech giants demonstrated to date that they are ethically
01:00 driven and purposeful in their mission? No, they haven't. In early 2025, the federal government
01:06 released a discussion paper proposing tougher regulations around high-risk AI systems and the
01:12 introduction of a new AI safety commissioner. From September 2024, Aussie government agencies were
01:20 also required to publish transparency statements about their AI use and appoint officials responsible
01:26 for its safe deployment. The research that we're releasing today from the National AI Centre
01:32 shows that nearly 80% of businesses in Australia think they're doing the right thing, but only around
01:38 30% are putting in place the responsible practices required to use AI. So what we need to do
01:44 is create that bridge between best intention and best practice. Currently, most laws still treat AI as
01:51 a tool, with responsibility falling on the humans or organisations that use or develop them. For example,
01:58 if a self-driving car crashes, the company behind the software may be liable, and not the AI itself.
02:04 We have to be very careful that what we tell the AI we want is what we actually want, because what AI does
02:12 is ruthlessly pursue those goals that we give it. And if those goals are even slightly misaligned with
02:19 our own, we could end up with some really problematic consequences.
