Australia says Meta, TikTok, YouTube not complying with child social media ban

Three months after Australia became the first country in the world to ban children under 16 from holding social media accounts, its online safety regulator says the platforms are not doing enough to make the ban work. eSafety Commissioner Julie Inman Grant released her first compliance report on Tuesday, alleging that Facebook, Instagram, Snapchat, TikTok, and YouTube have failed to take the “reasonable steps” the law requires to keep young Australians off their services.

The numbers tell a story of partial progress and substantial failure. Since the Online Safety Amendment (Social Media Minimum Age) Act took effect on 10 December, some five million Australian accounts belonging to under-16s have been deactivated. But the compliance report, which surveyed 898 parents at the end of January, found that roughly seven in ten children who previously used social media still had an account on Facebook, Instagram, Snapchat, or TikTok after the ban. Children are retaining accounts, creating new ones, and passing through age assurance systems that appear unable to stop them.

Inman Grant said her office had “significant concerns about the compliance” of half of the ten platforms covered by the law. The five under investigation (Facebook, Instagram, Snapchat, TikTok, and YouTube) could face court action that eSafety will decide upon by mid-year. Courts can impose fines of up to 49.5 million Australian dollars ($33 million) for systemic failures to comply. The other five (Reddit, X, Kick, Threads, and Twitch) are not currently under investigation.

The eSafety report identified what it called “poor practices” among the platforms. Some allow unlimited attempts for a user to pass age assurance checks. Others prompt users to try again even after they have declared themselves underage. Neither practice suggests a system designed to keep children out; both suggest systems designed to avoid losing users.


Communications Minister Anika Wells was blunter in her assessment. The five criticised platforms, she said, were deliberately not complying with Australian law. They are choosing, in her words, to do “the absolute bare minimum because they want these laws to fail.” Her reasoning was strategic: Australia's ban is the first of its kind, and more than a dozen countries have signalled interest in similar measures since December. France's National Assembly approved a bill in January banning social media for under-15s. Denmark announced restrictions for under-15s. Malaysia set a 2026 implementation date for an under-16 ban. Indonesia plans to restrict access to platforms including YouTube, TikTok, and Facebook for children across its entire digital ecosystem. If Australia's law is seen to fail, Wells argued, it would chill the global momentum.

The platforms responded with varying degrees of engagement. Meta, which owns Facebook and Instagram, told the Associated Press it was committed to compliance but acknowledged that “accurately determining age online is a challenge for the whole industry.” Snap, Snapchat's parent company, said it had locked 450,000 accounts and remained “fully committed to implementing reasonable steps.” TikTok declined to comment. Alphabet, which owns YouTube and Google, did not respond to a request for comment.

The compliance challenge is real, not merely a convenient excuse. The law does not prescribe specific age-verification technologies. It does not require government ID checks, biometric scans, or any particular mechanism. Instead, it places the onus on platforms to take “reasonable steps,” a deliberately flexible standard that leaves the definition of adequacy to the courts. Some platforms are using behavioural inferencing, analysing patterns of activity to estimate a user's age. Others are deploying AI tools that attempt to estimate age from photographs. None of these methods is remotely foolproof, and the eSafety Commissioner has acknowledged that age assurance may take days or weeks to work fairly and accurately.

Lisa Given, an information sciences expert at RMIT University in Melbourne, framed the legal question precisely. If a platform has implemented age assurance and taken multiple steps to exclude young users, is that reasonable, even if the underlying technology is flawed? “Should they be held accountable for a piece of technology that is not 100 per cent and likely not going to be 100 per cent foolproof any time soon?” she asked. That question will now travel to the courts.

It may arrive there alongside a constitutional challenge. Reddit has filed a case in the Australian High Court arguing the ban infringes on Australia's implied freedom of political communication. A second challenge was filed by the Digital Freedom Project, a Sydney-based rights group. The High Court has directed both cases to proceed in tandem, with a preliminary hearing set for 21 May to establish a date for oral arguments. The constitutional question, whether banning minors from platforms that host political speech and civic discourse is compatible with democratic freedoms, will test the limits of how far governments can go in regulating digital access.

The law itself was passed by Parliament on 29 November 2024 with bipartisan support, reflecting a political consensus that the mental health risks of social media for young people outweigh the access benefits. Importantly, it imposes no penalties on children or parents. The entire enforcement burden falls on platforms, a design choice that underscores the law's theory of change: that the companies profiting from children's attention should bear the cost of limiting it.

What happens next in Australia will reverberate well beyond it. The compliance report, the pending court action, and the constitutional challenge will together produce a body of legal precedent that every country considering similar restrictions will study. The platforms know this. The question, as Minister Wells framed it, is whether they are treating Australia's law as a problem to solve or a precedent to sabotage. The eSafety Commissioner's next move, expected by mid-year, will determine whether the world's first social media age ban has enforcement teeth or merely enforcement ambitions.