From 10 December 2025, Australia will prohibit children under the age of 16 from holding social media accounts. The landmark policy, viewed globally as a “real-life laboratory”, aims to understand, and ideally reduce, the impact of social media on youth mental health.
Praised by many parents and criticised by some campaigners, the ban has sparked questions about feasibility, enforcement and the tools platforms will rely on to stay compliant.
Major platforms, including Facebook, Instagram, Snapchat, TikTok, Threads, X, YouTube, Reddit and the streaming services Kick and Twitch, all fall under the new rules.
Services such as YouTube Kids, Google Classroom and WhatsApp are not included, though the government has signalled it may expand the list, particularly to gaming platforms.
Responsibility sits with the platforms, not parents or children.
Companies must take “reasonable steps” to prevent under-16s from accessing their services and could face penalties of up to A$49.5 million for repeated failures.
Regulators expect companies to deploy age-assurance measures, but they have deliberately avoided specifying which technologies must be used. The result is a patchwork of solutions that vary by platform.
Traditional age checks, such as document uploads and facial recognition, introduce friction and raise privacy concerns. Even more problematic is their vulnerability to manipulation. Reports from Australia have already highlighted teens bypassing AI-based checks using simple tricks like masks or photo substitutions.
Additional loopholes remain open. VPNs can obscure location, gaming apps often include messaging or social elements, and many online communities sit outside the platforms covered by the ban.
Concerns have also been raised about false positives that may inadvertently block legitimate users, and false negatives that fail to catch underage sign-ups. With high penalties attached, the risk for platforms is significant.

One solution stands out for being both robust and familiar: mobile-network-based verification.
Almost every young person has a mobile number, making it a universal identifier that avoids intrusive ID uploads. Our mobile identity checks use real-time network data to confirm whether a mobile number belongs to an adult account holder, creating a more reliable age signal than document scans or biometric tools.
Fewer drop-offs
Users don’t need to find documents, take photos or upload sensitive information, which also makes verification possible for ‘thin-file’ customers.
Reduced fraud risk
Network-verified data is far harder to spoof than images or scans.
Privacy-friendly
No storage of biometric data or identity documents.
Consistent across platforms
Mobile verification creates a unified method for meeting regulatory expectations without forcing companies to build bespoke age-assurance flows.
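To illustrate how lightweight this can be in practice, the sketch below shows how a platform might gate sign-up on a network-based age signal. The endpoint, field names and response format here are hypothetical and used only to show the shape of the flow; they are not the actual TMT ID API.

```python
import requests

# Hypothetical endpoint, credentials and fields for illustration only;
# a real mobile-identity integration will differ. The point is the flow:
# the platform sends a mobile number at sign-up and receives a
# network-derived age signal, with no documents or biometrics collected.
AGE_CHECK_URL = "https://api.example-mobile-identity.com/v1/age-check"
API_KEY = "your-api-key"


def is_adult_account_holder(msisdn: str) -> bool:
    """Ask the mobile-identity service whether the number's account holder is 16 or over."""
    response = requests.post(
        AGE_CHECK_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"msisdn": msisdn, "minimum_age": 16},
        timeout=5,
    )
    response.raise_for_status()
    result = response.json()
    # Treat anything other than an explicit pass as a failure, so missed
    # under-16 sign-ups are minimised at the cost of occasional extra review.
    return result.get("age_check") == "pass"


if __name__ == "__main__":
    # Example: gate account creation on the network-based age signal.
    if is_adult_account_holder("+61400000000"):
        print("Proceed with standard sign-up")
    else:
        print("Route to additional age-assurance or decline the account")
```

Because the check is a single server-side call at sign-up, it can sit in front of whatever fallback age-assurance steps a platform already runs, rather than replacing them outright.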
Platforms in Australia will begin gradually deactivating under-16 accounts and reporting their progress to regulators. This policy could shape future legislation elsewhere, particularly if the ban shows measurable benefits to young people’s wellbeing.
Insiders at major platforms have noted concerns: implementation is complex, it may isolate some teenagers who rely on online communities for support, and it risks pushing others towards unregulated spaces. Yet governments worldwide will be watching closely.
As regulations tighten and expectations rise, mobile-network-based verification provides an accurate, low-friction and privacy-conscious method for platforms seeking to comply without compromising user experience.
Australia’s under-16 social media ban highlights the need for reliable, privacy-conscious age checks. Mobile-network-based verification provides a low-friction, accurate solution that protects users while keeping platforms compliant.