Governments from Canberra to Madrid are moving quickly to curb children’s access to mainstream social networks, arguing the step is needed to reduce harm. The shift, led by Australia’s December 2025 law, is forcing parents, platforms and regulators to confront how to verify ages, enforce rules and balance safety with privacy.
Policymakers frame these measures as responses to online bullying, addictive design and safety risks. Yet privacy advocates warn that requiring robust identity checks could introduce new dangers, and experts note enforcement will be technically and legally complex.
Which countries are acting — and where things stand
Below is a concise snapshot of countries that have either adopted limits or are pursuing legislation. The list captures the age threshold under discussion and the current legal stage.
- Australia — ban in force (children under 16): Law enacted December 2025 requires platforms to prevent under-16s from accessing major services; fines for noncompliance can reach AUD 49.5 million. Platforms cannot rely solely on self-declared ages.
- Denmark — proposal (under 15): Coalition and some opposition parties have signalled support; legislation could become law by mid-2026, with a government-backed app for digital age checks under development.
- France — parliamentary move (under 15): Lawmakers approved a measure in late January 2026 that still requires further parliamentary steps, including Senate consideration, before becoming final.
- Germany — discussion stage (under 16 proposed): Conservative leaders have floated a ban, but partners in the governing coalition have expressed reservations about a blanket prohibition.
- Greece — close to announcement (under 15): Officials signalled plans to follow other EU states in restricting access for younger teens.
- Malaysia — announced plan (under 16): Government declared intentions in November 2025 to roll out age-based restrictions during 2026.
- Slovenia — drafting law (under 15): Draft legislation targets social networks that host user-shared content, citing apps like TikTok and Instagram.
- Spain — proposed (under 16): Prime Minister announced plans requiring parliamentary approval; the government is also pushing measures to increase executive accountability for platform content.
- United Kingdom — under review (under 16 under consideration): Officials will consult parents, young people and civil groups and may require platforms to curb features that promote compulsive use, such as infinite scrolling.
What these laws would require — and why enforcement is hard
At the heart of the debate is how to verify a user’s age without compromising privacy. Governments are asking social networks to adopt layered verification — not just self-reported birthdays — which could include government ID checks, third-party authentication apps, or technology-based signals.
Privacy groups argue these methods risk collecting sensitive personal data and could be misused. Amnesty Tech and others have warned that mandatory ID checks may create surveillance risks and could disproportionately impact marginalized communities.
The technical reality is messy: teenagers determined to bypass restrictions can use VPNs, alternative apps, or fake documents. Platforms will need to balance accuracy with data minimization; regulators will face pressure to identify acceptable, proportionate verification methods.
Consequences for platforms and families
For major tech companies, the new rules mean large-scale compliance work: building verification flows, auditing youth-safety features and potentially accepting financial penalties. For parents and teenagers, restrictions could push communication toward private messaging apps or smaller, less-regulated services.
That migration poses its own risks: private channels can be harder to moderate, and younger users may lose access to educational or support communities hosted on mainstream platforms.
Policy trade-offs are clear. Stronger barriers may reduce exposure to harmful public content, but they also raise questions about digital inclusion, adolescents’ rights to information and the practicality of a global patchwork of rules.
Looking ahead
Expect continued political debate and legal challenges. Several measures are still passing through parliaments or awaiting implementation details; others depend on technological answers that satisfy both safety advocates and privacy defenders.
Key milestones to watch in the coming months: further parliamentary votes in Europe, the rollout of any government-backed age-verification tools, and announcements from major platforms about how they will adapt their sign-up and moderation systems.
Whatever the outcome, these developments mark a turning point in how democracies seek to govern the online lives of young people — and they will shape the rules tech companies, families and courts must follow for years to come.