Protecting or excluding? How Europe is rethinking children’s lives online

In 2026, momentum is building across Europe to restrict children’s access to social media. Driven by growing concerns over mental health, online harms, and addictive platform design, governments and regulators are increasingly turning to age-based bans and stricter verification controls. Yet while the intention is clear, the effectiveness and wider consequences of these measures remain deeply contested.

At the centre of the EU’s approach is a proposed age-verification system announced by the European Commission in April 2026, set to launch “soon”. Designed as a privacy-preserving solution, the system would allow users to provide their age without directly sharing personal data with platforms.

In practice, however, the process is far from seamless. Users would need to download an app, consent to data use, scan an identity document (including its embedded chip), and complete facial recognition checks. Platforms could require verification each time users attempt to access age-restricted services.
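To make the privacy-preserving idea concrete, the sketch below shows – in heavily simplified form – how a verification app could issue a signed, minimal age claim that a platform checks without ever seeing the user’s identity document or date of birth. This is an illustrative assumption, not the Commission’s actual protocol: the issuer, claim format, and flow are invented for the example (Python, using the third-party cryptography package).

```python
# Conceptual sketch of a privacy-preserving age attestation.
# NOT the Commission's specification: the issuer, claim format,
# and flow here are illustrative assumptions only.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# 1. A trusted verification app checks the user's identity document
#    and facial scan locally, then issues a signed, minimal claim.
issuer_key = ed25519.Ed25519PrivateKey.generate()
claim = json.dumps({"over_minimum_age": True}).encode()  # no name, no birthdate
signature = issuer_key.sign(claim)

# 2. The platform receives only the claim and its signature, and verifies
#    them against the issuer's published public key. It learns a single
#    boolean, never the user's identity or exact age.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, claim)
    print("Age attested:", json.loads(claim)["over_minimum_age"])
except InvalidSignature:
    print("Attestation rejected")
```

In any real deployment such a token would also need expiry times, per-request nonces, and binding to the user’s device to prevent sharing or replay – which is precisely where much of the complexity criticised below arises.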

While the system aims to balance privacy with protection, critics have raised several concerns: its technical complexity, its potential privacy implications, and the relative ease with which it could be circumvented through tools like VPNs. There are also fears that such systems shift responsibility away from platforms and onto individual users – a stark contrast to broader regulatory frameworks that hold companies accountable for building safety into their products by design.

Alongside EU-level efforts, individual member states are pressing ahead with their own restrictions. France has already legislated a ban on social media use for children under 15, framing it as a response to a public “health emergency.” Greece has announced similar plans, with legislation expected to take effect in 2027, requiring strict age verification and imposing penalties on platforms that fail to comply.

Other countries, including Spain, Austria, Denmark, Slovenia, Ireland, and Italy, are exploring or preparing comparable measures, typically targeting users between 14 and 16 years old. These initiatives are shaped in part by Australia’s landmark 2025 under-16 ban – the first of its kind in the world – and by mounting political pressure to hold large tech companies to account.

Supporters of restrictions argue that social media poses clear and escalating risks to children. Platforms are engineered to maximise engagement, often through addictive features, constant connectivity, and AI-powered personalised content feeds designed to keep young users hooked.

The scientific picture, however, is more nuanced. Research broadly agrees that screen time cannot be characterised as simply good or bad. While some studies link heavy social media use to adverse mental health outcomes, particularly among vulnerable adolescents, there is no clear consensus that social media directly causes mental health disorders. Its effects vary considerably depending on the type of activity, the duration and timing of use, and the individual characteristics of the child. Research has shown that positive uses of social media, such as connecting with friends, authentic self-expression, and engaging with non-idealised content, can support wellbeing and even foster a sense of flourishing.

That said, a strong correlation between excessive use – typically defined as more than a couple of hours per day – and poor mental health outcomes has been consistently documented. Research from Imperial College London suggests these effects are largely driven by disrupted sleep: greater social media use, particularly in the evenings, reduces sleep duration and quality, with lasting consequences for children’s mental health. Research from the European Commission’s Joint Research Centre (JRC) reinforces this picture, highlighting clear links between excessive social media use and increased anxiety and depression among young people. Exposure to harmful content, cyberbullying, and behaviours such as doomscrolling all contribute to poorer mental health outcomes, particularly for more vulnerable groups. Content related to self-harm and body image is of particular concern, with algorithms frequently amplifying these risks.

The impacts are not limited to mental health. Screen use is strongly associated with disrupted sleep patterns, and emerging – if less conclusive – evidence points to physical and developmental effects as well. The relationship between social media and health is complex and often bidirectional, but the evidence suggests that how young people engage with these platforms, and what they are exposed to, matters significantly.

These nuances suggest that the issue is not social media itself, but how, why, and by whom it is used. As a result, broad, one-size-fits-all bans may fail to address underlying problems such as cyberbullying, social isolation, or algorithmic amplification of harmful content.

Despite these concerns, many experts caution that outright bans may be a blunt and potentially counterproductive tool.

Digital technologies have been woven into children’s lives for over two decades. Social media and online platforms offer genuine opportunities for learning, socialisation, creativity, and access to information. Severing that access entirely risks excluding young people from an increasingly essential dimension of modern life.

Bans also do not necessarily eliminate risk. They may instead drive children toward less regulated or underground digital spaces, where safeguards are weaker and harms harder to detect.

There are also legitimate concerns around rights and proportionality. Blanket restrictions may conflict with children’s right to participate in decisions that affect them – as enshrined in the UN Convention on the Rights of the Child. Critics argue that policies imposed without consulting young people risk undermining their autonomy and missing the mark on their actual needs.

Rather than relying solely on bans, many experts advocate for a more comprehensive strategy that balances protection with inclusion. This includes stronger platform regulation – requiring companies to build safer environments by default through a “safety-by-design” approach – alongside clear and enforceable accountability standards specific to children.

Education is another key pillar. Expanding digital literacy programmes for children, parents, and educators can help young people navigate online spaces more safely and critically.

And across the board, policymakers are urged to pursue evidence-based interventions tailored to different age groups and levels of vulnerability, rather than imposing blanket restrictions. Consulting children themselves is also essential to ensure policies reflect their experiences and needs.

The current wave of social media restrictions reflects legitimate and urgent concerns about children’s well-being in a rapidly evolving digital landscape. But exclusion alone does not guarantee protection. A safer internet for children is more likely to emerge not from locking them out, but from reshaping the environments they enter – through smarter regulation, more responsible design, and better-informed use. As Europe moves forward, the central challenge will be to strike the right balance: shielding children from harm while preserving their access to the opportunities and rights that define life in the digital age.

Experts in effecting change

The Whitehouse team are expert political consultants, providing public affairs advice and political analysis to a wide range of organisations in the UK and across the EU.

Whether you’re working in the UK, EU, or both – get in touch with us to discuss how we can support your business: info@whitehousecomms.com.