Now that Children’s Mental Health Week is here, it’s time to ask: how are young people navigating a world that feels more complex and more permanently online than ever before?
One in six adolescents in the UK now reports experiencing a mental health disorder, while demand for professional care continues to far outstrip supply. Social media, academic pressures, and growing isolation combine to intensify stress, leaving many young people searching for help wherever they can find it.
It is in this environment that AI chatbots like ChatGPT or Snapchat’s My AI have begun to play a role in young people’s emotional lives. Offering instant responses, anonymity, non-judgemental interactions, and round-the-clock availability, these tools are undeniably appealing. But their growing use raises deeper questions about what we value in mental health care, the role of human connection, and how regulation must evolve alongside technology.
The appeal of AI is clear. Long waiting lists, high costs, and strict eligibility criteria can make accessing therapy feel out of reach. In a culture that prioritises perfection and success – amplified by social media’s carefully curated projections of constant happiness and achievement – stigma and fear of judgment can make reaching out to a counsellor feel especially daunting.
In these moments, AI offers something that feels radically different: a private space where questions are met without judgement, advice is instant, and “support” is accessible at any hour. For a teenager lying awake at 2am, that immediacy and anonymity can feel transformative.
Yet this is where the tension lies. Technology can mimic conversation, but it cannot replicate empathy, human connection, or the ethical responsibility that trained professionals bring to sensitive situations.
A 2025 study by the Youth Endowment Fund, surveying nearly 11,000 children aged 13 to 17 across England and Wales, reveals why these tools appeal to young people. Some reported that chatbots “calm them down and give them more confidence”. Others described how the anonymity made it easier to ask questions they would never voice in person. One respondent reflected: “I know it’s not a real person, but it shows affection”.
These insights should act as a wake-up call: our healthcare and community mental health support systems are failing to listen, leaving too many young people to seek care in artificial and unregulated digital spaces rather than through responsible human connection.
Two trends stand out. First, the mounting pressures of modern life are taking a clear toll on young people’s mental health. Academic demands, relentless doom-scrolling through increasingly polarised and toxic social media, and the constant pressure to perform, compare, and succeed leave little room for rest or reflection.
Second, AI is rapidly becoming a space for support, despite being largely unregulated and untested in emotional or mental health contexts. Chatbots respond to every prompt we give them, from drafting an email to offering personal advice, but they do so without any genuine understanding of human emotional and psychological complexity. This responsiveness can make them seem like a lifeline, while masking the limits of their comprehension.
As the Youth Endowment Fund study shows, AI tools cannot reliably detect high-risk situations such as self-harm or abuse. Unlike licensed professionals, they cannot intervene in moments of crisis, potentially leaving vulnerable young people unsupported.
This raises urgent questions. Should policymakers require AI tools to include monitoring or alarm systems capable of flagging serious risk? How do we balance safeguarding with privacy? These are difficult issues, but in a landscape where AI-based support tools are only going to grow, regulation and oversight are not optional – they are essential.
Still, this need not be a pessimistic conversation. The rise of AI presents a real opportunity to rethink how mental health support is delivered and improved, both inside and outside clinical spaces.
Imagine schools where regulated AI tools could provide approved, interactive lessons on emotions and wellbeing, helping students explore their mental health without fear or stigma. Used responsibly, these tools could act as a first line of support, normalising conversations about mental health, while human professionals remain central to assessment, intervention, and care.
The hope for the future is tangible: a generation of young people growing up with tools that extend the reach of human care rather than replace it, and a system where seeking help is normal, accessible, and effective.
It leaves us with one final question: how do we build a mental health ecosystem that combines the scale and accessibility of AI with the irreplaceable value of professional human support?
Driving positive change
At Whitehouse Communications, we help organisations navigate complex regulatory landscapes and engage strategically with policymakers.
For more information, please get in touch at info@whitehousecomms.com.