State of the Media: Online Safety in an AI Era – Can the Law Keep Up?

The Online Safety Act has been at the centre of a media storm this month, following revelations that Grok AI has been used at scale to generate sexualised images of women and children online without their consent.  

The scale and severity of the issue provoked a widespread reaction and, in doing so, reopened fundamental questions about the effectiveness of the Online Safety Act as a mechanism for regulating online harm.  

The Act came into force less than a year ago. At its core is a significant shift in responsibility: tech companies are now legally required to protect user safety by preventing and removing illegal content, with Ofcom empowered to levy fines of up to £18 million or 10% of global turnover, whichever is greater, for breaches. The Act was built around a clear model: platforms host content, users upload it, and regulators intervene when harm occurs.  

The rapid rise of generative AI has disrupted this logic. Under the Act, it is illegal to create or share explicit images without consent.  

However, the waters become murkier when the creator is an automated system embedded within the platform itself. Questions of liability become harder to answer when platforms like X look less like neutral intermediaries and more like content engines in their own right. 

The government’s response to this issue is telling, both in what it reveals about how the Act may work in practice and in where the cracks are starting to show. 

Secretary of State for Science, Innovation and Technology Liz Kendall moved quickly. In a statement to the Commons, she confirmed that a ban on AI nudification tools would be brought forward through amendments to the Data (Use and Access) Act, and that the supply of such tools would be made illegal under the Crime and Policing Bill. In the same statement, she also announced that the government would consult on a potential ban on social media use for under-16s.  

These interventions were strong, but they also prompt an uncomfortable question. Why is it only after a high-profile failure that government is able to act with such speed and clarity? As AI technologies develop at pace, policymakers face a growing challenge: how to get ahead of emerging harms rather than responding once damage has already occurred.  

There are also broader questions about whether there is scope for the government to go further. Ministers have previously resisted calls to explicitly regulate generative AI platforms in line with other services that pose a high risk of producing or spreading illegal or harmful content, instead favouring a pro-innovation approach designed to attract investment and talent to the UK. Yet as generative AI technologies continue to develop, is there a risk that this approach could leave gaps in oversight, particularly for groups most vulnerable to harm?  

Ofcom’s response to the Grok controversy also highlights a growing tension between the regulator and the government. Its intervention was notably firmer than would have been possible before the Online Safety Act, when enforcement relied largely on voluntary cooperation. Since the Act came into force, Ofcom has already taken enforcement action against smaller pornography websites over failures to implement adequate age-verification measures. 

However, taking on a global platform with an embedded AI system presents a very different challenge. Testing the strength of the Act, and Ofcom’s ability to enforce it, against a major tech player is a far more complex and politically sensitive exercise.  

This pressure has not gone unnoticed. Ofcom has faced criticism from both opposition figures and ministers, with Liz Kendall warning that the regulator risks losing public trust if it does not accelerate implementation of the Act. There have even been calls for Ofcom to introduce formal sanctions should enforcement continue to lag. 

This raises a crucial question: does Ofcom have the resources, authority, and appetite to take on the largest technology companies, at a time when expectations around online safety enforcement have never been higher? 

The Grok controversy and the government’s response show that the ground is already shifting before the Online Safety Act has had time to settle. The legislation was designed for an internet of posts and platforms, not one of models and machines.  

The government’s cautious stance on proposals to ban social media for under-16s may offer some indication of the direction of travel. Ministers have argued that a blanket ban risks driving harmful behaviour underground, making it harder to detect and regulate. 

Ultimately, the success of this approach will depend on robust and confident regulation by Ofcom, alongside the development of clear guardrails that protect users while still allowing space for responsible innovation. Without this, the gap between regulation and technological capability will continue to widen.  

Driving Positive Change 

At Whitehouse Communications, we help organisations operating at the cutting edge of technology and media navigate complex regulatory landscapes and engage strategically with policymakers.  

Our cross-sector expertise and deep understanding of government priorities enable us to support businesses in shaping their messaging, anticipating regulatory change, and positioning themselves as trusted partners in policy development.  

If you would like to understand what these developments mean for your organisation, please get in touch with our specialist team at info@whitehousecomms.com.