The Online Safety Act has been at the centre of a media storm this month, following revelations that Grok AI has been used at scale to generate sexualised images of women and children online without their consent.
The scale and severity of the issue provoked a widespread reaction and, in doing so, reopened fundamental questions about the effectiveness of the Online Safety Act as a mechanism for regulating online harm.
The Act came into force less than a year ago. At its core is a significant shift in responsibility: tech companies are now legally required to protect user safety by preventing and removing illegal content, with Ofcom empowered to levy fines of up to £18m or 10% of global turnover, whichever is greater, for breaches. The Act was built around a model that seemed clear at the time: platforms host content, users upload it, and regulators intervene when harm occurs.
The rapid rise of generative AI has disrupted this logic. Under the Act, it is illegal to create or share explicit images without consent.
However, the waters become murkier when the creator is an automated system embedded within a platform itself. Questions of liability become harder to answer when platforms like X look less like neutral intermediaries and more like content engines in their own right.
The government’s response to this issue is telling, both for how the Act may work in practice and for where the cracks are starting to show.
Secretary of State for Science, Innovation and Technology Liz Kendall moved quickly. In a statement to the Commons, she confirmed that a ban on AI nudification tools would be brought forward through amendments to the Data (Use and Access) Act, and that the supply of such tools would be made illegal under the Crime and Policing Bill. In the same statement, she also announced that the government would consult on a potential ban on social media use for under-16s.
These interventions were strong, but they also prompt an uncomfortable question. Why is it only after a high-profile failure that government is able to act with such speed and clarity? As AI technologies develop at pace, policymakers face a growing challenge: how to get ahead of emerging harms, rather than responding once damage has already occurred.
There are also broader questions about whether there is scope for the government to go further. Ministers have previously resisted calls to explicitly regulate generative AI platforms in line with other services that pose a high risk of producing or spreading illegal or harmful content, instead favouring a pro-innovation approach designed to attract investment and talent to the UK. Yet as generative AI technologies continue to develop, is there a risk that this approach could leave gaps in oversight, particularly for groups most vulnerable to harm?
Ofcom’s response to the Grok controversy also highlights a growing tension between the regulator and the government. Its intervention was notably firmer than would have been possible before the Online Safety Act, when enforcement relied largely on voluntary cooperation. Since the Act came into force, Ofcom has already taken enforcement action against smaller pornography websites over failures to implement adequate age-verification measures.
However, taking on a global platform with an embedded AI system presents a very different challenge. Testing the strength of the Act, and Ofcom’s ability to enforce it, against a major tech player is a far more complex and politically sensitive exercise.
This pressure has not gone unnoticed. Ofcom has faced criticism from both opposition figures and ministers, with Liz Kendall warning that the regulator risks losing public trust if it does not accelerate implementation of the Act. There have even been calls for Ofcom to introduce formal sanctions should enforcement continue to lag.
This raises a crucial question: does Ofcom have the resources, authority, and appetite to take on the largest technology companies, at a time when expectations around online safety enforcement have never been higher?
The Grok controversy and the government’s response show that the ground is already shifting before the Online Safety Act has had time to settle. The legislation was designed for an internet of posts and platforms, not one of models and machines.
The government’s cautious stance on proposals to ban social media for under-16s may offer some indication of the direction of travel. Ministers have argued that a blanket ban risks driving harmful behaviour underground, making it harder to detect and regulate.
Ultimately, the success of this approach will depend on robust and confident regulation by Ofcom, alongside the development of clear guardrails that protect users while still allowing space for responsible innovation. Without this, the gap between regulation and technological capability will continue to widen.
Driving Positive Change
At Whitehouse Communications, we help organisations operating at the cutting edge of technology and media navigate complex regulatory landscapes and engage strategically with policymakers.
Our cross-sector expertise and deep understanding of government priorities enable us to support businesses in shaping their messaging, anticipating regulatory change, and positioning themselves as trusted partners in policy development.
If you would like to understand what these developments mean for your organisation, please get in touch with our specialist team at info@whitehousecomms.com.
