AI frontiers: Exploring UK and EU’s approaches to AI regulation

The EU will soon introduce its AI Act, the world's first comprehensive regulation of rapidly developing AI technology, which is expected to take full legal effect from spring 2026. The UK, on the other hand, has been less eager to legislate on AI, citing concerns that a prescriptive approach would 'stifle innovation'. Conventional wisdom suggests that the AI Act could have a 'Brussels effect' – the indirect influence on other jurisdictions to align with EU law – much as the EU's GDPR, which took full effect in 2018, has become the global gold standard for data protection.

That said, post-Brexit the UK has diverged from the EU in a number of policy areas. On AI, the jury is still out on whether the UK will converge with the AI Act: the UK has embraced a 'pro-innovation' approach that aligns with the EU's more rigid regulatory framework in some respects and differs from it in others.

The UK and EU approaches are similar insofar as they both adopt a ‘risk-based’ approach to the regulation of AI, although the EU’s AI Act provides a prescriptive legal framework which categorises risk levels, whereas the UK’s principles are far looser and rely on regulators to assess AI-specific risks as they see fit.

The UK's policy takes a decentralised, vertical approach: five 'principles', with an emphasis on safety and transparency, guide sector-specific regulators in managing AI within their areas of expertise. For example, the Financial Conduct Authority (FCA) will be expected to regulate AI in the financial services sector. This is a relatively loose, non-legislative approach compared to the EU AI Act, which introduces four risk categories for AI models – 'minimal/none', 'limited', 'high' and 'unacceptable' – the last of which will be banned.

The AI Act, on the other hand, adopts a centralised and horizontal approach: authority largely rests at EU rather than Member State level, and its rules apply to AI across all sectors. The latest text introduces maximum fines of up to 35 million euros or 7% of global annual turnover, whichever is higher, for the most severe breaches – the use of prohibited AI practices. The UK's approach, by contrast, attaches no financial penalties to violations.

The full scale of AI's impact is hard to quantify, but the technology is expected to touch almost every sector, not least agrifood. The European Parliament approved the AI Act on 13th March, meaning it remains only for the Council to provide its seal of approval – expected in the coming weeks – for the regulation to be finalised. In the UK, sector-specific regulators will publish their AI strategy plans by 30th April, providing companies with clarity on their obligations.

With AI technology set to advance rapidly in the coming months and years, the application of the UK's and EU's regulatory approaches will have extensive ramifications for a vast array of businesses on both sides of the Channel.

At Whitehouse Communications, our team of experts are driven to help clients navigate the complicated worlds of tech and communication policy. Your issues are our issues. We want to help your organisation deliver significant policy and regulatory changes. Whether you’re working in the UK or the EU – get in touch with us to discuss how we can help.