LawTeacher.com

Sectoral regulators and AI: the UK’s multi-agency governance model

May 7, 2025


Artificial intelligence (AI) presents complex opportunities and risks that cut across many sectors of the economy. Governments are debating whether to regulate AI through a single dedicated authority or by empowering existing regulators in each sector. The United Kingdom has chosen a decentralised, multi-agency model for AI governance, overseeing AI through its existing sectoral regulators rather than creating a new AI-specific regulator. This essay analyses the UK’s approach and the roles of key regulators – including the Information Commissioner’s Office (ICO), Financial Conduct Authority (FCA), and Medicines and Healthcare products Regulatory Agency (MHRA) – in issuing AI guidance. It also discusses how coordination is achieved through initiatives like the Digital Regulation Cooperation Forum (DRCF). The benefits and challenges of the UK model are examined, followed by a brief comparison with a centralised AI authority model such as the European Union’s newly established AI Office.

Background

The UK government’s policy is to integrate AI oversight into the existing regulatory landscape, aiming for a “pro-innovation” and flexible framework instead of new heavy-handed legislation​. In March 2023, the Department for Science, Innovation and Technology (DSIT) published the AI Regulation White Paper outlining five cross-sector principles for AI: (1) safety, security & robustness, (2) transparency & explainability, (3) fairness, (4) accountability & governance, and (5) contestability & redress​. Crucially, rather than immediately enshrining these principles in statute, the government chose to issue them on a non-statutory basis to be implemented by existing regulators​. The rationale was that each sectoral regulator has domain-specific expertise and is “best placed to take a proportionate approach to regulating AI” within its field​. This avoided rushing into broad new laws that might stifle innovation in a fast-moving field​.

Under this model, regulators are expected to interpret and apply the AI principles within their sector through guidance and oversight. Initially there is no legal mandate to do so, but the government signalled that “when parliamentary time allows” it may introduce a statutory duty requiring regulators to “have due regard” to the AI principles. Such a duty would reinforce regulators’ mandates without creating a new AI regulator; the government has indicated it will go further only if the voluntary approach proves ineffective. This staged approach reflects the UK’s preference for iteration and flexibility – regulators will start with guidance and voluntary coordination, and only later might legislation formalise their responsibilities.

Several government bodies and forums support this framework. DSIT has established central support functions (sometimes termed an AI Regulatory “Hub” or coordination layer) within government to monitor risks, scan for regulatory gaps, and facilitate cooperation among regulators. Importantly, no new AI-specific agency is being created. Instead, coordination is enhanced through bodies like the Digital Regulation Cooperation Forum (DRCF) – a voluntary alliance of regulators with digital economy remits (ICO, FCA, Ofcom, and the Competition & Markets Authority (CMA)) working together on overlapping issues such as online algorithms. The government’s approach was also informed by consultations and a recognition that AI is already partly regulated by existing laws (for example, data protection, consumer protection, equality, financial services law). However, stakeholders warned that gaps and inconsistencies could arise without clear coordination. This led to initiatives such as asking each major regulator to publish an AI strategy by April 2024 and DSIT’s issuance of Initial Guidance for Regulators on implementing the AI principles.

In summary, the UK’s model as of 2024 is sector-led AI governance under a common set of principles, backed by central guidance and inter-agency cooperation, but without an overarching AI regulator or a single “AI Act”. This stands in contrast to more centralised proposals elsewhere, which will be compared later. First, we examine the roles of key UK regulators and how they are addressing AI within their regulatory domains.

Key Regulatory Bodies and Their Roles

Information Commissioner’s Office (ICO)

The ICO is the UK’s independent information rights authority, which enforces data protection law (including the UK General Data Protection Regulation and the Data Protection Act 2018). Given that much of modern AI involves processing personal data, the ICO plays a pivotal role in AI governance. It has effectively become a de facto AI regulator in areas related to data privacy and algorithmic accountability. The ICO’s approach has been to apply existing data protection principles to AI systems and issue practical guidance to organisations deploying AI. For example, the ICO released extensive Guidance on AI and Data Protection (updated March 2023) to help organisations comply with privacy, fairness, transparency and security requirements when using AI. This guidance aligns with the government’s AI principles – indeed the ICO noted that principles like transparency, fairness and accountability in the UK’s AI White Paper closely mirror long-standing data protection principles.

In its 2024 AI strategy (“Regulating AI: the ICO’s strategic approach”), the ICO confirmed that it can oversee AI throughout its lifecycle wherever personal data is involved. The ICO asserted that data protection law is technology-neutral and risk-based, so it already provides a toolkit to address AI’s novel risks (e.g. bias or opacity) without needing AI-specific statutes. Notably, UK law gives individuals rights against solely automated decisions with significant effects – Article 22 of the UK GDPR limits such AI-driven decisions and requires human review in many cases, providing a legal check on high-risk AI uses (see also Data Protection Act 2018, section 14). The ICO can enforce these provisions. For instance, if an insurance company or employer uses AI to make fully automated decisions, it must ensure compliance with fairness, transparency and the ability for individuals to contest those decisions, as required by data protection law. The ICO has powers to investigate and sanction organisations for AI-related data breaches or unfair processing. It has already conducted audits of AI systems (e.g. auditing AI-driven recruitment tools for compliance with equality and privacy law) and issued guidance on AI auditing and risk management (ICO, 2020, AI Auditing Framework).

Collaboration is also a key part of the ICO’s role. The ICO is an active member of the DRCF, working with other regulators on cross-cutting AI issues such as algorithmic transparency. In the ICO’s words, it “continues to engage with the UK government, along with our partners within the Digital Regulation Cooperation Forum (DRCF), on … broader proposals on regulatory reform” in AI​. By partnering with sector regulators, the ICO ensures that data protection considerations (like privacy and bias mitigation) are embedded in sector-specific AI guidance. For example, in medical AI, the ICO coordinates with the MHRA and NHS bodies to address patient data protection and trust in AI-driven devices​. Through such efforts, the ICO helps shape a consistent ethical and legal approach to AI across sectors, without being a sole “AI authority” itself.

Financial Conduct Authority (FCA)

The FCA is the UK’s financial conduct regulator, with a mandate to ensure consumer protection, market integrity and competition in financial services (Financial Services and Markets Act 2000). It oversees banks, insurers, asset managers and other firms – many of which are rapidly adopting AI for purposes like credit scoring, algorithmic trading, fraud detection, and customer service. Rather than issuing new AI-specific regulations, the FCA’s approach is to apply existing financial regulations and principles to AI, and to provide guidance on its expectations. In April 2024, the FCA published an AI Update outlining its strategy for AI following the government’s AI White Paper. The FCA welcomed the UK’s principles-based, sector-led model and noted that it is a “technology-agnostic, principles-based and outcomes-focused regulator”. In practice, this means the FCA does not treat AI uniquely; instead, it requires firms deploying AI to meet the same regulatory standards for risk management, transparency, fairness and accountability that apply to any other process.

The FCA has indicated that existing rules cover many AI issues. For example, the Senior Managers and Certification Regime (SM&CR) – which holds top executives accountable for a firm’s controls – applies to AI usage. The FCA and the Bank of England have explicitly considered whether a dedicated “AI responsible” senior manager role is needed, but even without one, all senior managers are responsible for risks in their area, which includes AI systems​. This ensures clear accountability for AI outcomes within financial firms. Similarly, if an AI algorithm makes decisions that affect consumers (such as loan approvals or insurance pricing), the FCA expects firms to ensure those decisions are explainable and non-discriminatory, in line with existing requirements (e.g. the FCA’s Treating Customers Fairly principle and Equality Act 2010 duties). The FCA has emphasised that firms should be able to explain their AI to regulators and evidence that it is deployed responsibly (FCA, 2022, AI in Financial Services Discussion Paper).

To support innovation, the FCA is also leveraging tools like regulatory sandboxes and TechSprints, where AI developers can test new AI-driven financial products under regulatory oversight. This helps identify regulatory adjustments needed for AI. The FCA’s coordination through the DRCF is particularly notable. The DRCF allows the FCA to collaborate with the CMA on issues like AI in advertising or with Ofcom on online fraud, recognising that digital markets often span multiple regulators. For instance, the FCA and CMA have jointly studied the impact of AI machine learning models on consumers, and the FCA is working with DRCF partners on the pilot “AI and Digital Hub”. The FCA’s work with the Bank of England (which regulates systemic financial risks via the Prudential Regulation Authority) is also key – together they have conducted surveys on AI adoption in finance and published material on managing the risks of AI models (Bank of England & FCA, 2022). In summary, the FCA’s role is to ensure that as financial firms embrace AI, they continue to comply with existing laws and that AI innovations do not undermine consumer protection or market fairness. Its issuance of AI guidance (such as the 2024 AI Update) and participation in cross-regulatory forums demonstrate the multi-agency model in action for the financial sector.

Medicines and Healthcare products Regulatory Agency (MHRA)

The MHRA regulates medicines and medical devices in the UK. With the rise of AI in healthcare – from diagnostic algorithms to AI-driven medical devices – the MHRA has taken the lead in ensuring these technologies are safe and effective under the existing medical device regime. In UK law, many AI systems used for medical purposes qualify as medical devices and thus fall under the Medical Devices Regulations 2002 (as amended) and related statutes (reinforced by the Medicines and Medical Devices Act 2021). The MHRA has explicitly adapted its regulatory approach to address AI. It launched a “Software and AI as a Medical Device Change Programme”, with a Roadmap published in 2022, to clarify requirements for AI and software in healthcare. This includes developing guidance on key AI-specific issues like transparency, interpretability and adaptivity of AI in medical devices. The aim is to ensure manufacturers of AI-based devices understand how to meet legal safety requirements and to maintain public trust in AI health solutions.

Concretely, the MHRA has issued guidance documents (most recently updated in 2025) on how to determine if software or an AI system is a medical device, how to classify its risk, and what regulatory pathway it must follow​. For example, an AI diagnostic tool must undergo conformity assessment and obtain UKCA marking before deployment, just as any other medical device. The MHRA’s Software Group works across the agency to assure the safety of software/AI devices by reviewing technical files, monitoring post-market performance, and engaging with developers and hospitals​. A core objective is “ensuring medical device regulation is fit for purpose [and] meets the needs of software as well as AI, and is supported by robust guidance”​. This illustrates the sectoral regulator updating its framework (within existing law) to cover AI.

The MHRA also collaborates extensively: it has partnered internationally with the U.S. FDA and Health Canada to establish guiding principles for Good Machine Learning Practice in medical devices. Domestically, the MHRA coordinates with bodies like the ICO and the National Data Guardian on matters where AI in devices intersects with data protection and ethics. There is also an AI and Digital Regulations Service for health and social care – delivered by the MHRA together with NICE, the Care Quality Commission and the Health Research Authority – that links regulators in the health sector to provide unified guidance, another example of multi-agency coordination in a specific domain. Notably, even in this high-stakes sector, the UK chose not to create a separate “AI health regulator” but to empower the MHRA to handle AI with support from others. The case of AI as a medical device was highlighted in the UK’s AI White Paper as a model of a regulator proactively adapting to AI innovation. By developing targeted guidance (for instance on ensuring an adaptive AI’s changes do not compromise safety), the MHRA seeks to maintain patient safety under existing legal powers, demonstrating the multi-agency model’s flexibility.

Other Regulators and Coordination (DRCF)

Beyond the ICO, FCA, and MHRA, several other regulators are part of the UK’s AI governance tapestry. The Competition and Markets Authority (CMA) examines AI impacts on competition – for example, investigating whether algorithms could facilitate price-fixing or whether Big Tech companies’ deployment of AI foundation models could create market dominance. The Office of Communications (Ofcom), which regulates telecoms and online content, is looking at AI in the context of internet safety and broadcasting (for instance, the role of AI recommendation systems and content moderation under the Online Safety Act 2023, which Ofcom will enforce). Rather than each of these acting in isolation, the UK emphasises joined-up regulation through the Digital Regulation Cooperation Forum (DRCF).

The DRCF was established in 2020 as a collaborative forum bringing together the ICO, CMA, Ofcom, and later the FCA, “to work together in regulating online services” and digital technology issues​. It aims to harness the collective expertise of multiple regulators when issues of data, privacy, competition, and content intersect​. AI is a prime example of such an intersection, and the DRCF has made AI one of its priority areas. Through the DRCF, regulators coordinate research (for example, a joint study on algorithmic transparency), publish joint guidance where appropriate, and identify any regulatory gaps between them. The DRCF’s 2023 workplan included an Algorithmic Processing workstream to ensure coherent approaches to AI across data protection, competition law and consumer protection. By facilitating information-sharing and collaborative projects, the DRCF helps prevent inconsistent or duplicative regulation of AI.

For instance, if the CMA examines competition issues in AI-driven digital advertising, it can draw on the ICO’s expertise in algorithmic transparency to ensure any remedies also respect privacy. Likewise, the FCA and CMA have coordinated via the DRCF on AI in financial advertising and fraud, aligning their interventions. The government’s Initial Guidance for Regulators (Feb 2024) explicitly encouraged use of forums like the DRCF to achieve the consistent application of AI principles across sectors. The need for such coordination is evident in industry feedback: businesses have called for “further system-wide coordination to clarify who is responsible for addressing cross-cutting AI risks”. In response, the UK’s multi-agency model relies on cooperation mechanisms (like the DRCF and others in specific sectors) to knit together the patchwork of regulators into a coherent oversight regime. This coordination is what makes the decentralised model workable, as discussed next in evaluating its advantages and limitations.

Benefits of the Multi-Agency Model

The UK’s decision to regulate AI through existing sectoral regulators offers several notable benefits:

1. Leverages Domain Expertise: Each regulator brings deep understanding of its sector’s technology, risks and industry practices. This domain-specific expertise allows more tailored and nuanced oversight of AI uses in that sector​. For example, the MHRA’s familiarity with medical safety standards is invaluable in evaluating an AI diagnostic tool, just as the FCA’s knowledge of financial risk controls is crucial for AI trading algorithms. A one-size-fits-all AI regulator might lack this specialised insight. The UK model “makes use of regulators’…expertise to tailor implementation [of AI principles] to the specific context in which AI is used”​. This should lead to more proportionate, relevant rules for AI in each domain (e.g. finance-focused guidance from the FCA, health-focused guidance from MHRA, etc.), rather than overly general rules.

2. Avoids Regulatory Duplication and Keeps the Framework Flexible: By not creating a new bureaucracy, the UK avoids duplicating regulatory structures. Existing regulators can integrate AI oversight into their current activities, which is efficient and avoids confusion over jurisdiction. The government explicitly chose not to “duplicate the work undertaken by regulators” and not to “create a new AI regulator”​. Instead, a light “coordination layer” supports them​. This approach is inherently flexible – regulators can experiment with guidance, sandboxing, and soft law approaches, and the central government can monitor outcomes. If the landscape changes (as AI technology undoubtedly will), the framework can evolve iteratively. The absence of rigid new legislation at the outset means the UK can adapt its policy faster if needed. The principles-based approach is also inherently flexible; it provides direction (e.g. “AI should be fair, transparent”) without hard-coding detailed rules that might quickly become obsolete. Regulators have latitude to interpret these principles in ways that make sense for their sector. This agility is touted as a competitive advantage for the UK in attracting AI innovation​.

3. Encourages Innovation and Reduces Compliance Burden (Proportionality): A central theme for the UK is ensuring AI regulation is “pro-innovation” and proportionate. Using existing regulators means that, initially, no entirely new compliance regimes are imposed on businesses – companies largely continue dealing with the regulators and laws they already know, now with additional AI guidance. The government was concerned that rushing in new AI laws could “place undue burdens on businesses” and stifle innovation​. The multi-agency model, coupled with non-statutory guidance, was seen as a way to minimise bureaucratic burden in the early stages. Firms developing AI in the UK face principles and guidelines rather than immediately enforceable new rules, giving them flexibility in how to comply. This is complemented by innovation-friendly initiatives like regulatory sandboxes (e.g. the FCA and ICO have AI sandboxes) which help companies trial AI solutions under regulator guidance rather than fear punitive action. The UK approach tries to strike a balance: address risks when necessary but “support innovation and work closely with business” in doing so​. This is intended to make the UK an attractive environment for AI development, in contrast to jurisdictions with very strict upfront requirements. From a proportionality standpoint, regulators can calibrate their interventions based on actual risk evidence in their sector, rather than a blanket approach. For instance, the ICO might focus enforcement on egregious misuse of personal data by AI, while allowing benign uses to flourish with guidance.

4. Coherence through Common Principles and Coordination: Although decentralised, the UK framework is held together by common AI principles and coordination bodies like the DRCF. This offers a “best of both worlds”: sector-specific oversight within an overall coherent strategy. The principles act as a unifying thread – they “drive consistency across regulators while providing flexibility”. Regulators are collectively committed to the same high-level outcomes (fair, safe, accountable AI) even as they implement them differently. Meanwhile, forums such as the DRCF and the planned central functions in DSIT help prevent siloed regulation. By strengthening cooperation, the DRCF ensures that issues spanning multiple domains (e.g. an AI-driven online advertising platform raising privacy, competition and consumer protection concerns) can be addressed jointly, reducing the chance of contradictory regulatory demands. Industry feedback was supportive of having some “central coordination to support regulators on issues requiring cross-cutting collaboration”. The government’s response – funding an AI and Digital Hub and taking on tasks like cross-sector risk assessment – should improve regulatory coherence without heavy-handed centralisation. Essentially, the UK model can adapt to “AI that arises across, or in gaps between, existing regulatory remits” by using coordination rather than creating a single super-regulator.

5. Builds on a Strong Legal Foundation Already in Place: The multi-agency approach recognises that the UK already has many laws that apply to AI (even if they don’t mention AI explicitly). Data protection law, consumer protection law, equality law, financial regulations, product safety law, etc., all provide tools to address certain AI harms. The government noted that “UK laws, regulators and courts already address some of the emerging risks posed by AI technologies”. By channelling AI governance through existing regimes, the UK leverages this foundation. For example, the Equality Act 2010 prohibits discrimination, which covers biased AI decisions in employment or services; the Consumer Protection Act 1987 (and any future product safety reforms) covers defective products, which can include unsafe AI products; the Financial Services and Markets Act 2000 empowers the FCA to regulate unfair or harmful practices in financial services, AI-driven or not. Thus, the multi-agency model can quickly bring AI under regulatory oversight via these laws. It also means that enforcement mechanisms (like the ICO’s ability to levy fines under the Data Protection Act 2018 or the MHRA’s power to withdraw unsafe devices) are already available and tested. This continuity provides legal certainty: businesses know that existing laws still apply to their new AI systems. In contrast, a new AI regulator might require entirely new enforcement powers and processes to be established from scratch. The UK model avoids that by using what’s already in place, which is an efficient use of regulatory capacity.

These benefits illustrate why the UK believes a decentralised model can be effective. Regulators can be nimble, knowledgeable, and collaborative, and regulation remains proportionate to avoid hindering innovation. However, this approach is not without challenges. The next section addresses the potential downsides and difficulties in the multi-agency scheme.

Limitations and Challenges

While the UK’s multi-agency governance model has advantages, it also faces significant challenges and potential drawbacks:

1. Risk of Inconsistent or Fragmented Regulation: With multiple regulators independently interpreting AI principles, there is an inherent risk of inconsistency. Each regulator may set different expectations, definitions, or thresholds for AI in their sector. This could lead to a patchwork of approaches that is confusing for companies deploying AI across sectors. Industry stakeholders have warned that “conflicting or uncoordinated requirements from regulators” could impose unnecessary burdens and regulatory incoherence. For example, what counts as “explainable AI” might be defined one way by the ICO for data protection and another way by the FCA for financial services, making it hard for a firm to comply with both. The White Paper explicitly acknowledges the danger that without strong coordination, some regulators could “interpret the scope of their remit… more broadly than intended to fill perceived gaps”, potentially resulting in overlap and uncertainty. There is also no single authoritative source of guidance – unlike a single AI Act that provides one rulebook, the UK model could yield divergent guidance documents (one from each regulator). This fragmentation could undermine the very clarity and predictability that regulation is meant to provide.

The government’s use of common principles is meant to mitigate this, but principles allow room for variation. Indeed, the UK has not adopted a single definition of “AI” in law – it identified only broad characteristics (“adaptivity” and “autonomy”) and left it to regulators to define AI in context. This means what falls under “AI” could differ by regulator. While context-specific definitions can be more accurate, it “no doubt increase[s] the complexity” for businesses subject to different regulators​. Smaller companies in particular may struggle to navigate multiple regulators. The fictional case study of “AI Fairness Insurance Ltd” in the White Paper illustrated how one AI system (insurance pricing) had to consider an array of laws and regulators – data protection (ICO), equality (Equality and Human Rights Commission), consumer law (CMA/Trading Standards), and financial regulation (FCA under FSMA 2000)​. The case noted a “lack of support… to navigate the regulatory landscape” and “no cross-cutting principles and limited system-wide coordination” at present​. This exemplifies the compliance complexity that firms might face in the multi-agency model, unless coordination significantly improves.

2. Potential Gaps in Oversight: Decentralisation raises the question: who regulates AI applications that fall between sector boundaries or involve new domains? It’s possible that certain AI-driven activities might not fit neatly into any single regulator’s remit. For instance, consider a general-purpose AI system that is used in many contexts (a large language model providing services to various industries). No single sector regulator “owns” general-purpose AI – each may only address its slice (when the AI is applied in their sector). This creates a risk of regulatory gaps where important issues (like the development of foundational AI models, or AI used by the public sector outside clear sector regulations) might not be fully addressed. The UK government has implicitly recognised this risk: “some AI risks arise across, or in the gaps between, existing regulatory remits”​. Without a central authority, proactive identification of these gaps relies on coordination forums or the government’s central monitoring. If those are ineffective, certain AI harms could go unregulated. For example, prior to any specific mandate, no regulator might take ownership of AI ethics in public sector use or AI in education, etc., assuming it is someone else’s job. The government’s answer is to use the central support function for horizon scanning and gap analysis​, but this is an ongoing governance challenge to manage.

3. Uneven Regulator Capacity and Expertise: Not all regulators are equally prepared or resourced to deal with AI issues. Some, like the ICO and FCA, have already built internal expertise on AI and digital technology. Others may have limited capacity or technical understanding of AI​. This unevenness can lead to weaker oversight in certain sectors. A regulator with budget or skills constraints might not issue robust AI guidance or might fail to identify AI-related risks in their domain. The government’s approach places a lot of trust in regulators stepping up to the AI challenge. However, if a regulator does not prioritise AI (especially while it’s not a statutory duty), their sector could become an Achilles heel in the governance framework. The UK is attempting to address this by encouraging regulators to up-skill and by facilitating knowledge exchange (for instance, the AI Regulators’ Group convenes different authorities to share best practices, and the DRCF provides a platform for learning from each other). Still, there is a capacity issue: overseeing algorithmic systems can require new technical tools (such as algorithm audit methods) which some regulators have to acquire. Moreover, regulators have their hands full with their existing mandates; adding AI as another responsibility without extra resources could stretch them thin. Recognising this, the White Paper mentioned providing central government support to regulators, including possibly funding an AI technical capability to assist them​. The success of the UK model will depend on whether all relevant regulators can reach a sufficient level of AI competency and coordination.

4. Lack of Immediate Legal Force and Accountability: At present, the UK’s AI principles and the guidance flowing from them are not directly legally binding. Regulators can enforce existing laws (e.g. the ICO can enforce data protection law, the MHRA can enforce medical device regulations), but the new AI-specific principles (like “accountability” or “contestability”) are not yet codified duties. This voluntary, non-statutory phase relies on regulators’ goodwill and stakeholders’ cooperation. A risk is that without legal mandates, some regulators or companies might deprioritise AI governance if it conflicts with other objectives. The government does plan to introduce a statutory “duty to have due regard” to the AI principles for regulators in the future. However, even that is a somewhat soft obligation (similar to the equality duty for public bodies) – it would require regulators to consider the principles, but not necessarily to guarantee specific outcomes. There is also a question of accountability: with no single body in charge, who is accountable if AI harms slip through the cracks? If, say, an AI system causes widespread consumer harm because two regulators each thought the other was handling it, it may be hard to assign responsibility in the diffuse multi-agency setup. A single AI agency model could arguably provide clearer lines of accountability (“the buck stops there”). The UK model must instead foster collective accountability, which can be tricky. In Parliament, some have raised concerns about this – indeed a Private Member’s Bill (Artificial Intelligence (Regulation) Bill [HL] 2023-24) was introduced by Lord Holmes of Richmond proposing a central AI regulatory body precisely to ensure clearer oversight and to “co-ordinate and ensure that current regulators” fulfil their duties in relation to AI. The Bill also suggested measures like ‘AI responsible officers’ in companies. While the government has not supported this Bill (and it is unlikely to become law), its existence highlights concern that the current approach might leave accountability gaps and lacks teeth in the short term.

5. International Divergence and Competitiveness Concerns: As other jurisdictions (like the EU) implement stricter, centralised AI regulations, UK companies operating globally may face pressure to meet those higher standards anyway. Some critics argue that the UK’s piecemeal approach could, over time, erode public trust if people perceive that AI is not being rigorously overseen. If scandals or accidents involving AI occur in the UK, the absence of a dedicated regulator could be seen as a regulatory failure. Additionally, differences between the UK’s regime and others can pose compliance complexity for businesses – they might prefer one global compliance strategy following the strictest rules (often the EU’s), reducing the UK’s purported light-touch advantage. The UK has to ensure its myriad regulators produce outcomes that are interoperable with international frameworks, as the government acknowledges​. This is a challenge when the EU, for example, will have a single AI rulebook. To avoid the UK becoming an outlier (either too lax or too fragmented), ongoing alignment efforts are needed, which can be cumbersome across many agencies.

In summary, the UK’s multi-agency model, while innovative, must overcome coordination hurdles, ensure consistency, and possibly accept that at some point hard law or central mechanisms might be required if the voluntary, decentralised approach falls short. The government has left the door open to legislating duties on regulators or other interventions if monitoring shows the framework isn’t effective​. The next section contrasts this approach with the centralised model exemplified by the EU, highlighting how each tries to manage these trade-offs differently.

Comparison to Centralised Models

Other jurisdictions are pursuing more centralised AI governance, creating single frameworks or authorities for AI oversight. The clearest contrast to the UK is the European Union’s AI Act and its associated governance structures. The EU has adopted a comprehensive Artificial Intelligence Act (Regulation (EU) 2024/1689), a regulation that applies uniformly across all member states, with its obligations phasing in from 2025. The AI Act adopts a centralised, top-down approach: it defines “AI systems” broadly, categorises them by risk (unacceptable risk, high-risk, limited risk, minimal risk), and imposes detailed legal requirements on providers and users of AI in the high-risk category (such as conformity assessments, documentation, transparency obligations, etc.). Enforcement of the AI Act will rely on designated national supervisory authorities in each EU country, but crucially it also provides for a central coordinating body – the European AI Office.

In January 2024, the European Commission established the European Artificial Intelligence Office in anticipation of the AI Act coming into force. The EU AI Office will function as a central node to oversee implementation and ensure consistency in enforcement of the AI Act across Europe. It is set up within the European Commission (specifically DG CONNECT) and will issue guidance, facilitate the work of national regulators, and coordinate on cross-border AI issues. The Commission emphasised that the AI Office’s role is to operate within the existing EU framework without replacing sectoral bodies, but to provide unified direction on AI regulation. In effect, it is a centralised authority focused solely on AI, working alongside a European Artificial Intelligence Board (analogous to the GDPR’s European Data Protection Board) to harmonise regulatory practices. This model concentrates expertise on AI within one body and drives a single regulatory strategy.

The contrast with the UK model is stark:

  • Single Legal Framework vs Multiple Laws: The EU’s AI Act creates one horizontal law for AI systems, whereas the UK relies on many sectoral laws (data protection, financial law, etc.) supplemented by principles. The EU approach offers clarity and uniformity: the same definitions and requirements apply to AI in toys as to AI in insurance, if they are “high-risk” under the Act’s criteria. The UK approach might apply different laws in those two scenarios. A central AI law ensures minimum standards across the board – for example, any high-risk AI (from a medical device to an HR recruitment AI) in the EU will need a risk management system and human oversight per the Act. In the UK, those obligations would only come if and as imposed by the relevant regulator or existing law (which might vary; e.g., medical devices require extensive testing by law, but an HR recruitment AI tool might not be covered by any specific regulation except general equality law).
  • Dedicated AI Regulator vs General Regulators: The EU will effectively have a dedicated AI regulator at the EU level (the AI Office) plus specific national regulators (likely existing agencies, but they will be formally tasked with AI Act enforcement). This provides clear leadership on AI issues. The UK has no single coordinator of equivalent authority. The advantage of the EU’s model is a strong, unified enforcement mechanism – the AI Office can issue guidance, coordinate national authorities and, for general-purpose AI models, enforce the Act directly, helping ensure all of Europe follows the same rules. It also serves as a one-stop shop for expertise on AI risks (especially new risks like generative AI, which the EU AI Office is expected to oversee directly for foundation models). For companies operating in the EU, this potentially simplifies compliance: one set of rules and one primary interlocutor (in addition to whatever national authority is assigned). However, a disadvantage is potential rigidity – the AI Act’s requirements are inflexible once in law, and a central body may apply them bluntly without sector nuance. The UK’s multi-agency model might allow more bespoke solutions (as argued earlier), whereas the EU’s central model prioritises consistency and precaution.
  • Speed and Adaptability: The UK boasts that its approach is more adaptive – it can tweak guidance quickly via regulators. The EU AI Act, being legislation, is harder to amend if it becomes outdated. The EU has tried to future-proof it with broad definitions and the AI Office to handle guidance on new tech, but changes in law would require legislative process. The UK can iterate through regulator policy much faster. On the other hand, the EU’s comprehensive approach means many issues are addressed upfront (e.g. transparency obligations for AI that interacts with people, like chatbots, are written into the Act). The UK might address similar issues more slowly or unevenly via different regulators. For instance, the EU Act will mandate that AI-generated content is disclosed as such (to combat deepfakes) – a general rule across sectors. In the UK, there is no equivalent blanket rule; if tackled, it might come via an online safety code by Ofcom or guidance from the ICO on deepfakes, etc., and might not cover all cases.
  • Public Trust and Accountability: The EU believes a strong legal framework will enhance public trust in AI by demonstrably guarding against harms. A central AI authority also provides a clear place for the public and stakeholders to direct concerns or inquiries about AI governance. In the UK, accountability is dispersed; the public must identify which regulator to approach about a given AI issue (e.g. go to ICO for a data privacy issue, or to Trading Standards for a faulty AI product, etc.). The UK government trusts that the public will benefit from a lighter-touch regime that fosters innovation (indirectly benefiting society through economic growth and AI-enabled services), but there is a question of whether that generates the same confidence as a robust regulatory act. If the EU’s AI Act proves effective, it might set expectations for AI safety such that UK regulators will face pressure to match those outcomes anyway, but without the same legal mandate.

In practice, the UK and EU may gradually converge in certain aspects. The UK has stated it will promote “interoperability with international regulatory frameworks” (gov.uk). We can expect UK regulators to watch EU developments closely and possibly incorporate similar standards via guidance (for example, the ICO or FCA might recommend practices that align with EU requirements so that UK firms aren’t shut out of EU markets). Conversely, the EU’s central model still involves national bodies, meaning some decentralisation within each country – it isn’t entirely monolithic. Additionally, other countries offer variants: for instance, Canada’s proposed AI and Data Act would create an AI regulatory function at the federal level; Japan is favouring a non-statutory approach closer to the UK’s; the United States currently relies on sectoral regulators (like the FTC for AI in consumer protection) and voluntary frameworks (NIST AI Risk Management Framework) rather than a new AI agency. So the spectrum of models is broad.

To illustrate, the UK’s multi-agency approach resembles the US in leveraging existing bodies (though the US lacks an official set of AI principles across agencies so far), whereas the EU’s is singular in creating a new legal scheme. Each approach has trade-offs between innovation and precaution, flexibility and certainty, decentralised expertise and central authority. The UK has prioritised innovation and flexibility, accepting some risk of fragmentation, whereas the EU prioritised uniform safeguards, accepting some added regulatory burden.

Conclusion

The UK’s multi-agency governance model for AI represents a deliberate choice to embed AI oversight within the fabric of existing regulatory institutions. By empowering regulators like the ICO, FCA, and MHRA to issue guidance and apply existing laws to AI, the UK aims to harness sector expertise and maintain a light-touch, adaptive framework that can grow with the technology​. This model offers clear benefits in terms of proportionality and agility, and it aligns with the UK’s broader strategy to be a global AI innovation hub. The formation of coordination bodies such as the DRCF demonstrates an awareness of the need for coherence in a decentralised system – effectively creating a networked multi-agency “virtual” AI regulator through collaboration, if not a single agency on paper.

However, the success of this model will depend on effective coordination and commitment. The UK must ensure that its many regulators act in concert, that no significant AI risks fall through regulatory gaps, and that businesses receive consistent signals. The planned statutory duty on regulators to consider the AI principles may help bind the framework together, but close monitoring will be essential. If the multi-agency approach delivers coherent and trusted AI governance, it could become a viable alternative to the centralised model and validate the UK’s decision. If it falters – for example, through contradictory guidance or failure to prevent harm – pressure may grow for a more unified solution.

Comparatively, models like the EU’s centralised AI Office illustrate the opposite approach of concentrating authority and rules in one place. The UK’s experiment will therefore be watched closely: it will show whether regulation by cooperation can match regulation by command in addressing the challenges of AI. In a rapidly evolving AI landscape, the UK may eventually adopt a hybrid approach, strengthening central coordination (or even legislation) if needed, while retaining sectoral expertise. As it stands in 2025, the UK’s multi-agency model is an ambitious, novel attempt to regulate emerging technology by “leveraging and building on existing regimes”​ (gov.uk) rather than starting anew, embodying both the possibilities and the challenges of modern governance in the AI era.

References

  • AI Regulation White Paper 2023 – Department for Science, Innovation and Technology (DSIT) (2023) “A pro-innovation approach to AI regulation (White Paper)”. London: UK Government. [Policy Paper]. gov.uk
  • DSIT Initial Guidance for Regulators 2024 – Department for Science, Innovation and Technology (2024) “Implementing the UK’s AI Regulatory Principles: Initial Guidance for Regulators”. London: UK Government. [Policy Paper]. thecompliancedigest.com
  • Data Protection Act 2018 (c.12). London: HMSO. (UK legislation establishing the ICO and UK GDPR rules, including provisions on automated decision-making).
  • UK GDPR Article 22 – General Data Protection Regulation (UK GDPR), retained EU Regulation 2016/679 as amended in UK law. (Gives individuals rights regarding solely automated decision-making).
  • Financial Services and Markets Act 2000 (c.8). London: HMSO. (Primary legislation governing financial regulation in the UK; source of FCA’s regulatory powers).
  • MHRA Guidance: Software and AI as a Medical Device – Medicines & Healthcare products Regulatory Agency (2025) “Software and artificial intelligence (AI) as a medical device” (Guidance Publication, updated 3 Feb 2025). London: UK Government. gov.uk
  • ICO Guidance: AI and Data Protection – Information Commissioner’s Office (2023) “Guidance on AI and Data Protection” (online guidance, updated 15 March 2023). Wilmslow: ICO. ico.org.uk
  • ICO AI Strategy 2024 – Information Commissioner’s Office (2024) “Regulating AI: The ICO’s Strategic Approach”. Wilmslow: ICO. ico.org.uk
  • FCA AI Update 2024 – Financial Conduct Authority (2024) “Artificial Intelligence (AI) Update – following the Government’s response to the AI White Paper”. London: FCA (Corporate Publication, 22 April 2024). fca.org.uk
  • DRCF – Digital Regulation Cooperation Forum – Ofcom (2025) “Digital Regulation Cooperation Forum” (web page). London: Ofcom. ofcom.org.uk
  • Artificial Intelligence (Regulation) Bill [HL] 2023-24 – Private Member’s Bill introduced in House of Lords (Lord Holmes). (Proposed creation of a central AI authority in UK; not enacted). thecompliancedigest.com
  • EU AI Act – Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Brussels. (Originally proposed by the European Commission as COM(2021) 206 final; introduces a risk-based AI regulatory framework).
  • European AI Office Decision 2024 – European Commission (2024) Commission Decision of 24 January 2024 establishing the European Artificial Intelligence Office (C(2024) 390 final). Brussels. digital-strategy.ec.europa.eu
  • EU Commission Press Release 24 Jan 2024 – European Commission (2024) “Commission launches AI innovation package to support SMEs and establishes European AI Office” (Press release, 24 January 2024). Brussels.
  • CMA Annual Plan 2023-24 (Digital Markets) – Competition & Markets Authority (2023) – sections on AI and algorithmic processing (outlining CMA’s focus on AI in competition and consumer protection).
  • Bridges v South Wales Police [2020] EWCA Civ 1058 – Court of Appeal (England & Wales) decision on police use of live facial recognition (illustrative of courts addressing AI under existing law – Data Protection and Human Rights Act).

Article by LawTeacher.com